\section{Mathematical description of gravitational clustering}
The gravitational clustering of a system of collisionless point particles in an expanding universe poses
several challenging theoretical questions. Though the problem can be
tackled in a `practical' manner using high resolution numerical simulations,
such an approach hides the physical principles which govern
the behaviour of the system. To understand the physics, it is necessary that we
attack the problem from several directions using analytic and semianalytic
methods. These lectures will describe such attempts and will emphasise
the semianalytic approach and outstanding issues, rather than better established results. In the same spirit, I have concentrated on the study of dark matter and have not discussed the baryonic physics.
The standard paradigm for the description of the observed universe proceeds in two steps: We model the universe as made of a uniform smooth background with inhomogeneities like galaxies etc. superimposed on it. When the distribution of matter is averaged over very large scales (say, over $200\, h^{-1}$ Mpc) the universe is expected to be described, reasonably accurately, by the Friedmann model. The problem then reduces to understanding the formation of small scale structures in this specified Friedmann background. If we now further assume that, at some time in the past,
there were small deviations from homogeneity in the universe then these
deviations can grow due to gravitational instability over a period of time
and, eventually, form galaxies, clusters etc.
The study of structure formation therefore reduces to the study of the growth of inhomogeneities in an otherwise smooth universe. This --- in turn --- can be divided into two parts: As long as these inhomogeneities are small, their growth can be
studied by the linear perturbation around a background Friedmann universe. Once the deviations from the
smooth universe become large, linear theory fails and
we have to use other techniques to understand
the nonlinear evolution. [More details regarding structure formation can be found e.g. in Padmanabhan, 1993; 1996]
It should be noted that this
approach {\it assumes} the existence of small inhomogeneities at
some initial time. To be considered complete, the
cosmological model should also
{\it produce} these initial inhomogeneities by some viable physical
mechanism. We shall not discuss such mechanisms in these lectures and will merely postulate their existence. There is also a tacit assumption that averaging the matter density and solving Einstein's equations with the smooth density distribution will lead to results comparable to those obtained by averaging the exact solution obtained with inhomogeneities. Since the latter is not known with any degree of confidence for a realistic universe there is no straightforward way of checking this assumption theoretically. [It is actually possible to provide counterexamples to this conjecture in specific contexts; see Padmanabhan, 1987] If this assumption is wrong there could be an effective correction term to the source distribution on the right hand side of Einstein's equation arising from the averaging of the energy density distribution. It is assumed that no such correction exists and the universe at large can indeed be described by a Friedmann model.
The above paradigm motivates us to study the growth of perturbations around the Friedmann model. Consider a perturbation of
the metric $g_{\alpha \beta}(x)$ and the stress-tensor $T_{\alpha \beta}$
into the
form $(g_{\alpha\beta}+\delta g_{\alpha\beta })$ and
$(T_{\alpha\beta }+\delta T_{\alpha\beta})$,
where the set $(g_{\alpha\beta },
T_{\alpha\beta })$ corresponds to the smooth background universe, while
the set $(\delta g_{\alpha\beta}, \delta T_{\alpha\beta })$
denotes the perturbation.
Assuming the latter to be `small' in some suitable manner, we can linearize Einstein's
equations to obtain a second-order differential equation of the form
\begin{equation}
\hat{\cal L}(g_{\alpha\beta})\delta g_{\alpha\beta }
=\delta T_{\alpha\beta }
\end{equation}
where $\hat {\cal L}$ is a linear differential operator depending on the
background space-time. Since this is a linear equation, it is convenient
to expand the solution in terms of some appropriate mode functions. For the sake of simplicity, let us
consider the spatially flat $(\Omega=1)$
universe. The mode functions could then be taken as plane waves
and by Fourier transforming the spatial
variables
we can obtain a set of separate equations
$\hat {\cal L}_{(\bld k)}\delta g_{(\bld k)}=\delta T_{(\bld k)}$
for each mode,
labeled by a wave vector ${\bf k}$. Here $\hat {\cal L}_{\bf k}$ is a linear second order differential operator in time. Solving this set of ordinary differential equations, with given initial conditions, we can
determine the evolution of each mode separately.
[A similar procedure, of course,
works for the case with $\Omega \not= 1.$
In this case, the mode functions will be more complicated than
the plane waves; but, with a suitable choice
of orthonormal functions, we can obtain a
similar set of equations]. This solves the problem of {\it linear} gravitational clustering completely.
There is, however, one major conceptual difficulty in interpreting the results of this
program. In general relativity, the form (and numerical value) of the metric
coefficients $g_{\alpha \beta}$
(or the stress-tensor components $T_{\alpha\beta }$) can be
changed by a relabeling of coordinates $x^{\alpha} \to x^{\alpha \prime}$.
By such a trivial change we
can make a small $\delta T_{\alpha\beta}$
large or even generate a component
which was originally absent.
Thus the perturbations may grow at
different rates $-$
or even decay $-$ when we relabel coordinates. It is necessary to
tackle this ambiguity before we can meaningfully talk about
the growth of inhomogeneities.
There are two different approaches to handling such difficulties
in general relativity. The first method is to
resolve the problem by force: We may choose a particular
coordinate system and compute everything in that coordinate system.
If the coordinate system is physically well motivated, then the
quantities computed in that system can be interpreted easily;
for example, we may treat $\delta T^0_0$
as the perturbed mass (energy) density even though it is
coordinate dependent. The
difficulty with this method is that one
cannot fix the gauge {\it completely} by simple
physical arguments; the residual gauge ambiguities do create
some problems.
The second approach is
to construct quantities $-$ linear combinations of
various perturbed physical variables $-$ which are
scalars under coordinate transformations. [see e.g. the contribution by Brandenberger to this volume and references cited therein]
Einstein's equations are then rewritten as equations for
these gauge invariant quantities. This approach, of course,
is manifestly gauge invariant from start to
finish. However, it is more
complicated than the first one; besides, the gauge
invariant objects do not, in general, possess any simple
physical interpretation.
In these lectures, we shall be mainly concerned with the first approach.
Since the gauge ambiguity is a purely general relativistic effect, it is necessary to determine when such effects are significant. The effects due to the
curvature of space-time will be important at
length scales bigger than (or comparable to) the Hubble radius,
defined as $d_H(t)\equiv (\dot a/a)^{-1}$.
Writing the Friedmann equation as
\begin{equation}
{\dot a^2 \over a^2} = H^2_0 \left[ \Omega_R \left( {a_0 \over a }\right)^4 + \Omega_{NR} \left( {a_0 \over a } \right)^3 + \Omega_V +
(1-\Omega)\frab{a_0}{a}^2\right] \label{qfulevol}
\end{equation}
where $\Omega_R, \Omega_{NR}, \Omega_V$ and $\Omega$ represent the density parameters for relativistic matter (with $p_R = (1/3)\rho_R$; $\rho_R \propto a^{-4}$), non-relativistic matter (with $p_{NR} = 0$; $\rho_{NR} \propto a^{-3}$), the cosmological constant ($p_V = -\rho_V$; $\rho_V = {\rm constant}$) and the total energy density ($\Omega = \Omega_R + \Omega_{NR} + \Omega_V$), respectively,
it follows that
\begin{equation}
d_H(z) = H_0^{-1} \left[ \Omega_R (1+z)^4 + \Omega_{\rm NR} (1+z)^3 + (1-\Omega)(1+z)^2 + \Omega_V\right]^{-1/2} .
\end{equation}
This has the limiting forms
\begin{equation}
d_H(z)\cong \cases{H_0^{-1} \Omega_R^{-1/2}(1+z)^{-2} & $(z\gg z_{\rm eq})$\cr
H_0^{-1} \Omega_{\rm NR}^{-1/2}(1+z)^{-3/2} & $(z_{\rm eq}\gg z\gg z_{\rm curv}; \Omega_V=0)$\cr}
\label{qdhz}
\end{equation}
during radiation dominated and matter dominated epochs where
\begin{equation}
(1+z_{eq})\equiv\fra{\Omega_{NR}}{\Omega_R};\quad (1+z_{curv})\equiv\fra{1}{\Omega_{NR}}-1
\end{equation}
(The universe is radiation dominated for $z \gg z_{eq}$ and makes the transition to the matter dominated phase at $z \simeq z_{eq}$. It becomes `curvature dominated' sometime in the past, for $z \la z_{\rm curv}$, if $\Omega_{\rm NR} < 0.5$. We have set $\Omega_V =0 $ for simplicity.) The physical wavelength, characterizing a perturbation of size $\lambda_0$ today, will evolve as $\lambda (z) = \lambda_0 (1+z)^{-1}$. Since $d_H$ decreases faster with increasing redshift (as $(1+z)^{-3/2}$ in the matter dominated phase and as $(1+z)^{-2}$ in the radiation dominated phase), $\lambda (z) > d_H(z)$ at sufficiently large redshifts.
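These limiting forms are easy to check numerically. The following sketch compares the full expression for $d_H(z)$ with the two asymptotic forms; the parameter values ($\Omega_{NR}=1$, $\Omega_R \simeq 4.3\times 10^{-5}$, $c/H_0$ in $h^{-1}$ Mpc) are illustrative assumptions, not fixed by the text:

```python
import math

# Illustrative parameters (assumed, not fixed by the text):
H0_inv = 3000.0                       # c/H0 in h^-1 Mpc
Om_R, Om_NR, Om_V = 4.3e-5, 1.0, 0.0  # radiation, matter, Lambda
Om = Om_R + Om_NR + Om_V

def d_H(z):
    """Full Hubble radius d_H(z) from the Friedmann equation."""
    return H0_inv / math.sqrt(Om_R*(1+z)**4 + Om_NR*(1+z)**3
                              + (1-Om)*(1+z)**2 + Om_V)

d_H_rad = lambda z: H0_inv * Om_R**-0.5 * (1+z)**-2     # z >> z_eq
d_H_mat = lambda z: H0_inv * Om_NR**-0.5 * (1+z)**-1.5  # z_eq >> z

z_eq = Om_NR/Om_R - 1
print(d_H(100*z_eq) / d_H_rad(100*z_eq))   # -> ~1 deep in the radiation era
print(d_H(z_eq/100) / d_H_mat(z_eq/100))   # -> ~1 deep in the matter era
```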
For a given $\lambda_0$ we can assign a particular redshift $z_{\rm enter}$ at which $\lambda (z_{\rm enter}) = d_H (z_{\rm enter})$. For $z > z_{\rm enter}$, the proper wavelength is bigger than the Hubble radius and general relativistic effects are important; while for $z<z_{\rm enter}$ we have $\lambda < d_H$ and one can ignore the effects of general relativity. It is conventional to say that the scale $\lambda_0$ ``enters the Hubble radius'' at the epoch $z_{\rm enter}$.
The exact relation between $\lambda_0$ and $z_{\rm enter}$ differs in the case of radiation dominated and matter dominated phases since $d_H(z)$ has different scalings in these two cases. Using equation (\ref{qdhz}) it is easy to verify that: (i) A scale
\begin{equation}
\lambda_{\rm eq} \cong \left( \frac{H_0^{-1}} { \sqrt 2}\right) (\Omega_R^{1/2} / \Omega_{NR}) \cong 14 {\rm Mpc} (\Omega_{\rm NR} h^2)^{-1}
\end{equation}
enters the Hubble radius at $z= z_{\rm eq}$. (ii) Scales with $\lambda > \lambda_{\rm eq}$ enter the Hubble radius in the matter dominated epoch with
\begin{equation}
z_{\rm enter} \simeq 900 \left( \Omega_{\rm NR} h^2 \right)^{-1} \left( {\lambda_0\over 100 \ {\rm Mpc}}\right) ^{-2} .
\end{equation}
(iii) Scales with $\lambda < \lambda_{\rm eq}$ enter the Hubble radius in the radiation dominated epoch with
\begin{equation}
z_{\rm enter} \simeq 4.55 \times 10^5 \left( {\lambda_0 \over 1\, {\rm Mpc}}\right)^{-1} .\label{qzenter}
\end{equation}
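The numerical coefficients quoted in (i) and (iii) can be recovered from first principles; a short sketch, assuming $c/H_0 = 2997.9\, h^{-1}$ Mpc and a radiation density $\Omega_R h^2 \simeq 4.3\times 10^{-5}$ (fiducial values not fixed by the text):

```python
import math

# Assumed fiducial values (not fixed by the text):
H0_inv = 2997.9          # c/H0 in h^-1 Mpc
Om_R_h2 = 4.3e-5         # radiation density parameter times h^2
h, Om_NR = 0.7, 1.0
Om_R = Om_R_h2 / h**2

# (i) the scale entering the Hubble radius at z_eq:
lam_eq = (H0_inv/h) / math.sqrt(2) * math.sqrt(Om_R) / Om_NR
print(lam_eq)            # ~ 14 (Om_NR h^2)^-1 Mpc, i.e. ~ 28 Mpc here

# (iii) scales entering in the radiation era: lambda_0/(1+z) = d_H(z)
# with d_H = H0^-1 Om_R^-1/2 (1+z)^-2 gives 1+z_enter = H0^-1/(lam0 Om_R^1/2)
def z_enter(lam0_Mpc):
    return (H0_inv/h) / (math.sqrt(Om_R) * lam0_Mpc) - 1

print(z_enter(1.0))      # ~ 4.6e5, consistent with eq. (qzenter)
```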
One can characterize the wavelength $\lambda_0$ of the perturbation more meaningfully as follows:
As the universe expands, the wavelength $\lambda$ grows as $\lambda(t) = \lambda_0[a(t)/a_0]$ and the density of non-relativistic matter decreases as $\rho(t) = \rho_0 [a_0/a(t)]^3$. Hence the mass of nonrelativistic matter, $M(\lambda_0)$ contained inside a sphere of radius $(\lambda/2)$ remains constant at:
\begin{equation}
M={4\pi\over 3} \rho(t) \left[{\lambda(t)\over 2}\right]^3 = {4\pi \over 3} \rho_0 \left( {\lambda_0\over 2}\right)^3 = 1.45\times 10^{11}{\rm M}_\odot (\Omega_{\rm NR} h^2) \left( {\lambda_0\over 1\, {\rm Mpc}}\right)^3. \label{mpcnine}
\end{equation}
This relation shows that a comoving scale $\lambda_0 \approx 1$ Mpc contains a typical galaxy mass and $\lambda_0 \approx 10$ Mpc contains a typical cluster mass. From (\ref{qzenter}), we see that all these --- astrophysically interesting --- scales enter the Hubble radius in radiation dominated epoch.
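A quick evaluation of (\ref{mpcnine}), using the standard value of the critical density $\rho_c = 2.775\times 10^{11}\, h^2\, {\rm M}_\odot\, {\rm Mpc}^{-3}$ (an assumed input, not quoted in the text), confirms these mass scales:

```python
import math

# Standard critical density in units of h^2 M_sun / Mpc^3 (assumed value)
rho_crit = 2.775e11
Om_NR_h2 = 1.0           # illustrative Omega_NR h^2

def mass_scale(lam0_Mpc):
    """Mass inside a comoving sphere of radius lambda_0/2, eq. (mpcnine)."""
    return (4*math.pi/3) * Om_NR_h2 * rho_crit * (lam0_Mpc/2)**3

print(mass_scale(1.0))   # ~ 1.45e11 M_sun: a typical galaxy mass
print(mass_scale(10.0))  # ~ 1.45e14 M_sun: a typical cluster mass
```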
This feature suggests the following strategy for studying the gravitational clustering. At $z \gg z_{\rm enter}$ (for any given $\lambda_0$), the perturbations need to be studied using general relativistic, linear perturbation theory. For $ z \ll z_{\rm enter}$, general relativistic effects are ignorable and the problem of gravitational clustering can be studied using newtonian gravity in proper coordinates. Observations indicate that the perturbations are only of the order of $(10^{-4} - 10^{-5})$ at $z \simeq z_{\rm enter}$ for all $\lambda_0$. Hence the nonlinear epochs of gravitational clustering occur only in the regime of newtonian gravity. In fact the only role of general relativity in this formalism is to evolve the initial perturbations up to $z \la z_{\rm enter}$, after which newtonian gravity can take over. Also note that, in the nonrelativistic regime $(z \la z_{\rm enter}\, ; \lambda \la d_H),$ there exists a natural choice of coordinates
in which newtonian gravity is applicable. Hence, all the physical
quantities can be unambiguously defined in this context.
\section{Linear growth in the general relativistic regime}
Let us start by analysing the growth of the perturbations when the proper wavelength of the mode is larger than the Hubble radius. Since $\lambda \gg d_H$ we cannot use newtonian perturbation theory. Nevertheless, it is easy to determine the evolution of the density perturbation by the following argument.
Consider a spherical region of radius $\lambda(\gg d_H)$, containing energy density $\rho_1 = \rho_b+\delta \rho$,
embedded in a $k=0$ Friedmann universe of
density $\rho_b$. It follows from spherical symmetry
that
the
inner region is not
affected by the matter outside; hence the inner region
evolves as a $k\ne 0$ Friedmann universe.
Therefore, we can write, for the two regions:
\begin{equation}
H^2={8\pi G\over 3}\rho_b, \quad
H^2+{k\over a^2}=
{8\pi G\over 3}(\rho_b+\delta \rho).
\end{equation}
The change of density from $\rho_b$ to $\rho_b+\delta \rho$ is accommodated by adding a spatial curvature term $(k/a^2)$. If this condition is to be maintained at all times, we must have
\begin{equation}
{8\pi G\over 3} \delta \rho={k\over a^2},
\end{equation}
or
\begin{equation}
{\delta\rho\over \rho_b}=
{3\over 8\pi G(\rho_b a^2)}. \label{qpert}
\end{equation}
If $(\delta\rho/\rho_b)$ is small, $a(t)$ in the right hand side will only differ slightly from the expansion factor of the unperturbed universe. This allows one to determine how $(\delta \rho/\rho_b)$
scales with $a$ for $\lambda > d_H$. Since $\rho_b\propto a^{-4}$ in the radiation dominated
phase $(t<t_{\rm eq})$ and
$\rho_b\propto a^{-3}$ in the matter dominated phase
$(t>t_{\rm eq})$ we get
\begin{equation}
\left({\delta\rho\over \rho}\right)\propto\cases{
a^2 & $({\rm for}\ t< t_{\rm eq}$)\cr
a & $({\rm for} \ t > t_{\rm eq}$).\cr}\label{qgrowth}
\end{equation}
Thus, the amplitude of the
mode with $\lambda >d_H$ always grows; as $a^2$
in the radiation dominated phase
and as $a$ in the matter dominated phase. Since no microscopic processes can operate at scales bigger than $d_H$ all components of density (dark matter, baryons, photons), grow in the same manner, as $\delta \propto (\rho_b a^2)^{-1}$ when $\lambda > d_H$.
A more formal way of obtaining this result is as follows: We first
recall that there is an {\it exact} equation in general relativity
connecting the geodesic acceleration ${\bf g}$ with
the density and pressure:
\begin{equation}
\nabla \cdot {\bf g} = - 4\pi G (\rho + 3p)
\end{equation}
Perturbing this equation, in a medium with the equation of state
$p= w\rho$, we get
\begin{equation}
\nabla_{\bf r}\cdot [\delta {\bf g}] = - 4 \pi G \left( \delta \rho + 3 \delta p\right) = - 4\pi G \rho_b \left( 1 + 3 w \right) \delta = a^{-1} \nabla_{\bf x} \cdot [ \delta {\bf g}]
\end{equation}
where $\delta = (\delta\rho/\rho)$ is the density contrast. Let us produce a $\delta \bld g$ by introducing a perturbation of the proper coordinate ${\bf r} = a(t) {\bf x}$
to the form ${\bf r+l} = a(t) {\bf x}[1+\epsilon]$ such that
${\bf l}\cong a{\bf x} \epsilon$. The corresponding perturbed acceleration is given by
$\delta {\bf g} = {\bf x}[a\ddot\epsilon + 2\dot a\dot \epsilon]$.
Taking the divergence of this $\delta {\bf g}$ with respect to ${\bf x}$
we get
\begin{equation}
\nabla_{\bf x} \cdot [ \delta {\bf g}] = 3 \left[ a \ddot \epsilon + 2\dot a \dot \epsilon \right] = - 4 \pi G \rho_b a (1 + 3 w) \delta
\label{qbbb}
\end{equation}
This perturbation also changes the proper volume by an amount
\begin{equation}
(\delta V/V) = (3l/r) = 3\epsilon
\end{equation}
If we now consider a {\it metric} perturbation of the form
$g_{ik} \to g_{ik}+ h_{ik}$, the proper volume changes due to the
change in $\sqrt{-g}$ by the amount
\begin{equation}
(\delta V/V) = - (h/2)
\end{equation}
where $h$ is the trace of $h_{ik}$. Comparison of the expressions for $(\delta V/V)$ suggests that, as far as the dynamics is concerned, the equation
satisfied by $3\epsilon$ and that satisfied by $-(h/2)$ will be
identical. Substituting $\epsilon = (-h/6)$ in equation
(\ref{qbbb}), we get
\begin{equation}
\ddot h + 2 \frab{\dot a}{a}\, \dot h = 8 \pi G \rho_b ( 1 + 3 w) \delta \label{qccc}
\end{equation}
(A more formal approach --- using full machinery of general relativity --- leads to the same equation.)
We next note that $\dot \delta$ and $\dot h$ can be related through conservation of
mass. From the equation $d(\rho V) = - p dV$ we obtain
\begin{equation}
\delta = \fra{\delta \rho}{\rho} = -(1+w) \fra{\delta V}{V}= - 3(1+w) \epsilon \end{equation}
giving
\begin{equation}
\dot \delta = - 3 \dot \epsilon (1 + w) = + (1+w) \fra{\dot h}{2}\label{qddd}
\end{equation}
Combining (\ref{qccc}) \ and (\ref{qddd}) \ we find the equation satisfied by $\delta$ to be
\begin{equation}
\ddot \delta + 2{\dot a\over a}\dot \delta = 4\pi G \rho_b (1+w) (1+3w) \delta.\label{qdencon}
\end{equation}
This is the equation satisfied by the density contrast in a medium with equation of state $p =w\rho$.
To solve this equation, we need the background solution which determines $a(t)$ and $\rho_b(t)$. When the background matter is described by the equation of state $p = w\rho$, the background density evolves as $\rho_b\propto a^{-3(1+w)}$. In that case, Friedmann equation (with $\Omega = 1$) leads to
\begin{equation}
a(t) \propto t^{[2/3(1+w)]}; \quad \rho_b = {1\over 6\pi G (1+w)^2 t^2}
\label{twone}
\end{equation}
provided $w\ne -1$. When $w=-1$, $a(t) \propto \exp (\mu t)$ with a constant $\mu$. We will consider $w\ne -1$ case first. Substituting the solution for $a(t)$ and $\rho_b(t)$ into (\ref{qdencon}) we get
\begin{equation}
\ddot \delta + {4\over 3 (1+w)} {\dot \delta\over t} = {2\over 3} {(1+3w)\over (1+w)} {\delta \over t^2}.
\end{equation}
This equation is homogeneous in $t$ and hence admits power law solutions. Using an ansatz $\delta \propto t^n$, and solving the quadratic equation for $n$, we find the two linearly independent solutions $(\delta_g , \delta_d)$ to be
\begin{equation}
\delta_g \propto t^n; \quad \delta_d \propto {1\over t}; \quad n={2\over 3} {(1+3w)\over (1+w)}.
\label{twthree}
\end{equation}
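The indicial equation leading to these exponents can be checked directly; a minimal numerical sketch:

```python
import numpy as np

def growth_exponents(w):
    """Roots n of the indicial equation obtained by substituting
       delta ~ t^n:  n(n-1) + 4n/[3(1+w)] = (2/3)(1+3w)/(1+w)."""
    b = 4/(3*(1+w)) - 1
    c = -(2/3)*(1+3*w)/(1+w)
    return np.sort(np.roots([1.0, b, c]))

print(growth_exponents(0))     # matter (w=0):      n = -1, 2/3
print(growth_exponents(1/3))   # radiation (w=1/3): n = -1, 1
```

For $w=1/3$ the growing exponent $n=1$ gives $\delta \propto t \propto a^2$ (since $a\propto t^{1/2}$), in agreement with the superhorizon result of section 2.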
In the case of $w= -1$, $a(t) \propto {\rm exp} \ (\mu t)$ and the equation for $\delta$ reduces to
\begin{equation}
\ddot \delta + 2 \mu \dot \delta = 0.
\end{equation}
This has the solution $\delta_g \propto \exp (-2\mu t) \propto a^{-2}$.
All the above solutions can be expressed in a unified manner. By direct substitution it can be verified that $\delta_g$ in all the above cases can be expressed as
\begin{equation}
\delta_g \propto {1\over \rho_b a^2}
\end{equation}
which is exactly the result obtained originally in (\ref{qpert}). This allows us to evolve the perturbation from an initial epoch till $z = z_{\rm enter}$, after which newtonian theory can take over.
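That $\delta_g \propto (\rho_b a^2)^{-1}$ reproduces the growing exponent in (\ref{twthree}) for every $w \ne -1$ can be verified symbolically; a short sketch comparing the two power-law exponents in $t$:

```python
import sympy as sp

w = sp.symbols('w')
# Background: a ~ t^{2/3(1+w)} and rho_b ~ t^-2, so that
# 1/(rho_b a^2) ~ t^{2 - 4/[3(1+w)]}
exp_a = sp.Rational(2, 3)/(1 + w)
exp_claim = 2 - 2*exp_a                            # from delta_g ~ 1/(rho_b a^2)
exp_growing = sp.Rational(2, 3)*(1 + 3*w)/(1 + w)  # n in eq. (twthree)

print(sp.simplify(exp_claim - exp_growing))        # -> 0 for all w != -1
```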
\section{Gravitational clustering in Newtonian theory}
Once the mode enters the Hubble radius, dark matter perturbations can be treated by newtonian theory of gravitational clustering. Though $\delta_{\lambda} \ll 1$ at $z \la z_{\rm enter}$, we shall develop the full formalism of newtonian gravity at one go rather than do the linear perturbation theory separately.
In any region small compared to $d_{\rm H}$ one can set up an unambiguous coordinate system in which the {\it proper} coordinate of a particle ${\bf r} (t)=a(t){\bf x}(t)$ satisfies the newtonian equation $\ddot {\bf r} = - {\nabla }_{\bf r}\Phi$ where $\Phi$ is the gravitational potential. Expanding $\ddot \bld r$ and writing $\Phi = \Phi_{\rm FRW} + \phi$ where $\Phi_{\rm FRW}$ is due to the smooth component and $\phi$ is due to the perturbations, we get
\begin{equation}
\ddot a {\bf x} + 2 \dot a \dot{\bf x} + a\ddot{\bf x} = - \nabla_{\bf r} \Phi_{\rm FRW} - \nabla_{\bf r}\phi = - \nabla_{\bf r} \Phi_{\rm FRW} - a^{-1} \nabla_{\bf x} \phi
\end{equation}
The first terms on both sides of the equation $\left( \ddot a{\bf x} \ {\rm and} -\nabla_{\bld r} \Phi_{\rm FRW} \right)$ should match since they refer to the global expansion of the background FRW universe. Equating them individually gives the results
\begin{equation}
\ddot{\bf x} + 2 {\dot a \over a}\dot{\bf x} = - {1 \over a^2} \nabla_x \phi\ ; \qquad \Phi_{\rm FRW} = - {1 \over 2}{\ddot a \over a} r^2 = {2\pi G \over 3}(\rho + 3p)r^2
\end{equation}
where $\phi$ is generated by the perturbed, newtonian, mass density through
\begin{equation}
\nabla^2_x \phi = 4 \pi Ga^2(\delta \rho) = 4 \pi G \rho_ba^2 \delta . \end{equation}
If ${\bf x}_i(t)$ is the trajectory of the $i$th particle, then the equations for newtonian gravitational clustering can be summarized as
\begin{equation}
\ddot{\bf x}_i + { 2\dot a \over a} \dot{\bf x}_i = - {1 \over a^2} \nabla_{\bf x}
\phi;\quad \nabla_x^2 \phi = 4\pi G a^2 \rho_b \delta \label{twnine}
\end{equation}
where $\rho_b$ is the smooth background density of matter. We stress that, in the non-relativistic limit,
the perturbed potential $\phi$ satisfies the usual Poisson equation.
Usually one is interested in the evolution of the density contrast $\delta \left( t, \bld x \right)$ rather than in the trajectories. Since the density contrast can be expressed in terms of the trajectories of the particles, it should be possible to write down a differential equation for $\delta (t, \bld x)$ based on the equations for the trajectories $\bld x_i (t)$ derived above. It is, however, somewhat easier to write down an equation for $\delta_{\bld k} (t)$ which is the spatial Fourier transform of $\delta (t, \bld x)$. To do this, we begin with the fact that the density $\rho(\bld x,t)$ due to a set of point particles, each of mass $m$, is given by
\begin{equation}
\rho (\bld x,t) = {m\over a^3 (t)} \sum\limits_i \delta_D [ \bld x - \bld x_{i} (t)]
\end{equation}
where $\bld x_{i}(t)$ is the trajectory of the $i$th particle. To verify the $a^{-3}$ normalization, we can calculate the average of $\rho(\bld x,t)$ over a large volume $V$. We get
\begin{equation}
\rho_b(t) \equiv \int {d^3 \bld x \over V} \rho (\bld x, t) = {m\over a^3(t)} \left( {N\over V}\right) = {M\over a^3 V} = {\rho_0\over a^3}
\end{equation}
where $N$ is the total number of particles inside the volume $V$ and $M = Nm$ is the mass contributed by them. Clearly $\rho_b \propto a^{-3}$, as it should. The density contrast $\delta (\bld x,t)$ is related to $\rho(\bld x, t)$ by
\begin{equation}
1+\delta (\bld x,t) \equiv {\rho(\bld x, t) \over \rho_b} = {V \over N} \sum\limits_i \delta_D [\bld x - \bld x_i(t)] = \int d^3 {\bld q} \delta_D [\bld x - \bld x_{T} (t, \bld q)] .
\end{equation}
In arriving at the last equality we have taken the continuum limit by replacing: (i) $\bld x_i(t)$ by $\bld x_T(t,\bld q)$ where the initial position $\bld q$ of a particle labels it; and (ii) $(V/N)$ by $d^3{\bld q}$ since both represent the volume per particle. Fourier transforming both sides we get
\begin{equation}
\delta_{\bld k}(t) \equiv \int d^3\bld x {\rm e}^{-i\bld k \cdot \bld x} \delta (\bld x,t) = \int d^3 {\bld q} \ {\rm exp}[ - i {\bf k} . {\bf x}_{T} (t, \bld q)] -(2 \pi)^3 \delta_D (\bld k)
\end{equation}
Differentiating this expression,
and using the equation of motion (\ref{twnine}) for the trajectories gives, after straightforward algebra, the equation:
\begin{equation}
\ddot \delta_{\bf k} + 2 {\dot a \over a} \dot \delta_{\bf k} = 4 \pi G \rho_b \delta_{\bf k} + A _{\bld k}- B_{\bld k} \label{exev}
\end{equation}
with
\begin{equation}
A_{\bld k} =4\pi G\rho_b \int{d^3{\bf k}' \over (2 \pi)^3} \delta_{\bf k'} \delta_{{\bf k} - {\bf k'}} \left[{{\bf k}. {\bf k'} \over k^{'2}} \right]
\end{equation}
\begin{equation}
B_{\bld k} = \int d^3 \bld q \left({\bf k}.{\dot{\bf x}_T} \right)^2 {\rm exp} \left[ -i{\bf k}. {\bf x }_T(t, \bld q) \right] .\label{exevii}
\end{equation}
This equation is exact but involves $\dot{\bf x}_{T}(t, \bld q)$ on the right hand side and hence cannot be considered as closed. [see, eg. Peebles, 1980; the expression for $A_{\bld k}$ is usually given in symmetrised form in $\bld k'$ and $(\bld k - \bld k')$ in the literature].
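The definition of $\delta_{\bld k}$ in terms of the trajectories is easy to experiment with numerically. A one-dimensional toy sketch (the 1D analogue of the expression above): particles on a uniform grid give $\delta_{\bld k} = 0$ for every $\bld k \ne 0$, while a small displacement along a single mode generates a nonzero $\delta_{\bld k}$:

```python
import numpy as np

# Particles on a regular grid in a unit periodic box: a perfectly uniform
# distribution, for which delta_k must vanish at every k != 0.
n = 16
q = (np.arange(n) + 0.5)/n           # Lagrangian labels = initial positions
k = 2*np.pi                          # fundamental mode of the box

def delta_k(x, k):
    """1D discrete analogue: delta_k = (1/N) sum_i exp(-i k x_i)."""
    return np.exp(-1j*k*x).sum()/len(x)

print(abs(delta_k(q, k)))            # -> 0: no perturbation

eps = 0.01
x = q - eps*np.sin(2*np.pi*q)        # displace along a single sine mode
print(abs(delta_k(x, k)))            # -> ~ k*eps/2 = pi*eps
```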
The structure of (\ref{exev}) and (\ref{exevii}) can be simplified if we use the perturbed gravitational potential (in Fourier space) $\phi_{\bf k}$ related to $\delta_{\bf k}$ by
\begin{equation}
\delta_{\bf k} = - {k^2\phi_{\bld k} \over 4 \pi G \rho_b a^2} = - \left( {k^2 a \over 4 \pi G \rho_0}\right) \phi_{\bld k} = - \left( {2 \over 3H_0^2 }\right) k^2a \phi_{\bld k}
\end{equation}
and write the integrand for $A_{\bld k}$ in the symmetrised form as
\begin{eqnarray}
\delta_{\bld k'} \delta_{\bld k - \bld k'} \left[ {\bld k . \bld k' \over k^{'2}} \right]& = &{1 \over 2} \delta_{\bld k'} \delta_{\bld k - \bld k'}\left[ {\bld k . \bld k' \over k^{'2}} + {\bld k . (\bld k - \bld k') \over | \bld k - \bld k'|^2} \right] \nonumber \\
&=& { 1\over 2} \left( {\delta_{\bld k'} \over k^{'2}} \right) \left( {\delta_{\bld k - \bld k'} \over | \bld k - \bld k'|^2} \right) \left[ (\bld k - \bld k')^2 \bld k . \bld k' + k^{'2}\left( k^2 - \bld k . \bld k'\right)\right]\nonumber \\
&=& {1\over 2} \left({2a \over 3H_0^2}\right)^2 \phi_{\bld k'} \phi_{\bld k - \bld k'} \left[ k^2 (\bld k . \bld k' + k^{'2}) - 2(\bld k . \bld k')^2 \right]
\end{eqnarray}
In terms of $\phi_{\bld k}$, equation (\ref{exev}) becomes, for a $\Omega =1 $ universe,
\begin{eqnarray}
\ddot \phi_{\bf k} + 4 {\dot a \over a} \dot\phi_{\bf k} &= & - {1 \over 2a^2} \int {d^3{\bf k}' \over (2 \pi )^3} \phi_{{\bf k}'}
\phi_{{\bf k }-{\bf k}'}\left[\bld k^{\prime } . (\bld k + \bld k')-2 \left( {\bld k . \bld k' \over k}\right)^2 \right] \nonumber \\
&+ &\left({3H_0^2 \over 2}\right) \int {d^3{\bf q} \over a} \left({\bld k} . \dot {\bld x}\over k\right) ^2 e^{-i{\bf k}.{\bf x}} \label{powtransf}
\end{eqnarray}
where $\bld x = \bld x_T(t, \bld q)$. We shall see later how this helps one to understand power transfer in gravitational clustering.
If the density contrasts are small and linear perturbation theory is to be valid, we should be able to ignore the terms $A_{\bld k}$ and $B_{\bld k}$ in (\ref{exev}). Thus linear perturbation theory in the newtonian limit is governed by the equation
\begin{equation}
\ddot \delta_{\bf k} + 2 {{\dot a} \over a} \dot \delta_{\bf k} = 4 \pi G \rho_b \delta_{\bf k} \label{linpertb}
\end{equation}
From the structure of equation (\ref{exev}) it is clear that we will obtain the linear equation if $A_{\bld k} \ll 4 \pi G\rho_b\delta_{\bld k}$ and $B_{\bld k} \ll 4 \pi G \rho_b \delta_{\bld k}$. A {\it necessary} condition for this is $\delta_{\bld k} \ll 1$ but this is {\it not} a sufficient condition --- a fact often ignored or incorrectly treated in the literature. For example, if $\delta_{\bld k} \rightarrow 0$ for a certain range of $\bld k$ at $t = t_0$ (but is nonzero elsewhere) then $A_{\bld k} \gg 4 \pi G \rho_b \delta_{\bld k}$ and the growth of perturbations around $\bld k$ will be entirely determined by nonlinear effects. We will discuss this feature in detail later on. For the present, we shall assume that $A_{\bld k}$ and $B_{\bld k}$ are ignorable and study the resulting system.
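Dropping $A_{\bld k}$ and $B_{\bld k}$, the linear equation can be integrated with any standard ODE solver. A sketch for the matter dominated background $a \propto t^{2/3}$, where $\dot a/a = 2/3t$ and $4\pi G \rho_b = 2/3t^2$, recovering the growing mode $\delta \propto a$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Matter dominated background: a ~ t^(2/3), so that
# adot/a = 2/(3t) and 4*pi*G*rho_b = 2/(3 t^2)
def rhs(t, y):
    delta, ddelta = y
    return [ddelta, -(4/(3*t))*ddelta + (2/(3*t**2))*delta]

# Start on the growing mode: delta = t^(2/3), delta' = 2/3 at t = 1
sol = solve_ivp(rhs, [1.0, 100.0], [1.0, 2/3], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])     # -> ~ 21.5 = 100^(2/3): delta grows like a
```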
\section{Linear perturbations in the Newtonian limit}
At $z \la z_{\rm enter}$, the perturbation can be treated as linear $(\delta \ll 1)$ and newtonian $(\lambda \ll d_H)$.
In this case, the equations are
\begin{equation}
\ddot\delta_k + 2 {\dot a \over a} \dot\delta_k \cong 4 \pi G \rho_{DM} \delta_k \label{fortyone}
\end{equation}
\begin{equation}
{\dot a^2 \over a^2} + {k \over a^2} = {8 \pi G \over 3} \left( \rho_ R + \rho_{ DM} + \rho_{V} \right)
\end{equation}
where $\rho_{DM}, \rho_{R},$ and $\rho_{V}$ are defined in section 1. We will also assume that the dark matter is made of collisionless matter and is perturbed while the energy densities of radiation and cosmological constant are left unperturbed. Changing the variable from $t$ to $a$, the perturbation equation becomes
\begin{eqnarray}
2a^2 \left[ \rho_R + \rho_{DM} + \rho_V - {3k \over 8 \pi G a^2} \right] {d^2 \delta \over da^2} && \nonumber \\
+ \; a\left[ 2 \rho_R + 3 \rho_{DM} + 6 \rho_{V} - 4 \left( {3k \over 8 \pi G a^2 }\right) \right] {d \delta \over da} &=& 3 \rho_{DM} \delta
\end{eqnarray}
Introducing the variable $\tau \equiv (a /a_0) = (1 + z)^{-1}$, writing $\rho_i = \Omega_i\rho_c$ for the $i$th species, and using $k = - (8 \pi G /3)\rho_ca_0^2(1 - \Omega)$, we can recast the equation in the form
\begin{eqnarray}
&2\tau&\left[ \Omega_V \tau^4 + (1 - \Omega)\tau^2 + \Omega_{DM}\tau + \Omega_R\right] \delta^{\prime \prime} \nonumber \\
&+&\left[ 6 \Omega_V \tau^4 + 4 \left( 1 - \Omega \right) \tau^2 + 3 \Omega_{DM}\tau + 2 \Omega_R \right] \delta' = 3 \Omega_{DM} \delta \label{seventau}
\end{eqnarray}
where the prime denotes derivatives with respect to $\tau$. This equation is in a form convenient for numerical integration from $\tau=\tau_{\rm enter} = (1 + z_{\rm enter})^{-1} $ to $\tau = 1$.
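Such an integration is straightforward in practice. A sketch for a flat model (the tiny curvature contributed by $\Omega_R$ is neglected, and $\Omega_R = 4.3\times 10^{-5}$ is an assumed illustrative value), started on the exact growing solution $\delta = 1 + (3/2)\tau/\tau_{eq}$ derived in the next section:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Flat model; the tiny curvature from Omega_R is neglected (Om = 1),
# and Omega_R is an assumed illustrative value.
Om_DM, Om_R, Om_V, Om = 1.0, 4.3e-5, 0.0, 1.0

def rhs(tau, y):
    d, dp = y
    c2 = 2*tau*(Om_V*tau**4 + (1 - Om)*tau**2 + Om_DM*tau + Om_R)
    c1 = 6*Om_V*tau**4 + 4*(1 - Om)*tau**2 + 3*Om_DM*tau + 2*Om_R
    return [dp, (3*Om_DM*d - c1*dp)/c2]

# Initial conditions on the exact growing mode delta = 1 + (3/2) tau/tau_eq
tau_eq = Om_R/Om_DM
tau_i = 1e-7
sol = solve_ivp(rhs, [tau_i, 1.0], [1 + 1.5*tau_i/tau_eq, 1.5/tau_eq],
                rtol=1e-10, atol=1e-12)
print(sol.y[0, -1]/(1 + 1.5/tau_eq))   # -> ~1: matches the analytic solution
```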
The exact solution to (\ref{seventau}) cannot be given in terms of elementary functions. It is, however, possible to obtain insight into the form of solution by considering different epochs separately.
Let us first consider the epoch $1 \ll z \la z_{\rm enter}$ when we can take $\Omega_V = 0, \Omega = 1, $ reducing (\ref{seventau}) to
\begin{equation}
2\tau\left(\Omega_{DM} \tau + \Omega_{R}\right) \delta^{''}+ \left( 3 \Omega_{DM} \tau + 2 \Omega_R \right) \delta' = 3 \Omega_{DM} \delta
\end{equation}
Dividing throughout by $\Omega_R$ and changing the independent variable to
\begin{equation}
x \equiv \tau \left( {\Omega_{DM} \over \Omega_R} \right) = {a \over a_0\left( \Omega_R / \Omega_{DM}\right)} = {a \over a_{eq}}
\end{equation}
we get
\begin{equation}
2x(1+x)
{d^2\delta_{\rm DM}\over dx^2}+
(2+3x){d\delta_{\rm DM}\over dx}=3\delta_{\rm DM};\qquad x={a\over a_{\rm eq}}. \end{equation}
One solution to this equation can be written down by inspection:
\begin{equation}
\delta_{\rm DM}=1+{3\over2}x.
\end{equation}
In other words $\delta_{\rm DM}\approx$ constant for $a\ll a_{\rm eq}$ (no growth in the radiation dominated phase)
and $\delta_{\rm DM}\propto a$ for $a\gg a_{\rm eq}$ (growth proportional to $a$
in the matter dominated phase).
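That this is indeed a solution can be confirmed by direct substitution; a one-line symbolic check:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
delta = 1 + sp.Rational(3, 2)*x

# Residual of 2x(1+x) delta'' + (2+3x) delta' - 3 delta
residual = (2*x*(1 + x)*sp.diff(delta, x, 2)
            + (2 + 3*x)*sp.diff(delta, x) - 3*delta)
print(sp.expand(residual))    # -> 0: delta_DM = 1 + 3x/2 is an exact solution
```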
We now have to find the
second solution. Given the first solution, the second solution
$\Delta$ can be found
by the Wronskian condition
$(Q^{\prime}/Q)=-[(2+3x)/2x(1+x)]$
where $Q=\delta_{\rm DM}\Delta^{\prime} - \delta^{\prime}_{\rm DM}\Delta.$
Writing the second solution as
$\Delta=f(x)\delta_{\rm DM}(x)$ and substituting in this
equation, we find
\begin{equation}
{f^{''}\over f^{\prime}}=-{2\delta^{\prime}_{\rm DM}\over \delta_{\rm DM}}-
{2+3x\over 2x(1+x)},
\end{equation}
which can be integrated to give
\begin{equation}
f=-\int{dx\over x(1+3x/2)^2(1+x)^{1/2}}.
\end{equation}
The integral is straightforward and the second solution is
\begin{equation}
\Delta=f\delta_{\rm DM}=
\left(1+{3x\over 2}\right)\ln
\left[{(1+x)^{1/2}+1\over(1+x)^{1/2}-1}\right]-3(1+x)^{1/2}.
\end{equation}
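Both solutions are easy to verify numerically against the equation $2x(1+x)\delta'' + (2+3x)\delta' = 3\delta$. The sketch below (plain Python, written for these notes; the finite difference step and the tolerances are ad hoc choices) also confirms the asymptotic behaviour of the second solution: $\Delta$ approaches $\ln(4/x)-3$ for $x\ll 1$ and decays as $x^{-3/2}$ for $x\gg 1$.

```python
import math

def delta1(x):            # growing solution: delta_DM = 1 + 3x/2
    return 1.0 + 1.5 * x

def delta2(x):            # second solution Delta(x)
    s = math.sqrt(1.0 + x)
    return (1.0 + 1.5 * x) * math.log((s + 1.0) / (s - 1.0)) - 3.0 * s

def residual(f, x, h=1e-3):
    """2x(1+x) f'' + (2+3x) f' - 3f, with derivatives by central differences."""
    d1 = (f(x + h) - f(x - h)) / (2.0 * h)
    d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2
    return 2.0 * x * (1.0 + x) * d2 + (2.0 + 3.0 * x) * d1 - 3.0 * f(x)
```

The residuals vanish to the accuracy of the finite differences, and the logarithmic slope fitted at large $x$ comes out close to $-3/2$, the decaying counterpart of the growing mode $\delta_{\rm DM}\propto x$.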
Thus the general solution to the perturbation equation, for a mode which is
inside the Hubble radius, is the linear superposition $\delta = A \delta_{\rm DM} + B\Delta$ with the asymptotic forms:
\begin{equation}
\delta_{\rm gen}(x)=A\delta_{\rm DM}(x)+B\Delta(x)=\cases{
A+B\ln(4/x)&$(x\ll 1)$\cr
(3/2)Ax+(4/15)Bx^{-3/2}&$(x\gg 1)$.\cr}\label{qdelgen}
\end{equation}
This result shows that dark matter perturbations can grow only logarithmically during the epoch $a_{\rm enter} < a< a_{\rm eq}$. During this phase the universe is dominated by radiation, which is unperturbed. Hence the damping term due to expansion $(2\dot a/a)\dot \delta$ in equation (\ref{linpertb}) dominates over the gravitational potential term on the right hand side and restricts the growth of perturbations. In the matter dominated phase with $a\gg a_{\rm eq}$, the perturbations grow as $a$. This result, combined with that of section 2, shows that in the matter dominated phase {\it all the modes} (i.e., modes which are inside or outside the Hubble radius) grow in proportion to the expansion factor.
Combining the above result with that of section 2, we can determine the evolution of density perturbations in dark matter during all relevant epochs.
The general solution after the mode has entered the Hubble radius is given by (\ref{qdelgen}). The constants $A$ and $B$ in this solution have to be fixed by matching
this solution to the growing solution, which was valid when the mode was
bigger than the Hubble radius. Since the latter
solution is given by
$\delta(x)=x^2$ in the radiation dominated phase, the
matching conditions become
\begin{eqnarray}
x^2_{{\rm enter}} &=&
\left[A\delta_{\rm DM}(x)+B\Delta(x)\right]_{x=x_{{\rm enter}}}\nonumber \\
2x_{{\rm enter}} &=&
\left[A\delta^{\prime}_{\rm DM}(x)+B\Delta^{\prime}(x)\right]_{x=x_{{\rm enter}}}. \nonumber \\
\end{eqnarray}
This determines the constants $A$ and $B$ in terms of $x_{{\rm enter}}$
$=(a_{{\rm enter}}/a_{\rm eq})$ which, in turn, depends on the wavelength of the
mode through $a_{{\rm enter}}$.
As an example, we consider a mode for which $x_{{\rm enter}}\ll 1$.
The second solution has the asymptotic form $\Delta(x)\simeq \ln(4/x)$ for
$x\ll 1$. Using this and matching the solution at $x=x_{\rm enter}$
we get
the properly matched mode, inside the Hubble radius, to be
\begin{equation}
\delta(x)=
x^2_{\rm enter}
\left[1+2 \ln
\left({4\over x_{{\rm enter}}}\right)\right]
\left(1+{3x\over 2}\right)-2x^2_{{\rm enter}}\ln\left({4\over x}\right).
\end{equation}
During the radiation dominated phase --- that is, till $a\la a_{\rm eq}$,
$x\la 1$ ---this mode can grow by a factor
\begin{eqnarray}
{\delta(x\simeq 1)\over \delta(x_{{\rm enter}})}&=&
{1\over x^2_{{\rm enter}}}
\delta(x\simeq 1)\cong
5\ln
\left({1\over x_{{\rm enter}}}\right)\nonumber \\
&=&5\ln
\left({a_{\rm eq}\over a_{{\rm enter}}}\right)=
{5\over 2}\ln
\left({t_{\rm eq}\over t_{{\rm enter}}}\right). \nonumber \\
\end{eqnarray}
Since the time $t_{{\rm enter}}$ for a mode with wavelength $\lambda$
is fixed by the condition $\lambda a_{{\rm enter}}$
$\propto \lambda t^{1/2}_{{\rm enter}}$
$\simeq d_H(t_{{\rm enter}})$ $\propto t_{{\rm enter}}$, it follows that
$\lambda\propto t^{1/2}_{{\rm enter}}$. Hence,
\begin{equation}
{\delta_{{\rm final}}\over \delta_{{\rm enter}}}\cong 5\ln
\left({\lambda_{\rm eq}\over \lambda}\right)\cong
{5\over 3}\ln
\left({M_{\rm eq}\over M}\right)\label{fortynine}
\end{equation}
for a mode with wavelength
$\lambda\ll \lambda_{\rm eq}$. [Here, $M$ is the mass contained in a sphere of
radius ($\lambda /2$); see equation (\ref{mpcnine}).] The growth in the radiation dominated phase,
therefore, is logarithmic. Notice that the matching procedure has
brought in an amplification factor
{\it which depends on the wavelength}.
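The matched solution and its logarithmic growth can be checked with a short numerical sketch (plain Python; the entry epochs used below are illustrative):

```python
import math

def delta_matched(x, xe):
    """Matched mode inside the Hubble radius; valid for xe = x_enter << 1."""
    L = math.log(4.0 / xe)
    return xe**2 * ((1.0 + 2.0 * L) * (1.0 + 1.5 * x) - 2.0 * math.log(4.0 / x))

def growth(xe):
    """Growth factor delta(x=1)/delta(x_enter) of the matched mode."""
    return delta_matched(1.0, xe) / xe**2
```

The difference of the growth factors for two entry epochs is exactly $5\ln(x_2/x_1)$, so a mode entering a thousand times earlier gains only an extra factor $5\ln 10^3 \simeq 35$, in contrast to the power law growth in the matter dominated phase.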
In the discussion above, we have assumed that $\Omega = 1$, which is a valid assumption in the early phases of the universe. However, during the later stages of evolution in a matter dominated phase, we have to take into account the actual value of $\Omega$ and solve equation (\ref{fortyone}). This can be done along the following lines.
Let $\rho(t)$ be a solution to the background Friedmann model dominated
by pressureless dust. Consider now the function $\rho_1(t)$
$\equiv\rho(t+\tau)$ where
$\tau$ is some constant. Since the Friedmann equations contain $t$
only through the derivative, $\rho_1(t)$ is also a valid solution.
If we now take $\tau$ to be small, then $[\rho_1(t)$
$-\, \rho(t)]$ will be a small perturbation to the density. The corresponding density contrast is
\begin{equation}
\delta(t)=
{\rho_1(t)-\rho(t)\over \rho(t)}=
{\rho(t+\tau)-\rho(t)\over \rho(t)}\cong\tau
{d\ln\rho\over dt}=-3\tau H(t)
\end{equation}
where the last relation follows from the fact that $\rho\propto a^{-3}$
and $H(t)\equiv(\dot a/a)$. Since $\tau$ is a constant, it
follows that $H(t)$ is a solution to the perturbation equation.
[This curious fact, of course, can be verified directly: From the
equations describing the
Friedmann
model, it follows that $\dot H+H^2=(-4\pi G\rho/3)$.
Differentiating this relation and using $\dot\rho=-3H\rho$ we immediately
get $\ddot H+2H\dot H$ $-4\pi G\rho H=0$. Thus $H$ satisfies the
same equation as $\delta$].
Since $\dot H=-H^2-(4\pi G\rho/3)$, we know that $\dot H < 0$; that is, $H$ is a decreasing function
of time, and the solution $\delta=H\equiv \delta_d$ is a decaying mode.
The growing solution $(\delta\equiv \delta_g)$ can be again found by
using the fact that, for any two linearly
independent solutions of the equation (\ref{linpertb}),
the Wronskian $(\dot\delta_g\delta_d$ $-\dot\delta_d\delta_g)$ has
a value $a^{-2}$. This implies that
\begin{equation}
\delta_g=\delta_d\int
{dt\over a^2 \delta^2_d}=
H(t)\int{dt\over a^2 H^2(t)}. \label{qdelgrow}
\end{equation}
Thus we see that the function $H(t)$ of the
background spacetime allows one to completely determine the evolution
of the density contrast.
It is more convenient to express this result in terms of the redshift $z$. For
a universe with arbitrary $\Omega$, we have the relations
\begin{equation}
a(z)=a_0(1+z)^{-1}, \qquad H(z)= H_0(1+z)(1+\Omega z)^{1/2}
\end{equation}
and
\begin{equation}
H_0 dt=-(1+z)^{-2}
(1+\Omega z)^{-{1\over 2}}dz.
\end{equation}
Taking $\delta_d=H(z)$, we get
\begin{eqnarray}
\delta_g &=&\delta_d(z)\int a^{-2}
\delta^{-2}_d(z)
\left({dt\over dz}\right)dz\nonumber \\
& =&(a_0 H_0)^{-2} (1+z)(1+\Omega z)^{1/2}
\int^{\infty}_z dx(1+x)^{-2}
(1+\Omega x)^{-{3\over 2}}. \label{fiftyfour}
\end{eqnarray}
This integral can be expressed in terms of elementary functions:
\begin{equation}
\delta_g={1+2\Omega+3\Omega z\over (1-\Omega)^2}-{3\over 2}
{\Omega(1+z)(1+\Omega z)^{1/2}\over (1-\Omega)^{5/2}}
\ln
\left[{(1+\Omega z)^{1/2}+(1-\Omega)^{1/2}\over
(1+\Omega z)^{1/2} - (1-\Omega)^{1/2}}\right]. \label{qgend}
\end{equation}
Thus $\delta_g(z)$ for an arbitrary $\Omega$ can be given in closed
form. The solution in (\ref{qgend}) is not normalized in any manner;
normalization can be achieved by multiplying $\delta_g$ by some constant
depending on the context.
For large $z$
(i.e., early times), $\delta_g\propto z^{-1}$. This is to be
expected because for large $z$, the curvature term can be ignored and
the Friedmann universe can be approximated as a $\Omega=1$ model.
[The large $z$ expansion of the logarithm in (\ref{qgend}) has to be taken
upto $O(z^{-5/2})$ to get the correct result; it is easier to obtain
the asymptotic form directly from the integral in (\ref{fiftyfour})].
For $\Omega\ll 1$, one can see that $\delta_g\simeq$ constant for
$z\ll \Omega^{-1}$. This is the curvature dominated phase, in which
the growth of perturbations is halted by rapid expansion.
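These limiting forms can be verified by evaluating the integral in (\ref{fiftyfour}) directly. In the sketch below (plain Python; we set $(a_0H_0)^{-2}=1$, and the substitution $u=(1+x)^{-1}$, which maps the range of integration to a finite interval, is our choice) the $\Omega=1$ case reproduces the exact result $\delta_g=(2/5)(1+z)^{-1}$, while an open model shows the suppression of growth:

```python
import math

def growing_mode(z, Omega, n=200001):
    """delta_g(z) from the integral expression, with (a0 H0)^-2 set to 1.
    The substitution u = 1/(1+x) maps the range to [0, 1/(1+z)]."""
    b = 1.0 / (1.0 + z)
    h = b / (n - 1)
    s = 0.0
    for i in range(n):                      # composite Simpson rule
        u = i * h
        f = u**1.5 / (Omega + (1.0 - Omega) * u)**1.5
        s += f * (1.0 if i in (0, n - 1) else (4.0 if i % 2 else 2.0))
    return (1.0 + z) * math.sqrt(1.0 + Omega * z) * s * h / 3.0
```

For $\Omega=0.3$ the growth between $z=9$ and $z=0$ falls short of the factor of $10$ obtained in the $\Omega=1$ model.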
We have thus obtained the complete evolutionary sequence for a perturbation in the linear theory, which is shown in figure 1. This result can be conveniently summarized in terms of a quantity called `transfer function' which we shall now describe.
\begin{figure}
\centering
\psfig{file=lineardelta.ps,width=3.5truein,height=3.0truein,angle=0}
\caption{Schematic figure showing the growth of linear perturbations in dark matter. The perturbation grows as $a^2$ before entering the Hubble radius, when relativistic theory is required. During the radiation dominated phase it grows only as $\ln a$ and during the matter dominated phase it grows as $a$. If the universe becomes dominated by curvature or background energy density, the perturbations do not grow significantly after that epoch.}
\label{figure1}
\end{figure}
\section{Transfer function}
If $\delta(t,\bld x)\ll 1$, then one can describe the evolution of $\delta(t,\bld x)$ by linear perturbation theory, in which each mode $\delta_{\bld k}(t)$ will evolve independently and we can write
\begin{equation}
\delta_{\bld k}(t) = T_{\bld k}(t, t_i) \delta_{\bld k} (t_i)
\end{equation}
where $T_{\bld k}(t,t_i) $ depends only on the dynamics and not on the initial conditions. We shall now determine the form of $T_{\bld k} (t, t_i)$.
Let $\delta_{\lambda}(t_i)$ denote the amplitude of
the dark matter perturbation corresponding to some
wavelength $\lambda$ at the initial instant $t_i$. To each $\lambda$, we
can associate a wavenumber $k\propto \lambda^{-1}$
and a mass $M\propto\lambda^3$; accordingly, we may label the
perturbation as $\delta_M(t)$ or $\delta_k(t)$, as well, with the scalings
$M\sim\lambda^3$, $k\sim \lambda^{-1}$. We are interested in
the value of $\delta_{\lambda}(t)$ at some $t\ga t_{\rm dec}$.
To begin with, consider the modes which enter the Hubble radius in the radiation dominated phase; their growth is suppressed
by the rapid expansion of the universe; therefore, they
do not grow significantly until $t=t_{\rm eq}$, giving
$\delta_{\lambda}(t_{\rm eq}) = L
\delta_{\lambda}(t_{{\rm enter}})$
where $L \simeq 5 \ln (\lambda_{\rm eq} / \lambda)$ is a logarithmic factor determined in (\ref{fortynine}). After matter begins to dominate, the amplitude of these modes grows in
proportion to the scale factor $a$.
Thus,
\begin{equation}
\delta_M(t)=L\delta_M(t_{{\rm enter}})
\left({a\over a_{\rm eq}}\right)
\quad ({\rm for}\; M<M_{\rm eq}). \label{qdelrad}
\end{equation}
Consider next the modes with $\lambda_{\rm eq}<\lambda<\lambda_H$
where $\lambda_H\equiv H^{-1}(t)$ is the Hubble radius at the time $t$
when we are studying the spectrum. These modes enter the Hubble radius
in the matter
dominated phase and grow proportional to $a$
afterwards. So,
\begin{equation}
\delta_M(t)=\delta_M(t_{{\rm enter}})
\left({a\over a_{{\rm enter}}}\right)\quad ({\rm for}\; M_{\rm eq}<M<M_H)
\end{equation}
which may be rewritten as
\begin{equation}
\delta_M(t)=\delta_M(t_{{\rm enter}})
\left({a_{\rm eq}\over a_{{\rm enter}}}\right)
\left({a\over a_{\rm eq}}\right). \label{qdeltamd}
\end{equation}
But notice that, since $t_{{\rm enter}}$ is fixed by the
condition $\lambda a_{{\rm enter}}$
$\propto \lambda t^{2/3}_{{\rm enter}}$ $\simeq d_H(t_{{\rm enter}})$ $\propto t_{{\rm enter}}$,
we have
$t_{{\rm enter}}\propto \lambda^3$.
Further $(a_{\rm eq}/a_{{\rm enter}})$
$=(t_{\rm eq}/t_{{\rm enter}})^{2/3}$, giving
\begin{equation}
\left({a_{\rm eq}\over a_{{\rm enter}}}\right)=
\left({\lambda_{\rm eq}\over \lambda}\right)^2=
\left({M_{\rm eq}\over M}\right)^{2/3}. \label{qlambda}
\end{equation}
Substituting (\ref{qlambda}) $\,$ in (\ref{qdeltamd}), we get
\begin{equation}
\delta_M(t)=
\delta_M(t_{{\rm enter}})
\left({\lambda_{\rm eq}\over \lambda}\right)^2
\left({a\over a_{\rm eq}}\right)
=\delta_M(t_{{\rm enter}})
\left({M_{\rm eq}\over M}\right)^{2/3}
\left({a\over a_{\rm eq}}\right). \label{qscale}
\end{equation}
Comparing (\ref{qscale}) $\,$ and (\ref{qdelrad}) $\,$ we see that a mode
which enters the Hubble radius after $t_{\rm eq}$ has its amplitude
suppressed by a factor $L^{-1}(M_{\rm eq}/M)^{2/3}$, relative to a mode which enters during the radiation dominated phase.
Finally, consider the modes with $\lambda>\lambda_H$ which are still
outside the Hubble radius at $t$ and will enter the Hubble
radius at some {\it future}
time $t_{{\rm enter}}>t$. During the time
interval
$(t, t_{{\rm enter}})$, they will grow by a factor
$(a_{{\rm enter}}/a)$. Thus
\begin{equation}
\delta_{\lambda}(t_{{\rm enter}})=\delta_{\lambda}
(t)
\left({a_{{\rm enter}}\over a}\right)
\end{equation}
or
\begin{equation}
\delta_{\lambda}(t)=\delta_{\lambda}(t_{{\rm enter}})
\left({a\over a_{{\rm enter}}}\right)=\delta_M
(t_{{\rm enter}})
\left({M_{\rm eq}\over M}\right)^{2/3}
\left({a\over a_{\rm eq}}\right)\quad
(\lambda >\lambda_H).
\end{equation}
[The last equality follows from the previous analysis]. Thus the
behaviour of the modes is the same for the cases
$\lambda_{\rm eq}<\lambda <\lambda_H$ and
$\lambda_H <\lambda$; i.e. for all wavelengths $\lambda>\lambda_{\rm eq}$.
Combining all these pieces of information, we can state
the final result as follows:
\begin{equation}
\delta_{\lambda}(t)=\cases{
L \delta_{\lambda}(t_{\rm enter})
(a/a_{\rm eq}) \hspace{0.7in} (\lambda<\lambda_{\rm eq})\cr
\delta_{\lambda}(t_{{\rm enter}})
(a/a_{\rm eq})
(\lambda_{\rm eq}/ \lambda)^2 \hspace{0.3in}(\lambda_{\rm eq}<\lambda)}\label{fincases}
\end{equation}
or, equivalently
\begin{equation}
\delta_M(t)=\cases{
L\delta_M(t_{\rm enter})
(a/a_{\rm eq}) \hspace{0.9 in} (M<M_{\rm eq})\cr
\delta_M(t_{\rm enter})
(a/ a_{\rm eq})
(M_{\rm eq}/ M)^{2/3} \hspace{0.4in}(M_{\rm eq}<M). \cr}
\end{equation}
Thus the amplitude at late times is completely fixed by the amplitude
of the modes when they enter the Hubble radius.
In this approach, to determine $\delta (\bld x, t)$ or $\delta_{\bld k}(t)$ at time $t$, we need to know its exact space dependence (or $\bld k$ dependence) at some initial instant $t = t_i$ [e.g. to determine $\delta (t,\bld x)$, we need to know $\delta (t_i,\bld x)$]. Often, we are not interested in the {\it exact} form of $\delta (t,\bld x)$ but only in its ``statistical properties'' in the following sense: We may assume that, for sufficiently small $t_i$, each Fourier mode $\delta _{\bld k}(t_i)$ was a Gaussian random variable with
\begin{equation}
\langle \delta_{\bld k}(t_i) \delta_{\bld p}^* (t_i)\rangle = (2\pi)^3 P(\bld k,t_i) \delta_D(\bld k - \bld p)\label{qdegrf}
\end{equation}
where $P(\bld k, t_i)$ is the power spectrum of $\delta(t_i,\bld x)$ and $<\cdots>$ denotes an ensemble average. Then,
\begin{eqnarray}
\langle \delta_{\bld k} (t) \delta_{\bld p}^*(t) \rangle &=& T_{\bld k} (t,t_i) T_{\bld p}^* (t,t_i) \langle \delta_{\bld k}(t_i) \delta_{\bld p}^*(t_i)\rangle \nonumber \\
&=& (2\pi)^3 |T_k(t,t_i)|^2 P(\bld k, t_i) \delta_D(\bld k - \bld p) \nonumber \\
\end{eqnarray}
and the statistical nature of $\delta_{\bld k}$ is preserved by evolution with the power spectrum evolving as
\begin{equation}
P(\bld k,t) = |T_{\bld k}(t, t_i)|^2 P(\bld k, t_i).
\end{equation}
It should be stressed that, as far as the linear evolution of perturbations is concerned, the statistics of the perturbations is maintained. For any random field one can define a power spectrum and study its evolution along the lines described below. In the case of a {\it gaussian} random field with zero mean the power spectrum contains the complete information; in other cases the power spectrum will only provide partial information. This is the key difference between gaussian and other statistics. Some theories of structure formation describing the origin of initial perturbations {\it predict} the statistics of the perturbations to be gaussian. Since this seems to be fairly natural we shall confine ourselves to this case in our discussion.
A quantity closely related to the power spectrum is the two point correlation function, defined as
\begin{equation}
\xi_\delta (\bld x) = \langle \delta(\bld x+\bld y) \delta(\bld y)\rangle = \int {d^3\bld k\over (2\pi)^3} {d^3\bld p\over (2\pi)^3} \langle \delta_{\bld k} \delta_{\bld p}^*\rangle {\rm e}^{i\bld k\cdot (\bld x+\bld y)} {\rm e}^{-i\bld p\cdot\bld y}
\end{equation}
where $<\cdots>$ is the ensemble average. Using
\begin{equation}
\langle \delta_{\bld k} \delta_{\bld p}^*\rangle = (2\pi)^3 P(\bld k) \delta_D (\bld k - \bld p)
\end{equation}
we get
\begin{equation}
\xi_\delta (\bld x) = \int {d^3\bld k \over (2\pi)^3} P(\bld k) {\rm e}^{i\bld k \cdot \bld x}
\end{equation}
That is, the correlation function is the Fourier transform of the power spectrum.
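This relation can be checked explicitly for a simple spectrum. For an isotropic $P(k)$ the angular integrations give $\xi(x)=(1/2\pi^2)\int_0^\infty dk\,k^2 P(k)\,(\sin kx/kx)$. Choosing the purely illustrative spectrum $P(k)={\rm e}^{-k^2}$, for which the transform can be done analytically, $\xi(x)={\rm e}^{-x^2/4}/(8\pi^{3/2})$, the sketch below (plain Python) confirms the agreement:

```python
import math

def P(k):                 # illustrative smooth spectrum, P(k) = exp(-k^2)
    return math.exp(-k * k)

def xi_numeric(x, kmax=10.0, n=4001):
    """xi(x) = (1/2 pi^2) Int k^2 P(k) sin(kx)/(kx) dk, composite Simpson."""
    h = kmax / (n - 1)
    s = 0.0
    for i in range(n):
        k = i * h
        kx = k * x
        sinc = 1.0 if abs(kx) < 1e-12 else math.sin(kx) / kx
        f = k * k * P(k) * sinc
        s += f * (1.0 if i in (0, n - 1) else (4.0 if i % 2 else 2.0))
    return s * h / (3.0 * 2.0 * math.pi**2)

def xi_exact(x):          # analytic transform of the Gaussian spectrum
    return math.exp(-x * x / 4.0) / (8.0 * math.pi**1.5)
```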
Our analysis can be used to determine the growth of $P(k)$ or $\xi(x)$ as well. In practice, a more relevant quantity characterizing the density inhomogeneity is $\Delta_k^2 \equiv (k^3 P(k) / 2\pi^2)$, where $P(k) = |\delta_k|^2$ is the power spectrum. Physically, $\Delta^2_k$ represents the power in each logarithmic interval of $k$. From (\ref{fincases}) we find that this quantity behaves as
\begin{equation}
\Delta_k^2 = \cases {L^2(k)\Delta_k^2(t_{\rm enter})(a/a_{\rm eq})^2 \hspace{0.6in} ({\rm for}\ k_{\rm eq} < k )\cr
\Delta_k^2(t_{\rm enter})(a/a_{\rm eq})^2 (k/k_{\rm eq})^4 \quad \hspace{0.3in} ({\rm for}\ k<k_{\rm eq} ).\cr}\label{qdelkk}
\end{equation}
Let us next determine $\Delta_k^2 (t_{\rm enter})$ if the initial power spectrum, when the mode was much larger than the Hubble radius, was a power law with $\Delta_k^2\propto k^3 P(k) \propto k^{n+3}$. This mode was growing as $a^2$ while it was bigger than the Hubble radius (in the radiation dominated phase). Hence $\Delta_k^2 (t_{\rm enter}) \propto a^4_{\rm enter}k^{n+3}$. In the radiation dominated phase, we can relate $a_{\rm enter}$ to $\lambda$ by noting that $\lambda a_{\rm enter} \propto t_{\rm enter}\propto a^2_{\rm enter}$; so $\lambda \propto a_{\rm enter} \propto k^{-1}$. Therefore,
\begin{equation}
\Delta_k^2 (t_{\rm enter}) \propto a^4_{\rm enter}k^{n+3} \propto k^{n-1} .\end{equation}
Using this in (\ref{qdelkk}) we find that
\begin{equation}
\Delta_k^2 = \cases{
L^2(k)k^{n-1} (a/a_{\rm eq})^2 \hspace{0.4in} ({\rm for}\ k_{\rm eq}< k )\cr
k^{n+3} (a/a_{\rm eq})^2 \hspace{0.8in} ({\rm for} \ k< k_{\rm eq} ).\cr}
\end{equation}
This is the shape of the power spectrum for $a>a_{\rm eq}$. It retains its initial primordial shape $\left( \Delta^2_k \propto k^{n+3}\right)$ at very large scales ($k< k_{\rm eq}$ \ or \ $\lambda>\lambda_{\rm eq}$). At smaller scales, its amplitude is essentially reduced by four powers of $k$ (from $k^{n+3}$ to $k^{n-1}$). This arises because the small wavelength modes enter the Hubble radius earlier on and their growth is suppressed more severely during the phase $a_{\rm enter} < a < a_{\rm eq}$.
Note that the index $n=1$ is special. In this case, $\Delta_k^2 (t_{\rm enter})$ is independent of $k$ and all the scales enter the Hubble radius with the same amplitude. The above analysis suggests that if $n=1$, then all scales with $k_{\rm eq} < k $ will have nearly the same power except for the weak, logarithmic dependence through $L^2(k)$. Small scales will have slightly more power than the large scales due to this factor.
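The shape of the processed spectrum is summarized by a small function in the sketch below (plain Python; the normalization is arbitrary, and the region $k\sim k_{\rm eq}$, where the two asymptotic branches join, is not modelled accurately). For $n=1$ it exhibits the steep $k^4$ rise at large scales and the nearly flat behaviour, up to the $L^2(k)$ factor, at small scales:

```python
import math

def Delta2(k, n=1.0, k_eq=1.0, a_over_aeq=1.0):
    """Band power Delta_k^2 for a > a_eq; arbitrary normalization.
    The two branches are asymptotic forms, so the neighbourhood of
    k = k_eq (where L(k) -> 0) is not modelled accurately."""
    if k < k_eq:
        return a_over_aeq**2 * k**(n + 3)
    L = 5.0 * math.log(k / k_eq)      # logarithmic factor L(k)
    return a_over_aeq**2 * L**2 * k**(n - 1)
```

Over a decade in $k$ well above $k_{\rm eq}$, the band power changes only through $(\ln k/k_{\rm eq})^2$, compared to the factor $10^4$ change per decade below $k_{\rm eq}$.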
There is another --- completely different --- reason why the $n=1$ spectrum is special. If $P(k) \propto k^n$, the power spectrum for the gravitational potential $P_{\varphi}(k) \propto (P(k)/k^4)$ varies as $P_{\varphi}(k) \propto k^{n-4}$. The power per logarithmic band {\it in the gravitational potential} varies as $\Delta^2_{\varphi} \equiv (k^3P_{\varphi}(k)/2\pi^2)\propto k^{n-1}$. For $n=1$, this is independent of $k$ and each logarithmic interval in $k$ space contributes the same amount of power to the gravitational potential. Hence {\it any} fundamental physical process which is scale invariant will generate a spectrum with $n=1$. Thus observational verification of the index $n=1$ {\it only} verifies the fact that the fundamental process which led to the primordial fluctuations is scale invariant.
Finally, we mention a few other related measures of inhomogeneity.
Given a variable $\delta(\bld x)$ we can smooth it over some scale by using window functions $W(\bld x)$ of suitable radius and shape (We have suppressed the $t$ dependence in the notation, writing $\delta(\bld x , t)$ as $\delta(\bld x )$). Let the smoothed function be
\begin{equation}
\delta_W(\bld x) \equiv \int \delta (\bld x + \bld y) W(\bld y) d^3 \bld y .
\end{equation}
Fourier transforming $\delta_W(\bld x)$, we find that
\begin{equation}
\delta_W(\bld x) = \int {d^3\bld k\over (2\pi)^3} \delta_{\bld k} W_{\bld k}^* {\rm e}^{i\bld k\cdot \bld x} \equiv \int {d^3\bld k\over (2\pi)^3} Q_{\bld k}.
\end{equation}
If $\delta_{\bld k}$ is a Gaussian random variable, then $Q_{\bld k}$ is also a Gaussian random variable. Clearly $\delta_W(\bld x)$ --- which is obtained by adding several Gaussian random variables $Q_{\bld k}$ --- is also a Gaussian random variable. Therefore, to find the probability distribution of $\delta_W(\bld x)$ we only need to know the mean and variance of $\delta_W(\bld x)$. These are,
\begin{eqnarray}
\langle \delta_W(\bld x)\rangle = \int {d^3\bld k\over (2\pi)^3} \langle \delta_{\bld k}\rangle W_{\bld k}^* {\rm e}^{i\bld k \cdot \bld x} = 0\nonumber \\
\langle \delta_W^2(\bld x)\rangle = \int {d^3\bld k\over (2\pi)^3}P(\bld k)|W_{\bld k}|^2 \equiv \mu^2. \label{seventyseven}
\end{eqnarray}
Hence the probability that $\delta_W$ has a value $q$ at any location is given by
\begin{equation}
{\cal P}(q) = {1\over (2\pi \mu^2)^{1/2}} \exp \left( - {q^2\over 2 \mu^2}\right).
\end{equation}
Note that this is independent of $\bld x$, as expected.
A more interesting construct will be based on the following
question: What is the probability that the values of $\delta_W$
at two points ${\bf x_1}$ and ${\bf x_2}$ are $q_1$ and $q_2$ ?
Once we choose $\left( \bld x_1, \bld x_2 \right)$, the quantities $\delta_W \left( \bld x_1 \right)$ and $\delta_W \left( \bld x_2 \right)$ are {\it correlated} Gaussians with $\langle \delta_W \left( \bld x_1 \right) \delta_W \left( \bld x_2 \right) \rangle = \xi_R \left( \bld r \right)$, where $\bld r = \bld x_1 - \bld x _2 $. The simultaneous probability distribution for $\delta_W(\bld x_1)=q_1 $ and $\delta_W (\bld x_2)=q_2 $ \ for two correlated Gaussians is given by:
\begin{equation}
{\cal P} [q_1,q_2]= {1 \over 2 \pi \mu^2} \left( {1 \over 1 - A^2 }\right)^{1/2} \exp \left( - Q [q_1, q_2] \right)
\end{equation}
\noindent where
\begin{equation}
Q[q_1, q_2] = {1 \over 2} \left( { 1 \over 1 -A^2}\right) {1 \over \mu^2} \left[ q^2_1 + q^2_2 - 2Aq_1q_2 \right];
\end{equation}
with $A \equiv \left[ \xi_R (r)/\mu^2\right]$. (This is easily verified by computing $\langle q_1 \rangle, \langle q_2 \rangle$ and $ \langle q_iq_j \rangle$ explicitly). We can now ask: What is the probability that both $q_1$ and $q_2$
are high density peaks ? Such a question is particularly relevant
since we may expect high density regions to be the locations of galaxy formation
in the universe (see e.g. Kaiser, 1985). Then the correlation function of the galaxies will be the correlation between the {\it high density} peaks of the underlying gaussian
random field. This is easily computed to be
\begin{equation}
P_2 \left[ q_1 > \nu\mu, q_2 > \nu\mu \right] = \int\limits^{\infty}_{\nu \mu} dq_1 \int\limits^{\infty}_{\nu \mu} dq_2 P[q_1, q_2] \equiv P^2_1 (q > \nu\mu) \left[ 1 + \xi_{\nu} (r) \right]
\end{equation}
\noindent where $\xi_{\nu}(r)$ denotes the correlation function for regions with density which is $\nu$ times higher than the variance of the field. Explicit computation now gives
\begin{equation}
P_2 \propto \int\limits^{\infty}_{\nu } dt_1 \int\limits^{\infty}_{\nu } dt_2 \exp \left\{ - {1 \over 2} {1 \over 1 -A^2} \left( t^2_1 + t^2_2 - 2 At_1t_2 \right) \right\}
\end{equation}
This result can be expressed in terms of the error function. An interesting special case in which this expression can be approximated occurs when $A \ll 1$ and $ \nu \gg 1$, while $A \nu^2$ is arbitrary. Then we get
\begin{equation}
P_2 \cong {1\over 2 \pi} e^{-\nu^2} \exp \left( A \nu^2 \right) \cong P^2_1 \left( q > \nu \mu \right) \exp \left( A \nu^2 \right)
\end{equation}
so that
\begin{equation}
\xi_{\nu} \left( r \right) = \exp \left( A \nu^2 \right) - 1 = \exp \left[ {\nu^2 \over \mu^2} \xi_R \left( r \right) \right] - 1
\end{equation}
In other words, the correlation function of high density peaks of a
gaussian random field can be significantly higher than the correlation function
of the underlying field. If we further assume that
$A \ll 1, \nu \gg 1$ and $ A \nu^2 \ll 1,$ then
\begin{equation}
\xi_{\nu} (r) \cong \nu^2 {\xi_R(r) \over \xi_R(0)} = \left( {\nu \over \mu}\right) ^2 \xi_R\left( r\right)
\end{equation}
In this limit $\xi_{\nu}(r) \propto \xi_R(r)$ with the correlation increasing as $\nu^2$.
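These statements can be verified numerically from the defining double integral. For unit variance Gaussians with correlation coefficient $A$, the conditional distribution of $t_2$ given $t_1$ is a Gaussian of mean $At_1$ and variance $1-A^2$, which reduces $P_2$ to a single integral. The sketch below (plain Python; the values of $\nu$ and $A$ are illustrative) confirms that the peaks are positively correlated and that $1+\xi_\nu\simeq\exp(A\nu^2)$ in the regime $A\ll1$, $\nu\gg1$:

```python
import math

def Q(x):                              # upper tail of a unit Gaussian
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def P2(nu, A, n=4001, span=12.0):
    """P(q1 > nu mu, q2 > nu mu) for unit Gaussians with correlation A."""
    h = span / (n - 1)
    root = math.sqrt(1.0 - A * A)
    s = 0.0
    for i in range(n):                 # composite Simpson rule
        t = nu + i * h
        phi = math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
        f = phi * Q((nu - A * t) / root)   # conditional tail of q2 given q1 = t
        s += f * (1.0 if i in (0, n - 1) else (4.0 if i % 2 else 2.0))
    return s * h / 3.0
```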
A simple example of the window function arises in the following context.
\noindent Consider the mass contained within a sphere of radius $R$ centered at some point $\bld x$ in the universe. As we change $\bld x$,
keeping $R$ constant, the mass enclosed by the sphere will vary randomly
around a mean value $M_0 = (4\pi /3) \rho_B R^3$ where $\rho_B$ is the matter density of the background universe. The mean square fluctuation in this mass
$\langle (\delta M / M)^2_R \rangle$ is a good measure of the inhomogeneities present in the universe at the scale $R$. In this case, the window function is $W(\bld y) = 1$ for $|\bld y| \le R$ and zero otherwise. The variance in (\ref{seventyseven}) becomes:
\begin{eqnarray}
\sigma_{{\rm sph}} ^2 (R)& =& \langle \delta^2_W \rangle = \int {d^3 k\over (2\pi)^3} P(k) |W_{{\rm sph}}(k)|^2 \nonumber \\
&= &\int _0^{\infty} {dk \over k} \left( {k^3 P\over 2\pi^2}\right) \left\{ {3\left( \sin kR - kR \cos kR\right) \over k^3R^3}\right\}^2\label{qsigsph}\nonumber \\
\end{eqnarray}
This will be a useful statistic in many contexts.
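For a pure power law spectrum, $\Delta^2_k \propto k^{n+3}$, the integral in (\ref{qsigsph}) scales as $\sigma^2_{\rm sph}(R)\propto R^{-(n+3)}$, since the window depends on $k$ and $R$ only through $kR$. The sketch below (plain Python; the cutoff and grid are ad hoc choices) checks the resulting factor $2^{n+3}=4$ between $R$ and $2R$ for $n=-1$:

```python
import math

def W_top(y):
    """Fourier transform of the spherical top hat window."""
    if y < 1e-6:
        return 1.0
    return 3.0 * (math.sin(y) - y * math.cos(y)) / y**3

def sigma2(R, n=-1.0, kmax=200.0, N=200001):
    """sigma_sph^2(R) for Delta_k^2 = k^(n+3), arbitrary normalization."""
    h = kmax / (N - 1)
    s = 0.0
    for i in range(1, N):
        k = i * h
        s += k**(n + 2) * W_top(k * R)**2 * h      # (dk/k) k^(n+3) W^2
    return s
```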
Another quantity which we will use extensively in later sections is the average value of the correlation function within a sphere of radius $r$, defined to be
\begin{equation}
\bar\xi = {3\over r^3} \int_0^r \xi (x) x^2 dx \label{eighty}
\end{equation}
Using
\begin{equation}
\xi \left( \bld x \right) \equiv \int {d^3 \bld k \over \left( 2 \pi \right)^3 } P \left( \bld k \right) {\rm e}^{i \bld k . \bld x } = \int\limits^{\infty}_0 {dk \over k } \left( {k^3 P \left( k \right) \over 2 \pi^2 } \right) \left( {\sin kx \over kx } \right)
\end{equation}
and (\ref{eighty}) we find that
\begin{eqnarray}
\bar\xi\left( r \right) &=& {3 \over r^3} \int\limits^{\infty}_0 {dk \over k^2} \left( {k^3 P \over 2 \pi^2} \right) \int\limits^r_0 dx \left( x \sin kx \right) \nonumber \\
&=& {3 \over 2 \pi^2 r^3} \int\limits^{\infty}_0 {dk \over k} P \left( k \right) \left[ \sin kr - kr \cos kr \right] .\nonumber \\ \label{eightythree}
\end{eqnarray}
A simple computation relates $\sigma_{{\rm sph}}^2 (R)$ to $\xi(x)$ and $\bar\xi(x)$.
We can show that
\begin{equation}
\sigma_{{\rm sph}} ^2 \left( R \right) = {3 \over R^3 } \int^{2R}_0 x^2dx \xi \left( x \right) \left( 1 - {x \over 2R} \right)^2 \left( 1 + {x \over 4R} \right) .\label{qsigxi}
\end{equation}
and
\begin{equation}
\sigma_{{\rm sph}}^2 \left( R \right) = {3 \over 2} \int\limits^{2R}_0 {dx \over \left( 2 R \right)} \bar\xi \left( x \right) \left( {x \over R } \right)^3 \left[ 1 - \left( {x \over 2R} \right)^2 \right] .
\end{equation}
Note that $\sigma^2_{\rm sph}$ at $R$ is determined entirely by $\xi(x)$ (or $\bar\xi(x))$ in the range $0\leq x \leq 2R$. (For a derivation, see Padmanabhan, 1996)
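The first of these relations can be verified numerically for any reasonable spectrum. The sketch below (plain Python; the Gaussian spectrum is purely illustrative) computes $\sigma^2_{\rm sph}(R)$ once directly from (\ref{qsigsph}) and once from $\xi(x)$ through (\ref{qsigxi}), with $\xi(x)$ itself obtained from $P(k)$:

```python
import math

def simpson(f, a, b, n=2001):          # composite Simpson rule; n must be odd
    h = (b - a) / (n - 1)
    s = f(a) + f(b)
    for i in range(1, n - 1):
        s += f(a + i * h) * (4.0 if i % 2 else 2.0)
    return s * h / 3.0

def P(k):                              # illustrative smooth spectrum
    return math.exp(-k * k)

def W(y):                              # spherical top hat window
    return 1.0 if y < 1e-8 else 3.0 * (math.sin(y) - y * math.cos(y)) / y**3

def xi(x):                             # xi(x) from P(k), isotropic transform
    def f(k):
        kx = k * x
        sinc = 1.0 if abs(kx) < 1e-8 else math.sin(kx) / kx
        return k * k * P(k) * sinc
    return simpson(f, 0.0, 8.0) / (2.0 * math.pi**2)

R = 1.0
sigma2_direct = simpson(lambda k: k * k * P(k) * W(k * R)**2,
                        0.0, 8.0) / (2.0 * math.pi**2)
sigma2_from_xi = (3.0 / R**3) * simpson(
    lambda x: x * x * xi(x) * (1.0 - x / (2 * R))**2 * (1.0 + x / (4 * R)),
    0.0, 2.0 * R, n=401)
```

The two evaluations agree, illustrating that $\sigma^2_{\rm sph}(R)$ depends on $\xi(x)$ only in the range $0\leq x\leq 2R$.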
The Gaussian nature of $\delta_k$ cannot be maintained if the evolution couples the modes for different values of $\bld k$. Equation (\ref{exev}), which describes the evolution of $\delta_{\bld k}(t)$, shows that the modes do mix with each other as time goes on. Thus, in general, Gaussian nature of $\delta_{\bld k}$'s cannot be maintained in the nonlinear epochs.
\section{Zeldovich approximation}
We shall next consider the evolution of perturbations in the nonlinear epochs. This is an intrinsically complex problem and the only exact procedure for studying it involves setting up large scale numerical simulations. Unfortunately, numerical simulations tend to obscure the basic physics contained in the equations and essentially act as a `black box'. Hence it is worthwhile to analyse the nonlinear regime using some simple analytic approximations in order to obtain insights into the problem. In sections 6 to 8 and in section 11 we shall describe a series of such approximations with increasing degree of complexity. The first one --- called the Zeldovich approximation --- is fairly simple and leads to an idea of the kind of structures which generically form in the universe. This approximation, however, is not of much use for more detailed work. The second and third approximations described in sections 7 and 8 are more powerful and allow us to model the universe based on the evolution of initially overdense regions. Finally, we discuss in section 11 a fairly sophisticated approach involving the nonlinear scaling relations which are present in the dynamics of gravitational clustering. In between the discussion of these approximations, we also describe, in sections 9 and 10, some useful procedures which can be adopted to answer questions that are directly relevant to structure formation.
A useful insight into the nature of linear perturbation theory (as well as nonlinear clustering) can be
obtained by examining the nature of particle trajectories which lead
to the growth of the density contrast $\delta_L (a) \propto a$.
To determine the particle trajectories corresponding to the
linear limit, let us start by writing the trajectories in the form
\begin{equation}
{\bf x}_T (a,{\bf q}) = {\bf q} + {\bf L} (a,{\bf q})
\end{equation}
where ${\bf q}$ is the Lagrangian coordinate (indicating the
original position of the particle) and ${\bf L}(a,{\bf q})$ is
the displacement. The corresponding Fourier transform of the density contrast is given by the general expression
\begin{equation}
\delta (a,{\bf k})= \int d^3{\bf q}\, e^{-i{\bf k\cdot q}-i{\bf k\cdot L}(a,{\bf q})} - (2 \pi)^3 \delta_{\rm Dirac} [{\bf k}]
\end{equation}
In the linear regime, we expect the particles to have moved very little
and hence we can expand the integrand in the above equation in a Taylor
series in $({\bf k\cdot L})$. This gives, to the lowest order,
\begin{equation}
\delta (a,{\bf k})\cong -\int d^3{\bf q}\, e^{-i{\bf k\cdot q}} (i{\bf k\cdot L}(a,{\bf q})) = -\int d^3{\bf q}\, e^{-i{\bf k\cdot q}}\left( \nabla_q \cdot {\bf L}\right)
\end{equation}
showing that $\delta(a,\bld k)$ is the Fourier transform of $-\nabla_{\bld q} \cdot \bld L (a, \bld q)$. This allows us to identify $\nabla\cdot {\bf L}(a,{\bf q})$ with
the original density contrast in real space $- \delta (a,{\bf q})$. Using
the Poisson equation (for $\Omega =1$, which is assumed for simplicity) we can write $\delta(a,\bld q)$ as a divergence; that is
\begin{equation}
\nabla \cdot {\bf L}(a,{\bf q}) = - \delta(a,{\bf q}) = - {2\over 3} H_0^{-2} a \nabla \cdot (\nabla \phi)
\end{equation}
which, in turn, shows that {\it a consistent set} of displacements that will
lead to $\delta (a) \propto a$ is given by
\begin{equation}
{\bf L}(a,{\bf q}) = - (\nabla \psi)a \equiv a {\bf u}({\bf q}) ;
\qquad \psi\equiv (2/3) H_0^{-2}\phi \label{ninety}
\end{equation}
The trajectories in this limit
are, therefore, linear in $a$:
\begin{equation}
\bld x_{T} (a,{\bf q}) = {\bf q} + a {\bf u}({\bf q})\label{trajec}
\end{equation}
A useful approximation to describe the quasilinear stages of clustering is obtained by using the trajectory in (\ref{trajec}) as an ansatz valid {\it even at quasilinear epochs}. In this approximation, called the Zeldovich approximation, the proper Eulerian position $\bld r $ of a particle is related to its Lagrangian position $\bld q $ by
\begin{equation}
{\bf r}(t) \equiv a(t) {\bf x}(t) = a(t) [{\bf q} +
a(t) {\bf u}({\bf q}) ] \label{lagq}
\end{equation}
where ${\bf x}(t)$ is the comoving Eulerian coordinate.
The relation (\ref{lagq}) gives the comoving
position $({\bf x})$ and proper position $({\bf r})$ of a particle at
time $t$, given that at some time in the past it had the comoving position
${\bf q}$.
If the initial, unperturbed,
density is $\overline \rho$ (which is independent of ${\bf q})$,
then the conservation of mass implies that the perturbed density will be
\begin{equation}
\rho ({\bf r},t) d^3{\bf r} = \bar \rho d^3{
\bf q}.\label{qmcons}
\end{equation}
Therefore
\begin{equation}
\rho({\bf r},t) = \bar \rho \, {\rm det} \left({ \partial q_i \over \partial r_j}\right) =
{\bar \rho/a^3 \over {\rm det}
(\partial x_j/\partial q_i)} = {\rho_b(t)
\over {\rm det}
( \delta_{ij} + a(t) (\partial u_j/\partial q_i))}\label{qjacob}
where we have set $\rho_b(t) = [\bar \rho / a^3(t)]$.
Since ${\bf u}({\bf q})$ is a gradient of a scalar function,
the Jacobian in the denominator of (\ref{qjacob}) is the determinant of a real symmetric
matrix. This matrix
can be diagonalized at every point ${\bf q}$, to yield a set of
eigenvalues and principal axes as a function of ${\bf q}$.
If the eigenvalues of $(\partial u_j/
\partial q_i) $ are $[-\lambda_1({\bf q})$, $-\lambda_2({\bf q})$,
$-\lambda_3({\bf q})]$ then the perturbed density is given by
\begin{equation}
\rho({\bf r},t) = {\rho_b(t) \over (1 - a(t)\lambda_1({\bf q}))
(1 - a(t) \lambda_2({\bf q}))
(1 - a(t)\lambda_3({\bf q}))} \label{qeig}
\end{equation}
where ${\bf q}$ can be expressed as a function of ${\bf r}$ by solving (\ref{lagq}).
This expression describes the effect of
deformation of an infinitesimal, cubical,
volume (with the faces of the cube
determined by the eigenvectors corresponding to $\lambda_n$)
and the consequent change in the density.
A positive $\lambda$
denotes collapse and negative $\lambda$
signals expansion.
In an overdense region, the density will become
infinite if one of the terms in brackets in the denominator of (\ref{qeig})
becomes zero. In the generic case,
these eigenvalues will be different
from each other;
so that we can take, say, $\lambda_1\geq \lambda_2\geq \lambda_3$.
At any particular value of ${\bf q}$ the density will diverge for the first time when
$(1 - a(t)\lambda_1) = 0$;
at this instant
the material contained in a cube in the
${\bf q}$ space gets compressed to a sheet in the ${\bf r}$ space,
along the principal axis corresponding to $\lambda_1$.
Thus sheetlike structures, or `pancakes', will
be the first nonlinear structures to form when gravitational instability
amplifies density perturbations.
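The eigenvalue formula (\ref{qeig}) is easy to explore numerically. The following sketch (in Python; the eigenvalues are arbitrary illustrative values) evolves the density at one Lagrangian point and locates the pancake epoch $a = 1/\lambda_1$:

```python
import numpy as np

# Hypothetical eigenvalues of -(du_j/dq_i) at one Lagrangian point q,
# chosen for illustration, with lambda_1 >= lambda_2 >= lambda_3.
lam = np.array([0.9, 0.4, -0.2])

def zeldovich_density(a, lam, rho_b=1.0):
    """Density from the eigenvalue formula: rho = rho_b / prod_i (1 - a lambda_i)."""
    return rho_b / np.prod(1.0 - a * lam)

# The density grows as the axis with lambda_1 > 0 collapses ...
rho = [zeldovich_density(a, lam) for a in (0.1, 0.5, 1.0)]

# ... and the first caustic ("pancake") forms when a = 1/lambda_1.
a_pancake = 1.0 / lam.max()
```

Just before $a_{\rm pancake}$ the density along the collapsing axis becomes arbitrarily large, while the other two factors remain finite.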
The trajectories in the Zeldovich approximation, given by (\ref{trajec}), can be used in (\ref{powtransf}) to provide a {\it closed} integral equation for $\phi_{\bld k}$. In this case,
\begin{equation}
\bld x_T(\bld q, a) = \bld q - a \nabla \psi ; \quad \dot \bld x_{\rm T} = -\left( {2a \over 3t}\right) \nabla \psi; \quad \psi = {2 \over 3H_0^2 } \phi
\end{equation}
and, to the same order of accuracy, $B_{\bld k}$ in (\ref{exevii}) becomes:
\begin{equation}
\int d^3 \bld q \left( \bld k \cdot \dot\bld x_{\rm T}\right)^2e^{-i \bld k \cdot(\bld q + \bld L)} \cong \int d^3 \bld q ( \bld k \cdot \dot \bld x_{\rm T})^2 e^{-i \bld k \cdot \bld q}
\end{equation}
Substituting these expressions in (\ref{powtransf}) we find that the gravitational potential is described by the closed integral equation:
\begin{eqnarray}
\ddot \phi_{\bld k} + 4 {\dot a \over a} \dot \phi_{\bld k} &=& -{1 \over 3a^2} \int {d^3 \bld p \over (2 \pi)^3} \phi_{{1 \over 2} \bld k + \bld p}\, \phi_{{1 \over 2} \bld k - \bld p}\, {\cal G} (\bld k, \bld p)\nonumber \\
{\cal G} (\bld k, \bld p) &= &{7 \over 8} k^2 + {3 \over 2} p^2 - 5 \left( {\bld k \cdot \bld p\over k}\right)^2 \label{calgxx}
\end{eqnarray}
This equation provides a powerful method for analysing nonlinear clustering, since estimating $(A_{\bld k}-B_{\bld k})$ by the Zeldovich approximation has a very large domain of applicability
(Padmanabhan, 1998).
It is also possible to determine the power spectrum corresponding to these
trajectories using our general formula
\begin{equation}
P({\bf k},a) = |\delta ({\bf k},a)|^2 = \int d^3{\bf q} d^3{\bf q}' e^{-i {\bf k}\cdot ({\bf q}-{\bf q}')} \left< e^{-i{\bf k}\cdot \left[ {\bld L} (a,{\bf q})- {\bld L} (a,{\bf q}')\right]}\right>
\end{equation}
The ensemble averaging can be performed using the general result for gaussian
random fields:
\begin{equation} \left< e^{i{\bf k\cdot V}}\right> = \exp \left( - k_i k_j \sigma^{ij} (V)/2\right)
\end{equation}
where $\sigma^{ij}$ is the covariance matrix for the components
$V^a$ of a gaussian random field. This quantity can be expressed
in terms of the power spectrum $P_L(k)$ in the linear theory and a straightforward
analysis gives (see, e.g., Taylor and Hamilton, 1996)
\begin{equation}
P(k,a) = \int_0^\infty 2\pi q^2 dq \int_{-1}^{+1} d\mu\, e^{ikq\mu} \exp\left( -k^2\left[ F(q) + \mu^2 q F'(q) \right]\right)
\end{equation}
where
\begin{equation}
F(q) = \fra{a^2}{2\pi^2} \int_0^\infty dk\, P_L(k) \fra{j_1(kq)}{kq}
\end{equation}
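The gaussian averaging result used in this derivation can be spot-checked by direct Monte Carlo sampling. A small sketch (Python; the covariance matrix and wavevector are arbitrary test values) compares the sample average of $e^{i\bf k\cdot V}$ with $\exp(-k_ik_j\sigma^{ij}/2)$:

```python
import numpy as np

# Monte Carlo check of <exp(i k.V)> = exp(-k_i k_j sigma^ij / 2)
# for a zero-mean gaussian vector V; the 3x3 covariance is a toy choice.
rng = np.random.default_rng(42)
A = rng.standard_normal((3, 3))
sigma = A @ A.T / 3.0                       # a valid (positive semidefinite) covariance
k = np.array([0.3, -0.5, 0.2])

V = rng.multivariate_normal(np.zeros(3), sigma, size=200_000)
lhs = np.exp(1j * (V @ k)).mean()           # sample average of exp(i k.V)
rhs = np.exp(-0.5 * k @ sigma @ k)          # gaussian closed form
```

With $2\times 10^5$ samples the two sides agree to a few parts in a thousand, as expected from the $N^{-1/2}$ sampling error.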
The integrals, unfortunately, need to be evaluated numerically
except in the case of $n=-2$. In this case, we get
\begin{equation}
\Delta^2 (k,a) \equiv \fra{k^3P}{2\pi^2} = \fra{16}{\pi} \fra{a^2k}{[1+(2a^2 k)^2]^2} \left[ 1 + \fra{3\pi}{4} \fra{a^2k}{[1+(2a^2 k)^2]^{1/2}}\right]
\end{equation}
which shows that $\Delta^2 \propto a^2$ for small $a$ but decays as $a^{-2}$ at late times due to the dispersion of particles. Clearly, the Zeldovich approximation breaks down beyond a particular epoch and is of limited validity.
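The two limits can be read off numerically from the closed form above; a short sketch (Python; the wavenumber is fixed at an arbitrary value):

```python
import numpy as np

def Delta2(k, a):
    """Closed-form Zeldovich Delta^2(k,a) for the n = -2 spectrum."""
    x = a**2 * k
    return (16.0/np.pi) * x / (1.0 + (2*x)**2)**2 \
           * (1.0 + (3*np.pi/4.0) * x / np.sqrt(1.0 + (2*x)**2))

k = 1.0                                            # arbitrary fixed wavenumber
early_ratio = Delta2(k, 2e-3) / Delta2(k, 1e-3)    # -> 4, i.e. Delta^2 grows as a^2
late_decay = Delta2(k, 100.0) < Delta2(k, 10.0)    # power at fixed k decays at late times
```

The early-time ratio approaches $4$ (doubling $a$ quadruples $\Delta^2$), while at late times the power at fixed $k$ decreases as the particles stream through the caustics.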
\section{Spherical approximation}
In the nonlinear regime --- when $\delta\ga 1$ --- it is not possible to solve equation (\ref{exev}) exactly. Some progress, however, can be made if we assume that the trajectories are homogeneous; i.e. $ \bld x (t, \bld q) = f (t)\bld q $ where $f(t)$ is to be determined. In this case, the density contrast is
\begin{eqnarray}
\delta_{\bld k} (t) &=& \int d^3 \bld q \, e^{-if(t)\bld k \cdot \bld q} - (2 \pi)^3 \delta_D(\bld k)\nonumber \\
&=&(2\pi)^3 \delta_D (\bld k) [f^{-3} - 1] \equiv (2 \pi)^3 \delta_D (\bld k)\delta (t) \label{spheapprox}
\end{eqnarray}
where we have defined $\delta(t) \equiv \left[ f^{-3}(t)-1 \right]$ as the amplitude of the density contrast for the $\bld k = 0$ mode. It is now straightforward to compute $A$ and $B$ in (\ref{exev}). We have
\begin{equation}
A = 4 \pi G\rho_b \delta^2(t) [(2 \pi)^3 \delta_D(\bld k)]
\end{equation}
and
\begin{eqnarray}
B&=&\int d^3 \bld q (k^aq_a)^2 \dot f^2 e^{-if(k_aq^a)} = -\dot f^2 {\partial^2 \over \partial f^2} [(2 \pi)^3 \delta _D(f \bld k) ] \nonumber \\
&=& -{4 \over 3} {\dot \delta^2 \over (1 + \delta)} [(2 \pi)^3 \delta_D (\bld k)]
\end{eqnarray}
so that the equation (\ref{exev}) becomes
\begin{equation}
\ddot\delta + 2 {\dot a \over a} \dot\delta = 4 \pi G \rho_b (1 + \delta) \delta + {4 \over 3} {\dot\delta^2 \over (1 + \delta)} \label{x}
\end{equation}
(This particular approach to the spherical collapse model, which does not require fluid equations, is due to Padmanabhan 1998.) To understand what this equation means, let us consider, at some initial epoch $t_i$, a spherical region of the universe which has a slight constant overdensity compared to the background. As the universe expands, the overdense region will expand more slowly compared to the background, will reach a maximum radius, contract and virialize to form a bound nonlinear system. Such a model is called the ``spherical top-hat''.
For this spherical region of radius $R(t)$ containing dustlike matter of mass $M$ in addition to other forms of energy densities, the density contrast for dust will be given by:
\begin{equation}
1+\delta = {\rho\over \rho_b} = {3M \over 4\pi R^3(t)} {1\over \rho_b(t)} = {2GM \over \Omega_m H_0^2 a_0^3} \left[ {a(t) \over R(t)}\right]^3 \equiv \mu{a^3\over R^3}.
\end{equation}
[Note that, with this definition, $f \propto (R/a)$.] Using this in (\ref{x}) we can obtain an equation for $R(t)$ from the equation for
$\delta$; straightforward analysis gives
\begin{equation}
\ddot R = - {G M \over R^2} - {4 \pi G \over 3} \left( \rho + 3p\right)_{{\rm rest}} R .\label{qfive}
\end{equation}
This equation could have been written down ``by inspection'' using the relations \begin{equation}
\ddot R = -\nabla \phi_{{\rm tot}} ; \qquad \phi _{{\rm tot}} = \phi_{{\rm FRW}} + \delta \phi = - (\ddot a / 2a ) R^2 - G \delta M / R .
\end{equation}
Note that this equation is valid for perturbed ``dust-like'' matter in {\it any} background spacetime with density $\rho_{\rm rest}$ and pressure $p_{\rm rest}$ contributed by the rest of the matter. Our homogeneous trajectories $\bld x (\bld q , t) = f(t) \bld q$ actually describe the spherical top hat model.
This model is particularly simple for the $\Omega =1$, matter dominated universe, in which $\rho_{\rm rest} = p_{\rm rest} = 0$ and we have to solve the equation
\begin{equation}
{d^2R \over dt^2} = - {GM \over R^2}.
\label{qonefortytwo}
\end{equation}
This can be done by standard techniques and
the final results for the evolution of a spherical overdense
region can be summarized by the following relations:
\begin{equation}
R(t)={R_i\over 2\delta_i}(1-\cos\theta)=
{3x\over 10\delta_0}(1-\cos\theta),\label{qthfou}
\end{equation}
\begin{equation}
t={3t_i\over 4\delta^{3/2}_i}
(\theta-\sin\theta)= \left({3\over 5}\right)^{3/2}
{3t_0\over 4\delta^{3/2}_0}
(\theta-\sin\theta),
\label{qthfiv}
\end{equation}
\begin{equation}
\rho(t)=\rho_b(t)
{9(\theta-\sin\theta)^2\over 2(1-\cos\theta)^3}.\label{qthree}
\end{equation}
The density can be expressed in terms of the
redshift by using the relation
$(t/t_i)^{2/3}=(1+z_i)(1+z)^{-1}.$
This gives
\begin{equation}
(1+z)=\left({4\over 3}\right)^{2/3}
{\delta_i(1+z_i) \over (\theta- \sin \theta)^{2/3}}
=\left({5 \over 3}\right)\left({4 \over 3}\right)^{2/3}
{\delta_0 \over (\theta - \sin \, \theta)^{2/3}}; \label{qredth}
\end{equation}
\begin{equation}
\delta = {9 \over 2}
{(\theta
- \sin \, \theta)^2 \over (1- \cos \, \theta)^3} - 1. \label{qdeuse}
\end{equation}
Given an initial density contrast $\delta_i$ at
redshift $z_i$, these equations define (implicitly) the function $\delta (z)$
for $z>z_i$. Equation (\ref{qredth}) defines $\theta$ in terms
of $z$ (implicitly); equation (\ref{qdeuse}) gives
the density contrast at that $\theta (z)$.
For comparison, note that linear evolution gives
the density contrast $\delta_L$ where
\begin{equation}
\delta_L = {\overline \rho_L \over \rho_b}-1
={3 \over 5}
{\delta_i(1+z_i) \over 1+z}
={3 \over 5}
\left({3 \over 4}\right)^{2/3}
(\theta - \sin \theta)^{2/3}. \label{qrsle}
\end{equation}
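These parametric relations are easy to evaluate; the sketch below (Python) computes $\delta(\theta)$ and $\delta_L(\theta)$ and reproduces the comparison values quoted in this section:

```python
import numpy as np

def delta_exact(theta):
    """Nonlinear density contrast of the top-hat, eq. (qdeuse)."""
    return 4.5 * (theta - np.sin(theta))**2 / (1.0 - np.cos(theta))**3 - 1.0

def delta_linear(theta):
    """Linearly evolved contrast at the same epoch, eq. (qrsle)."""
    return 0.6 * (0.75 * (theta - np.sin(theta)))**(2.0/3.0)

dl_a, d_a = delta_linear(np.pi/2), delta_exact(np.pi/2)        # 0.341 and 0.466
dl_b, d_b = delta_linear(2*np.pi/3), delta_exact(2*np.pi/3)    # 0.568 and ~1.01
dl_turn = delta_linear(np.pi)        # 1.063 at turn-around
dl_coll = delta_linear(2*np.pi)      # 1.686 at collapse
```

The last number, $\delta_L\simeq 1.686$ at collapse, is the standard threshold used in analytic estimates of the mass function.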
We can estimate the accuracy of the
linear theory by comparing $\delta(z)$
and $\delta_L(z)$.
To begin with, for $z \gg 1$, we have $\theta \ll 1$ and
we get $ \delta(z) \simeq
\delta_L(z)$.
When
$\theta = (\pi /2)$, $ \delta_L=(3/5)(3/4)^{2/3}
(\pi / 2 -1)^{2/3} = 0.341$
while $\delta = (9/2)(\pi /2 -1)^2 -1 = 0.466$; thus
the actual density contrast is about 40 percent higher. When
$\theta=(2 \pi/3), \delta_L = 0.568$
and $\delta =1.01 \simeq 1.$
If we interpret $\delta = 1$
as the transition point to nonlinearity, then such a
transition occurs at
$\theta = (2\pi /3)$, when
$\delta_L \simeq 0.57$.
From (\ref{qredth}), we see that this occurs at the redshift
$(1+z_{\rm nl}) = 1.06 \delta_i(1+z_i)= (\delta_0/0.57).$
The spherical region reaches the maximum radius of
expansion at $\theta = \pi$.
From our equations, we find that the redshift
$z_m$, the proper radius of the shell
$r_m$ and the average density contrast
$\delta_m$
at `turn-around' are:
\begin{eqnarray}
(1+z_m) &=&{\delta_i(1+z_i) \over \pi^{2/3}(3/4)^{2/3}}
=0.57(1+z_i)\delta_i\nonumber \\
&=&{5 \over 3} {\delta_0 \over (3 \pi /4)^{2/3}}
\cong {\delta_0 \over 1.062},\nonumber \\
r_m &=&{3x\over 5 \delta_0},
\left({\overline \rho \over \rho_b}\right)_m =
1+ \overline \delta_m=
{9\pi^2\over 16}\approx 5.6.
\end{eqnarray}
The first equation gives the redshift at turn-around for a region,
parametrized by the
(hypothetical) linear
density contrast $\delta_0$ extrapolated to the present epoch.
If, for example, $\delta_i\simeq 10^{-3}$ at $z_i\simeq 10^4$, such a
perturbation would have turned around at $(1+z_m)$ $\simeq 5.7$ or when
$z_m\simeq 4.7$. The second equation gives the maximum radius reached by the
perturbation. The third equation shows that the region
under consideration is nearly six times denser than the background
universe, at turn-around. This corresponds to
a density contrast of $\delta_m\approx 4.6$
which is definitely in the nonlinear regime.
The linear evolution gives $\delta_L=1.063$
at $\theta= \pi$.
After the spherical overdense region turns around it will continue to
contract. Equation (\ref{qthree}) suggests that at
$\theta=2\pi$ all the mass will collapse to a point. However,
long before this happens, the approximation that matter is distributed in
spherical shells and that random velocities of the particles are small, (implicit in the assumption of homogeneous trajectories $\bld x = f(t) \bld q)$
will break down. The collisionless (dark matter)
component will relax to a configuration
with radius $r_{\rm vir}$,
velocity dispersion $v$
and density $\rho_{\rm coll}.$ After virialization of the collapsed
shell, the potential energy $U$ and the kinetic energy $K$
will be related by $|U|=2K$ so that the total energy ${\cal E}\, =U+K=-K$.
At $t=t_{m}$ all the energy was
in the
form of potential
energy. For a spherically symmetric system with constant
density,
${\cal E}\,\approx -3G M^2/5r_m$.
The `virial velocity' $v$ and
the `virial radius' $r_{\rm vir}$ for the collapsing mass
can be estimated
by
the equations:
\begin{equation}
K\equiv {Mv^2\over 2}= -{\cal E}\; ={3GM^2\over 5r_m};
\quad|U|={3GM^2\over 5r_{\rm vir}}=2K=Mv^2.
\end{equation}
We get:
\begin{equation}
v= (6GM/5r_m)^{1/2};\quad
r_{\rm vir}=r_m/2.
\end{equation}
The time taken for the fluctuation to reach virial equilibrium,
$t_{\rm coll}$,
is essentially
the time corresponding to $\theta=2\pi$. From
equation (\ref{qredth}), we find that
the redshift at collapse, $z_{\rm coll}$, is
\begin{equation}
(1+z_{\rm coll})={\delta_i(1+z_i)\over (2 \pi)^{2/3}(3/4)^{2/3}}
=0.36 \delta_i (1+z_i) = 0.63(1+z_m)={\delta_0 \over
1.686}.
\end{equation}
The density of the collapsed object can also be determined fairly
easily.
Since $r_{\rm vir}=(r_m/2)$, the mean density of the collapsed object is
$\rho_{\rm coll}=8\rho_m$
where $\rho_m$ is the density of the object at turn-around.
We have, $\rho_m \cong 5.6 \rho_b(t_m)$
and $\rho_b(t_m)=(1+z_m)^3$
$(1+z_{\rm coll})^{-3}\rho_b(t_{\rm coll})$.
Combining these
relations, we get
\begin{equation}
\rho_{\rm coll}\simeq 2^3\rho_m\simeq
44.8\rho_b(t_m)\simeq
170\rho_b(t_{\rm coll})\simeq
170\rho_0(1+z_{\rm coll})^3
\end{equation}
where $\rho_0$ is the present cosmological density.
This result
determines $\rho_{\rm coll}$ in terms of the redshift of formation of a
bound object.
Once the system has virialized, its density and size do
not change. Since $\rho_b \propto a^{-3}$, the
density contrast $\delta$
increases as $a^3$ for $t>t_{\rm coll}$.
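All the characteristic numbers of this section follow from a few lines of arithmetic; the sketch below (Python) derives them from the parametric solution rather than quoting them:

```python
import numpy as np

# Characteristic top-hat numbers, derived from the parametric solution.
overdensity_turnaround = 9.0 * np.pi**2 / 16.0     # rho/rho_b at theta = pi (~5.6)
rho_coll_over_rho_m = 8.0                          # r_vir = r_m/2 raises density by 2^3
# rho_b(t_m)/rho_b(t_coll) = (t_coll/t_m)^2 = 4, since t_coll = 2 t_m and rho_b ~ t^-2
rho_coll_over_rho_b = rho_coll_over_rho_m * overdensity_turnaround * 4.0   # = 18 pi^2
delta_L_collapse = 0.6 * (0.75 * 2.0*np.pi)**(2.0/3.0)   # linear contrast at collapse
```

The exact value $18\pi^2 \approx 178$ is the origin of the rounded ``$\simeq 170$'' quoted above.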
This approach can be easily generalised to describe the situation in which the initial density profile is given by $\rho_i(r_i)$. From this profile, we can calculate the mass $M(r_i)$ and energy $E(r_i)$ of each shell labelled by the initial radius $r_i$. In spherically symmetric evolution, $M$ and $E$ are conserved and each shell will be described by equation (\ref{qonefortytwo}). Assuming that the average density contrast $\overline\delta_i(r_i)$ decreases with $r_i$, the shells will never cross during the evolution. Each shell will evolve in accordance with the equations (\ref{qthfou}), (\ref{qthfiv}) with $\delta_i$ replaced by the mean initial density contrast $\overline\delta_i(r_i)$ characterising the shell of initial radius $r_i$. Equation (\ref{qthree}) gives the mean density inside each of the shells, from which the density profile can be computed at any given instant.
A simple example for this case corresponds to a scale invariant situation in which $E(M)$ is a power law. If the energy of a shell containing mass $M$ is taken to be
\begin{equation}
E(M) = E_0 \left({M\over M_0}\right)^{2/3 - \epsilon} < 0,
\end{equation}
then the turn-around radius and turn-around time are given by
\begin{equation}
r_m(M) = -{GM\over E(M)} = -{GM_0\over E_0} \left({M\over M_0}\right)^{{1\over 3} + \epsilon} \label{qrm}
\end{equation}
\begin{equation}
t_m(M) = {\pi\over 2} \left({r_m^3\over 2GM}\right)^{1/2} = {\pi GM\over (-E_0/2)^{3/2}} \left( {M\over M_0}\right)^{3\epsilon/2}.
\end{equation}
To avoid shell crossing, we must have $\epsilon > 0$ so that outer shells with more mass turn around at later times. In such a scenario, the inner shells expand, turn around, collapse and virialize first and the virialization proceeds progressively to outer shells. We shall assume that each virialized shell settles down to a final radius which is a fixed fraction of the maximum radius. Then the density in the virialized part will scale as $(M/r^3)$ where $M$ is the mass contained inside a shell whose turn-around radius is $r$. Using (\ref{qrm}) to relate the turn-around radius and mass, we find that
\begin{equation}
\rho(r) \propto {M(r_m = r)\over r^3} \propto r^{3/(1+3\epsilon)} r^{-3} \propto r^{-9\epsilon / (1+3\epsilon)}.
\end{equation}
Two special cases of this scaling relation are worth mentioning: (i) If the energy of each shell is dominated by a central mass $m$ located at the origin, then $E\propto Gm/r \propto M^{-1/3}$. In that case, $\epsilon = 1$ and the density profile of the virialized region falls as $r^{-9/4}$. This situation corresponds to accretion onto a massive object. (ii)
If $\epsilon = 2/3$ then the binding energy $E$ is the same for all shells. Then we get $\rho \propto r^{-2}$, which corresponds to an isothermal sphere.
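The slope formula is trivial to encode, but doing so makes the two special cases immediate (Python):

```python
def profile_slope(eps):
    """Magnitude of the logarithmic slope in rho(r) ~ r^(-9 eps/(1+3 eps))."""
    return 9.0 * eps / (1.0 + 3.0 * eps)

slope_accretion = profile_slope(1.0)        # 2.25: accretion onto a central mass
slope_isothermal = profile_slope(2.0/3.0)   # 2.0: equal E for all shells
```

Note that the slope saturates at $3$ as $\epsilon\to\infty$ and vanishes as $\epsilon\to 0$, bracketing the physically interesting range.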
The spherical model can be easily generalised for the set of trajectories with $x^a(t, \bld q) = f^{ab}(t)q_b$ (Padmanabhan 1998). In this case, it is convenient to decompose the derivative of the velocity $\partial_a u_b = \dot f_{ab}$ into shear $\sigma_{ab}$, rotation $\Omega^c$ and expansion $\theta$ by writing
\begin{equation}
\dot f_{ab} = \sigma_{ab}+ \epsilon_{abc} \Omega^c+{1\over 3} \delta_{ab}\theta.
\end{equation}
where $\sigma_{ab}$ is the symmetric traceless part of $\dot f_{ab}$, $\epsilon_{abc} \Omega^c$ is the antisymmetric part and $(1/3) \delta_{ab} \theta$ is the trace.
In this case, (\ref{x}) gets generalised to:
\begin{equation}
\ddot\delta + 2 {\dot a \over a} \dot \delta = 4 \pi G \rho_b (1 + \delta) \delta + {4 \over 3} {\dot \delta^2 \over (1 + \delta)} + \dot a^2 (1 + \delta) (\sigma^2 - 2 \Omega^2) \label{y}
\end{equation}
where $\sigma^2 \equiv \sigma^{ab} \sigma_{ab}$ and $\Omega^2 \equiv \Omega^i\Omega_i$. From the last term on the right hand side we see that shear contributes positively to $\ddot\delta$ while rotation $\Omega^2$ contributes negatively. Thus shear helps the growth of inhomogeneities while rotation works against it. To see this explicitly,
we again introduce a function $R(t)$ by the definition
\begin{equation}
\label{deltadefn}
1+\delta= {{9GM{t^2}}\over {2R^3}} \equiv \mu \frac{a^3}{R^3}
\end{equation}
\noindent where $M$ and $\mu$ are constants. Using this relation
between $\delta$ and $R(t)$, equation (\ref{y}) can be converted
into the following equation for $R(t)$
\begin{equation}
\label{reqn}
\ddot{R}=-\frac{GM}{R^2}-\frac{1}{3} \dot{a}^2 \left(
\sigma^2-2\Omega^2\right) R
\end{equation}
\noindent where the first term represents the gravitational attraction
due to the mass inside a sphere of radius $R$
and the second gives the effect of the shear and angular momentum. We shall now see how an improved spherical collapse model can be constructed with this term.
\section{Improved spherical collapse model}
In the spherical collapse model (SCM, for short) each spherical shell expands at a progressively slower rate against the
self-gravity of the system, reaches a maximum radius and then collapses under its
own gravity, with a steadily increasing density contrast. The maximum radius,
$R_{max}=R_i/\delta_i$, achieved by the shell, occurs at a density
contrast $\delta =(9\pi^2/16)-1 \approx 4.6$, which is in the ``quasi-linear''
regime. In the case of a perfectly spherical system, there exists no
mechanism to halt the infall, which proceeds inexorably towards a
singularity, with all the mass of the system collapsing to a single point.
Thus, the fate of the shell is to collapse to zero radius at $\theta = 2\pi$ with an infinite
density contrast; this is, of course, physically unacceptable.
In real systems, however, the implicit assumptions
that (i) matter is distributed in spherical shells and (ii) the non-radial
components of the velocities of the particles are small, will
break down long before infinite densities are reached.
Instead, we expect the collisionless dark matter to reach virial equilibrium.
After virialization, $|U|=2 K$, where $U$ and $K$ are, respectively, the potential
and kinetic energies; the virial
radius can be easily computed to be half the maximum radius reached by the system.
The virialization argument is clearly physically well-motivated for real systems.
However, as mentioned earlier, there exists no mechanism in the standard SCM
to bring about this virialization; hence, one has to
introduce by hand the assumption that, as the
shell collapses and reaches a particular radius,
say $R_{max}/2$, the collapse
is halted and the shell remains at this radius thereafter. This arbitrary
introduction of virialization is clearly one of the major drawbacks of the standard
SCM and takes away its predictive power in the later stages of evolution. We
shall now see how the retention of the angular momentum
term in equation (\ref{reqn}) can serve to stabilize the collapse of the system,
thereby allowing us to model the evolution towards $r_{vir}=R_{max}/2$ smoothly.
(Engineer et al., 1998)
At this point, it is important to note a somewhat subtle aspect of our
generalisation. The original equations are clearly Eulerian in nature:
{\it i.e.} the time derivatives give the temporal variation of the quantities
at a fixed point in space. However, the time derivatives in equation
(\ref{y}), for the density contrast $\delta$, are of a different kind.
Here, the observer is moving with the fluid element and
hence, in this, Lagrangian case, the variation in density contrast seen
by the observer has, along with the intrinsic time variation, a component
which arises as a consequence of his being
at different locations in space at different instants of time. When the
$\delta$ equation is converted into an equation for the function $R(t)$,
the Lagrangian picture is retained; in SCM, we can interpret $R(t)$ as
the radius of a spherical shell, co--moving with the observer. The mass
$M$ within each shell remains constant in the absence of shell crossing
and the entire formalism is well defined. The physical
identification of $R$ is, however, not so clear in the case where the
shear and rotation terms are retained, as these terms break the spherical
symmetry of the system. We will nevertheless continue to think of $R$
as the ``effective shell radius'' in this situation, {\it defined by\/}
equation (\ref{deltadefn}) governing its evolution. Of course, there is
no such ambiguity in the {\it mathematical} definition of $R$ in this formalism. This is equivalent to taking $R^3$ as proportional to the volume of a region defined by the location of a set of mass points.
We now return to equation (\ref{y}),
and recast the equation into a form more
suitable for analysis. Using logarithmic variables, $D_{\rm SC} \equiv {\rm ln}
\hskip 0.03 in (1 + \delta)$ and $\alpha \equiv {\rm ln}\hskip 0.03 in a$, equation
(\ref{y}) can be written in the form (the subscript `SC'
stands for `Spherical Collapse')
\begin{eqnarray}
\label{deltalog}
\frac{d^2 D_{\rm SC}}{d \alpha^2}-\frac{1}{3} \left(\frac{d D_{\rm SC}}{d
\alpha }\right) ^2 + \frac{1}{2} \frac{d D_{\rm SC}}{d \alpha} &=&
\frac{3}{2} \left[\exp (D_{\rm SC})-1 \right]\nonumber \\
&& +\, a^2 (\sigma^2-2 \Omega^2)
\end{eqnarray}
where $\alpha$ plays the role of a time coordinate. It is also convenient to introduce the quantity, $S$, defined by
\begin{equation}
S \equiv a^2 (\sigma^2-2 \Omega^2)
\end{equation}
which we shall hereafter call the ``virialization term''. The
consequences of the retention of the virialization term are easy to
describe qualitatively. We expect the
evolution of an initially spherical shell to proceed along the lines of the standard SCM
in the initial stages, when any deviations from spherical symmetry, present in the
initial conditions, are small. However, once the maximum radius is reached and the
shell recollapses, these small deviations are amplified by a positive feedback
mechanism. To understand this, we note that all particles in a given spherical
shell are equivalent due to the spherical symmetry of the system. This implies
that the motion of any particle, in a specific shell, can be considered
representative of the motion of the shell as a whole. Hence, the behaviour of the
shell radius can be understood by an analysis of the motion of a single particle.
The equation of motion of a particle in an expanding universe can be written as
\begin{equation}
\ddot{{\bf X}_i}+2\frac{\dot{a}}{a} \dot{{\bf X}_i}=-\frac{\nabla \phi}{a^2}
\end{equation}
where $a(t)$ is the expansion factor of the locally overdense ``universe''.
The $\dot{{\bf X}_i}$ term acts as a damping force when it is positive;
{\it i.e.} while the background is expanding. However, when the
overdense region reaches the point of maximum expansion and turns around, this
term becomes negative, acting like a {\it negative\/} damping
term, thereby amplifying any deviations from spherical symmetry
which might have been initially present. Non-radial components of velocities
build up, leading to a randomization of velocities which finally results
in a virialised structure, with the mean relative velocity between any
two particles balanced by the Hubble flow. It must be kept in mind,
however, that the introduction of the virialization term changes the
behaviour of the solution in a global sense and it is not strictly
correct to say that this term starts to play a role {\it only after}
recollapse, with the evolution proceeding along the lines of the
standard SCM until then. It is nevertheless reasonable to expect that,
at early times when the term is small, the system will evolve as standard SCM
to reach a maximum radius, but will fall back smoothly to a constant size later on.
Equation (\ref{y}) is actually valid for any fluid system and the virialization term, $S$, is, in general, a function of $a$ and ${\bf x}$, since the derivatives in equation (\ref{y}) are total time derivatives,
which, for an expanding Universe, contain partial derivatives with respect
to both ${\bf x}$ and $t$ separately. Even in the case of displacements with $x^a = f^{ab}(t)q_b$, equation (\ref{y}) alone cannot uniquely determine all the components of $f^{ab}(t)$.
Handling this equation exactly will take us back to the full non-linear equations and, of course, no progress can be made. Instead, we will make the
{\it ansatz\/} that the virialization term depends on $t$ and ${\bf x}$
only through $\delta(t,{\bf x})$:
\begin{equation}
S(a,{\bf x}) \equiv S(\delta(a,{\bf x})) \equiv S(D_{\rm SC})
\end{equation}
In other words, $S$ is a function of the density contrast alone.
This {\it ansatz\/} seems well motivated because the density contrast, $\delta$,
can be used to characterize the SCM at any point in its evolution and one might
expect the virialization term to be a function only of the system's state, at
least to the lowest order. Further, the results obtained with this assumption
appear to be sensible and may be treated as a test of the {\it ansatz\/} in its
own framework.
\noindent To proceed further systematically, we {\it define} a function $h_{\rm SC}$
by the relation
\begin{equation}
\label{defh}
{{dD_{\rm SC}}\over {d\alpha}} = 3h_{\rm SC}
\end{equation}
For consistency, we shall assume the {\it ansatz\/} $h_{\rm SC}(a,{\bf x}) \equiv
h_{\rm SC}\left[\delta(a,{\bf x})\right]$.
The definition of $h_{\rm SC}$ allows us to write equation (\ref{deltalog}) as
\begin{equation}
\label{hequation}
\frac{d h_{\rm SC}}{d \alpha}=h_{\rm SC}^2-\frac{h_{\rm SC}}{2}+\frac{1}{2}
\left[\exp (D_{\rm SC}) -1\right] + \frac{S(D_{\rm SC})}{3}
\end{equation}
Dividing (\ref{hequation}) by (\ref{defh}), we obtain the following
equation for the function $h_{\rm SC}(D_{\rm SC})$\\
\begin{eqnarray}
\label{dhdDeqn}
\frac{dh_{\rm SC}}{dD_{\rm SC}} = \frac{h_{\rm SC}}{3}-\frac{1}{6}+ \frac{1}{6 h_{\rm SC}}
\left[\exp(D_{\rm SC})-1\right]+\frac{S(D_{\rm SC})}{9 h_{\rm SC}}
\end{eqnarray}
If we know the form of either $h_{\rm SC}(D_{\rm SC})$ or $S(D_{\rm SC})$,
this equation allows us to determine the other. Then, using equation (\ref{defh}),
one can determine $D_{\rm SC}$. Thus, our modification of the standard SCM
essentially involves providing the form of $S(D_{\rm SC})$ or
$h_{\rm SC}(D_{\rm SC})$.
We shall now discuss several features of such a modelling in order to arrive
at a suitable form.
The behaviour of $h_{\rm SC}(D_{\rm SC})$ can be qualitatively understood from
our knowledge of the behaviour of $\delta$ with time. In the linear regime
($\delta \ll 1$), we know that $\delta$ grows linearly with $a$; hence
$h_{\rm SC}$ increases with $D_{\rm SC}$. At the extreme non-linear end ($\delta \gg 1$),
the system ``virializes'', {\it i.e.\/} the proper radius and the density of the system become
constant. On the other hand, the density $\rho_b$, of the background, falls like $t^{-2}$
(or $a^{-3}$) in a flat, dust-dominated universe. The density contrast
is defined by $\delta = (\rho/\rho_b - 1) \simeq \rho/\rho_b$ (for $\delta \gg 1$)
and hence
\begin{equation}
\delta \propto t^2 \propto a^3
\end{equation}
in the non-linear limit. Equation (\ref{defh}) then implies that
$h_{\rm SC}(\delta)$ tends to unity for $\delta \gg 1$. Thus, we expect that
$h_{\rm SC}(D_{\rm SC})$ will start with a value far less than unity, grow, reach a
maximum a little greater than one and then smoothly fall back to unity.
[A more general situation discussed in the literature corresponds to $h
\rightarrow {\rm constant}$ as $\delta \rightarrow \infty$, though the
asymptotic value of $h$ is not necessarily unity. Our discussion can be
generalised to this case.]
This behaviour of the $h_{\rm SC}$ function can be given another useful
interpretation whenever the density contrast has a monotonically
decreasing relationship with the scale, $x$, with small $x$ implying large
$\delta$ and vice-versa. Then, if we use a local power law approximation
$\delta \propto x^{-n}$ for $\delta \gg 1$ with some $n >0$, we have $D_{\rm SC}
\propto \ln (x^{-1})$ and
\begin{equation}
h_{\rm SC} \propto {{dD_{\rm SC}} \over
{d\alpha}} \propto {{{d \ln} ({1\over x})}\over {d \ln a}} \propto
- \frac{\dot{x} a}{\dot{a} x} \propto - {v \over {{\dot a}x}}
\end{equation}
\par
\noindent where $v \equiv a{\dot x}$ denotes the mean relative velocity.
Thus, $h_{\rm SC}$ is proportional to the ratio of the relative peculiar velocity
to the Hubble velocity. We know that this ratio is small
in the linear regime (where the Hubble flow is dominant) and later
increases, reaches a maximum and finally falls back to unity with the
formation of a stable structure; this is another argument leading to
the same qualitative behaviour
of the $h_{\rm SC}$ function.
Note that, in standard SCM (for which $S = 0$), equation
(\ref{dhdDeqn}) reduces to
\begin{equation}
\label{dhdDscm}
3h_{\rm SC}\frac{dh_{\rm SC}}{dD_{\rm SC}}=h_{\rm SC}^2-{h_{\rm SC}\over 2}+{\delta \over 2}
\end{equation}
The presence of the linear term in $\delta$ on the RHS of the
above equation causes $h_{\rm SC}$ to increase with $\delta$, with $h_{\rm SC} \propto
\delta^{1/2}$ for $\delta \gg 1$. If virialization is imposed as
an {\it ad hoc\/} condition,
then $h_{\rm SC}$ should fall back to unity discontinuously --- which is
clearly unphysical; the form of $S(\delta)$ must hence be chosen so as to ensure
a smooth transition in $h_{\rm SC}(\delta)$ from one regime to another. [As an aside, we remark that $S(\delta)$ can be reinterpreted to include
the lowest order contributions arising from shell crossing,
multi-streaming, etc., besides the shear and angular momentum terms, {\it
i.e.} it contains all effects leading to virialization of the system; see S. Engineer et al., 1998]
We will now derive an approximate functional form for the virialization
function from physically well-motivated arguments.
If the virialization term is retained in equation (\ref{reqn}), we have
\begin{equation}
\label{theRequation}
{{d^2 R}\over {d t^2}}=-{{GM}\over {R^2}} - {{H^2 R} \over 3} S
\end{equation}
where $H=\dot a/a$.
Let us first consider the late time behaviour of the system. When virialization
occurs, it seems reasonable to
assume that $R\rightarrow {\rm constant} $ and $\dot{R} \rightarrow 0$.
This implies that, for large density contrasts,
\begin{equation}
S \approx -{{3GM} \over {R^3 H^2}} \;\; \qquad(\delta \gg 1)
\end{equation}
\noindent Using $H=\dot{a}/a=2/(3t)$ and equation (\ref{deltadefn}), we get
\begin{equation}
S \approx -{{27GM t^2} \over {4 R^3}} = -{3 \over 2} (1 + \delta
)\approx -{3\over 2}\delta \;\; \qquad(\delta \gg 1)
\end{equation}
Thus, the ``virialization'' term tends to a value of ($ -3 \delta/2$) in the non-linear
regime, when stable structures have formed. This asymptotic form for
$S(\delta)$ is, however, insufficient to model its behaviour
over the larger range of density contrast (especially the
quasi-linear regime) which is of interest to us. Since $S(\delta)$
tends to the above asymptotic form at late times, the residual part, {\it i.e.}
the part that remains after the asymptotic value has been subtracted away,
can be expanded in a Taylor series in $(1 / \delta)$ without any loss of generality.
Retaining the first two terms of expansion, we write the complete virialization term as
\begin{equation}
\label{netrotation2}
S(\delta)=-\frac{3}{2} (1+\delta) -\frac{A}{\delta}+\frac{B}{\delta^2}
+{\cal O}(\delta^{-3})
\end{equation}
Substituting for $S(\delta)$ in equation (\ref{deltalog}), we obtain,
for $\delta \gg 1$ \\
\begin{equation}
\label{new_dhdDeqn}
3h_{\rm SC}\delta \frac{dh_{\rm SC}}{d\delta} - h^2_{\rm SC} + \frac{h_{\rm SC}}{2} +
\frac{1}{2} = -\frac{A}{\delta}+\frac{B}{\delta^2}
\end{equation}
[It can be easily demonstrated that the first order term in the
Taylor series is alone insufficient to model the turnaround behaviour of the
$h$ function. We will hence include the next higher order term and use the
form in equation (\ref{netrotation2}) for the virialization term. The signs are chosen for future convenience, since it will turn out that both $A$ and $B$ are
greater than zero.] In fact, for sufficiently large $\delta$, the
evolution depends only on the combination $q\equiv(B/A^2)$. Equation (\ref{theRequation}) can be now written as
\begin{equation}
\label{theRequation2}
\ddot{R}=-\frac{GM}{R^2}-\frac{4R}{27t^2} \left[ -\frac{27 GMt^2}{4 R^3}- \frac{A}{\delta}+ \frac{B}{\delta^2} \right]
\end{equation}
Using $\delta \approx 9GMt^2/(2R^3)$ (valid for $\delta \gg 1$) and $B\equiv qA^2$, we may express equation (\ref{theRequation2})
completely in terms of $R$ and $t$. We now rescale $R$ and $t$ in the form
$R=r_{vir}y(x)$ and $t=\beta x$, where $r_{vir}$ is the final virialised
radius [{\it i.e.} $R \rightarrow r_{vir}$ for $t \rightarrow \infty$], and
$\beta^2=(8/3^5) (A/GM) r_{vir}^3$, to obtain the following equation for $y(x)$\\
\begin{equation}
\label{thescaledeqn}
y''=\frac{y^4}{x^4} -\frac{27}{4} q \frac{y^7}{x^6}
\end{equation}
We can integrate this equation to find $y_q(x)$ (the function $y(x)$ for a specific value of $q$) using the physically motivated boundary conditions $y=1$ and $y'=0$ as $x \rightarrow \infty$; these express the fact that the system reaches the virial radius $r_{vir}$ and remains at that radius asymptotically.
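This boundary-value problem is easy to handle numerically. The sketch below is a minimal Python illustration, not part of the original treatment: the starting point $x_0=20$, the step size, the representative value $q=0.02$, and the use of the leading asymptotic correction $y \simeq 1 + 1/(6x^2)$ (obtained by linearising equation (\ref{thescaledeqn}) about $y=1$) to set the initial data are all our choices. It integrates backwards from large $x$:

```python
import numpy as np

q = 0.02  # a representative value of the single free parameter

def f(x, s):
    # s = (y, y'); the scaled collapse equation y'' = y^4/x^4 - (27/4) q y^7/x^6
    y, v = s
    return np.array([v, y**4 / x**4 - 6.75 * q * y**7 / x**6])

# Impose y -> 1, y' -> 0 asymptotically: start at a large x0 with the
# leading correction y ~ 1 + 1/(6 x^2) and integrate backwards with RK4.
x0, x_end, h = 20.0, 0.45, -1e-3
x = x0
s = np.array([1.0 + 1.0 / (6.0 * x0**2), -1.0 / (3.0 * x0**3)])
xs, ys = [x], [s[0]]
while x + h >= x_end:
    k1 = f(x, s)
    k2 = f(x + h / 2, s + h / 2 * k1)
    k3 = f(x + h / 2, s + h / 2 * k2)
    k4 = f(x + h, s + h * k3)
    s = s + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    x += h
    xs.append(x)
    ys.append(s[0])

xs, ys = np.array(xs), np.array(ys)
y_max = ys.max()            # turnaround radius in units of r_vir
x_at_max = xs[ys.argmax()]  # scaled turnaround time
```

For this value of $q$ the solution rises to a maximum $y_{max}$ of about 1.5 before settling back to unity, the qualitative behaviour described in the text.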
\noindent The results of numerical integration of this equation for a range of $q$
values are shown in figure (\ref{figure2}).
As expected on physical grounds, the function has a maximum and gracefully decreases
to unity for large values of $x$ [the behaviour of $y(x)$ near $x=0$ is irrelevant since the
original equation is valid only for $\delta \geq 1$, at least]. For a given value of
$q$, it is possible to find the value $x_c$ at which the function reaches its maximum,
as well as the ratio $y_{max}=R_{max}/r_{vir}$. The time, $t_{max}$, at which the
system will reach the maximum radius is related to $x_c$ by the relation $t_{max}=
\beta x_c = t_0 (1+z_{max})^{-3/2}$, where $t_0=2/(3 H_0)$ is the present age of
the universe and $z_{max}$ is the redshift at which the system turns around.
Figure (\ref{figure3}) shows the variation of $x_c$ and $y_{max}\equiv
(R_{max}/r_{vir})$ for different values of $q$. The entire evolution of the
system in the modified spherical collapse model (MSCM) can be expressed in terms of
\begin{equation}
\label{MSCMsoln}
R(t)=r_{vir}\; y_q(t/\beta)
\end{equation}
where $\beta=(t_0/x_c) (1+z_{max})^{-3/2}$.
In SCM, the conventional value used for ($r_{vir}/R_{max}$) is ($1/2$),
which is obtained by enforcing the virial condition that $\vert U \vert=2K$, where
$U$ is the gravitational potential energy and $K$ is the kinetic energy. It must
be kept in mind, however, that the ratio ($r_{vir}/R_{max}$) is not really
constrained to be {\it precisely} ($1/2$) since the
actual value will depend on the final density profile and the precise definitions
used for these radii. While we expect it to be around $0.5$, some
amount of variation, say between 0.25 and 0.75, cannot be ruled out theoretically.
Figure (\ref{figure3}) shows the parameter ($R_{max}/r_{vir}$),
plotted as a function of $q=B/A^2$ (dashed line),
obtained by numerical integration of
equation (\ref{theRequation}) with the {\it ansatz\/} (\ref{netrotation2}).
The solid line gives the dependence of $x_c$ (or equivalently $t_{max}$)
on the value of $q$. It can be seen that one can obtain a suitable value for
the ($r_{vir}/R_{max}$) ratio by choosing a suitable value for $q$ and vice versa.
\begin{figure}
\centering
\psfig{file=fig1.ps,width=3.5truein,height=3.0truein,angle=-90}
\caption{The figure shows the function $y_q(x)$ for some values of $q$. The x axis has scaled time, $x$ and the y axis is the scaled radius $y$.}
\label{figure2}
\end{figure}
\par
\begin{figure}
\centering
\psfig{file=fig2a.ps,width=3.5truein,height=3.0truein,angle=-90}
\caption{The figure shows the parameters ($R_{max}/r_{vir}$) (broken line) and $x_c$
(solid line) as a function of $q=B/A^2$. This clearly demonstrates that the single
parameter description of the virialization term is constrained by the value that is
chosen for the ratio $r_{vir}/R_{max}$.}
\label{figure3}
\end{figure}
Using equation (\ref{defh}) and the definition $\delta \propto t^2/R^3$, we obtain
\begin{equation}
h_{\rm SC}(x)=1-\frac{3}{2} \frac{x}{y} \frac{d y}{d x}
\end{equation}
which gives the form of $h_{\rm SC} (x)$ for a given value of $q$, since $q$
determines the function $y_q(x)$.
Since $\delta$ can be expressed in terms of $x$, $y$ and $x_c$ as $\delta=
(9 \pi^2/2 x_c^2) x^2/y^3$, this allows us to implicitly obtain a form for
$h_{\rm SC}(\delta)$, determined only by the value of $q$.
It is possible to determine the best-fit value for $q$ by comparing these results with simulations. This is best done by comparing the form of $h_{\rm SC}(x)$. Such an analysis gives $q \cong 0.02$ (see S. Engineer et al., 1998).
Figure (\ref{figure4}) shows the plot of scaled radius $y_q(x)$ vs $x$,
obtained by integrating equation(\ref{thescaledeqn}), with $q=0.02$.
The figure also shows an accurate fit (dashed line) to this solution of the form
\begin{equation}
\label{yqfit}
y_q(x)=\frac{x+a x^3+b x^5}{1+c x^3+b x^5}
\end{equation}
with $a=-3.6$, $b=53$ and $c=-12$. This fit, along with values for $r_{vir}$
and $z_{max}$, completely specifies our model through equation (\ref{MSCMsoln}).
It can be observed that ($r_{vir}/R_{max}$) is approximately $0.65$.
It is interesting to note that the value obtained for the
($r_{vir}/R_{max}$) ratio is not very widely off the usual value of $0.5$ used
in the standard spherical collapse model, {\it in spite of the fact that no
constraint was imposed on this value, {\it ab initio}, in arriving at
this result.\/}
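Because the fit (\ref{yqfit}) is explicit, the numbers quoted above are easy to reproduce; the short Python sketch below does so (the grid and the finite-difference derivative are our choices):

```python
import numpy as np

a, b, c = -3.6, 53.0, -12.0  # fit coefficients for q = 0.02

def y_fit(x):
    # fitting formula y_q(x) = (x + a x^3 + b x^5) / (1 + c x^3 + b x^5)
    return (x + a * x**3 + b * x**5) / (1.0 + c * x**3 + b * x**5)

x = np.linspace(0.05, 10.0, 200001)
y = y_fit(x)

y_max = y.max()          # R_max / r_vir
x_c = x[y.argmax()]      # scaled time of turnaround
ratio = 1.0 / y_max      # r_vir / R_max, close to the quoted 0.65

# h_SC(x) = 1 - (3/2) (x / y) dy/dx, by finite differences
h = 1.0 - 1.5 * (x / y) * np.gradient(y, x)
```

The maximum of the fit gives $r_{vir}/R_{max}$ close to the quoted value, and $h_{\rm SC}$ rises above unity around turnaround before settling back to one at large $x$, as described earlier.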
Finally, figure (\ref{figure5}) compares the non-linear density
contrast in the modified SCM (dashed line) with that in the standard SCM
(solid line), by plotting both against the linearly extrapolated density contrast,
$\delta_L$.
It can be seen (for a given system with the same $z_{max}$ and
$r_{vir}$) that, at the epoch where the standard SCM model has a
singular behaviour ($\delta_L \sim 1.686$), our model has a smooth
behaviour with $\delta \approx 110$ (the value is not very sensitive
to the exact value of $q$). This is not widely off from the value
usually obtained from the {\it ad hoc} procedure applied in the standard
spherical collapse model. In a way, this explains the unreasonable
effectiveness of standard SCM in the study of non-linear clustering.
Figure (\ref{figure5}) also shows a comparison between the
standard SCM and the MSCM in terms of $\delta$ values in the MSCM at
two important epochs, indicated by vertical arrows. (i)
When $R = R_{max}/2$ in the SCM, {\it i.e.} the epoch at which
the SCM {\it virializes}, $\delta({\rm MSCM}) \sim 83$.
(ii) When the SCM hits the singularity, ($\delta_L \sim 1.6865$),
$\delta({\rm MSCM}) \sim 110$.\\
\par
\begin{figure}
\centering
\psfig{file=fig4.ps,width=3.5truein,height=3.0truein,angle=-90}
\caption{The figure shows a plot of the scaled radius of the shell $y_q$ as a function of scaled time $x$ (solid line) and the fitting formula $y_q=(x+ax^3+bx^5)/(1+cx^3+bx^5)$, with $a=-3.6$, $b=53$ and $c=-12$ (dashed line) (See text for discussion) }
\label{figure4}
\end{figure}
\begin{figure}
\centering
\psfig{file=fig5.ps,width=3.5truein,height=3.0truein,angle=-90}
\caption{The figure shows the non-linear density contrast in the SCM
(solid line) and in the modified SCM (dashed line), plotted against
the linearly extrapolated density contrast $\delta_L$ (discussion in text). }
\label{figure5}
\end{figure}
\section{Mass functions and abundances}
The description developed so far can also be used to address an important question: What fraction of the matter in the universe has formed bound structures at any given epoch, and what is the distribution in mass of these bound structures? We shall now describe a simple approach which answers these questions (Press and Schechter, 1974).
Gravitationally bound objects in the universe, like galaxies, span a large
dynamic range in mass. Let $f(M)dM$ be the number density of bound
objects in the mass range $(M, M+dM)$ [usually called the
``mass function"] and let $F(M)$ be the number density of objects with
masses {\it greater} than $M$.
Since the formation of gravitationally bound objects is an inherently
nonlinear process, it might seem that the linear theory cannot be used to determine $F(M)$. This, however, is not
entirely true. In any one realization of the linear density field
$\delta_R({\bf x})$,
filtered using a window function of scale $R$,
there will be regions with high density [i.e. regions with
$\delta_R>\delta_c$ where $\delta_c$ is some critical
value slightly greater than unity, say]. It seems reasonable to
assume that such regions will eventually condense out
as bound objects. Though the dynamics of that region will be nonlinear,
the process of condensation is unlikely to change the {\it mass} contained in that
region significantly. Therefore, if we can estimate the mean number of
regions with $\delta_R>\delta_c$ in a Gaussian random field, we will
be able to determine $F(M)$.
One way of achieving this is as follows: Let us consider
a density field $\delta_R({\bf x})$ smoothed by a window function
$W_R$ of scale radius $R$. As a first approximation, we may assume
that the region with $\delta (R,t) >\delta_c$, where $\delta(R,t)$ is the
density contrast smoothed on the scale $R$ and linearly extrapolated to time
$t$, will form a gravitationally bound object with mass
$M\propto \overline\rho R^3$ by the time $t$. The
precise form of the $M-R$ relation
depends on the window function used; for a step function $M=(4\pi/3)$
$\overline\rho R^3$, while for a Gaussian $M=(2\pi)^{3/2}\overline\rho R^3$.
Here $\delta_c$ is a critical value
for the density contrast which has to be supplied by theory. For example, $\delta_c \simeq 1.686$ in the spherical collapse model.
Since $\delta \propto t^{2/3}$ for an $\Omega =1 $ universe, the probability for the region to form a bound structure at $t$ is the same as the probability that $\delta > \delta_c (t_i /t)^{2/3}$ at some early epoch $t_i$. This probability can be easily estimated since, {\it at sufficiently early $t_i$}, the system is described by a Gaussian random field. Hence the fraction of bound objects with
mass greater than $M$ will be
\begin{eqnarray}
F(M)&=&\int^{\infty}_{\delta_c(t,t_i)}
P(\delta, R, t_i)d\delta=
{1\over \sqrt{2\pi}}
{1\over \sigma(R, t_i)}
\int^{\infty}_{\delta_c}\exp
\left(-{\delta^2\over 2\sigma^2(R,t_i)}\right)d\delta\nonumber \\
&=&{1\over 2} {\rm erfc}
\left({\delta_c (t,t_i)\over \sqrt{2}\sigma(R,t_i)}\right),
\label{qpswron}
\end{eqnarray}
where ${\rm erfc}(x)$ is the complementary error function. The mass
function $f(M)$ is just $(\partial F/\partial M)$; the (comoving)
number density $N(M,t)$ can be found by dividing this expression by
$(M/\overline\rho)$. Carrying out these operations we get
\begin{equation}
N(M,t)dM=-
\left({\overline\rho\over M}\right)
\left({1\over 2\pi}\right)^{1/2}
\left({\delta_c\over \sigma}\right)
\left({1\over \sigma}{d\sigma\over dM}\right)\exp
\left(-{\delta^2_c\over 2\sigma^2}\right)dM.
\end{equation}
Given the power spectrum $|\delta_k|^2$ and a window function $W_R$ one can
explicitly compute the right hand side of this expression.
There is, however, one fundamental difficulty with equation (\ref{qpswron}).
The integral of $f(M)$ over all $M$ should give unity; but it is easy
to see that, for the expression in (\ref{qpswron}),
\begin{equation}
\int^{\infty}_0f(M)dM=\int^{\infty}_0 dF={1\over 2}.
\end{equation}
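This shortfall is easy to exhibit numerically. In the sketch below the power law $\sigma(M)=M^{-1/2}$ and the value $\delta_c = 1.686$ are purely illustrative choices; the unrenormalised $F(M)$ of equation (\ref{qpswron}) then saturates at $1/2$, rather than unity, as $M \rightarrow 0$:

```python
import math

delta_c = 1.686

def sigma(M):
    # illustrative power law; any sigma(M) -> infinity as M -> 0 will do
    return M ** -0.5

def F_unnormalised(M):
    # eq. (qpswron): fraction of mass in objects heavier than M
    return 0.5 * math.erfc(delta_c / (math.sqrt(2.0) * sigma(M)))

# As M -> 0, sigma -> infinity and erfc(0) = 1, so F -> 1/2, not 1:
F_small_M = F_unnormalised(1e-12)
F_large_M = F_unnormalised(1e4)
```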
This arises because we have not taken into account the underdense regions
correctly.
To see
the origin of this difficulty
more
clearly, consider the interpretation of
(\ref{qpswron}). If a point in space has $\delta >\delta_c$ when filtered at scale $R$, then that
point should correspond to a system with mass greater than $M(R)$; this is
taken care of
correctly by equation (\ref{qpswron}). However, consider those points which have
$\delta<\delta_c$ under this filtering. There is a {\it non-zero} probability that
such a point will have $\delta>\delta_c$ when the density field is filtered with
a radius $R_1>R$. Therefore, to be consistent with the interpretation in
(\ref{qpswron}), such points should {\it also} correspond to a region with mass
greater than $M$. But (\ref{qpswron}) ignores these points completely and thus
{\it underestimates} $F(M)$ [by a factor $(1/2)$]. To correct this, we shall `renormalise' the result by multiplying it by a factor 2. Then
\begin{equation}
dF(M)=\sqrt{{2\over\pi}}\,{\delta_c\over \sigma^2}\,
\left(-{\partial \sigma\over \partial M}\right)\exp
\left(-{\delta^2_c\over 2\sigma^2}\right)dM,
\end{equation}
or
\begin{equation}
N(M)dM=-{\overline\rho\over M}
\left({2\over\pi}\right)^{1/2}
{\delta_c\over \sigma^2}
\left({\partial\sigma\over\partial M}\right)\exp
\left(-{\delta^2_c\over 2\sigma^2}\right)dM.
\end{equation}
The quantity $\sigma$ here refers to the linearly extrapolated density contrast $\sigma_L$; the subscript $L$ is omitted to simplify notation. The corresponding result for $F(M)$ is also larger by a factor of two:
\begin{equation}
F(M,z) = {\rm erfc} \left[{\delta_c \over \sqrt 2 \sigma_L(M,z)}\right] = {\rm erfc} \left[{\delta_c (1+z) \over \sqrt 2 \sigma_0(M)}\right]\label{qfmz}
\end{equation}
where $\sigma_0(M)$ is the linearly extrapolated density contrast today and we have used the fact that $\sigma_L(M,z)\propto (1+z)^{-1}$. Note that, by definition, $F(M,z)$ gives the $\Omega$ contributed by the collapsed objects with mass larger than $M$ at redshift $z$; equation (\ref{qfmz}) shows that this can be calculated given only the linearly extrapolated $\sigma_0(M)$. The top panel of figure (\ref{figure6}) gives $\Omega(M)$ as a function of $\sigma_0(M)$ for different $z$, and the observed abundance of different structures in the universe. The six curves, from top to bottom, are for $z=0,1,2,3,4,5$. The dashed line on the $z=0$ curve gives the observed abundance of clusters; the trapezoidal region between $z=2$ and $z=3$ is based on the abundance of damped Lyman-$\alpha$ systems; the line between $z=2$ and $z=4$ is a lower bound on the quasar abundance.
\begin{figure}
\centering
\psfig{file=kishfig9.ps,width=3.5truein,height=3.0truein,angle=0}
\caption{(a) The $\Omega$ contributed by collapsed objects with mass greater than $M$ plotted against $\sigma(M)$ at different values of $z$. The curves are for $z=$ 0,1,2,3,4 and 5, from top to bottom. The constraint arising from cluster abundance at $z=0$, quasar abundance at $z=2-4$ and the abundance of damped Lyman-$\alpha$ systems at $z=2-3$ are marked. (b) The $M-\sigma$ relation in a class of CDM-like models;}
\label{figure6}
\end{figure}
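Equation (\ref{qfmz}) is simple enough to tabulate directly. The sketch below adopts, purely for illustration, a power-law $\sigma_0(M)$ with effective index $n=-1$, normalised to $\sigma_0=0.7$ at a fiducial mass (all these choices are ours); it verifies that the collapsed fraction falls with both mass and redshift, as in the figure:

```python
import math

delta_c = 1.686

def sigma0(M, M_star=1.0, sig_star=0.7, n_eff=-1.0):
    # power-law sigma(M) ~ M^{-(n+3)/6}; n_eff = -1 gives M^{-1/3}
    return sig_star * (M / M_star) ** (-(n_eff + 3.0) / 6.0)

def F(M, z):
    # eq. (qfmz): Omega contributed by objects heavier than M at redshift z
    return math.erfc(delta_c * (1.0 + z) / (math.sqrt(2.0) * sigma0(M)))

F_z0 = F(1.0, 0.0)       # fiducial mass today
F_z2 = F(1.0, 2.0)       # same mass at z = 2: much rarer
F_heavier = F(10.0, 0.0) # heavier objects today: also rarer
```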
As an example, let us consider the abundance of Abell clusters. Take the mass of Abell clusters to be $M= 5\times 10^{14 }\alpha {\rm M}_\odot$, where $\alpha$ quantifies our uncertainty in the observation. Similarly, we take the abundance to be ${\cal A} = 4\times 10^{-6} \beta h^3 {\rm Mpc}^{-3}$, with $\beta$ quantifying the corresponding uncertainty. The contribution of the Abell clusters to the density of the universe is
\begin{equation}
F = \Omega_{\rm clus} = {M{\cal A} \over \rho_c} \approx 8 \alpha\beta \times 10^{-3}.
\end{equation}
Assuming that $\alpha\beta$ varies between 0.1 and 3, say, we get
\begin{equation}
\Omega_{\rm clus} \approx \left( 8\times 10^{-4} - 2.4 \times 10^{-2}\right) .\label{qfrab}
\end{equation}
[We shall concentrate on the top curve for $z=0$, for the purpose of this example.]
The fractional abundance given in (\ref{qfrab}), at $z=0$, requires a $\sigma\approx (0.5 - 0.78)$ at the cluster scales. All we need to determine now is whether a particular model has this range of $\sigma$ for cluster scales.
Since this mass corresponds to a scale of about $8h^{-1}Mpc$, we conclude that the linearly extrapolated density contrast must be in the range $\sigma_L =(0.5 - 0.8)$ at $R= 8h^{-1} Mpc$. This can act as a strong constraint on structure formation models. [The lower panel of figure (6) translates the bounds to a specific CDM model, parametrised by a shape parameter. This illustrates how any specific model can be compared with the bound in (\ref{qfmz}); for more details, see Padmanabhan, 1996 and references cited therein.]
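The quoted range of $\sigma$ follows from inverting $F = {\rm erfc}\,[\delta_c/\sqrt{2}\sigma]$ at $z=0$. A minimal Python sketch (the bisection bracket and iteration count are our choices):

```python
import math

delta_c = 1.686

def sigma_required(F_target, lo=0.05, hi=5.0, iters=200):
    # F(sigma) = erfc(delta_c / (sqrt(2) sigma)) is monotonically
    # increasing in sigma, so a simple bisection suffices
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if math.erfc(delta_c / (math.sqrt(2.0) * mid)) < F_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sig_lo = sigma_required(8e-4)    # for Omega_clus ~ 8 x 10^-4
sig_hi = sigma_required(2.4e-2)  # for Omega_clus ~ 2.4 x 10^-2
```

This reproduces the range $\sigma \approx (0.5$--$0.75)$ quoted above for cluster scales.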
\section{Scaling laws}
Before describing more sophisticated analytic approximations to gravitational clustering, we shall briefly address some simple scaling laws which can be obtained from our knowledge of linear evolution. These scaling laws are sufficiently powerful to allow reasonable predictions regarding the growth of structures in the universe and hence are useful for quick diagnostics of a given model. We shall confine our attention to the
scaling relations for a power-law spectrum, for which
$|\delta_k|^2 \propto k^n$ and $\sigma^2_M(R) \propto R^{-(n+3)}
\propto M^{-(n+3)/3}$.
Let us begin by asking what restrictions can be put
on the index $n$.
The integrand defining $\sigma^2$ in (\ref{qsigsph}) behaves as
$k^2|\delta_k|^2 $
near $k=0.$ [Note that
$W_k \simeq 1$ for small $k$ in any
window function].
Hence the finiteness of $\sigma^2$
will require the condition $n>-3$. The
behaviour of the integrand for large values of $k$
depends on the window function $W_k$.
If we take the window function to be a Gaussian, then
the convergence is ensured for all $n$. This might
suggest that $n$ can be made as large as one wants;
that is, we can keep the power at
small $k$ (i.e.,
large
wavelengths)
to be as small as we desire. This result, however,
is not quite true for the following reason:
As the system evolves, small scale nonlinearities will
develop in the system which can actually affect the large
scales. If the large scales have too little
power intrinsically (i.e. if $n$ is large), then
the long wavelength power will soon be dominated by the
``tail'' of the short wavelength power arising from the
nonlinear clustering. This occurs because, in equation (\ref{exev}), the nonlinear terms $A_{\bld k}$ and $B_{\bld k}$ can dominate over $4 \pi G \rho_b\delta_{\bld k}$ at long wavelengths (as ${\bld k} \rightarrow 0$). Thus there will be an {\it effective}
upper bound on $n$.
The actual value of this upper-bound depends, to some extent,
on the details of the small scale physics. It is, however,
possible to argue that the {\it natural} value for this bound is
$n=4$.
The argument runs as follows: Let us suppose that a large number of
particles, each of mass $m$, are distributed carefully in space in
such a way that there is very little power at large wavelengths.
[That is, $|\delta_k|^2 \propto k^n$ with
$n \gg 4$ for small $k$]. As
time goes on, the particles influence each other
gravitationally and will start clustering. The
density $\rho({\bf x}, t)$ due to
the particles in some region will be
\begin{equation}
\rho({\bf x},t)= \sum_i m \delta [{\bf x} - {\bf x}_i(t)],
\end{equation}
where ${\bf x}_i(t)$ is the position of the i-th
particle at time $t$ and the summation is over all the particles in
some specified
region. The density contrast in the Fourier space will be
\begin{equation}
\delta_{\bf k}(t)={1\over N} \sum_i
\left( \exp [i{\bf k}.{\bf x}_i(t)]-1\right)
\end{equation}
where $N$ is the total number of
particles in the region. For small
enough $|{\bf k}|$, we can expand the right hand side in a Taylor
series obtaining
\begin{equation}
\delta_{\bf k}(t)=
i{\bf k}.\left\{ {1 \over N}
\sum_i {\bf x}_i(t) \right \}
- {k^2 \over 2}
\left \{ {1 \over N} \sum_i x^2_i(t) \right\} + \cdots .
\end{equation}
If the motion of the particles is such that the centre-of-mass
of each of the subregions under consideration does not change, then
$\sum {\bf x}_i$
will vanish; under this (reasonable) condition,
$\delta_{\bf k}(t) \propto k^2$ for small $k$.
Note that this result follows, essentially, from the
three assumptions: small-scale graininess of the
system, conservation of mass and conservation of momentum.
This will lead to a long wavelength tail with
$|\delta_k|^2 \propto k^4$
which corresponds to $n=4$. The corresponding power spectrum for gravitational potential $P_{\varphi}(k) \propto k^{-4}|\delta_k|^2$ is a constant. Thus,
for all practical purposes, $-3<n<4.$ The value $n=4$ corresponds to
$\sigma^2_M(R) \propto R^{-7} \propto M^{-7/3}.$
For comparison, note that
purely Poisson fluctuations will correspond
to $(\delta M/M)^2 \propto (1/M)$;
i.e. $\sigma^2_M(R) \propto M^{-1} \propto R^{-3}$
with an index of $n=0.$
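The Taylor-series argument above can be checked directly with a toy configuration: displace particles while holding their centre of mass fixed and verify that the induced change in $\delta_{\bf k}$ scales as $k^2$ (so that $|\delta_k|^2 \propto k^4$) at small $k$. The one-dimensional Python sketch below does this; the particle number, the displacement rule and the wavenumbers are all our choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
x = rng.uniform(0.0, 1.0, N)          # initial positions

# contract the region towards its centre of mass; the sum of the
# displacements vanishes identically, so the centre of mass is conserved
xbar = x.mean()
x_new = xbar + 0.7 * (x - xbar)

def delta_k(pos, k):
    # delta_k = (1/N) sum_i (exp(i k x_i) - 1)
    return np.mean(np.exp(1j * k * pos) - 1.0)

k1, k2 = 0.01, 0.02
d1 = delta_k(x_new, k1) - delta_k(x, k1)   # change induced by the motion
d2 = delta_k(x_new, k2) - delta_k(x, k2)

ratio = abs(d2) / abs(d1)   # should be close to (k2/k1)^2 = 4
```

Since the O($k$) term is killed exactly by momentum conservation, doubling $k$ quadruples the induced $\delta_k$, i.e. the power grows by a factor of sixteen, which is the $k^4$ tail.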
A more formal way of obtaining the $k^4$ tail is to solve equation (\ref{calgxx}) for long wavelengths; i.e. near $\bld k = 0$ (Padmanabhan, 1998). Writing $\phi_{\bld k} = \phi_{\bld k}^{(1)} + \phi_{\bld k}^{(2)} + ....$ where $\phi_{\bld k}^{(1)} = \phi_{\bld k}^{(L)}$ is the time {\it independent} gravitational potential in the linear theory and $\phi_{\bld k}^{(2)}$ is the next order correction, we get from (\ref{calgxx}), the equation
\begin{equation}
\ddot\phi_{\bld k}^{(2)}+ 4 {\dot a \over a} \dot\phi_{\bld k}^{(2)} \cong - { 1 \over 3a^2} \int {d^3 \bld p \over (2 \pi)^3} \phi^L_{{1 \over 2} \bld k + \bld p}
\phi^L_{{1 \over 2} \bld k - \bld p} {\cal G}(\bld k, \bld p)
\end{equation}
Writing $\phi_{\bld k}^{(2)} = aC_{\bld k}$ one can determine $C_{\bld k}$ from the above equation. Plugging it back, we find the lowest order correction to be,
\begin{equation}
\phi_{\bld k}^{(2)} \cong - \left( {2a \over 21H^2_0}\right) \int {d^3 \bld p \over (2 \pi)^3}\phi^L_{{1 \over 2} \bld k + \bld p}
\phi^L_{{1 \over 2} \bld k - \bld p} {\cal G}(\bld k, \bld p)
\end{equation}
Near $\bld k \simeq 0$, we have
\begin{eqnarray}
\phi_{\bld k \simeq 0}^{(2)} &\cong& - {2a \over 21H^2_0} \int {d^3 \bld p \over (2 \pi)^3}|\phi^L_{\bld p}|^2 \left[ {7 \over 8}k^2 + {3 \over 2}p^2 - {5(\bld k \cdot \bld p)^2 \over k^2} \right] \nonumber \\
&=& {a \over 126 \pi^2H_0^2} \int\limits^{\infty}_0 dp p^4 |\phi^{(L)}_{\bld p}|^2\nonumber \\
\end{eqnarray}
which is independent of $\bld k$ to the lowest order. Correspondingly the power spectrum for density $P_{\delta}(k)\propto k^4P_{\varphi} (k) \propto k^4$ in this order.
The generation of the long wavelength $k^4$ tail is easily seen in simulations if one starts with a power spectrum that is sharply peaked in $|\bld k|$. Figure (\ref{figure7}) shows the results of such a simulation (see Bagla and Padmanabhan, 1997) in which the y-axis is $[\Delta(k)/a(t)]$. In linear theory $\Delta \propto a$ and this quantity should not change. The curves labelled $a=0.12$ to $a=20.0$ show the effects of nonlinear evolution, especially the development of the $k^4$ tail.
\begin{figure}
\centering
\psfig{file=kishfig6.ps,width=3.5truein,height=3.0truein,angle=0}
\caption{The transfer of power to long wavelengths forming a $k^4$ tail is illustrated using simulation results. Power is injected in the form of a narrow peak at $L=8$ and the growth of power over and above the linear growth is shown in the figure. Note that the $y$-axis is $(\Delta/a)$ so that
there will be no change of shape under linear evolution
with $\Delta\propto a$. As time goes on, a $k^4$ tail is generated which itself evolves according to a nonlinear scaling relation discussed later on.}
\label{figure7}
\end{figure}
Some more properties of the power spectra with different
values of $n$
can be obtained if the nonlinear effects are
taken into account. We know that, in the matter-dominated phase, linear perturbations grow as
$\delta_k(t) \propto a(t) \propto t^{2/3}$.
Hence
$\sigma^2_M(R) \propto t^{4/3}R^{-(3+n)}$.
We may assume that the perturbations at some scale
$R$ becomes nonlinear when
$\sigma_M(R) \simeq 1.$ It follows that the time
$t_R$ at which
a scale $R$ becomes nonlinear, satisfies the relation
\begin{equation}
t_R \propto R^{3(n+3)/4} \propto M^{(n+3)/4}.
\end{equation}
For $n>-3$, the timescale \ $t_R$ is an
increasing function of $M$; small scales become
nonlinear at earlier times. The proper
size $L$ of the region which becomes nonlinear
is
\begin{equation}
L \propto R a(t_R) \propto Rt^{2/3}_R
\propto R^{(5+n)/2} \propto M^{(5+n)/6}. \label{qlthsc}
\end{equation}
Further, the objects which are formed
at $t=t_R$
will have density $\rho$ of the order
of the background density $\overline\rho$ of the
universe at $t_R$. Since $\overline \rho \propto t^{-2}$,
we get
\begin{equation}
\rho \propto t_R^{-2} \propto R^{-3(3+n)/2}
\propto M^{-(3+n)/2}. \label{qdesc}
\end{equation}
Combining (\ref{qlthsc}) and (\ref{qdesc}) we get $\rho \propto L^{-\beta}$
with
\begin{equation}
\beta = {3(3+n) \over (5+n)}.
\end{equation}
In the nonlinear case, one may interpret the correlation
function $\xi$
as $\xi(L) \propto \rho(L)$;
this would imply $\xi(x) \propto x^{-\beta}$. (We
shall see later that such a behaviour is to be expected on more general grounds.)
The gravitational potential due to these bodies is
\begin{equation}
\phi \simeq G \rho(L)L^2 \propto L^{(1-n)/(5+n)} \propto M^{(1-n)/6}.
\end{equation}
The same scaling, of course, can be obtained from
$\phi \propto (M/L)$.
This result shows that the binding energy of the structures
increases with $M$ for $n<1.$ In that case, the
substructures will be rapidly erased as
larger and larger structures become nonlinear.
For $n=1$, the gravitational potential is independent of the
scale, and $\rho \propto L^{-2}$.
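For quick diagnostics, the exponents derived in this section can be collected into a single helper (a sketch; the function name and packaging are ours):

```python
def scaling_exponents(n):
    """Exponents for a power-law spectrum |delta_k|^2 ~ k^n (Omega = 1)."""
    assert -3 < n < 4, "effective range of the spectral index"
    t_R_exp = (n + 3) / 4          # t_R  ~ M^{(n+3)/4}
    L_exp = (5 + n) / 6            # L    ~ M^{(5+n)/6}
    rho_exp = -(3 + n) / 2         # rho  ~ M^{-(3+n)/2}
    beta = 3 * (3 + n) / (5 + n)   # rho  ~ L^{-beta}
    phi_exp = (1 - n) / 6          # phi  ~ M^{(1-n)/6}
    return dict(t_R=t_R_exp, L=L_exp, rho=rho_exp,
                beta=beta, phi=phi_exp)

# consistency: rho ~ M^{rho_exp} and L ~ M^{L_exp} imply rho ~ L^{-beta};
# for n = 1 the potential is scale independent and rho ~ L^{-2}
e = scaling_exponents(1.0)
```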
\section{Nonlinear scaling relations}
Given an initial density contrast, one can trivially obtain the density contrast at any later epoch in the {\it linear} theory. If there is a procedure for relating the nonlinear density contrast and linear density contrast (even approximately) then one can make considerable progress in understanding nonlinear clustering. It is actually possible to make one such ansatz along the following lines.
Let $v_{{\rm rel}} (a,x)$ denote the relative pair velocities of particles separated by a distance $x$, at an epoch $a$, averaged over the entire universe. This relative velocity is a measure of gravitational clustering at the scale $x$ at the epoch $a$. Let $h(a,x)\equiv - [v_{{\rm rel}} (a,x) / \dot a x]$ denote the ratio between the relative pair velocity and the Hubble velocity at the same scale. In the extreme nonlinear limit $\left( \bar \xi \gg 1 \right)$, bound structures do not expand with Hubble flow. To maintain a stable structure, the relative pair velocity $v_{\rm rel} \left( a, x \right)$ of particles separated by $x$ should balance the Hubble velocity $Hr = \dot a x;$ hence, $v_{\rm rel} = - \dot a x $ or $h \left( a, x \right) \cong 1 $.
The behaviour of $h \left( a, x \right)$ for $\bar \xi \ll 1 $ is more complicated and can be derived as follows: Let the peculiar velocity field be $\bld v \left( \bld x \right) $ [we shall suppress the $a $ dependence since we will be working at constant $ a $]. The mean relative velocity at a separation $ \bld r = \left( \bld x - \bld y \right) $ is given by
\begin{eqnarray}
\bld v_{\rm rel}(\bld r) &\equiv& \left\langle \left[ \bld v \left( \bld x \right) - \bld v \left( \bld y \right) \right] \left[ 1 + \delta \left( \bld x \right) \right] \left[ 1 + \delta \left( \bld y \right) \right] \right \rangle \nonumber \\
& \cong & \left\langle \left[ \bld v \left( \bld x \right) - \bld v \left( \bld y \right) \right] \delta \left( \bld x \right) \right\rangle + \left\langle \left[ \bld v \left( \bld x \right) - \bld v \left( \bld y \right) \right] \delta \left( \bld y \right) \right\rangle
\end{eqnarray}
to lowest order, since the $\delta^2$ term is of higher order and $\left \langle \bld v\left( \bld x \right) - \bld v \left( \bld y \right) \right \rangle = 0 $. Denoting $\left( \bld v \left( \bld x \right) - \bld v \left( \bld y \right) \right) $ by $\bld v_{ x y }$ and writing $\bld x = \bld y + \bld r $, the radial component of the relative velocity is
\begin{equation}
\bld v_{x y} . \bld r = \int \bld v \left( \bld k \right) . \bld r \left[ e ^{i \bld k. \left( \bld r + \bld y \right)} - {\rm e}^{i\bld k . \bld y} \right] {d^3\bld k \over \left( 2 \pi \right)^3} \label{qvxyr}
\end{equation}
where $\bld v \left( \bld k \right) $ is the Fourier transform of $\bld v \left( \bld x \right)$. This quantity is related to $\delta_{\bld k} $ by
\begin{equation}
\bld v \left( \bld k \right) = iHa \left( {\delta_{\bld k} \over k^2} \right) \bld k .
\end{equation}
(This equation is same as $\bld u = -\nabla \psi$, used in (\ref{ninety}), expressed in Fourier space). Using this in (\ref{qvxyr}) and writing $\delta \left( \bld x \right), \delta \left( \bld y \right) $ in Fourier space, we find that
\begin{eqnarray}
\bld v_{xy} . \bld r \left[ \delta \left( \bld x \right) + \delta \left( \bld y \right) \right] &=& \nonumber \\
iHa \int {d^3 k \over \left( 2 \pi \right)^3} \int {d^3 p \over \left( 2 \pi \right)^3} &\left( {\bld k . \bld r \over k^2 } \right)& \delta_{\bld k } \delta^*_{\bld p} {\rm e}^{i \left( \bld k - \bld p \right) . y } \left[ {\rm e}^{i \bld k . \bld r } - 1 \right] \left[ {\rm e}^{- i \bld p . \bld r } + 1 \right] .\nonumber \\
\end{eqnarray}
We average this expression using $\langle \delta_{\bld k} \delta^*_{\bld p} \rangle = \left( 2 \pi \right)^3 \delta_D \left( \bld k - \bld p \right) P \left( k \right), $ to obtain
\begin{eqnarray}
\bld v_{\rm rel}\cdot \bld r & \equiv & \left \langle \bld v_{xy} \cdot \bld r \left[ \delta \left( \bld x \right) + \delta \left( \bld y \right) \right]\right \rangle \nonumber \\
& =& iHa \int {d^3 k \over \left( 2 \pi \right)^3} {P \left( \bld k \right) \over k^2 }\left( \bld k . \bld r \right) \left[ {\rm e}^{i \bld k . \bld r } - {\rm e}^{- i \bld k . \bld r } \right] \nonumber \\
&=& -2 Ha \int {d^3 \bld k \over \left( 2 \pi \right)^3}{P \left( \bld k \right) \over k^2 }\left( \bld k . \bld r \right) \sin \left( \bld k . \bld r \right) .\nonumber \\
\end{eqnarray}
From the symmetries in the problem, it is clear that $\bld v_{\rm rel}(\bld r)$ is in the direction of $\bld r$. So $\bld v_{\rm rel}\cdot \bld r = v_{\rm rel} r$. The angular integrations are straightforward and give
\begin{equation}
r v_{\rm rel} = \left\langle \bld v_{\bld {xy}} . \bld r \left[ \delta \left( \bld x \right) + \delta \left( \bld y \right) \right]\right\rangle = {Ha \over r \pi^2} \int\limits^{\infty}_0 {dk \over k} P \left( k \right) \left[ kr \cos kr - \sin kr \right] .
\end{equation}
Using the expression (\ref{eightythree}) for $\bar \xi \left( r \right)$ this can be written as
\begin{equation}
r v_{\rm rel}(r) = - {2 \over 3} \left( H a r^2 \right) \bar \xi .
\end{equation}
Dividing by $r$ and noting that $Hr_{{\rm prop}} = Har $, we get
\begin{equation}
h = - {v_{\rm rel} \left( r \right) \over Hr_{\rm prop}} = - {v_{\rm rel} \left( r \right) \over aHr } = {2 \over 3} \bar \xi .\label{oneseventyfour}
\end{equation}
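The linear-limit relation (\ref{oneseventyfour}) can be checked numerically for any power spectrum with a smooth cutoff, using the standard expression for $\bar\xi$ in terms of $P(k)$. The Python sketch below evaluates both integrals directly, with the illustrative choice $P(k) = k\,{\rm e}^{-k^2}$ and units in which $Ha=1$ (the spectrum and the integration cutoffs are assumptions made purely for convergence, not part of the result):

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson quadrature; n must be even."""
    step = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * step) * (4 if i % 2 else 2)
    return total * step / 3.0

def P(k):
    # Illustrative power spectrum with a smooth large-k cutoff (an assumption
    # made only so that the integrals converge quickly).
    return k * math.exp(-k * k)

def xibar(r):
    # Volume-averaged correlation function:
    #   xibar(r) = (3 / 2 pi^2 r^3) Int_0^inf (dk/k) P(k) [sin kr - kr cos kr]
    f = lambda k: P(k) / k * (math.sin(k * r) - k * r * math.cos(k * r))
    return 3.0 / (2.0 * math.pi ** 2 * r ** 3) * simpson(f, 1e-8, 30.0)

def h_linear(r):
    # h = -v_rel / (Ha r), with Ha = 1, from the pair-velocity integral:
    #   r v_rel = (Ha / pi^2 r) Int_0^inf (dk/k) P(k) [kr cos kr - sin kr]
    f = lambda k: P(k) / k * (k * r * math.cos(k * r) - math.sin(k * r))
    r_vrel = simpson(f, 1e-8, 30.0) / (math.pi ** 2 * r)
    return -r_vrel / r ** 2

for r in (0.5, 1.0, 2.0):
    print(r, h_linear(r) / xibar(r))  # ratio stays at 2/3 for every r
```

The ratio is exactly $2/3$ analytically at every separation; the numerical value differs only through the quadrature error.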
We get the important result that $h(a,x)$ depends on $(a, x)$ only through $\bar\xi(a, x)$ in the linear limit, while $h \cong 1$ in the nonlinear limit. This suggests the ansatz that $h$ depends on $a$ and $x$ only through some measure of the density contrast at the epoch $a$ at the scale $x$. As a measure of the density contrast we shall use $\bar\xi (a,x)$ itself, since the result in (\ref{oneseventyfour}) clearly singles it out. In other words, we assume that $h(a,x) = h [\bar\xi (a,x)]$.
We shall now obtain an equation connecting $ h $ and $\bar \xi$. By solving this equation, one can relate $\bar\xi $ and $\bar \xi_L $ (Nityananda and Padmanabhan, 1994).
The mean number of neighbours within a distance $x$ of any given particle is
\begin{equation}
N(x,t)=(na^3)\int^{x}_{o}4\pi y^2dy[1+\xi(y,t)]\label{qmean}
\end{equation}
where $n$ is the comoving number density. Hence the conservation law for pairs implies
\begin{equation}
{\partial\xi\over\partial t}+{1\over ax^2}{\partial\over \partial x}[x^2(1+\xi)v]=0\label{qcons}
\end{equation}
where $v(t,x)$ denotes the mean relative velocity of pairs at separation $x$ and
epoch $t$ (We have dropped the subscript `rel' for simplicity). Using
\begin{equation}
(1+\xi)={1\over 3x^2}{\partial\over \partial x}[x^3 (1+\bar{\xi})]
\end{equation}
in (\ref{qcons}), we get
\begin{equation}
{1\over 3x^2}{\partial\over \partial x}[x^3{\partial\over\partial
t}( 1+\bar{\xi})] = - {1\over ax^2}{\partial\over \partial
x}\left[ {v\over 3} {\partial\over \partial x}[x^2(1+\bar{\xi})]\right].
\end{equation}
Integrating, we find:
\begin{equation}
x^3 {\partial\over \partial
t}(1+\bar{\xi})=-{v\over a}{\partial\over \partial x}[x^3(1+\bar{\xi})].
\end{equation}
[The integration would allow the addition of an arbitrary function of $t$ on
the right hand side. We have set this function to zero so as to reproduce
the correct limiting behaviour].
It is now convenient to change the variables from $t$ to $a$, thereby
getting an equation for $\bar{\xi}$:
\begin{equation}
a{\partial\over \partial
a}[1+\bar{\xi}(a,x)]=\left({v\over -\dot{a}x}\right)
{1\over x^2}{\partial\over \partial x}[x^3(1+\bar{\xi}(a,x))]
\label{qlim}
\end{equation}
or, defining $ h(a,x) = - (v/\dot{a}x)$
\begin{equation}
\left({\partial\over \partial \ln a}-h{\partial\over \partial \ln x}\right)\,\,\, (1+\bar{\xi})=3h \left( 1+\bar\xi\right).\label{qhsi}
\end{equation}
This equation shows that the behaviour of $\bar{\xi}(a,x)$ is essentially
decided by $h$, the dimensionless ratio between the mean relative velocity $v$ and the Hubble velocity
$\dot{a}x=(\dot{a}/a)x_{\rm{prop}}$, both evaluated at scale $x$.
We shall now assume that
\begin{equation}
h(x,a) = h[\bar{\xi}(x,a)].\label{qloc}
\end{equation}
This assumption, of course, is consistent with the extreme linear limit
$h=(2/3)\bar{\xi}$ and the extreme nonlinear limit $h=1$.
When $h(x,a)=h[\bar{\xi}(x,a)]$, it is possible to find a solution to
(\ref{qhsi})
which reduces to the form $\bar{\xi}\propto a^2$ for $\bar{\xi} \ll 1$ as follows: Let $A=\ln a,\,\,X=\ln x$ and $D(X,A) = (1+\bar{\xi})$. We define
curves (``characteristics'') in the $X,\, A,\, D$ \ space which satisfy
\begin{equation}
\left.{dX\over dA}\right|_c = -h[D[X,A]]\label{qcharc}
\end{equation}
{\it i.e.,} the tangent to the curve at any point ($X, A, D$) is
constrained by the value of $h$ at that point. Along this curve, the left hand side of
(\ref{qhsi}) is a total derivative allowing us to write it as
\begin{equation}
\left. \left( {\partial D\over \partial A}-h(D){\partial D\over \partial
X}\right)_c = \left( {\partial D\over \partial A}+{\partial
D\over \partial X}{dX\over dA} \right)_c \equiv {dD\over dA}\right|_c = 3hD.\label{qdvar}
\end{equation}
This determines the variation of $D$ along the curve. Integrating
\begin{equation}
\exp \left( {1\over 3}\int {dD\over Dh(D)}\right) = \exp (A+c)\propto a.\label{qvard}
\end{equation}
Squaring, and fixing the constant from the initial conditions at $a_0$ in the linear regime, we get
\begin{equation}
\exp \left( {2\over 3}\int^{\bar{\xi}(a,x)}_{\bar{\xi}(a_{0},l)}
{d\bar\xi\over h(\bar{\xi})(1+\bar{\xi})}\right) =
{a^2\over a^{2}_{0}}={\bar{\xi}_L(a,l)\over \bar{\xi}_L(a_{0},l)}.\label{qcur}
\end{equation}
We now need to relate the scales $x$ and $l$. Equation~(\ref{qcharc}) can be written, using
equation (\ref{qdvar}) as
\begin{equation}
{dX\over dA}=-h={1\over 3D}{dD\over dA}\label{qdx}
\end{equation}
giving
\begin{equation}
3X+\ln D=\ln [x^3(1+\bar{\xi})]={\rm constant}.\label{qmid}
\end{equation}
Using the initial condition in the linear regime,
\begin{equation}
x^3(1+\bar{\xi})=l^3.\label{qini}
\end{equation}
This shows that $\bar{\xi}_{L}$ should be evaluated at
$l=x(1+\bar{\xi})^{1/3}$. It can be checked directly that (\ref{qini}) and (\ref{qcur})
satisfy (\ref{qhsi}). The final result can, therefore, be summarized by the equation (equivalent to
(\ref{qcur}) and (\ref{qini}))
\begin{equation}
\bar{\xi}_L(a,l)=\exp \left(
{2\over 3}\int^{\bar{\xi}(a,x)}{d\mu\over h(\mu)(1+\mu)}\right);\quad l=x(1+\bar{\xi}(a,x))^{1/3}. \label{xibarint}
\end{equation}
Given the function $h(\bar{\xi})$, this relates $\bar{\xi}_{L}$ and
$\bar{\xi}$ or --- equivalently --- gives the mapping $\bar\xi(a,x)=U[\bar\xi_L(a,l)]$ between the nonlinear and linear correlation functions evaluated at different scales $x$ and $l$. The lower limit of the integral is chosen to give $\ln \bar \xi$ for small values of $\bar \xi$ in the linear regime. It may be mentioned that equation (\ref{qdx}) and its integral (\ref{qini}) are independent of the ansatz $h(a,x) = h[\bar\xi (a,x)]$.
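To see the machinery of (\ref{xibarint}) at work, one can try a toy interpolating ansatz $h(\mu) = 2\mu/(3+2\mu)$, which has the correct limits $h \to (2/3)\mu$ for $\mu \ll 1$ and $h \to 1$ for $\mu \gg 1$ but is otherwise our invention. For this particular choice the integral can even be done in closed form, giving $\bar\xi_L = \bar\xi\,(1+\bar\xi)^{-1/3}$; the numerical quadrature in the sketch below reproduces it:

```python
import math

def h_toy(mu):
    # Toy ansatz interpolating between h = (2/3) mu (linear limit) and
    # h -> 1 (stable clustering); illustrative only, not derived from data.
    return 2.0 * mu / (3.0 + 2.0 * mu)

def xibar_linear(xibar, n=100000):
    # Evaluates xibar_L = exp[(2/3) Int^xibar dmu / (h(mu)(1 + mu))], with the
    # lower limit fixed so that xibar_L -> xibar for small xibar.  Rewritten as
    #   xibar * exp[(2/3) Int_0^xibar (1/(h(1+mu)) - 3/(2 mu)) dmu],
    # the integrand is finite at mu = 0; we integrate by the midpoint rule.
    g = lambda mu: 1.0 / (h_toy(mu) * (1.0 + mu)) - 3.0 / (2.0 * mu)
    step = xibar / n
    total = sum(g((i + 0.5) * step) for i in range(n)) * step
    return xibar * math.exp(2.0 / 3.0 * total)

for xb in (0.01, 1.0, 100.0):
    closed_form = xb * (1.0 + xb) ** (-1.0 / 3.0)
    print(xb, xibar_linear(xb), closed_form)
```

Note the limiting behaviour: $\bar\xi \approx \bar\xi_L$ for $\bar\xi \ll 1$, while $\bar\xi \propto \bar\xi_L^{3/2}$ for $\bar\xi \gg 1$, as required of any acceptable $h(\bar\xi)$.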
The following points need to be stressed regarding this result: (i) Among all statistical indicators, it is {\it only} $\bar\xi$ which obeys a nonlinear scaling relation (NSR) of the form $\bar\xi_{\rm NL} (a, x) = U\left[ \bar\xi_L(a,l) \right]$. Attempts to write similar relations for $\xi$ or $P(k)$ have no fundamental justification. (ii) The nonlocality of the relation represents the transfer of power in gravitational clustering and cannot be ignored --- or approximated by a local relation between $\bar\xi_{NL}(a,x)$ and $\bar\xi_L(a,x)$.
Given the form of $h(\bar\xi)$, equation (\ref{xibarint}) determines the relation $\bar\xi= U[\bar\xi_L]$. It is, however, easier to determine the form of $U$ directly, along the following lines (Padmanabhan, 1996a): In the linear regime $\left( \bar\xi \ll 1, \ \bar\xi_L \ll 1\right)$ we clearly have $U(\bar\xi_L) \simeq \bar\xi_L$. To determine its form in the quasilinear regime,
consider a region surrounding a density peak in the linear stage,
around which we expect the clustering to take place. From the definition of $\bar\xi$ it follows that the density profile around this peak
can be described by
\begin{equation}
\rho(x)\approx\rho_{bg}[1+\xi(x)]
\end{equation}
Hence the initial mean density contrast scales with the initial shell
radius $l$ as $\bar\delta_i
(l)\propto\bar\xi_L(l)$ in the initial epoch, when linear theory
is valid. This shell will expand to a maximum radius of $x_{max}
\propto l/\bar\delta_i\propto l/\bar\xi_L(l)$. In scale-invariant,
radial collapse models,
each shell may be approximated as contributing an effective scale
which is proportional to $x_{max}$. Taking
the final effective radius $x$ as proportional to $x_{max}$, the final
mean correlation function will
be
\begin{equation}
\bar\xi_{QL}(x)\propto \rho\propto {M\over x^3}
\propto {l^3\over (l/\bar\xi_L(l))^3}\propto
\bar\xi_L(l)^3
\end{equation}
That is, the final mean correlation function in the quasilinear regime, $\bar\xi_{QL}$, at $x$ is the cube of
the initial correlation function at $l$, where $l^3\propto x^3
\bar\xi_L^3\propto x^3\bar\xi_{QL}(x).$
{\it Note that we did not assume that
the initial power
spectrum is a power law to get this result.}
In case the initial power spectrum {\it is} a power law, with
$\bar\xi_{L}\propto x^{-(n+3)}$, then we immediately find that
\begin{equation}
\bar\xi_{QL}\propto x^{-3(n+3)/(n+4)}\label{qlndep}
\end{equation}
[If the
correlation function in linear theory has the power law form $\bar\xi_{L}
\propto x^{-\alpha}$ then the process described above changes the index
from $\alpha$ to $3\alpha/(1+\alpha)$. We shall comment more
about this aspect later]. For the power law case, the
same result can be obtained by more explicit means. For
example, in power law models the energy of spherical shell with mean density $\bar\delta(x_i) \propto x_i^{-b}$ will scale
with its radius as
$E\propto G \delta M(x_i)/x_i \propto G \bar\delta x^2_i \propto x_i^{2-b}$. Since $M\propto x_i^3$, it follows that the
maximum radius reached by the shell scales as $x_{max}\propto
(M/E)\propto x_i^{1+b}$. Taking the effective radius as
$x=x_{eff}\propto x_i^{1+b}$, the final density scales as
\begin{equation}
\rho\propto {M\over x^3}\propto {x_i^3\over x_i^{3(1+b)}}
\propto x_i^{-3b}\propto x^{-3b/(1+b)}\label{basres}
\end{equation}
In this quasilinear regime, $\bar\xi$ will scale like the density and we get
$\bar\xi_{QL}\propto x^{-3b/(1+b)}$.
The index $b$ can be related to
$n$ by assuming that the evolution starts at a moment when linear
theory is valid. Since the gravitational potential energy [or the kinetic
energy] scales as $E\propto x_i^{-(n+1)}$ in the linear theory, it
follows that $b=n+3$. This
leads to the correlation function in the quasilinear regime, given by (\ref{qlndep}) .
If $\Omega=1 $ and the initial spectrum is a power law, then there is
no intrinsic scale in the problem.
It follows that the evolution has to be self similar and
$\bar\xi$ can only depend on the combination $q=xa^{-2/(n+3)}$. This allows us to
determine the $a$ dependence of $\bar\xi_{QL}$ by substituting $q$
for $x$ in (\ref{qlndep}). We find
\begin{equation}
\bar\xi_{QL}(a,x)\propto a^{6/(n+4)}x^{-3(n+3)/(n+4)}\label{qlax}
\end{equation}
We know that, in the linear regime, $\bar\xi = \bar\xi_L \propto a^2$. Equation
(\ref{qlax}) shows that, in the quasilinear regime, $\bar\xi = \bar\xi_{QL} \propto a^{6/(n+4)}$. Spectra with $n < -1 $ grow faster than $a^2$, spectra with $n > -1 $ grow slower than $a^2$ and $n = -1 $ spectrum grows as $a^2$. Direct algebra shows that
\begin{equation}
\bar\xi_{QL}(a,x)\propto [\bar\xi_{L}(a,l)]^3\label{qlscal}
\end{equation}
reconfirming the local dependence in $a$ and nonlocal dependence
in spatial coordinate.
This result has no trace of the original assumptions [spherical evolution,
scale-invariant spectrum, ...] left in it and hence one
would strongly suspect that it will have far more general validity.
Let us now proceed to the fully nonlinear regime. If we ignore the
effect of mergers, then it seems reasonable that virialised systems
should maintain their densities and sizes in proper coordinates, i.e.
the clustering should be ``stable". This
would require the correlation function to have the form $\bar\xi_{NL}
(a,x)=a^3F(ax)$. [The factor $a^3$ arising from the decrease in
background density].
From our previous analysis we expect this to be a function of
$\bar\xi_L(a,l)$ where $l^3\approx x^3\bar\xi_{NL}(a,x)$. Let us write
this relation as
\begin{equation}
\bar\xi_{NL}(a,x)=a^3F(ax)=U[\bar\xi_L(a,l)]\label{qtr}
\end{equation}
where $U[z]$ is an unknown function of its argument which needs
to be determined. Since linear correlation function evolves as
$a^2$ we know that we can write $\bar\xi_L(a,l)=a^2Q[l^3]$
where $Q$ is some known function of its argument. [We are using
$l^3$ rather than $l$ in defining this function just for future
convenience of notation]. In our case $l^3=x^3\bar\xi_{NL}(a,x)
=(ax)^3F(ax)=r^3F(r)$ where we have changed variables from
$(a,x)$ to $(a,r)$ with $r=ax$. Equation (\ref{qtr}) now reads
\begin{equation}
a^3F(r)=U[\bar\xi_L(a,l)]=U[a^2Q[l^3]]=U[a^2Q[r^3F(r)]]
\end{equation}
Consider this relation as a function of $a$ at constant $r$. Clearly
we need to satisfy $U[c_1 a^2]=c_2a^3$ where $c_1$
and $c_2$ are constants. Hence we must have
\begin{equation}
U[z]\propto z^{3/2}.
\end{equation}
Thus in the extreme nonlinear end we should have
\begin{equation}
\bar\xi_{NL}(a,x)\propto [\bar\xi_{L}(a,l)]^{3/2}\label{qnlscl}
\end{equation}
[Another way of deriving this result is to note that if $\bar\xi=
a^3F(ax)$, then $h=1$. Integrating (\ref{xibarint}) with appropriate boundary
condition leads to (\ref{qnlscl}).~]
Once again we did not need to invoke the assumption that the
spectrum is a power law. {\it If} it is a power law, then we get,
\begin{equation}
\bar{\xi}_{NL}(a,x)\propto a^{(3-\gamma)}x^{-\gamma};\qquad
\gamma={3(n+3)\over (n+5)}
\end{equation}
This result is based on the assumption of ``stable clustering" and
was originally derived by Peebles (Peebles, 1980). It can be directly
verified that the right hand side of this equation can be expressed in
terms of $q$ alone, as we would have expected.
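For power-law spectra, the quasilinear and stable-clustering slopes are simple functions of the linear index $n$; the short sketch below merely tabulates the two formulas derived above:

```python
def gamma_quasilinear(n):
    # Slope of xibar_QL ~ x^-gamma for an initial spectrum P(k) ~ k^n.
    return 3.0 * (n + 3.0) / (n + 4.0)

def gamma_stable(n):
    # Slope in the stable-clustering regime (Peebles).
    return 3.0 * (n + 3.0) / (n + 5.0)

print(" n   quasilinear   stable clustering")
for n in (-2, -1, 0, 1):
    print(f"{n:2d}     {gamma_quasilinear(n):.3f}          {gamma_stable(n):.3f}")
```

Both slopes retain an explicit memory of $n$, which is the point being made in the text.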
Putting all our results together, we find that the nonlinear mean
correlation function can be expressed in terms of the linear mean
correlation function by the relation:
\begin{equation}
\bar \xi (a,x)=\cases{\bar \xi_L (a,l)&(for\ $\bar \xi_L<1, \, \bar
\xi<1$)\cr
{\bar \xi_L(a,l)}^3 &(for\ $1<\bar \xi_L<5.85, \, 1<\bar \xi<200$)\cr
14.14 {\bar \xi_L(a,l)}^{3/2} &(for\ $5.85<\bar\xi_L, \, 200<\bar
\xi$)\cr}\label{hamilton}
\end{equation}
The numerical coefficients have been determined by continuity
arguments. We have assumed the linear result to be valid up to
$\bar\xi=1$ and the
virialisation to occur at $\bar\xi\approx 200$,
which is the result arising from the spherical model. The {\it exact} values of
the numerical coefficients can be obtained only from simulations.
The true test of such a model, of course, is N-body simulations and
remarkably enough, simulations are very well represented by relations
of the above form. The simulation data for CDM, for example,
is well fitted by (Padmanabhan et al., 1996):
\begin{equation}
\bar \xi(a,x)=\cases{\bar \xi_L(a,l) &(for\ $\bar \xi_L<1.2, \, \bar
\xi<1.2$)\cr
{\bar \xi_L(a,l)}^3 &(for\ $1<\bar \xi_L<5, \, 1<\bar \xi<125$)\cr
11.7 {\bar \xi_L(a,l)}^{3/2} &(for\ $5<\bar\xi_L, 125<\bar
\xi$)\cr}\label{qbagh}
\end{equation}
which is fairly close to the theoretical prediction.
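Both piecewise mappings are trivial to use in practice. The sketch below transcribes (\ref{hamilton}) and (\ref{qbagh}) directly (remember that the argument is $\bar\xi_L$ evaluated at $l = x(1+\bar\xi)^{1/3}$, not at $x$; since the quoted ranges of the CDM fit overlap slightly near $\bar\xi_L \sim 1$, we switch branches at 1.2):

```python
def nsr_theory(xibar_L):
    # Theoretical mapping, coefficients fixed by continuity:
    # 5.85^3 ~ 200 and 14.14 * 5.85^1.5 ~ 200, so the branches join smoothly.
    if xibar_L < 1.0:
        return xibar_L
    if xibar_L < 5.85:
        return xibar_L ** 3
    return 14.14 * xibar_L ** 1.5

def nsr_cdm_fit(xibar_L):
    # Fit to CDM simulations (Padmanabhan et al., 1996).
    if xibar_L < 1.2:
        return xibar_L
    if xibar_L < 5.0:
        return xibar_L ** 3
    return 11.7 * xibar_L ** 1.5

for xl in (0.5, 3.0, 10.0):
    print(xl, nsr_theory(xl), nsr_cdm_fit(xl))
```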
[The fact that numerical simulations show a correlation between
$\bar\xi(a,x)$ and $\bar\xi_L(a,l)$ was originally pointed out
by Hamilton et al. (1991) who, however, tried to give a multiparameter
fit to the data. This fit has somewhat obscured
the simple physical interpretation of the result, though it has the virtue
of being accurate for numerical work.]
A comparison of (\ref{hamilton}) and (\ref{qbagh}) shows that the
physical processes
which operate at different scales are well represented by our model.
In other words, the processes described in the quasilinear and nonlinear
regimes for an {\it individual} lump still models the {\it average}
behaviour of
the universe in a statistical sense. It must be emphasized that the key
point is the ``flow of information" from $l$ to $x$ which is an exact
result. Only when the results of the specific model are recast in
terms of suitably chosen variables do we get a relation which is of general
validity. It would have been, for example, incorrect to use the spherical
model to obtain a relation between linear and nonlinear densities at
the same location, or to model the function $h$.
It may be noted that to obtain the result in the nonlinear regime,
we needed to invoke the assumption of stable clustering which has
not been deduced from any fundamental considerations. In case
mergers of structures are important, one would consider this
assumption to be suspect (see Padmanabhan et al., 1996). We can,
however, generalise the above
argument in the following manner: If the virialised systems have
reached stationarity in the statistical sense, the function $h$
--- which is the ratio between two velocities --- should reach some
constant value. In that case, one can integrate (\ref{xibarint}) and
obtain the result $\bar\xi_{NL}=a^{3h}F(a^hx)$ where $h$ now denotes the asymptotic value. A similar argument
will now show that
\begin{equation}
\bar\xi_{NL}(a,x)\propto [\bar\xi_{L}(a,l)]^{3h/2}\label{qnlscl2}
\end{equation}
in the general case. For the power law spectra, one would get
\begin{equation}
\bar{\xi}(a,x)\propto a^{(3-\gamma)h}x^{-\gamma};\qquad
\gamma={3h(n+3)\over 2+h(n+3)}
\end{equation}
Simulations are not accurate enough to fix the value of $h$; in
particular, the asymptotic value of $h$ could depend on $n$
within the accuracy of the simulations. It may be possible to
determine this dependence by modelling mergers in some simplified form.
If $h = 1$ asymptotically, the correlation function in the extreme
nonlinear end depends on the linear index $n$. One may feel that
physics at highly nonlinear end should be independent of the linear
spectral index $n$. This will be the case if the asymptotic value of
$h$ satisfies the scaling
\begin{equation}
h = {3c \over n+3}
\end{equation}
in the nonlinear end with some constant $c$. Only high resolution
numerical simulations can test this conjecture that $h(n + 3 ) = {\rm
constant}$.
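That the scaling $h \propto (n+3)^{-1}$ indeed erases the $n$ dependence of the slope is a two-line check; the sketch below uses the general formula $\gamma = 3h(n+3)/[2+h(n+3)]$ with $h = 3c/(n+3)$ (the value $c=1$ is purely illustrative, since nothing in the argument fixes it):

```python
def gamma_general(n, h):
    # gamma = 3 h (n+3) / (2 + h (n+3)) for asymptotic h and linear index n.
    return 3.0 * h * (n + 3.0) / (2.0 + h * (n + 3.0))

c = 1.0  # hypothetical constant; its value is not fixed by the argument
slopes = [gamma_general(n, 3.0 * c / (n + 3.0)) for n in (-2, -1, 0, 1, 2)]
print(slopes)  # every entry equals 9c/(2+3c), independent of n
```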
It is possible to obtain similar relations between $\bar\xi(a, x)$ and
$\bar\xi_L (a, l) $ in two dimensions as well by repeating the above analysis (see Padmanabhan, 1997). In 2-D the scaling
relations turn out to be
\begin{equation}
\bar \xi (a,x)\propto \cases{\bar \xi_L (a,l)&({\rm Linear}) \cr
\bar\xi_L(a,l)^2 &({\rm Quasi-linear})\cr
\bar\xi_L(a,l)^{h/2} &({\rm Nonlinear}) \cr}
\end{equation}
where $h$ again denotes the asymptotic value. For a power law spectrum the nonlinear correlation function will be
$\bar\xi_{NL} (a, x) = a^{2 - \gamma} x^{-\gamma} $ with $\gamma = 2
(n + 2) / (n + 4)$.
If we generalize the concept of stable clustering to mean constancy of
$h$ in the nonlinear epoch, then the correlation function will behave
as $\bar\xi_{NL} (a, x) = a^{2h}F(a^hx)$. In this case, if the
spectrum is a power law then the nonlinear and linear indices are
related by
\begin{equation}
\gamma = {2h (n + 2) \over 2 + h (n + 2)}
\end{equation}
All the features discussed in the case of 3 dimensions are present
here as well. For example, if the asymptotic value of $h$ scales with
$n$ such that $h (n + 2 )= {\rm constant}$ then the nonlinear index
will be independent of the linear index. Figure (8) shows the results of numerical simulations in 2D, which suggest that $h= 3/2$ asymptotically (Bagla et al., 1998).
We shall now consider some applications and further generalisations of these nonlinear scaling relations.
The ideas presented here can be generalised in two obvious directions
(see Munshi and Padmanabhan, 1997): (i) By considering peaks of
different heights, drawn from an initial gaussian random field,
and averaging over the probability distribution one can obtain
a more precise NSR. (ii) By using a generalised ansatz for
higher order correlation functions, one can attempt to compute
the $S_N$ parameters in the quasilinear and nonlinear regimes. We
shall briefly comment on the results of these two generalisations.
\begin{figure}
\centering
\psfig{file=kishfig1.ps,width=3.5truein,height=3.0truein,angle=0}
\caption{The comparison between theory and simulations in 2D.}
\label{figure8}
\end{figure}
(i) The basic idea behind the model used to obtain the NSR can be described
as follows: Consider the evolution of density perturbations starting from
an initial configuration, which is taken to be a realisation of a Gaussian
random field with variance $\sigma$. A region with initial density contrast $\delta_i$ will expand
to a maximum radius $x_f = x_i/ \delta_i$ and will contribute to the
two-point correlation function an amount proportional to $(x_i/x_f)^3 =
\delta_i^3$. The initial density contrast within a
{\it randomly} placed
sphere of radius $x_i$ will be $ \nu \sigma (x_i)$ with a probability
proportional to $\exp (-\nu^2/2)$. On the other hand, the initial density
contrast within a sphere of radius $x_i$, {\it centered around a peak in the
density field} will be proportional to the two-point correlation function
and will be $\nu^2 \bar\xi (x_i)$ with a probability proportional to $\exp (-\nu^2/2)$. It follows that the contribution from a typical region will
scale as $ \bar \xi
\propto \bar \xi_i^{3/2}$ while that from higher peaks will scale as $\bar \xi
\propto \bar \xi_i^3$. In the quasilinear phase, the dominant contribution
arises from high peaks and we find the scaling to be $\bar \xi_{QL}
\propto \bar \xi_i^3$. The non-linear, virialized, regime is dominated by
contribution from several typical initial regions and has the scaling
$\bar \xi_{NL}
\propto \bar \xi_i^{3/2}$. This was essentially the result obtained above, except that we took $\nu = 1$.
To take into account the statistical fluctuations of the initial Gaussian
field we can average over different $\nu$ with a Gaussian probability
distribution.
Such an analysis leads to the following result. The relationship between
$\bar \xi(a,x)$ and $\bar \xi_{L}(a,l)$ becomes
\begin{equation}
\bar \xi (a,x) = A\left[ \bar \xi_{L} (a,l)\right]^{3h/2}; A = \left( {2\over \lambda} \right)^{3h \over 2} \left[ {\Gamma\left({\alpha + 1\over 2}\right) \over 2\sqrt{\pi}}\right]^{3h/ \alpha} \label{e1}
\end{equation}
where
\begin{equation}
\alpha = {6h\over 2+h(n+3)} \label{e2}
\end{equation}
and $\lambda \approx 0.5$ is the ratio between the final virialized radius and the radius at turn-around. In our model, $h=2$ in the quasi-linear regime, and $h=1$ in the non-linear regime. However, the above result holds for
any other value of $h$. Equation (\ref{e1}) shows that the scaling relations
(\ref{hamilton}) acquire coefficients which depend on the spectral index
$n$ when we average over peaks of different heights. (Mo etal., 1995; Munshi and Padmanabhan, 1997).
(ii) In attempting to generalize our results to higher order correlation functions,
it is important to keep the following aspect in mind. The $N$th order correlation function will involve $N-1$ different length scales.
To make progress, one needs to assume that, although there are
different length scales present in the reduced $N$-point correlation function,
all of them have to be roughly of the same order to give a significant contribution. If the correlation functions are described
by a single scale, then a natural generalisation will be
\begin{equation}
\bar \xi_N \approx {\langle x_i^{3(N-1)} \rangle \over x^{3(N-1)}}
\end{equation}
Given such an ansatz for the $N$ point correlation function, one can compute
the $S_N$ coefficients defined by the relation $S_N \equiv \bar \xi_N / \bar \xi_2^{N-1}$ in a straightforward manner. We find that
\begin{equation}
S_N = \left( 4\pi \right)^{(N-2)/2} {\Gamma \left( {\alpha (N-1) +1 \over 2} \right) \over \left[ \Gamma \left({\alpha +1\over 2}\right) \right]^{N-1}}
\label{sn}
\end{equation}
where $\alpha$ is defined in equation (\ref{e2}). Given the function $h(\bar \xi)$, this equation allows one to compute (approximately) the value of
$S_N$ parameters in the quasi-linear and non-linear regimes. In our model
$h =2$ in the quasi-linear regime and $h =1$ in the non-linear regime. The
numerical values of $S_N$ computed for different power spectra agrees
reasonably well with simulation results. (For more details, see Munshi and
Padmanabhan, 1997.)
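Equation (\ref{sn}) is elementary to evaluate with a Gamma function. The sketch below computes $S_3$ for a few spectral indices in the two regimes of our model ($h=2$ quasi-linear, $h=1$ non-linear); note that $S_2 = 1$ identically, which provides a built-in check:

```python
from math import gamma, pi

def alpha(n, h):
    # Equation (e2): alpha = 6h / (2 + h(n+3)).
    return 6.0 * h / (2.0 + h * (n + 3.0))

def S_N(N, n, h):
    # Equation (sn):
    # S_N = (4 pi)^{(N-2)/2} Gamma((alpha(N-1)+1)/2) / Gamma((alpha+1)/2)^{N-1}
    a = alpha(n, h)
    return ((4.0 * pi) ** ((N - 2) / 2.0)
            * gamma((a * (N - 1) + 1.0) / 2.0)
            / gamma((a + 1.0) / 2.0) ** (N - 1))

for n in (-2, -1, 0):
    print(n, S_N(3, n, 2.0), S_N(3, n, 1.0))  # quasi-linear vs non-linear S_3
```

For $n=-1$, $h=2$ one finds $\alpha = 2$ and $S_3 = 6$ exactly, which is a convenient analytic cross-check.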
\section{NSR and halo profiles}
Now that we have a NSR giving $\bar\xi(a,x)$ in terms of $\bar\xi_L(a,l)$
we can ask the question:
How does the gravitational clustering proceed at highly nonlinear scales or, equivalently, at any
given scale at large $a$ ?
\par
To begin with, it is easy to see that we must have $v=-\dot a x$ or $h=1$ for
sufficiently large $\bar\xi(a,x)$ {\it if we assume} that the
evolution gets frozen in proper coordinates at highly nonlinear scales.
Integrating equation (\ref{xibarint}) with $h=1$, we get $\bar\xi(a,x)=a^3 F(ax)$; this is the phenomenon we called ``stable clustering''. There are two points
which need to be emphasised about stable clustering:
\par
(1) At present, there exists some evidence
from simulations (see Padmanabhan et al., 1996) that
stable clustering does {\it not} occur in an $\Omega=1$ model. In a {\it formal} sense, numerical simulations cannot disprove [or
even prove, strictly speaking] the occurrence of stable clustering, because of the finite dynamic
range of any simulation.
(2). Theoretically speaking, the ``naturalness'' of stable clustering is
often overstated. The usual argument is based on the assumption that
at very small scales --- corresponding to high nonlinearities --- the structures
are ``expected to be'' frozen at the proper coordinates. However, this argument does not
take into account the fact that mergers are not negligible at {\it any scale} in
an $\Omega=1$ universe. In fact, stable clustering
is more likely to be valid in models with $\Omega<1$ --- a claim which seems to
be again supported by simulations (see Padmanabhan et al., 1996).
{\it If} stable clustering {\it is} valid, then the late time behaviour of $\bar\xi(a,x)$
{\it cannot}
be independent of initial conditions. In other words the two requirements:
(i) validity of stable clustering at highly nonlinear scales and
(ii) the independence of late time behaviour from initial conditions,
{\it are mutually
exclusive}. This is most easily seen for initial power spectra which
are scale-free. If $P_{in}(k)\propto k^n$ so that $\bar\xi_L(a,x)\propto a^2
x^{-(n+3)}$, then it is
easy to show that $\bar\xi(a,x)$ at nonlinear scales will vary as
\begin{equation}
\bar\xi(a,x) \propto a^{\frac{6}{n+5}} x^{-\frac{3(n+3)}{n+5}};\qquad (\bar\xi
\gg 200)
\end{equation}
if stable clustering is true. Clearly, the power law index in the nonlinear
regime ``remembers''
the initial index. The same result holds for more general initial conditions.
What does this result imply for the profiles of individual halos?
To answer this question, let us start with the simple assumption that the density field $\rho(a,{\bf x})$ at late stages can
be expressed as a superposition
of several halos, each with some density profile; that is, we take
\begin{equation}
\label{haloes}
\rho(a,{\bf x})=\sum_{i} f({\bf x}-{\bf x}_i,a)
\end{equation}
where the $i$-th halo is centered at ${\bf x}_i$ and contributes
an amount $f({\bf x}-{\bf x}_i,a)$ at the location ${\bf x}$ [We can easily generalise this equation to the situation in which there are halos with
different properties, like core radius, mass etc by summing over the number
density of objects with particular properties; we shall not bother to
do this. At the other extreme, the exact description merely corresponds to taking
the $f$'s to be Dirac delta functions. Hence there is no loss of generality in (\ref{haloes})]. The power spectrum for the
density contrast, $\delta(a,{\bf x})=(\rho/\rho_b-1)$, corresponding to the
$\rho(a,{\bf x})$ in (\ref{haloes}) can be expressed as
\begin{eqnarray}
\label{powcen}
P({\bf k},a) &\propto& \left( a^3 \left| f({\bf k},a)\right| \right)^2 \left|
\sum_i \exp \left( -i {\bf k}\cdot{\bf x}_i(a)\right) \right|^2 \\
\label{powcen1}
& \propto & \left( a^3 \left| f({\bf k},a)\right| \right)^2 P_{\rm cent}({\bf
k},a)
\end{eqnarray}
where $P_{\rm cent}({\bf k},a)$
denotes the power spectrum of the distribution of centers of the halos.
\par
If stable clustering is valid, then the density profiles of halos are
frozen in proper coordinates and we will have $f({\bf x} -{\bf x}_i,a)=
f(a\:({\bf x}-{\bf x}_i))$;
hence the Fourier transform will have the form $f({\bf k},a)=a^{-3}\;f({\bf k}/a)$. On
the other
hand, the power spectrum at scales which participate in stable clustering
must satisfy $P({\bf k},a)=P({\bf k}/a)$ [This is merely the requirement
$\bar\xi(a,x)=a^3F(ax)$
re-expressed in Fourier space]. From equation (\ref{powcen1}) it follows that we must have
$P_{\rm cent}({\bf k},a)=P_{\rm cent}({\bf k}/a) $. We can, however, take $P_{\rm cent}={\rm constant}$ at sufficiently small scales. This is because we must {\it necessarily} have $P_{\rm cent} \approx {\rm constant}$ (by definition) for length scales smaller than the typical halo size, since there we are essentially probing the interior of a single halo.
We can relate the halo profile to the correlation function
using (\ref{powcen1}). In particular, if the halo profile is a power law with
$f\propto r^{-\epsilon}$, it
follows that the $\bar\xi(a,x)$ scales as $x^{-\gamma}$ [see also McClelland and Silk, 1977] where
\begin{equation}
\label{gammep}
\gamma=2\epsilon-3
\end{equation}
Now if the {\it correlation function} scales as $x^{[-3(n+3)/(n+5)]}$, then
we see that
the halo density profiles should be related to the initial power law
index through the relation
\begin{equation}
\epsilon=\frac{3(n+4)}{n+5}
\end{equation}
So clearly,
the halos of
highly virialised systems still ``remember'' the initial power
spectrum.
\par
Alternatively, without taking the help of the stable clustering hypothesis, one can try to ``reason out'' the profiles of the individual
halos and use it to obtain the scaling relation for correlation functions.
One of the favourite arguments used by cosmologists to obtain such a ``reasonable'' halo profile is based on spherical, scale-invariant
collapse. It turns out
that one can provide a series of arguments, based on spherical collapse, to
show that --- under certain circumstances --- the {\it density profiles} at the
nonlinear end scale as $x^{[-3(n+3)/(n+5)]}$. The simplest variant of this argument
runs as follows: If we start with an initial density
profile which is $r^{-\alpha}$, then scale invariant spherical collapse
will lead to a profile which goes as $r^{-\beta}$ with $\beta=3\alpha/
(1+\alpha)$ [see e.g., Padmanabhan, 1996a]. Taking the initial slope
as $\alpha=(n+3)/2$ will immediately give $\beta=3(n+3)/(n+5)$. [Our definition of stable clustering in the last section
is based on the scaling of
the correlation function and gave the
slope of $[-3(n+3)/(n+5)]$ for the {\it correlation} function. The spherical
collapse gives the same slope for {\it halo profiles}.] In this case, when the halos have the slope of $\epsilon=3(n+3)/(n+5)$,
then the correlation function should have slope
\begin{equation}
\gamma=\frac{3(n+1)}{n+5}
\end{equation}
Once again, the final state ``remembers'' the initial index $n$.
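The two chains of reasoning above are easy to compare side by side. The sketch below tabulates, for a few linear indices, the halo slope $\epsilon$ demanded by stable clustering and the correlation slope $\gamma$ implied by the spherical-collapse halo profile (the function names are ours; the formulas are exactly those in the text):

```python
def gamma_from_epsilon(eps):
    # Correlation slope for a superposition of halos with f ~ r^-eps (eq. gammep).
    return 2.0 * eps - 3.0

def eps_stable_clustering(n):
    # Halo slope required if stable clustering fixes gamma = 3(n+3)/(n+5).
    return 3.0 * (n + 4.0) / (n + 5.0)

def gamma_spherical_profile(n):
    # Correlation slope if spherical collapse fixes the halo slope
    # eps = 3(n+3)/(n+5); this reproduces gamma = 3(n+1)/(n+5).
    return gamma_from_epsilon(3.0 * (n + 3.0) / (n + 5.0))

print(" n   eps (stable)   gamma (spherical profile)")
for n in (-2, -1, 0, 1):
    print(f"{n:2d}     {eps_stable_clustering(n):.3f}          {gamma_spherical_profile(n):.3f}")
```

Either way the asymptotic slopes carry the linear index $n$, which is precisely the point at issue.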
Is this conclusion true ? Unfortunately, simulations do not have sufficient
dynamic range to provide a clear answer but there are some claims that
the halo profiles are ``universal'' and independent of initial conditions.
The theoretical arguments given above are also far from rigorous (in spite
of the popularity they seem to enjoy!). The argument for the correlation function to scale as
$[-3(n+3)/(n+5)]$ is based on the assumption of $h=1$ asymptotically, which
may not be true. The argument, leading to density profiles scaling as
$x^{[-3(n+3)/(n+5)]}$, is based on scale invariant spherical collapse which
does not do justice to nonradial motions. Just to illustrate the situations
in which one may obtain final configurations which are independent of
initial index $n$, we shall discuss two possibilities:
(i) As a first example we will try to see when the slope of the correlation
function is universal and obtain the slope of halos in the nonlinear limit
using our relation (\ref{gammep}). Such a situation can develop {\it if we assume that $h$ reaches a
constant
value asymptotically which is not necessarily unity}. In that case, we get $\bar\xi(a,x)=a^{3h}F[a^h x]$ where $h$ now
denotes the constant asymptotic value of the function. For an initial
spectrum which is scale-free power law with index $n$, this result translates
to
\begin{equation}
\bar\xi(a,x)\propto a^{\frac{2 \gamma}{n+3}} x^{-\gamma}
\end{equation} where $\gamma$ is given by
\begin{equation}
\gamma=\frac{3 h (n+3)}{2+h(n+3)}
\end{equation}
We now notice that one can obtain
a $\gamma$ which is independent of initial power law index provided
$h$ satisfies the condition $h(n+3)=c$, a constant. In this case, the halo
slope at the nonlinear end, obtained from equation (\ref{gammep}),
will be given by
\begin{equation}
\epsilon=3\left( \frac{c+1}{c+2} \right)
\end{equation}
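This expression for the halo slope follows by combining $\gamma=3c/(2+c)$ (the previous equation, with $h(n+3)=c$) with the nonlinear-end relation $\gamma=2\epsilon-3$ of equation (\ref{gammep}):

```latex
\begin{equation}
\epsilon=\frac{\gamma+3}{2}
=\frac{1}{2}\left(\frac{3c}{2+c}+3\right)
=\frac{3(c+1)}{c+2}
\end{equation}
```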
Note that we are now demanding the asymptotic value of $h$ to {\it explicitly
depend} on the initial conditions though the {\it spatial} dependence of $\bar\xi(a,x)$
does not.
In other words, the velocity distribution --- which is related to $h$ --- still
``remembers'' the initial
conditions. This is indirectly reflected in the fact that the growth
of $\bar\xi(a,x)$ --- represented by $a^{6c/((2+c)(n+3))}$ --- does depend on the
index $n$.
We emphasize the fact that the velocity distribution remembers the initial condition because it is usual (in published literature) to ignore the memory in velocity and concentrate entirely on the correlation function. It is not clear to us [or we suppose to anyone else] whether it is possible to come up with a clustering scenario in which no physical feature remembers the initial conditions. This could probably occur when virialisation has run its full course but even then it is not clear whether the particles which evaporate from a given potential well (and form a uniform hot component) will forget all the initial conditions.
\par
As an example of the power of such a --- seemingly simple --- analysis, note the
following: Since $c \geq 0$, it follows that $\epsilon \geq (3/2)$; invariant
profiles
with shallower indices (e.g., with $\epsilon=1$) are not consistent
with the evolution described above.
\par
(ii) For our second example, we shall make an ansatz for the halo profile
and use it to determine the correlation function.
We assume, based on small scale dynamics, that
the density profiles of individual halos
should resemble that of isothermal spheres, with $\epsilon=2$, irrespective of
initial conditions. Converting this halo profile to correlation function
in the {\it nonlinear} regime is straightforward and is based on equation
(\ref{gammep}):
If $\epsilon=2$, we must have $\gamma=2 \epsilon-3=1$ at
small scales; that is $\bar\xi(a,x)\propto x^{-1}$ at the nonlinear regime.
Note that this corresponds to the index at the nonlinear
end, for which the growth rate is $a^2$ --- same as in linear
theory. We shall say more about such `critical' indices later.
[This $a^2$ growth, however, is possible, for initial power law spectra, only if
$\epsilon=2$, i.e., $h(n+3)=1$ at very nonlinear scales.
Testing the conjecture that $h(n+3)$ is a constant is probably a little
easier than looking for invariant profiles in the simulations but the
results are still uncertain].
The corresponding analysis for the intermediate regime, with
$1\gaprox\bar\xi(a,x)\gaprox 200$, is more involved.
This is clearly
seen in equation (\ref{powcen1}) which shows that the power spectrum [and
hence the
correlation
function] depends {\it both} on the Fourier transform of the halo profiles as
well as the power spectrum of the distribution of halo centres. In general,
both quantities will evolve with time and we cannot
ignore the effect of $P_{\rm cent}(k,a)$ and relate $P(k,a)$ to $f(k,a)$.
The density profile around a {\it local maximum} will
scale approximately as $\rho\propto\xi$ while the density profile around
a {\it randomly} chosen point will scale as $\rho\propto\xi^{1/2}$. [The relation
$\gamma=2 \epsilon-3$ expresses the latter scaling of $\xi\propto\rho^2$].
There is, however,
reason to believe that the intermediate regime (with $1 \gaprox \bar\xi \gaprox 200$) is dominated by the
collapse of high peaks (see Padmanabhan, 1996a). In that case, we expect the
correlation function and the density profile to have the same slope
in the intermediate regime with $\bar\xi(a,x)\propto (1/x^2)$. Remarkably enough,
this corresponds to the `critical' index $n_c=-1$ for the intermediate
regime for which the growth is proportional to $a^2$.
We thus see that if: (i) the individual halos are isothermal spheres
with $(1/x^2)$ profile and (ii) if $\xi\propto\rho$ in the intermediate regime
and $\xi\propto\rho^2$ in the nonlinear regime, we end up with a correlation
function {\it which grows as $a^2$ at all scales}. Such an evolution, of course,
preserves the shape and is a good candidate for the late stage evolution of
the clustering.
While the above arguments are suggestive, they are far from conclusive. It
is, however, clear from the above analysis that it is not easy to provide
{\it unique} theoretical reasoning regarding the shapes of the halos.
The situation gets more complicated if we include the fact that halos
will not all have the same mass, core radius etc. and we have to modify our
equations by integrating over the abundance of halos with a given value of
mass, core radius etc. This brings in more ambiguities since the final results
depend on the assumptions we make for each of these components [e.g., the
abundance of halos of a particular mass could be based on the Press-Schechter
formalism], and hence have no unique significance.
It is, therefore, better [and
probably easier] to attack the question based on the evolution equation for
the correlation function rather than from ``physical'' arguments for density profiles.
\section{Power transfer and critical indices}
Given a model for the evolution of the power spectra in the quasilinear
and nonlinear regimes, one could generalise the questions raised in the last section and explore whether
evolution of gravitational clustering
possesses any universal characteristics. For example, one could ask
whether a complicated initial power spectrum will be driven to any
particular form of power spectrum in the late stages of the evolution. This is a somewhat more general issue than, say, the invariance of halo profile.
One suspects that such a possibility might arise because of the following reason: We saw in section 11 that [in the
quasilinear regime] spectra with $n<-1$ grow faster
than $a^2$ while spectra with $n>-1$ grow slower than $a^2$. This feature
could drive the spectral index to $n=n_c\approx -1$ in the quasilinear
regime irrespective of the initial index. Similarly, the index in
the nonlinear regime could be driven to $n\approx -2$ during the late time evolution. So the spectral indices $-1$ and $-2$ are some kind
of ``fixed points'' in the quasilinear and nonlinear regimes. Speculating along
these lines, we would expect the gravitational clustering to lead to
a ``universal'' profile which scales as $x^{-1}$ at the nonlinear end, changing over to $x^{-2}$ in
the quasilinear regime.
This effect can be understood better by studying the ``effective" index
for the power spectra at different stages of the evolution (see Bagla and Padmanabhan, 1997). To do this most effectively, let us define a local
index for rate of clustering by
\begin{equation}
n_a(a,x)\equiv \part{\ln \bar\xi(a,x)}{\ln a}
\end{equation}
which measures how fast $\bar\xi(a,x)$ is growing. When $\bar\xi(a,x)\ll 1$, then $n_a=2$
irrespective of the spatial variation of $\bar\xi(a,x)$ and the evolution preserves the shape of $\bar\xi(a,x)$. However, as clustering develops, the growth rate will
depend on the spatial variation of $\bar\xi(a,x)$. Defining the effective spatial
slope by
\begin{equation}
-[n_{x}(a,x)+3]\equiv \part{\ln \bar\xi(a,x)}{\ln x}
\end{equation}
one can rewrite the equation (\ref{qhsi}) as
\begin{equation}
\label{naeqn}
n_a=h(\frac{3}{\bar\xi(a,x)} -n_{x})
\end{equation}
At any given scale of nonlinearity, decided by $\bar\xi(a,x)$, there exists a critical
spatial slope $n_c$ such that $n_a>2 $ for $n_{x}<n_c$ [implying rate of growth is faster
than predicted by linear theory] and
$n_a<2 $ for $n_{x}>n_c$ [with the rate of growth being slower
than predicted by linear theory]. The critical index $n_c$ is fixed by setting $n_a=2$ in equation (\ref{naeqn}) at any instant. This requirement comes from the physically motivated desire to have a form of the two point correlation function that remains invariant under time evolution. Since the linear end of the two point correlation function scales as $a^2$, the required invariance of form constrains the index $n_a$ to be $2$ at {\it all} scales. The fact that $n_a>2$ for $n_{x} <n_c$ and $n_a<2$ for $n_{x} >n_c$ will tend to ``straighten out'' correlation functions towards the critical slope.
[We are assuming that $\bar\xi(a,x)$ has a slope that is decreasing with
scale, which is true for any physically interesting case]. From the NSR it is easy to see that in the range $1 {\mbox{\gaprox}}
\bar\xi {\mbox{\gaprox}} 200$, the critical index is $n_c\approx -1$
and for $200 \gaprox \bar\xi$, the critical index is $n_c\approx -2$.
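Explicitly, setting $n_a=2$ in equation (\ref{naeqn}) and solving for the spatial slope gives

```latex
\begin{equation}
n_c=\frac{3}{\bar\xi}-\frac{2}{h}
\end{equation}
```

so that for $\bar\xi\gg 1$ we have $n_c\approx -2/h$; the quoted values correspond, roughly, to $h$ between $1.5$ and $2$ in the intermediate regime and $h\approx 1$ (stable clustering) at the nonlinear end.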
This clearly suggests that the local effect of evolution is to
drive the correlation function to have a shape with $(1/x)$ behaviour
at nonlinear regime and $(1/x^2)$ in the intermediate regime. Such a
correlation function will have $n_a\approx 2$ and hence will grow at
a rate close to $a^2$.
The three panels of figure (9) illustrate features related to the
existence of fixed points in a clearer manner. In the top panel we have
plotted index of growth $n_a\equiv(\partial \ln\bar\xi(a,x)/\partial
\ln a)_x$ as a function of $\bar\xi$ in the quasilinear regime
obtained from the best fit for NSR based on simulations. Curves
correspond to an input spectrum with index $n=-2,-1,1$. The dashed
horizontal line at $n_a=2$ represents the linear growth rate. An index
above this dashed horizontal line will represent a rate of growth faster than
linear growth rate and the one below will represent a rate which is
slower than the linear rate. It is clear that -- in the quasilinear
regime -- the curve for $n=-1$ closely follows the linear growth while
$n=-2$ grows faster and $n=1$ grows slower; so the critical index is
$n_c\approx -1$.
The second panel of figure 9 shows the effective index $n_a$ as a
function of the index $n$ of the original linear spectrum at different
levels of nonlinearity labelled by $\bar\xi=1,5,10,50,100$. We see
that in the quasilinear regime, $n_a>2$ for $n<-1$ and $n_a<2$ for
$n>-1$.
The lower panel of figure 9 shows the slope $n_x = -3 - (\partial\ln
{\bar\xi} /\partial \ln{x})_a $ of $\bar\xi$ for different power law
spectra. It is clear that $n_x$ crowds around $n_c\approx -1$ in the
quasilinear regime. If perturbations grow by gravitational instability,
starting from an epoch at which $\bar\xi_{initial}\ll 1$ at all
scales, then equation (\ref{naeqn}) with $n_a > 0$ requires that $n_x$, at any epoch, must satisfy the
inequality
\begin{equation}
n_x\le (3/\bar\xi).\label{qq}
\end{equation}
This bounding curve is shown by a dotted line in the figure. This
powerful inequality shows that regions of strong nonlinearity [with
$\bar\xi\gg 1$] should have effective index which is close to or less
than zero.
\begin{figure}
\epsfxsize=3.3truein\epsfbox[37 402 558 744]{fig4a.ps}
\epsfxsize=3.3truein\epsfbox[37 399 552 737]{fig4b.ps}
\epsfxsize=3.3truein\epsfbox[37 399 552 737]{fig4c.ps}
\caption{The top panel shows exponent of rate of growth of density
fluctuations
as a function of amplitude. We have plotted the rate of growth for
three scale invariant spectra $n=-2, -1, 1$. The dashed horizontal
line indicates the exponent for linear growth. For the range
$1<\delta<100$, the $n=-1$ spectrum grows as in linear theory; $n<-1$
grows faster and $n>-1$ grows slower. The second panel shows exponent
of rate of growth as a function of linear index of the power spectrum
for different values of $\bar\xi$ $( 1,5,10,50,100)$. These are
represented by thick, dashed, dot-dashed, dotted and the dot-dot-dashed
lines respectively. It is clear that spectra with $n_{lin}<-1$
grow faster than the rate of growth in linear regime and $n_{lin}>-1$
grow slower. The lower panel shows the evolution of index
$n_x=-3-(\partial\ln {\bar\xi} /\partial \ln{x})_a$ with
$\bar\xi$. Indices vary from $n=-2.5$ to $n=4.0$ in steps of
$0.5$. The tendency for $n_x$ to crowd around $n_c=-1$ is apparent in
the quasilinear regime. The dashed curve is a bounding curve for the
index ($n_x < 3 /\bar\xi$) if perturbations grow via gravitational
instability.} \label{figure9}
\end{figure}
The index $n_c=-1$ corresponds to the isothermal profile with
$\bar\xi(a,x)=a^2x^{-2}$ and has two interesting features to
recommend it as a candidate for a fixed point:
(i) For $n=-1$ spectra each logarithmic scale contributes the same
amount of correlation potential energy. If the regime is modelled by scale
invariant radial flows, then the kinetic energy will scale in the same
way. It is conceivable that flow of power leads to such an
equipartition state as a fixed point, though it is difficult to prove such
a result in any generality.
(ii) It was shown earlier that scale invariant spherical collapse will
change the density profile $x^{-b}$ with an index $b$ to another
profile with index $3b/(1+b)$. Such a mapping has a nontrivial fixed
point for $b=2$ corresponding to the isothermal profile and an index
of the power spectrum $n=-1$ (see Padmanabhan, 1996a).
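The attracting nature of this fixed point is easy to verify numerically; the sketch below (ours) iterates the map $b\mapsto 3b/(1+b)$ from a few illustrative starting slopes:

```python
def collapse_map(b):
    # scale invariant spherical collapse maps a density profile x^(-b)
    # to a profile x^(-3b/(1+b)); b = 2 (isothermal) is a fixed point
    return 3.0 * b / (1.0 + b)

for b0 in (0.5, 1.0, 1.5, 2.5, 4.0):
    b = b0
    for _ in range(60):
        b = collapse_map(b)
    # the map contracts towards b = 2: its derivative there is
    # 3/(1+b)^2 = 1/3 < 1, so the iterates converge geometrically
    assert abs(b - 2.0) < 1e-9, (b0, b)
```

The other fixed point, $b=0$, is repelling (the derivative of the map there is $3$), so generic initial slopes are driven to the isothermal value.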
These considerations also allow us to predict the nature of power
transfer in gravitational clustering. Suppose that, initially, the
power spectrum was sharply
peaked at some scale
$k_0=2\pi/L_0$ and has a small width $\Delta k$. When the peak
amplitude of the spectrum is far less than unity, the evolution
will be described by linear theory and there will be no flow
of power to other scales. But once the peak approaches a value
close to unity, power will be generated at other scales due to nonlinear
couplings {\it even though the amplitude of perturbations in
these scales are less than unity}.
Mathematically, this
can be understood from the evolution equation (\ref{exev}) for the density contrast
--- written in Fourier space --- as:
\begin{equation}
\ddot\delta_{\bf k}+2{\dot a\over a}\dot\delta_{\bf k}=4\pi
G\bar\rho\delta_{\bf k} +Q_{\bld k} \label{coupling}
\end{equation}
where $\delta_{\bf k}(t)$ is the Fourier transform of the density
contrast, $\bar\rho$ is the background density and $Q_{\bld k} \equiv A_{\bld k} - B_{\bld k}$ is a nonlocal,
nonlinear function which couples the mode ${\bf k}$ to all other modes
${\bf k'}$ . Coupling between different modes is
significant in two cases. The obvious case is one with $\delta_{\bf k}
\ge 1$. A more interesting possibility arises for modes with no
initial power [or exponentially small power]. In this case nonlinear
coupling provides the only driving terms, represented by $Q_{\bld k}$ in
equation (\ref{coupling}). These generate power at the scale ${\bf k}$
through mode-coupling, provided power exists at some other scale. {\it
Note that the growth of power at the scale ${\bf k}$ will now be
governed purely by nonlinear effects even though $\delta_{\bf k} \ll
1$.}
Physically, this arises along the following lines: If the initial
spectrum is sharply peaked at some scale $L_0$, the first structures to
form are voids with a typical diameter
$L_0$. Formation and fragmentation of sheets bounding the voids lead
to generation of power at scales $L<L_0$. The first bound structures will then form
at the mass scale corresponding to $L_0$. In such a model,
$\bar\xi_{\rm{lin}}$ at $L<L_0$ is nearly constant with an effective index of
$n\approx -3$. Assuming we can use equation (\ref{hamilton}) with the
local index in this case, we expect the power to grow very rapidly
as compared to the linear rate of $a^2$. [The rate of growth is $a^6$
for $n= -3$ and $a^4$ for $n=-2.5$.] Different rates of growth for
regions with different local index will lead to steepening of
the power spectrum and an eventual slowing down of the rate of
growth. In this process, which is the dominant one,
the power transfer is mostly
from large scales to small scales. [There is also a
generation of the $k^4$ tail at large scales which we have discussed earlier.]
\begin{figure}
\centering
\psfig{file=kishfig2a.ps,width=3.5truein,height=3.0truein,angle=0}
\caption{The transfer of power in gravitational clustering }
\label{figure10}
\end{figure}
\begin{figure}
\centering
\psfig{file=kishfig3.ps,width=3.5truein,height=3.0truein,angle=0}
\caption{The growth of gravitational clustering towards a universal power spectrum $P(k) \propto k^{-1}$.}
\label{figure11}
\end{figure}
From our previous discussion, we would have expected such an evolution
to lead to a ``universal''
power spectrum with some critical index $n_c\approx -1$
for which the rate of growth is that of linear theory - viz.,
$a^2$. In fact, the same results should hold even when there exists small
scale power; recent numerical simulations dramatically confirm this
prediction and show that --- in the quasilinear
regime, with $1<\delta<100$ --- the power spectrum indeed has a universal slope (see figures 10, 11; for more details, see Bagla and Padmanabhan, 1997).
The initial power spectrum for figure 10 was
a Gaussian peaked at the scale $k_0=2\pi/L_0 ; L_0=24$ and having a
spread $\Delta k=2\pi/128$. The amplitude of the peak was chosen so
that $\Delta_{lin} (k_0=2\pi /L_0, a=0.25)=1$, where $\Delta^2(k)=k^3
P(k)/(2\pi^2)$ and $P(k)$ is the power spectrum. Needless to say, the
simulation starts while the peak of the Gaussian is in the linear
regime $(\Delta(k_0) \ll 1)$.
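The normalization described here is simple to reproduce. In the sketch below (ours; we read $\Delta k$ as the Gaussian width of the peak, which is an assumption about the setup) the amplitude $A$ is fixed so that $\Delta(k_0)=1$:

```python
import math

L0, Lbox = 24.0, 128.0
k0 = 2 * math.pi / L0           # peak of the initial spectrum
dk = 2 * math.pi / Lbox         # spread of the Gaussian (our reading of Delta k)

def P(k, A):
    # Gaussian initial power spectrum peaked at k0 with width dk
    return A * math.exp(-(k - k0) ** 2 / (2 * dk ** 2))

def Delta2(k, A):
    # power per logarithmic interval: Delta^2(k) = k^3 P(k) / (2 pi^2)
    return k ** 3 * P(k, A) / (2 * math.pi ** 2)

A = 2 * math.pi ** 2 / k0 ** 3  # chosen so that Delta(k0) = 1
assert abs(Delta2(k0, A) - 1.0) < 1e-12
```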
The y-axis is
$\Delta(k)/a$, the power per logarithmic scale divided by the linear
growth factor. This is plotted as a function of scale $L=2\pi/k$ for
different values of scale factor $a(t)$ and the curves are labeled by the
value of $a$. As we have divided the power spectrum by its linear rate
of growth, the change of shape of the spectrum occurs strictly because
of non-linear mode coupling. It is clear from this figure that power at
small scales grows rapidly and saturates to a growth rate close to
the linear rate [shown by crowding of curves] at later epochs. The
effective index for the power spectrum approaches $n=-1$ within the
accuracy of the simulations. Thus this figure clearly demonstrates the
general features we expected from our understanding of scaling
relations.
Figure~11 compares power spectra of three different models at a late epoch.
Model I was described in the last paragraph; Model II had initial power concentrated in two narrow windows in
$k$-space. In addition to power around $L_0=24$ as in model I, we
added power at $k_1=2\pi/L_1 ; L_1=8$ using a Gaussian with same width
as that used in model I. Amplitude at $L_1$ was chosen five times
higher than that at $L_0=24$, thus $\Delta_{lin} (k_1,a=0.05)=1$.
Model III was similar to model II, with the small scale peak
shifted to $k_1=2\pi/L_1 ; L_1=12$. The amplitude of the small scale
peak was the same as in Model II. At
this epoch $\Delta_{lin}(k_0)=4.5$ and it is clear from this figure
that the power spectra of these models are very similar to one
another.
There is another way of looking at this feature which is probably more useful. We recall that, in the study of finite gravitating systems made of point particles
interacting via Newtonian gravity, isothermal spheres play an important
role. They can be shown to be the local maxima of entropy [see
Padmanabhan, 1990] and hence dynamical
evolution drives the system towards a $(1/x^2)$ profile. Since one expects
similar considerations to hold at small scales, during the late stages of evolution of the universe, we may hope that isothermal spheres with
$(1/x^2)$ profile may still play a role in the late stages of evolution of
clustering in an expanding background. However, while converting the profile to correlation, we have to take note of the issues discussed earlier.
In the intermediate regime, dominated by scale invariant radial collapse, the density will scale as the correlation function and
we will have $\bar\xi\propto (1/x^2)$. On the other hand, in the nonlinear
end, we have the relation $\gamma=2\epsilon -3$ which
gives $\bar\xi\propto (1/x)$ for $\epsilon=2$. Thus, if isothermal spheres
are the generic contributors, then we expect the correlation function to
vary as $(1/x)$ at nonlinear scales, steepening to $(1/x^2)$ at intermediate
scales. Further, since isothermal spheres are local maxima of entropy, a configuration like this should remain undistorted for a long duration. This
argument suggests that a $\bar\xi$ which goes as $(1/x)$ at small scales
and $(1/x^2)$ at intermediate scales is likely to be a candidate for a {\it pseudo-linear profile} --- that is, a configuration which grows approximately as $a^2$ at all scales.
To go from the scalings in two limits to an actual profile, we can use
some fitting function. By making the fitting function sufficiently complicated,
we can make the pseudo-linear profile more exact. The simplest interpolation between the two limits is given by (Padmanabhan and Engineer, 1998)
\begin{equation}
\label{xisolution}
\bar\xi(a,x)=\left(\frac{Ba}{2}\;\left(\sqrt{1+\frac{L}{x}} -1\right)\right)^2
\end{equation}
with $L, B$ being constants. This approximate profile works reasonably well; the optimum value is
$B=38.6$. If we evolve this pseudo-linear profile
from $a^2=1$ to $a^2\approx 1000$ using the NSR, and plot
$[\bar\xi(a,x)/a^2]$ against $x$ then the curves virtually fall on top of each other within about 10 per cent (see Padmanabhan and Engineer, 1998).
This overlap of the curves shows that the profile does grow
approximately as
$a^2$.
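The two limiting slopes of this interpolation are easy to check; the sketch below (ours, with illustrative values of $a$, $B$ and $L$) measures the logarithmic slope of $\bar\xi$ numerically:

```python
import math

def xibar(x, a=1.0, B=38.6, L=1.0):
    # pseudo-linear profile: xibar = [(B a / 2)(sqrt(1 + L/x) - 1)]^2
    return (0.5 * B * a * (math.sqrt(1.0 + L / x) - 1.0)) ** 2

def log_slope(x, eps=1e-4):
    # d ln(xibar) / d ln(x), by centred differencing in ln x
    return (math.log(xibar(x * (1 + eps))) -
            math.log(xibar(x * (1 - eps)))) / (2 * eps)

assert abs(log_slope(1e-6) + 1.0) < 0.01   # nonlinear end: xibar ~ 1/x
assert abs(log_slope(1e6) + 2.0) < 0.01    # intermediate scales: xibar ~ 1/x^2
```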
Finally, we will discuss a different way of thinking about
pseudolinear profiles which may be useful. In studying the evolution of the density contrast $\delta(a,{\bf x})$, it is
conventional
to expand it in terms of the plane wave modes as
\begin{equation}
\delta(a,{\bf x})=\sum_{\bf k} \delta(a,{\bf k}) \exp(i {\bf k}\cdot{\bf x})
\label{name1}
\end{equation}
In that case,
the {\it exact} equation governing the evolution of $\delta(a,{\bf k})$ is
given by
\begin{equation}
\frac{d^2 \delta_{\bf k}}{d a^2}+\frac{3}{2 a} \frac{d \delta_{\bf k}}{d
a}-\frac{3}{2 a^2}\delta_{\bf k}={\cal A}\label{deltakeq}
\end{equation}
where ${\cal A}$ denotes the terms responsible for the
nonlinear coupling between different
modes. The expansion in equation (\ref{name1}) is, of course, motivated by the
fact that
in the linear regime we can ignore ${\cal A}$ and each of the modes evolve
independently. For the same reason, this expansion is not of much value
in the highly nonlinear regime.
This prompts one to ask the question: Is it possible to choose some other
set of basis functions $Q(\alpha,{\bf x})$, instead of $\exp\;i{\bf k}\cdot{\bf
x}$, and expand $\delta(a,{\bf x})$ in the form
\begin{equation}
\delta(a,{\bf x})=\sum_{\alpha} \delta_{\alpha}(a)\; Q(\alpha,{\bf x})
\end{equation}
so that the
nonlinear effects are minimised ? Here $\alpha$ stands for a set of parameters
describing the basis functions. This question is extremely difficult to answer,
partly because it is ill-posed. To make any progress, we have to first give
meaning to the concept of ``minimising the effects of nonlinearity''. One
possible approach we would like to suggest is the following: We know that when
$\delta(a,{\bf x}) \ll 1$, then $\delta(a,{\bf x})\propto a\:F({\bf x})$ for
{\it any} arbitrary $F({\bf x})$; that is, all power spectra grow as $a^2$ in the
linear regime. In the intermediate and nonlinear regimes, no such general statement can
be made. But it is conceivable that there exists certain {\it special} power
spectra for which $P({\bf k},a)$ grows (at least approximately) as $a^2$ even in
the nonlinear regime. For such a spectrum, the left hand side of
(\ref{deltakeq}) vanishes (approximately); hence the right hand side should
also vanish. {\it Clearly, such power spectra are
affected least by nonlinear effects.} Instead of looking for such a special
$P(k,a)$ we can, equivalently, look for a
particular form of $\bar\xi(a,x)$ which evolves as closely to the linear theory
as possible. Such correlation functions and corresponding power spectra [which
are the pseudo-linear
profiles] must be capable of capturing most of the essence of nonlinear
dynamics. In this sense, we can think of our pseudo-linear profiles as
the basic building blocks of the nonlinear universe. The fact that the
correlation function is closely related to isothermal spheres indicates
a connection between local gravitational dynamics and large scale gravitational
clustering.
\section{Conclusion}
I tried to highlight in these lectures several aspects of gravitational
clustering which --- I think --- are important for understanding the basic
physics. Some of the discussion points to obvious interrelationships
with other branches of theoretical physics. For example, we saw that the power injected at any given scale cascades to other scales leading to a (nearly)
universal power spectrum. This is reminiscent of fluid turbulence in
which Kolmogorov spectrum arises as a (nearly) universal choice. Similarly,
the existence of certain configurations, which are least disturbed by the
evolution [the ``pseudolinear profiles'', discussed in section 13] suggests
similarities with the study of eddies in fluid mechanics, which possess a life
of their own. Finally, the integral equation coupling the modes (\ref{calgxx})
promises to be an effective tool for analysing this problem. We are still
far from having understood the dynamics of this system from first principles and I hope these lectures serve the purpose of stimulating interest
in this fascinating problem.
\medskip
\noindent {\bf Acknowledgement}
\medskip
\noindent I thank Professor Reza Mansouri for the excellent hospitality during my visit to Iran in connection with this conference.
\section{Introduction}
The quantum adiabatic algorithm was introduced \cite{adiabatic} as a quantum algorithm for
finding the minimum
of a classical cost function $h(z)$, where $z=0,\dots,N-1$. This cost function is used to define
a quantum Hamiltonian diagonal in the $z$ basis:
\begin{eqnarray}
H_P = \sum_{z=0}^{N-1} h(z) \ket{z}\bra{z}. \label{problemHam}
\end{eqnarray}
The goal is now to find the ground state of $H_P$. To this end a ``beginning'' Hamiltonian $H_B$ is
introduced with a known and easy to construct ground state $\ket{g_B}$. The quantum computer
is a system governed by the time dependent Hamiltonian
\begin{eqnarray}
H(t) = (1-t/T) H_B + (t/T) H_P, \label{adiabaticHam}
\end{eqnarray}
where $T$ controls the rate of change of $H(t)$. Note that $H(0)=H_B$ and $H(T)=H_P$.
The state of the system obeys the Schr\"odinger equation,
\begin{eqnarray}
i \deriv{}{t} \ket{\psi(t)} = H(t) \ket{\psi(t)}, \label{Schrodinger}
\end{eqnarray}
where we choose
\[
\ket{\psi(0)} = \ket{g_B}
\]
and run the algorithm for time $T$. By the adiabatic theorem, if $T$ is large enough then $\ket{\psi(T)}$
will have a large component in the ground state subspace of $H_P$. (Note we are not bothering to state the
necessary condition on the lack of degeneracy of the spectrum of $H(t)$ for $0<t<T$, since it will
not play a role in the results we establish in this paper.) A measurement of $z$
can then be used to find the minimum of $h(z)$. The algorithm is useful if the required
run time $T$ is not too large as a function of $N$.
There is hope that there may be combinatorial search problems, defined on $n$ bits so that $N=2^n$, where
for certain ``interesting'' subsets of the instances the run time $T$ grows subexponentially in $n$.
A positive result of this kind would greatly expand the known power of quantum computers. At the same time
it is worthwhile to understand the circumstances under which the algorithm is doomed to fail.
In this paper we prove some general results which show that with certain choices of $H_B$ or $H_P$ the
algorithm will not succeed if $T$ is $o(\sqrt{N})$, that is $T/\sqrt{N}\rightarrow 0$ as $N\rightarrow\infty$,
so that improvement beyond Grover speedup is impossible.
We view these failures as due to poor choices for $H_B$ and $H_P$, which teach us what not to do
when looking for good algorithms.
We guarantee failure by removing any structure which might exist in $h(z)$ from either $H_B$ or $H_P$.
By structure we mean that $z$ is written as a bit string and both $H_B$ and $H_P$ are sums of
terms involving only a few of the corresponding qubits.
In Section II we show that regardless of the form of $h(z)$ if $H_B$ is a one dimensional projector
onto the uniform superposition of all the basis states $\ket{z}$, then the quantum adiabatic algorithm fails.
Here all the $\ket{z}$ states are treated identically by $H_B$ so any structure contained in $h(z)$ is
lost in $H_B$. In Section III we consider a scrambled $H_P$ that we get by replacing the cost function
$h(z)$ by $h(\pi(z))$ where $\pi$ is a permutation of $0$ to $N-1$. Here the values of $h(z)$ and
$h(\pi(z))$ are the same but the relationship between input and output is scrambled by the permutation.
This effectively destroys any structure in $h(z)$ and typically results in algorithmic failure.
The quantum adiabatic algorithm is a special case of Hamiltonian based continuous time quantum algorithms,
where the quantum state obeys (\ref{Schrodinger}) and the algorithm consists of specifying $H(t)$, the initial
state $\ket{\psi(0)}$, a run time $T$ and the operators to be measured at the end of the run.
In the Hamiltonian language, the Grover problem can be recast as the problem of finding the ground state
of
\begin{eqnarray}
H_w = E(\mathbb{I}-\ket{w}\bra{w}), \label{groverHam}
\end{eqnarray}
where $w$ lies between $0$ and $N-1$. The algorithm designer can apply $H_w$, but in this oracular setting,
$w$ is not known. In reference \cite{analog} the following result was proved. Let
\begin{eqnarray}
H(t) = H_D(t) + H_w, \label{analogHam}
\end{eqnarray}
where $H_D$ is any time dependent ``driver'' Hamiltonian independent of $w$. Assume also that the initial
state $\ket{\psi(0)}$ is independent of $w$. For each $w$ we want the algorithm to be successful,
that is $\ket{\psi(T)}=\ket{w}$. It then follows that
\begin{eqnarray}
T \geq \frac{\sqrt{N}}{2E}. \label{Groverscaling}
\end{eqnarray}
The proof of this result is a continuous-time version of the BBBV oracular proof \cite{BBBV}.
Our proof techniques in this paper are similar to the methods used to prove the result just stated.
\section{General search starting with a one-dimensional projector}
In this section we consider a completely general cost function $h(z)$ with
$z=0,\dots,N-1$. The goal is to use the quantum adiabatic algorithm to find the ground
state of $H_P$ given by (\ref{problemHam}) with $H(t)$ given by (\ref{adiabaticHam}). Let
\begin{eqnarray}
\ket{s} = \frac{1}{\sqrt{N}} \sum_{z=0}^{N-1} \ket{z}
\end{eqnarray}
be the uniform superposition over all possible values $z$. If we pick
\begin{eqnarray}
H_B = E (\mathbb{I}- \ket{s}\bra{s}) \label{groverstart}
\end{eqnarray}
and $\ket{\psi(0)}=\ket{s}$, then the adiabatic algorithm fails in the following sense:
\begin{theorem}
Let $H_P$ be diagonal in the $z$ basis with a ground state subspace of dimension $k$. Let
\[
H(t) = (1-t/T) E \left( \mathbb{I} - \ket{s}\bra{s}\right) + (t/T) H_P.
\]
Let $P$ be the projector onto the ground state subspace of $H_P$ and let $b>0$ be the success
probability, that is, $b=\bra{\psi(T)}P\ket{\psi(T)}$. Then
\[
T \geq \frac{b}{E}\sqrt{\frac{N}{k}} - \frac{2\sqrt{b}}{E}.
\]
\end{theorem}
\begin{proof}
Keeping $H_P$ fixed, we introduce $N-1$ additional beginning Hamiltonians as follows.
For $x=0,\dots,N-1$ let $V_x$ be a unitary operator diagonal in the $z$ basis with
\[
\bra{z}V_x\ket{z} = e^{2\pi i z x /N}
\]
and let
\[
\ket{x} = V_x \ket{s} = \frac{1}{\sqrt{N}} \sum_{z=0}^{N-1} e^{2\pi i z x /N}\ket{z}
\]
so that the $\{\ket{x}\}$ form an orthonormal basis. Note also that
\[
\ket{x=0}=\ket{s}.
\]
We now define
\[
H_x(t) = ( 1-t/T ) E (\mathbb{I}-\ket{x}\bra{x}) + (t/T) H_P,
\]
with corresponding evolution operator $U_x (t_2,t_1)$.
Note that $H(t)$ above is $H_0(t)$ with the corresponding evolution operator $U_0$. For each $x$ we
evolve with $H_x(t)$ from the ground state of $H_x(0)$, which is $\ket{x}$. Note that
$H_x=V_x H_0 V_x^{\dagger}$ and $U_x=V_x U_0 V_x^{\dagger}$. Let $\ket{f_x}=U_x(T,0)\ket{x}$. For
each $x$ the success probability is $\bra{f_x}P\ket{f_x}$, which is equal to $b$ since $P$ commutes
with $V_x$.
The key point is that if we run the Hamiltonian evolution with $H_x$ backwards in time,
we would then be finding $x$, that is, solving the Grover problem.
However, this should not be possible unless the run time $T$ is of order $\sqrt{N}$.
Let $U_R$ be the evolution operator corresponding to an $x$-independent reference Hamiltonian
\[
H_R(t) = (1-t/T) E\,\mathbb{I} + (t/T) H_P.
\]
Let $\ket{g_x} = \frac{1}{\sqrt{b}}P\ket{f_x}$ be the normalized component of $\ket{f_x}$ in the
ground state subspace of $H_P$. We consider the difference in backward evolution from $\ket{g_x}$
with Hamiltonians $H_x$ and $H_R$, and sum on $x$,
\[
S(t) = \sum_x \norm{U_x^{\dagger}(T,t)\ket{g_x}-U_R^{\dagger}(T,t)\ket{g_x}}^2.
\]
Clearly $S(T)=0$, and
\begin{eqnarray*}
S(0) &=& \sum_x \norm{ U_x^{\dagger}(T,0)\ket{g_x} - U_R^{\dagger}(T,0)\ket{g_x} }^2.
\end{eqnarray*}
Now $\ket{g_x}=\sqrt{b}\ket{f_x}+\sqrt{1-b}\ket{f_x^{\perp}}$ where $\ket{f_x^{\perp}}$ is orthogonal
to $\ket{f_x}$. Since $U_x^{\dagger}(T,0)\ket{f_x}=\ket{x}$ we have
\begin{eqnarray*}
S(0) = \sum_x \norm{ \sqrt{b}\ket{x} + \sqrt{1-b}\ket{x^\perp} - \ket{i_x} }^2,
\end{eqnarray*}
where for each $x$, $\ket{x^\perp}$ and $\ket{i_x}$ are normalized states with $\ket{x^\perp}$ orthogonal
to $\ket{x}$. Since $H_R$ commutes with $H_P$, $\ket{i_x}=U_R^{\dagger}(T,0)\ket{g_x}$
is an element of the $k$-dimensional ground state subspace of $H_P$. We have
\begin{eqnarray*}
S(0) &=& 2N - \sum_x \left[ \sqrt{b} \scalar{x}{i_x} + \sqrt{1-b}\langle x^\perp | i_x\rangle+c.c.\right] \\
&\geq& 2N - 2\sqrt{b} \sum_x \bigabs{\scalar{x}{i_x}} - 2N\sqrt{1-b}.
\end{eqnarray*}
Choosing a basis $\{\ket{G_j}\}$ for the $k$ dimensional ground state subspace of $H_P$ and writing
$\ket{i_x} = a_{x1}\ket{G_1} + \cdots + a_{xk}\ket{G_k}$ gives
\begin{eqnarray}
\sum_x \bigabs{\scalar{x}{i_x}}
&\leq& \sum_{x,j} \left|a_{xj}\right| \cdot \bigabs{\scalar{x}{G_j}} \label{boundix} \\
&\leq& \sqrt{ \sum_{x,j} \left|a_{xj}\right|^2 \sum_{x',j'} \bigabs{\scalar{x'}{G_{j'}}}^2 }
= \sqrt{Nk}. \nonumber
\end{eqnarray}
Thus
\begin{eqnarray}
S(0)\geq 2N(1-\sqrt{1-b})- 2\sqrt{b}\sqrt{Nk}. \label{s0eqn}
\end{eqnarray}
We will use the Schr\"{o}dinger equation to find the time derivative of $S(t)$:
\begin{eqnarray*}
\deriv{}{t}S(t) &=& -\sum_x \deriv{}{t} \left[ \bra{g_x}U_x(T,t)U_R^{\dagger}(T,t) \ket{g_x} + c.c. \right] \\
&=& -i \sum_x \bra{g_x}U_x(T,t)[H_x(t)-H_R(t)]U_R^{\dagger}(T,t) \ket{g_x} + c.c. \\
&=& -2\,\textrm{Im} \sum_x (1-t/T)E \bra{g_x}U_x(T,t) \ket{x}\bra{x} U_R^{\dagger}(T,t) \ket{g_x}.
\end{eqnarray*}
Now
\begin{eqnarray*}
\left| \deriv{}{t} S(t) \right| &\leq& 2E (1-t/T) \sum_x
\left| \bra{g_x}U_x(T,t) \ket{x}\bra{x} U_R^{\dagger}(T,t) \ket{g_x} \right| \\
&\leq& 2E (1-t/T) \sum_x
\left| \bra{x} U_R^{\dagger}(T,t) \ket{g_x} \right|.
\end{eqnarray*}
Using the same technique as in (\ref{boundix}), we obtain
\begin{eqnarray*}
\left| \deriv{}{t} S(t) \right| &\leq& 2E (1-t/T) \sqrt{Nk}.
\end{eqnarray*}
Therefore
\begin{eqnarray*}
\int^T_0 \left| \deriv{}{t} S(t) \right| \textrm{d}t \leq ET \sqrt{Nk}.
\end{eqnarray*}
Now $S(0)\leq S(T) + \int^T_0 \left| \deriv{}{t} S(t) \right| \textrm{d}t$ and $S(T)=0$ so
\[
S(0)\leq ET\sqrt{Nk}.
\]
Combining this with (\ref{s0eqn}) gives
\[
ET\sqrt{Nk} \geq 2N (1-\sqrt{1-b}) - 2\sqrt{b} \sqrt{Nk},
\]
which implies what we wanted to prove:
\[
T \geq \frac{b}{E}\sqrt{\frac{N}{k}} - \frac{2\sqrt{b}}{E}.
\]
\end{proof}
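The bound of Theorem 1 can be checked directly at small sizes. The sketch below (our own numerical illustration; the cost values, run time $T$, and step count are arbitrary) integrates the Schr\"odinger equation for $H(t)$ with the projector beginning Hamiltonian on $N=16$ states and verifies that the run time exceeds $\frac{b}{E}\sqrt{N/k}-\frac{2\sqrt{b}}{E}$, where $b$ is the observed success probability.

```python
import numpy as np

N, E, T, k = 16, 1.0, 20.0, 1
rng = np.random.default_rng(0)
h = rng.uniform(1.0, 2.0, size=N)
w = 3                                   # unique minimum with h(w) = 0
h[w] = 0.0

s = np.ones(N) / np.sqrt(N)
H_B = E * (np.eye(N) - np.outer(s, s))  # projector beginning Hamiltonian
H_P = np.diag(h)

# Crude midpoint time discretization of H(t) = (1-t/T) H_B + (t/T) H_P.
steps = 2000
dt = T / steps
psi = s.astype(complex)
for m in range(steps):
    t = (m + 0.5) * dt
    lam, V = np.linalg.eigh((1 - t / T) * H_B + (t / T) * H_P)
    psi = (V * np.exp(-1j * lam * dt)) @ (V.conj().T @ psi)

b = abs(psi[w]) ** 2                    # success probability
bound = (b / E) * np.sqrt(N / k) - 2 * np.sqrt(b) / E
```

The bound is of course far from tight at such small $N$; its content is the $\sqrt{N}$ scaling as $N$ grows.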
How do we interpret Theorem 1? The goal is to find the minimum of the cost function $h(z)$
using the quantum adiabatic algorithm. It is natural to pick for $H_B$ a Hamiltonian
whose ground state is $\ket{s}$, the uniform superposition of all $\ket{z}$ states. However
if we pick $H_B$ to be the one dimensional projector $E(\mathbb{I}-\ket{s}\bra{s})$ the algorithm
will not find the ground state if $T/\sqrt{N}$ goes to $0$ as $N$ goes to infinity.
The problem is that $H_B$ has no structure and makes no reference to $h(z)$.
Our hope is that the algorithm might be useful for interesting computational problems
if $H_B$ has structure that reflects the form of $h(z)$.
Note that Theorem 1 explains the algorithmic failure discovered by
\v{Z}nidari\v{c} and Horvat \cite{znidaric} for a particular set of $h(z)$.
For a simple but convincing example of the importance of the choice of $H_B$, suppose we take
a decoupled $n$ bit problem which consists of $n$ clauses each acting on one bit, say for each bit $j$
\begin{eqnarray*}
h_j(z)=\left\{
\begin{array}{rl}
0 & \quad \textrm{if} \, z_{j}=0, \\
1 & \quad \textrm{if} \, z_{j}=1,
\end{array}
\right.
\end{eqnarray*}
so
\begin{eqnarray}
h(z)=z_1+z_2+\dots+z_n \label{decoupledcost}.
\end{eqnarray}
Let us pick a beginning Hamiltonian reflecting the bit structure of the problem,
\begin{eqnarray}
H_B = \sum_{j=1}^{n} \frac{1}{2}\left(1-\sigma_x^{(j)}\right). \label{sumSX}
\end{eqnarray}
The ground state of $H_B$ is $\ket{s}$.
The quantum adiabatic algorithm acts on each bit independently, producing a success probability of
\[
p = \left(1-q(T)\right)^n,
\]
where $q(T)\rightarrow 0$ as $T\rightarrow\infty$ is the transition probability between the ground state
and the excited state of a single qubit. As long as $n q(T)\rightarrow const.$ we have a constant
probability of success. This can be achieved for $T$ of order $\sqrt{n}$, because for a two level system
with a nonzero gap, the probability of a transition is $q(T)=O(T^{-2})$. (For details, see Appendix A.)
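A minimal numerical check of this single-qubit picture (our own sketch; the sweep times are arbitrary) integrates the Schr\"odinger equation for one bit with $H_B=\frac{1}{2}(1-\sigma_x)$ and $H_P=\frac{1}{2}(1-\sigma_z)$ and confirms that the transition probability $q(T)$ decays with the run time.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H_B = 0.5 * (np.eye(2) - sx)   # one term of the beginning Hamiltonian (sumSX)
H_P = 0.5 * (np.eye(2) - sz)   # one-bit cost h_j(z_j) = z_j

def transition_prob(T, steps=4000):
    """q(T): probability of ending in the excited state after a linear sweep."""
    dt = T / steps
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # ground state of H_B
    for m in range(steps):
        t = (m + 0.5) * dt
        lam, V = np.linalg.eigh((1 - t / T) * H_B + (t / T) * H_P)
        psi = (V * np.exp(-1j * lam * dt)) @ (V.conj().T @ psi)
    return abs(psi[1]) ** 2    # the ground state of H_P is |0>

q_short, q_long = transition_prob(1.0), transition_prob(64.0)
```

The minimum gap along this interpolation is $1/\sqrt{2}$ at $t=T/2$, so a modest $T$ already suppresses the transition probability substantially.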
However, from Theorem 1 we see that a poor choice of $H_B$ would make the quantum adiabatic algorithm
fail on this simple decoupled $n$ bit problem by destroying the bit structure.
Next, suppose the satisfiability problem we are trying to solve has clauses involving
say 3 bits. If clause $c$ involves bits $i_c$, $j_c$ and $k_c$ we may define the clause cost
function
\begin{eqnarray*}
h_c(z)=\left\{
\begin{array}{rl}
0 & \quad \textrm{if} \,\, z_{i_c},z_{j_c},z_{k_c} \, \textrm{satisfy clause} \, c, \\
1 & \quad \textrm{otherwise}.
\end{array}
\right.
\end{eqnarray*}
The total cost function is then
\begin{eqnarray*}
h(z) = \sum_c h_c(z).
\end{eqnarray*}
To get $H_B$ to reflect the bit and clause structure we may pick
\begin{eqnarray*}
H_{B,c} = \frac{1}{2}\left[(1-\sigma_x^{(i_c)}) + (1-\sigma_x^{(j_c)}) + (1-\sigma_x^{(k_c)}) \right]
\end{eqnarray*}
with
\begin{eqnarray}
H_B = \sum_c H_{B,c}. \label{clauseham}
\end{eqnarray}
In this case the ground state of $H_B$ is again $\ket{s}$. With this setup, Theorem 1 does not apply.
\begin{figure}
\begin{center}
\includegraphics[width=4.5in]{n15ec.eps}
\end{center}
\caption{Median required run time $T$ versus bit number. At each bit number there are 50 random instances
of Exact Cover with a single satisfying assignment. We choose the required run time to be
the value of $T$ for which the quantum adiabatic algorithm
has success probability between 0.2 and 0.21.
For the projector beginning Hamiltonian we use \eqref{groverstart} with $E=n/2$.
The plot is log-linear. The error bars show the 95\% confidence interval for the true medians.}
\label{loglin}
\end{figure}
We did a numerical study of a particular satisfiability problem, Exact Cover. For this problem
if clause $c$ involves bits $i_c$, $j_c$ and $k_c$, the cost function is
\begin{eqnarray*}
h_c(z)=\left\{
\begin{array}{rl}
0 & \quad \textrm{if} \,\, z_{i_c}+z_{j_c}+z_{k_c}=1, \\
1 & \quad \textrm{otherwise}.
\end{array}
\right.
\end{eqnarray*}
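In code, the Exact Cover cost function is straightforward. The sketch below (a toy hypothetical instance of our own, not one of the random instances used for FIG. 1) counts violated clauses and finds a satisfying assignment by enumeration.

```python
from itertools import product

def exact_cover_cost(z, clauses):
    """h(z): number of clauses whose three bits do not sum to exactly 1."""
    return sum(1 for (i, j, k) in clauses if z[i] + z[j] + z[k] != 1)

# Toy instance on n = 4 bits with two clauses.
clauses = [(0, 1, 2), (1, 2, 3)]
costs = {z: exact_cover_cost(z, clauses) for z in product((0, 1), repeat=4)}
best = min(costs, key=costs.get)   # a satisfying assignment, e.g. (1, 0, 0, 1)
```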
Some data is presented in FIG. 1. Here we see that with a structured beginning Hamiltonian
the required run times are substantially lower than with the projector $H_B$.
\section{Search with a scrambled problem Hamiltonian}
In the previous section we showed that removing all structure from $H_B$ dooms the
quantum adiabatic algorithm to failure. In this section we remove structure from the problem to be
solved ($H_P$) and show that this leads to algorithmic failure. Let $h(z)$ be a cost function
whose minimum we seek. Let $\pi$ be a permutation of $0,1,\dots,N-1$ and let
\[
h^{[\pi]}(z)=h\left(\pi^{-1}(z)\right).
\]
We will show that no continuous time quantum algorithm (of a very general form) can find the minimum
of $h^{[\pi]}$ for even a small fraction of all $\pi$ if $T$ is $o(\sqrt{N})$. Classically, this problem
takes order $N$ calls to an oracle.
Without loss of generality let $h(0)=0$, and $h(1),h(2),\dots,h(N-1)$ all be positive.
For any permutation $\pi$ of $0,1,\dots,N-1$ we define a problem Hamiltonian $H_{P,\pi}$,
diagonal in the $z$ basis, as
\[
H_{P,\pi} = \sum_{z=0}^{N-1} h^{[\pi]}(z) \ket{z}\bra{z}
=\sum_{z=0}^{N-1} h(z) \ket{\pi(z)}\bra{\pi(z)}.
\]
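The scrambling convention is easy to mis-read, so here is a small sketch (our own; the cost values are arbitrary, with $h(0)=0$ the unique minimum) implementing $h^{[\pi]}(z)=h(\pi^{-1}(z))$ and checking that the minimum moves from $0$ to $\pi(0)$.

```python
import random

N = 8
h = [0, 3, 1, 4, 1, 5, 9, 2]       # h(0) = 0, all other values positive
random.seed(1)
pi = list(range(N))
random.shuffle(pi)                 # pi(z) = pi[z]
pi_inv = [0] * N
for z in range(N):
    pi_inv[pi[z]] = z

h_pi = [h[pi_inv[z]] for z in range(N)]   # scrambled cost h(pi^{-1}(z))
```

Scrambling permutes the multiset of cost values, so the minimum value is unchanged, but its location becomes $\pi(0)$.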
Now consider the Hamiltonian
\begin{eqnarray}
H_{\pi}(t)=H_D(t)+c(t) H_{P,\pi} \label{continuoustimealgorithm}
\end{eqnarray}
for an arbitrary $\pi$-independent driving Hamiltonian
$H_D(t)$ with $|c(t)|\leq1$ for all $t$. Using this composite Hamiltonian,
we evolve the $\pi$-independent starting state $\ket{\psi(0)}$ for time $T$, reaching the state
$\ket{\psi_\pi(T)}$. This setup is more general than the quantum adiabatic algorithm since we do not
require $H_D(t)$ or $c(t)$ to be slowly varying.
Success is achieved if the overlap of $\ket{\psi_{\pi}(T)}$
with $\ket{\pi(0)}$ is large.
We first show
\begin{lemma}
\begin{eqnarray}
\sum_{\pi,\pi'} \Big\| \ket{\psi_\pi(T)}-\ket{\psi_{\pi'}(T)} \Big\|^2 \leq 4h^* T N!\sqrt{N-1}, \label{permutationlemma}
\end{eqnarray}
where the sum is over all pairs of permutations $\pi,\pi'$ that differ by a single transposition
involving $\pi(0)$, and $h^*=\sqrt{\sum_z h(z)^2 / (N-1)}$.
\end{lemma}
\begin{proof}
For two different permutations $\pi$ and $\pi'$ let $\ket{\psi_{\pi}(t)}$ be the state obtained
by evolving from $\ket{\psi(0)}$ with $H_{\pi}$ and let $\ket{\psi_{\pi'}(t)}$ be the state obtained
by evolving from $\ket{\psi(0)}$ with $H_{\pi'}$.
Now
\begin{eqnarray*}
\deriv{}{t} \Big\| \ket{\psi_\pi(t)}-\ket{\psi_{\pi'}(t)} \Big\|^2
&=& -\deriv{}{t} \scalar{\psi_{\pi}(t)}{\psi_{\pi'}(t)} + c.c. \\
&=& i \bra{\psi_{\pi}(t)}(H_{\pi}(t)-H_{\pi'}(t))\ket{\psi_{\pi'}(t)} + c.c. \\
&\leq& 2 \Big| \bra{\psi_{\pi}(t)}(H_{\pi}(t)-H_{\pi'}(t))\ket{\psi_{\pi'}(t)} \Big|.
\end{eqnarray*}
We now consider the case when $\pi$ and $\pi'$ differ by a single transposition involving $\pi(0)$.
Specifically, $\pi'=\pi \circ (a\leftrightarrow 0)$ for some $a$.
Now if $\pi(0) = i$ and $\pi(a) =j$, we have $\pi'(0) =j$ and $\pi'(a) =i$.
Therefore, since $h(0)=0$,
\begin{eqnarray*}
H_{\pi}(t)-H_{\pi'}(t) = c(t) h(a) \left( \ket{j}\bra{j} - \ket{i}\bra{i} \right)
= c(t) h(a) \left( \ket{\pi(a)}\bra{\pi(a)} - \ket{\pi'(a)}\bra{\pi'(a)} \right),
\end{eqnarray*}
so we can write
\begin{eqnarray*}
\deriv{}{t} \sum_{\pi,\pi'} \Big\| \ket{\psi_\pi(t)}-\ket{\psi_{\pi'}(t)} \Big\|^2
&\leq&
2 |c(t)| \sum_{\pi,\pi'} h(a) \Big| \bra{\psi_{\pi}(t)}
\left( \ket{\pi(a)}\bra{\pi(a)} - \ket{\pi'(a)}\bra{\pi'(a)} \right)
\ket{\psi_{\pi'}(t)} \Big|.
\end{eqnarray*}
This further simplifies to
\begin{eqnarray*}
\deriv{}{t} \sum_{\pi,\pi'} \Big\| \ket{\psi_\pi(t)}-\ket{\psi_{\pi'}(t)} \Big\|^2
&\leq&
2 \sum_{\pi,\pi'} h(a) \left(
\bigabs{\scalar{\psi_{\pi}(t)}{\pi(a)}} +
\bigabs{\scalar{\pi'(a)}{\psi_{\pi'}(t)}} \right) \\
&=&
2 \sum_{\pi} \sum_{a\neq 0} h(a)
\bigabs{\scalar{\psi_{\pi}(t)}{\pi(a)}} +
2 \sum_{\pi'} \sum_{a\neq 0} h(a)
\bigabs{\scalar{\pi'(a)}{\psi_{\pi'}(t)}} \\
&=&
4 \sum_{\pi} \sum_{a\neq 0} h(a) \bigabs{\scalar{\psi_{\pi}(t)}{\pi(a)}} \\
&=&
4 \sum_{\pi} \sum_{a} h(a) \bigabs{\scalar{\psi_{\pi}(t)}{\pi(a)}} \\
&\leq&
4 \sum_{\pi} \sqrt{\sum_a h(a)^2} = 4h^* N! \sqrt{N-1},
\end{eqnarray*}
where we used the Cauchy--Schwarz inequality to obtain the last line. Integrating this inequality
for time $T$, we obtain the result we wanted to prove,
\begin{eqnarray*}
\sum_{\pi,\pi'} \Big\| \ket{\psi_\pi(T)}-\ket{\psi_{\pi'}(T)} \Big\|^2 &\leq& 4 h^* T N! \sqrt{N-1},
\end{eqnarray*}
where the sum is over $\pi$ and $\pi'$ differing by a single transposition involving $\pi(0)$.
\end{proof}
Next we establish
\begin{lemma}
Suppose $\ket{1}, \ket{2}, \dots, \ket{L}$ are orthonormal vectors and $\bigabs{\scalar{\psi_i}{i}}^2 \geq b$ for
normalized vectors $\ket{\psi_i}$, where $i=1,\dots,L$. Then for any normalized $\ket{\varphi}$,
\begin{eqnarray}
\sum_{i=1}^{L} \bignorm{\ket{\psi_i}-\ket{\varphi}}^2 \geq bL-2\sqrt{L}. \label{vectorlemma}
\end{eqnarray}
\end{lemma}
\begin{proof}
Write
\begin{eqnarray*}
\sum_i \bignorm{\ket{\psi_i}-\ket{\varphi}}^2 &\geq& \sum_i \bigabs{\scalar{i}{\psi_i}-\scalar{i}{\varphi}}^2 \\
&\geq& \sum_i \bigabs{\scalar{i}{\psi_i}}^2 - 2 \sum_i \bigabs{\scalar{i}{\psi_i}} \bigabs{\scalar{i}{\varphi}}
\end{eqnarray*}
and use the Cauchy--Schwarz inequality to obtain
\begin{eqnarray*}
\sum_i \bignorm{\ket{\psi_i}-\ket{\varphi}}^2 &\geq& b L
- 2 \sqrt{\sum_i \bigabs{\scalar{i}{\psi_i}}^2 } \sqrt{\sum_i \bigabs{\scalar{i}{\varphi}}^2} \\
&\geq& bL - 2 \sqrt{L}.
\end{eqnarray*}
\end{proof}
We are now ready to state the main result of this section.
\begin{theorem}
Suppose that a continuous time algorithm of the form \eqref{continuoustimealgorithm}
succeeds with probability at least $b$, i.e.
$\bigabs{\scalar{\psi_{\pi}(T)}{\pi(0)}}^2\geq b$, for a set of $\epsilon N!$ permutations.
Then
\begin{eqnarray}
T\geq \frac{\epsilon^2 b}{16 h^*} \sqrt{N-1} - \frac{\epsilon\sqrt{\epsilon/2}}{4h^*}.
\end{eqnarray}
\end{theorem}
\begin{proof}
For any permutation $\pi$, there are $N-1$ permutations $\pi'_a$ obtained from $\pi$
by first transposing $0$ and $a$. For each $\pi$ let $\mathcal{S}_{\pi}$ be the subset of those
$N-1$ permutations on which the algorithm succeeds with probability at least $b$.
Any such permutation appears in exactly $N-1$ of the sets $\mathcal{S}_{\pi}$ so we have
\[
\sum_\pi \left|\mathcal{S}_\pi\right| = (N-1)\epsilon N!.
\]
Let $M$ be the number of sets $\mathcal{S}_{\pi}$ with $\left|\mathcal{S}_{\pi}\right|\geq\frac{\epsilon}{2}(N-1)$. Now
\begin{eqnarray*}
\sum_{\pi} \left| \mathcal{S}_{\pi} \right| &=& \sum_{\left| \mathcal{S}_{\pi} \right|\geq \frac{\epsilon}{2}(N-1)} \left| \mathcal{S}_{\pi} \right|
+ \sum_{\left| \mathcal{S}_{\pi} \right|< \frac{\epsilon}{2}(N-1)} \left| \mathcal{S}_{\pi} \right| \\
\sum_{\pi} \left| \mathcal{S}_{\pi} \right| &\leq& M(N-1) + (N!-M)\frac{\epsilon}{2}(N-1), \\
(N-1)\epsilon N! &\leq& M(N-1) + N!\frac{\epsilon}{2}(N-1),
\end{eqnarray*}
so $M\geq\frac{\epsilon}{2}N!$, i.e. at least
$\frac{\epsilon}{2} N!$ of the sets $\mathcal{S}_{\pi}$ must contain
at least $\frac{\epsilon}{2}(N-1)$ permutations on which the algorithm succeeds with probability at least $b$.
For the corresponding $\pi$, we have
\[
\sum_{\pi'_a} \bignorm{\ket{\psi_\pi (T)}-\ket{\psi_{\pi'_a} (T)}}^2
\geq b \frac{\epsilon}{2}(N-1) - 2\sqrt{\frac{\epsilon}{2}(N-1)}
\]
by Lemma 2. (Note that the algorithm is not assumed to succeed with probability $b$ on $\pi$.)
Since there are at least $\frac{\epsilon}{2}N!$ such $\pi$,
\begin{eqnarray*}
\sum_{\pi,\pi'} \bignorm{\ket{\psi_\pi (T)}-\ket{\psi_{\pi'}(T)}}^2
\geq \frac{\epsilon}{2}N! \left(b \frac{\epsilon}{2}(N-1) - 2\sqrt{\frac{\epsilon}{2}(N-1)}\right),
\end{eqnarray*}
where the sum is over all permutations $\pi$ and $\pi'$ which differ by a single transposition involving $\pi(0)$.
Combining this with Lemma 1 we obtain
\begin{eqnarray*}
T\geq \frac{\epsilon^2 b}{16 h^*} \sqrt{N-1} - \frac{\epsilon\sqrt{\epsilon/2}}{4 h^*},
\end{eqnarray*}
which is what we wanted to prove.
\end{proof}
What we have just shown is that no continuous time algorithm of the form
\eqref{continuoustimealgorithm} can find the minimum of $H_{P,\pi}$ with a
constant success probability, even for a fraction $\epsilon$ of all permutations $\pi$,
if $T$ is $o(\sqrt{N})$.
A typical permutation $\pi$ yields an $H_{P,\pi}$ with no structure relevant to
any fixed $H_D$ and the algorithm can not find the ground state of $H_{P,\pi}$ efficiently.
\begin{figure}
\begin{center}
\includegraphics[width=4.5in]{n18sd.eps}
\end{center}
\caption{The scaled ground state energy $E/n$ for a quantum adiabatic algorithm Hamiltonian
of a decoupled problem. The lowest curve corresponds to the original decoupled problem.
The upper ``triangular'' curves correspond to single instances of
the $n$-bit decoupled problem, where the problem Hamiltonian was scrambled.}
\label{triangle}
\end{figure}
To illustrate the nature of this failure for the quantum adiabatic algorithm
for a typical permutation, consider again the decoupled
$n$ bit problem with $h(z)$ given by \eqref{decoupledcost} and $H_B$ given by \eqref{sumSX}.
The lowest curve in FIG. 2 shows the ground state energy divided by $n$ as a function of $t$.
(Since the system is decoupled this is actually the ground state energy of a single qubit.)
We then consider the $n$ bit scrambled problem for different values of $n$. At each $n$ we pick
a single random permutation $\pi$ of $0,\dots,(2^n-1)$ and apply it to obtain a cost
function $h(\pi^{-1}(z))$ while keeping $H_B$ fixed.
The ground state energy divided by $n$ is now plotted for $n=9,12,15$ and $18$.
From these scrambled problems it is clear that if we let $n$ get large the typical curves
will approach a triangle with a discontinuous first derivative at $t=T/2$. For large $n$, the
ground state changes dramatically as $t$ passes through $T/2$. In order to keep the quantum system
in the ground state we need to go very slowly near $t=T/2$ and this results in a long required run time.
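The ``triangle'' picture can be reproduced at small sizes by exact diagonalization. The sketch below (our own, with $n=8$ and a single random permutation; the grid resolution is arbitrary) computes the scaled ground state energy $E_0(s)/n$ of $H(s)=(1-s)H_B+sH_P$ for the scrambled decoupled problem: the curve vanishes at both endpoints and is macroscopically large near $s=1/2$.

```python
import numpy as np

n = 8
dim = 2 ** n
rng = np.random.default_rng(0)

# Decoupled cost h(z) = z_1 + ... + z_n, scrambled by a random permutation pi.
h = np.array([bin(z).count("1") for z in range(dim)], dtype=float)
pi = rng.permutation(dim)
h_scrambled = h[np.argsort(pi)]          # h(pi^{-1}(z))

# H_B = sum_j (1 - sigma_x^{(j)})/2 on bit strings: diagonal n/2,
# off-diagonal -1/2 between strings differing in exactly one bit.
H_B = np.zeros((dim, dim))
for z in range(dim):
    H_B[z, z] = n / 2
    for j in range(n):
        H_B[z, z ^ (1 << j)] = -0.5

def ground_energy(s):
    H = (1 - s) * H_B + s * np.diag(h_scrambled)
    return np.linalg.eigvalsh(H)[0]

s_grid = np.linspace(0.0, 1.0, 11)
curve = [ground_energy(s) / n for s in s_grid]
```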
\section{Conclusions}
In this paper we have two main results about the performance of the quantum adiabatic algorithm when used
to find the minimum of a classical cost function $h(z)$ with $z=0,\dots,N-1$. Theorem 1 says that
for any cost function $h(z)$, if the beginning Hamiltonian is a one dimensional projector onto the
uniform superposition of all the $\ket{z}$ basis states, the algorithm will not find the minimum
of $h$ if $T$ is less than order $\sqrt{N}$. This is true regardless of how simple it is to
classically find the minimum of $h(z)$.
In Theorem 2 we start with any beginning Hamiltonian and classical cost function $h$. Replacing $h(z)$
by a scrambled version, i.e. $h^{[\pi]}(z)=h(\pi^{-1}(z))$ with $\pi$ a permutation of $0$ to $N-1$,
will make it impossible for the algorithm to find the minimum of $h^{[\pi]}$ in time less than
order $\sqrt{N}$ for a typical permutation $\pi$.
For example, suppose we have a cost function $h(z)$ and have chosen $H_B$ so that
the quantum algorithm finds the minimum in time of order $\log N$. Even so, scrambling
the cost function results in algorithmic failure.
These results do not imply anything about the more interesting case where $H_B$
and $H_P$ are structured, i.e., sums of terms each operating only on several qubits.
\section*{Acknowledgements}
The authors gratefully acknowledge support from the National Security Agency (NSA) and Advanced Research
and Development Activity (ARDA) under Army Research Office (ARO) contract W911NF-04-1-0216.
\section{Introduction}
\emph{Submodular functions} have a wide variety of applications in combinatorial optimization, economics, communication, and machine learning~\cite{Fujishige2005,Krause2014survey}.
A set function $f: 2^V \to \mathbb{R}$ on a ground set $V$ is called a \emph{submodular function} if it satisfies $f(X) + f(Y) \geq f(X \cup Y) + f(X \cap Y)$ for all $X, Y \subseteq V$.
Equivalently, $f$ is submodular if it satisfies the \emph{diminishing return property}: $f(X \cup \{j\}) - f(X) \geq f(Y \cup \{j\}) - f(Y)$ for all $X \subseteq Y$ and $j \in V \setminus Y$.
In the last two decades, \emph{submodular maximization} has been studied extensively in theoretical computer science~\cite{Calinescu2011,Buchbinder2015}, machine learning~\cite{Krause2014survey}, and viral marketing~\cite{Kempe2003}.
Although submodular maximization is NP-hard in general, constant-factor approximation algorithms have been devised for various constraints~\cite{Calinescu2011,Buchbinder2015}.
Recently, the paradigm of \emph{``optimization as a process''} has been proposed in the context of \emph{online learning}~\cite{Hazan2016book,Cesa2006}.
The goal of online learning is making a better decision in the face of uncertainty.
Formally, let us consider the following repeated two-player game between a player and an adversary.
In each round $t$ ($t \in [T] := \{1,\dots, T\}$), the player must select an action $x_t \in K$ (possibly in a randomized manner).
After the choice of $x_t$, the adversary reveals the reward function $f_t: K \to [0,1]$ for that round, and the player gains $f_t(x_t)$.
The performance metric of the player's algorithm is the \emph{regret}:
\begin{align}
\regret(f_1, \dots, f_T) = \max_{x \in K}\sum_{t \in [T]} f_t(x) - \sum_{t \in [T]} f_t(x_t).
\end{align}
That is, the regret is the difference between the player's total gain and the gain of the best fixed action in hindsight.
A player's algorithm is said to be \emph{no regret} if the expectation of the regret is sublinear: $\mathbf{E}[\regret(f_1,\dots,f_T)] = o(T)$, where the expectation is taken under the randomness in the player.
\emph{Online submodular maximization} is an online learning problem in which the action set is a set family $\mathcal{C} \subseteq 2^V$ and the reward functions $f_t$ are submodular functions on $V$.
Since submodular maximization is NP-hard even in the offline setting, it is reasonable to relax the definition of the regret to the \emph{$\alpha$-regret}:
\begin{align}
\regret_\alpha(f_1, \dots, f_T) = \alpha\max_{X \in \mathcal{C}}\sum_{t \in [T]} f_t(X) - \sum_{t \in [T]} f_t(X_t),
\end{align}
where $\alpha > 0$ is a constant.
Intuitively, $\alpha$ corresponds to the offline approximation ratio.
A player's algorithm is said to be \emph{no $\alpha$-regret} if $\mathbf{E}[\regret_\alpha(f_1, \dots, f_T)] = o(T)$.
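To make the definitions concrete, the following sketch (toy numbers of our own, with two actions and three rounds) computes the $\alpha$-regret directly from its definition; $\alpha=1$ recovers the ordinary regret.

```python
def alpha_regret(rewards, played, alpha=1.0):
    """rewards: per-round dicts {action: f_t(action)}; played: chosen actions."""
    actions = rewards[0].keys()
    best_fixed = max(sum(f[a] for f in rewards) for a in actions)
    gained = sum(f[a] for f, a in zip(rewards, played))
    return alpha * best_fixed - gained

rewards = [{"X": 1.0, "Y": 0.0}, {"X": 0.0, "Y": 1.0}, {"X": 1.0, "Y": 0.0}]
r1 = alpha_regret(rewards, ["Y", "X", "Y"])                  # 1 * 2 - 0 = 2
r_half = alpha_regret(rewards, ["Y", "X", "Y"], alpha=0.5)   # 0.5 * 2 - 0 = 1
```

Note that the $\alpha$-regret can be negative when the player outperforms $\alpha$ times the best fixed action.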
Streeter and Golovin~\cite{Streeter2009} presented the first no $(1-1/e)$-regret algorithm for online monotone submodular maximization under a cardinality constraint ($\mathcal{C}$ is the set of subsets satisfying the cardinality constraint and $f_t$ are monotone submodular functions).
Golovin, Streeter, and Krause~\cite{Golovin2014} extended this algorithm to a matroid constraint, generalizing a well-known \emph{continuous greedy algorithm}~\cite{Calinescu2011}.
Recently, Roughgarden and Wang~\cite{Roughgarden2018} proposed a no $1/2$-regret algorithm for (unconstrained) online nonmonotone submodular maximization.
Their algorithm is based on the \emph{double greedy algorithm}~\cite{Buchbinder2015};
at its core, they designed an online learning algorithm with two actions with a stronger regret guarantee.
\subsection{Our contribution}
\begin{figure}[t]
\centering
\begin{tcolorbox}[boxrule=0pt,toprule=.5pt,bottomrule=.5pt,sharp corners]
\begin{algorithmic}
\FOR{$t = 1, \dots, T$}
\STATE A player (randomly) plays $\mathbf{x}_t \in (k+1)^V$.
\STATE An adversary reveals a $k$-submodular function $f_t: (k+1)^V \to [0,1]$ to the player as a value oracle.
\STATE The player gains reward $f_t(\mathbf{x}_t)$.
\ENDFOR
\end{algorithmic}
\end{tcolorbox}
\caption{The online $k$-submodular maximization protocol.}
\end{figure}
This paper examines online maximization of \emph{$k$-submodular} functions.
$k$-submodular functions are generalizations of submodularity and bisubmodularity, introduced by Huber and Kolmogorov~\cite{Huber2012}.
Formally, $k$-submodular functions are defined on $(k+1)^V = \{0,1,\dots, k\}^V$.
A function $f : (k+1)^V \to \mathbb{R}$ is $k$-submodular if for any $\mathbf{x}, \mathbf{y} \in (k+1)^V$, $f(\mathbf{x}) + f(\mathbf{y}) \geq f(\mathbf{x} \sqcup \mathbf{y}) + f(\mathbf{x} \sqcap \mathbf{y})$, where $\sqcup$ and $\sqcap$ are generalized ``union'' and ``intersection'' in $(k+1)^V$, respectively (see Section~\ref{sec:pre} for the formal definition).
Indeed, if $k = 1, 2$, $k$-submodularity is equivalent to submodularity and bisubmodularity, respectively.
The concepts of bisubmodularity and $k$-submodularity have numerous applications in valued CSP, delta matroids, generalized influence maximization, and image segmentation~\cite{Huber2012,Fujishige2005,Fujishige2005b,Ohsaka2015,Hirai2017}.
For offline $k$-submodular maximization, Iwata, Tanigawa, and Yoshida~\cite{Iwata2016} gave a $1/2$-approximation algorithm.
The approximation ratio is tight even for $k=1$, i.e., submodular maximization~\cite{Feige2011}.
They also devised a $\frac{k}{2k-1}$-approximation algorithm for \emph{monotone} $k$-submodular maximization, and this approximation ratio is asymptotically tight.
The main results of this paper are as follows:
\begin{itemize}
\item For online $k$-submodular maximization, we devise a polynomial-time algorithm whose expected $1/2$-regret is bounded by $O(nk\sqrt{T})$, where $n = \abs{V}$.
This result generalizes the previous algorithm of Roughgarden and Wang~\cite{Roughgarden2018} for online submodular maximization.
\item For online monotone $k$-submodular maximization, we present a polynomial-time algorithm whose expected $\frac{k}{2k-1}$-regret is $O(nk\sqrt{T})$.
\end{itemize}
To extend the algorithm of \cite{Iwata2016} to the online setting, we must consider an auxiliary online learning problem, which we call a \emph{$k$-submodular selection game}.
We show that it is sufficient to design an online algorithm for $k$-submodular selection games with a stronger regret guarantee, which is not obtained by using a standard online learning algorithm such as multiplicative weight update~\cite{Arora2012a}.
To this end, we exploit \emph{Blackwell's approachability theorem}\footnote{The possibility of using Blackwell's approachability theorem was mentioned in Roughgarden and Wang~\cite{Roughgarden2018} without detail in a footnote. They designed an alternative algorithm for a similar problem without using Blackwell's theorem.}~\cite{Blackwell1956} and \emph{online linear optimization (OLO)}.
The Blackwell approachability theorem is a powerful generalization of von Neumann's minimax theorem for finite two-player games.
In the online learning literature, the Blackwell approachability theory has been exploited to demonstrate the existence of no-regret algorithms for various problems, such as online learning with the internal and generalized regret, and well-calibrated forecasters (see \cite{Cesa2006} and references therein).
We exploit the Blackwell approachability theorem to design an algorithm with the desired stronger regret guarantee.
To obtain a concrete regret bound, we use a beautiful duality result between approachability and OLO~\cite{Abernethy2011}.
More precisely, we use their framework to obtain an online algorithm for $k$-submodular selection games by converting an OLO algorithm.
To demonstrate the flexibility of our approach based on Blackwell's theorem, we show that the algorithm for the nonmonotone case can be easily modified for the monotone case with the stronger approximation ratio $\frac{k}{2k-1}$.
Furthermore, our algorithm and analysis work even for an \emph{adaptive adversary}.
An \emph{oblivious adversary} fixes $f_t$ ($t \in [T]$) before the first round, whereas an adaptive adversary can select $f_t$ after seeing $\mathbf{x}_t$.
Since our approach is conceptually simpler than previous work~\cite{Roughgarden2018}, it almost immediately extends to an adaptive adversary.
\subsection{Related work}
An important special case of $k$-submodular functions is the \emph{bisubmodular} function.
Singh, Guillory, and Bilmes~\cite{Singh12} studied maximizing a bisubmodular function\footnote{Note that they used different terminology, \emph{directed bisubmodular} functions, to describe such functions.}.
General $k$-submodular maximization was first studied by Ward and \v{Z}ivn\'{y}~\cite{Ward2016}.
They devised a $1/(1+\sqrt{k/2})$-approximation algorithm for $k$-submodular maximization.
Iwata, Tanigawa, and Yoshida~\cite{Iwata2016} presented a randomized algorithm with an improved and tight approximation factor of $1/2$ for $k$-submodular maximization.
A derandomized version of their algorithm was developed by Oshima~\cite{Oshima2018}.
Ohsaka and Yoshida~\cite{Ohsaka2015} studied monotone $k$-submodular maximization under a cardinality constraint.
Later, Sakaue~\cite{Sakaue2017} generalized it to a matroid constraint.
Online learning of discrete structure is called \emph{online structured learning}.
Efficient online algorithms were developed for various discrete structures, such as shortest paths and matroid basis~\cite{Takimoto2003,Suehiro2012}.
Most of these studies focused on optimizing \emph{linear} reward/loss functions (under a constraint), whereas our paper studies \emph{nonlinear} functions (without constraint).
\subsection{Organization}
The remainder of this paper is organized as follows.
Section~\ref{sec:pre} introduces $k$-submodularity, Blackwell's approachability theorem, and OLO.
Section~\ref{sec:nonmonotone} describes our algorithm for online $k$-submodular maximization along with $k$-submodular selection games.
Section~\ref{sec:monotone} presents our algorithm for online monotone $k$-submodular maximization.
\section{Preliminaries}\label{sec:pre}
\subsection{Notation}
For a positive integer $n$, we denote the set $\{1, \dots, n\}$ by $[n]$.
The probability simplex in $\mathbb{R}^k$ is denoted by $\Delta_k$.
The sets of nonnegative and nonpositive reals are denoted by $\mathbb{R}_+$ and $\mathbb{R}_-$, respectively.
The Euclidean norm is denoted by $\norm{\cdot}$.
The $j$th standard unit vector is denoted by $\mathbf{e}_j$.
The distance between a point $\mathbf{x}$ and a set $S$ is defined as $\dist(\mathbf{x}, S) := \inf_{\mathbf{y} \in S} \norm{\mathbf{x} - \mathbf{y}}$.
The orthogonal projection of a point $\mathbf{x}$ onto a set $S$ is denoted by $\proj_S(\mathbf{x})$.
\subsection{$k$-submodular functions}
Let $k$ be a positive integer.
Throughout the paper, let $V = [n]$ be a ground set.
Define $(k+1)^V = \{0,1,\dots, k\}^V$.
For $\mathbf{x} \in (k+1)^V$, we denote $\supp(\mathbf{x}) = \{ j \in V : x(j) \neq 0 \}$.
For a function $f : (k+1)^V \to \mathbb{R}$, $\mathbf{x} \in (k+1)^V$, and $j \notin \supp(\mathbf{x})$, we define
\begin{align}
\Delta_{j,i}f(\mathbf{x}) := f(\mathbf{x} + i\mathbf{e}_j) - f(\mathbf{x}),
\end{align}
where $\mathbf{x} + i\mathbf{e}_j$ is a vector obtained by setting the $j$th entry of $\mathbf{x}$ to $i$.
Since $x(j) = 0$, this is the standard addition in $\mathbb{R}^V$.
Let us define a binary operator $\sqcup$ and $\sqcap$ on $\{0, 1, \dots, k\}$ as
\begin{align}
i \sqcup i' &=
\begin{cases}
\max\{i, i'\} & \text{if either $i = 0$, $i' = 0$ or $i = i'$} \\
0 & \text{otherwise}
\end{cases} \\
i \sqcap i' &=
\begin{cases}
\min\{i, i'\} & \text{if either $i = 0$, $i' = 0$ or $i = i'$} \\
0 & \text{otherwise}
\end{cases}
\end{align}
We extend these binary operations to $(k+1)^V$ so that the operations are applied entry-wise:
for $\mathbf{x}, \mathbf{y} \in (k+1)^V$, define $\mathbf{x} \sqcup \mathbf{y}, \mathbf{x} \sqcap \mathbf{y} \in (k+1)^V$ as
\begin{align}
(\mathbf{x} \sqcup \mathbf{y})(j) &= x(j) \sqcup y(j) \quad (j \in V) \\
(\mathbf{x} \sqcap \mathbf{y})(j) &= x(j) \sqcap y(j) \quad (j \in V) .
\end{align}
A function $f : (k+1)^V \to \mathbb{R}$ is \emph{$k$-submodular} if
\begin{align}\label{eq:k-submod}
f(\mathbf{x}) + f(\mathbf{y}) \geq f(\mathbf{x} \sqcup \mathbf{y}) + f(\mathbf{x} \sqcap \mathbf{y})
\end{align}
for arbitrary $\mathbf{x}, \mathbf{y} \in (k+1)^V$.
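For intuition (this sketch and the toy function are our own illustration, not part of the paper), the operations $\sqcup$, $\sqcap$ and the inequality \eqref{eq:k-submod} can be checked by brute force on a small ground set:

```python
# Entry-wise join/meet on {0,...,k}^V and a brute-force test of the
# k-submodular inequality; the function f below is a toy example.
from itertools import product

def join(i, ip):
    # i ⊔ i': the max if one is 0 or they agree, and 0 on a conflict
    return max(i, ip) if (i == 0 or ip == 0 or i == ip) else 0

def meet(i, ip):
    # i ⊓ i': the min if one is 0 or they agree, and 0 on a conflict
    return min(i, ip) if (i == 0 or ip == 0 or i == ip) else 0

def vec_join(x, y):
    return tuple(join(a, b) for a, b in zip(x, y))

def vec_meet(x, y):
    return tuple(meet(a, b) for a, b in zip(x, y))

def is_k_submodular(f, n, k):
    """Check f(x) + f(y) >= f(x ⊔ y) + f(x ⊓ y) over the whole domain."""
    dom = list(product(range(k + 1), repeat=n))
    return all(f(x) + f(y) >= f(vec_join(x, y)) + f(vec_meet(x, y))
               for x in dom for y in dom)

# the support-size function is k-submodular (a coordinate-wise check
# suffices, since conflicts lose two units on the left-hand side)
f = lambda x: sum(1 for v in x if v != 0)
print(is_k_submodular(f, n=3, k=2))  # prints True
```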
Define a partial order on $(k+1)^V$ by $\mathbf{x} \leq \mathbf{y}$ if $\mathbf{x} \sqcap \mathbf{y} = \mathbf{x}$.
Ward and \v{Z}ivn\'{y}~\cite{Ward2016} showed that $k$-submodularity is equivalent to the following two conditions:
\begin{description}
\item[Pairwise monotonicity] $\Delta_{j,i}f(\mathbf{x}) + \Delta_{j,i'}f(\mathbf{x}) \geq 0$ for $i \neq i'$, $\mathbf{x} \in (k+1)^V$, and $j \notin \supp(\mathbf{x})$.
\item[Orthant submodularity] $\Delta_{j,i}f(\mathbf{x}) \geq \Delta_{j,i}f(\mathbf{y})$ for $i \in [k]$, $\mathbf{x}\leq \mathbf{y}$, and $j \notin \supp(\mathbf{y})$.
\end{description}
We say that $f : (k+1)^V \to \mathbb{R}$ is \emph{monotone} if $f(\mathbf{x}) \leq f(\mathbf{y})$ for arbitrary $\mathbf{x} \leq \mathbf{y}$.
A vector $\mathbf{x} \in (k+1)^V$ can be regarded as a $k$-subpartition of $V$.
That is, $(k+1)^V$ can be regarded as the set of $(X_1, \dots, X_k)$ ($X_i \subseteq V$, $X_i \cap X_{i'} = \emptyset$ if $i \neq i'$).
The correspondence is given by $x(j) = i$ if and only if $j \in X_i$ (with the convention that $x(j)=0$ if and only if $j$ belongs to none of the $X_i$).
For $k = 1$, $k$-submodularity \eqref{eq:k-submod} is equivalent to submodularity, $f(X) + f(Y) \geq f(X \cup Y) + f(X \cap Y)$ for $X,Y \in 2^V$.
For $k = 2$, it is equivalent to bisubmodularity~\cite{Fujishige2005},
\begin{align}
f(X_1, X_2) + f(Y_1, Y_2) \geq f((X_1\cup Y_1) \setminus (X_1 \cap Y_1), (X_2\cup Y_2)\setminus (X_2\cap Y_2)) + f(X_1 \cap Y_1, X_2 \cap Y_2),
\end{align}
for $(X_1, X_2), (Y_1, Y_2) \in 3^V$.
Ward and \v{Z}ivn\'{y}~\cite{Ward2016} also showed that a submodular function $g: 2^V \to \mathbb{R}_+$ can be embedded into a bisubmodular function $f:3^V \to \mathbb{R}_+$ as
\begin{align}\label{eq:embed}
f(S,T) = g(S) + g(V\setminus T) - g(T)
\end{align}
preserving the approximation ratio.
That is, an $\alpha$-approximate maximizer of $f$ corresponds to an $\alpha$-approximate maximizer of $g$, for arbitrary $\alpha > 0$.
This embedding demonstrates that our algorithm for online $k$-submodular maximization corresponds to the algorithm of \cite{Roughgarden2018} for online submodular maximization.
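As a quick sanity check (our own toy example, not from the paper), the embedding \eqref{eq:embed} can be verified numerically for a small nonnegative submodular $g$:

```python
# Verify f(S, T) = g(S) + g(V \ T) - g(T) numerically for a toy
# nonnegative submodular g; a maximizer of f yields a maximizer of g.
from itertools import product

V = frozenset({0, 1, 2})

def g(S):
    # min(|S|, 2): a concave function of |S|, hence submodular
    return min(len(S), 2)

def f(S, T):
    return g(S) + g(V - T) - g(T)

# the domain 3^V: all ordered pairs of disjoint subsets (S, T)
pairs = []
for labels in product((0, 1, 2), repeat=len(V)):
    S = frozenset(v for v, l in zip(sorted(V), labels) if l == 1)
    T = frozenset(v for v, l in zip(sorted(V), labels) if l == 2)
    pairs.append((S, T))

best_S, best_T = max(pairs, key=lambda p: f(*p))
print(g(best_S) == max(g(S) for S, _ in pairs))  # prints True
```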
A useful fact about $k$-submodular maximization is that there always exists a maximizer corresponding to a partition of $V$.
\begin{lemma}[\cite{Ward2016}]\label{lem:k-submod-partition}
Let $k \geq 2$.
For any $k$-submodular function $f$, there exists $\mathbf{o} \in \argmax_{\mathbf{x} \in (k+1)^V} f(\mathbf{x})$ such that $\supp(\mathbf{o}) = V$.
\end{lemma}
\subsection{Blackwell's approachability theorem}
The celebrated Blackwell approachability theorem~\cite{Blackwell1956} is a powerful generalization of the von Neumann minimax theorem for two-player zero-sum games.
Our presentation mostly follows \cite{Abernethy2011}.
Let $X \subseteq \mathbb{R}^m$ and $Y \subseteq \mathbb{R}^n$ be convex sets.
Let $\boldsymbol{\ell} : X \times Y \to \mathbb{R}^k$ be a biaffine function, i.e., $\boldsymbol{\ell}(\cdot, \mathbf{y})$ is affine for any $\mathbf{y} \in Y$ and $\boldsymbol{\ell}(\mathbf{x}, \cdot)$ is affine for any $\mathbf{x} \in X$.
Let $S \subseteq \mathbb{R}^k$ be a closed convex set.
We call a tuple $(X,Y,\boldsymbol{\ell},S)$ a \emph{Blackwell instance}.
We say that:
\begin{itemize}
\item $S$ is \emph{satisfiable} if $\exists \mathbf{x} \in X \forall \mathbf{y} \in Y: \boldsymbol{\ell}(\mathbf{x},\mathbf{y}) \in S$.
\item $S$ is \emph{response-satisfiable} if $\forall \mathbf{y} \in Y \exists \mathbf{x} \in X: \boldsymbol{\ell}(\mathbf{x},\mathbf{y}) \in S$.
\item $S$ is \emph{halfspace-satisfiable} if every halfspace $H$ containing $S$ is satisfiable.
\item $S$ is \emph{approachable} if there exists a sequence $(\mathbf{x}_t)_{t \in [T]} \subseteq X$ such that for any sequence $(\mathbf{y}_t)_{t \in [T]} \subseteq Y$, $\dist\left(\frac{1}{T}\sum_{t \in [T]} \boldsymbol{\ell}(\mathbf{x}_t, \mathbf{y}_t), S \right) \to 0$ as $T \to \infty$.
\end{itemize}
\begin{theorem}[{The Blackwell approachability theorem~\cite{Blackwell1956}}]\label{thm:Blackwell}
For a Blackwell instance $(X,Y,\boldsymbol{\ell},S)$, the following conditions are equivalent:
\begin{enumerate}
\item $S$ is approachable.
\item $S$ is halfspace-satisfiable.
\item $S$ is response-satisfiable.
\end{enumerate}
\end{theorem}
A \emph{halfspace oracle} $\mathcal{O}$ is an oracle that takes a halfspace $H$ with $S \subseteq H$ as input and returns $\mathcal{O}(H) = \mathbf{x}_H \in X$.
A halfspace oracle is said to be \emph{valid} if $\boldsymbol{\ell}(\mathbf{x}_H, \mathbf{y}) \in H$ for any $\mathbf{y} \in Y$.
Note that the existence of a valid halfspace oracle is equivalent to the halfspace-satisfiability of $S$.
Even if a valid halfspace oracle exists, its efficient computation depends on the geometry of the feasible regions $X$ and $Y$.
If $X$ and $Y$ are polytopes, then a halfspace oracle can be constructed by linear programming (LP) as follows.
Let $H := \{\mathbf{z} : \boldsymbol{\theta}^\top \mathbf{z} \geq \beta \}$ be a halfspace.
Since $\boldsymbol{\ell}$ is biaffine, $\boldsymbol{\theta}^\top \boldsymbol{\ell}(\mathbf{x},\mathbf{y}) = \mathbf{x}^\top P \mathbf{y} + \mathbf{b}^\top\mathbf{y} + c$ for some matrix $P$, a vector $\mathbf{b}$, and a constant $c$.
For computing a valid halfspace oracle, we can assume that $c = 0$ without loss of generality.
Then, any $\mathbf{x}_H \in \argmax_{\mathbf{x} \in X} \min_{\mathbf{y} \in Y} (\mathbf{x}^\top P \mathbf{y} + \mathbf{b}^\top\mathbf{y})$ is a valid response of a halfspace oracle.
Let $Y = \{\mathbf{y} : A\mathbf{y} \geq \mathbf{c}\}$.
By the LP duality, the inner minimization $\min_{\mathbf{y} \in Y} (P^\top\mathbf{x}+\mathbf{b})^\top \mathbf{y}$ is equivalent to the following dual:
\begin{align}
\max \mathbf{c}^\top \mathbf{q} & \quad \text{s.t.} \quad A^\top \mathbf{q} = P^\top\mathbf{x} + \mathbf{b},\, \mathbf{q} \geq \mathbf{0}.
\end{align}
Since $X$ is also a polytope, after adding a constraint $\mathbf{x} \in X$, we still have an LP.
\subsubsection{Online linear optimization and approachability}
The beauty of Blackwell's approachability theory is that it provides an algorithm for finding an approaching sequence, given a valid halfspace oracle.
Abernethy and Hazan~\cite{Abernethy2011} connected the approachability and OLO.
In OLO, we are given a fixed compact convex set $K \subseteq \mathbb{R}^k$.
In each $t$th round of OLO, a player selects $\mathbf{x}_t \in K$.
Then an adversary reveals a vector $\mathbf{f}_t$ such that $\max_{\mathbf{x} \in K} \abs{\mathbf{f}_t^\top \mathbf{x}}\leq 1$.
The goal of the player is to minimize the regret:
\begin{align}
\regret(\mathbf{f}_1, \dots, \mathbf{f}_T) = \sum_{t \in [T]} \mathbf{f}_t^\top \mathbf{x}_t - \min_{\mathbf{x} \in K} \sum_{t \in [T]} \mathbf{f}_t^\top \mathbf{x}
\end{align}
Abernethy and Hazan devised an elegant algorithm for approachability, given a valid halfspace oracle $\mathcal{O}$ and an algorithm $\mathcal{A}$ for OLO, under the assumption that $S$ is a cone.
\begin{theorem}[Abernethy and Hazan~\cite{Abernethy2011}]\label{thm:Abernethy2011}
Given a valid halfspace oracle $\mathcal{O}$, a value oracle of $\boldsymbol{\ell}$, a cone $S$, and an OLO algorithm $\mathcal{A}$ on the polar cone $S^\circ$, there exists an algorithm $\mathcal{B}$ that given a sequence $(\mathbf{y}_t)_{t\in[T]}$, computes a sequence $(\mathbf{x}_t)_{t \in [T]}$ satisfying
\begin{align}
\dist\left( \frac{1}{T}\sum_{t\in[T]} \boldsymbol{\ell}(\mathbf{x}_t,\mathbf{y}_t), S \right) \leq \frac{1}{T}\regret_\mathcal{A}(\mathbf{f}_1, \dots, \mathbf{f}_T),
\end{align}
where $\mathbf{x}_t = \mathcal{B}(\mathbf{y}_1, \dots, \mathbf{y}_{t-1})$ and $\mathbf{f}_t = - \boldsymbol{\ell}(\mathbf{x}_t,\mathbf{y}_t)$ ($t \in [T]$).
\end{theorem}
We use \emph{online gradient descent}~\cite{Zinkevich2003} as a standard OLO algorithm; see Algorithm~\ref{alg:OGD} for the details.
\begin{algorithm}[t]
\caption{Online Gradient Descent for OLO~\cite{Zinkevich2003}}\label{alg:OGD}
\begin{algorithmic}
\REQUIRE{a compact convex set $K \subseteq \mathbb{R}^k$ and learning rate $\eta > 0$.}
\STATE Let $\mathbf{x}_0 \in K$ be an arbitrary point.
\FOR{$t \in [T]$}
\STATE Play $\mathbf{x}_t$ and observe $\mathbf{f}_t$.
\STATE Let $\mathbf{y}_{t+1} = \mathbf{x}_t - \eta\mathbf{f}_t$ and $\mathbf{x}_{t+1} = \proj_K(\mathbf{y}_{t+1})$.
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{theorem}[Zinkevich~\cite{Zinkevich2003}]\label{thm:OGD}
Online gradient descent with learning rate $\eta > 0$ satisfies
\begin{align}
\regret(\mathbf{f}_1, \dots, \mathbf{f}_T) \leq \frac{1}{\eta} D^2 + \eta \sum_{t\in[T]} \norm{\mathbf{f}_t}^2,
\end{align}
where $D$ is the diameter of $K$.
\end{theorem}
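To make Algorithm~\ref{alg:OGD} concrete, here is a plain-Python sketch. Everything below the function definitions is a toy instance of our own choosing (not from the paper): $K$ is the Euclidean unit ball, so the projection is a simple rescaling, and the adversary plays alternating linear losses.

```python
# A pure-Python sketch of online gradient descent on the unit ball.
import math

def proj_unit_ball(y):
    # projection onto K = {x : ||x|| <= 1}
    nrm = math.sqrt(sum(v * v for v in y))
    return [v / nrm for v in y] if nrm > 1 else list(y)

def ogd(fs, k, eta):
    """Play x_t, observe f_t, update x_{t+1} = proj_K(x_t - eta * f_t)."""
    x = [0.0] * k
    plays = []
    for f in fs:
        plays.append(list(x))
        x = proj_unit_ball([xi - eta * fi for xi, fi in zip(x, f)])
    return plays

T, k = 400, 2
fs = [[1.0, 0.0] if t % 2 == 0 else [-1.0, 0.0] for t in range(T)]
eta = 1.0 / math.sqrt(T)
plays = ogd(fs, k, eta)
loss = sum(sum(fi * xi for fi, xi in zip(f, x)) for f, x in zip(fs, plays))
# the losses sum to zero, so the best fixed point achieves 0 and the
# regret equals the cumulative loss; here it is sqrt(T)/2 = 10,
# consistent with the O(sqrt(T)) bound of the theorem
regret = loss - 0.0
print(regret)
```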
\section{No $1/2$-regret algorithm for $k$-submodular maximization}\label{sec:nonmonotone}
In this section, we present our algorithm for online $k$-submodular maximization.
\subsection{$k$-submodular selection game}
Let us consider the following online learning problem, which we call a \emph{$k$-submodular selection game}.
In the $t$th round of the game, a player predicts a probability vector $\mathbf{p}_t \in \Delta_k$.
An adversary's play is $\mathbf{y}_t = (\mathbf{a}_t, \mathbf{b}_t) \in Y$, where $Y$ is the set of $(\mathbf{a}, \mathbf{b}) \in [-1,1]^k \times [-1,1]^k$ such that
\begin{align*}
a(i) + a(i') &\geq 0 && (i \neq i')\\
b(i) + b(i') &\geq 0 && (i \neq i')\\
b(i) &\geq a(i) && (i \in [k]).
\end{align*}
The feedback to the player is only $\mathbf{b}_t$; the vector $\mathbf{a}_t$ is not revealed.
For a fixed $\mathbf{b}$, we denote $Y(\mathbf{b}) = \{\mathbf{a} \in [-1,1]^k : (\mathbf{a}, \mathbf{b}) \in Y \}$.
\begin{definition}
Let $\alpha > 0$.
An online algorithm $\mathcal{A}$ is an $\alpha$-selection algorithm for a $k$-submodular selection game with rate $g(k,T)$ if it satisfies
\begin{align}
\max_{i^* \in [k]} \sum_{t \in [T]} a_t(i^*) - \sum_{t\in [T]} \sum_{i\in[k]} (\alpha\cdot b_t(i) + a_t(i)) p_t(i) \leq g(k,T),
\end{align}
where $g(k,T)$ is sublinear in $T$.
\end{definition}
Our main result is as follows.
\begin{theorem}\label{thm:k-submod-selection}
There exists a $1$-selection algorithm for a $k$-submodular selection game with rate $g(k,T) = O(k\sqrt{T})$.
\end{theorem}
To prove this theorem, we appeal to the Blackwell approachability theorem.
First, we define a biaffine vector reward function $\boldsymbol{\ell}$:
For $\mathbf{p} \in \Delta_k$ and $\mathbf{y} = (\mathbf{a}, \mathbf{b}) \in Y$, let
\begin{align}
\ell(\mathbf{p}, \mathbf{y})(i) = a(i) - \sum_{i' \in [k]} (b(i')+a(i'))p(i').
\end{align}
Then, $S = \mathbb{R}_-^k$ is approachable in a Blackwell instance $(\Delta_k, Y, \boldsymbol{\ell}, S)$ if and only if a $1$-selection algorithm exists for a $k$-submodular selection game.
We now show that $S$ is approachable.
By the Blackwell approachability theorem, it suffices to show that $S$ is response-satisfiable.
Indeed, this fact was already observed in \cite{Iwata2016}.
\begin{lemma}[{\cite[Theorem~2.1]{Iwata2016}}]
For a fixed adversary's play $(\mathbf{a}, \mathbf{b})$, there exists $\mathbf{p} \in \Delta_k$ that only depends on $\mathbf{y}$ and satisfies
\begin{align}
\max_{i^* \in [k]} a(i^*) - \sum_{i\in[k]} (b(i) + a(i)) p(i) \leq 0.
\end{align}
\end{lemma}
Therefore, the Blackwell approachability theorem implies the existence of a no-regret algorithm for a $k$-submodular selection game.
In particular, exploiting the result of \cite{Abernethy2011}, we obtain Algorithm~\ref{alg:k-submod-selection} for a $k$-submodular selection game.
\begin{algorithm}[t]
\caption{A $1$-selection algorithm for a $k$-submodular selection game}\label{alg:k-submod-selection}
\begin{algorithmic}[1]
\REQUIRE An OLO algorithm $\mathcal{A}$ with feasible region $K := \{ \boldsymbol{\theta} \in \mathbb{R}^k_+ : \norm{\boldsymbol{\theta}} \leq 1 \}$.
\STATE Set up $\mathcal{A}$.
\FOR{$t \in [T]$}
\STATE $\boldsymbol{\theta}_t \gets \mathcal{A}(\mathbf{f}_1, \dots, \mathbf{f}_{t-1})$, where $\mathbf{f}_s := -\hat{\boldsymbol{\ell}}_s$ ($s \in [t-1]$).
\STATE Solve LP
\begin{align}\label{eq:halfspace-LP}
\mathbf{p}_t \in \argmin_{\mathbf{p} \in \Delta_k} \max_{\mathbf{y} \in Y} \boldsymbol{\theta}_t^\top\boldsymbol{\ell}(\mathbf{p},\mathbf{y})
\end{align}
to obtain $\mathbf{p}_t$.
\STATE Play $\mathbf{p}_t$ and observe $\mathbf{b}_t$.
\STATE Let $\hat{\boldsymbol{\ell}}_t$ be the vector with $\hat{\ell}_t(i) := \max_{\mathbf{a} \in Y(\mathbf{b}_t)} \ell(\mathbf{p}_t, (\mathbf{a}, \mathbf{b}_t))(i)$ for $i \in [k]$.
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{lemma}\label{lem:k-submod-selection}
Algorithm~\ref{alg:k-submod-selection} satisfies
\begin{align}
\max_{i^* \in [k]} \sum_{t \in [T]} a_t(i^*) - \sum_{t\in [T]} \sum_{i\in[k]} (b_t(i) + a_t(i)) p_t(i)
\leq \regret_\mathcal{A}(\mathbf{f}_1, \dots, \mathbf{f}_T),
\end{align}
for any $(\mathbf{a}_t, \mathbf{b}_t) \in Y$ $(t \in [T])$,
where $\regret_\mathcal{A} (\mathbf{f}_1, \dots, \mathbf{f}_T) = \sum_{t\in[T]} \mathbf{f}_t^\top \boldsymbol{\theta}_t - \min_{\boldsymbol{\theta} \in K} \sum_{t \in [T]} \mathbf{f}_t^\top \boldsymbol{\theta}$ is the regret of the OLO algorithm $\mathcal{A}$.
\end{lemma}
\begin{proof}
The proof mostly follows \cite{Abernethy2011}, but we include it in full for the sake of completeness.
Since $S$ is halfspace-satisfiable, LP~\eqref{eq:halfspace-LP} has a solution.
Indeed, solving LP~\eqref{eq:halfspace-LP} simply computes an output of a valid halfspace oracle for a halfspace $H_t = \{\mathbf{x} \in \mathbb{R}^k : \boldsymbol{\theta}_t^\top \mathbf{x} \leq 0 \}$.
Let us fix arbitrary $\mathbf{y}_t = (\mathbf{a}_t, \mathbf{b}_t) \in Y$ ($t \in [T]$).
Then,
\begin{align*}
\dist\left(\frac{1}{T}\sum_{t \in [T]} \boldsymbol{\ell}(\mathbf{p}_t, \mathbf{y}_t), S \right)
&= \max_{\boldsymbol{\theta} \in K} \frac{1}{T}\sum_{t \in [T]} \boldsymbol{\ell}(\mathbf{p}_t, \mathbf{y}_t) ^\top \boldsymbol{\theta} \\
&\leq \max_{\boldsymbol{\theta} \in K} \left[ \frac{1}{T}\sum_{t \in [T]} \hat{\boldsymbol{\ell}}_t^\top \boldsymbol{\theta} \right] \\
&= \max_{\boldsymbol{\theta} \in K} \left[ -\frac{1}{T}\sum_{t \in [T]} \mathbf{f}_t^\top \boldsymbol{\theta} \right] \\
&\leq \frac{1}{T}\max_{\boldsymbol{\theta} \in K} \left[\sum_{t \in [T]}\mathbf{f}_t^\top \boldsymbol{\theta}_t -\sum_{t \in [T]} \mathbf{f}_t^\top \boldsymbol{\theta} \right]
\tag{Since $\mathbf{f}_t^\top \boldsymbol{\theta}_t = -\boldsymbol{\theta}_t^\top \hat{\boldsymbol{\ell}}_t \geq 0$ by the valid halfspace oracle property} \\
&= \frac{\regret_\mathcal{A}(\mathbf{f}_1, \dots, \mathbf{f}_T) }{T}.
\end{align*}
Now the claim of the lemma is immediate from the following:
\begin{align*}
\frac{1}{T}\left[ \max_{i^* \in [k]} \sum_{t \in [T]} a_t(i^*) - \sum_{t\in [T]} \sum_{i\in[k]} (b_t(i) + a_t(i)) p_t(i) \right]
\leq \dist\left(\frac{1}{T}\sum_{t \in [T]} \boldsymbol{\ell}(\mathbf{p}_t, \mathbf{y}_t), S \right),
\end{align*}
which holds because the left-hand side equals one coordinate of $\frac{1}{T}\sum_{t \in [T]} \boldsymbol{\ell}(\mathbf{p}_t, \mathbf{y}_t)$, and no coordinate of a vector exceeds its distance to $S = \mathbb{R}_-^k$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:k-submod-selection}]
We can use online gradient descent as an internal OLO algorithm $\mathcal{A}$, which satisfies
\begin{align}
\regret_\mathcal{A}(\mathbf{f}_1, \dots, \mathbf{f}_T)
\leq \frac{1}{\eta} D^2 + \eta \sum_{t\in[T]} \norm{\mathbf{f}_t}^2
\leq \frac{1}{\eta} O(k) + \eta O(kT)
\end{align}
where we used that the diameter of $K$ is $D = O(\sqrt{k})$ and $\norm{\mathbf{f}_t}^2 = O(k)$ for $t \in [T]$ in the second inequality.
Setting $\eta = O(1/\sqrt{T})$, we obtain the regret bound $O(k\sqrt{T})$.
Combined with Lemma~\ref{lem:k-submod-selection}, we see that Algorithm~\ref{alg:k-submod-selection} is a $1$-selection algorithm with rate $O(k\sqrt{T})$.
\end{proof}
\begin{remark}
Since Algorithm~\ref{alg:k-submod-selection} is deterministic if we use online gradient descent as an internal OLO algorithm, the guarantee in Theorem~\ref{thm:k-submod-selection} holds even for an adaptive adversary.
\end{remark}
\subsection{Main algorithm}
Now we present our main algorithm for online $k$-submodular maximization.
\begin{algorithm}[h]
\caption{No $1/(\alpha+1)$-regret algorithm for $k$-submodular maximization}\label{alg:k-submod}
\begin{algorithmic}[1]
\REQUIRE $\alpha$-selection algorithms $\mathcal{A}_j$ for a $k$-submodular selection game ($j \in [n]$).
\STATE Set up $\mathcal{A}_j$ ($j \in [n]$).
\FOR{$t = 1, \dots, T$}
\STATE Set $\mathbf{x}_t^{(0)} := \mathbf{0}$.
\FOR{$j \in [n]$}
\STATE Receive $\mathbf{p}_t^{(j)} \in \Delta_k$ from $\mathcal{A}_j$.
\STATE Sample $i \in [k]$ from the probability distribution $\mathbf{p}_t^{(j)}$, and set $\mathbf{x}_t^{(j)}:= \mathbf{x}_t^{(j-1)}+ i\mathbf{e}_j$.
\ENDFOR
\STATE Play $\mathbf{x}_t = \mathbf{x}_t^{(n)}$ and receive $f_t$.
\FOR{$j \in [n]$}
\STATE Feedback $b_{t}^{(j)}(i) := \Delta_{j,i}f_t(\mathbf{x}_t^{(j-1)})$ ($i \in [k]$) to $\mathcal{A}_j$.
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{theorem}\label{thm:k-submod}
Given $\alpha$-selection algorithms $\mathcal{A}_j$ ($j \in [n]$) for $k$-submodular selection games with rate $g(k,T)$, Algorithm~\ref{alg:k-submod} achieves
\begin{align}
\mathbf{E} \left[ \frac{1}{\alpha+1}\max_{\mathbf{o} \in (k+1)^V}\sum_{t\in[T]} f_t(\mathbf{o}) - \sum_{t\in[T]} f_t(\mathbf{x}_t) \right] \leq n g(k,T),
\end{align}
where the expectation is taken under the randomness in Algorithm~\ref{alg:k-submod}.
\end{theorem}
\begin{proof}
Let $\mathbf{o} \in (k+1)^V$ be an optimal solution such that $\supp(\mathbf{o}) = [n]$ (such an optimal solution exists by Lemma~\ref{lem:k-submod-partition}).
For each $t \in [T]$ and $j = 0, 1, \dots, n$, let $\mathbf{o}_t^{(j)} := (\mathbf{o} \sqcup \mathbf{x}_t^{(j)}) \sqcup \mathbf{x}_t^{(j)}$.
Note that $\mathbf{o}_t^{(0)} = \mathbf{o}$ and $\mathbf{o}_t^{(n)} = \mathbf{x}_t^{(n)}$.
Let $\mathbf{s}_t^{(j-1)}$ be a vector obtained by setting the $j$th element of $\mathbf{o}_t^{(j-1)}$ to $0$ for $j \in [n]$.
Define $a_t^{(j)}(i) := \Delta_{j,i} f_t(\mathbf{s}_t^{(j-1)})$ and $b_t^{(j)}(i) := \Delta_{j,i} f_t(\mathbf{x}_t^{(j-1)})$.
By orthant submodularity and pairwise monotonicity, we have
\begin{align*}
a_t^{(j)}(i) + a_t^{(j)}(i') &\geq 0 && (i \neq i')\\
b_t^{(j)}(i) + b_t^{(j)}(i') &\geq 0 && (i \neq i')\\
b_t^{(j)}(i) &\geq a_t^{(j)}(i) && (i \in [k]).
\end{align*}
Therefore, $\mathbf{b}_t^{(j)}$ is valid feedback to $\mathcal{A}_j$ ($j \in [n]$).
Let us fix $j \in [n]$ and let $i^* := o(j)$.
Note that $i^* \in [k]$, since $\supp(\mathbf{o}) = [n]$.
Since $\mathcal{A}_j$ is an $\alpha$-selection algorithm, we have
\begin{align}\label{eq:1-selection-guarantee}
\sum_{t \in [T]}\sum_{i\in[k]} (a_t^{(j)}(i^*) - a_t^{(j)}(i))p_t^{(j)}(i)
\leq \alpha \sum_{t\in [T]} \sum_{i\in[k]} b_t^{(j)}(i)p_t^{(j)}(i) + g(k,T),
\end{align}
conditioned on $\mathbf{x}_t^{(j-1)}$ ($t \in [T]$).
Taking the expectation over $\mathbf{x}_t^{(j-1)}$ and the sampled $j$th coordinate ($t \in [T]$), we obtain
\begin{align}
\mathbf{E}\left[ \sum_{t \in [T]} (f_t(\mathbf{o}_t^{(j-1)}) - f_t(\mathbf{o}_t^{(j)})) \right]
\leq \alpha \mathbf{E}\left[ \sum_{t\in [T]} (f_t(\mathbf{x}_t^{(j)}) - f_t(\mathbf{x}_t^{(j-1)})) \right] + g(k,T).
\end{align}
Summing these inequalities for $j \in [n]$, we arrive at
\begin{align*}
\mathbf{E}\left[ \sum_{t \in [T]} (f_t(\mathbf{o}) - f_t(\mathbf{x}_t)) \right]
&\leq \alpha\mathbf{E}\left[ \sum_{t\in [T]} (f_t(\mathbf{x}_t) - f_t(\mathbf{0})) \right] + ng(k,T)\\
&\leq \alpha\mathbf{E}\left[ \sum_{t\in [T]} f_t(\mathbf{x}_t) \right] + n g(k,T)
\tag{since $f_t(\mathbf{0})\geq 0$ ($t \in [T]$)},
\end{align*}
which proves the theorem.
\end{proof}
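To illustrate the control flow of Algorithm~\ref{alg:k-submod}, the following schematic sketch (ours, not verbatim from the paper) executes one round for a single fixed reward function; the uniform distribution merely stands in for the Blackwell-based selection algorithm of Algorithm~\ref{alg:k-submod-selection}, and $f$ is a toy monotone $k$-submodular function.

```python
# One round of the main algorithm: build x_t coordinate by coordinate,
# sampling each coordinate from the distribution suggested by the
# per-coordinate selection algorithm, then collect marginal gains.
import random

def one_round(f, n, k, selection_algs, rng):
    x = [0] * n                       # x_t^{(0)} = 0
    feedback = []
    for j in range(n):
        p = selection_algs[j]()       # p_t^{(j)} in the simplex
        i = rng.choices(range(1, k + 1), weights=p)[0]
        base = f(tuple(x))
        gains = []                    # b_t^{(j)}(i): marginal gain of i at j
        for ii in range(1, k + 1):
            x[j] = ii
            gains.append(f(tuple(x)) - base)
        x[j] = i                      # commit the sampled value
        feedback.append(gains)
    return x, feedback

# toy monotone k-submodular function: the size of the support
f = lambda x: sum(1 for v in x if v != 0)
rng = random.Random(0)
k, n = 2, 3
uniform = lambda: [1.0 / k] * k      # placeholder selection algorithm
x, fb = one_round(f, n, k, [uniform] * n, rng)
```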
Combining this theorem with Lemma~\ref{lem:k-submod-selection}, we obtain the main result.
\begin{corollary}
There exists a polynomial-time algorithm for online $k$-submodular maximization whose $1/2$-regret is bounded by $O(kn\sqrt{T})$.
\end{corollary}
\begin{remark}
Since Algorithm~\ref{alg:k-submod-selection} is deterministic, \eqref{eq:1-selection-guarantee} is valid for an adaptive adversary.
Therefore, the regret bound of Algorithm~\ref{alg:k-submod} holds for an adaptive adversary.
Note that the selection algorithm used in Roughgarden and Wang~\cite{Roughgarden2018} is randomized;
therefore, it requires a different analysis for an adaptive adversary.
\end{remark}
\section{Online monotone $k$-submodular maximization}\label{sec:monotone}
To demonstrate the flexibility of our method with the Blackwell approachability theory, we present a no $\frac{k}{2k-1}$-regret algorithm for online monotone $k$-submodular maximization.
To this end, we define a modified version of a $k$-submodular selection game, which we call a \emph{monotone $k$-submodular selection game}.
The only difference in the monotone case is that the set of the adversary's plays is further restricted to $Y_+ := Y \cap (\mathbb{R}_+^k \times \mathbb{R}_+^k)$, which means that $\mathbf{y}_t \geq \mathbf{0}$.
\begin{lemma}
There exists a $(1-1/k)$-selection algorithm for a monotone $k$-submodular selection game with rate $g(k,T) = O(k\sqrt{T})$.
\end{lemma}
\begin{proof}
Again, we use the Blackwell approachability theorem.
We define a slightly modified vector reward function $\boldsymbol{\ell}'$ as follows:
\begin{align}
\ell'(\mathbf{p}, \mathbf{y})(i) = a(i) - \sum_{i' \in [k]} (\alpha\cdot b(i')+a(i')) p(i'),
\end{align}
where $\alpha = 1-1/k$.
It suffices to show that $S = \mathbb{R}_-^k$ is response-satisfiable for a Blackwell instance $(\Delta_k, Y_+, \boldsymbol{\ell}', S)$.
In \cite[Theorem~2.2]{Iwata2016}, it is shown that for fixed $\mathbf{y} = (\mathbf{a}, \mathbf{b}) \in Y_+$, there exists $\mathbf{p} \in \Delta_k$ such that $\boldsymbol{\ell}'(\mathbf{p}, \mathbf{y}) \leq \mathbf{0}$.
Therefore, there exists an online algorithm for producing an approaching sequence.
Indeed, such an algorithm can be constructed by a slight modification of Algorithm~\ref{alg:k-submod-selection}:
instead of $\boldsymbol{\ell}$ and $Y$, we use $\boldsymbol{\ell}'$ and $Y_+$, respectively.
It is easy to see that the modified algorithm produces a sequence $\mathbf{p}_t$ ($t\in[T]$) with the same guarantee as in Lemma~\ref{lem:k-submod-selection}:
\begin{align}
\max_{i^* \in [k]} \sum_{t \in [T]} a_t(i^*) - \sum_{t\in [T]} \sum_{i\in[k]} (\alpha\cdot b_t(i) + a_t(i)) p_t(i)
\leq \regret_\mathcal{A}(\mathbf{f}_1, \dots, \mathbf{f}_T),
\end{align}
for any $(\mathbf{a}_t, \mathbf{b}_t) \in Y_+$ $(t \in [T])$, where $\mathcal{A}$ is an internal OLO algorithm.
Again, using online gradient descent as $\mathcal{A}$, we obtain the same bound as before, which completes the proof.
\end{proof}
Combining this result with Theorem~\ref{thm:k-submod}, we obtain the following.
\begin{theorem}
There exists a polynomial-time algorithm for online monotone $k$-submodular maximization whose $\frac{k}{2k-1}$-regret is bounded by $O(kn\sqrt{T})$.
\end{theorem}
\begin{proof}
We use the same notation as in the proof of Theorem~\ref{thm:k-submod}.
Since $f_t$ is monotone ($t \in [T]$), we have $\mathbf{a}_t^{(j)}, \mathbf{b}_t^{(j)} \geq \mathbf{0}$ ($t\in[T]$, $j \in [n]$).
Therefore, $\mathbf{b}_t^{(j)}$ is valid feedback to an algorithm for a monotone $k$-submodular selection game.
Since $\alpha = 1-1/k$, we have the same bound for the $\frac{k}{2k-1}$-regret.
\end{proof}
\subsection*{Acknowledgement}
The author thanks Takanori Maehara, Shinsaku Sakaue, Yuichi Yoshida, and Kaito Fujii for valuable discussions.
The author also thanks Tim Roughgarden and Joshua R. Wang for sharing a draft of \cite{Roughgarden2018}.
This work was supported by ACT-I, JST.
\bibliographystyle{IEEEtranS}
Probably the most extreme example of `starburst activity in galaxies'
occurs when a nuclear starburst behaves like an AGN. But is that
possible? Can the basic properties of Active Galactic Nuclei be
understood in terms of physical processes associated with a starburst?
By the mid 80's the answer to this question was `no', as there seemed
to be no reasonable way to explain basic AGN properties such as
variability, emission line ratios and widths in terms of stellar
processes. The consensus then emerged that accretion onto a
super-massive black hole is the ultimate source of energy in active
galaxies, though it is fair to say that this consensus was at least
partially established due to the lack of plausible alternative
theories. Since then Terlevich and colaborators have given new breath
to starburst-based models for AGN exploring new possibilities such as
warmers and compact Supernova Remnants (cSNRs).
There have been several reviews of the starburst model for AGN, such
as those by Terlevich (1989, 1992, 1994), Filippenko (1992)
and Heckman (1991). This review differs from the previous ones
because it comes in a time when, curiously, though there have never
been as much evidence for a connection between star-formation and
activity, there are also new strong evidence for the existence of
accretion disks and black holes in AGN, as the reader will realize
browsing through this very volume. This may well be indicating that
the answer to the AGN phenomenum lies with neither starbursts nor
black-holes, but {\it both}, an idea which gathered some strength
during this conference. It is however important to establish the
virtues and limitations of both theories before reaching such a
strong conclusion. In this paper I review the essential aspects of
the starburst model for AGN, discussing which aspects of the
activity phenomenum can be adequately understood in terms of the
physics of starbursts and cSNRs, as well as properties which do not
seem to fit in this framework.
\section{Overview of the model}
The starburst model started with the idea that hot stars could power
the ionization in narrow lined AGN, and gradually switched its focus
to the phenomena associated with cSNRs in a nuclear starburst. In
this section I briefly review the history model, indicating strong
and weak points and updating it where necessary.
\subsection{Warmers}
\label{sec:warmers}
The official kick-off of the starburst model was given in 1985 by
Terlevich \& Melnick. They proposed a scenario in which the central
ionizing source consists of a young, massive, metal-rich cluster
containing `warmers', extreme WR stars reaching temperatures of $\sim
150000$\ K. The existence of such stars was supported by stellar
evolution calculations (e.g., Maeder \& Meynet 1988) which showed that
strong mass-loss could strip-off the outer layers of massive stars,
exposing their hot interior. Terlevich \& Melnick's calculations (later
confirmed by Cid Fernandes \et 1992 and Garcia-Vargas \& D\'{\i}az
1992) showed that a cluster containing `warmers' would have an ionizing
spectrum hard enough to photoionize the surrounding gas to Narrow Line
Region like conditions.
Though a few warmers have been found (e.g., Dopita \et 1990), by now
it is clear that they are not as common as previously thought, most
likely because the mass lost during the evolution forms an opaque
atmosphere which effectively reduces the temperature of the star. In
fact, updated spectral synthesis calculations using newer
evolutionary tracks and appropriate stellar atmospheres for the WR
phases show that the ionizing spectrum is more typical of HII
regions than of AGN (e.g., Leitherer, Gruenwald \& Schmutz 1992).
In summary, `warmers' did not turn out to be the long-sought `missing
link between starbursts and AGN'. This is not to say that WR-stars
have nothing to do with AGN. In fact, the existence of objects
showing both AGN and WR features points to some sort of connection
at least in some cases (see Conti 1993). It is nevertheless clear
that a sound starburst model for AGN could not be based on the
existence of warmers. Indeed, in the subsequent papers cSNRs
(\S\ref{sec:cSNRs}) have replaced warmers as the basic agent of
activity, to the point that warmers are {\it not} a necessary
ingredient of the model anymore.
\subsection{Evolutionary Scheme}
\label{sec:Evol_Scheme}
Terlevich, Melnick \& Moles (1987) considered the evolution of a
metal-rich nuclear starburst, dividing it into four phases: (1) an
initial HII region phase lasting for $\sim 3$\ Myr, when all stars
are in the main-sequence, (2) a phase from 3 to 4 Myr when the first
warmers appear and the emission line spectrum changes to Seyfert 2
or LINER, (3) a phase from 4 to 8 Myr where warmers and OB stars
still dominate the ionization but the cluster also contains type Ib
SNe and red supergiants, and (4) a Seyfert 1 phase from 8 to 60 Myr,
when cSNRs produce variability, ionization and broad lines.
There are several reasons why this scheme should be revised, the
first of which is that, as discussed above, warmers can almost
certainly be ruled out as abundant sources of ionizing photons.
Besides, we now know that, at least in some cases, the difference
between Seyfert 2s and Seyfert 1s is not an evolutionary one, but an
orientation effect. A revised evolutionary scheme would comprise
only two phases: an initial HII region (or LINER, depending on the
density and metallicity; Shields 1992, Filippenko \& Terlevich 1992)
phase lasting until the first cSNR appears at $\sim 8$~Myr and the
nucleus turns into a Seyfert 1. During both phases the action of
stellar winds and SNRs can drive a biconical outflow if the parent
molecular cloud (the torus?) is flattened. This mechanism would
naturally create the conditions postulated by the unified model.
Seyfert 2s would then be simply edge-on Seyfert 1s. Some Seyfert 2s
and LINERs could also be face-on low luminosity phase 2 systems,
where the low SN rate ensures quiescent periods with little or no
variability nor broad lines (Aretxaga \& Terlevich 1994).
\subsection{compact Supernova Remnants}
\label{sec:cSNRs}
cSNRs are the key ingredient in the starburst model. They are in fact
the only thing which sets AGN-like starbursts apart from ordinary ones.
In the context of the starburst model for AGN, cSNRs are responsible
for the time variability, X-ray and radio emission, the BLR properties
and the NLR ionization, as explained in the seminal paper by Terlevich
\et (1992). cSNRs are nothing but ordinary SNRs evolving in a dense
circumstellar medium ($n_{CSM} > 10^6$~\percc). Density is the main
parameter governing the evolution of the remnant, though metallicity
and the detailed structure of ejecta and CSM also play a role. At such
high densities cooling of the shocked material is very efficient and
the remnant is capable of radiating away most of its kinetic energy in
a few years. The structure and evolution of a cSNR is complex and has
to be followed numerically, and hydro codes have only recently been
adapted to work under such extreme conditions. The first simulations of
cSNRs were those by Terlevich \et (1992). More recent and detailed
calculations can be found in Plewa (1995), Terlevich \et (1995) and Cid
Fernandes \et (1996). The state of the art in numerical modeling of
radiative shocks is reviewed in T. Plewa's contribution.
To first order, the structure of a cSNR comprises 4 regions (see
schemes in Terlevich \et 1992 and Cid Fernandes \& Terlevich 1994).
From the inside out: (1) the unshocked, freely expanding ejecta, (2)
shocked ejecta, (3) shocked CSM and (4) unperturbed CSM. Dense,
cold, fast moving thin shells form behind both the forward and
reverse shocks via catastrophic cooling. These two cold regions plus
the unshocked ejecta are photoionized by the high energy photons
emanating from the cooling regions, producing broad emission lines
whose ratios, luminosities and widths are similar to those found in
AGN. Though many details remain to be worked
out, the cSNR model is definitely on the right track, as confirmed
by the discovery of objects like SN 1987F (Filippenko 1989), with a
spectrum so similar to that of an AGN that it led Filippenko to call it
a `Seyfert 1 impostor'.
Besides being fascinating objects by themselves, cSNRs are
potentially the true missing link between starbursts and AGN.
Whether or not they can explain AGN properties which have puzzled us
for so long, there can be little doubt that a starburst whose SNe
evolve into cSNRs will look like an AGN in many aspects. In my view,
this has been already {\it proven} both theoretically, with the work
described above, and empirically, with the discovery of `Seyfert 1
impostors'. At this stage, I cannot help observing that, unlike the
situation in AGN studies, the theory of cSNRs is parsecs ahead of
observations. cSNR simulations have reached an unprecedented stage of
refinement. We can, for instance, consider the detailed structure of
both ejecta and CSM, as well as the effects of inhomogeneities on
the evolution of the remnant. The theory here is really crying out to
be tested. Detailed observations of cSNRs are badly needed not only to
better guide the theory of radiative shocks but to strongly test the
cSNR-AGN connection proposed by the starburst model.
\subsection{Why high density?}
\label{sec:density}
All that is required for a SN to become a cSNR is a dense CSM. Given
this, a starburst will {\it certainly} present many AGN properties.
But what generates such a high density? The historical explanation
was based on the effects of metallicity (Terlevich \& Melnick 1985,
Terlevich, Melnick \& Moles 1987). The central regions of a galaxy,
particularly early type ones, where most AGN are found, have an
enhanced metallicity, which would enhance radiatively driven mass
loss, creating both warmers and a dense CSM around the cSNR
progenitors. This reasoning was used by Terlevich, Melnick \& Moles
to explain the predominance of AGN at early Hubble types and
starbursts at late types.
Though metallicity certainly plays some role, the current view is that
the ISM in a nuclear starburst is pressurized by stellar winds and
SNRs. The high pressure confines the wind driven bubbles to small
volumes, creating the conditions for SNe to become cSNRs (see J.
Franco's contribution elsewhere in this volume). This, however, is
probably not the only way to produce cSNRs, since objects like SN 1987F
and SN 1988Z (Filippenko 1989, Stathakis \& Sadler 1991) were found in
regions showing no signs of particularly violent star-formation.
\section{AGN properties explained by the model}
Having gone through the basics, I now briefly outline some of the
AGN phenomenology which can be at least reasonably well understood in
the framework of the starburst model.
\subsection{BLR properties}
That cSNRs can have emission line regions akin to those of Seyfert 1s
was already realized by Fransson (1988) in his early study of SN-CSM
interaction. Indeed, the combined hydrodynamics $+$\ photoionization
calculations of Terlevich \et (1992) demonstrated that cSNRs do at
least as well as canonical models in reproducing the BLR line ratios.
Furthermore the different photoionized regions (the two thin shells
associated with the forward and reverse shocks plus the ejecta) explain
the existence of high and low ionization line regions (e.g.,
Collin-Souffrin \et 1988). Reproducing the basic BLR properties with a
{\it physical} model, as opposed to an {\it ad hoc} model such as a
central ionizing source plus an arbitrary distribution of clouds, is
one of the main achievements of the starburst model.
Though the velocities and luminosities of the line emitting regions in
cSNRs are similar to those in the BLR, detailed emission line profile
calculations proved to be more complicated than initially thought (Cid
Fernandes \& Terlevich 1994, Cid Fernandes 1995). The complications
arise because the line emitting shells are thought to be optically
thick, but geometrically too thin to be modeled with the Sobolev
approximation. In this situation calculations are very sensitive to the
detailed shape of the shells, which is likely to be distorted by
instabilities. Though the theory here still needs work, one can always
resort to the empirical argument that {\it observed} profiles in cSNRs
such as SN 1987F and SN 1988Z would probably go unnoticed if put among
a gallery of AGN profiles. A further complication is that, discounting
low luminosity systems ($M_B$ $\gapprox$-20), chances are that we
seldom observe isolated cSNRs in AGN. Interestingly, computations of
the line profiles for multi-cSNR systems yielded more robust results
than for individual cSNRs. Effects such as the line width-luminosity
correlation and the centrally depressed \Hb/\Ha\ profile ratio can be
readily understood in the model.
\subsection{Optical--UV variability and the origin of the lag}
Most of the recent interest in AGN variability has been on the time-lag
between continuum and emission line variations, with intensive
monitoring campaigns designed to reconstruct the BLR geometry using
echo-mapping techniques. In the starburst model for AGN the lag is {\it
not} due to reverberation, but due to the hydrodynamical effects
associated with thin shell formation behind radiative shocks (Terlevich
\et 1995, Plewa 1995). As the shocked material begins to cool in the
process leading to shell formation, a great deal of radiation is released,
producing a continuum burst. Eventually the cold gas collapses onto a
thin shell, but this only happens some time after the maximum in the
continuum, explaining the origin of the lag. As the continuum burst
fades and the shell density increases, the ionization parameter goes
down, explaining why low ionization lines take longer to respond to
continuum variations. After shell formation, shock oscillations produce
further variations, but this time the shell is already formed and there
should be little or no lag, explaining the puzzling observation that
some events show essentially no lag (e.g., Clavel \et 1991). This
elegant, albeit controversial, interpretation illustrates the richness
of phenomena associated with cSNRs and radiative shocks in general.
The optical-UV continuum variability of starburst powered AGN is
reviewed in I. Aretxaga's contribution, where the reader can find how
properties such as the light curve statistics of Seyfert 1s, the
anti-correlation between variability and luminosity and the Structure
Function of QSOs are explained in the starburst model.
\subsection{QSO luminosity function}
Terlevich \& Boyle (1993) carried out the interesting exercise of
evolving the stars in the core of present day elliptical galaxies
back in time (see also Terlevich 1992 and Boyle 1994). With simple
assumptions they were able to reproduce the shape, amplitude and
evolution of the QSO luminosity function, supporting
not only the starburst model version of QSOs as young/primeval
galaxies but also galaxy formation models which indicate that giant
ellipticals have gone through a period of intense star-forming
activity around $z$~$\gapprox$~2. (See Heckman 1994 and Terlevich
1994 for an interesting debate on this issue.)
\subsection{Etcetera}
The points above are those which have been studied in more detail, but
there are other AGN properties which can also be understood in the
framework of the starburst model. These include (in varying stages of
development): the Ca~II absorption triplet (Terlevich, D\'{\i}az \&
Terlevich 1990), the spectral energy distribution (Terlevich 1990),
radio emission and variability (Colina 1993), the nature of strong
Fe~II emitters, size of the cluster and mass segregation (Terlevich
and effects of magnetic fields (R\'o\.zyczka \& Tenorio-Tagle 1995)
and X-ray features such as the warm-absorber, cold reflector and
high-energy cut-off.
\section{AGN properties not explained by the model}
Despite this impressive list of explainable properties, the
starburst model has from its early days been subjected to criticism
from several fronts. The similarities between radio quiet and radio
loud AGN, for instance, have always been a matter of worry, since
radio loud objects have from the outset been excluded by the model.
Other unresolved issues include micro-variability, and the absence
of a Lyman edge and stellar features in the UV. However, rapid X-ray
variability and the recently discovered broad Fe lines are, in my
view, the most serious difficulties currently faced by the model.
Rapid X-ray variability has always been seen as a serious problem for
non-black hole models for AGN, a point not so much based on theory, but
on the fact that galactic black-hole candidates (Cygnus X-1 being the
classic example) also show rapid variability (see Mushotsky, Done \&
Pounds 1993). Similarity arguments, however, have to be taken with
care, since at optical wavelengths (at least) cSNRs are much more
AGN-like than galactic black holes. Furthermore, gamma ray observations
are revealing a clear difference between the high energy spectra of
Cygnus X-1, which goes well into the MeV range, and that of radio-quiet
AGN, which turns over at about 100 keV and cuts off at a few hundred
keV (A. Carrami\~nana, priv.\ comm., McConnell \et 1994, Warwick \et
1996). (See also Kinney 1994 for a comparison between stellar
accretion disk systems and AGN.) In any case, it is difficult to see
how cSNRs could produce strong, rapid X-ray variability. The
interaction of SN fragments with the dense, thin shells in cSNRs can
give rise to rapid X-ray flares, but, as discussed by Cid Fernandes \et
(1996), the physical conditions required to model AGN-like flares
quantitatively are too extreme. This, however, has yet to be verified
with X-ray monitoring of cSNRs---the fact that we don't know how to
produce strong, rapid variability does not mean that they don't have
it!
The discovery of an extremely broad ($\sim c/3$) Fe line in
MCG$-$6-30-15 (Tanaka \et 1995), a feature now detected in many other
Seyfert 1s (see R. Mushotsky's contribution), brought further problems
for the starburst model. Besides it being very difficult to see how
such a wide line could be produced in cSNRs, the observed profile is
remarkably well fit by a relativistic accretion disk model. Further
compelling evidence for a disk surrounding a compact super-massive
object came with the water maser observations of NGC 4258 by Miyoshi
\et (1995).
\section{Discussion: Starbursts, black-holes or both?}
All in all, the starburst model proved capable of explaining
several, though not all, AGN properties. What makes the model so
attractive is its deductive character, deriving observables out of
physics instead of parameters. Though the numerical balance of
evidence for and against may favour the starburst model, a single
unexplained observation is enough to put any model in trouble. When
an astrophysical model is in trouble we either (1) drop it
altogether, (2) modify/fix it or (3) opt to say it does not apply to
all objects. As already said above, starbursts containing cSNRs
would certainly look like AGN in many ways, so there must be some,
maybe many objects of this kind in AGN lists. In this sense, at
least, ruling out a starburst origin for the activity in galactic
nuclei is not a sensible alternative. The choice therefore has to be
between fixing the model or restricting its applicability to a
sub-set of the AGN family.
The easiest way to account for rapid X-ray variability, broad iron line
and radio loud objects would be to go for a hybrid black-hole
$+$\ starburst scenario. One version of such a model is that of J.
Perry and collaborators, reviewed elsewhere in this volume. Naively, one
could think that combining a ``Terlevich-type'' of starburst with a
black-hole would fix the problems with X-rays, with stars and cSNRs
still responsible for the activity at lower energies. While this might
work in some cases, my view is that such a scheme does not provide a
natural explanation for the correlations between high and low energy
properties of AGN, one example being the correlation between X-ray and
Balmer line luminosities (e.g., Ward \et 1988). Such correlations
indicate that the low and high energy photons somehow know about each
other, something which would be difficult to understand if they
originate from very different phenomena. It is therefore not obvious
that a simple combination of the canonical and starburst paradigms is
the answer.
I conclude that, to the disgust of purists, the evidence seems to be
pointing to two kinds of objects: starburst and accretion-powered
AGN. This is not a new idea, as many of us always suspected that AGN
constitute a mixed bag. Distinguishing
between these two possibilities has been a central theme in the
starburst $\times$\ monster debate (see Filippenko \et 1993). At
this stage, detection of strong, rapid X-ray variability and broad
Fe lines is the best indication of the existence of an accretion disk
in AGN. Experiments to resolve the small but extended nucleus
predicted by the starburst model will eventually provide a more
conclusive method to discriminate starburst from accretion-powered
AGN.
Going back to our original question, can we after all these years
reach a verdict on whether starbursts can power AGN? Given the
advances on cSNR theory and the discovery of objects like SN 1987F
and SN 1988Z, we can safely say that yes, starbursts {\it can} power
AGN. At the same time, recent discoveries have put the standard
accretion-disk model on firm ground. Whatever the final answer to
this cosmic puzzle is, it is clear that while we digest this
apparent conflict, much can be learned observing cSNRs. A
fundamental aspect which differentiates the starburst model from
black-hole and hybrid models is that many of its predictions can be
tested {\it outside} AGN. Galactic black-hole candidates are about
the closest one can get to an AGN-like engine, but applying the
knowledge of such systems to AGN involves an uncomfortable leap of
several factors of 10. cSNRs, on the contrary, are expected and
observed to be similar to those inferred to exist in the cores of
massive starburst-powered AGN. They therefore provide a unique
laboratory to strongly test the starburst-AGN connection and to help
us settle this as yet unfinished debate.
\section{Introduction}
\label{sec:Intro}
Most real-world systems, like airport transportation~\cite{Colizza:et.al:2006:PNAS,Verma:et.al:2014:SciReports}, power grids~\cite{Albert:et.al:2004:PhyRevE,Cuadra:et.al:2015:Energies} and Internet infrastructure~\cite{Yook:et.al:2002:PNAS}, can be abstracted as complex networks, each consisting of many functional nodes and edges, denoted by $\mathcal{G} = (\mathcal{V},\mathcal{E})$ with $N$ nodes in $\mathcal{V}$ and $M$ edges in $\mathcal{E}$. In practical applications, it has often been reported that the failure of a few nodes and/or edges can significantly degrade the functionality and stability of a network, even leading to system collapse. A typical example was the Italian nation-wide blackout on September 28, 2003, caused by cascading failures of power stations starting from a single power line~\cite{Crucitti:et.al:2004:PhysicalA}. Another recent example was the outage of the Internet service of the American operator CenturyLink~\cite{Goodwin:Jazmin:2020:CNN} on August 30, 2020, arising from a single router malfunction.
\par
Many efforts have been devoted to studying the dynamics and properties of network stability under different failure scenarios~\cite{Saberi:2015:PhysRep, Almeira:et.al:2020:PhyRevE}. Some insights have been gained from studying network failure processes, including cascading failures~\cite{Buldyrev:et.al:2010:Nature}, network percolation and phase transitions~\cite{Callaway:et.al:2000:Phy.Rev.L, Karrer:et.al:2014:Phy.Rev.L, Morone:et.al:2015:Nature}. One important insight is that a single node can disable its attached edge(s), which may further cause the failure of its neighboring nodes. Furthermore, if a few nodes are disabled at the same time, their removal can not only cause local structural damage but also lead to global topological collapse. This observation has recently motivated many studies on how to select a few \textit{vital nodes} to attack, such that their removal can dismantle a network, i.e., the network dismantling problem.
\par
The \textit{network dismantling} (ND) problem is to determine a \textit{target attack node set} (TAS), denoted by $\mathcal{V}_t$ ($\mathcal{V}_t \subseteq \mathcal{V}$), whose removal disintegrates a network into many disconnected components, such that the \textit{giant connected component} (GCC) of the residual network $\mathcal{G}\backslash \mathcal{V}_t$ is smaller than a predefined threshold. The objective of network dismantling is to find such a TAS $\mathcal{V}^* \subseteq \mathcal{V}$ with the minimum size, that is,
\begin{equation}\label{Eq:NDObjective}
\mathcal{V}^* \equiv \min \big \{\mathcal{V}_{t} \subseteq \mathcal{V} : | \mathcal{G} \backslash \mathcal{V}_{t} |/N \leq \Theta \big \}
\end{equation}
where $\Theta$ is the predefined dismantling threshold. The ND problem has been proven to be NP-hard~\cite{braunstein:et.al:2016:PNAS}, and many solutions have been proposed to find a suboptimal TAS for large networks~\cite{Mugisha:Zhou:2016:PhyRevE, Zdeborov:et.al:2016:SciReports, Fan:et.al:2020:NatureML, Grassia:et.al:2021:NatureC}.
\par
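To make the objective concrete, the following Python sketch (illustrative, not from the paper; the graph is a plain adjacency dictionary and all names are our own) computes the residual GCC size $|\mathcal{G}\backslash\mathcal{V}_t|$ for a candidate TAS by breadth-first search:

```python
from collections import deque

def gcc_size(adj, removed=frozenset()):
    """Size of the largest connected component after deleting `removed`.
    `adj` maps each node to a list of its neighbours."""
    seen, best = set(removed), 0
    for src in adj:
        if src in seen:
            continue
        queue, comp = deque([src]), 0   # BFS over the residual graph
        seen.add(src)
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best

# Path 0-1-2-3-4: removing the middle node leaves two components of size 2,
# so the TAS {2} dismantles the path for any threshold Theta >= 0.4.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

A candidate $\mathcal{V}_t$ is then a valid dismantling set exactly when \texttt{gcc\_size(adj, V\_t) / N} does not exceed $\Theta$.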
Recently, some researchers have considered applying deep learning techniques to combinatorial optimization problems on graphs~\cite{Dai:et.al:2017:NIPS, Quentin:et.al:2021:IJCAI, schuetz:et.al:2022:NatureML}. For example, many \textit{Graph Neural Networks} (GNNs)~\cite{Hamilton:et.al:2017:NIPS,Wu:et.al:2020:TNLS,Zhou:et.al:2020:AIOpen} have been designed and applied to find feasible solutions for a large class of graph optimization problems, such as the maximum cut, minimum vertex cover and maximum independent set problems.
Some GNNs have also been designed to approximate global centralities of nodes~\cite{Cyrus:et.al:2021:KDD, Changjun:et.al:2019:CIKM, Sunil:et.al:2019:CIKM}, which normally have high computational complexity due to graph-wide operations. For example, the betweenness centrality requires first finding the shortest path between every pair of nodes. With the powerful representation capability of a GNN, such graph centralities or metrics can be approximated with high accuracy and low computational complexity.
\par
For network dismantling, it is also very intriguing to ask whether we can design and train a neural model to output the target attack node set for any input network. However, to the best of our knowledge, this question has not yet been well studied in the literature. This may be due to the fact that not only network sizes but also topological characteristics differ greatly across real-world networks, which seems to discourage applying one neural model to dismantle different networks. Nonetheless, this article presents an affirmative answer to the question through our initial efforts at designing an effective neural model for network dismantling.
\par
In this article, we are interested in the following question: \textit{Can we find a smallest TAS to dismantle any real-world network via a neural model?} That is, given an input graph $\mathcal{G}$ with its adjacency matrix $\mathbf{A}$, a neural model should output a \textit{dismantling score} $s_{i}^{dis} \in \mathbb{R}$ for each node $v_i$ to facilitate the selection of target attack nodes. To this end, we design and train a neural model, called the \textit{neural influence ranking model} (NIRM), for the network dismantling problem. The following challenges in designing and training such a neural model are addressed in this article:
\begin{itemize}
\item Designing a neural model: The influence of a node on the stability of the whole network should not only be evaluated from its own structural characteristics, but also compared with that of other, farther apart nodes.
\item Training a neural model: The \textit{training dataset} as well as the so-called \textit{ground truth} labels for training a neural model are not available, let alone guarantees of their appropriateness and trustworthiness.
\end{itemize}
In NIRM, we learn both local structural and global topological characteristics for each node, and compute and fuse its local and global influence scores into a dismantling score. Our NIRM model is trained on tiny synthetic networks of no more than thirty nodes, for which we can find the optimal TAS(s) via exhaustive search. We design a labeling rule for the selected target nodes and propose a training score propagation algorithm to obtain labels for the other nodes. We conduct extensive experiments on various real-world networks, and the results validate the effectiveness of our neural model: compared with classic approaches and state-of-the-art competitors, the proposed NIRM performs the best on most of the real-world networks.
\par
The rest of the paper is organized as follows: Section~\ref{sec:Related} reviews the related work. The design and training details of our NIRM are introduced in Section~\ref{sec:NIRM} and Section~\ref{sec:Training}, respectively. Section~\ref{sec:Experiment} presents the experimental results. Finally, Section~\ref{sec:Conclusion} concludes the paper.
\begin{figure*}[t]
\centering
\includegraphics[scale = 1.5,width=\textwidth]{images/NIRM_CIKM.pdf}
\caption{The NIRM framework: The upper left part is feature scoring, which generates initial scores by converting local features. The lower left part illustrates the GAT, which encodes initial scores between neighbors as well as neighborhood structure for learning each node's representation. Local scoring and global scoring respectively evaluate a node's influence on the stability of the node-centric local structure and the network-wide global topology. The final dismantling score is a fusion of the local and global scores. The upper right part marks the selected attack nodes in red, which coincide with one of the optimal solutions found by exhaustive search.}
\label{Fig:NIRM}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[ width=\textwidth, scale =2 ]{images/NodeRanking.pdf}
\caption{Illustration of training score initialization, propagation and labeling. For a tiny synthetic network, we exhaustively search for the optimal dismantling sets ($\{v_2, v_3, v_7, v_8\} $ and $\{v_2, v_3, v_8, v_9\}$ in the given example), and initialize the training score of a node as its normalized number of appearances in the optimal dismantling sets. For each vital node (colored blue), its initial score is equally propagated to its one-hop neighbors. After score propagation, a node's training score is the sum of its own and its received scores (if any), the normalized rank of which is its ground truth influence label.
The number next to each node in the third figure is its training score. The fourth figure presents the ranking results according to the training scores. }
\label{Fig:ScorePropagation}
\end{figure*}
\section{Related work}
\label{sec:Related}
\subsection{Deep Learning on Node Ranking}
Recently, the problem of ranking nodes on a graph has been revisited from a deep learning viewpoint, and several neural models have been designed.
\par
Centrality-oriented approaches focus on quickly approximating the relative ranks of nodes in terms of their centralities, so as to reduce computational complexity~\cite{Grando:et.al:2018:CSUR, Changjun:et.al:2019:CIKM, Sunil:et.al:2019:CIKM, Appan:Mahardhika:2021:IEEEBD, Matheus:et.al:2021:TNSE, Sunil:et.al:2021:TKDD}. Grando et al.~\cite{Grando:et.al:2018:CSUR} propose to estimate eight graph centralities by a neural network with degree and eigenvector centrality as input features. Fan et al.~\cite{Changjun:et.al:2019:CIKM} propose a GNN-based encoder-decoder framework to identify the top-$K$ nodes with the highest betweenness centrality. Sunil et al.~\cite{Sunil:et.al:2021:TKDD} propose a neural model to predict both betweenness and closeness centrality.
\par
Some other neural ranking approaches do not estimate nodes' centralities, but directly output ranking scores for downstream tasks~\cite{SongQi:et.al:2018:CIKM, Namyong:et.al:2019:KDD, Euyu:et.al:2020:Scientific,Yaojing:et.al:2020:CIKM}. For example, Song et al.~\cite{SongQi:et.al:2018:CIKM} introduce a variant of the Recurrent Neural Network for node ranking in heterogeneous temporal graphs to dynamically estimate nodes' temporal and structural influences over time. Yu et al.~\cite{Euyu:et.al:2020:Scientific} propose a CNN-based model to identify critical nodes in temporal networks. Park et al.~\cite{Namyong:et.al:2019:KDD} present an attentive GNN with predicate-aware score aggregation to estimate entity importance in Knowledge Graphs.
\subsection{Network dismantling}
Network dismantling is a typical discrete combinatorial optimization problem: attacking different TASs leads to hard-to-predict combinatorial effects on network connectivity.
Most existing solutions to the ND problem can be divided into two categories: centrality metric-based and network decycling-based. For the former, the basic idea is to first compute some centrality metric for each node, and then rank the nodes and select the top-$K$ most important ones to form the TAS. Some commonly used centralities include degree centrality (DC)~\cite{Albert:et.al:2000:Nature}, closeness centrality (CC)~\cite{Bavelas:Alex:1950:JASA}, betweenness centrality (BC)~\cite{Freeman:Linton:1977:Sociometry}, eigenvector centrality (EC)~\cite{Bonacich:et.al:1987:AmericaJS}, etc. Recently,
Collective Influence (CI)~\cite{Morone:et.al:2015:Nature} was proposed as an improved version of DC, which quantifies the importance of a node by considering not only its one-hop neighbors but also its higher-order neighbors.
\par
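As a reference point, collective influence at level $\ell$ is commonly defined as $\mathrm{CI}_\ell(i) = (k_i-1)\sum_{j\in\partial\mathrm{Ball}(i,\ell)}(k_j-1)$, where the sum runs over nodes at distance exactly $\ell$ from $v_i$. A minimal Python sketch (function and variable names are ours, the graph is a plain adjacency dictionary):

```python
from collections import deque

def collective_influence(adj, i, ell):
    """CI_ell(i) = (k_i - 1) * sum of (k_j - 1) over nodes j at distance
    exactly ell from node i (the boundary of the ball B(i, ell))."""
    dist = {i: 0}
    queue, frontier = deque([i]), []
    while queue:
        u = queue.popleft()
        if dist[u] == ell:          # reached the ball boundary: stop expanding
            frontier.append(u)
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    k_i = len(adj[i])
    return (k_i - 1) * sum(len(adj[j]) - 1 for j in frontier)

# Star centre 0 with leaves 1, 2, 3; leaf 1 has an extra neighbour 4.
star = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0], 4: [1]}
```

Ranking by $\mathrm{CI}_\ell$ and attacking the top-$K$ nodes then follows the same template as the other centrality-based heuristics.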
The network decyling-based approaches, including the Minsum~\cite{braunstein:et.al:2016:PNAS}, BPD~\cite{Mugisha:Zhou:2016:PhyRevE} , CoreHD~\cite{Zdeborov:et.al:2016:SciReports} and etc., tried to first finding those nodes whose removals can cause an acyclic network, that is, a network does not contain loops. Next, a kind of greedy tree-breaking algorithms are used to break the residual forest into small disconnected components. In addition, after determining the TAS, some nodes can be reinserted back into the original network, if such reinsertion would not cause the increase of the residual GCC. For example, the BPD~\cite{Mugisha:Zhou:2016:PhyRevE} applies the spin glass theory to solve the feedback vertex set problem when searching nodes for breaking all network loops. Besides, the GND~\cite{Ren:et.al:2018:PNAS} proposes a spectral approach unifying the equal graph partitioning and vertex cover problem, iteratively partitioning the network into two components.
\section{NIRM: A Neural Influence Ranking Model for Network Dismantling}
\label{sec:NIRM}
The proposed neural influence ranking model (NIRM) takes the adjacency matrix $\mathbf{A}\in \mathbb{R}^{N\times N}$ of a network as input, and outputs a vector of \textit{dismantling scores} $\mathbf{s} \in \mathbb{R}^{N\times 1}$ for all nodes. Although different input networks have different sizes (viz., different $N$), we design several neural modules not only for converting network dimensions but also for learning nodes' representations. Note that all learnable parameters in the neural modules are trained on tiny synthetic networks. We compute local and global influence scores based on the learned node representations and fuse them to output the dismantling scores.
\par
Fig.~\ref{Fig:NIRM} presents the NIRM architecture, which consists of the following modules: (1) feature engineering, (2) feature scoring, (3) representation encoding, (4) local scoring, (5) global scoring, and (6) fusion and rank.
\subsection{Feature engineering} This module constructs a feature vector for each node based on the adjacency matrix of the input network. Although many attributes and measures, either local node-centric or global network-wide, could be used as features, we prefer the local ones, as they do not incur too much computational burden. Specifically, our NIRM uses the following five local node-centric attributes to construct a \textit{local feature matrix} $\mathbf{X}\in \mathbb{R}^{N \times 5}$, where each row is the feature vector $\mathbf{x}_i$ of node $v_i$, consisting of: (1) the number of its one-hop neighbors, viz., its degree; (2) the number of its two-hop neighbors; (3) the average degree of its one-hop neighbors; (4) its local clustering coefficient; (5) a constant for regulation. These features contain the basic neighborhood information of a node, revealing its local connectivity and structural characteristics to some extent.
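Assuming an undirected graph given as symmetric adjacency lists with comparable node labels, and taking the regulation constant to be $1.0$ (our assumption for illustration), the five features can be computed per node as follows:

```python
def local_features(adj):
    """Per-node feature vector: [degree, #two-hop neighbours,
    mean neighbour degree, local clustering coefficient, constant]."""
    feats = {}
    for u, raw in adj.items():
        nbrs = set(raw)
        k = len(nbrs)
        # nodes reachable in exactly two hops (not neighbours, not u itself)
        two_hop = set()
        for v in nbrs:
            two_hop |= set(adj[v])
        two_hop -= nbrs | {u}
        avg_nbr_deg = sum(len(adj[v]) for v in nbrs) / k if k else 0.0
        # clustering: fraction of neighbour pairs that are themselves linked
        # (the `w > v` guard counts each undirected edge once)
        links = sum(1 for v in nbrs for w in adj[v] if w in nbrs and w > v)
        cc = 2.0 * links / (k * (k - 1)) if k > 1 else 0.0
        feats[u] = [k, len(two_hop), avg_nbr_deg, cc, 1.0]
    return feats

# Triangle 0-1-2 with a pendant node 3 attached to node 2.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
X = local_features(g)
```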
\subsection{Feature scoring} This module makes an initial evaluation of each node's importance to network stability based on its local features. On the one hand, we want to measure a node's connectivity and structural characteristics; on the other hand, we would also like to make a network-wide importance estimation from the local features across all nodes. To this end, we use a fully connected neural network with a network-wide shared kernel to convert local features into scores:
\begin{equation}\label{Eq:FeatureScoring}
s_i^{init} = \operatorname{ReLU} ( \mathbf{W}_1 \mathbf{x}_{i}^{\mathsf{T}} + \mathbf{b}_1),
\end{equation}
where $s_i^{init}$ is the initial score of node $v_i$, $\mathbf{W}_1$ and $\mathbf{b}_1$ are the shared kernel with learnable parameters. We note that other neural networks can also be used for initial scoring.
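Eq.~(\ref{Eq:FeatureScoring}) is a one-dimensional affine map shared by all nodes, followed by a ReLU. A Python sketch with made-up, untrained kernel weights (the actual values of $\mathbf{W}_1$ and $\mathbf{b}_1$ are learned during training):

```python
def feature_scores(X, w, b):
    """Shared-kernel scoring: s_i = ReLU(w . x_i + b) for every node i."""
    return [max(0.0, sum(wk * xk for wk, xk in zip(w, x)) + b) for x in X]

# Two feature rows and an illustrative, untrained kernel favouring degree.
X = [[2, 1, 2.5, 1.0, 1.0],
     [1, 2, 3.0, 0.0, 1.0]]
w, b = [1.0, 0.5, 0.0, 0.0, 0.0], -2.0
s_init = feature_scores(X, w, b)   # [0.5, 0.0]
```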
\subsection{Representation encoding} This module encodes the initial score of each node, together with its neighbors' scores, yet in a discriminative way, into a representation vector in a latent space. After feature engineering and scoring, the initial score reflects a node's influence on network stability to some extent.
However, the initial score obtained from the network-wide conversion only captures statistical metrics of the node-centric ego-net, and may not fully exploit neighbors' properties or the multi-relational interactions between neighbors.
We would like to further mine potential neighborhood interactions for evaluating the influence of a node specific to its structure. To this end, we employ graph neural network techniques for encoding nodes' representations.
\par
We adopt the Graph Attention Network (GAT)~\cite{Petar:et.al:2018:ICLR}, consisting of $L$ \textit{neighborhood aggregation} (NA) layers, to learn nodes' representations from their initial scores and high-order structure information. The input is the initial score vector $\mathbf{s}^{init}$, and the output is the representation matrix $\mathbf{H}$. Each NA layer applies $H$ \textit{attention heads} to encode node-centric structural properties from different views. We take a node $v_i$ as an example to introduce the core operations in the $l$-th NA layer:
\begin{itemize}
\item Compute the attention coefficients $\alpha_{i j}^h$ between $v_i$ and its one-hop neighbors $v_j \in \mathcal{N}_{i}$ (including itself) by the $h$-th attention head $\mathbf{a}_{h}$ (with $\mathbf{a}_h^{\mathsf{T}}$ its transpose):
\begin{equation}
\alpha_{i j}^{h} = \frac{\exp [\operatorname{LeakyReLU}(\mathbf{a}_h^{\mathsf{T}}\mathbf{h}_{ij})]}{\sum_{v_k \in \mathcal{N}_i \cup v_i} \exp [\operatorname{LeakyReLU}(\mathbf{a}_h^{\mathsf{T}}\mathbf{h}_{ik})]},
\end{equation}
where $\mathbf{h}_{ij}=\mathbf{h}_i^{(l-1)} || \mathbf{h}_j^{(l-1)}$ stands for the concatenation of the hidden embeddings from the $(l-1)$-th NA layer.
\item Aggregate the converted neighborhood hidden embeddings in a weighted way to obtain an aggregation embedding $\mathbf{m}_i^h$ for the $h$-th attention head:
\begin{equation}
\mathbf{m}_i^h = \sum_{v_j \in \mathcal{N}_i \cup v_i} \alpha_{i j}^h \mathbf{W}_2 \mathbf{h}^{(l-1)}_{j}.
\end{equation}
\item Concatenate the converted aggregation embeddings of the $H$ attention heads to output the hidden embedding $\mathbf{h}_i^{(l)}$ of the $l$-th NA layer:
\begin{equation}
\mathbf{h}_i^{(l)} = \|_{h=1}^{H} \operatorname{ReLU}(\mathbf{m}_i^h).
\end{equation}
\end{itemize}
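To make the above steps concrete, the following NumPy sketch implements a single attention head of one NA layer. This is an illustrative re-implementation under our own shape and naming assumptions (\texttt{na\_layer\_single\_head}, dense adjacency), not the authors' PyTorch code:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def na_layer_single_head(H, adj, W2, a):
    """One neighborhood-aggregation step for a single attention head.

    H   : (N, d_in)     hidden embeddings from the previous layer
    adj : (N, N)        adjacency matrix WITH self-loops (1 = neighbor)
    W2  : (d_in, d_out) shared linear transform
    a   : (2 * d_in,)   attention vector of this head
    """
    N = H.shape[0]
    M = np.zeros((N, W2.shape[1]))
    for i in range(N):
        nbrs = np.nonzero(adj[i])[0]  # one-hop neighbors incl. v_i itself
        # unnormalized attention logits a^T [h_i || h_j], then softmax
        logits = np.array([leaky_relu(a @ np.concatenate([H[i], H[j]]))
                           for j in nbrs])
        alpha = np.exp(logits - logits.max())
        alpha /= alpha.sum()
        # weighted aggregation of linearly transformed neighbor embeddings
        M[i] = sum(w * (H[j] @ W2) for w, j in zip(alpha, nbrs))
    return np.maximum(M, 0.0)  # ReLU, applied before concatenating heads
```

In a full NA layer, the outputs of the $H$ heads would then be concatenated to form $\mathbf{h}_i^{(l)}$.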
Neighboring nodes influence each other through their own initial scores as well as their local structural properties. Within one NA layer, low-order structural information is attentively encoded for node representation learning. Through multiple NA layers, neighbors up to $L$ hops away are also included in representation learning, which helps capture high-order neighbors' influences as well as some high-order topological information.
\subsection{Local scoring} This module evaluates the local influence of each node, i.e., how much its removal destroys the local network structure centered at the node. From a node-centric view, one could expect that if a node has more similar neighbors, then its removal may not only affect the connectivity of its local structure, but also trigger further instability cascades through its similar neighbors. Based on these considerations, we compute a local influence score $s_i^{local}$ for each node $v_i$ as follows:
\begin{equation}\label{Eq:LocalScore}
s_{i}^{local} = \frac{1}{|\mathcal{N}_i|} \sum_{v_j \in \mathcal{N}_i} \langle \mathbf{h}_{i}, \mathbf{h}_{j} \rangle + \bar{d}_i,
\end{equation}
where $\langle \cdot, \cdot \rangle$ stands for the dot product of two vectors, $\bar{d}_i = d_i/d_{max}$ is the normalized node degree, and $d_{max}$ is the largest degree in the input network.
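For illustration, the local score in Eq.~(\ref{Eq:LocalScore}) can be computed directly from the learned representations. The NumPy sketch below is our own illustration (function name and dense-matrix interface are assumptions), not the authors' code:

```python
import numpy as np

def local_scores(H, adj):
    """Local influence score: mean dot-product similarity with one-hop
    neighbors plus the normalized node degree.

    H   : (N, d) node representations from the encoder
    adj : (N, N) adjacency matrix WITHOUT self-loops
    """
    deg = adj.sum(axis=1)
    d_max = deg.max()
    s = np.zeros(len(H))
    for i in range(len(H)):
        nbrs = np.nonzero(adj[i])[0]
        sim = np.mean([H[i] @ H[j] for j in nbrs]) if len(nbrs) else 0.0
        s[i] = sim + deg[i] / d_max
    return s
```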
\subsection{Global scoring} This module compares the global influence of one node, i.e., how much its removal damages the global topology, with that of other nodes. Consider two nodes with similar local structures that are distant from each other. As dismantling concerns the whole network, it is necessary to further distinguish the importance of the two nodes for their global ranking. Based on these considerations, we design a global projection operator $\mathbf{P}$ to learn a global influence score for each node:
\begin{equation}\label{Eq:GlobalScore}
s_i^{global} = \langle \mathbf{P}^{\mathsf{T}}, \hat{\mathbf{h}}_i \rangle,
\; \mathrm{where} \; \hat{\mathbf{h}}_i = \sum_{v_j \in \mathcal{N}_{i} \cup v_i} \frac{1}{\sqrt{|\mathcal{N}_i|+1}} \cdot \mathbf{h}_j,
\end{equation}
where $\mathbf{P}$ denotes the learnable projection vector and $\hat{\mathbf{h}}_i$ is the neighborhood representation of node $v_i$. We note that using $\hat{\mathbf{h}}_i$ instead of $\mathbf{h}_i$ again exploits node-centric local structural information, so that the global comparison is conducted over a kind of \textit{ego-structure}, though represented by a single node.
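A corresponding sketch of the global score of Eq.~(\ref{Eq:GlobalScore}), again our own NumPy illustration (in the actual model $\mathbf{P}$ is learned during training):

```python
import numpy as np

def global_scores(H, adj, P):
    """Global influence score: project the degree-normalized neighborhood
    representation h_hat_i onto the projection vector P.

    H   : (N, d) node representations
    adj : (N, N) adjacency matrix WITHOUT self-loops
    P   : (d,)   global projection vector
    """
    s = np.zeros(len(H))
    for i in range(len(H)):
        nbrs = list(np.nonzero(adj[i])[0])
        if i not in nbrs:
            nbrs.append(i)  # the sum runs over N_i together with v_i itself
        h_hat = sum(H[j] for j in nbrs) / np.sqrt(len(nbrs))
        s[i] = P @ h_hat
    return s
```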
\subsection{Fusion and rank} This module fuses the local score $s_i^{local}$ and the global score $s_i^{global}$ into the \textit{dismantling score} $s_i^{dis}$ for each node. As we have already designed sophisticated mechanisms for computing $s_i^{local}$ and $s_i^{global}$, we simply add up the two scores for $s_i^{dis}$, that is,
\begin{equation}\label{Eq:DismantlingScore}
s_i^{dis} = s_i^{global} + s_i^{local}.
\end{equation}
Finally, nodes are ranked according to their dismantling scores.
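The fusion-and-rank step is then straightforward; a minimal sketch:

```python
import numpy as np

def rank_nodes(s_local, s_global):
    """Fuse local and global scores by addition (Eq. DismantlingScore)
    and return node indices ranked by descending dismantling score."""
    s_dis = np.asarray(s_local) + np.asarray(s_global)
    return [int(i) for i in np.argsort(-s_dis)]
```

For example, `rank_nodes([1, 0, 2], [0, 0, 1])` returns `[2, 0, 1]`.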
\subsection{Complexity analysis}
The NIRM model requires the node feature matrix $\mathbf{X}$ and the sparse adjacency matrix $\mathbf{A}$ as input, and can be applied to any given network to infer dismantling scores. The time complexity of NIRM consists of three parts. The first part explicitly learns the initial scores from local features, which takes $O(|V|)$, where $|V|$ is the number of nodes. The second part, GAT, applies the self-attention mechanism to encode high-order structure and neighbor information into topology representations; the time complexity of the embedding process is $O(L(|V| + |E|))$, where $L$ is the number of propagation layers (e.g., 3) and $|E|$ is the number of edges. The third part combines the local and global influence scores to rank the nodes of the entire graph, where computing the local and global influence scores takes $O(|E|)$ and $O(|V|)$ respectively, and the node ranking operation takes $O(|V| \log |V|)$. Therefore, the overall complexity of the NIRM model is $O(|V| + |E| + |V| \log |V|)$.
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{0.6mm}
\caption{Statistical properties of real-world networks}
\label{tab:SI_Dataset}
\begin{tabular}{l c c c c c c }
\hline
Network & Category & Nodes & Edges & Density & AVD & Diameter \\
\hline
PPI~\cite{Dongbo:et.al:2003:NAR} & Protein & 2,224 & 6,609 & 0.0027 & 5.94 & 11 \\
HI-II-14~\cite{Thomas:et.al:2014:Cell} & Protein & 4,165 & 13,087 & 0.0016 & 6.28 & 11 \\
Ca-GrQc~\cite{Leskovec:Krevl:2014:SNAP} & Collab. & 4,158 & 13,422 & 0.0016 & 6.46 & 17 \\
NetScience~\cite{Kunegis:et.al:2013:www} & Collab. & 1,461 & 2,742 & 0.0026 & 3.75 & 17 \\
DNCEmails~\cite{Kunegis:et.al:2013:www} & Comm. & 1,866 & 4,384 & 0.0025 & 4.70 & 8 \\
Innovation~\cite{Kunegis:et.al:2013:www} & Comm. & 241 & 923 & 0.0319 & 7.66 & 5 \\
Infectious~\cite{Kunegis:et.al:2013:www} & Contact & 410 & 2,765 & 0.0330 & 13.49 & 9 \\
Genefusion~\cite{Hoglund:et.al:2006:Oncogene} & Gene & 291 & 279 & 0.0066 & 1.92 & 9 \\
P-H~\cite{Kunegis:et.al:2013:www} & Social & 2,000 & 16,098 & 0.0081 & 16.10 & 10 \\
HM~\cite{Kunegis:et.al:2013:www} & Social & 1,858 & 12,534 & 0.0073 & 13.49 & 14 \\
UsPower~\cite{Watts:et.al:1998:Nature} & Grid & 4,941 & 6,594 & 0.0005 & 2.67 & 46 \\
Crime~\cite{Kunegis:et.al:2013:www} & Malicious & 829 & 1,473 & 0.0043 & 3.55 & 10 \\
Corruption~\cite{Ribeiro:et.al:2018:JCN} & Malicious & 309 & 3,281 & 0.0689 & 21.24 & 7 \\
Roget~\cite{Vladimir:Andrej:2006:Pajek} & Lexicon & 1,010 & 3,646 & 0.0072 & 7.22 & 10 \\
Bible~\cite{Kunegis:et.al:2013:www} & Lexicon & 1,773 & 9,131 & 0.0058 & 10.30 & 8 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.2}
\caption{NIRM model configurations and hyperparameters.}
\begin{tabular}{l|c}
\hline
Hyper-parameter & Value \\
\hline
Adam optimizer learning rate & $1 \times{10^{-3}}$ \\
regularization term (L2 penalty) & $1 \times{10^{-4}}$ \\
learning decay & $0.4$ \\
mini-batch size & $6$ \\
number of self-attention heads & $8, 4, 2$ \\% & Number of concatenated heads \\
maximum training epochs & $50$ \\% & Number of maximum training epochs \\
neighbor-aggregation layers & $3$ \\% & Numbers of neighbor-aggregation layers \\
dimensions of node embedding & $32, 16, 8$ \\% & Dimensions of node embedding \\
patience period & $8$ \\% & Training stopped if loss did not decrease for 8 consecutive epochs \\
probability of dropout & $0.1, 0.2$ \\% & Probability of dropout \\
negative slope of Leaky ReLU & $0.2$ \\% & Angle of Leaky ReLU \\
number of neurons per layer & $5, 8$ \\% & Number of neurons per layer \\
\hline
\end{tabular}
\label{Table:Hyperparameter}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width = 17.8cm]{images/Real_Onepass_Exp_CIKM_2.png}
\caption{Comparison of one-pass dismantling performance on real-world networks.
(\textit{A}): The smaller the normalized size of TAS $\rho$, the darker the color. Among the 15 real-world networks, NIRM achieves the best result on 12 networks and the second best on 3. The bottom row provides the average $\rho$ over all 15 networks: ours is 36.58, while the second best is 40.41. (\textit{B} and \textit{E}): the node degree distribution and the attack nodes (red) selected by NIRM for two real-world networks: Roget, containing 1,010 nodes and 3,646 edges, and HM, containing 1,858 nodes and 12,534 edges. (\textit{C} and \textit{F}): the NGCC when removing different fractions of target attack nodes, where the area under our curve is 298.79 and 296.06, and the second smallest is BC (303.28) in Roget and DC (302.55) in HM. (\textit{D} and \textit{G}): the dismantling performance $\rho$ against different dismantling thresholds $\Theta$.}
\label{fig:Real_Onepass_Experiment}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width = 17.8cm]{images/Real_Repeated_Exp_CIKM_2.png}
\caption{Comparison of adaptive dismantling performance on the fifteen real-world networks. (\textit{A}) The smaller the normalized size of TAS $\rho$, the darker the color. Among the 15 networks, NIRM achieves the best result on 14 networks and the second best on 1. The bottom row provides the average $\rho$: ours is 24.76, while the second best is 28.81. (\textit{B} and \textit{E}): The attack nodes (red) selected by NIRM are fewer than those of one-pass dismantling: 37.52$\%$ vs. 55.35$\%$ in Roget, and 22.60$\%$ vs. 34.98$\%$ in HM. (\textit{C} and \textit{F}): the NGCC when removing different fractions of target attack nodes. (\textit{D} and \textit{G}): the dismantling performance $\rho$ against different dismantling thresholds $\Theta$.
}
\label{fig:Real_Repeated_Experiment}
\end{figure*}
\section{Model Training}
\label{sec:Training}
\subsection{Training datasets} We first introduce how to construct training datasets from synthetic model networks. In network science, several generative models have been widely accepted for producing synthetic networks, including Erd\H{o}s-R\'{e}nyi (ER, $p=0.1$) \cite{Erdos:et.al:1960:evolution}, Watts-Strogatz (WS, $k=4, p=0.1$) \cite{Watts:et.al:1998:Nature}, Barab\'{a}si-Albert (BA, $m=3$) \cite{Barabasi:et.al:1999:Science} and Powerlaw-Cluster (PLC, $m=3, p=0.05$) \cite{Holme:et.al:2002:PhyRevE}. Such synthetic model networks can well reflect one or more kinds of topological characteristics of many real-world networks, such as uniform or power-law node degree distributions. We use these generative models to produce a large number of tiny synthetic networks to compose a training dataset. Each synthetic network contains only a few nodes (the number randomly selected between 20 and 30).
\par
We use \textit{exhaustive search} to first find the optimal TAS(s) for each tiny synthetic network. Note that the computational cost of exhaustive search increases exponentially with the network size; this is also why we only use small-scale synthetic networks for the training dataset. Further note that for a synthetic training network $\mathcal{G}_t=(\mathcal{V}_t, \mathcal{E}_t)$, there may exist more than one optimal TAS. For $\mathcal{G}_t$, let $\mathcal{T}_t$ denote the set of its optimal TASs. For each node $v_i \in \mathcal{T}_t$, we count its frequency of appearance in different TASs, denoted by $N_i$. Furthermore, let $N_{max}$ denote the maximum of $N_i$. Then for each $v_i \in \mathcal{T}_t$, we set its initial training score $c_i^0 = N_i / N_{max}$.
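Such an exhaustive search can be sketched with the standard library alone. The following is an illustration for tiny networks only (function names are our own; the GCC computation and threshold-based stopping follow the definitions used in this paper):

```python
from itertools import combinations

def gcc_size(nodes, edges):
    """Size of the giant connected component, via DFS (stdlib only)."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v); adj[v].add(u)
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop(); size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        best = max(best, size)
    return best

def optimal_tas(nodes, edges, theta):
    """All minimum-size target attack sets whose removal drives the
    normalized GCC to at most theta (exhaustive; tiny networks only)."""
    n = len(nodes)
    for k in range(n + 1):
        hits = []
        for combo in combinations(nodes, k):
            S = set(combo)
            rest = [v for v in nodes if v not in S]
            kept = [e for e in edges if S.isdisjoint(e)]
            if gcc_size(rest, kept) <= theta * n:
                hits.append(S)
        if hits:   # the first k with any hit gives the minimum-size TASs
            return hits
```

On a 5-node path graph with $\Theta=0.4$, the only optimal TAS is the middle node.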
\par
Although the aforementioned approach assigns a ground-truth score to each node $v_i \in \mathcal{T}_t$, there may exist nodes $v_j \in \mathcal{V}_t \backslash \mathcal{T}_t$ without such a score, i.e., $c_j^0=0$. On the one hand, such a node $v_j$ may still play a role in network stability, though with a smaller influence. On the other hand, the objective of our neural model is to output dismantling scores for final ranking. To take care of all nodes and avoid score imbalance, we propose the following \textit{training score propagation} algorithm to propagate the initial training score of each node $v_i \in \mathcal{T}_t$ to its one-hop neighbors:
\begin{equation}\label{Eq:ScorePropagation}
c_i = \sum_{v_j \in \mathcal{N}_i} \frac{c_j^0}{|\mathcal{N}_j|} + c_i^0, \; \forall v_i \in \mathcal{V}_t,
\end{equation}
where $\mathcal{N}_j$ is the set of $v_j$'s neighbors and $c_j^0$ denotes the initial training score of $v_j$, which is allocated equally to $v_j$'s neighbors (divided by the number of its neighbors). After score propagation, the training score $c_i$ of node $v_i$ is the sum of its own score and the scores it receives.
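Eq.~(\ref{Eq:ScorePropagation}) amounts to one round of degree-normalized score spreading; a plain-Python sketch (our own illustration):

```python
def propagate_scores(c0, neighbors):
    """Training-score propagation: each node keeps its own initial score
    and receives an equal share of every neighbor's initial score.

    c0        : dict node -> initial training score c_i^0
    neighbors : dict node -> list of one-hop neighbors
    """
    return {i: c0[i] + sum(c0[j] / len(neighbors[j]) for j in neighbors[i])
            for i in c0}
```

For a triangle where only one node starts with score 1, each of the other two receives 0.5.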
\par
Fig.~\ref{Fig:ScorePropagation} illustrates the training score initialization, propagation and labeling by an example network.
\subsection{Loss function}
Let $\mathbf{c}$ denote the nodes' scores of an instance in the training dataset, and let $\hat{\mathbf{s}}$ denote the scores estimated by NIRM. The loss over one training network is defined as the following \textit{mean squared error}:
\begin{equation}\label{Eq:Loss}
Loss = \frac{1}{|\mathcal{V}|} \sum_{v_i \in \mathcal{V}} (c_i - \hat{s}_i)^2.
\end{equation}
We implemented NIRM in the PyTorch framework and used the Adam optimizer to train the model. During training, we decay the learning rate and apply an early stopping criterion based on the loss on the validation set.
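The loss of Eq.~(\ref{Eq:Loss}) is the standard mean squared error; a minimal NumPy sketch (equivalent in effect to PyTorch's `MSELoss`):

```python
import numpy as np

def mse_loss(c, s_hat):
    """Mean squared error between ground-truth training scores c and the
    scores s_hat estimated by the model."""
    c, s_hat = np.asarray(c, float), np.asarray(s_hat, float)
    return float(np.mean((c - s_hat) ** 2))
```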
\par
In our NIRM, the learnable parameters include $\mathbf{W}_1$ and $\mathbf{b}_1$ in feature scoring, $\mathbf{a}_{l,h}$ ($l=1,\ldots,L$; $h=1,\ldots,H$) and $\mathbf{W}_2$ in representation learning, and $\mathbf{P}$ in global scoring. There are in total $854$ learnable parameters in our model. We generate 4,000 synthetic model networks, with $95\%$ used as training instances and $5\%$ as validation instances.
\section{Experiment Results and Analysis}
\label{sec:Experiment}
\subsection{Experiment settings}
We evaluate NIRM on fifteen real-world networks from various domains, including infrastructure, communication, collaboration, and malicious networks. Table \ref{tab:SI_Dataset} presents the characteristics and statistics of these real-world networks; model configurations and training hyper-parameters can be found in Table \ref{Table:Hyperparameter}. All experiments were conducted on an 8-core workstation with an Intel Xeon E5-2620 v4 CPU at 2.10GHz, 32GB of RAM, and an Nvidia GeForce GTX 1080Ti GPU with 11GB of memory. We release our code on GitHub at: https://github.com/JiazhengZhang/NIRM.
\subsection{Dismantling strategy}
For an input network, our NIRM outputs dismantling scores for ranking all nodes, enabling two dismantling strategies. The first one, called \textit{one-pass dismantling}, simply selects the top-$K$ nodes according to their dismantling scores, such that removing only these $K$ nodes drives the \textit{normalized size of the GCC} (NGCC) of the residual network to at most the threshold $\Theta$. The NGCC is defined as the number of nodes in the GCC divided by that of the original network. The second one, called \textit{adaptive dismantling}, selects only the top-$1$ node to remove; the residual network after removing this node is then fed back as input to select the next node for removal; this selection-and-removal process is repeated until the NGCC satisfies the threshold.
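The two strategies differ only in whether scores are computed once or recomputed on each residual network. The following stdlib-only sketch makes the difference explicit (illustrative code; `score_fn` stands in for any scoring routine, e.g., NIRM inference or a degree heuristic):

```python
def gcc_size(nodes, edges):
    """Size of the giant connected component, via DFS (stdlib only)."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v); adj[v].add(u)
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop(); size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        best = max(best, size)
    return best

def one_pass_dismantle(scores, nodes, edges, theta):
    """Rank once by the given scores, then remove top nodes until NGCC <= theta."""
    n, removed = len(nodes), set()
    for v in sorted(nodes, key=lambda u: -scores[u]):
        removed.add(v)
        rest = [u for u in nodes if u not in removed]
        if gcc_size(rest, [e for e in edges if removed.isdisjoint(e)]) <= theta * n:
            return removed
    return removed

def adaptive_dismantle(score_fn, nodes, edges, theta):
    """Re-score the residual network after every single removal."""
    n, removed = len(nodes), set()
    while True:
        rest = [u for u in nodes if u not in removed]
        kept = [e for e in edges if removed.isdisjoint(e)]
        if gcc_size(rest, kept) <= theta * n:
            return removed
        scores = score_fn(rest, kept)
        removed.add(max(rest, key=lambda u: scores[u]))
```

On a star graph, both strategies with a degree-based score remove only the hub.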
\subsection{Baseline methods}
We compare NIRM with other state-of-the-art approaches that focus on estimating node importance. Approaches for network dismantling can generally be classified into two categories according to the corresponding dismantling strategies.
\textbf{One-pass approaches.} Common metric-based approaches mostly belong to this category. The selected baselines are introduced below:
\begin{itemize}
\item DC (Degree Centrality)~\cite{Albert:et.al:2000:Nature}. Sequentially remove nodes in order of degree centrality.
\item CI (Collective Influence)~\cite{Morone:et.al:2015:Nature}. CI is defined as the product of the node degree (minus one) and total degrees of neighboring nodes at the surface of a ball of constant radius.
\item EC (Eigenvector Centrality)~\cite{Bonacich:et.al:1987:AmericaJS}. EC is based on the idea that a node is more important if it is connected to more important neighbors.
\item BC (Betweenness Centrality)~\cite{Freeman:Linton:1977:Sociometry}. BC is a path-based metric that counts the number of shortest paths passing through a node.
\item CC (Closeness Centrality)~\cite{Bavelas:Alex:1950:JASA}. CC measures the average distance between a node and all other nodes.
\item HC (Harmonic Centrality)~\cite{Paolo:et.al:2014:IM}. HC is a variant of CC that can be applied to disconnected networks.
\item PC (Percolation Centrality)~\cite{Piraveenan:et.al:2013:PloS}. PC quantifies the relative impact of nodes based on their topological connectivity as well as their percolation states.
\end{itemize}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth, height= 0.45\textwidth]{images/SI_Sta_CIKM_3.png}
\caption{The NGCC when removing different fractions of target attack nodes
under one-pass dismantling.}
\label{fig:SI_Real_Onepass}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth, height= 0.45\textwidth]{images/SI_Dyn_CIKM_3.png}
\caption{The NGCC when removing different fractions of target attack nodes
under adaptive dismantling.}
\label{fig:SI_Real_Repeated}
\end{figure*}
\par
\textbf{Adaptive approaches.} We also explore the performance of adaptive approaches on network dismantling; these approaches are described as follows:
\begin{itemize}
\item DC~\cite{Albert:et.al:2000:Nature}. The adaptive version of DC, which recalculates the node degree centrality in the remaining network after each removal.
\item CI~\cite{Morone:et.al:2015:Nature}. The adaptive version of CI, which measures the importance of nodes based on the local structure of a ball of constant radius around each node.
\item BPD~\cite{Mugisha:Zhou:2016:PhyRevE}. BPD builds on spin glass theory and proposes a belief-propagation-guided decimation algorithm. It first removes nodes until all loops in the residual network are eliminated. After that, BPD checks the size of the tree components and iteratively removes the root nodes.
\item CoreHD~\cite{Zdeborov:et.al:2016:SciReports}. CoreHD is similar to BPD in that both eliminate loops and then break trees. It first adaptively removes the nodes with the highest degree from the 2-core of the network. Once the 2-core is empty, it adopts a greedy tree-breaking algorithm.
\item GND~\cite{Ren:et.al:2018:PNAS}. GND proposes a spectral approach based on a node-weighted Laplacian operator. In each step, it partitions the remaining network into two components and removes, at minimal cost, a set of nodes covering all edges between the two non-overlapping components.
\item FINDER~\cite{Fan:et.al:2020:NatureML}. FINDER is a reinforcement learning-based method. It encodes the state (the remaining network) and all possible actions (node removals) into embedding vectors, so that it takes the action $a$ with the maximum expected reward given state $s$.
\item NSKSD~\cite{Yiguang:et.al:2021:TCS}. NSKSD considers the overlapping effects between nodes and introduces the novel metrics KSD and NS. It proposes a double-turns selection mechanism to remove nodes in an adaptive way.
\end{itemize}
Note that some centrality-based approaches, e.g., BC, CC, and EC, are not included among the adaptive approaches: although they can be applied for adaptive dismantling, their adaptive versions incur great computational burdens yet yield unsatisfactory dismantling performance. In addition, we implement NIRM as well as the other state-of-the-art approaches without node reinsertion.
\begin{table}[t]
\centering
\caption{Ablation study of dismantling performance $\rho$ when using node intermediate scores and final scores to select attack nodes.}
\small
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{1.6mm}
(\textbf{A}) Adaptive dismantling
\begin{tabular}{l| c c c c c c}
\hline
Methods & UsPower & P-H & Ca-GrQc & Infectious & Bible & HM \\
\hline
\hline
NIRM-IS & 12.37 & 53.95 & 32.23 & 65.61 & 41.06 & 26.43 \\
NIRM-GS & 67.86 & 89.35 & 74.43 & 96.83 & 83.42 & 89.24 \\
NIRM-LS & 9.05 & 27.55 & 13.32 & 55.85 & 20.47 & 24.17 \\
NIRM & \textbf{8.58} & \textbf{25.70} & \textbf{11.98} & \textbf{54.88} & \textbf{19.35} & \textbf{22.60} \\
\hline
\end{tabular}
\vspace{10pt}
(\textbf{B}) One-pass dismantling
\begin{tabular}{l| c c c c c c}
\hline
Methods & UsPower & P-H & Ca-GrQc & Infectious & Bible & HM \\
\hline
\hline
NIRM-IS & 18.94 & 86.80 & 35.95 & 79.02 & 31.75 & 48.44 \\
NIRM-GS & 98.91 & 96.45 & 97.55 & 98.78 & 94.47 & 97.42 \\
NIRM-LS & 18.32 & 49.15 & 24.63 & 81.71 & 28.26 & 38.64 \\
NIRM & \textbf{16.81} & \textbf{47.90} & \textbf{24.29} & \textbf{76.83} & \textbf{27.35} & \textbf{34.98} \\
\hline
\end{tabular}
\label{Table:Ablation}
\end{table}
\subsection{Performance evaluation}
We first compare the one-pass dismantling performance of our NIRM with that of seven commonly used ranking metrics. Fig.~\ref{fig:Real_Onepass_Experiment}~\textit{A} presents the dismantling performance in terms of the \textit{normalized size of TAS} (denoted by $\rho$, with $\rho=|\mathcal{V}_{tas}|/|\mathcal{V}|$) on fifteen real-world networks, when setting the dismantling threshold $\Theta = 0.01$. Fig.~\ref{fig:Real_Onepass_Experiment}~\textit{B} and~\textit{E} visualize the attack nodes (red) in Roget and HM to illustrate that our model is capable of identifying critical nodes for network dismantling. For different dismantling thresholds $\Theta$ in Roget and HM, as shown in Fig.~\ref{fig:Real_Onepass_Experiment}~\textit{D} and \textit{G}, NIRM consistently outperforms its competitors. Our NIRM achieves the smallest $\rho$ on twelve real-world networks and the second smallest on the other three. On average, NIRM selects about $36.58\%$ of the nodes to dismantle a network, while the second best metric requires attacking $40.41\%$ of the nodes.
\par
We next focus on the adaptive dismantling strategy. In general, as each selection-and-removal iteration operates on the current residual network, the adaptive version of an algorithm can be expected to outperform its one-pass version. We note that although the BC, CC, HC, PC, and EC algorithms can also be applied for adaptive dismantling, they are not included for comparison, as their adaptive versions incur great computational burdens yet yield unsatisfactory dismantling performance. On the other hand, we include seven state-of-the-art schemes for performance comparison: DC~\cite{Albert:et.al:2000:Nature}, CI~\cite{Morone:et.al:2015:Nature}, BPD~\cite{Mugisha:Zhou:2016:PhyRevE}, CoreHD~\cite{Zdeborov:et.al:2016:SciReports}, GND~\cite{Ren:et.al:2018:PNAS}, FINDER~\cite{Fan:et.al:2020:NatureML} and NSKSD~\cite{Yiguang:et.al:2021:TCS}.
\par
Fig.~\ref{fig:Real_Repeated_Experiment}~\textit{A} compares the adaptive dismantling performance on the same fifteen real-world networks. It is not unexpected that the adaptive versions of NIRM, DC, and CI require fewer attack nodes to meet the same dismantling threshold. Among the fifteen real-world networks, our NIRM achieves the best result on fourteen and the second best on one. Furthermore, on average, our NIRM needs to attack only $24.76\%$ of the nodes, an improvement of $4.05\%$ over the second best, NSKSD, which requires $28.81\%$ attack nodes. Fig.~\ref{fig:Real_Repeated_Experiment}~\textit{C} and \textit{F} plot the NGCC when removing different fractions of target attack nodes. We observe that NIRM often achieves the smallest NGCC for the same number of removed nodes, indicating that our model better captures the response of the system throughout the whole attack process. In addition, the area under such a curve evaluates the average attack efficiency of a dismantling scheme. The areas of our NIRM are $248.27$ in HM and $250.84$ in Roget, respectively, much smaller than those of other popular methods, e.g., FINDER ($259.81$) and CI ($253.56$) in the corresponding networks.
See Fig.~\ref{fig:SI_Real_Onepass} and Fig.~\ref{fig:SI_Real_Repeated} for more dismantling results on eight real-world networks.
\par
Table~\ref{Table:Ablation} reports the results of our ablation study, where six real-world networks are used to examine the normalized size of TAS under adaptive and one-pass dismantling. Recall that in our NIRM, three intermediate scoring vectors, i.e., the initial, local, and global scores, can also be extracted for ranking nodes and dismantling networks. It can be seen that although the local scores alone already achieve competitive dismantling performance, their fusion with the global scores further reduces the number of target attack nodes. This corroborates our design objective of using a global projection to enable network-wide comparisons of nodes' influences.
\section{Conclusion}
\label{sec:Conclusion}
Although many heuristic algorithms have been proposed to tackle the network dismantling problem, little has been done on employing recent deep learning techniques. The main challenge stems from the fact that real-world networks have diverse sizes and distinct characteristics, which seems to discourage the use of a single neural model for different networks. In addition, training a neural model faces the difficulty of unknown ground-truth labels, especially for large networks. Nonetheless, this article has provided an insightful trial of designing and training a neural influence ranking model for network dismantling. The key design philosophy is to encode the local structural and global topological characteristics of each node to rank its influence on network stability. Another factor of success is our score propagation, which allows recruiting only tiny synthetic networks for model training.
\par
Experiments on fifteen real-world networks have validated our NIRM as a promising solution to the network dismantling task. Yet we would like to note again that this style of training on tiny networks while applying the model to real-world networks could open a new window for further investigations. Indeed, we acknowledge that our neural model is far from perfect, and many advances can be expected. In NIRM, only a few local attributes are crafted as initial features. A possible extension is to also take node-centric motifs~\cite{Shao:et.al:2021:TKDD} into account for feature encoding. Also, our global projection kernels are somewhat plain, while sophisticated modules like Parametric UMAP~\cite{Sainburg:et.al:2021:NeuralCompute} and techniques like contrastive learning~\cite{You:et.al:2020:NIPS} could also be tried. Finally, we would like to encourage further investigations on more delicate labeling strategies as well as training mechanisms.
\begin{acks}
This work is supported in part by National Natural Science Foundation of China (Grant No: 62172167).
We also plan to deploy our NIRM model on MindSpore\footnote{http://www.mindspore.cn/}, a new deep learning computing framework; this is left for future work.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\balance
\section{Introduction}
Face anti-spoofing has always been a key challenge for all face verification and recognition systems. Conventional face anti-spoofing systems used eigenfaces~\cite{zhang1997face}, HoG (Histogram of Oriented Gradients)~\cite{albiol2008face}, or LBP (Local Binary Pattern) features to perform the task~\cite{rahim2013face}, whereas recent systems mostly involve deep neural features such as DeepFace~\cite{parkhi2015deep}, FaceNet~\cite{schroff2015facenet}, and OpenFace~\cite{baltruvsaitis2016openface}.
The emerging and pervasive usage of IR and depth sensors has greatly simplified the detection of real and fake images through image processing techniques; however, using these sensors is not always feasible, and moreover, most companies' previously distributed products do not include these technologies.
In this paper, we address the problem of face anti-spoofing merely using the RGB frames of traditional cameras, without any auxiliary data, which is the most challenging setting in this area.
To cope with the facial liveness detection challenge, several datasets with their own specific properties have been released so far. NUAA Imposter~\cite{tan2010face}, which is publicly available, contains 7509 fake as well as 5105 real images; however, this volume of data, considering the general scale of data required for deep learning purposes, is insufficient and limiting. Casia-Surf is another dataset, recently published in CVPR 2019~\cite{Parkin_2019_CVPR_Workshops}, which contains all three types of data, i.e., RGB, IR, and depth samples, and is useful for multi-modal systems. This dataset contains 9608 training as well as validation data samples.
The historically influential works in the anti-spoofing area fall into four major approaches. The first is texture-based methods, which incorporate hand-crafted features such as HoG and LBP followed by traditional classifiers such as SVM~\cite{boulkenafet2016face,li2018face}. Temporal-based methods, on the other hand, either use facial motion patterns (e.g., eye blinking) or exploit the movements between the face and the background, employing methods such as optical flow to track the movement of the face in order to discriminate real faces from fake ones~\cite{inproceedings}. Some 3D structure-based methods have also been developed, which either extract depth information from 2D images, or analyze 3D shape information recorded with 3D sensors and then compare the 3D model of the input sample with that of a genuine face~\cite{8901932}. This approach, however, requires specific 3D devices which are not easily available and can be costly. Finally, rPPG (Remote Photoplethysmography) methods extract pulse signals from facial videos without any skin contact~\cite{lin2019face,song2018face}. Nevertheless, all these systems are highly vulnerable to fake-face attacks and masks, and may not cope with such attacks without the assistance of auxiliary data such as depth information or IR~\cite{liu2018learning}. In recent years, deep learning-based methods have been pervasively used for many detection and recognition tasks, including anti-spoofing~\cite{chen2019face,chen2019attention}.
In this paper, we address the anti-spoofing problem and liveness detection using an end-to-end system. The novelty of this work is multi-fold: 1) no hand-crafted features (e.g., HoG and LBP) are utilized; 2) the proposed system requires no auxiliary data (e.g., depth, IR) and relies merely on the input RGB images; 3) although a deep neural network structure is employed, the entire system is light, portable, and deployable on hand-held devices and mobile phones with ordinary commercial processors; 4) the proposed system can deal with all types of fake images, such as depth-wise masks and high-resolution display replaying; 5) since each of the aforementioned achievements raises challenging issues, the proposed system is able to cope with these issues effectively.
The outline of this paper is as follows. The next section explains the background theory underlying the proposed system. Then, the experimental results and their analysis are presented. The conclusion terminates the paper and is followed by the references cited in this paper.
\section{Related Works}
The problem of face anti-spoofing has been nearly solved for flagship devices. For example, the Face ID service on the iPhone X creates a 3D mesh graph of the face using a dot projector, flood illuminator, and IR sensors, along with dedicated neural network hardware (the Neural Engine). Other brands also use almost similar mechanisms to cope with the anti-spoofing problem.
On the other hand, there are a handful of mid-range mobile handsets and previously sold devices which lack these sensors and processing units. Furthermore, many verification tasks are performed using laptop webcams that totally miss these mechanisms. These issues motivate work on a model which can address the problem using RGB images only.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{img/2_big_picture_new}
\vspace{-6mm}
\caption{Data preparation flow graph, in order to gather the real and fake images from the dataset.}
\label{fig:2_big_picture}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth, height=.22\textheight]{img/1_sample}
\vspace{-8mm}
\caption{(Top) the real image samples, (Bottom) The fake image samples corresponding to the top images (from left to right): Mask with mouth and eyes cropped out, a paper mask without cropping, a paper mask with upper part cut in the middle, a paper print mask, and video replaying.}
\label{fig:1}
\end{figure}
\section{The Proposed Framework}
The problem of face anti-spoofing can be cast as a binary classification problem which attempts to discriminate between real and fake images. However, the fake samples normally dominate the real ones, due to the enormous variety of attack types and the variations of the fake images within each type. Hence, the system is likely to be exposed to imbalanced training data.
\subsection{Dataset Preparation Task}
When gathering the required data for anti-spoofing purposes, several issues hinder clean data preparation. For instance, a person passing by in the background, or portraits in the background, can easily leak into the data if no preprocessing is performed. Accordingly, these outliers have to be discarded. The functional flow graph of the proposed system for appropriate and reliable data preparation is depicted in figure~\ref{fig:2_big_picture}.
Various datasets have been generated to handle the liveness detection problem. NUAA Imposter~\cite{tan2010face}, which is publicly available, contains only 7509 fake and 5105 real images; this volume of data is not sufficient for deep network-based applications at all. CASIA-SURF~\cite{zhang2019casia} is a recently released dataset which contains RGB, IR, and depth data samples and is appropriate for multi-modal systems. It contains 9608 training and validation data samples, which also does not suffice for deep structured training models. The dataset used in this paper is ROSE-Youtu~\cite{li2018face}, which contains real videos and their corresponding fake videos; the data samples therefore have to be extracted from video rather than being provided as images. Some samples of images taken from this dataset are depicted in figure~\ref{fig:1}.
In order to extract images from the ROSE-Youtu dataset, we employed the MTCNN network~\cite{zhang2016joint} for face detection. To do so, we loaded every one of the $3350$ videos in the dataset and split it into frames. Within each frame, face detection was performed using MTCNN; the face region was then cropped out and placed in its associated class of data~\cite{ghofrani2019realtime}.
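The per-frame extraction step can be sketched as follows. The detector and cropping routines are stand-ins (MTCNN and the actual cropping code in the paper), and the single-face filter reflects the outlier removal discussed above:

```python
def extract_faces(frames, detect_faces, crop):
    """Run a face detector on every frame and keep only frames in
    which exactly one face is found; frames with extra faces
    (background passers-by, portraits) are discarded as outliers."""
    kept = []
    for frame in frames:
        boxes = detect_faces(frame)  # MTCNN in the paper
        if len(boxes) == 1:
            kept.append(crop(frame, boxes[0]))
    return kept
```

Any detector returning a list of bounding boxes per frame can be plugged in for `detect_faces`.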
\begin{figure}[b]
\centering
\includegraphics[width=1\linewidth, height=.08\textheight]{img/3_bad_cases}
\caption{Inappropriate samples (left to right): $1^{st}$ and $2^{nd}$ images: Fake samples with real background images being moved; Third up to fifth images: Fake samples with portrayed images in the background.}
\label{fig:3_bad_cases}
\end{figure}
Thus, we prepared a set of $817,519$ data samples without any augmentation. During this preparation, we noticed some real images in which people pass by in the background, which causes the real video to be classified as fake. In addition, there are videos which include portraits behind the scene, with the same effect. Samples of these cases are shown in figure~\ref{fig:3_bad_cases}.
In order to make sure that the model is robust against identity changes, the set of images related to one particular person has been extracted from the data (which contains samples from $20$ persons) and is used as the test data, for both the real and fake images. The rest of the dataset has been divided into $80\%$ and $20\%$ for training and validation purposes, respectively.
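A minimal sketch of this subject-disjoint split (the function and field names are ours, not from the paper):

```python
import random

def subject_disjoint_split(samples, test_subject, val_frac=0.2, seed=0):
    """samples: list of (subject_id, image, label) tuples.  Every
    sample of `test_subject` becomes the test set; the remaining
    samples are shuffled and split 80/20 into train/validation."""
    held_out = [s for s in samples if s[0] == test_subject]
    rest = [s for s in samples if s[0] != test_subject]
    random.Random(seed).shuffle(rest)
    n_val = int(len(rest) * val_frac)
    return rest[n_val:], rest[:n_val], held_out
```

Holding out all samples of one identity ensures the test set shares no subject with training or validation.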
\subsection{The Proposed Architecture}
After gathering and cleaning the data, a state-of-the-art EfficientNet architecture~\cite{tan2019efficientnet} has been employed to perform the binary (real vs. fake) classification, as depicted in figure~\ref{fig:4_eff}.
\begin{figure}[t]
\centering
\includegraphics[width=.86\linewidth, height=.66\textheight]{img/4_eff}
\caption{Anti-spoofing architecture based on EfficientNet B0. Transfer learning has been involved for the weights of the network.}
\label{fig:4_eff}
\end{figure}
This network uses the B0 model, pretrained on the ImageNet dataset, only as the initialization. All layers are trainable. A stack of fully connected (FC) layers ($1024, 256, 32, 2$ neurons, respectively) is appended, with the \textit{swish}~\cite{ramachandran2017swish} activation function in the first two FC layers, and the \textit{softmax} and \textit{tanh} activation functions for the final layers. Moreover, \textit{dropconnect}~\cite{wan2013regularization} and \textit{batch normalization}~\cite{ioffe2015batch} are applied between every two layers to avoid overfitting. The entire model has $5,592,606$ parameters, and optimization is performed with the \textit{Rectified-Adam}~\cite{liu2019variance} algorithm.
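As a sanity check on the model size, the parameter count of the FC head can be computed directly. The 1280-wide input is EfficientNet-B0's pooled feature dimension, which is our assumption here since the text does not state it:

```python
def dense_params(sizes):
    """Trainable parameters (weights + biases) of a stack of fully
    connected layers with the given widths."""
    return sum(i * o + o for i, o in zip(sizes[:-1], sizes[1:]))

# FC stack from the text (1024, 256, 32, 2) on top of a 1280-wide
# feature vector (assumed EfficientNet-B0 output width).
head_params = dense_params([1280, 1024, 256, 32, 2])
```

Under this assumption the head accounts for roughly 1.58 million of the 5.59 million parameters, the rest belonging to the backbone.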
Since the ongoing binary classification is over imbalanced data, as mentioned earlier, monitoring the accuracy of the network is not reasonable. The evaluations are therefore presented using the F1-score, which combines precision and recall as
\begin{eqnarray}
F_1\, \mathtt{score} = 2\cdot\frac{\mathit{precision}\cdot \mathit{recall}}{\mathit{precision}+\mathit{recall}}
\end{eqnarray}
Furthermore, due to the classification nature of the problem, binary cross entropy has been chosen as the loss function.
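The F1-score in the equation above is straightforward to compute from confusion-matrix counts; a minimal helper:

```python
def f1_from_counts(tp, fp, fn):
    """F1-score computed from confusion-matrix counts
    (true positives, false positives, false negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```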
During training, the results in table~\ref{tb:t1} have been achieved:
\begin{table}[h]
\centering
\caption{Training and validation loss and F1-score for the architecture of figure~\ref{fig:4_eff}}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Train} & \multicolumn{2}{c|}{Validation}\tabularnewline
\hline
\hline
Loss & F1-score & Loss & F1-score\tabularnewline
\hline
0.0204 & 99.37 & 0.0137 & 1.0\tabularnewline
\hline
\end{tabular}
\label{tb:t1}
\end{table}
Using dropconnect has caused the training loss to exceed the validation loss, as depicted in figure~\ref{fig:5_eff_res}.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth, height=.13\textheight]{img/5_eff_res}
\caption{Results of the F1-score, and loss of the training and validation for the architecture being proposed in figure~\ref{fig:4_eff}.}
\label{fig:5_eff_res}
\end{figure}
The optimum parameters were obtained after $97$ epochs, and the saved model occupies $68,210$ MBytes. In addition, the confusion matrix for the unseen test data is shown in figure~\ref{fig:6_eff_conf}.
\begin{figure}
\centering
\includegraphics[width=.85\linewidth, height=.2\textheight]{img/6_eff_conf}
\caption{The confusion matrix for the EfficientNet test data.}
\label{fig:6_eff_conf}
\vspace{-4mm}
\end{figure}
As depicted in figure~\ref{fig:6_eff_conf}, the proposed model performs quite well. However, due to its high number of parameters and the use of the swish function, it is not well suited for client-side implementations. This architecture is instead well qualified for the server side.
\subsection{Low Weight Model-Client Side}
Another prevalent architecture which can be incorporated to perform the task is MobileNet V2~\cite{sandler2018mobilenetv2}, which uses separable convolutions with depth-wise and point-wise stages (i.e., as in Xception~\cite{chollet2017xception}). In this work, we have used a minimal variant of such a model, which uses the separable CNN logic with fewer parameters. A visualization of the final layer of MobileNet V2 trained on the ImageNet dataset is depicted in figure~\ref{fig:7_visual}. To visualize the layer, the softmax has been omitted and the output activated linearly.
\begin{figure}[b]
\centering
\includegraphics[width=1\linewidth, height=.2\textheight]{img/7_visual}
\caption{Visualized final layer of the MobileNet. (Top, left to right): Goldfish, White shark, Stingray; (Bottom, left to right): Hen, Ostrich, Brambling.}
\label{fig:7_visual}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth, height=.2\textheight]{img/8_first_cnn}
\vspace{-6mm}
\caption{The primary level kernels for MobileNet V2, pretrained on imageNet.}
\label{fig:8_first_cnn}
\end{figure}
In addition to the previously shown output of the fully connected layer, we also take a look at the primary convolution layer and some middle layers, depicted in figures~\ref{fig:8_first_cnn} and~\ref{fig:15_new}.
\begin{figure}
\centering
\includegraphics[width=1\linewidth, height=.24\textheight]{img/15_new}
\vspace{-6mm}
\caption{(Top to Bottom Rows): The low level kernels for MobileNet V2, The mid-level kernels for MobileNet V2, The high-level kernels for MobileNet V2, all of them are pretrained on imageNet dataset.}
\vspace{-1mm}
\label{fig:15_new}
\end{figure}
These figures indicate that the network has reached a high perception level for the ImageNet dataset with its numerous classes. However, in our work we are interested in only two classes, namely real and fake face images. Therefore, the basic network can easily be simplified with respect to the filters used in the convolution layers, and the number of parameters can be drastically decreased. In our proposed architecture, the number of filters in each layer is $1/3$ of the original, and the input size has been reduced to the minimum, $96 \times 96 \times 3$. By applying these changes, the model volume and the number of parameters drop from $16$ MBytes with $3.47$ million parameters to $3.7$ MBytes with $266,801$ parameters, respectively. Applying the deployment model conversion techniques provided by TensorFlow (e.g., TF-Lite conversion and quantization), an even more compact model volume of $100$ KBytes can be achieved.
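The channel reduction can be illustrated with the width-multiplier rounding rule used by reference MobileNetV2 implementations, which rounds scaled channel counts to multiples of 8; this is a sketch of that rule, not code from the paper:

```python
def scale_filters(filters, alpha=0.35, divisor=8):
    """MobileNetV2-style width scaling: multiply the channel count
    by `alpha`, round to the nearest multiple of `divisor`, never
    go below `divisor`, and never drop more than 10% below the
    exact scaled value."""
    v = max(divisor, int(filters * alpha + divisor / 2) // divisor * divisor)
    if v < 0.9 * filters * alpha:
        v += divisor
    return v
```

With $\alpha=0.35$, for instance, a 32-channel layer is scaled to 16 channels.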
\begin{figure}
\centering
\includegraphics[width=.85\linewidth, height=.64\textheight]{img/9_mobile}
\caption{Low-weight anti-spoofing architecture based on MobileNet V2. A contraction has been applied to the original network, and the remnant weights are used as the initializers.}
\label{fig:9_mobile}
\end{figure}
In our implementation, using a GTX 1080 graphics card and $32$ GBytes of RAM, we could increase the batch size up to 718 samples; to handle this large batch, the \textit{group normalizer}~\cite{wu2018group} has been employed. The evaluation results of training and testing the proposed light model are depicted in figure~\ref{fig:10_mobile_res}.
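Group normalization, unlike batch normalization, computes statistics per sample over groups of channels, which is why it behaves consistently regardless of batch size; a minimal pure-Python sketch for a single sample:

```python
from statistics import fmean

def group_norm(x, groups, eps=1e-5):
    """x: list (channels) of lists (flattened spatial values) for a
    single sample.  Each group of channels is normalized jointly
    using its own mean and variance -- no batch statistics needed."""
    C = len(x)
    g = C // groups
    out = []
    for j in range(groups):
        vals = [v for c in x[j * g:(j + 1) * g] for v in c]
        m = fmean(vals)
        var = fmean([(v - m) ** 2 for v in vals])
        s = (var + eps) ** 0.5
        out += [[(v - m) / s for v in c] for c in x[j * g:(j + 1) * g]]
    return out
```

Learnable per-channel scale and shift parameters, present in the actual layer, are omitted here for brevity.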
\begin{figure}
\centering
\includegraphics[width=1\linewidth, height=.22\textheight]{img/10_mobile_res}
\caption{The results of the low-weight proposed architecture of figure~\ref{fig:9_mobile} for training and validation data.}
\label{fig:10_mobile_res}
\end{figure}
The results achieved after $100$ epochs for each metric are reported in table~\ref{tb:t2}.
\begin{table}
\caption{Training and validation loss, accuracy, precision, recall, and F1 score of the proposed low-weight architecture.}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{Train} & \multicolumn{5}{c|}{Valid}\tabularnewline
\hline
\hline
loss & acc & pre & rec & F1 & loss & acc & pre & rec & F1\tabularnewline
\hline
\scriptsize 0.0395 &\scriptsize 1.0 &\scriptsize 1.0 &\scriptsize 1.0 & \scriptsize 100 &\scriptsize 0.1654 &\scriptsize 96.29 &\scriptsize 97.81 &\scriptsize 100 &\scriptsize 95.69\tabularnewline
\hline
\end{tabular}
\label{tb:t2}
\end{table}
\begin{figure}
\centering
\includegraphics[width=.84\linewidth, height=.18\textheight]{img/11}
\caption{The confusion matrix of the proposed low-weight architecture on the test data.}
\label{fig:11}
\end{figure}
By observing the visualized layers of the network trained on the ImageNet dataset with its $1000$ classes, we can deduce that, for our binary classification problem, the low-level kernels of the initial layers are not expected to produce strongly discriminative features, as opposed to the higher-level layers, and can therefore be reduced. Thus, we cancelled out half of the filters of the initial convolution layer, and a percentage of the rest. Figure~\ref{fig:8_first_cnn} depicts the first-layer kernels of the MobileNet V2 network; obviously, this many kernels would not be very informative for a binary classification purpose. A network width controller coefficient of $0.35$ has therefore been used in our experiments to achieve an optimum filter width within the MobileNet V2 network. Hence, what we perform is not exactly transfer learning: we contract the pretrained MobileNet V2, then use the initial weights of the contracted network, as depicted in figure~\ref{fig:15_new}, followed by our customized MLP stack ($336 \times 112 \times 1$ dense layers) with group normalization due to the huge batch size, as depicted in figure~\ref{fig:9_mobile}.
\begin{figure}
\centering
\includegraphics[width=1\linewidth, height=.1\textheight]{img/13}
\caption{(Left)The visualized dense layer of the proposed low-weight model, (The $2 \times 5$ matrix of images) kernels of the highest layer of the proposed architecture of figure~\ref{fig:9_mobile}}
\label{fig:13}
\end{figure}
\section{Experiments and Analytics}
The confusion matrix of the low-weight anti-spoofing network on the ROSE-Youtu dataset is depicted in figure~\ref{fig:11}. The imbalanced nature of the data has impacted real-image detection, compared to the EfficientNet B0 model explained in the previous section.
As depicted in figure~\ref{fig:10_mobile_res}, training proceeds even faster than for the original MobileNet model, since the number of parameters is dramatically decreased. However, the validation curve clearly shows a bias with respect to the training curve, presumably due to the tremendous reduction in the number of parameters, which pushes the network toward underfitting. Analogously to figure~\ref{fig:7_visual}, figure~\ref{fig:13} visualizes the final layer of our proposed network for the binary classification task after the training phase is completed.
The results of the proposed low-weight architecture, depicted in figure~\ref{fig:10_mobile_res} and table~\ref{tb:t2}, clearly verify its qualification for use on the client side.
As depicted in figure~\ref{fig:14}, the gradCAM attention visualization~\cite{selvaraju2017grad} for the Up-mask image focuses on the eyes, which have an unusual depth. For the full-mask image, both the eyes and the mouth grab the attention, while for the replay and photo images the attention distribution over the face becomes scattered almost randomly. For the real face, however, the attention is mostly on the chin and distributed regularly.
\begin{figure}
\centering
\includegraphics[width=1\linewidth, height=.24\textheight]{img/14}
\vspace{-6mm}
\caption{(Top-2-down rows): Test data (left to right: Upper mask, full mask, replay, Photo, real images); Saliency features of the test data~\cite{simonyan2013deep}; gradCAM attention visualization graph over the test data samples.}
\vspace{-4mm}
\label{fig:14}
\end{figure}
\vspace{-1mm}
\section{Conclusion}
Two end-to-end attention-based face anti-spoofing models have been proposed in this paper, one for server-side and the other for client-side implementations, which merely incorporate the RGB images of the camera. These models require no auxiliary data (e.g., depth, IR) and perform remarkably well on the real/fake discrimination task.
The proposed model based on the EfficientNet B0, has performed perfectly well on the dataset, which enables it to be used in flagship mobile devices containing NPUs (dedicated Neural Processing Units), or in the server side.
The proposed low-weight architecture requires very few parameters and little storage, which enables it to be efficiently used in mobile handsets. Various attacks have been tested, and both the heavy-weight and the low-weight architectures perform quite well on fake data inputs, verifying the robustness of the proposed models.
{\small
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{s:intro}
Networked Control Systems (NCSs) are an integral part of modern Cyber-Physical Systems (CPS) and Internet of Things (IoT) applications. While these applications typically involve a large number of plants, the bandwidth of shared communication networks is often limited. The scenario in which the number of plants sharing a communication network exceeds the capacity of the network is called a \emph{medium access constraint}. This scenario motivates the need to allocate the communication network to each plant in a manner that preserves good qualitative and quantitative properties of the plants. This task of efficient allocation of a shared communication network is commonly referred to as a \emph{scheduling problem}, and the corresponding allocation scheme is called a \emph{scheduling logic}. In this paper we study the algorithmic design of scheduling logics for NCSs.
The existing classes of scheduling logics can be classified broadly into two categories: \emph{static} and \emph{dynamic}. In case of the former, a finite length allocation scheme of the network is determined offline and is applied eternally in a periodic manner, while in case of the latter, the allocation of the shared network is determined based on some information about the plant (e.g., states, outputs, access status of sensors and actuators, etc.), see \cite{Walsh2001} for a detailed discussion. In this paper we consider a shift in paradigm and present probabilistic scheduling logics for NCSs.
We study an NCS consisting of multiple discrete-time linear plants whose feedback loops are closed through a shared communication network. A block diagram of such an NCS is shown in Figure \ref{fig:ncs}.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.6}{
\begin{tikzpicture}[every path/.style={>=latex},base node/.style={draw,rectangle, scale = 1.4}]
\node[base node] (a) at (-2,5) {Controller 1};
\node[base node] (b) at (3.5,4) {Plant 1};
\node[base node] (c) at (-2,2) {Controller 2};
\node[base node] (d) at (3.5,1) {Plant 2};
\node[base node] (e) at (-2,-2) {Controller N};
\node[base node] (f) at (3.5,-3) {Plant N};
\draw (-4.5 ,5) edge (a);
\draw (-4.5,5) edge (-4.5,4);
\draw[->] (-4.5,4) -- (0,4);
\draw (a) edge (0.4,5);
\draw[->] (b) -- (5.5,4);
\draw (5.5,4) edge (5.5,5);
\draw[->] (1,4) -- (b);
\draw[-.] (1,4) -- (0.4,4.5);
\draw (5.5,5) edge (1.1,5);
\draw[-.] (1.1,5) -- (0.6,5.5);
\draw (-4.5 ,2) edge (c);
\draw (-4.5,2) edge (-4.5,1);
\draw[->] (-4.5,1) -- (0,1);
\draw (c) edge (0.4,2);
\draw[->] (d) -- (5.5,1);
\draw (5.5,1) edge (5.5,2);
\draw[->] (1,1) -- (d);
\draw[-.] (1,1) -- (0.4,1.5);
\draw (5.5,2) edge (1.1,2);
\draw[-.] (1.1,2) -- (0.6,2.5);
\draw (-4.5 ,-2) edge (e);
\draw (-4.5,-2) edge (-4.5,-3);
\draw[->] (-4.5,-3) -- (0,-3);
\draw (e) edge (0.4,-2);
\draw[->] (f) -- (5.5,-3);
\draw (5.5,-3) edge (5.5,-2);
\draw[->] (1,-3) -- (f);
\draw[-.] (1,-3) -- (0.4,-2.5);
\draw (5.5,-2) edge (1.1,-2);
\draw[-.] (1.1,-2) -- (0.6,-1.5);
\draw[dashed] (-0.4,-4) -- (-0.4,6);
\draw[dashed] (2,-4) -- (2,6);
\draw[dashed] (-0.4,-4) -- (2,-4);
\draw[dashed] (-0.4,6) -- (2,6);
\node (g) at (0.75,-4.5) {Communication network};
\node (h) at (3.5,-0.5) {\(\vdots\)};
\end{tikzpicture}
}
\caption{Block diagram of NCS}\label{fig:ncs}
\end{center}
\end{figure}
We assume that the plants are unstable in open loop and exponentially stable in closed loop. Due to the limited communication capacity of the network, only a few plants can exchange information with their controllers at any instant of time; the remaining plants operate in open loop. Our contributions are twofold:
\begin{itemize}[label = \(\circ\),leftmargin=*]
\item We present an algorithm to design scheduling logics. At every instant of time, our algorithm allocates the shared network to subsets of the plants with certain probabilities. We present necessary and sufficient conditions on the plant dynamics and the capacity of the shared network under which a scheduling logic obtained from our algorithm ensures stochastic stability of each plant in the NCS.
\item Given plant dynamics and capacity of the shared network, we present an algorithm to design static state-feedback controllers such that the plants, their controllers and the shared network together satisfy our stability conditions.
\end{itemize}
The proposed stability conditions are derived using a Markovian jump linear systems modelling of the individual plants. They involve matrix inequalities and can be verified by using standard matrix inequality solver toolboxes.
The remainder of this paper is organized as follows: In \S\ref{s:prob_stat} we formulate the problem under consideration. Our results appear in \S\ref{s:mainres}. We also describe various features of our results in this section. Numerical experiments are presented in \S\ref{s:num_ex}. We conclude in \S\ref{s:concln} with a brief discussion on future research direction.
{\bf Notation}. \(\mathbb{R}\) is the set of real numbers and \(\mathbb{N}\) is the set of natural numbers, \(\mathbb{N}_0 = \mathbb{N}\cup\{0\}\). For two scalars \(a\) and \(b\), \(a\%b\) denotes the remainder of the operation \(a/b\). For a finite set \(C\), its cardinality is denoted by \(\abs{C}\). For a vector \(v\), \(\norm{v}\) denotes its Euclidean norm. For symmetric block matrices, \(\bigstar\) acts as ellipsis for the terms that are introduced by symmetry, \(\text{diag}(Q_1,Q_2,\ldots,Q_n)\) denotes a block-diagonal matrix with diagonal elements \(Q_1,Q_2,\ldots,Q_n\). \(0_{d\times d}\) and \(I_{d\times d}\) denote \(d\)-dimensional \(0\)-matrix and identity matrix, respectively. We will operate in a probabilistic space \((\Omega,\mathcal{F},\mathbb{P})\), where \(\Omega\) is the sample space, \(\mathcal{F}\) is the \(\sigma\)-algebra of events, and \(\mathbb{P}\) is the probability measure.
\section{Problem statement}
\label{s:prob_stat}
We consider an NCS with \(N\) plants whose dynamics are given by
\begin{align}
\label{e:plants}
x_i(t+1) = A_i x_i(t) + B_i u_i(t),\:x_i(0) = x_i^0,\:t\in\mathbb{N}_0,
\end{align}
where \(x_i(t)\in\mathbb{R}^{d_i}\) and \(u_i(t)\in\mathbb{R}^{m_i}\) are the vectors of states and inputs of the \(i\)-th plant at time \(t\), respectively, \(i=1,2,\ldots,N\). Each plant \(i\) employs a state-feedback controller \(u_i(t) = K_i x_i(t)\), \(t\in\mathbb{N}_0\). The matrices \(A_i\in\mathbb{R}^{d_i\times d_i}\), \(B_i\in\mathbb{R}^{d_i\times m_i}\) and \(K_i\in\mathbb{R}^{m_i\times d_i}\), \(i=1,2,\ldots,N\) are constants.
\begin{assump}
\label{a:stability}
The open-loop dynamics of each plant is unstable and each controller is stabilizing. More specifically, the matrices \(A_i+B_iK_i\), \(i=1,2,\ldots,N\) are Schur stable and the matrices \(A_i\), \(i=1,2,\ldots,N\) are unstable.\footnote{A matrix \(A\in\mathbb{R}^{d\times d}\) is Schur stable if all its eigenvalues are inside the open unit disk. We call \(A\) unstable if it is not Schur stable.}
\end{assump}
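As a toy illustration of Assumption \ref{a:stability} (with hypothetical scalar values, not taken from the paper), consider \(a = 1.2\), \(b = 1\) and gain \(k = -0.7\), so that \(a + bk = 0.5\) is Schur stable while \(a\) itself is not:

```python
def simulate(a, b, k, x0, T, closed_loop=True):
    """Scalar plant x(t+1) = a*x(t) + b*u(t) with u = k*x when the
    loop is closed and u = 0 (open loop) otherwise."""
    x, traj = x0, [x0]
    for _ in range(T):
        x = (a + b * k) * x if closed_loop else a * x
        traj.append(x)
    return traj
```

The closed-loop trajectory decays geometrically while the open-loop one diverges, which is exactly the dichotomy the scheduler must manage.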
The controllers are remotely located and each plant communicates with its controller through a shared communication network. The network has a limited communication capacity in the sense that at any time instant, only \(M\) plants (\(0<M<N\)) can access the network. Consequently, the remaining \(N-M\) plants operate in open loop.
\begin{assump}
\label{a:ideal}
The communication network is ideal in the sense that exchange of information between plants and their controllers is not affected by communication uncertainties.
\end{assump}
Let \(i_s\) and \(i_u\) denote the stable and unstable modes of the \(i\)-th plant, respectively, \(A_{i_s} = A_i+B_i K_i\) and \(A_{i_u} = A_i\), \(i=1,2,\ldots,N\). We let
\[
\mathcal{S} = \{s\in\{1,2,\ldots,N\}^{M}\:|\:\text{the elements of \(s\) are distinct}\}
\]
be the set of all subsets of \(\{1,2,\ldots,N\}\) with cardinality \(M\). We call a function \(\gamma:\mathbb{N}_0\to\mathcal{S}\) that specifies, at every time \(t\), the \(M\) plants of the NCS which access the shared network at that time, a \emph{scheduling logic}. Let \(r_i^0\) denote the initial mode of operation of plant \(i\), i.e., \(r_i^0 =i_s\) if \(i\in\gamma(0)\) and \(r_i^0 =i_u\) if \(i\notin\gamma(0)\).
We will focus on stochastic stability of the plants.
\begin{defn}
\label{d:stability}
The \(i\)-th plant in \eqref{e:plants} is \emph{stochastically stable} if for every initial condition \(x_i^0\in\mathbb{R}^{d_i}\) and initial mode of operation \(r_i^0\in\{i_s,i_u\}\), we have that
\(\displaystyle{\mathbb{E}\Biggl\{\sum_{t=0}^{+\infty}\norm{x_i(t)}^{2}\:|\:x_i^0,r_i^0\Biggr\}<+\infty}\).
\end{defn}
Our first objective is:
\begin{prob}
\label{prob:main1}
Given the matrices \(A_i\), \(B_i\), \(K_i\), \(i=1,2,\ldots,N\) and the number \(M\), design a scheduling logic, \(\gamma\), that preserves stochastic stability of each plant \(i\) in the NCS.
\end{prob}
Towards solving Problem \ref{prob:main1}, we will first present a probabilistic algorithm. We will then identify conditions on the matrices \(A_{i_s}\), \(A_{i_u}\), \(i=1,2,\ldots,N\) and the network capacity, \(M\), such that stochastic stability of each plant \(i\) in the NCS is ensured under a scheduling logic obtained from our algorithm.
Our second objective is:
\begin{prob}
\label{prob:main2}
Given the matrices \(A_i\), \(i=1,2,\ldots,N\) and the network capacity, \(M\), design static state-feedback controllers, \(K_i\), \(i=1,2,\ldots,N\), such that the conditions for stability under our scheduling logics are satisfied.
\end{prob}
Towards designing suitable state-feedback controllers, we will solve a set of feasibility problems involving LMIs.
\section{Main results}
\label{s:mainres}
\subsection{Stabilizing scheduling logics}
\label{ss:mainres1}
We first present our solution to Problem \ref{prob:main1}. We will operate under the following assumption:
\begin{assump}
\label{a:divisibility}
\rm{
The total number of plants, \(N\) and the capacity of the shared communication network, \(M\) together satisfy \(N\%M = 0\).
}
\end{assump}
Assumption \ref{a:divisibility} ensures that the total number of plants, \(N\), in the NCS is divisible by the capacity of the shared network, \(M\). In other words, the \(N\) plants can be divided into an integer number of chunks of \(M\) plants. Let \(v = N/M\). Towards designing a scheduling logic, we rely on disjoint sets \(c_1,c_2,\ldots,c_v\in\mathcal{S}\) and scalars \(p_{c_1},p_{c_2},\ldots\),\(p_{c_{v}}\in]0,1[\) that satisfy \(\displaystyle{\sum_{j=1}^{v}p_{c_{j}}}=1\).
Suppose that \(c_1,c_2,\ldots,c_v\) and \(p_{c_1},p_{c_2},\ldots\),\(p_{c_{v}}\) are fixed. A scheduling logic, \(\gamma\), is generated as follows: at each time instant \(t=0,1,2,\ldots\), we allocate the shared network to the plants in \(c_j\) with probability \(p_{c_j}\), \(j\in\{1,2,\ldots,v\}\). This procedure is summarized in Algorithm \ref{algo:sched_design}.\footnote{We will discuss how to choose the quantities \(c_1,c_2,\ldots,c_v\) and \(p_{c_1},p_{c_2},\ldots\),\(p_{c_{v}}\) favourably in a moment.}
\begin{algorithm}
\begin{algorithmic}[1]
\STATE Set \(v = N/M\).
\STATE Pick \(c_1,c_2,\ldots,c_v\in\mathcal{S}\) such that \(c_j\cap c_k=\emptyset\) for all \(j,k=1,2,\ldots,v\), \(j\neq k\).
\STATE Pick \(p_{c_1},p_{c_{2}},\ldots,p_{c_{v}}\in]0,1[\) such that \(\displaystyle{\sum_{j=1}^{v}p_{c_{j}}} = 1\).
\FOR {\(t=0,1,2,\ldots\)}
\STATE Set \(\gamma(t) = c_j\) with probability \(p_{c_{j}}\).
\ENDFOR
\caption{Design of a scheduling logic}\label{algo:sched_design}
\end{algorithmic}
\end{algorithm}
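Algorithm \ref{algo:sched_design} can be sketched in code as follows; the consecutive-block grouping used here is just one admissible choice of the disjoint sets \(c_j\):

```python
import random

def make_groups(N, M):
    """Partition plants 1..N into v = N/M disjoint groups of size M
    (consecutive blocks; requires N % M == 0, cf. Assumption 3)."""
    assert N % M == 0
    plants = list(range(1, N + 1))
    return [plants[j * M:(j + 1) * M] for j in range(N // M)]

def schedule(groups, probs, T, seed=0):
    """Algorithm 1: at every t, grant the network to group c_j with
    probability p_{c_j}, independently across time."""
    rng = random.Random(seed)
    return [rng.choices(groups, weights=probs, k=1)[0] for _ in range(T)]
```

For example, with \(N=6\), \(M=2\) the groups are \(\{1,2\},\{3,4\},\{5,6\}\), and `schedule` draws one of them at each instant.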
The following theorem provides necessary and sufficient conditions on the matrices, \(A_i\), \(B_i\), \(K_i\), \(i=1,2,\ldots,N\) and the capacity of the network, \(M\), under which a scheduling logic, \(\gamma\), obtained from Algorithm \ref{algo:sched_design} ensures stochastic stability of each plant in the NCS.
\begin{theorem}
\label{t:mainres1}
Consider an NCS described in \S\ref{s:prob_stat}. Suppose that Assumption \ref{a:divisibility} holds. Let \(\gamma\) be a scheduling logic obtained from Algorithm \ref{algo:sched_design}. Each plant \(i\) in \eqref{e:plants} is stochastically stable under \(\gamma\) if and only if the following conditions hold:
for each \(i\in c_j\), \(j=1,2,\ldots,v\), there exist symmetric and positive definite matrices \(P_k\in\mathbb{R}^{d_i\times d_i}\), \(k=i_s,i_u\), such that
\begin{align}
\label{e:maincondn}
A_k^\top \P^i A_k - P_k \prec 0,
\end{align}
where \(\P^i = p_{c_j}P_{i_s} + (1-p_{c_{j}})P_{i_u}\).
\end{theorem}
\begin{remark}
\label{rem:expln1}
Condition \eqref{e:maincondn} involves properties of the matrices \(A_i\), \(B_i\) and \(K_i\), \(i=1,2,\ldots,N\), the disjoint sets \(c_j\), \(j\in\{1,2,\ldots,v\}\) and the probabilities \(p_{c_{j}}\), \(j\in\{1,2,\ldots,v\}\). It relies on the existence of symmetric and positive definite matrices, \(P_k\), \(k=i_s,i_u\), \(i=1,2,\ldots,N\) that together with the matrices \(A_i\), \(B_i\), \(K_i\), \(i=1,2,\ldots,N\) and the probabilities \(p_{c_j}\), \(j=1,2,\ldots,v\) corresponding to the subset \(c_j\), \(j\in\{1,2,\ldots,v\}\) that plant \(i\) appears in, satisfy a set of matrix inequalities. Notice that with the quantities \(A_i\), \(B_i\), \(K_i\), \(c_j\), \(p_{c_j}\), \(j=1,2,\ldots,v\) known, the set of inequalities in \eqref{e:maincondn} can be solved by employing standard Linear Matrix Inequalities solvers.
\end{remark}
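For scalar plants (\(d_i = 1\)), condition \eqref{e:maincondn} reduces to two scalar inequalities, and feasibility can be checked by a crude grid search instead of an LMI solver; a sketch with hypothetical plant values:

```python
def check_condition(a_s, a_u, p, grid):
    """Scalar (d_i = 1) instance of condition (2): search for
    positive scalars P_s, P_u on `grid` satisfying
    a_k^2 * (p*P_s + (1-p)*P_u) < P_k  for k in {s, u}."""
    for Ps in grid:
        for Pu in grid:
            Pbar = p * Ps + (1 - p) * Pu
            if a_s ** 2 * Pbar < Ps and a_u ** 2 * Pbar < Pu:
                return Ps, Pu
    return None
```

With stable mode \(0.5\), unstable mode \(1.2\) and access probability \(0.8\) the search succeeds, whereas an unstable mode of \(2\) with probability \(0.5\) is infeasible for any positive \(P_s, P_u\).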
\begin{remark}
\label{rem:expln2}
Fix a scheduling logic, \(\gamma\), obtained from Algorithm \ref{algo:sched_design}. Theorem \ref{t:mainres1} is necessary and sufficient in the following sense: if condition \eqref{e:maincondn} holds, then \(\gamma\) is stabilizing, and if \(\gamma\) is stabilizing, then condition \eqref{e:maincondn} holds.
\end{remark}
Towards proving Theorem \ref{t:mainres1}, we will utilize the following auxiliary result.
\begin{lem}
\label{lem:auxres}
Suppose that Assumption \ref{a:divisibility} holds. Then the following are true:
\begin{enumerate}[label = \roman*), leftmargin = *]
\item \(\displaystyle{\bigcup_{j=1}^{v}c_j} = \{1,2,\ldots,N\}\), and
\item for each \(i\in\{1,2,\ldots,N\}\), there exists exactly one \(j\in\{1,2,\ldots,v\}\) such that \(i\in c_j\).
\end{enumerate}
\end{lem}
\begin{proof}
i) The sets \(c_j\), \(j=1,2,\ldots,v\), are disjoint by construction and satisfy \(\abs{c_j} = M\) for each \(j=1,2,\ldots,v\). Thus, \(\displaystyle{\abs{\bigcup_{j=1}^{v}c_j}} = vM = (N/M)M = N\), where we have used \(N\%M = 0\). Since \(\displaystyle{\bigcup_{j=1}^{v}c_j}\subseteq\{1,2,\ldots,N\}\) and \(\abs{\{1,2,\ldots,N\}} = N\), it must be true that \(\displaystyle{\bigcup_{j=1}^{v}c_j} = \{1,2,\ldots,N\}\).
ii) Since \(v = N/M\) and the sets \(c_j\), \(j=1,2,\ldots,v\), are disjoint, the assertion follows at once.
\end{proof}
\begin{proof}{(of Theorem \ref{t:mainres1})}:
Fix a scheduling logic, \(\gamma\), obtained from Algorithm \ref{algo:sched_design}. We will show that condition \eqref{e:maincondn} is necessary and sufficient for stability of each plant \(i\) in \eqref{e:plants} under \(\gamma\).
Fix \(j\in\{1,2,\ldots,v\}\) and \(i\in c_j\).
By Lemma \ref{lem:auxres} ii), \(i\) appears in exactly one \(c_j\). We model the plant \(i\) under \(\gamma\) as follows:
\begin{align}
\label{e:i-swsys}
x_i(t+1) = A_{\sigma_i(t)}x_i(t),\:\sigma_i(t)\in\{i_s,i_u\}.
\end{align}
Notice that \eqref{e:i-swsys} is a Markovian jump linear system whose set of subsystems is \(\{i_s,i_u\}\) and whose transition function \(\sigma_i:\mathbb{N}_0\to\{i_s,i_u\}\) satisfies \(\sigma_i(t) = i_s\) if \(i\in\gamma(t)\), and \(\sigma_i(t) = i_u\) if \(i\notin\gamma(t)\).
In particular, \(\sigma_i\) is a Markov chain, defined on \((\Omega,\mathcal{F},\mathbb{P})\), taking values in \(\{i_s,i_u\}\) with transition probability matrix
\(
\Pi_i = \pmat{\pi_{i_si_s} & \pi_{i_si_u}\\\pi_{i_ui_s} & \pi_{i_ui_u}},
\)
where
\begin{align*}
\begin{aligned}
\pi_{i_si_s} &= \mathbb{P}(\sigma_i(t+1) = i_s\:|\:\sigma_i(t) = i_s) = p_{c_j},\\
\pi_{i_si_u} &= \mathbb{P}(\sigma_i(t+1) = i_u\:|\:\sigma_i(t) = i_s) = 1-p_{c_j},\\
\pi_{i_ui_s} &= \mathbb{P}(\sigma_i(t+1) = i_s\:|\:\sigma_i(t) = i_u) = p_{c_j},\\
\pi_{i_ui_u} &= \mathbb{P}(\sigma_i(t+1) = i_u\:|\:\sigma_i(t) = i_u) = 1-p_{c_j},
\end{aligned}
\:\:i\in c_j.
\end{align*}
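Because both rows of \(\Pi_i\) are identical, the chain \(\sigma_i\) is in fact an i.i.d. sequence: at every time step, plant \(i\) is served with probability \(p_{c_j}\), independently of the past. A quick simulation sketch, with illustrative values \(p_{c_j}=0.5\) and a finite horizon:

```python
import random

# Sketch: simulate the switching signal sigma_i of a plant i in c_j, whose
# transition matrix has identical rows (p, 1-p) -- i.e., the plant is served
# with probability p_{c_j} at every step, independently of the past.
random.seed(0)
p = 0.5          # illustrative value of p_{c_j}
T = 20000
sigma = ["i_s" if random.random() < p else "i_u" for _ in range(T)]
freq_served = sigma.count("i_s") / T
assert abs(freq_served - p) < 0.02  # empirical frequency close to p
print(f"empirical fraction of 'served' steps: {freq_served:.3f}")
```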
By \cite[Lemma 2]{Zhang2009}, the switched system \eqref{e:i-swsys} is stochastically stable if and only if the following conditions hold:
\begin{align}
\label{e:pf1_step1}A_{i_s}^\top \bigl(\pi_{i_si_s}P_{i_s}+\pi_{i_si_u}P_{i_u}\bigr)A_{i_s} - P_{i_s} \prec 0,\\
\intertext{and}
\label{e:pf1_step2}A_{i_u}^\top \bigl(\pi_{i_ui_s}P_{i_s}+\pi_{i_ui_u}P_{i_u}\bigr)A_{i_u} - P_{i_u} \prec 0,
\end{align}
where \(P_{i_s}\), \(P_{i_u}\in\mathbb{R}^{d_i\times d_i}\) are symmetric and positive definite matrices. We have
\(\pi_{i_si_s}P_{i_s}+\pi_{i_si_u}P_{i_u} = p_{c_j}P_{i_s} + (1-p_{c_{j}})P_{i_u}
=\pi_{i_ui_s}P_{i_s}+\pi_{i_ui_u}P_{i_u} = \P^i\).
Consequently, conditions \eqref{e:pf1_step1}-\eqref{e:pf1_step2} coincide with condition \eqref{e:maincondn} for plant \(i\), and stochastic stability of plant \(i\) is equivalent to condition \eqref{e:maincondn}.
Since \(i\in c_j\) and \(j\in\{1,2,\ldots,v\}\) were chosen arbitrarily, stochastic stability of each plant \(\displaystyle{i\in\bigcup_{j=1}^{v}c_j}\) under \(\gamma\) is immediate. In view of Lemma \ref{lem:auxres} i), this completes our proof of Theorem \ref{t:mainres1}.
\end{proof}
\begin{remark}
\label{rem:compa1}
\rm{
Recall that a Markovian jump linear system is a switched system \cite[Section 1.1.2]{Liberzon} with linear subsystems; its switching logic is stochastic and can be described by a Markov chain. Switched systems with both deterministic and stochastic switching logics have been employed to design scheduling algorithms for NCSs with communication limitations and uncertainties earlier in the literature, see, e.g., \cite[Remark 11]{quevedo2020} for a detailed discussion. The primary difference between our work and the existing literature is that our scheduling algorithm is probabilistic, while most of the existing designs of scheduling logics that employ switched systems modelling of plants rely on purely deterministic techniques. In the case of the latter, stochastic behaviour of the switching logics arises from probabilistic assumptions on the communication uncertainties, typically leading to non-homogeneous Markov chains, see, e.g., \cite{ghi}, where a probabilistic data loss model is considered. In the current work, the stochastic behaviour of the switching logics arises from the probabilistic scheduling logic, and the switching logics are time-homogeneous Markov chains.
}
\end{remark}
\begin{remark}
\label{rem:compa2}
\rm{
Qualitative and quantitative properties of continuous-time linear plants communicating with their controllers under a \emph{pre-specified} stochastic scheduling logic have been studied in \cite{Fridman2015}. The problem considered in this paper differs from the said setting for the following two reasons: (a) we focus on \emph{designing} stabilizing probabilistic scheduling logics, and (b) our plant dynamics evolve in discrete time.
}
\end{remark}
\begin{remark}
\label{rem:compa3}
\rm{
Notice that our design of scheduling logics is neither static nor dynamic (a description of these terms is given in Section \ref{s:intro}). Indeed, we neither repeat a finite-length allocation scheme nor take properties of the plants or other components in the NCS into consideration at every time instant. This is not surprising, as the proposed design technique is solely probabilistic.
}
\end{remark}
For selecting the disjoint sets \(c_j\), \(j\in\{1,2,\ldots,v\}\), and the probabilities \(p_{c_{j}}\), \(j\in\{1,2,\ldots,v\}\), such that condition \eqref{e:maincondn} holds, we employ an exhaustive search over all combinations of \(v\)-many disjoint sets \(\overline{c}_j\in\mathcal{S}\), \(j=1,2,\ldots,v\), and probabilities \(\overline{p}_{\overline{c}_j}\in]0,1[\), \(j=1,2,\ldots,v\), such that \(\displaystyle{\sum_{j=1}^{v}\overline{p}_{\overline{c}_{j}}} = 1\) holds.\footnote{A search over all combinations of \(v\)-many disjoint sets \(\overline{c}_{j}\in\mathcal{S}\), \(j=1,2,\ldots,v\) suffices in view of Lemma \ref{lem:auxres}.} The interval \(]0,1[\) is sampled with a (small enough) step size \(h > 0\). Let \(r\) be the biggest integer satisfying \(rh < 1\). For all combinations of \(v\)-many disjoint sets \(\overline{c}_j\in\mathcal{S}\), \(j=1,2,\ldots,v\), and all choices of \(\overline{p}_{\overline{c}_j}\in\{h,2h,3h,\ldots,rh\}\) such that \(\displaystyle{\sum_{j=1}^{v}\overline{p}_{\overline{c}_{j}}} = 1\), we solve a feasibility problem for all plants \(i=1,2,\ldots,N\). It outputs, if they exist, symmetric and positive definite matrices \(P_k\), \(k=i_s,i_u\), \(i=1,2,\ldots,N\), that together with the matrices \(A_i\), \(B_i\), \(K_i\), \(i=1,2,\ldots,N\), and the probabilities \(\overline{p}_{\overline{c}_j}\), \(j=1,2,\ldots,v\), satisfy condition \eqref{e:maincondn}. If an output is obtained, then we assign \(c_j = \overline{c}_j\) and \(p_{c_j} = \overline{p}_{\overline{c}_j}\), \(j=1,2,\ldots,v\). Otherwise, we do not have suitable inputs for Algorithm \ref{algo:sched_design}. The procedure is summarized in Algorithm \ref{algo:exhaust_search}.
\begin{algorithm}
\begin{algorithmic}[1]
\STATE Construct the set \(\mathcal{S}\).
\STATE Fix a step size \(h > 0\) (small enough). Compute \(r\) as the biggest integer satisfying \(rh < 1\).
\FOR {all \(\overline{c}_1,\overline{c}_2,\ldots,\overline{c}_v\in\mathcal{S}\) such that \(\overline{c}_j\cap \overline{c}_k=\emptyset\) for all \(j,k=1,2,\ldots,v\), \(j\neq k\)}
\FOR {\(\overline{p}_{\overline{c}_1} = h,2h,\ldots,rh\)}
\FOR {\(\overline{p}_{\overline{c}_2} = h,2h,\ldots,rh\)}
\STATE \(\vdots\)
\FOR {\(\overline{p}_{\overline{c}_v} = h,2h,\ldots,rh\)}
\IF {\(\displaystyle{\sum_{j=1}^{v}\overline{p}_{\overline{c}_{j}}} = 1\)}
\STATE Solve the following feasibility problem for \(P_k\), \(k=i_s,i_u\), \(i=1,2,\ldots,N\):
\small
\begin{align}
\label{e:feas_prob}
\hspace*{-3cm}\minimize\:\:&\:\:1\\
\sbjto\:\:&\:\:
\begin{cases}
A_{i_s}^\top \biggl(\overline{p}_{\overline{c}_j}P_{i_s}+\bigl(1-\overline{p}_{\overline{c}_j}\bigr)P_{i_u}\biggr) A_{i_s} \nonumber\\- P_{i_s} \prec 0,\nonumber\\
A_{i_u}^\top \biggl(\overline{p}_{\overline{c}_j}P_{i_s}+\bigl(1-\overline{p}_{\overline{c}_j}\bigr)P_{i_u}\biggr) A_{i_u}\nonumber\\ - P_{i_u} \prec 0,\nonumber\\
P_{i_s} = P_{i_s}^\top,\:\:P_{i_s}\succ 0,\nonumber\\
P_{i_u} = P_{i_u}^\top,\:\:P_{i_u}\succ 0,\nonumber\\
\kappa I_{d_i\times d_i}\preceq P_{i_s}, P_{i_u} \preceq I_{d_i\times d_i},\\
\kappa > 0\:(\text{small}),\\
i=1,2,\ldots,N.
\end{cases}
\end{align}
\normalsize
\STATE If a solution to \eqref{e:feas_prob} is obtained, then go to Step \ref{step:final}.
\ENDIF
\ENDFOR
\ENDFOR
\ENDFOR
\ENDFOR
\STATE \label{step:final} Set \(c_j = \overline{c}_j\) and \(p_{c_j} = \overline{p}_{\overline{c}_j}\), \(j=1,2,\ldots,v\) and exit.
\caption{Selection of \(c_j\) and \(p_{c_{j}}\), \(j = 1,2,\ldots,v\)}\label{algo:exhaust_search}
\end{algorithmic}
\end{algorithm}
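The combinatorial skeleton of this search, i.e., the loops over ordered tuples of pairwise-disjoint \(M\)-subsets and over the probability grid, can be sketched as follows. The LMI feasibility step is replaced by a user-supplied placeholder, since solving \eqref{e:feas_prob} requires an external solver; the toy inputs in the final lines are illustrative.

```python
from itertools import combinations, permutations, product

def search(N, M, h, feasible):
    """Sketch of the exhaustive search: iterate over ordered tuples of v
    pairwise-disjoint M-subsets of {1,...,N} and over grid probabilities
    summing to 1; `feasible` stands in for the LMI feasibility problem."""
    v = N // M
    S = list(combinations(range(1, N + 1), M))          # the set script-S
    r = int((1 - 1e-12) / h)                            # biggest r with r*h < 1
    grid = [k * h for k in range(1, r + 1)]
    for cs in permutations(S, v):
        if len(set().union(*cs)) != N:                  # not pairwise disjoint
            continue
        for ps in product(grid, repeat=v):
            if abs(sum(ps) - 1.0) > 1e-9:               # probabilities must sum to 1
                continue
            if feasible(cs, ps):
                return cs, ps
    return None

# Toy run: N = 4, M = 2, h = 0.25; accept the first candidate combination.
out = search(4, 2, 0.25, feasible=lambda cs, ps: True)
print(out)
```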
\begin{remark}
\label{rem:bounds_search}
\rm{
Notice that the conditions \(\kappa I_{d_i\times d_i}\preceq P_{i_s}, P_{i_u} \preceq I_{d_i\times d_i}\) in the feasibility problem \eqref{e:feas_prob} are not inherent to the set of inequalities \eqref{e:maincondn}. They are included for numerical reasons. In particular, the lower bound \(\kappa I_{d_i\times d_i}\preceq P_{i_s}, P_{i_u}\), combined with the upper bound, limits the condition numbers of \(P_{i_s}\) and \(P_{i_u}\) to at most \(\kappa^{-1}\), and the condition \(P_{i_s}, P_{i_u} \preceq I_{d_i\times d_i}\) guarantees that the set of feasible \(P_{i_s}\), \(P_{i_u}\) is bounded. Here, we have \(i=1,2,\ldots,N\).
}
\end{remark}
\begin{remark}
\label{rem:complexity}
\rm{
Algorithm \ref{algo:exhaust_search} has a large computational complexity when the number of plants, \(N\), and their dimensions, \(d_i\), \(i=1,2,\ldots,N\), are large. However, the selection of \(c_j\) and \(p_{c_j}\), \(j=1,2,\ldots,v\), is an offline process. Indeed, they are to be chosen only once, prior to the generation of a scheduling logic.
}
\end{remark}
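For a rough sense of the scale of this search space, note that the number of ordered tuples of \(v\) pairwise-disjoint \(M\)-subsets of \(\{1,2,\ldots,N\}\) is \(N!/(M!)^v\) (a multinomial count); a small counting sketch:

```python
from math import factorial

def ordered_disjoint_tuples(N, M):
    """Number of ordered v-tuples of pairwise-disjoint M-subsets of {1,...,N},
    with v = N // M (assumes N % M == 0): the multinomial N! / (M!)^v."""
    v = N // M
    return factorial(N) // factorial(M) ** v

assert ordered_disjoint_tuples(4, 2) == 6    # {1,2,3,4} into two ordered pairs
assert ordered_disjoint_tuples(6, 2) == 90
print(ordered_disjoint_tuples(20, 10))       # 184756 ordered pairs already for N=20, M=10
```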
Suppose that suitable sets \(c_j\) and scalars \(p_{c_{j}}\), \(j=1,2,\ldots,v\), are obtained from Algorithm \ref{algo:exhaust_search}. A next natural question is: how do we choose an element \(c_j\) with probability \(p_{c_j}\), \(j\in\{1,2,\ldots,v\}\), at every instant of time \(t\in\mathbb{N}_0\)? Clearly, drawing an index \(j\in\{1,2,\ldots,v\}\) uniformly at random is not sufficient, as a (generally non-uniform) probability \(p_{c_j}\) is associated with every \(c_j\). We employ Algorithm \ref{algo:impl} for this purpose.
\begin{algorithm}
\begin{algorithmic}[1]
\STATE Fix \(T\in\mathbb{N}\) (large enough).
\FOR {\(j=1,2,\ldots,v\)}
\STATE Set the frequency of occurrence of \(c_j\) as \(f_{c_j} = p_{c_j}\times T\).
\ENDFOR
\STATE Construct a set \(TEMP\) that contains \(f_{c_{j}}\) instances of \(c_j\), \(j=1,2,\ldots,v\), i.e.,
\small \(\displaystyle{TEMP = \bigcup_{j=1}^{v}\biggl\{c_j^{1},c_j^{2},\ldots,c_j^{f_{c_j}}\biggr\}}\).\normalsize
\FOR {\(t = 0,1,\ldots,T-1\)}
\STATE Pick an element \(r\) from \(TEMP\) uniformly at random, set \(\gamma(t) = r\) and \(TEMP = TEMP\setminus\{r\}\).
\ENDFOR
\caption{Implementation of Algorithm \ref{algo:sched_design}}\label{algo:impl}
\end{algorithmic}
\end{algorithm}
It involves four steps: First, a time horizon \(\{0,1,\ldots,T-1\}\) is fixed, where \(T\in\mathbb{N}\) is a large number. Second, the frequency of occurrence of each \(c_j\) in \(\{0,1,\ldots,T-1\}\) is computed as \(f_{c_j} = p_{c_j}\times T\), \(j=1,2,\ldots,v\), where \(T\) is chosen so that each \(f_{c_j}\) is an integer. Notice that
\(\displaystyle{\sum_{j=1}^{v}f_{c_j} = \sum_{j=1}^{v}p_{c_j}\times T = T\sum_{j=1}^{v}p_{c_{j}} = T}\).
Third, a set \(TEMP\) is created with \(f_{c_j}\)-many instances of \(c_j\), \(j=1,2,\ldots,v\). It follows that \(\abs{TEMP} = T\). Fourth, at each time \(t=0,1,\ldots,T-1\), an element \(r\) from \(TEMP\) is chosen uniformly at random, is assigned to \(\gamma(t)\), and the set \(TEMP\) is updated to be \(TEMP\setminus\{r\}\). Clearly, the sequence \(\gamma(0),\gamma(1),\ldots\), \(\gamma(T-1)\) obeys the frequency of occurrence, \(f_{c_j}\), for the set \(c_j\), \(j=1,2,\ldots,v\). Our procedure for implementing Algorithm \ref{algo:sched_design}, however, has a large memory requirement when the numbers \(v\) and \(T\) are very large.
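This draw-without-replacement scheme can be implemented compactly; equivalently, one may build the list of instances once and shuffle it. A sketch with illustrative probabilities \(p_{c_j}\) and horizon \(T\):

```python
import random

# Sketch of the implementation: build TEMP with f_{c_j} = p_{c_j} * T copies
# of each index j, then draw without replacement -- equivalently, shuffle once.
random.seed(1)
p = {1: 0.5, 2: 0.3, 3: 0.2}   # illustrative probabilities p_{c_j}
T = 1000                        # chosen so that every p_{c_j} * T is an integer
temp = [j for j, pj in p.items() for _ in range(int(round(pj * T)))]
assert len(temp) == T
random.shuffle(temp)            # gamma(t) = temp[t], t = 0, ..., T-1
gamma = temp
for j, pj in p.items():
    assert gamma.count(j) == int(round(pj * T))   # exact frequencies by construction
print({j: gamma.count(j) for j in p})
```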
\begin{remark}
\label{rem:inhomo1}
\rm{
Recall that we have been operating under Assumption \ref{a:divisibility}. The requirement for this assumption is purely technical and specific to our key apparatus of analysis. With \(N\%M\neq 0\) and the probabilistic logic for the selection of plants employed in Algorithm \ref{algo:sched_design}, the individual plants cannot be modelled as Markovian jump linear systems whose transition process is a time-homogeneous Markov chain. Indeed, consider the Markovian jump linear system modelling of each plant in the NCS under a scheduling logic, \(\gamma\), obtained from Algorithm \ref{algo:sched_design}, as employed in our proof of Theorem \ref{t:mainres1}. We could use a time-homogeneous Markov chain owing to the assertion of Lemma \ref{lem:auxres} ii). If \(N\%M\neq 0\) and \(v=\lceil N/M\rceil\), then there exists at least one \(i\in\{1,2,\ldots,N\}\) that appears in more than one \(c_j\), \(j=1,2,\ldots,v\). Consequently, the probability of transition to mode \(i_s\) may differ across the different \(j\in\{1,2,\ldots,v\}\) for which \(i\in c_j\). As a result, the transition probability matrix, \(\Pi_i\), is no longer constant. A time-inhomogeneous Markov chain with a time-varying transition probability matrix is suitable for the setting where no restriction on how the numbers \(N\) and \(M\) are related is imposed. This general case is beyond the scope of this paper, and we identify it as a topic for future work.
}
\end{remark}
\begin{remark}
\label{rem:(dis)adv}
\rm{
In this paper we have proposed a probabilistic algorithm for scheduling NCSs. Our tool is new and differs from the techniques existing currently in the literature. We highlight the following features:
\begin{enumerate}[label = (\alph*), leftmargin=*]
\item In terms of offline computations required prior to the implementation of the scheduling logic, our technique is close to a static scheduling mechanism. Indeed, in case of the latter, a finite length allocation scheme is computed offline and is repeated eternally, while we compute a finite set of disjoint sets and probabilities for their activation and use the quantities eternally.
\item Unlike a dynamic scheduling mechanism, our technique does not consider properties of the plants and/or the communication network and/or other components in the NCS at every instant of time.
\item Our technique does not adapt to unforeseen/sudden faults in the system. The mechanism needs to be interrupted externally, and a new set of disjoint sets and their associated probabilities are to be fed.
\end{enumerate}
}
\end{remark}
Notice that the matrices \(A_i\), \(B_i\), \(i=1,2,\ldots,N\), and the capacity of the network, \(M\), are beyond our control, whereas there is an element of choice associated with the matrices \(K_i\), \(i=1,2,\ldots,N\), the sets \(c_j\), \(j=1,2,\ldots,v\), and the probabilities \(p_{c_{j}}\), \(j=1,2,\ldots,v\). We address this matter in our solution to Problem \ref{prob:main2}.
We now present Algorithm \ref{algo:controller_design} to design state-feedback controllers, \(K_i\), \(i=1,2,\ldots,N\), for the plants in the NCS. This is our solution to Problem \ref{prob:main2}. The algorithm first employs the matrices \(A_i\), \(i=1,2,\ldots,N\), the chosen disjoint sets \(c_j\), \(j=1,2,\ldots,v\), and their corresponding probabilities of allocation of the shared network, \(p_{c_j}\), \(j=1,2,\ldots,v\), to obtain symmetric and positive definite matrices, \(P_{i_s}\), \(P_{i_u}\), \(i=1,2,\ldots,N\), that satisfy condition \eqref{e:maincondn} with \(k=i_u\), \(i=1,2,\ldots,N\). It then utilizes the matrices \(A_i\), \(B_i\), \(P_{i_s}\), \(P_{i_u}\), \(i=1,2,\ldots,N\), to arrive at suitable controllers, \(K_i\), \(i=1,2,\ldots,N\), such that condition \eqref{e:maincondn} holds with \(k=i_s\), \(i=1,2,\ldots,N\). A set of feasibility problems is employed for this design. The matrix inequalities involved in Algorithm \ref{algo:controller_design} can be solved by employing standard linear matrix inequality (LMI) and bilinear matrix inequality (BMI) toolboxes. The following theorem asserts that state-feedback controllers obtained from Algorithm \ref{algo:controller_design} meet our requirements.
\begin{algorithm}
\begin{algorithmic}[1]
\FOR {\(i=1,2,\ldots,N\)}
\STATE Solve the following feasibility problem for \(P_{i_s}\), \(P_{i_u}\in\mathbb{R}^{d_i\times d_i}\):
\small
\begin{align}
\label{e:feasprob1}
\minimize\:\:&\:\:1\\
\sbjto\:\:&\:\:
\begin{cases}
A_{i_u}^\top \P^iA_{i_u} - P_{i_u} \prec 0,\nonumber\\
P_{i_s} = P_{i_s}^\top, P_{i_u} = P_{i_u}^\top,\nonumber\\
P_{i_s},P_{i_u} \succ 0,\nonumber\\
\kappa I_{d_i\times d_i}\preceq P_{i_s}, P_{i_u} \preceq I_{d_i\times d_i},\:\kappa > 0\:(\text{small}).
\end{cases}
\end{align}
\normalsize
\IF {the feasibility problem \eqref{e:feasprob1} admits a solution}
\STATE Solve the following feasibility problem for \(Y_i\in\mathbb{R}^{m_i\times d_i}\):
\small
\begin{align}
\label{e:feasprob2}
\hspace*{-3cm}\minimize\:\:&\:\:1\\
\sbjto\:\:&\:\:
\begin{cases}
&\bigl(A_{i}P_{i_s}^{-1}+B_i Y_i\bigr)^\top(\P^i)^{-1}\bigl(A_{i}P_{i_s}^{-1}+B_i Y_i\bigr)\nonumber\\
&- P_{i_s}^{-1} \prec 0.\nonumber
\end{cases}
\end{align}
\normalsize
\IF {the feasibility problem \eqref{e:feasprob2} admits a solution}
\STATE Compute \(K_i\) as follows:
\small
\begin{align}
\label{e:controller_comp}
K_i = Y_i P_{i_s}.
\end{align}
\normalsize
\ENDIF
\ENDIF
\ENDFOR
\caption{Design of static state-feedback controllers}\label{algo:controller_design}
\end{algorithmic}
\end{algorithm}
\begin{theorem}
\label{t:mainres2}
Consider an NCS described in \S\ref{s:prob_stat}. Suppose that Assumption \ref{a:divisibility} holds. Let the matrices \(A_i\), \(B_i\), \(i=1,2,\ldots,N\), the sets \(c_j\), \(j=1,2,\ldots,v\) and the probabilities \(p_{c_j}\), \(j=1,2,\ldots,v\), be given. Suppose that the state-feedback controllers, \(K_i\), \(i=1,2,\ldots,N\) are computed as \eqref{e:controller_comp}. Then condition \eqref{e:maincondn} holds.
\end{theorem}
\begin{proof}
Fix \(j\in\{1,2,\ldots,v\}\) and \(i\in c_j\).
Suppose that there exists a solution \(P_{i_s}\), \(P_{i_u}\) to the feasibility problem \eqref{e:feasprob1}. By Schur complement, the inequality
\begin{align}
\label{e:pf2_step1}
A_{i_u}^\top \P^i A_{i_u} - P_{i_u} \prec 0
\end{align}
is equivalent to
\small
\(\pmat{-\P^i & \P^i A_{i_u}\\\bigstar & -P_{i_u}} \prec 0\).
\normalsize
We need to design \(K_i\) such that the following inequality holds:
\begin{align}
\label{e:pf2_step3}
\pmat{-\P^i & \P^i A_{i_s}\\\bigstar & -P_{i_s}} \prec 0.
\end{align}
Let \(K_i\) be computed as described in \eqref{e:controller_comp}. We perform a congruence transformation on the left-hand side of \eqref{e:pf2_step3} by \small\(\text{diag}\pmat{(\P^i)^{-1},P_{i_s}^{-1}}\)\normalsize and obtain
\( \pmat{-(\P^i)^{-1} & A_i P_{i_s}^{-1}+B_i Y_i\\\bigstar & -P_{i_s}^{-1}}\).
Since congruence transformations preserve (negative) definiteness, \eqref{e:pf2_step3} holds if and only if the above matrix is negative definite. By Schur complement, the latter is equivalent to
\begin{align}
\label{e:pf2_step4}
(A_i P_{i_s}^{-1}+B_i Y_i)^\top (\P^i)^{-1}(A_i P_{i_s}^{-1}+B_i Y_i) - P_{i_s}^{-1} \prec 0.
\end{align}
Consequently, if the feasibility problem \eqref{e:feasprob2} admits a solution \(Y_i\), then \(K_i\) computed as \eqref{e:controller_comp} satisfies \eqref{e:pf2_step3}. Conditions \eqref{e:pf2_step1} and \eqref{e:pf2_step4} together lead to \eqref{e:maincondn}.
Since \(i\in c_j\) was chosen arbitrarily, it follows that condition \eqref{e:maincondn} holds for each plant \(i\in c_j\). Moreover, since \(j\in\{1,2,\ldots,v\}\) was chosen arbitrarily, we have that condition \eqref{e:maincondn} holds for each plant \(\displaystyle{i\in\bigcup_{j=1}^{v}c_j}\). By Lemma \ref{lem:auxres} i), the assertion of Theorem \ref{t:mainres2} follows.
\end{proof}
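The Schur-complement equivalence invoked twice in the proof can be illustrated numerically. The following sketch (assuming numpy) checks, for sample matrices, that \(A^\top P A - Q \prec 0\) holds exactly when the associated block matrix is negative definite; the test matrices are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Sketch: for symmetric positive definite P, Q and any A, the condition
# A^T P A - Q < 0 is equivalent (via the Schur complement) to
# [[-P, P A], [A^T P, -Q]] < 0.  Check both signs of definiteness numerically.
def is_neg_def(M):
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

P = np.array([[2.0, 0.0], [0.0, 2.0]])
Q = np.eye(2)
A = np.array([[0.5, 0.0], [0.0, 0.5]])

lhs = A.T @ P @ A - Q
block = np.block([[-P, P @ A], [A.T @ P, -Q]])
assert is_neg_def(lhs) and is_neg_def(block)        # both hold together

A2 = 2.0 * A                                        # now A2^T P A2 - Q is not < 0
lhs2 = A2.T @ P @ A2 - Q
block2 = np.block([[-P, P @ A2], [A2.T @ P, -Q]])
assert not is_neg_def(lhs2) and not is_neg_def(block2)
print("Schur complement equivalence illustrated")
```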
\begin{remark}
\label{rem:compa4}
\rm{
Our design of static state-feedback controllers in Algorithm \ref{algo:controller_design} involves standard linear algebraic techniques for stabilization of Markovian jump linear systems. The analysis is similar in spirit to \cite{Zhang2009}, where stability of \emph{a} Markovian jump linear system was considered. We deal with a more general setting of \emph{simultaneous} design of state-feedback controllers for \(N\) such systems.
}
\end{remark}
We now present numerical experiments to demonstrate our results.
\section{Numerical experiments}
\label{s:num_ex}
Our first experiment involves linearized models of benchmark control systems.
\begin{experiment}
\label{ex:num_exp1}
\rm{
Consider an NCS with number of plants, \(N = 2\) and capacity of the shared communication network, \(M = 1\).
\begin{itemize}[label = \(\circ\), leftmargin = *]
\item Plant \(i=1\) is a discretized version of a linearized batch reactor system presented in \cite[\S IVA]{Walsh2002} with sampling time \(0.05\) units of time. We have
\begin{align*}
A_1 &= \pmat{1.0795 & -0.0045 & 0.2896 & -0.2367\\-0.0272 & 0.8101 & -0.0032 & 0.0323\\
0.0447 & 0.1886 & 0.7317 & 0.2354\\0.0010 & 0.1888 & 0.0545 & 0.9115},\\
B_1 &= \pmat{0.0006 & -0.0239\\0.2567 & 0.0002\\0.0837 & -0.1346\\0.0837 & -0.0046}.
\end{align*}
\item Plant \(i=2\) is a discretized version of a linearized inverted pendulum system presented in \cite[\S 4]{Rehbinder2004} with sampling time \(0.05\) units of time. We have
\begin{align*}
A_2 &= \pmat{1.0123 & 0.0502\\0.4920 & 1.0123},\:B_2 = \pmat{0.0123\\0.4920}.
\end{align*}
\end{itemize}
Notice that the plants are open-loop unstable and \(N\%M = 0\). We compute \(v = N/M = 2\). Let \(c_1 = \{1\}\), \(c_2 = \{2\}\) and \(p_{c_1} = p_{c_2} = 0.5\). We first design static state-feedback controllers, \(K_i\), \(i=1,2\) such that condition \eqref{e:maincondn} holds for each \(i\) in \eqref{e:plants}. We employ Algorithm \ref{algo:controller_design} for this purpose. We obtain
\begin{align*}
K_1 = \pmat{0.0152761 & -0.8159748 & -0.2394377 & -0.7514747\\
2.3245781 & 0.0798596 & 1.622477 & -1.0654847}
\end{align*}
and
\begin{align*}
K_2 = \pmat{-2.3973087 & -1.4308615}.
\end{align*}
We have
\begin{align*}
P_{1s} &= \pmat{974.82022 & 115.25221 & 693.51383 & -223.88521\\
115.25221 & 1022.0729 & 160.38138 & 109.95335\\
693.51383 & 160.38138 & 768.15463 & -219.94088\\
-223.88521 & 109.95335 & -219.94088 & 1250.1576},\\
P_{1u} &= \pmat{1678.8234 & 300.05968 & 1271.4766 & -378.75625\\
300.05968 & 1465.4904 & 391.07683 & 368.29291\\
1271.4766 & 391.07683 & 1213.8238 & -279.44358\\
-378.75625 & 368.29291 & -279.44358 & 1483.7789},\\
Y_{1} &= \pmat{ 0.0005645 & -0.0006647 & -0.0008519 & -0.0005914\\
0.0024236 & -0.0001203 & -0.0001764 & -0.0004387},
\end{align*}
and
\begin{align*}
P_{2s} &= \pmat{1717.7113 & 138.39564\\138.39564 & 50.218134},\\
P_{2u} &= \pmat{2580.3612 & 512.67656\\512.67656 & 184.31981},\\
Y_{2} &= \pmat{0.0011569 & -0.0316812}.
\end{align*}
It follows that
\begin{align*}
A_{1_s}^\top \P^1 A_{1_s} - P_{1_s}
&=\pmat{-51.553004 & -7.8596573 & -69.500984 & -13.199701\\
-7.8596573 & -480.12758 & -40.530729 & 37.248709\\
-69.500984 & -40.530729 & -230.28674 & 106.35376\\
-13.199701 & 37.248709 & 106.35376 & -300.5394}
\prec 0_{d_1\times d_1},
\end{align*}
\begin{align*}
A_{1_u}^\top \P^1 A_{1_u} - P_{1_u}
&=\pmat{-48.482389 & 0.1252562 & -62.165288 & -15.778984\\
0.1252562 & -428.29213 & -25.838462 & 53.124646\\
-62.165288 & -25.838462 & -182.71532 & 86.522247\\
-15.778984 & 53.124646 & 86.522247 & -289.02844}
\prec 0_{d_1\times d_1},
\end{align*}
and
\begin{align*}
A_{2_s}^\top \P^2 A_{2_s} - P_{2_s}
&=\pmat{-26.390428 & -3.0495068\\
-3.0495068 & -30.242636}
\prec 0_{d_2\times d_2},
\end{align*}
\begin{align*}
A_{2_u}^\top \P^2 A_{2_u} - P_{2_u}
=\pmat{-25.479355 & -3.4282391\\
-3.4282391 & -25.646787}
\prec 0_{d_2\times d_2}.
\end{align*}
We then employ Algorithm \ref{algo:impl} to generate probabilistic scheduling logics. We set \(T=1000\). It follows that \(f_{c_1} = 500\) and \(f_{c_2} = 500\). We generate \(10\) different sequences \(\gamma(0)\), \(\gamma(1),\ldots\), \(\gamma(999)\). Corresponding to each sequence, we pick \(10\) different initial conditions \(x_i^0\in[-10,+10]^{d_i}\), \(i=1,2\), and plot \(\norm{x_i(t)}^{2}\), \(i=1,2\). The resulting trajectories (up to time \(t=100\)) are illustrated in Figures \ref{fig:plant1} and \ref{fig:plant2}; they are consistent with stochastic stability of each plant in the NCS under consideration.
\begin{figure}
\centering
\includegraphics[scale = 1]{plant1}
\caption{\(\norm{x_1(t)}^{2}\) versus \(t\)}\label{fig:plant1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale = 1]{plant2}
\caption{\(\norm{x_2(t)}^{2}\) versus \(t\)}\label{fig:plant2}
\end{figure}
}
\end{experiment}
Our next experiment is geared towards testing scalability of the proposed techniques.
\begin{experiment}
\label{ex:numex2}
\rm{
We fix the capacity of the shared network at \(M = 10\) and carry out the following procedure for various values of the total number of plants, \(N\), such that Assumption \ref{a:divisibility} holds:
\begin{enumerate}[label = (\roman*), leftmargin = *]
\item We generate unstable matrices \(A_i\in\mathbb{R}^{5\times 5}\) and vectors \(B_i\in\mathbb{R}^{5\times 1}\) with entries from the interval \([-2, 2]\) and the set \(\{0,1\}\), respectively, chosen uniformly at random and ensuring that each pair of matrices \((A_i, B_i)\), \(i = 1,2,\ldots,N\) is controllable.
\item We compute \(v=N/M\), construct the set \(\mathcal{S}\) containing all subsets of \(\{1,2,\ldots,N\}\) with \(M\) distinct elements, choose a step size \(h=0.001\), and compute \(r\) to be the biggest integer satisfying \(rh < 1\).
\item For all distinct sets \(c_j\in\mathcal{S}\), \(j=1,2,\ldots,v\), and probabilities \(p_{c_j}\in\{h,2h,\ldots,rh\}\), \(j=1,2,\ldots,v\), satisfying \(\displaystyle{\sum_{j=1}^{v}{p}_{{c}_{j}}} = 1\), we employ Algorithm \ref{algo:controller_design} until a suitable set of state-feedback controllers, \(K_i\), \(i=1,2,\ldots,N\), is designed. We note the corresponding \(c_j\) and \(p_{c_{j}}\), \(j=1,2,\ldots,v\), and proceed to Step (iv). If no such set of controllers is found, then we report a failure.
\item We employ Algorithm \ref{algo:impl} to generate probabilistic scheduling logics. We set \(T=1000\) and generate a sequence \(\gamma(0)\), \(\gamma(1),\ldots\), \(\gamma(999)\).
\end{enumerate}
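Step (i) above can be sketched as follows (assuming numpy): draw random pairs until \(A\) is unstable (spectral radius exceeding one) and \((A,B)\) passes the Kalman rank test for controllability. The seed and helper name are illustrative.

```python
import numpy as np

# Sketch of Step (i): draw random (A, B) pairs until A is unstable (spectral
# radius > 1) and (A, B) is controllable (Kalman rank condition).
def random_unstable_controllable(d=5, rng=np.random.default_rng(0)):
    while True:
        A = rng.uniform(-2.0, 2.0, size=(d, d))
        B = rng.integers(0, 2, size=(d, 1)).astype(float)   # entries in {0, 1}
        ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(d)])
        if np.max(np.abs(np.linalg.eigvals(A))) > 1.0 and np.linalg.matrix_rank(ctrb) == d:
            return A, B

A, B = random_unstable_controllable()
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(5)])
assert np.max(np.abs(np.linalg.eigvals(A))) > 1.0
assert np.linalg.matrix_rank(ctrb) == 5
print("found an unstable, controllable pair")
```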
The above set of steps was implemented by employing the LMI solver toolbox and the PENBMI toolbox in MATLAB R2020a on an Intel i7-8550U, 8 GB RAM, 1 TB HDD PC running the Windows 10 operating system. The time taken to conduct the experiment for various choices of \(N\) is summarized in Table \ref{tab:data_tab}. Not surprisingly, we observe that the computation time under consideration increases as the number of plants in an NCS increases.
}
\end{experiment}
\begin{table}[htbp]
\centering
{
\begin{tabular}{|c | c | c|c|}
\hline
\(N\) & \(M\) & Result & Time taken (in sec)\\
\hline
\(100\) & \(10\) & Success & \(93\)\\
\hline
\(200\) & \(10\) & Success & \(1183\)\\
\hline
\(500\) & \(10\) & Success & \(10367\)\\
\hline
\(700\) & \(10\) & Success & \(33710\)\\
\hline
\(1000\) & \(10\) & Success & \(75726\)\\
\hline
\end{tabular}}
\vspace*{0.2cm}
\caption{Data for numerical experiment}\label{tab:data_tab}
\end{table}
\section{Conclusion}
\label{s:concln}
In this paper we presented a probabilistic algorithm to design scheduling logics for NCSs whose shared communication networks have limited capacity. We operated under the assumption that communication between plants and their controllers is not affected by any form of communication uncertainties. A next natural research direction is the design of probabilistic algorithms that construct scheduling logics for NCSs under communication uncertainties like time delays, data losses, quantization errors, etc. This matter is currently under investigation and will be reported elsewhere.
\section{Introduction \label{intro}}
One of the greatest challenges in Extragalactic Astronomy is to fundamentally explain the formation and evolution of late-type galaxies (LTGs) and their main structural components. To this end, it is crucial to assess the build-up histories of galactic disks and bulges, given that the formation of the latter is intimately linked to the genesis and growth of supermassive black holes (SMBHs) and their regulatory role on galaxy evolution \citep[e.g.,][]{KorHo13,HecBes14,MarNav18}.
Traditionally, galactic disks were thought to form early-on via violent quasi-monolithic gas collapse \citep{Lar74} or galaxy mergers \citep[e.g.,][]{BarHer96,SprHer05,BouJogCom05} and/or to gradually assemble around pre-existing monolithically collapsed bulges \citep[e.g.,][]{Kau93,Zoc06}.
Lately, however, thanks to the recent development of observing facilities and improved modeling techniques that employ increasingly sophisticated physical recipes, it became apparent that LTGs undergo rather complex assembly histories, comparatively to what was previously envisaged.
On the other hand, the literature regarding the formation and evolution of LTGs reports contradictory results, leading to conflicting interpretations. For instance, by performing disk-to-bulge decomposition for 180 galaxies from the 3DHST Legacy Survey with redshift $1.5 < z < 4.0$, \cite{Sac19} find that from the redshift interval $z > 2$ to $z < 2$, the scale-length of two-component galaxies undergoes an increase ($\sim$1.3 times) whereas their bulge sizes and bulge/total ratio (B/T) remain almost constant. The authors infer that $z\sim2$ is mostly a disk formation period while bulges had formed at earlier times ($z > 2$).\footnote{This interpretation, however, seems to be in conflict with the fact that significant disk growth together with a constant B/T must also imply bulge growth.}
This conclusion has been recurrently reported in the literature \citep[e.g.,][]{Mar16,Mar17}, usually resulting from photometric bulge-disk decomposition studies. In contrast, other works applying different techniques suggest instead that bulges and disks grow and evolve jointly, opposing the prevailing view of two independent formation scenarios for bulge and disk build-up. In a study by \cite{vD13}, applying the abundance matching technique to Milky Way (MW) progenitor candidates out to $z = 2.5$, it is found that MW-like LTGs have built $\sim$90\% of their present stellar mass (${\cal M}_{\star}$) after $z = 2.5$, with the star-formation peak occurring before $z = 1$. Additionally, they show that the bulge buildup was prolonged, occurring between $1 < z < 2.5$. In this period, the mass in the central 2 kpc of MW progenitors increases by a factor of $\sim$3, ruling out models in which bulges were fully assembled first and disks gradually formed around them.
In addition, a recent study by \citet[][hereafter BP18]{BrePap18} where a representative sample of 135 local LTGs\footnote{The present study is the result of an extended analysis applied to the same galaxy sample as in BP18.} was analyzed by combining three techniques, namely surface photometry, spatially resolved spectral modeling and post-processing with RemoveYoung (${\cal RY}$) \citep{GP16-RY}, demonstrates that LTG bulges form a $continuous$ $sequence$ with regard to their $<\!\!\!\delta\mu_{9{\rm G}}\!\!\!>$\ \footnote{$<\!\!\!\delta\mu_{9{\rm G}}\!\!\!>$\ (mag) is defined by BP18 as the difference between the mean $r$ band surface brightness of the present-day stellar component and that of stars older than 9 Gyr.}, mean stellar age and metallicity ($\langle t_{\star,\textrm{B}} \rangle_{{\cal M}}$\ and $\langle Z_{\star,\textrm{B}} \rangle_{{\cal M}}$, respectively), across $\sim$3 dex in log(${\cal M}_{\star}$) and $>$ 1 dex in log of stellar surface density ($\Sigma_{\star}$).
Moreover, they find that physical properties of bulges and their parent disks are linked, pointing once more to a joint evolution between bulge and disk.
Even though they adopt significantly different methodologies and galaxy samples, the two aforementioned studies find strong evidence for a unified formation scenario of LTGs, where bulge build-up time-scale and mass (total and relative) are dictated by the total galaxy mass \citep[see also][]{Gan07}. It is worth stressing that these studies are not based on structural decomposition techniques, being therefore free from prior assumptions on the LTG's individual stellar components (bulge, disk, bar) that might impact the obtained results.
Concerning the disk of LTGs, these complex stellar structures are mainly characterized by a radial light distribution $\rm I(R)$ that is usually well fit by an exponential law with the generic form $\rm I(R) \propto e^{-R/\alpha}$, where $\alpha$ is the disk scale-length. As a result, galaxy disks were initially thought to universally follow an exponential decay in their surface brightness profiles (type~I) \citep{Vau59}, yet subsequent work has revealed the existence of disk galaxies that deviate from the exponential law in their outskirts, exhibiting down/up-bending surface brightness profiles (SBPs) \citep[type~II and type~III, respectively,][]{Fre70}. These observations prompted a revision of the proposed theories for the formation of such stellar structures and accurate modeling of possible divergences from the exponentiality \citep[e.g., ][]{Poh08,Lai14,Wat19}.
The last decades were crucial for the development of this field of research, due to the overall improvement of computational power and tools.
\citet{Dal97} sought to explain the range of observed disk properties using a set of gravitationally self-consistent models for disk collapse, assuming that the resulting dark matter halo has a universal density profile and that angular momentum is conserved during collapse, with an initial distribution comparable to that produced by an external tidal torque.
Their work predicts that several disk properties are intimately related, such as total mass and initial angular momentum -- the collapse of a gas cloud with low initial angular momentum will give rise to a high-mass, high-surface-brightness galaxy, whereas the opposite holds for a gas cloud with high initial angular momentum.
The shape of the rotation curve also appears to be tightly connected with the initial angular momentum. Low angular momentum disks (generally higher-mass galaxies) are centrally concentrated and globally unstable to non-axisymmetric perturbations, which may result in angular momentum transfer and secular bulge/bar formation. This contrasts with high angular momentum disks (usually lower-mass galaxies), which display a slowly rising rotation curve and are less prone to instabilities (i.e., precursors of bulge/bar formation).
As for observational studies, a recent IFS study by \cite{RL17a} attributes deviations from an exponential slope to radial stellar migration, proposing it to be more efficient in type~III disks as compared to type~II. A supplementary study by these authors adds further support to this scenario through the modeling of Milky Way-mass disks in cosmological simulations \citep{RL17b}.
Whereas significant progress has been achieved in our understanding of disks in their outer parts, the same does not apply to their central regions. A critical question that has been barely explored is whether the exponential slope of the disk is valid all the way to the LTG center, i.e., beneath the bulge. Although SBPs of bulgeless\footnote{In this work, \emph{bulgeless galaxies} denote pure disk galaxies without evidence for a central luminosity excess.} galaxies show a pure exponential luminosity profile \citep[e.g.,][]{SacSah16}, the central radial distribution of the disk for the majority of LTGs remains unexplored. Considering that the existence of a bulge prevents direct observation of the disk in the inner region of the galaxy, it is standard procedure to retain the assumption that the disk follows an exponential profile.
The implications of this simplifying assumption are manifold and fundamental: this fitting approach actually presumes that bulge and disk co-exist without significant dynamical interaction and mass exchange (e.g., stellar migration, kinematical heating) over several Gyr of galactic evolution, which appears to be in contrast with conclusions from previous studies that advocate a joint evolution of these two galaxy entities.
Furthermore, in the light of the assumption that disks gradually assembled around bulges that were formed prior to and independently from the disk, the most reasonable expectation should be a decrease in the disk's stellar surface density in the innermost part of the galaxy, i.e., the opposite of what is commonly assumed. If bulges are a product of monolithic collapse, it appears legitimate to consider that the inherent high gas and stellar velocity dispersion ($\sigma_{\star}$) in the bulge would act against the build-up of a dynamically cold disk inside the bulge radius.
Another implicit assumption embedded in the generally adopted exponentiality of the disk to the LTG center is the absence of significant stellar age or metallicity gradients within the bulge radius, as well as a nearly invariant specific star-formation rate (sSFR) throughout the disk. Recently, IFS observations allowed a spatially resolved exploration of stellar populations of galaxies across a significant part of their radial extent (up to 1.5 to 2~effective radii, $\rm R_{eff}$). Many studies demonstrate that some LTGs present age and metallicity gradients across their disks \citep[see, e.g.,][]{Gon15,God17}, pointing to the non-negligible effect of a non-uniform sSFR throughout the galaxy. In addition, a systematic analysis of the radial profiles of mass- and luminosity-weighted ages of the same LTG sample used in this study \citep{Bre20a} reveals quite significant stellar age gradients within the bulge radius, whose slope (positive/negative) is anti-correlated with total galaxy mass.
From the theoretical viewpoint, in the past decades there has been some evidence supporting the non-preservation of the exponential profile of the disk within the bulge.
Attempts to model the rotation velocity profile or the stellar surface density of the disk component by means of semi-analytic models \citep{KuiDub95}, N-body simulations \citep{WidDub05} or, more recently, hydrodynamical simulations \citep{Obr13} show that, in the presence of a bulge (whose stars are characterized by significantly higher $\sigma_{\star}$ as compared to the disk), the orbits of disk stars (mostly supported by rotational velocity, V$_{\rm rot}$) are gradually kinematically heated by cumulative weak interactions with the bulge.
According to these results, interaction between both collisionless stellar populations over several Gyr will eventually lead to a scarcity (or even depletion) of rotation-dominated stellar populations in the galactic center, resulting in a flattening or even a sharp central decrease of the disk's particle density, as shown in Fig.~2 of
\citet{Obr13}. In this context, it is noteworthy that from the observational point of view, the need for a central flattening of exponential profiles in many dwarf galaxies was established through surface photometry of early-type and late-type systems \citep{BinCam93,P96a,Noeske03}.
If the assumption of an exponential inner disk proves incorrect, important implications may be expected for structural studies of LTGs. For instance, in 1D surface photometry the standard procedure for the photometric decomposition of such galaxies involves the determination and subtraction of the disk contribution by approximating its SBP by an exponential function. The residual central luminosity excess is attributed to the bulge and fitted with a S\'ersic model, the best-fitting parameters of which ($\eta$, $\rm R_{eff}$, radial extent, total magnitude) are used for bulge classification into classical bulges (CB) and pseudo-bulges (PB).
Even in the case where all the structural components such as bulge and disk are fitted simultaneously, as commonly occurs with 2D surface photometry codes such as IMFIT \citep{Erwin2015} or GALFIT \citep{Peng2010}, by assuming an incorrect surface brightness distribution for the disk one would still introduce a systematic bias.
A possible overestimation of the disk luminosity inside the bulge radius, due to the false assumption that it retains its exponential slope all the way to the center, would impact determinations of the luminosity and structural properties of the bulge. This might offer an explanation for the relatively weak correlation between $\eta$ and bulge magnitude \citep[see, e.g.,][]{Hea14,Mos14}.
As a pilot attempt to investigate the disk's radial distribution within the bulge and evaluate the validity of the conventional and universal assumption of extrapolating its exponential intensity profile, here we present a study where we test this hypothesis in the context of spectral analysis and modeling. To this end, we developed a tool that allows us to estimate the net spectral energy distribution (SED) of the bulge after removal of the disk contribution using a combined photometric and spectral modeling approach. The tool was applied to a representative sample of the local LTGs population, comprising 135 galaxies from the CALIFA IFS survey \citep{Sanchez12-DR1,Sanchez16-DR3}.
It is worth mentioning that there were previous attempts to perform spectrophotometric decomposition of galaxies based on IFS data \citep{Men19,John17} and long-slit spectroscopy \citep{Johnston12,Johnston14,Sil12}. However, the methodologies adopted in these studies greatly differ from the one presented here, with the most significant difference being that these authors fix the surface brightness distribution of the disk to the standard exponential law.
Consequently, these studies do not explore, by design, possible deviations from exponentiality in the central part of the disk; the exponentiality of the disk was explicitly assumed. Additionally, these tools were applied to early-type (ETGs) and lenticular (S0s) galaxies, respectively, i.e. galaxies with nearly homogeneous stellar populations in terms of age, contrary to LTGs.
Section \ref{sample} describes the sample selection, Sect. \ref{meth} outlines the adopted methodology, Sect. \ref{resDiskSub} presents the main results and, finally, Sect. \ref{conc} is dedicated to discussing the obtained results and summarizing the conclusions.
\section{Sample description \label{sample}}
The galaxy sample analyzed here was selected from the 3$^{\rm rd}$ Data Release of the CALIFA integral field spectroscopy (IFS) survey \citep[667 galaxies;][see http://califa.caha.es]{Sanchez16-DR3}. It consists of 135 non-interacting, nearly face-on local ($\leq$130 Mpc) LTGs (see BP18 for a complete description of the sample) and was assembled aiming for high representativity of the LTG population in the local Universe, spanning a range of $\sim$3 dex in log(${\cal M}_{\star}$) and $>$ 1 dex in log($\Sigma_{\star}$). In BP18 the galaxy sample was tentatively subdivided into three $< \mkern-6mu \delta\mu_{9{\rm G}} \mkern-6mu >$ intervals.
This quantity was there defined as the difference $\mu_{\rm 0\,Gyr}$-$\mu_{\rm 9\,Gyr}$ between the mean $r$ band surface brightness of the present-day stellar component and that of stars older than 9 Gyr (a $<\!\!\!\delta\mu_{9{\rm G}}\!\!\!>$\ $\approx$ 0 mag implies that the bulge has completed its buildup earlier than 9 Gyr ago ($z\simeq 1.34$), while a $<\!\!\!\delta\mu_{9{\rm G}}\!\!\!>$\ of --2.5 mag denotes a contribution of 90\% from stars younger than 9 Gyr). Interval A (\brem{iA}; $<\!\!\!\delta\mu_{9{\rm G}}\!\!\!>$\ $\leq$ --2.5 mag; 34 galaxies) includes the least massive galaxies, which host low-mass, young bulges with low stellar metallicity, frequently classified as star-forming (SF) after the BPT spectroscopic classification scheme \citep{BalPhiTer81}. In contrast, interval C (\brem{iC}; $<\!\!\!\delta\mu_{9{\rm G}}\!\!\!>$\ $\geq$ --0.5 mag; 43 galaxies) contains the most massive galaxies with the most massive, dense, old and chemically enriched bulges, typically falling in the class of AGN/LINER. LTGs that fall within interval B (\brem{iB}; --1.5 mag to --0.5 mag; 58 galaxies) display intermediate characteristics in all measured properties. As for the frequency of bars, they are found in about 40\% of our sample ($\sim$1/3 for \brem{iA}, $\sim$1/3 for \brem{iB} and $\sim$2/3 for \brem{iC}). This subdivision and subsequent examination of bulge and galaxy properties within each interval led us to conclude that the total galaxy mass is the main evolutionary driver for LTGs, being tightly connected with the bulge's stellar mass and surface density, mean stellar age and metallicity, current photo-ionization mechanism and mean stellar age and metallicity of the disk.
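The mapping between $<\!\!\!\delta\mu_{9{\rm G}}\!\!\!>$\ and the light fraction of young stars quoted above follows directly from the magnitude definition; a minimal sketch (the function name is ours and purely illustrative, not part of the BP18 analysis):

```python
# Illustrative helper (not from BP18): convert <delta mu_9G> =
# mu_0Gyr - mu_9Gyr (mag) into the fraction of present-day r-band
# light emitted by stars younger than 9 Gyr.
def young_light_fraction(delta_mu):
    # delta_mu = -2.5 * log10(F_total / F_old)
    # => F_old / F_total = 10**(delta_mu / 2.5)
    f_old = 10.0 ** (delta_mu / 2.5)
    return 1.0 - f_old

print(young_light_fraction(-2.5))  # 0.9: 90% of the light from stars < 9 Gyr
print(young_light_fraction(0.0))   # 0.0: bulge assembled before z ~ 1.34
```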
\section{Data analysis \label{meth}}
The concept used here consists of assuming different intensity profiles for the innermost part of the disk, subtracting their contributions from the total central luminosity and subsequently fitting the remaining (net) bulge spectra, in order to assess whether they are physically plausible. An unphysical or implausible SED for the disk-subtracted bulge implies that the underlying assumption of the exponentiality of the disk is invalid.
The adopted methodology combines modeling of binned IFS spectra -- by means of two population spectral synthesis (PSS) codes: {\sc Starlight}\ \citep{Cid05} \& {\sc FADO}\ \citep{GomPap17} -- with surface photometry of optical images. Surface photometry was carried out on SDSS $r$- and $g$-band images with the goal of estimating the expected light fraction of the disk inside the bulge radius when assuming different radial intensity distributions. As shown in Fig.~\ref{disks}, we assumed three profiles for the radial intensity of the disk within the bulge radius, $\mathrm{R}_{\rm B}$: a) a purely exponential profile, as commonly assumed; b) an inwardly flattening profile; and c) a centrally depressed profile, such that at the galactic center the disk contribution is virtually zero. Subsequent integration of the different light growth curves allowed us to determine the fraction of light within $\mathrm{R}_{\rm B}$\ pertaining to the disk ($f_{\rm D}$) for each disk configuration.
In parallel, we conducted a spectral fitting analysis: after establishing a clear-cut definition of $\mathrm{R}_{\rm B}$\ (see Sect.~\ref{3disk}) and the disk radial extent, individual spaxels were integrated into one spectrum for each of the respective stellar components. For each galaxy, bulge and disk spectra were modeled by both {\sc Starlight}\ \& {\sc FADO}\ using two simple stellar population (SSP) spectral libraries -- Z4, comprising SSPs from \citet{BruCha03} for 38 ages between 1~Myr and 13~Gyr for four stellar metallicities (0.05, 0.2, 0.4 and 1.0~$Z_{\odot}$), referring to a Salpeter IMF \citep{SalIMF} and Padova 2000 tracks \citep{Gir00}, and Z5, which is identical to Z4 in terms of age coverage except for being supplemented by SSPs with a metallicity of 1.5~$\mathrm{Z}_{\odot}$. The use of two stellar libraries and two conceptually distinct spectral fitting codes permitted us to uncover to what extent the obtained results depend on the spectral modeling technique.
The best-fitting synthetic stellar spectrum of the normalized SED of the disk was then scaled according to the previously determined $f_{\rm D}$\ (after correction for the SDSS $r$- or $g$-band transmission curve) and subsequently subtracted from the bulge best-fitting synthetic stellar SED, thereby yielding the net-bulge SED. Finally, we assessed the soundness of the obtained spectra by refitting them with both PSS codes.
Considering that this approach consists of the application of spatially resolved spectral modeling to IFS data, it is free from any prior assumption on the stellar populations and star formation histories (SFH) throughout the galaxy.
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{NGC0036_SBP_comp4.png}
\caption{Left: SDSS true-color image of LTG \object{NGC 0036}. The white circle overlaid in the SDSS image depicts the bulge diameter (with radius $\mathrm{R}_{\rm B}$) and the innermost black circle illustrates the bulge radius that was considered for this study (with radius $\mathrm{R}_{\rm C}$, defined in Sect.~\ref{spec}). The radius of the outermost black circle corresponds to the limit of the disk (with radius $\rm R_{\rm D}$)\protect\footnotemark. Right: SDSS $g$-band SBP illustrating the assumed three disk configurations (black points -- observed profile; blue solid line -- standard exponential disk, \brem{expD}; green solid line -- inwardly flattening disk, \brem{flatD}; red solid line -- centrally depressed disk, \brem{decrD}). The colored dashed lines represent the modeled bulge after subtraction of the respective disk's profile. The horizontal line denotes the limiting surface brightness at $\mu$ = 24~mag/$\sq\arcsec$. Grey vertical lines denote the considered limits for the disk (the outermost grey line corresponds to the outermost circle in the l.h.s), the dashed vertical black line depicts $\mathrm{R}_{\rm C}$\ (innermost black circle in the l.h.s) and the dash-dotted vertical black line $\mathrm{R}_{\rm B}$\ (white circle in the l.h.s). The galactocentric distance (x axis) is normalized to $\mathrm{R}_{\rm eff}$. }
\label{disks}
\end{figure*}
\footnotetext{The circle depicting $\rm R_{\rm D}$\ does not precisely correspond to the analyzed area, being instead the equivalent radius (the radius of a circle with an area equal to the sum of all spaxels that belong to the disk). The l.h.s of Fig.~\ref{maps} displays the spaxels that were considered in constructing the average spectrum of the disk for \object{NGC 0036}.}
\subsection{Photometric decomposition assuming three different radial profiles for the disk}\label{3disk}
To test the standard assumption that the disk conserves its exponential nature all the way to the galactic center, we resort to structural analysis, which permits decomposing galaxies into their main stellar constituents. Seeking a uniform, clear-cut definition for the bulge radius $\mathrm{R}_{\rm B}$, without relying on strong prior assumptions on the photometric structure of LTGs, this quantity was determined by fitting a single S\'ersic model to the central luminosity peak. It was measured at an extinction-corrected surface brightness level $\mu_{\rm lim}$ of 24 mag/$\sq\arcsec$, this way encircling nearly all the flux from the bulge. For this purpose we used our 1D surface photometry code iFIT \citep{ifit}\footnote{As discussed in BP18 in further detail, we additionally performed full image decomposition with iFIT, IMFIT and GALFIT. We generally found a minor dependence of $\mathrm{R}_{\rm B}$\ on different codes and profile fitting schemes.}, after visual inspection of the morphology and $g$--$i$ color maps.
As for the disk, it was modeled by fitting the standard exponential model ($\mathrm{I}_{e}$) at intermediate radii (see BP18 for additional details on the photometric analysis), whose equation in intensity units is given by:
\begin{equation}
\mathrm{I}_{e}(\rm R^{\star}) = I_{0} \cdot e^{\rm - R^{\star}/\alpha},
\end{equation}
where I$_{0}$ is the intensity at the galactic center, $\rm R^{\star}$ the radius (i.e., distance from the center) and $\alpha$ is the scale-length, in $\arcsec$.
After estimating the best-fitting parameters for the exponential disk for each galaxy in the sample, we adapted the previously estimated disk to a centrally flattened and a down-bending surface brightness profile within $\mathrm{R}_{\rm B}$ by means of the modified exponential ($\mathrm{I}_{\hat{e}}$) distribution proposed by \citet{P96a}:
\begin{equation}
\renewcommand*{\arraystretch}{1.5}
\begin{array}{l}
\mathrm{I}_{\hat{e}}(\mathrm{R}^{\star}) = \mathrm{I}_{e}[1 - \epsilon_{1} e^{-\mathrm{P}_{3}(\rm R^{\star})}] \\
\rm P_{3}(R^{\star}) = \left( \dfrac{\rm R^{\star}}{R_{core}} \right)^{3} + \left( \dfrac{\rm R^{\star}(1-\epsilon_{1})}{\alpha \epsilon_{1}} \right)
\end{array}
\renewcommand*{\arraystretch}{1}
\end{equation}
In all cases, we set the core radius $\rm R_{core}$ equal to $\mathrm{R}_{\rm B}$\ so that $\mathrm{I}/\mathrm{I}_{e}$ starts diverging from 1 at this radius (i.e., decreasing the disk's intensity relative to that corresponding to an exponential fitting law).
As for the central intensity depression, $\epsilon_{1} = \Delta \rm I / I_{0}$, we tested two different cases: an intermediate case characterized by a nearly constant intensity within $\mathrm{R}_{\rm B}$\ (adopting $\epsilon_{1} = \alpha/(1.5\,{\rm R_{core}})$) and a steeply down-bending disk profile reaching zero intensity ($\epsilon_{1} = 1$) at the center ($\rm R^{\star}=0$) of the disk. These two modified exponential profiles are referred to as \brem{flatD} and \brem{decrD}, respectively, and are illustrated in the r.h.s. panel of Fig.~\ref{disks}, in addition to the standard exponential disk \brem{expD}.
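To make the three disk configurations concrete, they can be sketched numerically as follows (a toy illustration with arbitrary parameter values; we use the damped exponent $e^{-\mathrm{P}_3}$ so that the modified profile converges to the pure exponential at large radii):

```python
import numpy as np

# Toy sketch of the three disk configurations (arbitrary parameters).
# eps1 = 0 recovers expD; eps1 = alpha/(1.5*R_core) gives flatD;
# eps1 = 1 gives decrD (zero intensity at R = 0).
def disk_intensity(R, I0, alpha, R_core, eps1):
    I_e = I0 * np.exp(-R / alpha)          # standard exponential disk
    if eps1 == 0.0:
        return I_e
    P3 = (R / R_core) ** 3 + R * (1.0 - eps1) / (alpha * eps1)
    return I_e * (1.0 - eps1 * np.exp(-P3))

R = np.linspace(0.0, 10.0, 101)            # radii in arbitrary units
I0, alpha, R_core = 1.0, 2.0, 2.0
expD  = disk_intensity(R, I0, alpha, R_core, 0.0)
flatD = disk_intensity(R, I0, alpha, R_core, alpha / (1.5 * R_core))
decrD = disk_intensity(R, I0, alpha, R_core, 1.0)
```

At $R = 0$ the modified profile tends to $\mathrm{I}_0(1-\epsilon_1)$, while well outside $\rm R_{core}$ the correction term vanishes and all three curves coincide with the exponential law.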
Although a full photometric analysis and characterization of the different bulge luminosity profiles that remain after subtraction of the assumed disks is beyond the scope of this article, as a sanity check we used a simple $\chi^2$ minimization algorithm to fit a S\'ersic model to the residuals, as a tentative assessment of the plausibility of the obtained bulge profiles.
We document values for the S\'ersic index within $0.3 \leq \eta \leq 2.3$ (in no circumstance reaching unfeasible values such as $\eta < 0.2$ or $\eta > 8$) and an average absolute difference in the best-fitting $\eta$ of 0.5 between \brem{expD} and \brem{flatD} and of 0.2 between \brem{expD} and \brem{decrD} (see Fig.~\ref{disks} for a visual representation of the three possible bulge luminosity profiles for \object{NGC 0036}). Regarding the obtained estimates for the B/T, we report values within $3\% \leq \mathrm{B/T} \leq 51\%$, an average increase of $\sim 15\%$ between the B/T estimated after subtraction of \brem{expD} and that after subtraction of \brem{flatD}, and of $\sim 25\%$ when comparing \brem{expD} with \brem{decrD}. Visual inspection of the remaining excess and respective models suggests that all the obtained bulge luminosity profiles are physically reasonable, demonstrating that experiments involving surface photometry do not yield strong discriminators of deviations of the disk from exponentiality.
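As a toy illustration of this sanity check (our own mock example, not the actual pipeline code), a brute-force $\chi^2$ scan over the S\'ersic index recovers the input model:

```python
import numpy as np

# Mock version of the sanity check: fit a Sersic model to a synthetic
# net-bulge profile by brute-force chi^2 minimization over the index n.
def sersic(R, I_eff, R_eff, n):
    b_n = 2.0 * n - 1.0 / 3.0   # common approximation, valid for n >~ 0.5
    return I_eff * np.exp(-b_n * ((R / R_eff) ** (1.0 / n) - 1.0))

R = np.linspace(0.1, 10.0, 60)
I_obs = sersic(R, 1.0, 2.0, 1.2)            # mock residual with n = 1.2
grid = np.arange(0.5, 2.31, 0.01)           # scanned Sersic indices
chi2 = [np.sum((I_obs - sersic(R, 1.0, 2.0, n)) ** 2) for n in grid]
n_best = grid[int(np.argmin(chi2))]         # recovers n ~ 1.2
```

In practice one would of course also fit $\mathrm{I}_{\rm eff}$ and $\rm R_{eff}$; the single-parameter scan is kept here only for readability.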
\subsection{Construction of bulge and disk spectra \label{spec}}
By integrating IFS CALIFA data (after correction of the spectra in individual spaxels for intrinsic stellar motions) we constructed two spectra per galaxy: a summed-up spectrum of the bulge and an average spectrum of the disk, normalized to one spaxel (1 $\sq\arcsec$, given that CALIFA data-cubes have a spaxel scale of 1$\arcsec$). Bearing in mind that this is a pilot study, and considering that the spectroscopic modeling and subsequent subtraction of the bar contribution is a non-trivial task, there was no attempt to model the bar component separately. Additionally, since the average disk spectrum is computed from a significant number of spaxels (see Fig.~\ref{maps}) whereas the bar is confined to a smaller number of spaxels than the disk in any barred galaxy, the spectroscopic contribution from the bar is expected to be smoothed out in the final disk spectrum, so that the bar will only marginally contaminate the average spectrum for the disk (i.e., the region outside $\mathrm{R}_{\rm B}$).
\begin{figure*}[h!]
\centering
\includegraphics[width=1\linewidth]{NGC0036_cont_comp1.png}
\caption{Illustration of the CALIFA IFS data (spaxel-by-spaxel map) of \object{NGC 0036} (l.h.s.), showing the emission-line-free pseudo-continuum between 6390 and 6490~\AA. The spaxels within the red contour were integrated to obtain a single bulge spectrum, which is shown in the r.h.s along with its respective best-fitting stellar SED as obtained by {\sc Starlight}, Z4, in red. The average disk spectrum displayed in the r.h.s, overplotted by its best-fitting stellar spectrum in dark-blue, was obtained by integrating the spaxels that lie between the two blue contours in the l.h.s. and subsequently dividing by the number of spaxels considered. Additionally, the best-fitting stellar SED for the disk after scaling according to the exponential model is plotted in light-blue in the r.h.s.}
\label{maps}
\end{figure*}
Using an adaptation of the isophotal annuli (\brem{isan}) surface photometry technique by \citet{P02}, which consists of computing the mean surface brightness for a given filter within logarithmically equidistant isophotal zones obtained from a reference image, in this case the emission-line-free pseudo-continuum between 6390~\AA\ and 6490~\AA, all galaxies of the sample were previously segmented into 18 isophotal zones.
Having the sample galaxies segmented and the information on the radial extent of the bulge, the next step was to determine which zones lie within the bulge and which within the disk, as pictured in Fig.~\ref{maps}. Here we defined a more conservative radial extent of the bulge ($\mathrm{R}_{\rm C}$, see black circle overlaid on the SDSS true color image in the left panel and dashed vertical line in the right panel of Fig.~\ref{disks}), which is the galactocentric radius of the last zone that lies within $\mathrm{R}_{\rm B}$, as given by the structural analysis (mean $\mathrm{R}_{\rm C}$\ $\sim$ 0.87 $\cdot$ $\mathrm{R}_{\rm B}$; $\sigma$ = 0.13 for the LTG sample -- given the minor difference one can assume that $\mathrm{R}_{\rm B}$\ $\simeq$ $\mathrm{R}_{\rm C}$).
\\
The l.h.s of Fig.~\ref{maps} illustrates the spaxels that were used to construct the two spectra in the case of \object{NGC 0036}. The total spectrum within the bulge region, $\mathrm{F}_{\rm C}(\lambda)$\ (i.e., bulge plus a possible disk contribution, displayed in black in the r.h.s of Fig.~\ref{maps}, with its best-fitting stellar SED $\mathrm{F}_{\rm C}^{\star}(\lambda)$\ overplotted in red), was created by summing up the spaxels residing within the red contour, i.e., all the spaxels that pertain to the zones within $\mathrm{R}_{\rm C}$. The normalized disk spectrum $\mathrm{\hat{F}}_{\rm D}(\lambda)$\ (plotted in the r.h.s of the same figure in black, with its best-fitting stellar SED $\mathrm{\hat{F}}_{\rm D}^{\star}(\lambda)$\ overplotted in dark-blue) was constructed by summing up the spaxels that occupy the locus between the two blue contours (i.e., between one zone after the last one pertaining to the bulge and zone 14\footnote{We decided to exclude the last 4 zones from the analysis due to the decreasing surface brightness $\mu$, and consequently decreasing signal-to-noise ratio, of the outermost spaxels. Throughout the sample, the 14$^{th}$ zone has an average value of $\mu$ = 23.6 mag/$\sq\arcsec$.}) and subsequently dividing by the number of spaxels considered. Still in the r.h.s of Fig.~\ref{maps}, the best-fitting stellar SED for the disk, after scaling according to \brem{expD}, is plotted in light-blue.
Considering that most of our disks host significant star formation, which manifests itself through strong emission lines, directly subtracting the observed disk spectra would produce artificially deep absorption features in the net-bulge spectrum. A way of avoiding this is first to conduct the spectral fitting of bulge and disk, which results in the emission-line-free spectra $\mathrm{\hat{F}}_{\rm D}^{\star}(\lambda)$\ \& $\mathrm{F}_{\rm C}^{\star}(\lambda)$. To this end, spectral modeling of the disk and bulge spectra was carried out using the PSS codes {\sc Starlight}\ \& {\sc FADO}\ in the spectral range between 3900 and 6800~\AA\ adopting the Z4 and Z5 libraries (cf. Sect. 2.2). The four spectral modeling runs will be referred to hereafter as SLZ4, SLZ5, FDZ4 and FDZ5, respectively.
For {\sc Starlight}, a purely stellar code, strong emission lines were masked out before fitting while for {\sc FADO}\ these are used to achieve self-consistency between the nebular and the stellar emission, while deriving the SFHs.
\begin{figure*}[b]
\centering
\includegraphics[width=0.7\linewidth]{IC2604_spec_SLZ4_1.png}
\caption{The resulting spectra and respective fits for LTG \object{IC 2604}. Panel \textbf{a)} displays the spectrum of the bulge (red), disk (dark-blue) and scaled disk (light-blue) according to the exponential disk distribution (the SDSS $g$-band transmission curve is overplotted in soft grey). Panel \textbf{b)} displays the residuals (in $\%$) between the modeled and observed spectrum for the bulge (red) and disk (dark-blue), with the black horizontal line corresponding in either case to a percentage deviation of 0$\%$. The vertical arrow corresponds to a percentage deviation of 50$\%$. Panel \textbf{c)} shows in black the resulting net-bulge spectra after subtraction of the three different models for the disk, with the respective stellar fits overplotted. The residuals, i.e., the difference between the obtained net-bulge SED and the respective stellar fit divided by the observed spectrum, are shown in panel \textbf{d)}. As in panel \textbf{b)}, we shift the residuals by an arbitrary amount (in this case, by 10$\%$; cf. vertical arrow) for the sake of better visibility. Labels on the r.h.s of the panels list the bulge and disk stellar mass $ \log({\cal M}_{\star})$, and mass-weighted mean stellar age $\langle t_{\star,\textrm{B}} \rangle_{{\cal M}}$\ and metallicity $\langle Z_{\star,\textrm{B}} \rangle_{{\cal M}}$\ prior to subtraction, the scaling factors for each of the disk configurations, and $\langle t_{\star,\textrm{B}} \rangle_{{\cal M}}$\ \& $\langle Z_{\star,\textrm{B}} \rangle_{{\cal M}}$\ for each of the fits (where dD, fD and eD correspond to the centrally decreasing, flat and exponential disk, respectively), as obtained with {\sc Starlight}\ and the Z4 stellar library.}
\label{disk_sub_im}
\end{figure*}
\subsection{Scaling of $\mathrm{\hat{F}}_{\rm D}^{\star}(\lambda)$:}
Subsequently, we estimated how much light within $\mathrm{R}_{\rm B}$\ belongs to the disk component and, accordingly, by how much $\mathrm{\hat{F}}_{\rm D}^{\star}(\lambda)$\ needs to be scaled to ensure consistency between the spectroscopic and photometric analysis.
By integrating the observed SBP ($\mathrm{L}_{\rm T}$) and the three disk luminosity distributions ($\mathrm{L}_{\rm D}$) from the galactic center until $\mathrm{R}_{\rm C}$, we computed $f_{\rm D}$, the light fraction within the bulge residing in the disk under each of the assumptions:
\begin{equation}
\renewcommand*{\arraystretch}{1.5}
\begin{array}{l}
\mathrm{L}_{\rm D} = 2 \pi \int_{0}^{\mathrm{R}_{\rm C}} \mathrm{R}^{\star} \cdot 10^{(\mu_{D}(\mathrm{R}^{\star})-C)/-2.5} \cdot d \mathrm{R}^{\star}\\
\mathrm{L}_{\rm T} = 2 \pi \int_{0}^{\mathrm{R}_{\rm C}} \mathrm{R}^{\star} \cdot 10^{(\mu(\mathrm{R}^{\star})-C)/-2.5} \cdot d \mathrm{R}^{\star}\\
f_{D} = \mathrm{L}_{\rm D} / \mathrm{L}_{\rm T} \\
\end{array}
\renewcommand*{\arraystretch}{1}
\end{equation}
where $\mu(\rm R^{\star})$ is the surface brightness distribution of the observed SDSS $r$ or $g$-band SBP, $\mu_{\rm D}(\rm R^{\star})$ is the surface brightness distribution of the assumed disk component and $C$ the calibration constant.
\\
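The light-fraction integrals above can be sketched numerically as follows (an illustrative implementation with mock profiles; note that the calibration constant $C$ cancels in the ratio $\mathrm{L}_{\rm D}/\mathrm{L}_{\rm T}$, so it is dropped here):

```python
import numpy as np

# Illustrative numerical version of the f_D integrals (our sketch, not
# the actual pipeline). mu(R) in mag/arcsec^2 on a uniform radial grid
# from 0 to R_C; the calibration constant cancels in the ratio.
def light_fraction(R, mu_total, mu_disk):
    dR = R[1] - R[0]
    I_tot = 10.0 ** (-0.4 * mu_total)      # 10**((mu - C)/-2.5), C dropped
    I_dsk = 10.0 ** (-0.4 * mu_disk)
    L_T = 2.0 * np.pi * np.sum(R * I_tot) * dR
    L_D = 2.0 * np.pi * np.sum(R * I_dsk) * dR
    return L_D / L_T

R = np.linspace(0.0, 5.0, 500)             # mock radii out to R_C (arcsec)
mu_tot = np.full_like(R, 20.0)             # mock total SBP
f_D = light_fraction(R, mu_tot, mu_tot + 2.5)   # disk 2.5 mag fainter
```

For a disk uniformly 2.5 mag fainter than the total profile, the recovered $f_{\rm D}$\ is 0.1, as expected from the magnitude scale.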
Bearing in mind the differences between the two data-sets (photometric and spectroscopic data), a mandatory step to achieve consistency when combining both techniques is to scale the spectra based on the photometrically predicted luminosity fraction of the disk inside the bulge in both the SDSS $g$ and $r$ bands. This is attained by convolving the bulge/disk spectra with the filter transmission curve $T_{\rm SDSS}(\lambda)$. Subsequent integration in the considered $\lambda$ range provides the corrected fluxes for the bulge, $\mathrm{S}_{\rm C}$, and for each of the assumed disk luminosity distributions, $\mathrm{S}_{\rm D}$:
\begin{equation}
\renewcommand*{\arraystretch}{1.5}
\begin{array}{l}
\mathrm{S}_{\rm D} = \int_{\lambda_{\rm min}}^{\lambda_{\rm max}} \mathrm{\hat{F}}_{\rm D}^{\star}(\lambda) \cdot \mathrm{T}_{\rm SDSSg}(\lambda) \cdot d\lambda\\
\mathrm{S}_{\rm C} = \int_{\lambda_{\rm min}}^{\lambda_{\rm max}} \mathrm{F}_{\rm C}(\lambda) \cdot \mathrm{T}_{\rm SDSSg}(\lambda) \cdot d\lambda\\
\end{array}
\renewcommand*{\arraystretch}{1}
\end{equation}
Division of $\mathrm{S}_{\rm D}$ by $\mathrm{S}_{\rm C}$, after multiplication by the number of spaxels contained within the zones inside $\mathrm{R}_{\rm C}$\ ($n_{\rm pC}$) (recall that $\mathrm{\hat{F}}_{\rm D}^{\star}(\lambda)$\ is normalized, i.e., it corresponds to a single spaxel), results in the corrective factor, $f_{\rm C}$:
\begin{equation}
f_{\rm C} = n_{\rm pC} \cdot \mathrm{S}_{\rm D} / \mathrm{S}_{\rm C}
\end{equation}
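The band fluxes and the corrective factor can be sketched as follows (toy arrays in place of real CALIFA spectra; the transmission curve is assumed to be interpolated onto the same uniform wavelength grid as the spectra):

```python
import numpy as np

# Sketch of the S_D, S_C integrals and of f_C (illustrative, not the
# actual pipeline). The filter curve T_band is assumed to be sampled on
# the same uniform wavelength grid as the spectra.
def corrective_factor(wave, F_disk_norm, F_bulge, T_band, n_pC):
    dlam = wave[1] - wave[0]
    S_D = np.sum(F_disk_norm * T_band) * dlam   # one-spaxel disk SED
    S_C = np.sum(F_bulge * T_band) * dlam       # integrated bulge-region SED
    return n_pC * S_D / S_C

wave = np.linspace(3900.0, 6800.0, 1000)        # fitted spectral range (AA)
T = np.exp(-0.5 * ((wave - 4770.0) / 700.0) ** 2)   # toy g-band-like curve
f_C = corrective_factor(wave, np.ones_like(wave),
                        200.0 * np.ones_like(wave), T, 100)
```

With a flat one-spaxel disk SED, a bulge-region SED 200 times brighter and $n_{\rm pC} = 100$, the factor evaluates to $f_{\rm C} = 0.5$, independently of the shape of the toy transmission curve.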
The final scaling factor $f_{\rm S}$\ is simply the ratio of $f_{\rm D}$, i.e., the light fraction within the bulge residing in the disk according to the surface photometry, to the corrective factor $f_{\rm C}$, i.e., the same quantity according to the spectroscopic analysis, after correction for the filter transmission curve:
\begin{equation}
f_{\rm S} = f_{\rm D} / f_{\rm C}
\end{equation}
The individual net-bulge spectra for each of the three disk configurations were computed by subtracting the scaled disk spectra from $\mathrm{F}_{\rm C}(\lambda)$\ as:
\begin{equation}
\mathrm{F}_{\rm B}(\lambda) = \mathrm{F}_{\rm C}(\lambda) - n_{\rm pC} \cdot f_{\rm S} \cdot \mathrm{\hat{F}}_{\rm D}^{\star}(\lambda)
\end{equation}
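The scaling and subtraction steps can be sketched as follows. This is a schematic illustration with made-up numbers; the actual tool operates on the best-fitting stellar SEDs over the full spectral range.

```python
import numpy as np

def net_bulge_spectrum(F_C, F_D_hat, n_pC, f_D, f_C):
    """Apply f_S = f_D / f_C and subtract the scaled disk from the bulge."""
    f_S = f_D / f_C
    F_B = F_C - n_pC * f_S * F_D_hat
    return F_B, f_S

# Illustrative values: f_D is the photometric disk light fraction inside
# the bulge, f_C the spectroscopic corrective factor from the previous step.
F_C = np.array([10.0, 12.0, 14.0])      # integrated bulge spectrum
F_D_hat = np.array([0.02, 0.02, 0.02])  # normalized per-spaxel disk SED
F_B, f_S = net_bulge_spectrum(F_C, F_D_hat, n_pC=100, f_D=0.6, f_C=2.0)
# With these numbers f_S = 0.3, so 100 * 0.3 * 0.02 = 0.6 is removed per pixel.
```

If the assumed disk profile over-predicts the disk light inside the bulge, this subtraction can drive $\mathrm{F}_{\rm B}(\lambda)$ negative, which is precisely the diagnostic exploited below.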
\\
Finally, the three $\mathrm{F}_{\rm B}(\lambda)$ (one for each disk configuration) for each galaxy were re-fitted by means of {\sc Starlight}\ \& {\sc FADO}\ with libraries Z4 \& Z5, this way completing the four spectral modeling runs.
\\
\subsection{Overview of the disk-subtraction tool}
Summarizing, the developed suite of codes in Fortran, ESO-MIDAS and Python:
\begin{itemize}
\item[i.] computes the integrated observed spectrum of the bulge -- OBS bulge, $\mathrm{F}_{\rm C}(\lambda)$, and the average spectrum of the disk -- OBS disk, $\mathrm{\hat{F}}_{\rm D}(\lambda)$;
\item[ii.] by integrating the observed SBPs and those assumed for the disk, computes the scaling factors $f_{\rm S}$, under consideration of the SDSS $g$- or $r$-band filter transmission curves (for the $g$-band filter the average of the scaling factors of the sample are 2.37 for \brem{expD}, 2.09 for \brem{flatD} and 0.11 for \brem{decrD});
\item[iii.] extracts and models with {\sc Starlight}\ \& {\sc FADO}\ the OBS bulge \& OBS disk, and determines the respective best-fitting stellar SEDs, $\mathrm{F}_{\rm C}^{\star}(\lambda)$ \& $\mathrm{\hat{F}}_{\rm D}^{\star}(\lambda)$;
\item[iv.] scales and subtracts the latter from OBS bulge and re-fits (four times; i.e. with {\sc Starlight}\ \& {\sc FADO}\ for both the Z4 and Z5 SSP libraries) the net-bulge spectra $\mathrm{F}_{\rm B}(\lambda)$, obtained after subtraction of the three different functional forms of the disk within $\mathrm{R}_{\rm B}$, obtaining the best-fitting stellar SED for the net-bulge $\mathrm{F}_{\rm B}^{\star}(\lambda)$.
\end{itemize}
Figure~\ref{disk_sub_im} displays the results obtained for the LTG \object{IC 2604}. Extrapolation of a pure exponential or flat profile for the disk to the galactic center yields negative flux for $\lambda$ $\leq$ 4000~\AA, implying that the disk profile has to flatten or show a central depression within $\mathrm{R}_{\rm B}$. When exponential and flat disk models are assumed, SLZ4's stellar age and metallicity estimates reach the maximum allowed value, which is per se an indication of an unphysical fit. Additionally, comparison of the resulting mass estimates for the bulge after subtraction of the exponential and flat models, $\rm \log({\cal M}_{\star, eD})$ and $\rm \log({\cal M}_{\star, fD})$, respectively, with the estimate prior to subtraction, $\rm \log({\cal M}_{\star, B})$, indicates an increase of the stellar mass after disk removal, once more pointing to the invalidity of the assumed profiles for the disk (see next Sect.). For this galaxy, most of these criteria (used to classify the resulting net-bulge SED as unphysical) were met for all four spectral modeling runs.
\section{Spectroscopic subtraction of the disk and first insights on the invalidity of its exponential intensity profile inside $\mathrm{R}_{\rm B}$}\label{resDiskSub}
The tool developed for the spectroscopic subtraction of the disk SED within $\mathrm{R}_{\rm B}$\ was applied to the entire sample using the previously estimated photometric constraints in both the $r$- and $g$-band to scale the normalized spectrum of the disk. Considering that the results for both passbands are in agreement (within $\lesssim$ 15\%) and that the $g$-band transmission curve covers a significant part of the blue spectral range ($\sim$ 3830 - 5480 \AA), therefore better tracing the luminosity distribution of a SF disk, it was decided to present here only the results obtained from the photometric decomposition of $g$-band SBPs.
The first and possibly most decisive test of the validity of the
exponential fitting function for the disk within $\mathrm{R}_{\rm B}$\ is to examine the properties of the residual net spectrum of the bulge after subtraction of the estimated contribution from the underlying disk, i.e., whether $\mathrm{F}_{\rm B}(\lambda)$ is positive throughout the considered spectral range. Since $\mathrm{F}_{\rm B}(\lambda)$ is obtained on the standard assumption that the disk's exponential slope is preserved all the way to the center, a negative flux of the net spectrum of the bulge over a significant spectral interval yields a strong indication against the validity of the underlying assumption of disk exponentiality. Whereas this \emph{reductio ad absurdum} approach does not permit us to directly constrain the intensity of the disk beneath the bulge, it allows us to confirm or exclude a range of functional forms for the disk profile. Finally, we re-fitted the three $\mathrm{F}_{\rm B}(\lambda)$ with the purpose of evaluating the plausibility of the net-bulge SEDs, this way indirectly assessing the validity of the underlying assumptions for the disk luminosity profile.
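The negative-flux test can be expressed as a simple flagging criterion. In this sketch, the blue-range limit and the tolerated fraction of negative pixels are assumptions chosen for illustration, not the thresholds used in the actual analysis.

```python
import numpy as np

def fails_negative_flux(wave, F_B, blue_max=4000.0, max_neg_frac=0.05):
    """Flag a net-bulge SED whose flux is negative over a non-negligible
    part of the blue spectral range (thresholds are illustrative)."""
    blue = wave <= blue_max
    neg_frac = np.mean(F_B[blue] < 0.0)
    return bool(neg_frac > max_neg_frac)

wave = np.linspace(3700.0, 6800.0, 1000)
F_B = np.where(wave < 3950.0, -1.0, 1.0)  # toy SED, negative blueward of 3950 A
flag = fails_negative_flux(wave, F_B)     # True: most blue pixels are negative
```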
Histograms displayed in Figs.~\ref{expD_prob} \&~\ref{flatD_prob} illustrate the percentage of unphysical net-bulge SEDs after subtraction of \brem{expD} and \brem{flatD}, respectively, in each of three bulge stellar mass bins $\rm M_{\star, B}$: the first bin encloses galaxies that host bulges with $\rm log(M_{\star, B}) \leq 9.5$ $\mathrm{M}_{\odot}$, being mainly composed of \brem{iA} galaxies; the second, bulges with $9.5 < \rm log(M_{\star, B}) < 10.5$ ($\sim$ \brem{iB}); and the third, bulges with $\rm log(M_{\star, B}) \geq 10.5$ ($\sim$ \brem{iC}). The average value for each mass bin is $10^{8.9}$ $\mathrm{M}_{\odot}$\ (31 galaxies), $10^{10}$ (61 galaxies) and $10^{10.75}$ (43 galaxies), respectively, as shown in the top horizontal axis of the top-left panel of Fig.~\ref{mean_values}.
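The three-bin partition used throughout these histograms amounts to a trivial split of $\rm log(M_{\star, B})$; a sketch with the bin edges quoted above (boundary handling at exactly 9.5 and 10.5 follows the text's definitions):

```python
def mass_bin(log_mass_bulge):
    """Return 0/1/2 for the low, intermediate and high bulge-mass bins."""
    if log_mass_bulge <= 9.5:
        return 0   # low (mostly iA galaxies)
    elif log_mass_bulge < 10.5:
        return 1   # intermediate (~iB)
    else:
        return 2   # high (~iC)

# The bin-average masses quoted in the text fall in bins 0, 1 and 2:
bins = [mass_bin(m) for m in (8.9, 10.0, 10.75)]
```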
On the l.h.s. are the panels showing the frequencies for each individual criterion (blue and purple bars express the percentage of galaxies that reached the maximum mass-weighted stellar age and metallicity allowed by the SSP library, respectively, light pink bars the fraction of partly negative net-bulge SEDs, and dark pink bars the percentage of galaxies in each mass bin whose stellar mass estimate is higher after disk subtraction), whereas the r.h.s. displays the fraction of galaxies that fail all criteria simultaneously.
In addition, Fig.~\ref{fail_all} shows the fraction of unphysical net-bulge SEDs according to SLZ4, SLZ5, FDZ4 \& FDZ5, i.e., the fraction of net-bulge spectra which fail the aforementioned criteria for all spectral modeling runs simultaneously. As a complement, Table~\ref{table_sum} summarizes the fractions of unphysical net-bulge spectra obtained after subtraction of \brem{expD} \& \brem{flatD} relative to the total galaxy sample.
Visual inspection of the light pink bars in the histograms shows that there is no significant preference for any bulge-mass interval and that a significant fraction of bulges do not contain enough flux in their blue spectral range to accommodate the inwardly extrapolated disk profile. As a final thought on our barred galaxies, inspection of the true-color images and $g$-$i$ color maps of our sample reveals that the frequency of bars is higher for massive galaxies and that bar colors are generally similar to those of the bulge, i.e., redder than those of the disk. Consequently, the possible inclusion of the bar in the normalized spectrum of the disk $\mathrm{F}_{\rm D}(\lambda)$ would lead to a slightly redder SED. Therefore, if contamination of $\mathrm{F}_{\rm D}(\lambda)$ by the bar were significant, one would expect negative (or very low) values also in the red spectral regime for some of the barred, high-mass galaxies (where the bar contribution is significantly larger as compared to other mass bins). Such an effect is not observed, which leads us to conclude that, with this methodology, bar contamination does not produce a significant effect.
\begin{figure}[p]
\centering
\includegraphics[width=1.0\linewidth]{Problems_Exp.png}
\caption{Panels on the l.h.s. display the histograms showing the fractions of unphysical net-bulge spectra after subtraction of \brem{expD}, subdivided into three bulge mass bins (low $\rm M_{\star, B}$ for bulges with log of stellar mass lower than or equal to 9.5 $\mathrm{M}_{\odot}$, intermediate $\rm M_{\star, B}$ for bulges with log of stellar mass between 9.5 and 10.5, and high $\rm M_{\star, B}$ for bulges with log of stellar mass higher than or equal to 10.5). From left to right, the bars represent the fraction of net-bulge spectra that reached the maximum mass-weighted stellar age (blue) and metallicity (purple), display negative flux (light pink), and whose estimated stellar mass is higher after disk subtraction (dark pink). Panels on the r.h.s. contain the histograms showing the fractions of unphysical net-bulge spectra that meet all the aforementioned criteria. From top to bottom, the rows refer to the spectral modeling runs SLZ4, SLZ5, FDZ4 and FDZ5.}
\label{expD_prob}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=1.0\linewidth]{Problems_Flat.png}
\caption{Same layout as in Fig.~\ref{expD_prob}, displaying the results obtained after subtraction of \brem{flatD}.}
\label{flatD_prob}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=1.0\linewidth]{Problems_summary.png}
\caption{Same color coding as in Figs.~\ref{expD_prob} \& \ref{flatD_prob}, showing the galaxy fractions failing the criteria simultaneously for all runs.}
\label{fail_all}
\end{figure}
Examination of the blue and purple bars shows, for each bulge mass interval, the fraction of net-bulge spectra whose determined mass-weighted stellar age and metallicity, respectively, have reached the maximum value allowed by the SSP library. Such \emph{failed} fits yield indirect constraints on the validity of the assumed model for the intensity profile of the disk within the bulge radius.
Inspection of these results demonstrates that, independently of the
used PSS code or SSP library, a higher fraction of bulges in
the higher mass bin tends to reach maximum values of age and
metallicity, followed by the intermediate-mass bulges and finally by low-mass bulges, which display the lowest percentage of \emph{failed} fits according to these criteria. Indeed, considering that the two higher mass bins enclose the intrinsically oldest and most metal-rich bulges in the sample, it is to be expected that, mainly for these galaxies, subtraction of the rather blue spectrum of the disk will lead to a strong deficit of flux in the blue spectral range, thereby forcing PSS codes to reach maximum age\footnote{In addition to the typical uncertainties expected from spectral synthesis \citep[0.2-0.3 dex;][]{Cid05,Cid14}, resolving stellar populations becomes increasingly challenging with increasing stellar age. Based on \cite{Car19}, who explore how {\sc Starlight}\ and {\sc FADO}\ recover the mass-weighted mean stellar age and metallicity, and a set of additional tests performed adopting a similar experimental setup, we estimate the effective time resolution in age determinations of old stellar populations (>9 Gyr) to be $\sim$1 Gyr.} and metallicity determinations.
\begin{figure}[p]
\centering
\includegraphics[width=1.0\linewidth]{mean_values.png}
\caption{
Panels on the l.h.s. display the mean values of the bulge's mass-weighted stellar age $\langle t_{\star,\textrm{B}} \rangle_{{\cal M}}$\ in Gyr, in each mass bin, in the case of no subtraction (grey), subtraction of a centrally depressed disk (red), subtraction of a centrally flattened disk (green), and subtraction of an exponential disk (blue). Panels on the r.h.s. display the mean estimated values of the bulge's mass-weighted stellar metallicity $\langle Z_{\star,\textrm{B}} \rangle_{{\cal M}}$\ in $\mathrm{Z}_{\odot}$. The ellipses represent the standard deviation $\sigma$ of the mean, providing an estimate of how spread these estimates are within each mass bin. Their semi-major/minor axes display the $\sigma$ in $\rm M_{\star, B}$ (on the x-axis) and the $\sigma$ in $\langle t_{\star,\textrm{B}} \rangle_{{\cal M}}$\ or $\langle Z_{\star,\textrm{B}} \rangle_{{\cal M}}$\ (on the y-axis). The 1$\rm ^{st}$ panel contains an additional x-axis providing information on the average stellar mass for each bulge mass bin. From top to bottom, the rows refer to the four spectral modeling runs. The horizontal dashed line indicates the maximum allowed value for the respective quantity and SSP library.
}
\label{mean_values}
\end{figure}
This same effect is equally responsible for the increase in stellar mass after disk subtraction (dark pink bars), observed for a significant part of the galaxies of the sample: when fitting a spectrum with PSS codes, after obtaining the population vector PV (i.e., the fractional contributions of the individual SSPs), the stellar mass is derived by converting light to mass using the individual SSPs' mass-to-light (M/L) ratios. Young stellar populations, which are typically seen in the SF disks of LTGs, are characterized by low M/L ratios -- in spite of being significantly bright, they contain a low percentage of the stellar mass. On the other hand, older stellar populations, which typically populate the bulges of the most massive LTGs, have higher M/L ratios, which implies that such stellar populations, although faint, constitute the bulk of the total stellar mass of the galaxy. Whereas it is common to observe an increase in flux from the red to the blue spectral range in the continuum of the spectra of SF disks, a shortage of blue flux (i.e., a decrease in the continuum within the blue spectral range) is frequently observed in the spectra of bulges.
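The light-to-mass conversion described above can be illustrated as follows. The M/L values and light fractions are hypothetical numbers chosen to show the effect; real PSS codes use the M/L of each SSP in the adopted library.

```python
import numpy as np

def stellar_mass_from_pv(light_fractions, ml_ratios, total_luminosity):
    """M = sum_j x_j * L_tot * (M/L)_j for a population vector x."""
    x = np.asarray(light_fractions, dtype=float)
    ml = np.asarray(ml_ratios, dtype=float)
    return float(np.sum(x * total_luminosity * ml))

# Equal light fractions, but the old SSP (high M/L) dominates the mass budget:
# young SSP contributes 0.5 * 1e10 * 0.3 = 1.5e9, old one 0.5 * 1e10 * 5 = 2.5e10.
m = stellar_mass_from_pv([0.5, 0.5], [0.3, 5.0], total_luminosity=1.0e10)
```

This is why forcing the fit toward old, high-M/L SSPs (to compensate a blue-flux deficit) inflates the inferred stellar mass.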
Depending on the steepness of the blue slope of the continuum of $\mathrm{F}_{\rm D}(\lambda)$ and on the deficiency of the flux of $\mathrm{F}_{\rm C}(\lambda)$ in this same spectral range, by removing the light contribution of a SF disk from the integrated central spectra, one might significantly reduce the blue flux of the residual SED. Seemingly, this approach is producing unreasonably red spectra (i.e., a severe lack of flux in the blue spectral range) for a significant fraction of the sample galaxies. In such cases, the PSS codes have no other alternative than to compensate this effect by introducing high fractions of high M/L SSPs (i.e., old stellar populations), which will artificially elevate the total stellar mass often to values which are higher than the ones derived from the observed integrated central spectrum. Logically, this result also implies that the assumptions retained in the adopted methodology are unreasonable, strengthening the conclusion that, for a significant part of the galaxies of the sample, the disk must diverge from exponentiality within $\mathrm{R}_{\rm B}$.
Figure~\ref{fail_all} displays the fractions of unphysical net-bulge SEDs within each mass bin for all spectral modeling runs simultaneously. Its inspection and comparison with Figs.~\ref{expD_prob} \&~\ref{flatD_prob} (see also Tab.~\ref{table_sum}) provides an idea of which criteria are least impacted by the choice of PSS code or stellar library. Clearly, and as expected, the most independent criterion is the frequency of net-bulge spectra with partially negative flux, which should be considered the most reliable test to define whether or not the enclosed assumptions are valid. Notwithstanding, even though the remaining criteria are more affected by the non-negligible uncertainties inherent to spectral synthesis (for instance, the age-metallicity degeneracy, different convergence recipes in distinct PSS codes, and the increased difficulty in resolving old stellar populations), together they provide additional clues on the plausibility of the obtained net-bulge SEDs.
Considering that gradients of the stellar populations within the individual stellar components might be an important factor determining the implausibility of the obtained net-bulge SEDs, we explored $g$-$r$ color and stellar age gradients within bulge and disk.
Although color and age gradients within $\mathrm{R}_{\rm B}$\ might be quite considerable \citep[see][]{Bre20a}, they are generally negligible for the disk component. Figure 2 of \cite{Bre20a} shows that intermediate stellar mass galaxies, which host intermediate-mass bulges, generally do not display significant age gradients. Bearing in mind that age gradients within the disk region are often insignificant and that the fractions of partially negative net-bulge SEDs are higher for this particular mass bin, age/color gradients cannot be the main reason for the high number of partially negative net-bulge spectra.
\begin{table*}
\centering
\begin{tabular}{ c c c c c c }
Run & Max. $\langle t_{\star,\textrm{B}} \rangle_{{\cal M}}$\ & Max. $\langle Z_{\star,\textrm{B}} \rangle_{{\cal M}}$\ & Neg. Flux & > $\rm M_{\star, B}$ & All criteria \\
\hline
\multicolumn{6}{c}{After subtraction of \brem{expD}} \\
\hline
SLZ4 & 41.5 & 43.7 & 31.9 & 27.4 & 23.0 \\
SLZ5 & 25.9 & 25.9 & 31.9 & 26.7 & 17.8 \\
FDZ4 & 28.1 & 43.7 & 31.9 & 27.4 & 14.8 \\
FDZ5 & 16.3 & 18.5 & 31.1 & 32.6 & 8.1 \\
\hline
Average & 28.0 & 33.0 & 31.7 & 28.5 & 15.9 \\
\hline
\multicolumn{6}{c}{After subtraction of \brem{flatD}} \\
\hline
SLZ4 & 30.4 & 28.9 & 20.0 & 18.5 & 11.9 \\
SLZ5 & 16.3 & 14.8 & 18.5 & 17.8 & 9.6 \\
FDZ4 & 21.5 & 28.1 & 21.5 & 18.5 & 8.9 \\
FDZ5 & 11.1 & 12.6 & 20.7 & 21.5 & 5.2 \\
\hline
Average & 19.8 & 21.1 & 20.2 & 19.1 & 8.9 \\
\end{tabular}
\caption{Percentage of unphysical net-bulge SEDs after subtraction of disk models \brem{expD} and \brem{flatD} relative to the total number of galaxies in the sample.}\label{table_sum}
\end{table*}
In addition, Fig.~\ref{mean_values} displays the average values of the bulge mass-weighted mean stellar age $\langle t_{\star,\textrm{B}} \rangle_{{\cal M}}$\ (l.h.s.) and metallicity $\langle Z_{\star,\textrm{B}} \rangle_{{\cal M}}$\ (r.h.s.) within each mass bin, as obtained for the four spectral modeling runs. Major/minor axes of the ellipses depict the error bars ($\sigma$ of the mean) for the estimated stellar mass (horizontal axes) and for $\langle t_{\star,\textrm{B}} \rangle_{{\cal M}}$\ or $\langle Z_{\star,\textrm{B}} \rangle_{{\cal M}}$\ (vertical axes). The different colors correspond to the various assumptions that were tested -- grey dots depict the mean values for the OBS bulge $\mathrm{F}_{\rm C}(\lambda)$, red dots the values obtained after subtraction of the centrally depressed disk \brem{decrD}, green dots the values obtained after subtraction of the flattened disk \brem{flatD}, and blue dots the values obtained after subtraction of the standard exponential disk \brem{expD}. Inspection of this figure and Tab.~\ref{table_sum} reveals that:
\begin{itemize}
\item[i.] mass-weighted age and metallicity determinations are not
significantly affected after subtraction of a centrally depressed disk (variations in mass, age or metallicity are within the expected error associated with spectral synthesis). Such a result was expected, considering that by assuming a disk shape such as \brem{decrD} the scaling factor $f_{\rm S}$, i.e., the light fraction of the disk within $\mathrm{R}_{\rm C}$, is in all cases low, with an average value of 11$\%$ for the whole sample.
\item[ii.] generally, ages and metallicities obtained from fitting $\mathrm{F}_{\rm B}(\lambda)$ increase as compared to those prior to disk subtraction;
\item[iii.] higher mass galaxies (which host higher mass bulges) have an increased tendency to reach the maximum value allowed by the SSP library;
\item[iv.] the higher the assumed disk contribution inside
$\mathrm{R}_{\rm B}$\ (from \brem{decrD} to \brem{flatD} to \brem{expD}), the
higher is the fraction of LTGs for which the determined age and/or
metallicity converges to the maximum allowed value by the adopted SSP
library (0$\%$ of the galaxies of the sample for \brem{decrD}, an average of $\sim$20 (21)$\%$ for $\langle t_{\star,\textrm{B}} \rangle_{{\cal M}}$\ ($\langle Z_{\star,\textrm{B}} \rangle_{{\cal M}}$) after \brem{flatD} and an average of $\sim$28 (33)$\%$ after \brem{expD}). The same behavior is observed for the increase in stellar mass after disk subtraction (0$\%$ after \brem{decrD}, an average of $\sim$20$\%$ after \brem{flatD} and $\sim$28$\%$ after \brem{expD}) and fraction of partially negative SEDs (0$\%$ after \brem{decrD}, an average of $\sim$20$\%$ after \brem{flatD} and $\sim$32$\%$ after \brem{expD});
\item[v.] the values obtained with {\sc FADO}\ are in every case more dispersed than those obtained with {\sc Starlight}\ (even for OBS bulge and \brem{decrD}), evidencing the non-negligible discrepancies in quantities obtained by different spectral synthesis codes.
\end{itemize}
Although instructive, this exercise alone is not sufficient to definitively answer the question of whether the disk preserves its intensity slope, or even exists, inside the bulge radius.
Nevertheless, this investigation has placed important constraints on the possible light distribution of the disk within $\mathrm{R}_{\rm B}$: for a substantial part of the sampled LTGs, independently of their stellar mass,
the assumption of inward extrapolation of the exponential intensity
profile of the disk yields dubious or unphysical results. This suggests that the
true intensity profile of the disk inside $\mathrm{R}_{\rm B}$\ shows a flattening
or central depression, as proposed by theoretical studies. If, on the other hand, the disk light distribution within $\mathrm{R}_{\rm B}$\ conserves its exponential nature, these results indicate that, for a significant fraction of the analyzed galaxies, the disk contribution within $\mathrm{R}^{\star}$\ $<$ $\mathrm{R}_{\rm B}$\ should be much redder than the host disk, displaying a spectroscopic profile more similar to the bulge's.
\section{Summary and Conclusions}\label{conc}
With the goal of placing constraints on the radial intensity profile of the disk in late-type galaxies within their bulge radius $\mathrm{R}_{\rm B}$, we developed a tool that allows us to determine the net SED of the bulge
after spectroscopic subtraction of the photometrically inferred contribution from the underlying disk. Although quite rudimentary at this stage, this technique allowed us to gain first insights into the validity of the standard assumption that the disk preserves its exponential slope all the way to the LTG center:
\begin{itemize}
\item[i.] The analysis presented here indicates that, independently of the bulge's stellar mass (which is tightly correlated with the total LTG stellar mass and its bulge's mean stellar age), up to 32\% (20\%) of the SEDs obtained for the bulge after subtraction of an exponential (or inwardly flattening) model for the disk
yield negative flux in the blue spectral range. This implies that
in a significant fraction of LTGs the disk component must show a
central flat core or intensity depression inside $\mathrm{R}_{\rm B}$.
\item[ii.] Further support against the standard assumption of the
exponentiality of the disk within $\mathrm{R}_{\rm B}$\ comes from the fact that
spectral modeling of the net SED of the bulge obtained in that case
leads to dubious results. Specifically, we find that when a purely exponential disk profile is assumed, $\sim$28\% ($\sim$33\%) of the disk-subtracted bulges reach the maximum allowed age (metallicity), for $\sim$28\% the stellar mass is higher than that estimated prior to disk subtraction, and for $\sim$32\% the resulting net-bulge SEDs were partially negative.
\item[iii.] By assuming a flat intensity profile for the disk
within $\mathrm{R}_{\rm B}$, spectral modeling of $\sim$20\% ($\sim$21\%) of the net bulge SEDs in our sample yields the maximum allowed age (metallicity), for $\sim$20\% of all cases the stellar mass exceeds that estimated
within $\mathrm{R}_{\rm B}$\ prior to disk subtraction, and for $\sim$20\% the resulting net-bulge SEDs were partially negative.
\end{itemize}
The present investigation suggests that, in a significant fraction of LTGs, the disk component must show a central intensity depression inside $\mathrm{R}_{\rm B}$. If proven true, the soundness of the results of a substantial fraction of past studies would be compromised, namely those based on bulge/disk decomposition, considering that this issue would propagate to many local and moderate- to high-$z$ studies, impacting, for instance, findings concerning the growth and evolution of bulges and disks of LTGs (e.g., the evolution with $z$ of the B/T ratio).
Facing these results, one can even speculate that some of the reported findings regarding the formation and evolution of LTGs (see Sect.\ref{intro}) might be artificially driven by the wrong assumption of the conservation of the exponential disk within the bulge while performing structural decomposition.
Furthermore, it is worth bearing in mind that, in disk-dominated LTGs, the assumed (inwardly extrapolated) disk flux can provide up to $\sim$80\% of the light inside the bulge radius. Therefore, overestimating the true luminosity fraction of the disk could lead to the erroneous classification of a high-luminosity, high-$\eta$ CB as a PB. Moreover, if this bulge hosts an active galactic nucleus (AGN), one would conclude that some PBs display AGN activity in their cores, whereas in fact this bulge is not a PB but a CB. This hypothetical scenario serves only as an example of how the adopted assumptions and methods might dictate the obtained results.
To complement this analysis and further explore the issue, one can take advantage of the excellent-quality IFS data captured by 10m-class telescopes (e.g., with the MUSE@VLT spectrograph) that are now at hand. By performing spatially resolved spectral synthesis on a number of local, moderately inclined LTGs, one could explore more deeply the radial mass and stellar surface density profiles of galactic disks, based on this spectrophotometric decomposition technique. Moreover, the high spectral resolution and S/N of MUSE data make it possible to resolve older stellar populations significantly more accurately than with IFS data restricted to the blue spectral range, especially when using a self-consistent spectral modeling tool such as {\sc FADO}.
Finally, via kinematical decomposition, one could investigate $\rm V_{rot} / \sigma_{\star}$ radial profiles, which might also give further insights into the validity of the inward exponentiality of galactic disks.
\begin{acknowledgements}
We thank the anonymous referee for valuable comments and suggestions. This work was supported by Fundação para a Ciência e a Tecnologia (FCT)
through the research grants [UID/FIS/04434/2019] UIDB/04434/2020 and
UIDP/04434/2020.
I.B. was supported by Instituto de Astrof\'isica e Ci\^encias do Espa\c{c}o through the research grant CIAAUP-30/2019-BID and by the FCT PhD::SPACE Doctoral Network (PD/00040/2012) through the fellowship PD/BD/52707/2014
funded by FCT (Portugal).
P.P. was supported through Investigador FCT contract IF/01220/2013/CP1191/CT0002 and by a contract that is supported by FCT/MCTES through national funds (PIDDAC) and by grant PTDC/FIS-AST/29245/2017.
J.M.G. is supported by the DL 57/2016/CP1364/CT0003 contract and
acknowledges the previous support by the fellowships CIAAUP-04/2016-BPD in
the context of the FCT project UID/FIS/04434/2013 \& POCI-01-0145-FEDER-007672, and SFRH/BPD/66958/2009 funded by FCT and POPH/FSE (EC).
This study uses data provided by the Calar Alto Legacy Integral Field Area (CALIFA) survey (califa.caha.es),
funded by the Spanish Ministry of Science under grant ICTS-2009-10, and the Centro Astron\'omico Hispano-Alem\'an.
It is based on observations collected at the Centro Astron\'omico Hispano Alem\'an (CAHA) at Calar Alto, operated jointly
by the Max-Planck-Institut f\"ur Astronomie and the Instituto de Astrof\'isica de Andaluc\'ia (CSIC).
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory,
California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\end{acknowledgements}
\input{References.tex}
\end{document}
\chapter{Introduction}
A little over a hundred years ago, Albert Einstein \cite{Ein} established a theory in which the role of the observer assumes a certain relevance in the determination of physical phenomena and in the application of the very laws of Physics, when the speeds of the objects in question are close to the speed of light. This theory is called Special (or Restricted) Relativity. The starting point for the study of this theory corresponds to the postulates of the constancy of the speed of light and the validity of the laws of Physics which, when applied to distinct inertial frames, must describe phenomena in similar ways. Physical phenomena are then described by equations that have the same form, that is, Special Relativity must be a \emph{covariant} theory, and the connection between reference systems described by covariant equations is made through Lorentz transformations, as we shall see below.
\smallskip
Since Special Relativity is a theory formulated in a four-dimensional spacetime, Lorentz transformations can be divided, basically, into two types: rotations and boosts. The former correspond to the three basic types of rotations around the three spatial axes, while the latter are determined by a change of velocities. There are also three types of boosts, each occurring along one of the three spatial axes. Thus, a system is said to have Lorentz symmetry if the laws of Physics are not altered by Lorentz transformations of the types described above.
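As a standard illustration (a textbook result, not specific to this dissertation), a boost along the $x$ axis with velocity $v$ connects the coordinates of two inertial frames through
\[
t' = \gamma\left(t - \frac{v\,x}{c^{2}}\right), \qquad
x' = \gamma\,(x - v\,t), \qquad
y' = y, \qquad z' = z, \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\]
while spatial rotations act only on $(x, y, z)$ and leave $t$ unchanged.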
\smallskip
Lorentz symmetry and the properties of Quantum Mechanics constitute the basis for the formulation of Quantum Field Theory, which describes particles as localized excitations of a field immersed in spacetime. The development of these ideas in the last century triggered the formulation of the {\it Standard Model} (SM), which describes all the known interactions in the universe (the electromagnetic, strong nuclear, and weak nuclear forces), except gravity. An important result, a property of the SM obtained from quantum theory, is the CPT {\it symmetry}. In this acronym, C stands for \emph{charge} conjugation, that is, this transformation converts a particle into its antiparticle; P corresponds to \emph{parity}, i.e., a spatial inversion must occur when this transformation is performed; and, finally, T reverses the direction of the flow of time. Owing to these definitions, a system is said to have CPT symmetry if the three transformations together do not affect the physics of the system. This is called the CPT \emph{theorem}, initially proposed by Schwinger \cite{Sch}. One of the consequences of this theorem is the fact that particles and their antiparticles must have electric charges of the same magnitude and opposite signs, exactly equal lifetimes, as well as equal masses and magnetic moments. The discrete transformations undergone by the Dirac field, in the form of its bilinear covariants, are studied in Chapter 2 of this dissertation.
\smallskip
Present-day theoretical physicists expect that, in the high-energy limit, a unified theory may describe Nature in a symmetric way. However, Lorentz symmetry is violated in the attempt to incorporate General Relativity\footnote{As is well known, Einstein did not succeed in formulating a theory encompassing electromagnetism and gravity since, in the latter, there is no repulsive force between masses.} into the SM. The SM with gravity would describe physical systems occurring at high-energy scales, of the order of magnitude of the Planck scale, whose mass is $m_{Pl}=\sqrt{\dfrac{\hbar c}{G_N}}\approx10^{19}\,GeV$, where $G_N$ is Newton's universal constant. The typical distances under such circumstances would be of order $10^{-33}\,cm$.
\smallskip
The breaking of Lorentz symmetry also arises in another area of contemporary high-energy theoretical physics: String Theory. In this theory, the point particle becomes a one-dimensional object. Thus, when moving through spacetime, instead of tracing a line (the world line), the particle sweeps out a surface called the \emph{world sheet}. The normal vibration modes of this world sheet would then encode the information describing the particles. The idea of Lorentz invariance violation in the context of String Theory was first put forward in 1989 by Kosteleck\'y and Samuel \cite{Sam}. In that work, the authors argue that such a violation can be extended to the SM. Then, in the second half of the 1990s, the {\it Standard Model Extension} (SME) appeared.
\smallskip
The SME is a theory that retains all the properties of the usual SM - such as the $SU(3)\times SU(2)\times U(1)$ gauge structure and renormalizability - together with the extension that allows for violations of the Lorentz and CPT symmetries. The theory thus provides a quantitative description of Lorentz and CPT violation, controlled by coefficients whose values are determined by specific experiments\footnote{Regarding these experiments, some details can be found in reference \cite{Gib}, such as the ratio $\dfrac{m_K-m_{\bar K}}{m_K}\lesssim10^{-18}$ for a system of neutral kaons. The kaon mass is 0.5 $GeV$.}. Since such experiments are sensitive tests of CPT transformations, the theory is based on low-energy effects, without gravity. A remarkable feature of this theory, as known from the literature, is that the breaking of CPT symmetry implies the breaking of Lorentz symmetry\footnote{As a matter of terminology, {\it Lorentz symmetry breaking} will be used in a general sense, while {\it CPT symmetry breaking} will be used in a specific sense.} \cite{Gre}. This fact allows any observable violation of CPT symmetry to be described by the SME.
\smallskip
In 1997 \cite{Col}, Colladay, Kosteleck\'y and collaborators proposed, for the fermion sector, a theoretical basis for the study of models involving CPT violation. In that work, the authors argue that results obtained through theoretical calculations, using corrections in relativistic quantum mechanics and perturbation theory (usually up to first order of approximation, in the low-energy regime), should serve as motivation for the experimental search for bounds on the Lorentz-breaking fields in the fermion theory. The authors also redefined several elements important for the study of the extension of the Dirac theory, as modified by the new theory, such as Lagrangians, Dirac equations, energies, propagators, Dirac spinors, etc. These elements are presented in Chapter 3. In Chapter 4, the CPT-violating theory is applied to theoretical models in quantum mechanics, such as energy shifts of relativistic electrons and positrons and the anomalous Zeeman effect.
\smallskip
In this dissertation, we study the possible effects that Lorentz symmetry breaking may cause in physical systems of QED\footnote{Further details of other tests involving Lorentz symmetry breaking in QED can be found in reference \cite{Edq}.}. Thus, systems comprising the coupling of fermions to a gauge field are analyzed.
\smallskip
In the early 1980s, S. Deser, R. Jackiw and S. Templeton \cite{Des} wrote Maxwell's electromagnetic theory in a planar variant in (2+1)$D$ that preserves Lorentz invariance and gauge transformations. This model, called the {\it Maxwell-Chern-Simons} (MCS) theory, is applicable to planar condensed-matter phenomena, with great prominence in the literature for superconductors and the fractional quantum Hall effect \cite{Dun}. In Chapter 5 we discuss some of its properties and compute the induced CS term arising from the coupling of fermions to a gauge field in the context of planar spacetime.
\smallskip
Although this theory only exists in odd dimensions, in 1990 Carroll, Field and Jackiw realized, in a pioneering work \cite{Car}, that it is possible to formulate a similar theory in (3+1)$D$ through the addition of
\begin{equation}
S_{CS}^{(3+1)D}=\frac{1}{2}\int d^4x\,\varepsilon^{\mu\nu\rho\sigma}\eta_\mu A_\nu \partial_\rho A_\sigma
\end{equation}
to the conventional Maxwell action. This term is known in the literature as the Carroll-Field-Jackiw term. It is CPT-odd\footnote{The SME also admits the addition of the term $-\frac{1}{4}\eta_{\mu\nu\rho\sigma}\,F^{\mu\nu}F^{\rho\sigma}$ to the Maxwell Lagrangian. Although it violates Lorentz symmetry, this term is CPT-even.}. Although gauge transformations are preserved, Lorentz symmetry is violated, because the Chern-Simons-like (CS) term must be coupled to a constant four-vector $\eta_\mu$, which generates an anisotropy of spacetime.
\smallskip
In Chapter 6 we compute the radiative corrections, at one loop, arising from the axial coupling of fermions to a gauge field in the presence of Lorentz symmetry breaking. This coupling induces a CS-like term \cite{Jac}, as in equation (1.1), in the QED action. This induction is ambiguous, since the proportionality between the matter and radiation fields depends exclusively on the regularization scheme adopted (such schemes include: dimensional regularization, Pauli-Villars regularization \cite{Alt}, the cut-off method and Schwinger's proper-time method \cite{Lin}). We therefore compute the induced term using dimensional regularization and discuss some consequences of this result for the propagation velocities of classical photons.
\smallskip
Lagrangians (and consequently the quantities derived from them) are always expressed in natural units $\hbar=c=1$ in this dissertation. When needed, the metric used is given, in four dimensions, by $g_{\mu\nu}=diag(1,-1,-1,-1)$. The Dirac matrices, when contracted with a constant four-vector, obey the relations $\not\!c=\gamma^\mu c_\mu=\gamma_\mu c^\mu$, where $\not\!c^2=c^2$.
\chapter{Discrete symmetries of the Dirac field}
\section{The scalar field as motivation}
It is well known that non-relativistic quantum mechanics satisfactorily describes phenomena involving particles whose velocities are small compared with the speed of light. This role is played by the Schr\"odinger equation:
\begin{equation}
i\frac{\partial\phi}{\partial t}=H\phi\,.
\end{equation}
Historically, it is known that Schr\"odinger did not arrive at this equation first. His initial result was what we now know as the {\it Klein-Gordon} (KG) {\it equation}\footnote{More historical information can be found in {\it Scientific American, Gênios da Ciência - Quânticos: Os Homens que Mudaram a Física, edição especial (2005)}.}. Schr\"odinger did not publish it because he could not apply it to the electrons in the hydrogen atom, since it is a relativistic equation. The KG equation is obtained from the relation between relativistic mass, momentum and energy:
\begin{equation}
p^2=p^\mu
p_\mu=E^2-|{\mbox{\boldmath{$p$}}}|^2=m^2\hspace{1eM}\rightarrow\hspace{1eM}E^2=|{\mbox{\boldmath{$p$}}}|^2+m^2\,,
\end{equation}
together with the substitutions
$\displaystyle{E\rightarrow i\frac{\partial}{\partial t}}$ \,and\,
$\displaystyle{{\mbox{\boldmath{$p$}}}\rightarrow
-i{\mbox{\boldmath{$\nabla$}}}}$:
\begin{equation}
(\partial^\mu\partial_\mu+m^2)\phi\equiv(\square+m^2)\phi=0\,,
\end{equation}
where $\square=\partial^2/\partial t^2-\nabla^2$ is the
D'Alembert operator.
\smallskip
In contemporary language, the Schr\"odinger equation (2.1) is interpreted as the non-relativistic limit of the KG equation.
\smallskip
The solutions of the KG equation are plane waves,
\begin{equation}
\phi({\mbox{\boldmath{$p$}}},t)\sim e^{-iEt+i{\bf p}\cdot{\bf x}}\,,
\end{equation}
where, according to (2.2), $\displaystyle{E=\pm\sqrt{|{\mbox{\boldmath{$p$}}}|^2+m^2}}$.
\smallskip
The KG equation presents two apparent problems: the energy spectrum admits \emph{negative values}, and the probability is not \emph{positive definite}, since the KG equation is second order in the time derivative $\partial\phi/\partial t$. Non-relativistic quantum mechanics provides the probability and the probability current:
\begin{eqnarray}
\rho&=&\phi^\ast\phi\\
{\mbox{\boldmath{$J$}}}&=&-\frac{i}{2m}\,\left(\phi^\ast{\mbox{\boldmath{$\nabla$}}}\phi-\phi{\mbox{\boldmath{$\nabla$}}}\phi^\ast\right)\,,
\end{eqnarray}
which satisfy the continuity equation:
\begin{equation}
\frac{\partial\rho}{\partial
t}+{\mbox{\boldmath{$\nabla$}}}\cdot{\mbox{\boldmath{$J$}}}=0\,,
\end{equation}
which can also be written in four-vector form as $\partial_\mu J^\mu=0$, with the current four-vector given by
$J^\mu=(\rho,{\mbox{\boldmath{$J$}}})$.
\smallskip
In the relativistic case, the KG equation requires the probability current to be given by\footnote{This abbreviated notation means $A\stackrel{\leftrightarrow}{\partial^\mu} B=A\partial^\mu
B-(\partial^\mu A)B$.} (where $\phi$ and $\phi^\ast$ satisfy the KG equation):
\begin{equation}
J^\mu=i\phi^\ast\stackrel{\leftrightarrow}{\partial^\mu}\phi\,\,,\hspace{1eM}\mbox{or}\hspace{1eM}J^\mu=\left(i\phi^\ast\stackrel{\leftrightarrow}{\partial_0}\phi,-i\phi^\ast\stackrel{\leftrightarrow}{{\mbox{\boldmath{$\nabla$}}}}\phi\right)\,.
\end{equation}
The solution to the two apparent problems is to reinterpret $\phi$, raising its \emph{status} from {\it wave function of a particle} to an {\it operator that describes fields}. This field operator must be quantized using specific rules. The other solution is to accept that the positive and negative energies represent, respectively, \emph{particles} and \emph{antiparticles}. This is one of the main features of Dirac's theory for the relativistic treatment of the electron, as we shall see below. We now define some elements of the canonical quantization of the bosonic KG field, which will be useful when the quantization of the fermionic Dirac field is carried out.
\smallskip
The KG scalar field can be written as a Fourier transform as follows:
\begin{eqnarray}
\phi(x)=\int\frac{d^4p}{(2\pi)^4}\,e^{-ip\cdot x}\phi(p)\,.
\end{eqnarray}
This field is a solution of the KG equation, so we can decompose it into Fourier modes,
\begin{eqnarray}
\phi(p)=f({\bf p})\delta(p_0-\omega_p)+g({\bf
p})\delta(p_0+\omega_p)\hspace{2eM},\hspace{2eM}p_0=\pm\omega_p\,,
\end{eqnarray}
where $f({\bf p})$ and $g({\bf p})$ are operators to be determined below. Thus,
\begin{eqnarray}
\phi(x)&=&\int\frac{d^3p}{(2\pi)^4}dp_0[f({\bf
p})\delta(p_0-\omega_p)+g({\bf
p})\delta(p_0+\omega_p)]e^{-ip_0x_0+i{\bf p}\cdot{\bf x}}\nonumber\\
&&\nonumber\\
&=&\int\frac{d^3p}{(2\pi)^4}[f({\bf p})e^{-i\omega_px_0+i{\bf p}\cdot{\bf
x}}+g(-{\bf p})e^{i\omega_px_0-i{\bf p}\cdot{\bf x}}]\nonumber\\
&&\nonumber\\
&=&\int\frac{d^3p}{(2\pi)^4}[f({\bf p})e^{-ip\cdot x}+g(-{\bf
p})e^{ip\cdot x}]\bigg|_{p_0=\omega_p}\,.
\end{eqnarray}
A convenient choice is
\begin{equation}
\frac{f({\bf p})}{(2\pi)^4}=\frac{1}{(2\pi)^{3/2}}\frac{a({\bf
p})}{\sqrt{2\omega_p}}\hspace{1.5eM}\mbox{and}\hspace{1.5eM}\frac{g({-\bf
p})}{(2\pi)^4}=\frac{1}{(2\pi)^{3/2}}\frac{b^\dagger({\bf
p})}{\sqrt{2\omega_p}}\,,
\end{equation}
and the field (2.9) becomes
\begin{equation}
\phi(x)=\frac{1}{(2\pi)^{3/2}}\int\frac{d^3p}{\sqrt{2\omega_p}}[a({\bf
p})e^{-ip\cdot x}+b^\dagger({\bf p})e^{ip\cdot x}]\,.
\end{equation}
The operators $a^\dagger({\bf p})$ and $a({\bf p})$ create and annihilate particles, respectively, while $b^\dagger({\bf p})$ and $b({\bf p})$ are the corresponding operators for antiparticles.
If $a({\bf p})=b({\bf p})$, the field is real, and its Lagrangian\footnote{Since we deal with a theory of local fields, it is convenient to define a function ${\cal L}$, called the Lagrangian density, where $L=\int d^3x{\cal L}(\phi,\partial_\mu\phi)$\,. Hence the term {\it Lagrangian} refers simply to the {\it Lagrangian density}.} is given by
\begin{equation}
{\cal L}=\frac{1}{2}(\partial_\mu\phi)^2-\frac{m^2}{2}\phi^2\,.
\end{equation}
Defining the momentum $\pi({\bf x},t)$ canonically conjugate to the field $\phi({\bf x},t)$ as
\begin{equation}
\pi({\bf x},t)=\frac{\partial{\cal L}}{\partial(\partial_0\phi({\bf
x},t))}=\dot{\phi}({\bf x},t)\,,
\end{equation}
and adopting the equal-time commutation rule,
\begin{equation}
[\phi(x),\pi(x')]=i\delta^{(3)}({\bf x}-{\bf x}')\,,
\end{equation}
we find that
\begin{equation}
[a({\bf p}),a^\dagger({\bf p}')]=\delta^{(3)}({\bf p}-{\bf p}')\,.
\end{equation}
Consider the real scalar field. We know that the operator $a({\bf p})$ annihilates the vacuum, i.e., $a({\bf p})|0\rangle=0$. We can therefore make the following substitution:
\begin{equation}
\langle0|a({\bf p})a^\dagger({\bf
p})|0\rangle\,\,\to\,\,\langle0|[a({\bf p}),a^\dagger({\bf
p})]|0\rangle\,.
\end{equation}
We thus obtain the following relation:
\begin{eqnarray}
\langle0|[\phi(x),\phi(y)]|0\rangle&=&\!\int\!\frac{d^3p\,d^3p'}{(2\pi)^3}\frac{1}{\sqrt{2\omega_p\,2\omega_{p'}}}\,\langle0|[a_p\,e^{-ip\cdot x}+a^\dagger_p\,e^{ip\cdot x},\,a_{p'}\,e^{ip'\cdot y}+a^\dagger_{p'}\,e^{-ip'\cdot y}]|0\rangle\nonumber\\
&=&\int\!\frac{d^3p}{(2\pi)^3}\left\{\frac{1}{2\omega_p}\,e^{-ip\cdot(x-y)}\Bigg|_{p^0=+\omega_p}-\frac{1}{2\omega_p}\,e^{-ip\cdot(x-y)}\Bigg|_{p^0=-\omega_p}\right\}\nonumber\\
&=&\int\!\frac{d^3p}{(2\pi)^3}\!\int\frac{dp^0}{2\pi i}\,\frac{-1}{p^2-m^2}\,e^{-ip\cdot(x-y)}\nonumber\\
&=&\int\!\frac{d^4p}{(2\pi)^4}\,\frac{i}{p^2-m^2}\,e^{-ip\cdot(x-y)}\hspace{2eM}\mbox{for}\hspace{1eM}x^0>y^0\,.
\end{eqnarray}
We then define the {\it Feynman propagator} for the bosonic field,
\begin{equation}
\Delta_F(x-y)=\int\!\frac{d^4p}{(2\pi)^4}\,\frac{i}{p^2-m^2+i\epsilon}\,e^{-ip\cdot(x-y)}\,.
\end{equation}
Applying the KG equation to this propagator, we obtain the identity:
\begin{equation}
(\square+m^2)\Delta_F(x-y)=-i\delta^{(4)}(x-y)\,.
\end{equation}
\section{The Dirac equation}
Motivated by the desire to find a relativistically covariant first-order equation, Dirac wrote an equation linear in the time derivative and in the momentum. With $H_0=\alpha_1p_1+\alpha_2p_2+\alpha_3p_3+\beta m$ representing the \emph{free Dirac Hamiltonian}, equation (2.1) takes the form:
\begin{equation}
i\frac{\partial\psi}{\partial
t}=\left[-i{\mbox{\boldmath{$\alpha$}}}\cdot{\mbox{\boldmath{$\nabla$}}}+\beta
m\right]\psi\,,
\end{equation}
where ${\mbox{\boldmath{$\alpha$}}}$ and $\beta$ are $n\times n$ matrices, to be determined, and $\psi$ is a column matrix of order $n$. To find this relation, we apply the time derivative to
both sides of (2.22) and obtain:
\begin{equation}
-\frac{\partial^2\psi}{\partial
t^2}=\left[-\alpha^i\alpha^j\nabla^i\nabla^j-im(\beta\alpha^i+\alpha^i\beta)\nabla^i+\beta^2m^2\right]\psi\,.
\end{equation}
Comparing with the KG equation, we see clearly that ${\mbox{\boldmath{$\alpha$}}}$ and $\beta$ must
satisfy the conditions:
\begin{eqnarray}
\alpha^i\alpha^j+\alpha^j\alpha^i=2\delta^{ij}\,,&&\nonumber\\
\beta\alpha^i+\alpha^i\beta=0\,,&&\\
\beta^2=1\,.&&\nonumber
\end{eqnarray}
The relations (2.24) indicate that ${\mbox{\boldmath{$\alpha$}}}$ and $\beta$ are in fact \emph{not}
ordinary numbers, i.e., they are \emph{matrices}. Moreover, as we know, the operators that represent physical phenomena must be given by square matrices\footnote{This characterizes a \emph{linear transformation} in its particular case, the \emph{linear operation}.}. The simplest choice, $2\times2$, is not possible, because $2\times2$ matrices ${\mbox{\boldmath{$\alpha$}}}$ and $\beta$ satisfying these relations {\it do not exist}. The next simplest option is dimension $4\times4$ which, in the \emph{Dirac representation} or standard representation\footnote{See Appendix A.}, gives:
\begin{equation}
{\mbox{\boldmath{$\alpha$}}}=\left(\begin{array}{cc}0&{\mbox{\boldmath{$
\sigma$}}}\\{\mbox{\boldmath{$
\sigma$}}}&0\end{array}\right)\hspace{2eM}\mbox{and}\hspace{2eM}\beta=\left(\begin{array}{cc}1&0\\0&-1\end{array}\right)\,.
\end{equation}
This choice is made on the basis of a theorem due to Pauli. It implies that the matrices ${\mbox{\boldmath{$\alpha$}}}$ and $\beta$ are Hermitian, and the properties (2.24) also apply to
${\mbox{\boldmath{$\alpha$}}}^\dagger$ and $\beta^\dagger$. Therefore, the Dirac Hamiltonian in (2.22) is Hermitian. To write the Dirac equation in modern notation, we set
${\mbox{\boldmath{$\alpha$}}}=\gamma^0{\mbox{\boldmath{$\gamma$}}}$ and $\beta=\gamma^0$, which gives:
\begin{equation}
(i\gamma^\mu\partial_\mu-m)\psi=0\,.
\end{equation}
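The conditions (2.24), the explicit matrices (2.25) and the Clifford algebra $\{\gamma^\mu,\gamma^\nu\}=2g^{\mu\nu}$ underlying (2.26) can be verified numerically. The sketch below (in Python with numpy; the variable names are ours and purely illustrative, not part of the formalism) performs this check in the standard representation:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Standard (Dirac) representation, eq. (2.25)
alpha = [np.block([[Z2, s], [s, Z2]]) for s in (s1, s2, s3)]
beta = np.block([[I2, Z2], [Z2, -I2]])

# Conditions (2.24)
for i in range(3):
    for j in range(3):
        assert np.allclose(alpha[i] @ alpha[j] + alpha[j] @ alpha[i],
                           2 * (i == j) * np.eye(4))
    assert np.allclose(beta @ alpha[i] + alpha[i] @ beta, np.zeros((4, 4)))
assert np.allclose(beta @ beta, np.eye(4))

# gamma^0 = beta, gamma^i = beta alpha^i  =>  {gamma^mu, gamma^nu} = 2 g^{mu nu}
gamma = [beta] + [beta @ a for a in alpha]
g = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        assert np.allclose(gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu],
                           2 * g[mu, nu] * np.eye(4))
```

All assertions hold for this representation, confirming that the $4\times4$ choice satisfies the required algebra.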
With the Dirac equation above in hand, we now compute its energy eigenvalues and its plane-wave solutions of the form $\psi=N\,u(p)\,e^{-ip\cdot x}$, whose normalization constant will be determined later. Consider the eigenvalue equation $H_0\,u^{(s)}({\mbox{\boldmath{$p$}}})=E\,u^{(s)}({\mbox{\boldmath{$p$}}})$,
with the Dirac Hamiltonian $H_0$ and the energies (2.2), together with the standard representation for ${\mbox{\boldmath{$\alpha$}}}$ and $\beta$:
\begin{equation}
\left(\begin{array}{cc}m&{\mbox{\boldmath{$\sigma$}}}\cdot{\mbox{\boldmath{$p$}}}\\{\mbox{\boldmath{$\sigma$}}}\cdot{\mbox{\boldmath{$p$}}}&-m\end{array}\right)\left(\begin{array}{c}\chi_s\\\eta_s\end{array}\right)=E\left(\begin{array}{c}\chi_s\\\eta_s\end{array}\right)\,,
\end{equation}
where $s=1,2$ label spin up and spin down, respectively,
$\chi_1=\left(\begin{array}{c}1\\0\end{array}\right)$,
$\chi_2=\left(\begin{array}{c}0\\1\end{array}\right)$ form an
orthonormal basis, and $\eta_s$ is determined by (2.27).
\smallskip
The positive-energy solutions, for particles with
$E=+\sqrt{|{\mbox{\boldmath{$p$}}}|^2+m^2}$, are given by:
\begin{equation}
u^{(s)}(p)=N\,\left(\begin{array}{c}\chi_s\\\frac{{\mbox{\boldmath{$\sigma$}}}\cdot{\mbox{\boldmath{$p$}}}}{{\displaystyle{E+m}}}\,\chi_s\end{array}\right)\,.
\end{equation}
Admitting the existence of antiparticles in the Dirac theory, these must correspond to the negative-energy solutions, i.e., $E=-\sqrt{|{\mbox{\boldmath{$p$}}}|^2+m^2}$. Thus, antiparticles should be thought of as particles with energies $E\,\rightarrow\,-E$ and momenta ${\mbox{\boldmath{$p$}}}\,\rightarrow\,-{\mbox{\boldmath{$p$}}}$.
With the eigenvalue equation now given by
$H_0\,\upsilon^{(s)}({\mbox{\boldmath{$p$}}})=-E\,\upsilon^{(s)}({\mbox{\boldmath{$p$}}})$, we obtain the solution\footnote{The spinor chosen for the negative-energy states is $\chi_{-s}=-i\sigma^2\chi^\ast_{s}$, in order to agree with the charge-conjugation transformation
performed later.}
\begin{equation}
\upsilon^{(s)}(p)=N'\,\left(\begin{array}{c}\frac{{\mbox{\boldmath{$\sigma$}}}\cdot{\mbox{\boldmath{$p$}}}}{{\displaystyle{E+m}}}\,\chi_{-s}\\\chi_{-s}\end{array}\right)\,.
\end{equation}
The normalization of the spinor (2.28), among other possibilities, with $\bar{u}=u^\dagger\gamma^0$, is:
\begin{eqnarray}
&&\bar{u}^{(r)}(p)u^{(s)}(p)=\delta^{rs}\,,\\
&&\bar{\upsilon}^{(r)}(p)\upsilon^{(s)}(p)=-\delta^{rs}\,,
\end{eqnarray}
from which it follows that $\displaystyle{N=N'=\sqrt{\frac{E+m}{2m}}}$.
\smallskip
It is also convenient to write the following relations among the spinors of the Dirac theory:
\begin{eqnarray}
u^{(r)\dagger}(p)u^{(s)}(p)&=&\frac{E}{m}\,\delta^{rs}\,,\nonumber\\
&&\nonumber\\
\upsilon^{(r)\dagger}(p)\upsilon^{(s)}(p)&=&\frac{E}{m}\,\delta^{rs}\,,\\
&&\nonumber\\
u^{(r)\dagger}(p)\upsilon^{(s)}(p)&=&\upsilon^{(r)\dagger}(p)u^{(s)}(p)=0\nonumber\,.
\end{eqnarray}
Therefore, the Dirac equation in momentum space and its normalized spinors for particles and antiparticles are, respectively, given by:
\begin{eqnarray}
(\not\!p-m)u^{(s)}(p)&=&0\hspace{1.5eM}\rightarrow\hspace{1.5eM}u^{(s)}(p)=\sqrt{\frac{E+m}{2m}}\,\left(\begin{array}{c}\chi_s\\\frac{{\mbox{\boldmath{$\sigma$}}}\cdot{\mbox{\boldmath{$p$}}}}{{\displaystyle{E+m}}}\,\chi_s\end{array}\right)\,,\\
&&\nonumber\\
(\not\!p+m)\upsilon^{(s)}(p)&=&0\hspace{1.5eM}\rightarrow\hspace{1.5eM}\upsilon^{(s)}(p)=\sqrt{\frac{E+m}{2m}}\,\left(\begin{array}{c}\frac{{\mbox{\boldmath{$\sigma$}}}\cdot{\mbox{\boldmath{$p$}}}}{{\displaystyle{E+m}}}\,\chi_{-s}\\\chi_{-s}\end{array}\right)\,.
\end{eqnarray}
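As a numerical cross-check of (2.30)-(2.34), the sketch below (Python with numpy; the momentum value is an arbitrary illustrative choice, and the $\chi_{-s}$ relabelling is irrelevant for these particular identities) constructs the spinors in the standard representation and tests the normalizations and the momentum-space Dirac equations:

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [np.block([[Z2, s], [-s, Z2]]) for s in sig]

m = 1.0
p = np.array([0.3, -0.7, 0.5])               # arbitrary 3-momentum
E = np.sqrt(p @ p + m**2)
sp = sum(pi * si for pi, si in zip(p, sig))  # sigma . p
N = np.sqrt((E + m) / (2 * m))               # normalization constant

chi = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
u = [N * np.concatenate([c, sp @ c / (E + m)]) for c in chi]   # eq. (2.33)
v = [N * np.concatenate([sp @ c / (E + m), c]) for c in chi]   # eq. (2.34)

pslash = E * g0 - sum(pi * gi for pi, gi in zip(p, gam))       # gamma^mu p_mu

for s in range(2):
    ubar, vbar = u[s].conj() @ g0, v[s].conj() @ g0
    assert np.isclose(ubar @ u[s], 1)                          # (2.30)
    assert np.isclose(vbar @ v[s], -1)                         # (2.31)
    assert np.isclose(u[s].conj() @ u[s], E / m)               # (2.32)
    assert np.allclose((pslash - m * np.eye(4)) @ u[s], 0)     # (2.33)
    assert np.allclose((pslash + m * np.eye(4)) @ v[s], 0)     # (2.34)
```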
As we saw in the previous sections, the negative-energy solutions of the relativistic wave equations caused great discomfort at the beginning of the construction of the theory. Dirac, however, {\it reinterpreted} the negative-energy solutions. He proposed a vacuum state, called the \emph{Dirac sea}, in which all negative-energy states are filled. An electron in the Dirac sea, with negative energy, can absorb radiation and be promoted to a positive-energy state. The hole (\emph{hole}) left by the electron is reinterpreted as the absence of charge $-|e|$ and energy $-|E|$. This \textquotedblleft absence\textquotedblright\ is then regarded as evidence of the existence of an \emph{antiparticle} with charge $+|e|$ and energy $+|E|$. The Dirac theory thus predicts the existence of the \emph{positron}, the antiparticle of the electron, discovered experimentally in 1932 by Carl Anderson.
\vspace{-0.9cm}
\begin{center}
\begin{picture}(115,160)
\LongArrow(30,0)(30,100)\Text(5,90)[]{$Energy$}
\LongArrow(90,45)(120,45)\Text(170,45)[]{Forbidden region}
\GBox(30,30)(100,60){0.8}\Text(18,62)[]{$Mc^2$}\Text(18,28)[]{-$Mc^2$}
\GCirc(55,17){4}{1.0}\Text(90,13)[]{Hole}
\GCirc(55,79){4}{0.0}\Text(90,82)[]{Electron}
\ArrowArc(56,47)(30,280,80)
\end{picture}\\ \vspace{0.5cm} {\sl Pictorial representation of the Dirac sea.}
\end{center}
\section{The fermion propagator}
The Dirac equation admits an arbitrary solution, comprising both the positive and negative energies, which is given by:
\begin{equation}
\psi(x)=\frac{1}{(2\pi)^{3/2}}\sum\limits_s\int\frac{d^3p}{E/m}\left[c^{(s)}(p)u^{(s)}(p)e^{-ip\cdot
x}+d^{(s)\dagger}(p)\upsilon^{(s)}(p)e^{ip\cdot x}\right]\,,
\end{equation}
and its adjoint operator,
\begin{equation}
\bar{\psi}(x)=\frac{1}{(2\pi)^{3/2}}\sum\limits_s\int\frac{d^3p}{E/m}\left[c^{(s)\dagger}(p)\bar{u}^{(s)}(p)e^{ip\cdot
x}+d^{(s)}(p)\bar{\upsilon}^{(s)}(p)e^{-ip\cdot x}\right]\,,
\end{equation}
where $c^{(s)}$ and $c^{(s)\dagger}$ are annihilation and creation operators for particles, while $d^{(s)}$ and $d^{(s)\dagger}$ are the corresponding operators for antiparticles. Since fermions obey Fermi-Dirac statistics, these operators satisfy the anticommutation rule
\begin{equation}
\{c^{(s)}(p),c^{(r)\dagger}(p')\}=\{d^{(s)}(p),d^{(r)\dagger}(p')\}=\frac{E}{m}\delta^{rs}\,\delta({\bf
p}-{\bf p}')\,.
\end{equation}
Then,
\begin{eqnarray}
\langle0|\psi(x)\bar{\psi}(y)|0\rangle&=&\int\!\frac{d^3p}{(2\pi)^3}\,\frac{m}{E}\sum\limits_su^{(s)}(p)\bar{u}^{(s)}(p)\,e^{-ip\cdot(x-y)}\nonumber\\
&&\nonumber\\
&=&\int\!\frac{d^3p}{(2\pi)^3}(i\!\not\!\partial_x+m)\frac{1}{2E}\,e^{-ip\cdot(x-y)}\\
&&\nonumber\\
\langle0|\bar{\psi}(x)\psi(y)|0\rangle&=&\int\!\frac{d^3p}{(2\pi)^3}\,\frac{m}{E}\sum\limits_s\upsilon^{(s)}(p)\bar{\upsilon}^{(s)}(p)\,e^{-ip\cdot(y-x)}\nonumber\\
&&\nonumber\\
&=&\int\!\frac{d^3p}{(2\pi)^3}(-i\!\not\!\partial_x+m)\frac{1}{2E}\,e^{-ip\cdot(y-x)}\,.
\end{eqnarray}
In the same way as for the KG field, we obtain, for the Dirac field, the retarded Green function:
\begin{equation}
S_R(x-y)\equiv\theta(x^0-y^0)\langle0|\{\psi(x),\bar{\psi}(y)\}|0\rangle\,,
\end{equation}
which satisfies
\begin{equation}
S(x-y)=(i\!\not\!\partial_x+m)\Delta(x-y)\,,
\end{equation}
with $\Delta(x-y)$ the boson propagator. Applying the operator $-i\!\not\!\partial_x+m$ to both sides of the expression above from the left, it follows that
\begin{eqnarray}
(-i\!\not\!\partial_x+m)S(x-y)&=&(\partial^\mu\partial_\mu+m^2)\Delta(x-y)\nonumber\\
&=&-i\delta^{(4)}(x-y)\,,
\end{eqnarray}
where expression (2.21) was used.
\smallskip
Therefore, in momentum space,
\begin{eqnarray*}
i\delta^{(4)}(x-y)=\int\frac{d^4p}{(2\pi)^4}\,i\,e^{-ip\cdot(x-y)}=\int\frac{d^4p}{(2\pi)^4}(\not\!p-m)e^{-ip\cdot(x-y)}S(p)\,,
\end{eqnarray*}
that is,
\begin{equation}
S(p)=\frac{i}{\not\!p-m}\,.
\end{equation}
Using the same arguments as for the KG field, the Feynman propagator for the fermionic field is defined as
follows:
\begin{eqnarray}
S(x-y)&\equiv&\langle0|T\psi(x)\bar{\psi}(y)|0\rangle\nonumber\\
&&\nonumber\\
&=&i\int\frac{d^4p}{(2\pi)^4}\,\frac{\not\!p+m}{p^2-m^2+i\epsilon}\,e^{-ip\cdot(x-y)}\,.
\end{eqnarray}
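The passage from $S(p)=i/(\not\!p-m)$ in (2.43) to the rationalized numerator $\not\!p+m$ in (2.44) rests on the identity $\not\!p^2=p^2$. A short numerical check (Python with numpy; the off-shell momentum below is an arbitrary choice for illustration):

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [np.block([[Z2, s], [-s, Z2]]) for s in sig]

m = 1.0
p0, p = 1.7, np.array([0.2, 0.4, -0.9])      # arbitrary off-shell momentum
pslash = p0 * g0 - sum(pi * gi for pi, gi in zip(p, gam))
p2 = p0**2 - p @ p                           # p^2 = p_mu p^mu

# pslash^2 = p^2 * 1 ...
assert np.allclose(pslash @ pslash, p2 * np.eye(4))

# ... hence i/(pslash - m) equals i (pslash + m)/(p^2 - m^2), cf. (2.43)-(2.44)
S = 1j * np.linalg.inv(pslash - m * np.eye(4))
assert np.allclose(S, 1j * (pslash + m * np.eye(4)) / (p2 - m**2))
```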
\section{Discrete symmetries}
Besides the Lorentz transformations, the Dirac theory exhibits two other important spacetime symmetries: the {\it parity} transformation and {\it time reversal}. The first, of spatial character, flips the sign of the spatial part of any four-vector, $x^\mu=(x^0,\mathbf{x})\,\to\,\tilde{x}_\mu=(x^0,-\mathbf{x})$, and the
same holds for the momentum, $\tilde{p}_\mu=(p_0,-{\bf p})$. The second, strictly temporal, reverses the flow of time in the light cone: $-\tilde{x}_\mu=(-x^0,\mathbf{x})$ and $-\tilde{p}_\mu=(-p_0,\mathbf{p})$. Despite these sign changes, the norm in Minkowski spacetime is preserved.
\smallskip
As seen in the previous sections, one of the main virtues of the quantum theory is its ability to explain the existence of particles and antiparticles in Nature. This fact entails another discrete symmetry of the theory: the conversion of a particle into its antiparticle and vice-versa. This symmetry is called the {\it charge conjugation} transformation. Nature is believed to preserve these three symmetries combined, called {\it CPT symmetry}. However, there are processes that may violate this symmetry. Before discussing them, we examine the action of these three symmetries on the Dirac
fields and particles through the bilinear covariants.
\subsection{Parity}
Considering the Dirac field operators (2.35) and (2.36), we look for a unitary operator that flips the sign of the spatial part of the momenta in the creation and annihilation operators of particles and antiparticles, without \emph{altering their spin states}:
\begin{eqnarray}
{\cal P}c^{(s)}(p){\cal P}^{-1}=\alpha_p\,c^{(s)}(\tilde{p})\hspace{1.5eM}&,&\hspace{1.5eM}{\cal P}d^{(s)}(p){\cal P}^{-1}=\beta_p\,d^{(s)}(\tilde{p})\,,\nonumber\\
&&\\
{\cal P}c^{(s)\dagger}(p){\cal
P}^{-1}=\alpha_p^\ast\,c^{(s)\dagger}(\tilde{p})\hspace{1.5eM}&,&\hspace{1.5eM}{\cal
P}d^{(s)\dagger}(p){\cal
P}^{-1}=\beta_p^\ast\,d^{(s)\dagger}(\tilde{p})\,,\nonumber
\end{eqnarray}
where $\alpha_p$ and $\beta_p$ are the phases of this transformation. Since the scalar product is conserved, i.e., $p\cdot x=\tilde{p}\cdot\tilde{x}$, with $u^{(s)}(p)=\gamma^0u^{(s)}(\tilde{p})$ and $\upsilon^{(s)}(p)=-\gamma^0\upsilon^{(s)}(\tilde{p})$, the transformation we seek is:
\begin{eqnarray}
{\cal P}\psi(x){\cal P}^{-1}&=&\frac{1}{(2\pi)^{3/2}}\!\int\!\!\frac{d^3p}{E/m}\sum_s\left[\alpha_p\,c^{(s)}(\tilde{p})u^{(s)}(p)e^{-i\tilde{p}\cdot\tilde{x}}+\beta^\ast_p\,d^{(s)\dagger}(\tilde{p})\upsilon^{(s)}(p)e^{i\tilde{p}\cdot\tilde{x}}\right]\nonumber\\
&&\nonumber\\
&=&\frac{1}{(2\pi)^{3/2}}\!\int\!\!\frac{d^3\tilde{p}}{E/m}\left[\alpha_p\,c^{(s)}(\tilde{p})\gamma^0u^{(s)}(\tilde{p})e^{-i\tilde{p}\cdot\tilde{x}}-\beta^\ast_p\,d^{(s)\dagger}(\tilde{p})\gamma^0\upsilon^{(s)}(\tilde{p})e^{i\tilde{p}\cdot\tilde{x}}\right]\,,\nonumber\\
\end{eqnarray}
Choosing $\alpha_p=-\beta^\ast_p$, we have
\begin{equation}
{\cal P}\psi(x){\cal
P}^{-1}\equiv\psi^p(x)=\alpha_p\,\gamma^0\psi(\tilde{x})\,,
\end{equation}
and, analogously for the adjoint field,
\begin{equation}
{\cal P}\bar{\psi}(x){\cal
P}^{-1}\equiv\bar{\psi}^p(x)=\alpha^\ast_p\,\bar{\psi}(\tilde{x})\gamma^0\,.
\end{equation}
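The spinor relations $u^{(s)}(p)=\gamma^0u^{(s)}(\tilde{p})$ and $\upsilon^{(s)}(p)=-\gamma^0\upsilon^{(s)}(\tilde{p})$ used above can be verified directly. A minimal numerical sketch (Python with numpy; the momentum value and the spin label are arbitrary illustrative choices):

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])

m = 1.0
p = np.array([0.3, -0.7, 0.5])     # arbitrary 3-momentum
E = np.sqrt(p @ p + m**2)          # energy is parity-even
N = np.sqrt((E + m) / (2 * m))
chi = np.array([1, 0], dtype=complex)

def spinors(q):
    """u and v spinors of eqs. (2.33)-(2.34) at 3-momentum q."""
    sq = sum(qi * si for qi, si in zip(q, sig))
    u = N * np.concatenate([chi, sq @ chi / (E + m)])
    v = N * np.concatenate([sq @ chi / (E + m), chi])
    return u, v

u_p, v_p = spinors(p)
u_t, v_t = spinors(-p)             # parity-reversed momentum p~

assert np.allclose(u_p, g0 @ u_t)      # u(p) =  gamma^0 u(p~)
assert np.allclose(v_p, -g0 @ v_t)     # v(p) = -gamma^0 v(p~)
```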
\subsection{Time reversal}
Reversing the flow of time flips the spin of particles and antiparticles. Thus, the action of the anti-unitary time-reversal operator ${\cal T}$ on the creation and annihilation operators of particles and antiparticles gives
\begin{eqnarray}
{\cal T}c^{(s)}(p){\cal T}^{-1}=\alpha_t\,c^{(-s)}(\tilde{p})\hspace{1.5eM}&,&\hspace{1.5eM}{\cal T}d^{(s)}(p){\cal T}^{-1}=\beta_t\,d^{(-s)}(\tilde{p})\,,\nonumber\\
&&\\
{\cal T}c^{(s)\dagger}(p){\cal
T}^{-1}=\alpha_t^\ast\,c^{(-s)\dagger}(\tilde{p})\hspace{1.5eM}&,&\hspace{1.5eM}{\cal
T}d^{(s)\dagger}(p){\cal
T}^{-1}=\beta_t^\ast\,d^{(-s)\dagger}(\tilde{p})\,.\nonumber
\end{eqnarray}
The spinors $\chi_s$, with $s=1,2$, were defined in Section 2.2: $\chi_1=\left(\begin{array}{c}1\\0\end{array}\right)$,
$\chi_2=\left(\begin{array}{c}0\\1\end{array}\right)$. Thus,
\begin{eqnarray}
{\cal T}\psi(x){\cal T}^{-1}&=&\frac{1}{(2\pi)^{3/2}}\!\int\!\!\frac{d^3p}{E/m}\sum_{-s}\left[\alpha_t\,c^{(-s)}(\tilde{p})u^{(s)}(p)e^{-ip\cdot x}+\beta^\ast_t\,d^{(-s)\dagger}(\tilde{p})\upsilon^{(s)}(p)e^{ip\cdot x}\right]\nonumber\\
&&\nonumber\\
&=&\frac{1}{(2\pi)^{3/2}}\!\int\!\!\frac{d^3p}{E/m}\sum_{-s}\left[\alpha_t\,c^{(-s)}(\tilde{p})u^{(s)\ast}(p)e^{i\tilde{p}\cdot x}+\beta^\ast_t\,d^{(-s)\dagger}(\tilde{p})\upsilon^{(s)\ast}(p)e^{-i\tilde{p}\cdot x}\right]\,.\nonumber\\
\end{eqnarray}
Since the time-reversal operator flips the spin, we need the operation that performs this role:
\begin{equation}
\chi_{-s}=-i\sigma^2\,\chi_s^\ast\hspace{1.5eM}\Rightarrow\hspace{1.5eM}\chi^\ast_{s}=i\sigma^2\,\chi_{-s}\,.
\end{equation}
According to the identity $\sigma^2{\mbox{\boldmath{$\sigma$}}}^\ast=-{\mbox{\boldmath{$\sigma$}}}\sigma^2$,
the following relations are convenient:
\begin{eqnarray}
u^{(s)\ast}(p)&=&\gamma^1\gamma^3\,u^{(-s)}(\tilde{p})\,,\nonumber\\
&&\\
\upsilon^{(s)\ast}(p)&=&\gamma^1\gamma^3\,\upsilon^{(-s)}(\tilde{p})\,.\nonumber
\end{eqnarray}
Thus, returning to equation (2.50) and using (2.52), it follows that
\begin{eqnarray}
{\cal T}\psi(x){\cal T}^{-1}&=&\frac{1}{(2\pi)^{3/2}}\!\int\!\!\frac{d^3p}{E/m}\,\alpha_t\gamma^1\gamma^3\sum_{-s}\left[c^{(-s)}(\tilde{p})u^{(-s)}(\tilde{p})e^{-ip\cdot\tilde{x}}+d^{(-s)\dagger}(\tilde{p})\upsilon^{(-s)}(\tilde{p})e^{ip\cdot\tilde{x}}\right]\,,\nonumber\\
\end{eqnarray}
where we have chosen $\alpha_t=\beta^\ast_t$. Therefore,
\begin{equation}
{\cal T}\psi(x){\cal
T}^{-1}\equiv\psi^t(x)=\alpha_t\,\gamma^1\gamma^3\psi(-\tilde{x})\,,
\end{equation}
while, analogously,
\begin{equation}
{\cal T}\bar{\psi}(x){\cal
T}^{-1}\equiv\bar{\psi}^t(x)=-\alpha^\ast_t\,\bar{\psi}(-\tilde{x})\gamma^1\gamma^3\,.
\end{equation}
\subsection{Charge conjugation}
The charge-conjugation operation acts on the annihilation and creation operators of particles and antiparticles, without changing their spin states, as follows:
\begin{eqnarray}
{\cal C}\,c^{(s)}(p)\,{\cal C}^{-1}=\alpha_c\,d^{(s)}(p)\hspace{1.5eM}&,&\hspace{1.5eM}{\cal C}\,d^{(s)}(p)\,{\cal C}^{-1}=\beta_c\,c^{(s)}(p)\,,\nonumber\\
&&\\
{\cal C}\,c^{(s)\dagger}(p)\,{\cal
C}^{-1}=\alpha_c^\ast\,d^{(s)\dagger}(p)\hspace{1.5eM}&,&\hspace{1.5eM}{\cal
C}\,d^{(s)\dagger}(p)\,{\cal
C}^{-1}=\beta_c^\ast\,c^{(s)\dagger}(p)\,.\nonumber
\end{eqnarray}
Then, the transformed Dirac field is:
\begin{eqnarray}
{\cal C}\,\psi(x)\,{\cal
C}^{-1}&=&\frac{1}{(2\pi)^{3/2}}\!\int\!\!\frac{d^3p}{E/m}\,\alpha_c\sum_{s}\left[d^{(s)}(p)u^{(s)}(p)e^{-ip\cdot
x}+c^{(s)\dagger}(p)\upsilon^{(s)}(p)e^{ip\cdot x}\right]\,.\nonumber\\
\end{eqnarray}
Since the charge-conjugation operator does not change the spin state of the particles and antiparticles, one defines an operator $C=i\gamma^2\gamma^0$, which must act on a spinor containing $\chi^\ast_{-s}$. Thus, according to (2.51), we have
\begin{eqnarray}
{\cal C}\,\bar{u}^{(s)T}(p)\,{\cal C}^{-1}&=&\upsilon^{(s)}(p)\,,\nonumber\\
&&\\
{\cal C}\,\bar{\upsilon}^{(s)T}(p)\,{\cal
C}^{-1}&=&u^{(s)}(p)\,\nonumber
\end{eqnarray}
where
\begin{equation}
{\cal C}\,\psi(x)\,{\cal
C}^{-1}\equiv\psi^c(x)=\alpha_c\,C\,\bar{\psi}^T(x)\,,
\end{equation}
and
\begin{equation}
{\cal C}\,\bar{\psi}(x)\,{\cal C}^{-1}\equiv\bar{\psi}^c(x)=\alpha_c^\ast\,\psi^T(x)C\,.
\end{equation}
\section{Bilinear covariants}
In this section we construct and classify the bilinear covariants, objects that carry no spinor indices and involve only two spinor fields. These combinations have well-defined transformation properties under the Lorentz group. The Dirac $\gamma$ matrices generate a basis of 16 linearly independent $4\times4$ matrices, formed by the following elements:
\begin{center}
\begin{tabular}{cc|c}
1&scalar&1\\
$\gamma^\mu$&vector&4\\
$\sigma^{\mu\nu}=\frac{i}{2}[\gamma^\mu,\gamma^\nu]$&tensor&6\\
$\gamma^\mu\gamma^5$&pseudovector&4\\
$\gamma^5$&pseudoscalar&1\\
\hline &TOTAL:&16
\end{tabular}
\end{center}
The Dirac theory is built from Lagrangian densities. It is thus possible to construct and classify the following bilinear covariants, of the form $\bar{\psi}\Gamma\psi$ with $\Gamma$ ranging over the 16 matrices above, according to their nature:
\begin{center}
\begin{tabular}{rcll}
$E(x)$&=&$\bar{\psi}(x)\psi(x)$&\hspace{1cm}(scalar)\\
&&&\\
$V^\mu(x)$&=&$\bar{\psi}(x)\gamma^\mu\psi(x)$&\hspace{1cm}(vector)\\
&&&\\
$P(x)$&=&$i\bar{\psi}(x)\gamma_5\psi(x)$&\hspace{1cm}(pseudoscalar)\\
&&&\\
$A^\mu(x)$&=&$\bar{\psi}(x)\gamma_5\gamma^\mu\psi(x)$&\hspace{1cm}(axial vector)\\
&&&\\
$T^{\mu\nu}(x)$&=&$\bar{\psi}(x)\sigma^{\mu\nu}\psi(x)$&\hspace{1cm}(tensor)
\end{tabular}
\end{center}
\medskip
The next section contains a table showing how these bilinear covariants behave under parity, time reversal, and charge conjugation, as well as under the combined CPT transformation.
\section{The CPT transformation}
According to (2.47), (2.54), and (2.59), the Dirac field behaves as follows
under the T, PT, and CPT transformations:
\begin{eqnarray}
T:\hspace{0.5eM}\psi^t(x)&=&{\cal T}\psi(x){\cal T}^{-1}=\alpha_t\,\gamma^1\gamma^3\psi(-\tilde{x})\\
&&\nonumber\\
PT:\hspace{0.5eM}\psi^{pt}(x)&=&{\cal P}\psi^t(x){\cal P}^{-1}=\alpha_{pt}\,\gamma^0\gamma^1\gamma^3\psi(-x)\\
&&\nonumber\\
CPT:\hspace{0.5eM}\psi^{cpt}(x)&=&{\cal C}\psi^{pt}(x){\cal
C}^{-1}=\alpha_{cpt}\,\gamma^5\gamma^0\bar{\psi}^T(-x)
\end{eqnarray}
and, analogously, for the adjoint Dirac field,
\begin{eqnarray}
T:\hspace{0.5eM}\bar{\psi}^t(x)&=&{\cal T}\bar{\psi}(x){\cal T}^{-1}=-\alpha^\ast_t\,\bar{\psi}(-\tilde{x})\gamma^1\gamma^3\\
&&\nonumber\\
PT:\hspace{0.5eM}\bar{\psi}^{pt}(x)&=&{\cal P}\bar{\psi}^t(x){\cal P}^{-1}=\alpha^\ast_{pt}\,\bar{\psi}(-x)\gamma^3\gamma^1\gamma^0\\
&&\nonumber\\
CPT:\hspace{0.5eM}\bar{\psi}^{cpt}(x)&=&{\cal
C}\bar{\psi}^{pt}(x){\cal
C}^{-1}=\alpha^\ast_{cpt}\,\psi^T(-x)\gamma^5\gamma^0
\end{eqnarray}
The following table summarizes how the bilinear covariants transform under C, P, T, and CPT.
\vspace{1cm}
\begin{center}
\begin{tabular}{||c||ccc||c||}
\hline\hline
&$C$&$P$&$T$&$CPT$\\
\hline\hline
$E(x)$&$E(x)$&$E(\tilde{x})$&$E(-\tilde{x})$&$E(-x)$\\
\hline
$V^\mu(x)$&$-V^\mu(x)$&$V_\mu(\tilde{x})$&$V_\mu(-\tilde{x})$&$-V^\mu(-x)$\\
\hline
$P(x)$&$P(x)$&$-P(\tilde{x})$&$-P(-\tilde{x})$&$P(-x)$\\
\hline
$A^\mu(x)$&$A^\mu(x)$&$-A_\mu(\tilde{x})$&$A_\mu(-\tilde{x})$&$-A^\mu(-x)$\\
\hline
$T^{\mu\nu}(x)$&$-T^{\mu\nu}(x)$&$T_{\mu\nu}(\tilde{x})$&$-T_{\mu\nu}(-\tilde{x})$&$T^{\mu\nu}(-x)$\\
\hline\hline
\end{tabular}
\end{center}
\newpage
The CPT {\it theorem} states that a relativistic field theory must be invariant under parity (spatial inversion) and time reversal followed by a charge-conjugation transformation. The theorem assumes the validity of the quantum laws and Lorentz invariance. Specifically, the CPT theorem asserts that the phenomena described by any local, Lorentz-invariant quantum field theory with a Hermitian Hamiltonian must preserve this symmetry\footnote{This theorem is carefully proved by Streater and Wightman in \cite{Str}.}. As the table readily shows, the Dirac bilinear covariants that do not preserve CPT symmetry are the vector and the axial vector. Indeed, $\bar{\psi}\gamma^\mu\psi$ is not invariant under charge conjugation, and $\bar{\psi}\gamma_5\gamma^\mu\psi$ is not invariant under parity. In the next chapter we present a model of Lorentz symmetry breaking for fermions. In Chapter 4 we investigate possible effects of Lorentz symmetry breaking in systems described by relativistic quantum mechanics.
\chapter{Presentation of the Lorentz-violating model for fermions}
\section{Introduction}
In Chapter 2 we studied the symmetry transformations of the bilinear covariants of the Dirac fermion field, namely parity, time reversal, and charge conjugation (see the table in Section 2.2.4). We saw that the Dirac bilinears that violate CPT symmetry in the fermionic (matter) sector are the \emph{vector} and the \emph{pseudovector}, or \emph{axial vector}.
\smallskip
The goal of this chapter is to present an effective model of CPT symmetry breaking. The starting point, in the context of relativistic quantum mechanics in four dimensions, is to couple background fields\footnote{A background field is understood as a field whose sources are not accessible.} to the bilinear covariants.
If the conventional QED Lagrangian is given by
\begin{equation}
{\cal L}_{QED}=\bar{\psi}(i\!{\not\!\partial}-e\!{\not\!\!A}-m)\psi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\,,
\end{equation}
we must add to it the following Lagrangians
\begin{multline}
{\cal
L}_{Fermion}=-\bar{\psi}\!{\not\!a\psi}-\bar{\psi}\!{\not\!b\gamma_5\psi}-\frac{1}{2}H_{\mu\nu}\bar{\psi}\sigma^{\mu\nu}\psi+ic_{\mu\nu}\bar{\psi}\gamma^\mu
D^\nu\psi+id_{\mu\nu}\bar{\psi}\gamma^\mu\gamma_5D^\nu\psi\,,\\
\\
{\cal
L}_{Photon}=\frac{1}{2}\varepsilon^{\mu\nu\rho\sigma}\eta_\mu A_\nu\partial_\rho A_\sigma+\frac{1}{4}\eta^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}\,, \hspace{3cm}
\end{multline}
where $iD_\mu=i\partial_\mu+eA_\mu$ is the covariant derivative.
\smallskip
The introduction of these terms in the Lagrangian modifies the structure of the Dirac equation. The corrections appear in the Dirac matrices and in the fermion mass:
\begin{eqnarray}
(i\Gamma^\mu D_\mu-M)\psi=0\,,
\end{eqnarray}
where
$\Gamma_\nu=\gamma_\nu+c_{\mu\nu}\gamma^\mu+d_{\mu\nu}\gamma^\mu\gamma_5$
and $M=m+\not\!a+\not\!b\gamma_5+\frac{1}{2}H_{\mu\nu}\sigma^{\mu\nu}$. Here $c_{\mu\nu}$, $d_{\mu\nu}$, $a_\mu$, $b_\mu$, and $H_{\mu\nu}$ are background fields specified in a chosen frame.
\smallskip
Each of the additional terms in the Lagrangian contains a constant parameter of very small magnitude\footnote{These constants are very small compared with the fermion masses, since Lorentz-violating effects, if they exist, are very small and have not yet been observed experimentally.}. These parameters control the measures of CPT and Lorentz violation in experiments. All five monomials in the additional Lagrangian (3.2) violate Lorentz symmetry. Since only the monomials containing $a_\mu$ and $b_\mu$ violate CPT symmetry, they alone are considered in this dissertation; the terms $H_{\mu\nu}$, $c_{\mu\nu}$, and $d_{\mu\nu}$, which preserve CPT, are not studied here.
\section{The modified Dirac equation and fermion propagator}
The Lagrangian for fermions of mass $m$ including the $CPT$-breaking terms is:
\begin{equation}
{\cal
L}=\bar{\psi}(\not\!p-\not\!a -\not\!b\gamma_5-m)\psi\,,
\end{equation}
where $a^\mu=(a_0,{\mbox{\boldmath{$a$}}})$ and $b^\mu=(b_0,{\bf b})$ are constant four-vectors.
\smallskip
We look for plane-wave solutions, $\psi^{(\alpha)}=N_u^{(\alpha)}\,u^{(\alpha)}(p)e^{-ip\cdot x}$
($\alpha=1,2$), where $u^{(\alpha)}(p)$ is a modified four-component Dirac spinor and $N_u^{(\alpha)}$ is a normalization constant to be determined.
\smallskip
From equation (3.4) we obtain the modified Dirac equation
\begin{equation}
(\not\!p-\not\!a -\not\!b\gamma_5-m)\psi=0\,.
\end{equation}
Applying the {\it ansatz} for $\psi$ and multiplying the equation above from the left by $(\not\!p-\not\!a -\not\!b\gamma_5+m)$, we obtain the following relation:
\begin{equation}
\left\{(p-a)^2-b^2-m^2-[\not\! p-\not\!a,\not
b]\gamma_5\right\}u(p)=0\,.
\end{equation}
The expression above is not yet diagonal, since it still contains non-diagonal matrices. To obtain the algebraic equation for this model, we multiply it from the left by $(p-a)^2-b^2-m^2+[\not\! p-\not\!a,\not b]\gamma_5$:
\begin{equation}
\{[(p-a)^2-b^2-m^2]^2-\left([\not\! p-\not\!a,\not b]\gamma_5\right)^2\}u(p)=0\,.
\end{equation}
Let us work out the squared commutator above, writing $\not\!k=\not\!p-\not\!a$ for the moment:
\begin{eqnarray}
\left([\not\! p-\not\!a,\not b]\gamma_5\right)^2&=&(\not\!k\not\!b\,-\not\!b\not\!k)\gamma_5(\not\!k\not\!b\,-\not\!b\not\!k)\gamma_5\nonumber\\
&=&\not\!k\not\!b\not\!k\not\!b\,-\not\!k\not\!b\not\!b\not\!k\,-\not\!b\not\!k\not\!k\not\!b\,+\not\!b\not\!k\not\!b\not\!k\nonumber\\
&=&\not\!k\not\!b\left[-\not\!b\not\!k+2(k\cdot b)\right]-2k^2b^2+\not\!b\not\!k\left[-\not\!k\not\!b+2(k\cdot b)\right]\nonumber\\
&=&-4k^2b^2+2(k\cdot b)\not\!k\not\!b+2(k\cdot b)\not\!b\not\!k\nonumber\\
&=&-4k^2b^2+2(k\cdot b)\left[-\not\!b\not\!k+2(k\cdot b)\right]\,+2(k\cdot b)\not\!b\not\!k\nonumber\\
&=&-4k^2b^2+4(k\cdot b)^2\,,
\end{eqnarray}
where the identity $\not\!c\not\!d=-\not\!d\not\!c+2(c\cdot d)$ was used.
\smallskip
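As a cross-check, the identity (3.8) can be verified numerically with explicit Dirac matrices. The short Python sketch below (an illustration, not part of the dissertation's formalism) builds $\gamma^\mu$ in the standard representation and confirms that $([\not k,\not b]\gamma_5)^2=\left[-4k^2b^2+4(k\cdot b)^2\right]\mathbb{1}$ for arbitrary four-vectors:

```python
import numpy as np

# Dirac matrices in the standard representation, metric (+,-,-,-)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gi = [np.block([[Z2, s], [-s, Z2]]) for s in sig]
g5 = np.block([[Z2, I2], [I2, Z2]])  # overall sign of g5 is irrelevant here (it appears squared)

def slash(v):
    """v_mu gamma^mu for a four-vector v = (v0, v1, v2, v3)."""
    return v[0] * g0 - sum(v[i + 1] * gi[i] for i in range(3))

def mdot(u, v):
    """Minkowski product u.v with signature (+,-,-,-)."""
    return u[0] * v[0] - np.dot(u[1:], v[1:])

rng = np.random.default_rng(42)
k, b = rng.normal(size=4), rng.normal(size=4)

C = (slash(k) @ slash(b) - slash(b) @ slash(k)) @ g5
lhs = C @ C
rhs = (-4 * mdot(k, k) * mdot(b, b) + 4 * mdot(k, b) ** 2) * np.eye(4)
assert np.allclose(lhs, rhs)
```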
Therefore, the dispersion relation for the model is
\begin{equation}
[(p-a)^2-b^2-m^2]^2-4[b\cdot(p-a)]^2+4b^2(p-a)^2=0\,.
\end{equation}
This dispersion relation is quartic in the variable $p^0({\bf p})$. It has two positive roots $E_u^{(\alpha)}$ and two negative ones $E_\upsilon^{(\alpha)}$, with $\alpha=1,2$.
\smallskip
Equation (3.9) is easily solved when $b^\mu$ is purely timelike or purely spacelike. For the case $b^\mu=(b_0,{\bf 0})$, the dispersion relation gives
\begin{eqnarray}
E_u^{(\alpha)}&=&\sqrt{(|{\bf p}-{\mbox{\boldmath{$a$}}}|+(-1)^{\alpha}b_0)^2+m^2}+a_0\,,\nonumber\\
&&\\
E_\upsilon^{(\alpha)}&=&\sqrt{(|{\bf
p}+{\mbox{\boldmath{$a$}}}|-(-1)^{\alpha}b_0)^2+m^2}-a_0\,,\nonumber
\end{eqnarray}
where $E_{u,\upsilon}^{(\alpha)}$ denote the energies of the particles and antiparticles, respectively. Note that the term $a_\mu$ merely redefines the zeros of the energy and momentum of the particles, since $E\,\,\to\,\,E-a_0$ and ${\mbox{\boldmath{$p$}}}\,\,\to\,\,{\mbox{\boldmath{$p$}}}-{\mbox{\boldmath{$a$}}}$; that is, under charge conjugation $a_\mu\bar{\psi}\gamma^\mu\psi\to-a_\mu\bar{\psi}\gamma^\mu\psi$.
\smallskip
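As a sanity check, the energies (3.10) can be substituted back into the quartic (3.9). The Python sketch below (with arbitrarily chosen illustrative values) confirms that both roots $E_u^{(\alpha)}$ make the left-hand side of (3.9) vanish for purely timelike $b^\mu$:

```python
import numpy as np

def mdot(u, v):
    """Minkowski product u.v with signature (+,-,-,-)."""
    return u[0] * v[0] - np.dot(u[1:], v[1:])

def dispersion(p, a, b, m):
    """Left-hand side of the quartic dispersion relation (3.9)."""
    pa = p - a
    X = mdot(pa, pa) - mdot(b, b) - m ** 2
    return X ** 2 - 4 * mdot(b, pa) ** 2 + 4 * mdot(b, b) * mdot(pa, pa)

m, a0, b0 = 1.0, 0.3, 0.2                 # arbitrary illustrative values
a = np.array([a0, 0.1, -0.2, 0.4])
p_vec = np.array([0.5, 1.0, -0.7])
b = np.array([b0, 0.0, 0.0, 0.0])         # purely timelike b
q = np.linalg.norm(p_vec - a[1:])         # |p - a|

residuals = []
for alpha in (1, 2):
    E = np.sqrt((q + (-1) ** alpha * b0) ** 2 + m ** 2) + a0   # eq. (3.10)
    residuals.append(dispersion(np.array([E, *p_vec]), a, b, m))
assert all(abs(r) < 1e-12 for r in residuals)
```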
For the case $b^\mu=(0,{\bf b})$, the solutions are
\begin{eqnarray}
E_u^{(\alpha)}&=&\sqrt{({\mbox{\boldmath{$p$}}}-{\mbox{\boldmath{$a$}}})^2+m^2+{\mbox{\boldmath{$b$}}}^2+(-1)^{\alpha}2\sqrt{[{\mbox{\boldmath{$b$}}}\cdot({\mbox{\boldmath{$p$}}}-{\mbox{\boldmath{$a$}}})]^2+m^2{\mbox{\boldmath{$b$}}}^2}}\,+a_0\,,\nonumber\\
&&\\
E_\upsilon^{(\alpha)}&=&\sqrt{({\mbox{\boldmath{$p$}}}+{\mbox{\boldmath{$a$}}})^2+m^2+{\mbox{\boldmath{$b$}}}^2-(-1)^{\alpha}2\sqrt{[{\mbox{\boldmath{$b$}}}\cdot({\mbox{\boldmath{$p$}}}+{\mbox{\boldmath{$a$}}})]^2+m^2{\mbox{\boldmath{$b$}}}^2}}\,-a_0\,.\nonumber
\end{eqnarray}
Multiplying equation (3.5) from the left by $\gamma^0$, the equation of motion can be written in Hamiltonian form, $i\dfrac{\partial\psi}{\partial t}=H\psi$. We then have\footnote{$\displaystyle {\bf \Sigma}={\mbox{\boldmath{$
\alpha$}}}\gamma^5=\left(\begin{array}{cc}{\mbox{\boldmath{$
\sigma$}}}&0\\0&{\mbox{\boldmath{$ \sigma$}}}\end{array}\right)\,.$}
\begin{eqnarray}
H={\mbox{\boldmath{$\alpha$}}}\cdot({\bf
p}-{\mbox{\boldmath{$a$}}})+m\gamma^0+a_0+\gamma_5b_0+{\bf
\Sigma}\cdot{\bf b}\,.
\end{eqnarray}
Let us construct the spinors for the purely timelike case of $b^\mu$. The Hamiltonian is then:
\begin{equation}
H={\mbox{\boldmath{$\alpha$}}}\cdot(\mathbf{p}-{\mbox{\boldmath{$a$}}})+m\gamma^0+a_0+b_0\gamma_5\,.
\end{equation}
In the standard representation of the Dirac $\gamma$ matrices, as usual, we obtain the spinors
\begin{equation}
u^{(\alpha)}(p)=N_u^{(\alpha)}\,\left(\begin{array}{c}\chi^{(\alpha)}\\\xi_u^{(\alpha)}\,\chi^{(\alpha)}\end{array}\right)
\end{equation}
for the positive-energy states and, for the negative-energy states,
\begin{equation}
\upsilon^{(\alpha)}(p)=N_\upsilon^{(\alpha)}\,\left(\begin{array}{c}\xi_\upsilon^{(\alpha)}\,\eta^{(\alpha)}\\\eta^{(\alpha)}\end{array}\right)\,,
\end{equation}
where
\begin{equation}
\xi_u^{(\alpha)}=\frac{{\mbox{\boldmath{$\sigma$}}}\cdot({\mbox{\boldmath{$p$}}}-{\mbox{\boldmath{$a$}}})-b_0}{E_u^{(\alpha)}-a_0+m}\,.
\end{equation}
The solution for $\xi_\upsilon^{(\alpha)}$ is obtained from the expression above through the replacements $a_\mu\to-a_\mu$, $b_\mu\to-b_\mu$, and
$E_u^{(\alpha)}\to E_\upsilon^{(\alpha)}$.
\smallskip
The spinor (3.14) can be normalized, as in Chapter 2, by choosing the same normalization condition as in the conventional theory:
\begin{equation}
\bar{u}^{(\alpha)}(p)u^{(\alpha')}(p)=\delta^{\alpha\alpha'}\,.
\end{equation}
Using the definition $\bar{u}=u^\dagger\gamma^0$ and the positive eigenenergy (3.10) of the particle, we find the normalization constant $N_u^{(\alpha)}$:
\begin{equation}
N_u^{(\alpha)}=\sqrt{\frac{E_u^{(\alpha)}-a_0+m}{2m}}\,.
\end{equation}
If the two-component spinor $\chi^{(\alpha)}$ is chosen to be an eigenvector of the operator ${\mbox{\boldmath{$\sigma$}}}\cdot\dfrac{({\mbox{\boldmath{$p$}}}-{\mbox{\boldmath{$a$}}})}{|{\mbox{\boldmath{$p$}}}-{\mbox{\boldmath{$a$}}}|}$ with eigenvalue $-(-1)^\alpha$, the normalized modified Dirac spinor becomes
\begin{equation}
u^{(\alpha)}(p)=\sqrt{\frac{E_u^{(\alpha)}-a_0+m}{2m}}\left(\begin{array}{c}\chi^{(\alpha)}\\\displaystyle{\frac{-(-1)^{(\alpha)}|{\mbox{\boldmath{$p$}}}-{\mbox{\boldmath{$a$}}}|-b_0}{E_u^{(\alpha)}-a_0+m}}\,\chi^{(\alpha)}\end{array}\right)\,.
\end{equation}
Proceeding analogously, we obtain the spinor for the antifermions:
\begin{equation}
\upsilon^{(\alpha)}(p)=\sqrt{\frac{E_\upsilon^{(\alpha)}+a_0+m}{2m}}\left(\begin{array}{c}\displaystyle{\frac{-(-1)^{(\alpha)}|{\mbox{\boldmath{$p$}}}+{\mbox{\boldmath{$a$}}}|+b_0}{E_\upsilon^{(\alpha)}+a_0+m}}\,\chi^{(\alpha)}\\\chi^{(\alpha)}\end{array}\right)\,.
\end{equation}
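The normalization can be verified directly: for the spinor (3.19), $\bar u^{(\alpha)}u^{(\alpha)}=N^2(1-\xi^2)$, which should equal 1 by (3.17). A minimal numerical sketch (illustrative values only):

```python
import numpy as np

m, a0, b0, q = 1.0, 0.3, 0.2, 0.8          # q stands for |p - a| (illustrative values)
checks = []
for alpha in (1, 2):
    E = np.sqrt((q + (-1) ** alpha * b0) ** 2 + m ** 2) + a0   # eq. (3.10)
    N = np.sqrt((E - a0 + m) / (2 * m))                        # eq. (3.18)
    xi = (-(-1) ** alpha * q - b0) / (E - a0 + m)              # lower-component factor in (3.19)
    checks.append(N ** 2 * (1 - xi ** 2))                      # ubar u for the spinor (3.19)
assert np.allclose(checks, 1.0)
```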
We can still obtain another element of great importance in the extended QED theory: the {\it modified fermion propagator}. The chosen Feynman propagator must satisfy
\begin{equation}
(i\!\not\!\partial-\not\!a-\not\!b\gamma_5-m)S_{a,b}(x-y)=i\delta^{(4)}(x-y)\,,
\end{equation}
or, written in Fourier space,
\begin{equation}
\int\frac{d^4p}{(2\pi)^4}(\not\!p-\not\!a
-\not\!b\gamma_5-m)e^{-ip\cdot(x-y)}S_{a,b}(p)=i\delta^{(4)}(x-y)
\end{equation}
which, through the Fourier representation of the Dirac delta, yields
\begin{equation}
S_{a,b}(p)=\frac{i}{\not\!p-\not\!a -\not\!b\gamma_5-m}\,.
\end{equation}
This propagator can be rewritten by rationalizing the inverse:
\begin{eqnarray}
S_{a,b}(p)&=&\frac{i}{\not\!p-\not\!a -\not\!b\gamma_5-m}=\frac{i(\not\!p-\not\!a -\not\!b\gamma_5+m)}{(\not\!p-\not\!a -\not\!b\gamma_5-m)(\not\!p-\not\!a -\not\!b\gamma_5+m)}\nonumber\\
&&\nonumber\\
&=& \frac{i(\not\!p-\not\!a -\not\!b\gamma_5+m)} { \left\{(p-a)^2-b^2-m^2-[\not\! p-\not\!a,\not
b]\gamma_5\right\}}\,,
\end{eqnarray}
where (3.6) was used. Using also (3.9), it follows that
\begin{equation}
S_{a,b}(p)=\frac{i(\not\!p-\not\!a -\not\!b\gamma_5+m)\{ (p-a)^2-b^2-m^2+[\not\! p-\not\!a,\not
b]\gamma_5 \}}{[(p-a)^2-b^2-m^2]^2-4[(p-a)\cdot b]^2+4(p-a)^2b^2}\,.
\end{equation}
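The closed form (3.25) can be checked against a direct matrix inversion of (3.23). The Python sketch below (illustrative, with arbitrary off-shell momenta so the inverse exists) confirms that the two agree:

```python
import numpy as np

# Dirac matrices in the standard representation, metric (+,-,-,-)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gi = [np.block([[Z2, s], [-s, Z2]]) for s in sig]
g5 = np.block([[Z2, I2], [I2, Z2]])
I4 = np.eye(4)

def slash(v):
    return v[0] * g0 - sum(v[i + 1] * gi[i] for i in range(3))

def mdot(u, v):
    return u[0] * v[0] - np.dot(u[1:], v[1:])

m = 1.0
p = np.array([1.7, 0.3, -0.5, 0.2])        # off-shell momentum
a = np.array([0.10, 0.05, 0.00, -0.02])
b = np.array([0.04, 0.01, -0.03, 0.02])
k = p - a

# direct inversion of eq. (3.23)
S_direct = 1j * np.linalg.inv(slash(k) - slash(b) @ g5 - m * I4)

# closed form, eq. (3.25)
X = mdot(k, k) - mdot(b, b) - m ** 2
comm = slash(k) @ slash(b) - slash(b) @ slash(k)
den = X ** 2 - 4 * mdot(k, b) ** 2 + 4 * mdot(k, k) * mdot(b, b)
S_closed = 1j * (slash(k) - slash(b) @ g5 + m * I4) @ (X * I4 + comm @ g5) / den

assert np.allclose(S_direct, S_closed)
```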
Therefore, the breaking of Lorentz symmetry modifies the dispersion relations, the eigenenergies, and the spinors of the conventional Dirac theory, besides generating a perturbation of the free Dirac Hamiltonian. As for the propagator, the modification leaves it in a more complicated form than the usual one (2.44).
\chapter{Implications in Quantum Mechanics}
\section{The Landau problem and Lorentz symmetry breaking}
In this chapter we investigate possible effects that Lorentz symmetry breaking may cause in quantum systems of electrons interacting with an electromagnetic field. We begin with a system of electrons interacting with an intense, constant magnetic field, a setup known as the {\it Landau problem}.
\smallskip
For relativistic electrons, the unperturbed Hamiltonian of the problem is
\begin{eqnarray}
H_0={\mbox{\boldmath{$\alpha$}}}\cdot({\bf p}-e{\bf A})+m\gamma^0\,.
\end{eqnarray}
The perturbation is the interaction Hamiltonian
\begin{eqnarray}
H_{int}=a_0-{\mbox{\boldmath{$\alpha$}}}\cdot{\mbox{\boldmath{$a$}}}+b_0\gamma_5+{\mbox{\boldmath{$\Sigma$}}}\cdot{\mbox{\boldmath{$b$}}}\,.
\end{eqnarray}
The exact eigenenergies and eigenstates of the Landau problem are computed in Appendix B from the unperturbed Hamiltonian (4.1) and serve as the basis for the perturbative calculations.
\smallskip
To lowest order of approximation, the corrections to the electron energies, computed with the eigenfunctions (B.10-12), are given by
\begin{eqnarray}
\Delta E^-_{n,s}&=&\langle\psi^-_{n,s}|H^-_{int}|\psi^-_{n,s}\rangle=\int\psi^{-\dagger}_{n,s}(x)(H^-_{int})\psi^-_{n,s}(x)dx\nonumber\\
&&\nonumber\\
&=&{\cal A}_0(x)+{\cal A}_z(x)+{\cal B}_0(x)+{\cal
B}_z(x)\,,\nonumber
\end{eqnarray}
where
\begin{eqnarray*}
{\cal A}_0(x)&=&\int\psi^{-\dagger}_{n,s}(x)(a_0)\psi^-_{n,s}(x)dx=a_0\int\psi^{-\dagger}_{n,s}(x)\psi^-_{n,s}(x)dx\nonumber\\
&&\nonumber\\
&=&a_0\nonumber\,,
\end{eqnarray*}
\begin{eqnarray*}
{\cal A}_z(x)&=&\langle\psi^-_{n,s}|(-{\mbox{\boldmath{$\alpha$}}}\cdot{\mbox{\boldmath{$a$}}})|\psi^-_{n,s}\rangle\nonumber\\
&&\nonumber\\
&=&\int\psi^{-\dagger}_{n,s}(x)(-a_z\alpha_z)\psi^-_{n,s}(x)dx\nonumber\\
&&\nonumber\\
&=&-\frac{a_z}{2^nn!\sqrt{\pi}}\frac{E_{n,s}^-+m}{2E_{n,s}^-}\left(\begin{array}{cc}1,&\frac{\sigma_zp_z}{E_{n,s}^-+m}\end{array}\right)\left(\begin{array}{cc}0&\sigma_z\\\sigma_z&0\end{array}\right)\left(\begin{array}{c}1\\\frac{\sigma_zp_z}{E_{n,s}^-+m}\end{array}\right)\int\limits_{-\infty}^{+\infty}d\xi\,e^{-\xi^2}[H_n(\xi)]^2\nonumber\\
&&\nonumber\\
&=&-a_z\frac{E_{n,s}^-+m}{2E_{n,s}^-}\left(\frac{p_z}{E_{n,s}^-+m}+\frac{p_z}{E_{n,s}^-+m}\right)\nonumber\\
&&\nonumber\\
&=&-a_z\frac{p_z}{E_{n,s}^-}\,,\nonumber
\end{eqnarray*}
\begin{eqnarray*}
{\cal B}_0(x)&=&\int\psi^{-\dagger}_{n,s}(x)(b_0\gamma_5)\psi^-_{n,s}(x)dx\nonumber\\
&&\nonumber\\
&=&\frac{b_0}{2^nn!\sqrt{\pi}}\frac{E_{n,s}^-+m}{2E_{n,s}^-}\left(\begin{array}{cc}1,&\frac{\sigma_zp_z}{E_{n,s}^-+m}\end{array}\right)\left(\begin{array}{cc}0&-1\\-1&0\end{array}\right)\left(\begin{array}{c}1\\\frac{\sigma_zp_z}{E_{n,s}^-+m}\end{array}\right)\int\limits_{-\infty}^{+\infty}d\xi\,e^{-\xi^2}[H_n(\xi)]^2\nonumber\\
&&\nonumber\\
&=&-b_0\frac{E_{n,s}^-+m}{2E_{n,s}^-}\left(\frac{sp_z}{E_{n,s}^-+m}+\frac{sp_z}{E_{n,s}^-+m}\right)\nonumber\\
&&\nonumber\\
&=&-s\frac{p_z}{E_{n,s}^-}b_0\,\nonumber
\end{eqnarray*}
\begin{eqnarray*}
{\cal B}_z(x)&=&\int\psi^{-\dagger}_{n,s}(x)(b_z\Sigma_z)\psi^-_{n,s}(x)dx\nonumber\\
&&\nonumber\\
&=&\frac{b_z}{2^nn!\sqrt{\pi}}\frac{E_{n,s}^-+m}{2E_{n,s}^-}\left(\begin{array}{cc}1,&\frac{\sigma_zp_z}{E_{n,s}^-+m}\end{array}\right)\left(\begin{array}{cc}\sigma_z&0\\0&\sigma_z\end{array}\right)\left(\begin{array}{c}1\\\frac{\sigma_zp_z}{E_{n,s}^-+m}\end{array}\right)\int\limits_{-\infty}^{+\infty}d\xi\,e^{-\xi^2}[H_n(\xi)]^2\nonumber\\
&&\nonumber\\
&=&b_z\frac{E_{n,s}^-+m}{2E_{n,s}^-}\left[s+\frac{sp_z^2}{(E_{n,s}^-+m)^2}\right]\nonumber\\
&&\nonumber\\
&=&sb_z\left[1-\frac{|e|B_0(2n+1-s)}{2E_{n,s}^-(E_{n,s}^-+m)}\right]\,.\nonumber
\end{eqnarray*}
Thus, adding all the contributions, we obtain \cite{Rus}
\begin{eqnarray}
\Delta
E^-_{n,s}=a_0-a_z\frac{p_z}{E^-_{n,s}}-sb_0\frac{p_z}{E^-_{n,s}}+sb_z\left[1-\frac{|eB_0|(2n+1-s)}{2E^-_{n,s}(E^-_{n,s}+m)}\right]\,.
\end{eqnarray}
At first sight this result suggests that the fields $a_\mu$ and $b_\mu$ are observable quantities. However, the result must be analyzed in more detail with regard to the magnitude of its components. Following the arguments of Kosteleck\'y and collaborators \cite{Rus}, the terms proportional to the magnetic field $B_0$ can be neglected since, for $B_0\simeq5\,T$, the ratio $|eB_0|/m^2\simeq10^{-9}$ is very small. In the axial confinement of this experiment, the axial momentum is the Landau momentum $p_z$, an effective momentum along the $z$ axis. Since the axial frequency is much smaller than the cyclotron frequency, the term $p_z/E^-_{n,s}$ can also be dropped. In this context, the dominant correction to the Landau energies is simply
\begin{equation}
\Delta E^-_{n,s}\approx a_0+s\,b_z\,.
\end{equation}
One proposal for detecting Lorentz symmetry breaking consists in comparing the energy-level shifts\footnote{According to reference \cite{Ahi}, these energy transitions, with and without spin flip, are called the {\it anomaly} and {\it cyclotron} frequencies.}, {\it with} and {\it without} spin flip, of typical Landau electrons and positrons. These frequencies are measured in experiments known as \emph{Penning traps}\footnote{This trap was conceived by F.M. Penning; its practical use earned Hans Georg Dehmelt the Nobel Prize in Physics in 1989.}. Such traps are devices that store charged particles using a constant magnetic field and a spatially inhomogeneous static electric field.
Taking $\hbar=1$ and the eigenenergies (B.6), the unperturbed transition frequencies for the electron without and with spin flip are given by
\begin{equation}
\omega^-=E^{-}_{1,-1}-E^{-}_{0,-1}\hspace{1eM},\hspace{2eM}\bar{\omega}^-=E^{-}_{0,+1}-E^{-}_{1,-1}\,.
\end{equation}
The $CPT$ theorem states that both electron frequencies above must equal the corresponding positron ones. However, the {\it corrected} frequencies $\omega(\bar{\omega})^{\mp(CPT)}$ for electrons and positrons, according to the Lorentz-violating theory, are
\begin{equation}
\omega^{-(CPT)}\approx\omega^{+(CPT)}\approx\omega\hspace{1eM},\hspace{2eM}\bar{\omega}^{\mp(CPT)}\approx\bar{\omega}\pm
2b_z\,.
\end{equation}
Note that the corrected frequencies above {\it do not depend} on $a_\mu$. This happens because, as already discussed, this field merely redefines the zeros of the energy and momentum. The dominant term in the CPT-violating theory depends on the field $b_\mu$ and comes from the difference between the frequencies without and with spin flip:
\begin{equation}
\Delta\omega\equiv\omega^{-(CPT)}-\omega^{+(CPT)}\approx0\hspace{1eM},\hspace{2eM}\Delta\bar{\omega}\equiv\bar{\omega}^{-(CPT)}-\bar{\omega}^{+(CPT)}\approx+4b_z\,.
\end{equation}
Therefore, the Penning-trap experiment discussed here is sensitive only to the spatial part of the vector ${\mathbf b}$ along the magnetic field direction.
\section{Non-relativistic expansion of the modified Hamiltonian}
Consider the Hamiltonian (3.12) of a particle subject to an electromagnetic field (now including the Coulomb field) together with the CPT-breaking terms:
\begin{eqnarray}
H=m\gamma^0+{\mbox{\boldmath{$\alpha$}}}\cdot({\bf
p}-{\mbox{\boldmath{$a$}}}-e{\bf A})+eA_0+a_0+{\bf \Sigma}\cdot{\bf
b}+\gamma_5b_0.
\end{eqnarray}
We perform the expansion using the Foldy-Wouthuysen (FW) method\footnote{See Appendix C.} \cite{Fol}. This method rewrites the Hamiltonian above as a sum of two parts. Since the matrices $\gamma^5$ and ${\bf \Sigma}$ are, respectively, \textit{non-diagonal} and \textit{diagonal} in the adopted standard representation, the Hamiltonian can be written in terms of even and odd operators:
\begin{eqnarray}
H=m\gamma^0+{\cal P}+{\cal I},
\end{eqnarray}
where
\begin{eqnarray}
{\cal P}&=&eA_0+a_0+{\bf \Sigma}\cdot{\bf b}\hspace{8em}\mbox{(even operator),}\\
{\cal I}&=&{\mbox{\boldmath{$ \alpha$}}}\cdot({\bf
p}-{\mbox{\boldmath{$a$}}}-e{\bf
A})+\gamma_5b_0\hspace{4em}\mbox{(odd operator).}
\end{eqnarray}
Consider the \textit{ansatz} (see Appendix C):
\begin{equation}
S=-\frac{i}{2m}\gamma^0{\cal I}\,.
\end{equation}
Let us expand the Hamiltonian in the FW representation as a power series in $1/m$, using the \textit{Baker-Campbell-Hausdorff} formula:
\begin{equation}
e^{iS}He^{-iS}=H+i[S,H]+\frac{i^2}{2!}[S,[S,H]]+\frac{i^3}{3!}[S,[S,[S,H]]]+...
\end{equation}
Equivalently,
\begin{equation}
H'=H+i[S,H]-\frac{1}{2}[S,[S,H]]-\frac{i}{6}[S,[S,[S,H]]]+\frac{1}{24}[S,[S,[S,[S,H]]]]...
\end{equation}
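The truncated series (4.15) can be tested numerically: for a small Hermitian generator $S$, the exact similarity transformation $e^{iS}He^{-iS}$ should agree with the expansion up to the retained order. A brief illustrative sketch (the matrices here are random stand-ins, not the physical ${\cal P}$ and ${\cal I}$):

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential via its power series (adequate for small ||A||)."""
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

rng = np.random.default_rng(7)
def hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

H = hermitian(4)
S = 1e-2 * hermitian(4)            # small generator: neglected terms are O(S^5)
c = lambda A, B: A @ B - B @ A     # commutator

exact = expm(1j * S) @ H @ expm(-1j * S)
series = (H + 1j * c(S, H) - c(S, c(S, H)) / 2
            - 1j * c(S, c(S, c(S, H))) / 6
            + c(S, c(S, c(S, c(S, H)))) / 24)
assert np.allclose(exact, series, atol=1e-6)
```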
Using the (anti)commutation relations
\begin{equation}
\{\gamma^0,{\cal
I}\}=0\hspace{2em}\mbox{e}\hspace{2em}[\gamma^0,{\cal P}]=0\,,
\end{equation}
\noindent we can compute some commutation relations involving $S$ and $H$:
\begin{eqnarray}
i[S,H]&=&-{\cal I}+\frac{\gamma^0{\cal
I}^2}{m}+\frac{\gamma^0}{2m}[{\cal I},{\cal P}]\,,\\
\frac{i^2}{2}[S,[S,H]]&=&-\frac{1}{2m}\gamma^0{\cal
I}^2-\frac{1}{2m^2}{\cal I}^3-\frac{1}{8m^2}[{\cal
I},[{\cal I},{\cal P}]]\,,\\
\frac{i^3}{3!}[S,[S,[S,H]]]&=&\frac{1}{6m^2}{\cal
I}^3-\frac{1}{6m^3}\gamma^0{\cal
I}^4-\frac{1}{48m^3}\gamma^0[{\cal I},[{\cal I},[{\cal
I},{\cal P}]]]\,,\\
\frac{i^4}{4!}[S,[S,[S,[S,H]]]]&\simeq&\frac{1}{24m^3}\gamma^0{\cal
I}^4\,.
\end{eqnarray}
The last term was obtained by induction. Adding all the terms of (4.16-19), we obtain:
\begin{eqnarray}
H&=&\gamma^0\left(m+\frac{1}{2m}\,{\cal I}^2-\frac{1}{8m^3}\,{\cal I}^4\right)+{\cal P}-\frac{1}{8m^2}[{\cal I},[{\cal I},{\cal P}]]\nonumber\\
&+&\frac{1}{2m}\,\gamma^0[{\cal I},{\cal P}]-\frac{1}{3m^2}\,{\cal
I}^3-\frac{1}{48m^3}\,\gamma^0[{\cal I},[{\cal I},[{\cal I},{\cal
P}]]]\,,\nonumber
\end{eqnarray}
The Hamiltonian is then rewritten keeping terms up to order $1/m^3$:
\begin{eqnarray}
H'''=\gamma^0\left(m+\frac{1}{2m}\,{\cal I}^2-\frac{1}{8m^3}\,{\cal
I}^4\right)+{\cal P}-\frac{1}{8m^2}\,[{\cal I},[{\cal I},{\cal
P}]]\,.
\end{eqnarray}
Let us return to the odd operator in (4.11). Using the formula
\begin{equation}
({\mbox{\boldmath{$ \alpha$}}}\cdot\textbf{A})({\mbox{\boldmath{$
\alpha$}}}\cdot\textbf{B})=\textbf{A}\cdot\textbf{B}+i{\bf
\Sigma}\cdot(\textbf{A}\times\textbf{B})\,,
\end{equation}
we find that
\begin{eqnarray*}
{\cal I}^2&=&[{\mbox{\boldmath{$ \alpha$}}}\cdot(\textbf{p}-{\mbox{\boldmath{$a$}}}-e\textbf{A})+\gamma^5b_0]^2\\
&&\\
&=&(\textbf{p}-{\mbox{\boldmath{$a$}}}-e\textbf{A})^2-ie{\bf
\Sigma}\cdot(\textbf{p}\times\textbf{A}+\textbf{A}\times\textbf{p})+\{{\mbox{\boldmath{$
\alpha$}}},\gamma_5\}\cdot(\textbf{p}-{\mbox{\boldmath{$a$}}}-e\textbf{A})b_0+b_0^2\,.
\end{eqnarray*}
The anticommutator above is given by
\begin{eqnarray*}
\{{\mbox{\boldmath{$ \alpha$}}},\gamma_5\}=-2{\bf \Sigma}\,.
\end{eqnarray*}
Then, with $\textbf{p}=-i\nabla$ and $\textbf{P}=\textbf{p}-{\mbox{\boldmath{$a$}}}-e\textbf{A}$, we have
\begin{equation}
\frac{1}{2m}\,{\cal I}^2=\frac{1}{2m}[\textbf{P}-{\bf
\Sigma}b_0]^2-\frac{e}{2m}\,{\bf \Sigma}\cdot\textbf{B}\,.
\end{equation}
Now we compute the commutator
\begin{equation}
[{\cal I},{\cal P}]=ie\,{\mbox{\boldmath{$
\alpha$}}}\cdot\textbf{E}-2({\mbox{\boldmath{$ \alpha$}}}\cdot{\bf
\Sigma})(\textbf{P}\cdot\textbf{b})\,.
\end{equation}
In the same way, we obtain the other commutator:
\begin{eqnarray*}
[{\cal I},[{\cal I},{\cal P}]]&=&e\,\nabla\cdot\textbf{E}+i{\bf
\Sigma}\cdot(\nabla\times\textbf{E})+2{\bf
\Sigma}\cdot(\textbf{E}\times\textbf{p})-4({\bf
\Sigma}\cdot\textbf{P})(\textbf{P}\cdot\textbf{b})\,.
\end{eqnarray*}
Thus,
\begin{eqnarray}
\frac{1}{8m^2}\,[{\cal I},[{\cal I},{\cal P}]]&=&\frac{1}{8m^2}\left[e\,\nabla\cdot\textbf{E}+i{\bf \Sigma}\cdot(\nabla\times\textbf{E})+2{\bf \Sigma}\cdot(\textbf{E}\times\textbf{p})\right]\nonumber\\
&&\nonumber\\
&-&\frac{1}{2m^2}\,({\bf
\Sigma}\cdot\textbf{P})(\textbf{P}\cdot\textbf{b})\,.
\end{eqnarray}
Then, collecting all the contributions, we have:
\begin{eqnarray}
H'''&=&\gamma^0\left[m+\frac{1}{2m}(\textbf{P}-{\bf \Sigma}b_0)^2-\frac{1}{8m^3}\textbf{P}^4\right]\nonumber\\
&&\nonumber\\
&+&eA_0+a_0+{\bf \Sigma}\cdot\textbf{b}-\frac{e}{2m}{\bf \Sigma}\cdot\textbf{B}\nonumber\\
&&\nonumber\\
&-&\frac{e}{4m^2}\,{\bf \Sigma}\cdot(\textbf{E}\times\textbf{p})+\frac{i}{8m^2}\,{\bf \Sigma}\cdot(\nabla\times\textbf{E}) -\frac{e}{8m^2}\,\nabla\cdot\textbf{E}\nonumber\\
&&\nonumber\\
&-&\frac{1}{2m^2}\,({\bf
\Sigma}\cdot\textbf{P})(\textbf{P}\cdot\textbf{b})\,.
\end{eqnarray}
The expression above is the non-relativistic limit of the Hamiltonian (4.8) in the FW representation. This result is identical to that obtained in \cite{Kha}, though here it was derived by a different method (FW). The most interesting term is the spin-orbit interaction involving the background field $b_\mu$.
\section{The modified anomalous Zeeman effect}
The Zeeman effect is the splitting of the spectral lines of an element under the action of an external magnetic field. For atoms whose total electronic spin vanishes, the emission lines split into multiplets (doublets, triplets, etc.) whose characteristics depend essentially on the element and on the external field. This is the {\it normal Zeeman effect}, discovered by the Dutch physicist Pieter Zeeman in 1896. When the total electronic spin is non-zero, one has the {\it anomalous Zeeman effect}: the emission lines split not into doublets or triplets but into multiplets of more complicated structure, because the spin couples to the external magnetic field.
In this section the anomalous Zeeman effect is studied in the presence of Lorentz symmetry breaking, which may modify the hydrogen emission spectrum. We investigate this by applying perturbation theory to the perturbations generated, separately, by the background fields $a_\mu$ and $b_\mu$.
\subsection{The vector coupling as a perturbation}
The Hamiltonian with the vector coupling in the non-relativistic limit of the Lorentz-violating model is obtained from (4.25) keeping only the $a^\mu$ terms:
\begin{eqnarray}
H=\frac{1}{2m}(\mathbf{p}-e\mathbf{A}-{\mbox{\boldmath{$a$}}})^2-\frac{e}{2m}{\mbox{\boldmath{$
\sigma$}}}\cdot{\bf B}+eA_0+a_0\,.
\end{eqnarray}
This Hamiltonian can be written in terms of the (unperturbed) Pauli Hamiltonian as follows:
\begin{eqnarray}
H=\left[\frac{1}{2m}(\mathbf{p}-e\mathbf{A})^2-\frac{e}{2m}{\mbox{\boldmath{$
\sigma$}}}\cdot\mathbf{B}+eA_0\right]+\left[-\frac{1}{m}(\mathbf{p}-e\mathbf{A})\cdot{\mbox{\boldmath{$a$}}}+a_0+\frac{1}{2m}{\mbox{\boldmath{$a$}}}^2\right]\,.
\end{eqnarray}
The first bracket in the Hamiltonian above is the well-known Pauli Hamiltonian. The second comes from the
vector coupling through the field $a^\mu$ and is the interaction Hamiltonian. Thus,
\begin{eqnarray}
H_{int(a)}=\frac{i}{m}{\mbox{\boldmath{$a$}}}\cdot\nabla+\frac{e}{m}\mathbf{A}\cdot{\mbox{\boldmath{$a$}}}+a_0+\frac{{\mbox{\boldmath{$a$}}}^2}{2m}\,,
\end{eqnarray}
where we used the relation
$\mathbf{p}\cdot{\mbox{\boldmath{$a$}}}=-i\nabla\cdot{\mbox{\boldmath{$a$}}}-i{\mbox{\boldmath{$a$}}}\cdot\nabla=-i{\mbox{\boldmath{$a$}}}\cdot\nabla$. Note that, in this case, the breaking of charge conjugation is no
longer manifest, since a single expression represents the Hamiltonian for both particles and antiparticles.
\smallskip
The last two terms in (4.28) are just constants that produce no physical changes in the energy levels, since they do not show up in the energy transitions. To apply perturbation theory, we write the wave function $\psi$ of the hydrogen atom in spherical coordinates for a single particle:
\begin{eqnarray*}
\psi_{n\ell m}(r,\theta,\phi)=R_{n\ell}(r)\Theta_{\ell m}(\theta)\Phi_m(\phi)\,.
\end{eqnarray*}
Using the gradient operator written in spherical coordinates,
\begin{eqnarray*}
{\mbox{\boldmath{$\nabla$}}}=\textbf{\^e}_r\,\frac{\partial}{\partial r}+\textbf{\^e}_\theta\,\frac{1}{r}\,\frac{\partial}{\partial\theta}+\textbf{\^e}_\phi\,\frac{1}{r\sin\theta}\,\frac{\partial}{\partial\phi}\,,
\end{eqnarray*}
it follows that:
\begin{eqnarray}
\Delta E_{(a),1}&=&\frac{i}{m}\langle n\ell m|{\mbox{\boldmath{$a$}}}\cdot\nabla|n\ell m\rangle\nonumber\\
&&\\
&=&\frac{i}{m}\int\left\{R_{n\ell}^\ast(r)\frac{\partial R_{n\ell}(r)}{\partial r}|\Theta_{\ell m}(\theta)|^2|\Phi_m(\phi)|^2\,{\mbox{\boldmath{$a$}}}\cdot\textbf{\^e}_r\right.\nonumber\\
&&\left. \right. \\
&+&\left.\frac{|R_{n\ell}(r)|^2}{r}\Theta_{\ell m}^\ast(\theta)\frac{\partial\Theta_{\ell m}(\theta)}{\partial\theta}|\Phi_m(\phi)|^2{\mbox{\boldmath{$a$}}}\cdot\textbf{\^e}_\theta\right.\nonumber\\
&&\left. \right.\\
&+&\left. im\frac{|R_{n\ell}(r)|^2|\Theta_{\ell
m}(\theta)|^2}{r\sin\theta}|\Phi_m(\phi)|^2{\mbox{\boldmath{$a$}}}\cdot\textbf{\^e}_\phi\right\}d^3r\nonumber\,.
\end{eqnarray}
To simplify the calculation, we choose the field ${\mbox{\boldmath{$a$}}}$ along the $z$ axis, so that
${\mbox{\boldmath{$a$}}}\cdot\textbf{\^e}_r=a_z\cos\theta$, ${\mbox{\boldmath{$a$}}}\cdot\textbf{\^e}_\theta=-a_z\sin\theta$ and
${\mbox{\boldmath{$a$}}}\cdot\textbf{\^e}_\phi=0$. The first term is then written explicitly:
\begin{eqnarray}
\Delta E_{(a),1}=\frac{ia_z}{m}\int\left[R_{n\ell}^\ast(r)\frac{\partial
R_{n\ell}(r)}{\partial r}r^2dr\right]|\Theta_{\ell m}(\theta)|^2\sin\theta\cos\theta d\theta\,.
\end{eqnarray}
This correction vanishes, because
\begin{eqnarray*}
\int\limits_0^\pi|\Theta_{\ell m}(\theta)|^2\sin\theta\cos\theta d\theta=0\,,
\end{eqnarray*}
for all associated Legendre functions.
\smallskip
Now, the second term is
\begin{eqnarray*}
-\frac{ia_z}{m}\int\frac{|R_{n\ell}(r)|^2}{r}\Theta_{\ell
m}^\ast(\theta)\frac{\partial\Theta_{\ell m}(\theta)}{\partial\theta}\sin^2\theta d\theta\, r^2dr\,.
\end{eqnarray*}
Looking at the angular integration, one verifies that
\begin{eqnarray*}
\int\limits_0^\pi\Theta_{\ell m}^\ast(\theta)\frac{\partial\Theta_{\ell m}(\theta)}{\partial\theta}\sin^2\theta d\theta=\int\limits_{-1}^{+1}\Theta_{\ell m}(x)\frac{\partial\Theta_{\ell m}(x)}{\partial x}(x^2-1)dx=0\,,
\end{eqnarray*}
a result that follows from the recurrence formula
\begin{eqnarray*}
(x^2-1)\frac{d}{dx}\Theta_{\ell m}(x)=\ell x\Theta_{\ell m}(x)-(\ell+m)\Theta_{\ell-1,m}(x)
\end{eqnarray*}
and from the orthogonality relation of the Legendre polynomials:
\begin{eqnarray*}
\int\limits_{-1}^{+1}\Theta_{km}(x)\Theta_{\ell m}(x)dx=0\,\,,\hspace{2em}\mbox{for all $k\neq\ell$.}
\end{eqnarray*}
Therefore,
\begin{equation}
\Delta E_{(a),1}=0\,.
\end{equation}
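As a cross-check (not part of the original derivation), both vanishing angular integrals can be verified with sympy after the substitution $x=\cos\theta$, for a few sample values of $(\ell,m)$:

```python
# Cross-check of the two vanishing angular integrals (substituting x = cos(theta)).
from sympy import symbols, integrate, diff, assoc_legendre

x = symbols('x')
for l, m in [(1, 0), (2, 1), (3, 2)]:
    P = assoc_legendre(l, m, x)
    # int |Theta|^2 sin(theta) cos(theta) dtheta  ->  int P^2 x dx
    I1 = integrate(P**2 * x, (x, -1, 1))
    # int Theta Theta' sin^2(theta) dtheta  ->  int P P' (x^2 - 1) dx
    I2 = integrate(P * diff(P, x) * (x**2 - 1), (x, -1, 1))
    assert I1 == 0 and I2 == 0
```

Both integrands are odd under $x\to-x$, which is the underlying reason the integrals vanish.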
Now we analyze the term that depends on the vector potential:
\begin{eqnarray*}
\Delta E_{(a),2}=\frac{e}{m}\int\Psi^\ast(\mathbf{A}\cdot{\mbox{\boldmath{$a$}}})\Psi d^3r\,.
\end{eqnarray*}
For an intense magnetic field $\mathbf{B}=B_0\,\textbf{\^e}_z$, the associated vector potential is
$\mathbf{A}=-B_0\left(\dfrac{y}{2},-\dfrac{x}{2},0\right)$. This implies that:
\begin{equation}
\Delta E_{(a),2}=-\frac{eB_0}{2m}\int\Psi^\ast(ya_x-xa_y)\Psi
d^3r\,.
\end{equation}
After explicit calculations, one obtains
\begin{equation}
\Delta E_{(a),2}=0\,.
\end{equation}
Therefore, we conclude that the presence of the background field $a_\mu$ does not affect the hydrogen energy spectrum, since the Hamiltonian (4.28) produces no correction to the level shifts of the anomalous Zeeman effect.
\subsection{The axial coupling as a perturbation}
In this case, the Hamiltonian is given by
\begin{eqnarray}
H&=&\left[\frac{1}{2m}(\mathbf{p}-e\mathbf{A})^2-\frac{e}{2m}{\mbox{\boldmath{$\sigma$}}}\cdot\mathbf{B}+eA_0\right]+\left[{\mbox{\boldmath{$\sigma$}}}\cdot\mathbf{b}-\frac{b_0}{m}{\mbox{\boldmath{$\sigma$}}}\cdot(\mathbf{p}-e\mathbf{A})+\frac{b_0^2}{2m}\right]\nonumber\\
&&\nonumber\\
H&\equiv&H_{Pauli}+H_{int(b)}\,.
\end{eqnarray}
The third term of $H_{int(b)}$ will be neglected, because it is a constant and produces no effect. Thus,
\begin{eqnarray}
H_{int(b)}={\mbox{\boldmath{$\sigma$}}}\cdot\mathbf{b}-\frac{b_0}{m}{\mbox{\boldmath{$\sigma$}}}\cdot(\mathbf{p}-e\mathbf{A})\,.
\end{eqnarray}
To obtain the possible changes in the hydrogen energy spectrum, we use time-independent perturbation theory to first order, with the interaction Hamiltonian given by (4.37). Thus,
\begin{equation}
\Delta E_{(b),1}=\langle n\ell jm_jm_s|{\mbox{\boldmath{$\sigma$}}}\cdot\mathbf{b}|n\ell
jm_jm_s\rangle\,,
\end{equation}
where $n$, $j$, $m_j$ are the quantum numbers of hydrogen in the case where the angular momenta $\mathbf{L}$ and $\mathbf{S}$ are added (see Appendix D). Considering the situation $j=\ell+1/2$ and $m_j=m+1/2$, with eigenfunctions represented by (C.20) and (C.21), this energy contribution can be computed explicitly:
\begin{eqnarray}
\Delta E_{(b),1}&=&\langle n\ell
jm_jm_s|{\mbox{\boldmath{$\sigma$}}}\cdot\mathbf{b}|n\ell
jm_jm_s\rangle=\langle jm_j|b_z\sigma_z|jm_j\rangle\nonumber\\
&&\nonumber\\
&=&\int\left(\begin{array}{cc}\sqrt{\frac{\ell+m+1}{2\ell+1}}\,Y_\ell^{m\,\ast}&\sqrt{\frac{\ell-m}{2\ell+1}}\,Y_\ell^{m+1\,\ast}\end{array}\right)\left(\begin{array}{cc}b_z&0\\0&-b_z\end{array}\right)\left(\begin{array}{c}\sqrt{\frac{\ell+m+1}{2\ell+1}}\,Y_\ell^{m}\\\sqrt{\frac{\ell-m}{2\ell+1}}\,Y_\ell^{m+1}\end{array}\right)d\Omega\nonumber\\
&&\nonumber\\
&=&b_z\int\left(\begin{array}{cc}\sqrt{\frac{\ell+m+1}{2\ell+1}}\,Y_\ell^{m\,\ast}&\sqrt{\frac{\ell-m}{2\ell+1}}\,Y_\ell^{m+1\,\ast}\end{array}\right)\left(\begin{array}{c}\sqrt{\frac{\ell+m+1}{2\ell+1}}\,Y_\ell^{m}\\-\sqrt{\frac{\ell-m}{2\ell+1}}\,Y_\ell^{m+1}\end{array}\right)d\Omega\nonumber\\
&&\nonumber\\
&=&\frac{\ell+m+1}{2\ell+1}b_z\int|Y_\ell^m|^2d\Omega-\frac{\ell-m}{2\ell+1}b_z\int|Y_\ell^{m+1}|^2d\Omega=\frac{2m+1}{2\ell+1}\,b_z\nonumber\\
&&\nonumber\\
&=&\frac{2m_jb_z}{2\ell+1}\,,
\end{eqnarray}
\noindent where the $Y_\ell^m$ are the normalized spherical harmonics. For the case $j=\ell-1/2$, we obtain $-\dfrac{2m_jb_z}{2\ell+1}$. Hence, the total contribution of the term ${\mbox{\boldmath{$\sigma$}}}\cdot\mathbf{b}$, which represents the axial coupling without an electromagnetic field, is
\begin{equation}
\Delta E_{\sigma\cdot b}=\pm2\frac{m_jb_z}{2\ell+1}\,.
\end{equation}
This result coincides with reference \cite{Kha}, but is twice the result obtained in \cite{Man}.
\smallskip
The energy is corrected, in first-order approximation, by the factor $\pm m_j$. Indeed, each spectral line is split into $2j+1$ lines, with a linear spacing given by ${\displaystyle{\frac{b_z}{2\ell+1}}}$. Since the energy shift depends on the magnitude of ${\bf b}$, this theoretical correction can be used to place an upper bound on the parameter $b^\mu$ through specific experiments.
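The algebra behind (4.39) can be sketched symbolically; the weights below are the squared Clebsch-Gordan coefficients of the $j=\ell+1/2$, $m_j=m+1/2$ spinor spherical harmonic:

```python
# Symbolic sketch of (4.39): squared Clebsch-Gordan weights for j = l + 1/2.
from sympy import symbols, Rational, simplify

l, m = symbols('l m')
w_up = (l + m + 1) / (2*l + 1)    # weight of the spin-up component
w_down = (l - m) / (2*l + 1)      # weight of the spin-down component

mj = m + Rational(1, 2)
assert simplify(w_up + w_down - 1) == 0                  # normalization
assert simplify(w_up - w_down - 2*mj/(2*l + 1)) == 0     # <sigma_z> = 2 m_j/(2l+1)
```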
The first-order contribution of the second term of the Hamiltonian (4.37) is written below:
\begin{equation}
\Delta E_{(b),2}=-\frac{b_0}{m}\langle n\ell
jm_jm_s|{\mbox{\boldmath{$\sigma$}}}\cdot\nabla|n\ell
jm_jm_s\rangle\,.
\end{equation}
Here the wave function $\psi_{n\ell m}$ is multiplied by a spinor $\chi_{m_s}$ that carries the electron spin dependence; the total wave function is then $\Psi_{n\ell jm_j}=\psi_{n\ell m}\chi_{m_s}$. Note that the nabla operator acts on $R_{n\ell}(r)$, $\Theta_{\ell m}(\theta)$ and $\Phi_m(\phi)$, while ${\mbox{\boldmath{$\sigma$}}}$ acts on the spin function. Thus,
\begin{eqnarray}
&&\Delta E_{(b),2}=-\frac{b_0}{m}\int\left\{R_{n\ell}(r)^\ast\frac{\partial R_{n\ell}(r)}{\partial r}|\Theta_{\ell
m}(\theta)|^2|\Phi_m(\phi)|^2\langle jm_j|{\mbox{\boldmath{$\sigma$}}}\cdot\textbf{ê}_r|jm_j\rangle\right.\nonumber\\
&&\nonumber\\
&&\left.+\frac{|R_{n\ell}(r)|^2}{r}|\Phi_m(\phi)|^2\Theta_{\ell m}(\theta)^\ast\frac{\partial\Theta_{\ell
m}(\theta)}{\partial\theta}\langle jm_j|{\mbox{\boldmath{$\sigma$}}}\cdot\textbf{ê}_\theta|jm_j\rangle\right.\nonumber\\
&&\nonumber\\
&&\left.+\frac{|R_{n\ell}(r)|^2|\Theta_{\ell m}(\theta)|^2}{r\sin\theta}|\Phi_m(\phi)|^2\langle jm_j|{\mbox{\boldmath{$\sigma$}}}\cdot\textbf{ê}_\phi|jm_j\rangle\right\}d^3r\,.
\end{eqnarray}
The scalar products, written in Cartesian components, that appear in the expectation values between the \emph{bras} and \emph{kets} are:
\begin{eqnarray*}
{\mbox{\boldmath{$\sigma$}}}\cdot\textbf{ê}_r&=&\sin\theta\cos\phi\,\sigma_x+\sin\theta\sin\phi\,\sigma_y+\cos\theta\,\sigma_z\,,\\
{\mbox{\boldmath{$\sigma$}}}\cdot\textbf{ê}_\theta&=&\cos\theta\cos\phi\,\sigma_x+\cos\theta\sin\phi\,\sigma_y-\sin\theta\,\sigma_z\,,\\
{\mbox{\boldmath{$\sigma$}}}\cdot\textbf{ê}_\phi&=&-\sin\phi\,\sigma_x+\cos\phi\,\sigma_y\,.
\end{eqnarray*}
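A minimal numeric sketch (assuming the standard Pauli matrices) of why only the $\sigma_z$ pieces survive: $\sigma_x$ and $\sigma_y$ have vanishing diagonal matrix elements in the spin basis.

```python
# sigma_x and sigma_y have no diagonal elements in the spin basis.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

for spin in (np.array([1, 0]), np.array([0, 1])):
    assert abs(spin.conj() @ sx @ spin) == 0
    assert abs(spin.conj() @ sy @ spin) == 0
```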
As seen in (4.42), only the terms proportional to $\sigma_z$ contribute nonvanishing expectation values. Thus,
\begin{eqnarray*}
\Delta E_{(b),2}&=&\pm\frac{ib_0m_j}{(2\ell+1)m}\int\left[R_{n\ell}(r)^\ast\frac{\partial R_{n\ell}(r)}{\partial r}|\Theta_{\ell m}(\theta)|^2\cos\theta\right.\\
&&\left. \right.\\
&-&\left.\frac{|R_{n\ell}(r)|^2}{r}\Theta_{\ell m}(\theta)^\ast\frac{\partial\Theta_{\ell m}(\theta)}{\partial \theta}\sin\theta\right]d^3r\\
&&\\
&=&\pm\frac{ib_0m_j}{(2\ell+1)m}\int\left[R(r)^\ast\frac{\partial R(r)}{\partial r}r^2dr\right]\int|\Theta(\theta)|^2\sin\theta\cos\theta d\theta\\
&&\\
&\pm&\frac{ib_0m_j}{(2\ell+1)m}\int\left[-\frac{|R_{n\ell}(r)|^2}{r}r^2dr\right]\int\Theta_{\ell
m}(\theta)^\ast\frac{\partial\Theta_{\ell m}(\theta)}{\partial
\theta}\sin^2\theta d\theta\,.
\end{eqnarray*}
The expressions obtained above are exactly the same as those found in the energy-correction calculation for the vector coupling. Thus,
\begin{equation}
\Delta E_{(b),2}=0\,.
\end{equation}
Looking again at the interaction Hamiltonian (4.37), the term $eb_0\,{\mbox{\boldmath{$\sigma$}}}\cdot\mathbf{A}/m$ should give no correction to the Zeeman effect for free hydrogen, since in that case $\mathbf{A}=0$. When the electron is subject to a strong magnetic field, however, this term cannot be neglected. For an intense magnetic field $\mathbf{B}=B_0\,\textbf{\^e}_z$, the associated vector potential is $\mathbf{A}=-B_0\left(\dfrac{y}{2},-\dfrac{x}{2},0\right)$. The energy correction, to first order in perturbation theory, is then given by:
\begin{multline}
\Delta E_{\sigma\cdot A}=\frac{eb_0}{m}\langle n\ell jm_jm_s|{\mbox{\boldmath{$\sigma$}}}\cdot\mathbf{A}|n\ell jm_jm_s\rangle\\
\\
=-\frac{eb_0B_0}{2m}\langle n\ell jm_jm_s|y\sigma_x-x\sigma_y|n\ell jm_jm_s\rangle\\
\\
=-\frac{eb_0B_0}{2m}\langle jm_j|\left(\begin{array}{cc}1&0 \end{array} \right)\left(\begin{array}{cc}0&y\\y&0 \end{array} \right) \left(\begin{array}{c}1\\0\end{array}\right) - \left(\begin{array}{cc}1&0 \end{array} \right)\left(\begin{array}{cc}0&-ix\\ix&0 \end{array} \right) \left(\begin{array}{c}1\\0\end{array}\right) |jm_j\rangle\\
\\
\hspace{-9cm}=0\,.
\end{multline}
Therefore, the only perturbative effect was obtained in (4.40); it is proportional to $|\mathbf{b}|$ and was generated by the spin-orbit interaction term ${\mbox{\boldmath{$\sigma$}}}\cdot\mathbf{b}$. We can also conclude that an external magnetic field, even an intense one, produces no energy shift in the hydrogen spectrum.
\chapter{Fermions in a Gauge Field in (2+1) Dimensions}
\section{Introduction}
As is well known, the usual Maxwell gauge theory is described in terms of a fundamental gauge field $A_\mu$ through the Lagrangian
\begin{equation}
{\cal L}_M=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-A_\mu J^\mu\,,
\end{equation}
where $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ is the antisymmetric electromagnetic field tensor and $J^\mu$ is the matter current. This Lagrangian is invariant under the gauge transformation $A_\mu\,\to\,A_\mu+\partial_\mu\Lambda$; consequently, the Euler-Lagrange equations of motion
\begin{equation}
\partial_\mu F^{\mu\nu}=J^\nu
\end{equation}
are also invariant under the same transformation. The matter current is conserved, $\partial_\nu J^\nu=0$, as follows from the antisymmetry of $F^{\mu\nu}$.
\smallskip
In the conventional Maxwell electromagnetic theory, the tensor $F_{\mu\nu}$ is an antisymmetric square matrix of order $4\times4$. The number of field components in this theory is $\dfrac{1}{2}D(D-1)$, which in four dimensions corresponds to three electric and three magnetic components. The theory, however, can be defined in {\it any} dimension, with the gauge-field indices running over $\mu=0,1,2,...,D-1$, where $D$ is the chosen dimension.
\smallskip
In particular, in planar systems (two spatial dimensions and one temporal), the magnetic field is given by $B=\varepsilon^{ij}\partial_i A_j$; that is, this field is a {\it scalar}. This happens because the vector potential is two-dimensional, and the curl of a vector in two dimensions is a scalar. The electric field, in turn, is a spatial vector with two components. Hence the number of field components in (2+1) dimensions is three, in agreement with the counting above.
\smallskip
A theory with features quite distinct from a mere dimensional reduction of Maxwell theory is the {\it Chern-Simons} (CS) {\it theory}, whose Lagrangian is given by
\begin{equation}
{\cal
L}_{CS}=\frac{\theta}{2}\varepsilon^{\mu\nu\rho}A_\mu\partial_\nu A_\rho-A_\mu J^\mu\,,
\end{equation}
where $\theta$ is the CS parameter, whose physical meaning will be discussed further below. The symbol $\varepsilon^{\mu\nu\rho}$ is the well-known Levi-Civita symbol (the completely antisymmetric tensor) in (2+1)$D$, with the usual convention $\varepsilon^{012}=\varepsilon_{012}=1$. Under the gauge transformation $A_\mu\,\to\,A_\mu+\partial_\mu\Lambda$, the Lagrangian (5.3) changes only by a divergence:
\begin{equation}
{\cal L}_{CS}\,\,\to\,\,{\cal
L}_{CS}+\partial_\mu\left(\frac{\theta}{2}\varepsilon^{\mu\nu\rho}\Lambda\,\partial_\nu A_\rho\right)\,,
\end{equation}
but the action $\displaystyle{S=\int d^3x\,{\cal L}_{CS}}$ remains invariant, since surface terms are discarded.
\smallskip
The classical Euler-Lagrange equations yield the equation of motion
\begin{equation}
J^\mu=\frac{\theta}{2}\varepsilon^{\mu\nu\rho}F_{\nu\rho}\,,
\end{equation}
which also enjoys the gauge freedom described above. One can further observe that the current is conserved, thanks to the Bianchi identity $\varepsilon^{\mu\nu\rho}\partial_\mu F_{\nu\rho}=0$.
\smallskip
Pure CS theory has interesting features. Its Euler-Lagrange equations, in terms of the charge and current densities, read
\begin{eqnarray}
\rho&=&\theta B\,,\nonumber\\
J^i&=& \theta\varepsilon^{ij}E_j\,.
\end{eqnarray}
The first relation says that the charge density is {\it locally} proportional to the magnetic field, with the CS parameter as the proportionality constant. The effect of this term in pure CS theory is therefore to attach magnetic flux to electric charge (such particles are called {\it anyons}). The second relation says that the electric field is proportional to the current, again with proportionality constant $\theta$. Both effects are in sharp contrast with ordinary Maxwell electrodynamics.
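The first relation in (5.6) can be checked symbolically; here the components $A_1$, $A_2$ are treated as the planar potential, ignoring index raising/lowering subtleties of the spatial indices:

```python
# mu = 0 component of J^mu = (theta/2) eps^{mu nu rho} F_{nu rho} equals theta*B.
from sympy import symbols, Function, LeviCivita, diff, simplify

t, x, y, theta = symbols('t x y theta')
coords = (t, x, y)
A = [Function(f'A{i}')(t, x, y) for i in range(3)]

def F(mu, nu):  # F_{mu nu} = d_mu A_nu - d_nu A_mu
    return diff(A[nu], coords[mu]) - diff(A[mu], coords[nu])

J0 = (theta/2) * sum(LeviCivita(0, n, r) * F(n, r)
                     for n in range(3) for r in range(3))
B = diff(A[2], x) - diff(A[1], y)   # B = eps^{ij} d_i A_j
assert simplify(J0 - theta*B) == 0
```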
\section{The Maxwell-Chern-Simons propagator}
The Lagrangian of this theory couples the Maxwell and CS terms\footnote{Reading this section is not required for understanding the rest of this work; it is included only as an illustration of the model.}:
\begin{equation}
{\cal L}_{MCS}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{\theta}{2}\varepsilon^{\mu\nu\rho}A_\mu\partial_\nu A_\rho-\dfrac{1}{2}\lambda\partial_\mu A^\mu\partial^\nu A_\nu\,,
\end{equation}
where the last term is a gauge-fixing term, needed so that the propagator can be determined unambiguously.
\smallskip
The action of this theory in (2+1) dimensions can then be written in the form
\begin{eqnarray}
S&=&\int d^3x\,\frac{1}{2}\left[\partial_\mu A_\nu\partial^\nu A^\mu-\partial_\mu A_\nu\partial^\mu A^\nu+\theta\varepsilon^{\mu\nu\rho}A_\mu \partial_\nu A_\rho-\lambda\partial_\mu A^\mu\partial^\nu A_\nu\right]\nonumber\\
&&\nonumber\\
&=&\int d^3x\,\frac{1}{2}\left[A^\mu g_{\mu\nu}\square A^\nu-A^\mu\partial_\nu\partial_\mu A^\nu+\theta\varepsilon_{\mu\nu\rho} A^\mu\partial^\rho A^\nu+\lambda A^\mu\partial_\mu\partial_\nu A^\nu\right]\nonumber\\
&&\nonumber\\
&=&\int d^3x\,\frac{1}{2}A^\mu\left[\square
g_{\mu\nu}-\partial_\nu\partial_\mu+\theta\varepsilon_{\mu\nu\rho}\partial^\rho+\lambda\partial_\mu\partial_\nu\right]
A^\nu\,,
\end{eqnarray}
where divergence terms were dropped by means of Gauss' theorem.
\smallskip
Since the term in brackets is the kernel of the action and the Feynman propagator is a Green function, the propagator can be computed from the identity:
\begin{eqnarray}
\left(-\square
g_{\mu\nu}+\partial_\nu\partial_\mu-\theta\varepsilon_{\mu\nu\rho}\partial^\rho-\lambda\partial_\mu\partial_\nu\right)\Delta_F^{\nu\sigma}(x-y)=i\delta^\sigma_\mu\,\delta^3(x-y)\,.
\end{eqnarray}
Applying the Fourier transform to the propagator,
\begin{eqnarray*}
\Delta_F^{\mu\nu}(x-y)=\int\frac{d^3k}{(2\pi)^3}\Delta_F^{\mu\nu}(k)e^{ik\cdot(x-y)}\,,
\end{eqnarray*}
we observe that
\begin{eqnarray*}
\partial_\mu\Delta_F^{\mu\nu}(x-y)=\int\frac{d^3k}{(2\pi)^3}\Delta_F^{\mu\nu}(k)(ik_\mu)e^{ik\cdot(x-y)}\,,
\end{eqnarray*}
and we obtain the expression
\begin{equation}
\left(-k^2g_{\mu\nu}+k_\mu k_\nu-i\theta\varepsilon_{\mu\nu\rho}k^\rho-\lambda k_\mu k_\nu
\right)\Delta_F^{\nu\sigma}(k)=i\delta^\sigma_\mu\,.
\end{equation}
Considering, for the computation of (5.10), the general {\it ansatz}
\begin{equation}
\Delta_F^{\nu\sigma}(k)={\cal A}g^{\nu\sigma}+{\cal B}k^\nu k^\sigma+{\cal C}\varepsilon^{\nu\sigma\tau}k_\tau\,,
\end{equation}
it follows that
\begin{eqnarray*}
&-&{\cal A}k^2\delta^\sigma_\mu-{\cal B}k^2k_\mu k^\sigma-{\cal C}k^2g_{\mu\nu}\varepsilon^{\nu\sigma\tau}k_\tau+{\cal A}k_\mu k^\sigma+{\cal B}k^2k_\mu k^\sigma+{\cal C}\varepsilon^{\nu\sigma\tau}k_\mu k_\nu k_\tau-i{\cal A}\theta\varepsilon_{\mu\nu\rho}g^{\nu\sigma}k^\rho\\
&&\\
&-&i{\cal B}\theta\varepsilon_{\mu\nu\rho}k^\rho k^\nu
k^\sigma-i{\cal
C}\theta\varepsilon_{\mu\nu\rho}\varepsilon^{\nu\sigma\tau}k^\rho k_\tau-{\cal A}\lambda k_\mu k^\sigma-{\cal B}\lambda k^2k_\mu
k^\sigma-{\cal C}\lambda\varepsilon^{\nu\sigma\tau}k_\mu k_\nu k_\tau=i\delta^\sigma_\mu\,.
\end{eqnarray*}
Consider the second term of the second line in the expression above. Using the identity
\begin{equation}
\varepsilon_{\mu\nu\rho}\varepsilon^{\nu\sigma\tau}=-\delta^\sigma_\mu
\delta^\tau_\rho+\delta^\sigma_\rho \delta^\tau_\mu \,,
\end{equation}
this term becomes
\begin{eqnarray*}
-i{\cal C}\theta\varepsilon_{\mu\nu\rho}\varepsilon^{\nu\sigma\tau}k^\rho k_\tau&=&-i{\cal C}\theta(-\delta^\sigma_\mu\delta^\tau_\rho+\delta^\sigma_\rho\delta^\tau_\mu)k^\rho k_\tau\\
&=&i{\cal C}\theta(k^2\delta^\sigma_\mu-k_\mu k^\sigma)\,.
\end{eqnarray*}
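The contraction identity (5.12) used above is purely combinatorial in the convention $\varepsilon^{012}=\varepsilon_{012}=1$ adopted in the text, and can be verified numerically:

```python
# Numerical check of the contraction identity (5.12).
import numpy as np

def eps(i, j, k):  # Levi-Civita symbol in 3D, eps(0,1,2) = +1
    return (i - j) * (j - k) * (k - i) / 2

d = np.eye(3)
for mu in range(3):
    for rho in range(3):
        for sig in range(3):
            for tau in range(3):
                lhs = sum(eps(mu, nu, rho) * eps(nu, sig, tau) for nu in range(3))
                rhs = -d[sig, mu]*d[tau, rho] + d[sig, rho]*d[tau, mu]
                assert lhs == rhs
```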
We can thus identify a system of three equations in the three unknowns ${\cal A}$, ${\cal B}$ and ${\cal C}$, according to the tensor structure:
\begin{eqnarray*}
&&[-{\cal A}k^2+i{\cal C}\theta k^2]\delta^\sigma_\mu=i\delta^\sigma_\mu\\
&&\nonumber\\
&&[{\cal A}(1-\lambda)-{\cal B}\lambda k^2-i{\cal C}\theta]k_\mu k^\sigma=0\\
&&\nonumber\\
&&(-i{\cal A}\theta-{\cal C}k^2)g_{\mu\nu}\varepsilon^{\nu\sigma\tau}k_\tau=0\,,
\end{eqnarray*}
which, once solved, gives
\begin{eqnarray*}
{\cal A}&=&-\dfrac{i}{k^2-\theta^2}\\
&&\nonumber\\
{\cal B}&=&\dfrac{i}{k^2(k^2-\theta^2)}-\dfrac{1}{\lambda}\dfrac{i}{k^4}\\
&&\nonumber\\
{\cal C}&=&-\dfrac{\theta}{k^4-k^2\theta^2}\,.
\end{eqnarray*}
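These coefficients can be reproduced by solving the linear system with sympy (writing $K$ for $k^2$):

```python
# Solve the linear system for the ansatz coefficients, with K standing for k^2.
from sympy import symbols, I, Eq, solve, simplify

K, theta, lam = symbols('K theta lambda')
A, B, C = symbols('A B C')

sol = solve([Eq(-A*K + I*C*theta*K, I),
             Eq(A*(1 - lam) - B*lam*K - I*C*theta, 0),
             Eq(-I*A*theta - C*K, 0)], [A, B, C])

assert simplify(sol[A] + I/(K - theta**2)) == 0
assert simplify(sol[B] - I/(K*(K - theta**2)) + I/(lam*K**2)) == 0
assert simplify(sol[C] + theta/(K*(K - theta**2))) == 0
```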
Therefore, according to (5.11) and the results above, the photon propagator of MCS theory in (2+1) dimensions is written as follows:
\begin{equation}
\Delta_F^{\mu\nu}(k)=-\frac{ig^{\mu\nu}}{k^2-\theta^2}+\frac{ik^\mu
k^\nu}{k^2(k^2-\theta^2)}-\frac{\theta\varepsilon^{\mu\nu\rho}k_\rho}{k^2(k^2-\theta^2)}-\dfrac{1}{\lambda}\dfrac{ik^\mu
k^\nu}{k^4}\,.
\end{equation}
In this expression for the propagator, much as for massive fields, a pole at $\theta=\sqrt{k^2}$ is evident, representing a mass. Note also that for $\lambda\to\infty$ we recover the Landau gauge, which yields a transversality condition similar to that of the usual photon propagator.
\section{Chern-Simons induction in (2+1)$D$}
QED allows the study of systems of fermions interacting with an external gauge field. Interestingly, even if one starts from a theory of fermions of mass $m$ interacting with a vector field $A_\mu$ whose dynamics contains no CS term, such a term is induced by radiative corrections. We compute this induced term using the path-integral formalism for the Dirac field and expanding the result up to second order. The Lagrangian describing the coupling is:
\begin{equation}
{\cal L}=\bar{\psi}(i\!{\not\!\partial}-e\not\!\!A-m)\psi\,.
\end{equation}
The effective action of this model, as a functional of the field $A_\mu$ in the one-loop approximation \cite{Tqc}, is defined as follows
\begin{equation}
e^{iS_{ef}(A)}=N\int D\bar{\psi}D\psi\,\exp\left[i\int d^3x\,\bar{\psi}(i\!{\not\!\partial}-e\not\!\!A-m)\psi\right]\,,
\end{equation}
where $N$ is a normalization constant.
\smallskip
Integrating over the fermion fields\footnote{For a Gaussian integral over ordinary variables the determinant appears in the denominator, whereas for Gaussians over anticommuting variables (Grassmann variables) it appears in the numerator. This is shown in Appendix E.}, we obtain
\begin{equation}
e^{iS_{ef}(A)}=N\,\mbox{det}\,(i\!{\not\!\partial}-e\not\!\!A-m)\,,
\end{equation}
that is\footnote{Appendix F contains the proof of this identity.},
\begin{eqnarray}
S_{ef}(A)&=&-i\ln\mbox{det}\,[i\!{\not\!\partial}-e\not\!\!A-m]\nonumber\\
&=&-iTr\,\ln[i\!{\not\!\partial}-e\not\!\!A-m]\,,
\end{eqnarray}
where a constant term was dropped.
The expression above is expanded\footnote{$\displaystyle \ln(1-x)=-\sum\limits_{n=1}^\infty\frac{x^n}{n}\,.$} to give
\begin{eqnarray}
S_{ef}[A,m]&=&-iTr\ln\left[(i\!{\not\!\partial}-m)\left(\frac{i\!{\not\!\partial}-e\not\!\!A-m}{i\!{\not\!\partial}-m} \right) \right]\nonumber\\
&&\nonumber\\
&=&-iTr\ln(i\!{\not\!\partial}-m)-iTr\ln\left[1- \frac{1}{i\!{\not\!\partial}-m}e\not\!\!A \right]\nonumber\\
&&\nonumber\\
&=&-iTr\ln(i\!{\not\!\partial}-m)+iTr\sum\limits_{n=1}^\infty\frac{1}{n}\left[ \frac{1}{i\!{\not\!\partial}-m}e\not\!\!A \right]^n\,.
\end{eqnarray}
The term of the expansion that gives rise to the CS-like term is the second-order one, which provides the contribution bilinear in $A_\mu$. We thus have:
\begin{equation}
S_{ef}^{(2)}=\frac{ie^2}{2}Tr\left[\frac{1}{i\!{\not\!\partial}-m}\not\!\!A \frac{1}{i\!{\not\!\partial}-m}\not\!\!A \right]\,.
\end{equation}
To compute the trace of the action above, let ${\cal O}$ be an operator depending on the Dirac matrices and on the internal Lie-group indices. Its total trace $Tr$ is then defined by:
\begin{equation}
Tr{\cal O}\,\dot{=}\,tr\,tr_D\int d^3x\langle x|{\cal O}|x'\rangle\bigg|_{x=x'}\,,
\end{equation}
where the symbol $tr_D$ indicates that the trace is taken over the Dirac $\gamma$ matrices.
\smallskip
Inserting the completeness (closure) relations in position and momentum space,
\begin{equation}
\int d^3x|x\rangle\langle x|=1\quad,\quad \int\frac{d^3p}{(2\pi)^3}|p\rangle\langle p|=1\,,
\end{equation}
where $\langle x|p\rangle=\langle p|x\rangle^\ast=e^{ipx}$, we find
\begin{eqnarray*}
S_{ef}^{(2)}&=& \frac{ie^2}{2}tr\,tr_D \!\int\! d^3x \!\int\! d^3y \!\int\!\frac{d^3p}{(2\pi)^3} \!\int\!\frac{d^3q}{(2\pi)^3} \langle x|\frac{1}{i\!{\not\!\partial}-m}|p\rangle\langle p|\not\!\!A|y\rangle \langle y|\frac{1}{i\!{\not\!\partial}-m}|q\rangle\langle q|\not\!\!A|x\rangle \\
&&\\
&=&\frac{ie^2}{2}tr\,tr_D \!\int\! d^3x \!\int\! d^3y \!\int\!\frac{d^3p}{(2\pi)^3} \!\int\!\frac{d^3q}{(2\pi)^3}\frac{1}{\not\!p-m}\not\!\!A(y) \frac{1}{\not\!q-m}\not\!\!A(x)e^{ipx-ipy+iqy-iqx}\\
&&\\
&=& \frac{ie^2}{2}tr\,tr_D \!\int\! d^3x \!\int\! d^3y \!\int\!\frac{d^3p}{(2\pi)^3} \!\int\!\frac{d^3q}{(2\pi)^3}\frac{(\not\!p+m)\not\!\!A(y)(\not\!q+m)\not\!\!A(x)}{(p^2-m^2)(q^2-m^2)}e^{i(p-q)(x-y)}\\
&&\\
&=&-\frac{ie^2}{2}tr\,tr_D\int\frac{d^3k}{(2\pi)^3} \int\frac{d^3p}{(2\pi)^3}\frac{(\not\!p+m)\not\!\!A(-k)(\not\!p+\not\!k+m)\not\!\!A(k)}{[(p+k)^2-m^2](p^2-m^2)}\,,
\end{eqnarray*}
where the change of variable $p-q\,\to\,-k$ was made.
\smallskip
Let us write explicitly the terms appearing in the numerator of the integral above:
\begin{eqnarray*}
&& (\not\!p+m)\not\!\!A(\not\!p+\not\!k+m)\not\!\!A=(\not\!p\not\!\!A+m\not\!\!A) (\not\!p\not\!\!A+\not\!k\not\!\!A+m\not\!\!A)\\
&=&\not\!p\not\!\!A\not\!p\not\!\!A+\not\!p\not\!\!A\not\!k\not\!\!A+m\not\!p\not\!\!A\not\!\!A+m\not\!\!A\not\!p\not\!\!A+m\not\!\!A\not\!k\not\!\!A+m^2\not\!\!A\not\!\!A\,.
\end{eqnarray*}
The only term that contributes to our calculation is $m\!{\not\!\!A\!{\not\!k\!{\not\!\!A}}}$, since $m\!{\not\!\!A\!{\not\!p\!{\not\!\!A}}}$ yields a vanishing integral over $p$. Thus,
\begin{eqnarray*}
S_{CS}=-\frac{ie^2}{2}mtr_D\,tr\int\frac{d^3k}{(2\pi)^3}\int\frac{d^3p}{(2\pi)^3}\frac{\not\!\!A\not\!k\not\!\!A}{[(p+k)^2-m^2](p^2-m^2)}\,.
\end{eqnarray*}
Using the Feynman parametrization
\begin{equation}
\frac{1}{ab}=\int_0^1dz\frac{1}{[az+b(1-z)]^2}\,,
\end{equation}
with $a=(k+p)^2-m^2$ and $b=p^2-m^2$, we can write
\begin{equation}
az+b(1-z)=p^2+2(k\cdot p)z+k^2z-m^2=(p+kz)^2+k^2z(1-z)-m^2\,.
\end{equation}
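Both (5.22) and the completed square (5.23) are elementary identities; a sympy check (with $p$ and $k$ treated as commuting scalars, which suffices for the algebra) is:

```python
# (5.22) via its antiderivative, and the completed square (5.23).
from sympy import symbols, diff, simplify, expand

a, b, z, p, k, m = symbols('a b z p k m')

# F'(z) = 1/[a z + b(1-z)]^2 and F(1) - F(0) = 1/(a b)
F = -1/((a - b)*(a*z + b*(1 - z)))
assert simplify(diff(F, z) - 1/(a*z + b*(1 - z))**2) == 0
assert simplify(F.subs(z, 1) - F.subs(z, 0) - 1/(a*b)) == 0

# a z + b (1-z) with a = (k+p)^2 - m^2 and b = p^2 - m^2
lhs = ((k + p)**2 - m**2)*z + (p**2 - m**2)*(1 - z)
rhs = (p + k*z)**2 + k**2*z*(1 - z) - m**2
assert expand(lhs - rhs) == 0
```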
With the change of variable $p\,\to\,p-kz$, defining $\mu^2=m^2-k^2z(1-z)$ and using the momentum integral over $p$ (see Appendix G), we have
\begin{eqnarray}
S_{CS}&=&-\frac{ie^2}{2}mtr_D\,tr\int\frac{d^3k}{(2\pi)^3}\not\!\!A(-k)\not\!k\not\!\!A(k)\int\limits_0^1dz\int\frac{d^3p}{(2\pi)^3}\frac{1}{(p^2-\mu^2)^2}\nonumber\\
&&\nonumber\\
&=&-\frac{ie^2}{2}mtr_D\,tr\int d^3x\int d^3y\int\frac{d^3k}{(2\pi)^3}\not\!\!A(y)(-i\!{\not\!\partial_x})\not\!\!A(x)e^{-ik(x-y)}\int\limits_0^1dz\frac{i}{8\pi|\mu|}\nonumber\\
&&\nonumber\\
&=&\frac{ie^2}{16\pi}m\,tr\,tr_D\int d^3x\int d^3y\not\!\!A(y)\!{\not\!\partial_x}\not\!\!A(x)g(x-y)\,,
\end{eqnarray}
where\footnote{Integral \cite{Gra} over the parameter $z$: \vspace{0.35cm} \\ $ \displaystyle \int\frac{dz}{|\mu|}=-\frac{1}{|k|}\arcsin\left[ \frac{(2z-1)|k|}{\sqrt{4m^2-k^2}} \right]\hspace{0.7cm},\hspace{0.7cm}\mu^2=m^2-k^2z(1-z)\,.$}
\begin{eqnarray*}
g(x-y)=2\int\frac{d^3k}{(2\pi)^3}\frac{1}{|k|}\arcsin\left(\frac{|k|}{\sqrt{4m^2-k^2}} \right)e^{-ik(x-y)}\,.
\end{eqnarray*}
To extract the local induced CS term from the action above, we expand the integrand of $g(x-y)$ around $k\,\to\,
0$. Thus,
\begin{equation}
g(x-y)=\frac{1}{|m|}\delta^{(3)}(x-y)\,.
\end{equation}
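The small-$k$ limit behind (5.25) can be confirmed with sympy: for $m>0$, $(2/k)\arcsin\!\big(k/\sqrt{4m^2-k^2}\big)\to 1/m$ as $k\to0$.

```python
# Small-k limit used to obtain (5.25).
from sympy import symbols, asin, sqrt, limit

k, m = symbols('k m', positive=True)
f = 2/k * asin(k/sqrt(4*m**2 - k**2))
assert limit(f, k, 0) == 1/m
```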
Substituting this result into (5.24), we obtain
\begin{eqnarray*}
S_{CS}&=&\frac{ie^2}{16\pi}\frac{m}{|m|}\,tr\,tr_D\int d^3x\int d^3y\not\!\!A(y)\!{\not\!\partial_x}\not\!\!A(x)\delta(x-y)\\
&&\\
&=&\frac{ie^2}{16\pi}\frac{m}{|m|}\,tr\int d^3x\,tr_D[\gamma^\mu\gamma^\nu\gamma^\rho]A_\mu\partial_\nu A_\rho\,.
\end{eqnarray*}
Therefore, taking the trace over the Dirac matrices (see (A.5)), we obtain the induced action
\begin{equation}
S_{CS}^{(2+1)D}=-\frac{e^2}{8\pi}\frac{m}{|m|}tr\int d^3x\,\varepsilon^{\mu\nu\rho}A_\mu\partial_\nu A_\rho\,.
\end{equation}
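The Dirac trace used in the last step can be checked numerically in the $2\times2$ representation $\gamma^0=\sigma_3$, $\gamma^1=i\sigma_1$, $\gamma^2=i\sigma_2$ (one common choice; the overall sign of the trace is representation and convention dependent):

```python
# Trace of three Dirac matrices in (2+1)D, representation gamma^0 = sigma_3,
# gamma^1 = i sigma_1, gamma^2 = i sigma_2.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
g = [s3, 1j*s1, 1j*s2]

tr = np.trace(g[0] @ g[1] @ g[2])
assert np.isclose(tr, -2j)                            # proportional to eps^{012}
assert np.isclose(np.trace(g[0] @ g[0] @ g[1]), 0)    # repeated index -> 0
```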
\vspace{0.01cm}
This result is the (abelian) bilinear contribution in the gauge field found in \cite{Dun}. It shows that the interaction of fermions with a gauge field formulated in (2+1) dimensions generates a Chern-Simons-like term. As discussed in section 5.2, one can then say that the quanta of this gauge field may be massive.
\vspace{0.15cm}
\begin{center}
\vspace{1.2cm}
\hspace{0.75cm}\begin{picture}(250,80)
\Photon(0,50)(80,50){5}{8}\Text(40,67)[]{$k$}\Text(88,50)[]{$\mu$}
\ArrowArcn(110,50)(30,180,0)\Text(110,92)[]{$p+k$}
\ArrowArcn(110,50)(30,360,180)\Text(110,8)[]{$p$}
\Photon(140,50)(220,50){5}{8}\Text(180,67)[]{$k$}\Text(132,50)[]{$\nu$}
\end{picture}\\ {\sl \footnotesize \hspace{2em}Fermion-loop Feynman diagram used in the computation of the action (5.26).}
\end{center}
\chapter{Radiative Corrections with Lorentz Symmetry Breaking in (3+1)$D$}
\section{Expansion of the fermion propagator and Lorentz symmetry breaking}
In Chapters 3 and 4 we studied the influence of Lorentz-violating terms on the Dirac equation and on all quantities derived from it. We applied the idea suggested by Kostelecký and Colladay, which consists in treating the Lorentz-violating terms as perturbations of the Dirac equation. Applying perturbation theory, we obtained energy corrections for certain quantum systems in the presence of Lorentz-violating terms.
\smallskip
Beyond this quantum-mechanical application, the induction of a CS (Chern-Simons) term in (3+1) dimensions is an interesting result offered by the Lorentz-violating theory. In this chapter we look for this induction by computing radiative corrections in a system of fermions coupled to a gauge field in four-dimensional spacetime. We begin this search by rewriting the Feynman propagator in the presence of Lorentz symmetry breaking.
\smallskip
In Chapter 2 we found the fermion propagator of ordinary QED, $S(p)$ (see (2.43)). In the following chapter we formulated a fermion propagator modified by Lorentz symmetry breaking, represented in this dissertation by (3.23). In this chapter we study radiative corrections generated by the chiral current $\bar{\psi}\gamma^\mu\gamma_5\psi$ coupled to a four-vector $b_\mu$, which entails a violation of Lorentz invariance, as presented in Chapter 3. Thus, setting $\not\!\!a=0$ in the propagator (3.23), the Lorentz symmetry breaking is represented by the term $\not\!b\!{\gamma_5}$:
\begin{equation}
S_b(p)=\frac{i}{\not\!p-m-\not\!b\gamma_5}\,.
\end{equation}
After inverting the matrix structure, the propagator above is given by
\begin{equation}
S_{b}(p)=\frac{i(\not\!p-\not\!b\gamma_5+m)\{ p^2-b^2-m^2+[\not\! p,\not
b]\gamma_5 \}}{(p^2-b^2-m^2)^2-4(p\cdot b)^2+4p^2b^2}\,.
\end{equation}
This propagator has a complicated structure and would make the perturbative calculations rather tedious. However, the four-vector $b^\mu$, which signals a possible Lorentz symmetry breaking in the theory, is very small, and the correction it produces in the propagator can be treated perturbatively. Let us therefore apply the following expansion as an infinite geometric series whose ratio involves $\not\!b\!{\gamma_5}$:
\begin{eqnarray}
S_b(p)&=&\frac{i}{\not\!p-m}\frac{1}{1-i\not\!b\gamma_5S(p)}\nonumber\\
&&\nonumber\\
&=&\frac{i}{\not\!p-m}\left[1+(-i\not\!b\gamma_5)\frac{i}{\not\!p-m}+(-i\not\!b\gamma_5)\frac{i}{\not\!p-m}(-i\not\!b\gamma_5)\frac{i}{\not\!p-m}+\cdots\right]\nonumber\\
&&\nonumber\\
&=&\frac{i}{\not\!p-m}+\frac{i}{\not\!p-m}(-i\not\!b\gamma_5)\frac{i}{\not\!p-m}+\frac{i}{\not\!p-m}(-i\not\!b\gamma_5)\frac{i}{\not\!p-m}(-i\not\!b\gamma_5)\frac{i}{\not\!p-m}+\cdots\nonumber\\
&&
\end{eqnarray}
Thus, with $\times$ representing each insertion $-i\!{\not\!b\!{\gamma_5}}$ in the propagator, its graph is given by
\medskip
\begin{picture}(150,50)
\ArrowLine(49,19)(51,19)
\Line(0,18)(100,18)\Text(120,20)[]{=}
\Line(0,20)(100,20)
\ArrowLine(140,20)(240,20)\Text(260,20)[]{+}
\ArrowLine(280,20)(330,20)\Text(330,20)[]{$\times$}
\ArrowLine(330,20)(380,20)\Text(407,20)[]{+}
\ArrowLine(140,-20)(174,-20)\Text(174,-20)[]{$\times$}
\ArrowLine(174,-20)(208,-20)\Text(208,-20)[]{$\times$}
\ArrowLine(208,-20)(242,-20)\Text(273,-20)[]{+\hspace{1eM}$\cdots$}
\end{picture}
\vspace{1.6cm}
\noindent where the left-hand side represents $S_b(p)$ and the terms on the right-hand side correspond to the expanded Feynman propagator. In the next section, we compute the radiative corrections using this expanded propagator.
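The geometric-series resummation above can be sanity-checked numerically in a commuting (scalar) analogue, in which every slashed quantity is replaced by a complex number, so that $S_b=i/(p-m-b)$ and each insertion contributes a factor $(-ib)\,\frac{i}{p-m}$. This only illustrates the series structure; the matrix ordering of the full Dirac-algebra expression is not probed by this sketch.

```python
# Scalar (commuting) analogue of the expansion (6.3): partial sums of
# S + S(-ib)S + S(-ib)S(-ib)S + ...  must converge to i/(p - m - b)
# as long as |b/(p - m)| < 1.
p, m, b = 2.3 + 0.1j, 1.0, 0.05   # |b| small, so the series converges fast

S = 1j / (p - m)                  # free propagator i/(p - m)
exact = 1j / (p - m - b)          # full propagator S_b(p)

approx = 0j
term = S
for n in range(12):
    approx += term
    term = term * (-1j * b) * S   # one more insertion (-ib) S

assert abs(approx - exact) < 1e-12
```

After twelve terms the partial sum agrees with the closed form to machine precision, since the ratio has modulus $\approx 0.04$ here.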
\section{Chern-Simons induction through Lorentz symmetry breaking}
The radiative corrections to the usual QED action will be computed for a system of fermions coupled to a field $A_\mu$ formulated in (3+1)-dimensional spacetime, employing the perturbation $\not\!b\gamma_5$. Thus, the Lagrangian of this model is given by
\begin{equation}
{\cal L}=\bar{\psi}(i\!{\not\!\partial}-\not\!b\gamma_5-e\!{\not\!\!A} -m)\psi\,.
\end{equation}
To compute the induced term, we follow the same steps used in Chapter 5. The effective action for this model, which depends on the Lorentz-symmetry-breaking term $-\bar{\psi}\not\!b\gamma_5\psi$, is defined in the one-loop approximation as follows
\begin{equation}
e^{iS_{ef}[b,m]}=N\int D\bar{\psi}D\psi\,exp\left[i\int d^4x\,\bar{\psi}(i\!{\not\!\partial}-e\not\!\!A-\not\!b\gamma_5-m)\psi\right]\,,
\end{equation}
where $N$ is a normalization constant.
\smallskip
Using Grassmann variables and integrating over the fermion fields, one obtains
\begin{equation}
e^{iS_{ef}[b,m]}=N\,det(i\!{\not\!\partial}-e\not\!\!A-\not\!b\gamma_5-m)\,,
\end{equation}
that is,
\begin{equation}
S_{ef}[b,m]=-iTr\ln[i\!{\not\!\partial}-e\not\!\!A-\not\!b\gamma_5-m]\,.
\end{equation}
For $A$ and $B$ two non-commuting matrices, we obtain the following (formal) identity:
\begin{eqnarray*}
\ln(B-A)&=&\ln B\left(1-\frac{1}{B}A \right)\\
&&\\
&=&\ln B+\ln\left(1-\frac{1}{B}A\right)\\
&&\\
&=&\ln B-\frac{1}{B}A-\frac{1}{2}\frac{1}{B}A\frac{1}{B}A-\frac{1}{3}\frac{1}{B}A\frac{1}{B}A\frac{1}{B}A-\cdots\\
&&\\
&=&\ln B-\sum\limits_n \frac{1}{n}\left[\frac{1}{B}A\right]^n\,.
\end{eqnarray*}
Setting $A=e\not\!\!A$ and $B=i\!{\not\!\partial}-\not\!b\gamma_5-m$ in expression (6.7), we find
\begin{eqnarray}
S_{ef}[b,m]=-iTr\ln[i\!{\not\!\partial}-\not\!b\gamma_5-m]+iTr\sum\limits_{n=1}^{\infty}\frac{1}{n}\left[ \frac{1}{i\!{\not\!\partial}-\not\!b\gamma_5-m}e\not\!\!A \right]^n\,.
\end{eqnarray}
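The logarithmic expansion used above can be checked numerically in the commuting case, where $\ln(B-A)=\ln B-\sum_n\frac{1}{n}(A/B)^n$ holds whenever $|A/B|<1$; for matrices the identity is formal and is used here only under the trace. A small sketch:

```python
import math

# Scalar check of the expansion behind (6.8):
# ln(B - A) = ln B - sum_{n>=1} (1/n) (A/B)^n   for |A/B| < 1.
B, A = 3.0, 0.4
x = A / B

series = math.log(B) - sum((x ** n) / n for n in range(1, 60))
assert abs(series - math.log(B - A)) < 1e-12
```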
The first term of the expansion above corresponds to a constant added to the action and is therefore irrelevant, since it does not depend on the gauge field $\not\!\!A$. The contributions come from the terms with $n\geqslant1$. Thus, for $n=1$ in the expansion above, we have
\begin{equation}
S_{ef}^{(1)}=ieTr\left[\frac{1}{i\!{\not\!\partial}-\not\!b\gamma_5-m}\not\!\!A \right]
\end{equation}
or
\begin{equation}
S_{ef}^{(1)}=ie\,Tr\int d^4x\int\frac{d^4p}{(2\pi)^4}\frac{1}{\not\!p-\not\!b\gamma_5-m}\not\!\!A\,.
\end{equation}
The contributions of (6.10) give rise to {\it tadpole}-type terms, which are linear in $A_\mu$ and ultraviolet divergent. Since these terms do not contribute to the CS induction, they will be disregarded in the calculations that follow. Nevertheless, we illustrate their graphs at first order in the gauge field.
\hspace{-1.1cm} \begin{picture}(500,250)(0,0)
\vspace{-3cm}
\Photon(0,200)(80,200){3}{4} \BCirc(100,200){20} \BCirc(100,200){18.3} \ArrowLine(119,201)(119,199) \Text(140,200)[]{=}
\Photon(160,200)(240,200){3}{4} \BCirc(260,200){20} \ArrowLine(280,201)(280,199) \Text(300,200)[]{+}
\Photon(320,200)(400,200){3}{4} \ArrowArcn(420,200)(20,180,0) \ArrowArcn(420,200)(20,360,180)\Text(441,200)[]{$\times$}
\Text(460,200)[]{+}
\Photon(60,100)(140,100){3}{4}\ArrowArcn(160,100)(20,180,90) \ArrowArcn(160,100)(20,90,270) \ArrowArcn(160,100)(20,270,180)
\Text(161,120)[]{$\times$} \Text(161,80)[]{$\times$} \Text(216,100)[]{+}
\Photon(250,100)(330,100){3}{4} \ArrowArcn(350,100)(20,180,90) \ArrowArcn(350,100)(20,90,0) \ArrowArcn(350,100)(20,0,270) \ArrowArcn(350,100)(20,270,180)
\Text(350,120)[]{$\times$} \Text(371,100)[]{$\times$} \Text(350,80)[]{$\times$} \Text(420,100)[]{+\hspace{3eM}$\cdots$}
\end{picture}\\
\vspace{-3cm}
{\footnotesize First-order contributions in the gauge field, known as \textit{tadpoles}. The graph on the left-hand side refers to (6.10).}
The second-order contributions, i.e., those with $n=2$, yield
\begin{eqnarray}
S_{ef}^{(2)}&=&\frac{ie^2}{2}Tr\left[\frac{1}{i\!{\not\!\partial}-\not\!b\gamma_5-m}\not\!\!A \frac{1}{i\!{\not\!\partial}-\not\!b\gamma_5-m}\not\!\!A \right]\nonumber\\
&&\nonumber\\
&=&-\frac{ie^2}{2}Tr[S_b(p)\not\!\!AS_b(p)\not\!\!A]\,.
\end{eqnarray}
The computation of the trace in the action above is analogous to that of an operator ${\cal O}$ depending on the Dirac matrices and on the internal indices of the Lie group. Its total trace $Tr$ is then identical to the one computed in Chapter 5 but, in $(3+1)$ dimensions, it is defined by:
\begin{equation}
Tr{\cal O}\,\dot{=}\,tr\,tr_D\int d^4x\langle x|{\cal O}|x'\rangle\bigg|_{x=x'}\,,
\end{equation}
where the symbol $tr_D$ indicates that the trace is taken over the Dirac $\gamma$ matrices in the standard representation.
\smallskip
Inserting complete sets of normalized operators in position and momentum space,
\begin{equation}
\int d^4x|x\rangle\langle x|=1\quad,\quad \int\frac{d^4p}{(2\pi)^4}|p\rangle\langle p|=1\,,
\end{equation}
where, again, $\langle x|p\rangle=\langle p|x\rangle^\ast=e^{ipx}$, it follows that
\begin{eqnarray*}
S_{ef}^{(2)}&=& \frac{ie^2}{2}tr\,tr_D \!\int\! d^4x \!\int\! d^4y \!\int\!\frac{d^4p}{(2\pi)^4} \!\int\!\frac{d^4q}{(2\pi)^4}\\
&&\\
&&\times\langle x|\frac{1}{i\!{\not\!\partial}-\not\!b\gamma_5-m}|p\rangle\langle p|\not\!\!A|y\rangle \langle y|\frac{1}{i\!{\not\!\partial}-\not\!b\gamma_5-m}|q\rangle\langle q|\not\!\!A|x\rangle \\
&&\\
&=&\frac{ie^2}{2}tr\,tr_D \!\int\! d^4x \!\int\! d^4y \!\int\!\frac{d^4p}{(2\pi)^4} \!\int\!\frac{d^4q}{(2\pi)^4}\\
&&\\
&&\times\frac{1}{\not\!p-\not\!b\gamma_5-m}\not\!\!A(y)\frac{1}{\not\!q-\not\!b\gamma_5-m}\not\!\!A(x)\langle x|p\rangle \langle p|y\rangle\langle y|q\rangle\langle q|x\rangle\\
&&\\
&=& \frac{ie^2}{2}tr\,tr_D \!\int\! d^4x \!\int\!\frac{d^4p}{(2\pi)^4}\frac{1}{\not\!p-\not\!b\gamma_5-m}\!\int\! d^4y\not\!\!A(y) \!\int\!\frac{d^4k}{(2\pi)^4}e^{ik(x-y)}\frac{1}{\not\!p-i\!{\not\!\partial}-\not\!b\gamma_5-m}\not\!\!A(x)\\
&&\\
&=& \frac{ie^2}{2}tr\,tr_D \!\int\! d^4x \!\int\!\frac{d^4p}{(2\pi)^4}\frac{1}{\not\!p-\not\!b\gamma_5-m}\!\int\! d^4y\not\!\!A(y)\delta(x-y)\frac{1}{\not\!p-i\!{\not\!\partial}-\not\!b\gamma_5-m}\not\!\!A(x)\\
&&\\
&=&\frac{ie^2}{2}tr\,tr_D \!\int\! d^4x \!\int\!\frac{d^4p}{(2\pi)^4}\frac{1}{\not\!p-\not\!b\gamma_5-m}\not\!\!A(x)\frac{1}{\not\!p-i\!{\not\!\partial}-\not\!b\gamma_5-m}\not\!\!A(x)\,,
\end{eqnarray*}
where, using the change of variable $p-q\,\to\,k$ and the propagator (6.1), we obtain
\begin{equation}
S_{ef}^{(2)}=\frac{ie^2}{2}tr\,tr_D \!\int\! d^4x \!\int\!\frac{d^4p}{(2\pi)^4}S_b(p)\not\!\!AS_b(p-i\!{\not\partial})\not\!\!A\,.
\end{equation}
\vspace*{2.3cm}
\hspace{-1.5cm} \begin{picture}(500,250)(0,0)
\Photon(0,300)(80,300){3}{5} \BCirc(105,300){25} \BCirc(105,300){23} \ArrowLine(104,324)(106,324) \ArrowLine(106,276.2)(104,276.2)
\Photon(130,300)(210,300){3}{5} \Text(225,300)[]{=}
\Photon(240,300)(320,300){3}{5}\ArrowArcn(345,300)(25,180,0) \ArrowArcn(345,300)(25,360,180)\Photon(370,300)(450,300){3}{5}
\Text(470,300)[]{+}
\Photon(0,200)(80,200){3}{5} \ArrowArcn(105,200)(25,180,90) \ArrowArcn(105,200)(25,90,0) \ArrowArcn(105,200)(25,0,180)
\Photon(130,200)(210,200){3}{5} \Text(105,226)[]{$\times$} \Text(225,200)[]{+}
\Photon(240,200)(320,200){3}{5} \ArrowArcn(345,200)(25,180,0) \ArrowArcn(345,200)(25,0,270) \ArrowArcn(345,200)(25,270,180)
\Text(345,175)[]{$\times$} \Photon(370,200)(450,200){3}{5} \Text(470,200)[]{+}
\Photon(0,100)(80,100){3}{5} \ArrowArcn(105,100)(25,180,90) \ArrowArcn(105,100)(25,90,0) \ArrowArcn(105,100)(25,0,270) \ArrowArcn(105,100)(25,270,180) \Text(105,125)[]{$\times$} \Photon(130,100)(210,100){3}{5} \Text(105,75)[]{$\times$} \Text(225,100)[]{+}
\Photon(240,100)(320,100){3}{5} \ArrowArcn(345,100)(25,180,120) \ArrowArcn(345,100)(25,120,60) \ArrowArcn(345,100)(25,60,0) \ArrowArcn(345,100)(25,0,180) \Photon(370,100)(450,100){3}{5} \Text(470,100)[]{+} \Text(335,122)[]{$\times$} \Text(357.8,122)[]{$\times$}
\Photon(0,0)(80,0){3}{5} \ArrowArcn(105,0)(25,180,120) \ArrowArcn(105,0)(25,120,60) \ArrowArcn(105,0)(25,60,0) \ArrowArcn(105,0)(25,0,180) \Photon(130,0)(210,0){3}{5} \Text(245,0)[]{+\hspace{2eM}$\cdots$} \Text(95,-22)[]{$\times$} \Text(117.8,-22)[]{$\times$}
\end{picture}
\vspace{1.75cm}
\begin{center}
\footnotesize Feynman diagrams in the one-loop approximation, bilinear in the gauge field, for the computation of the action (6.14) in terms of the expanded fermionic propagator. Each $\times$ indicates an insertion of $\not\!b\gamma_5$ in the fermion propagator.
\end{center}
The radiative corrections are obtained by introducing the expanded fermion propagator (6.3). Thus, we have
\begin{eqnarray}
S_b(p)&=&\frac{i}{\not\!p-m}+\frac{i}{\not\!p-m}\not\!b\gamma_5\frac{1}{\not\!p-m}+\frac{i}{\not\!p-m}\not\!b\gamma_5\frac{1}{\not\!p-m}\not\!b\gamma_5\frac{1}{\not\!p-m}+\cdots\nonumber\\
&&\nonumber\\
&=&\frac{i(\not\!p+m)}{p^2-m^2}+\frac{i}{(p^2-m^2)^2}(\not\!p+m) \not\!b\gamma_5 (\not\!p+m)\nonumber\\
&&\nonumber\\
&+&\frac{i}{(p^2-m^2)^3}(\not\!p+m) \not\!b\gamma_5 (\not\!p+m) \not\!b\gamma_5 (\not\!p+m) +\cdots
\end{eqnarray}
Using the notation
\begin{eqnarray}
P&=&\not\!p+m\\
B_5&=&\not\!b\gamma_5\\
{\cal D}^2&=&p^2-m^2
\end{eqnarray}
the propagator is written in compact form, up to terms linear in $B_5$, as:
\begin{equation}
S_b(p)=\frac{i}{{\cal D}^2}P+\frac{i}{{\cal D}^4}PB_5P+\cdots
\end{equation}
Now, expanding the propagator $S_b(p-i\partial)$ in terms of $\tilde{B}_5=\not\!b\gamma_5+i\!{\not\!\partial}$, up to terms linear in $\tilde{B}_5$,
\begin{equation}
S_b(p-i\partial)=\frac{i}{{\cal D}^2}P+\frac{i}{{\cal D}^4}P\tilde{B}_5P+\cdots
\end{equation}
Developing the integrand of (6.14) in terms of (6.19) and (6.20), we have, in abbreviated notation,
\begin{eqnarray}
&&S_b(p)\not\!\!AS_b(p-i\partial)\not\!\!A=\left[ \frac{i}{{\cal D}^2}P+\frac{i}{{\cal D}^4}PB_5P+\cdots \right]\not\!\!A\left[ \frac{i}{{\cal D}^2}P+\frac{i}{{\cal D}^4}P\tilde{B}_5P+\cdots\right]\not\!\!A\nonumber\\
&&\nonumber\\
&&=-\frac{1}{{\cal D}^4}P\not\!\!AP\not\!\!A-\frac{1}{{\cal D}^6}P\not\!\!AP\tilde{B}_5P\not\!\!A-\frac{1}{{\cal D}^6}PB_5P\not\!\!AP\not\!\!A-\frac{1}{{\cal D}^8}PB_5P\not\!\!AP\tilde{B}_5P\not\!\!A\,.\nonumber\\
&&
\end{eqnarray}
The terms that give rise to the CS induction are those depending on only one derivative of the field $A_\mu$\footnote{Maxwell-type terms involve two derivatives.}: $\not\!\!b\not\!\!\!A\not\!\!\partial\not\!\!A\gamma_5$, which produce the structure of the CS term. This contribution originates from the last term of the expression above. Thus, using the explicit forms (6.15-18), it follows that
\vspace{-1cm}
\begin{eqnarray*}
&& (\not\!p+m)\not\!b\gamma_5(\not\!p+m)\not\!\!A(\not\!p+m)(\not\!b\gamma_5+i\!{\not\!\partial})(\not\!p+m)\not\!\!A=\\
&&\\
&&(\not\!p\not\!b\gamma_5\not\!p\not\!\!A+m\not\!p\not\!b\gamma_5\not\!\!A+m\not\!b\gamma_5\not\!p\not\!\!A+m^2\not\!b\gamma_5\not\!\!A)(\not\!p\not\!b\gamma_5+i\not\!p\not\!\partial+m\not\!b\gamma_5+im\not\!\partial)\\
&&\\
&&\hspace{11cm}\times(\not\!p\not\!\!A+m\not\!\!A )\\
&&\\
&&=i(\not\!p\not\!b\gamma_5\not\!p\not\!\!A+m\not\!p\not\!b\gamma_5\not\!\!A+m\not\!b\gamma_5\not\!p\not\!\!A+m^2\not\!b\gamma_5\not\!\!A)(\not\!p\not\!\partial\not\!p\not\!\!A+m\not\!p\not\!\partial\not\!\!A\\
&&\\
&&\hspace{8.5cm}+m\not\!\partial\not\!p\not\!\!A+m^2\not\!\partial\not\!\!A )+\cdots\\
&&\\
&&=+i\not\!p\not\!b\not\!p\not\!\!A\not\!p\not\!\partial\not\!p\not\!\!A\gamma_5+im^2\not\!p\not\!b\not\!p\not\!\!A\not\!\partial\not\!\!A\gamma_5+im^2\not\!p\not\!b\not\!\!A\not\!p\not\!\partial\not\!\!A\gamma_5\\
&&\\
&&+im^2\not\!p\not\!b\not\!\!A\not\!\partial\not\!p\not\!\!A\gamma_5-im^2\not\!b\not\!p\not\!\!A\not\!p\not\!\partial\not\!\!A\gamma_5-im^2\not\!b\not\!p\not\!\!A\not\!\partial\not\!p\not\!\!A\gamma_5\\
&&\\
&&-im^2\not\!b\not\!\!A\not\!p\not\!\partial\not\!p\not\!\!A\gamma_5-im^4\not\!b\not\!\!A\not\!\partial\not\!\!A\gamma_5+\cdots
\end{eqnarray*}
In the steps above, we omitted the terms that do not contain $\not\!\!\partial$ and those that are odd in the number of Dirac matrices since, in those cases, the corresponding traces vanish. Now, let us reduce the terms of the above expression from eight and six to four $\gamma$ matrices using the properties $\not\!c\not\!d=-\not\!d\not\!c+2(c\cdot d)$ and $\not\!c^2=c^2$:
\newpage
\begin{eqnarray}
&& i\not\!p[-\not\!p\not\!b+2(b\cdot p)]\not\!\!A\not\!p[-\not p\not\!\partial+2(p\cdot \partial)]\not\!\!A\gamma_5+im^2\not\!p[-\not\!p\not\!b+2(b\cdot p) ]\not\!\!A\not\!\partial\gamma_5\nonumber\\
&&\nonumber\\
&&+im^2\not\!p\not\!b\not\!\!A\not\!p\not\!\partial\not\!\!A\gamma_5+im^2\not\!p\not\!b\not\!\!A[-\not\!p\not\!\partial+2(p\cdot\partial)]\not\!\!A\gamma_5\nonumber\\
&&\nonumber\\
&&-im^2\not\!b\not\!p[-\not\!p\not\!\!A+2(p\cdot A)]\not\!\partial\not\!\!A\gamma_5-im^2\not\!b\not\!p\not\!\!A\not\!\partial\not\!p\not\!\!A\gamma_5\nonumber\\
&&\nonumber\\
&&-im^2\not\!b[-\not\!p\not\!\!A+2(p\cdot A)]\not\!\partial\not\!p\not\!\!A\gamma_5-im^4\not\!b\not\!\!A\not\!\partial\not\!\!A\gamma_5\nonumber\\
&&\nonumber\\
&& =ip^4\not\!b\not\!\!A\not\!\partial\not\!\!A\gamma_5 -2ip^2\not\!b\not\!\!A(p\cdot\partial)\not\!p\not\!\!A\gamma_5 -2ip^2(b\cdot p)\not\!p\not\!\!A\not\!\partial\not\!\!A\gamma_5 \nonumber\\
&&\nonumber\\
&&+4i(b\cdot p)\not\!p\not\!\!A(p\cdot\partial)\not\!p\not\!\!A\gamma_5+2im^2(b\cdot p)\not\!p\not\!\!A\not\!\partial\not\!\!A\gamma_5+2im^2\not\!p\not\!b\not\!\!A(p\cdot\partial)\not\!\!A\gamma_5\nonumber\\
&&\nonumber\\
&&-2im^2(p\cdot A)\not\!b\not\!\partial\not\!p\not\!\!A\gamma_5-im^4\not\!b\not\!\!A\not\!\partial\not\!\!A\gamma_5
\end{eqnarray}
The next step is to insert the result above, together with (6.20), into the action (6.14) and compute the momentum integrals. Note that, by power counting, the integrals proportional to $p^4$ are logarithmically divergent, while those proportional to $p^2$ are finite:
\begin{eqnarray*}
\int_{inf} \frac{d^4p}{(2\pi)^4}\frac{p^4}{(p^2-m^2)^4}\hspace{0.8cm}\mbox{,}\hspace{0.8cm} \int_{fin} \frac{d^4p}{(2\pi)^4}\frac{p^2}{(p^2-m^2)^4}\hspace{0.8cm}\mbox{and}\hspace{0.8cm}\int_{fin} \frac{d^4p}{(2\pi)^4}\frac{1}{(p^2-m^2)^4}\,.
\end{eqnarray*}
Let us explicitly compute the fifth (finite) term of expression (6.22). Using (6.13), (6.20), the Feynman integrals evaluated explicitly in Appendix G, and the trace of the Dirac matrices, we have:
\begin{eqnarray}
&&-\frac{ie^2}{2}tr_D\int d^4x\int\frac{d^4p}{(2\pi)^4}\frac{2im^2(b\cdot p)\not\!p\not\!\!A\not\!\partial\not\!\!A\gamma_5}{(p^2-m^2)^4} \nonumber\\
&&\nonumber\\
&=&e^2m^2\int d^4x\,tr_D(\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma\gamma_5)b_\alpha A_\nu\partial_\rho A_\sigma\int\frac{d^4p}{(2\pi)^4}\frac{p^\alpha p_\mu}{(p^2-m^2)^4}\nonumber\\
&&\nonumber\\
&=&4ie^2m^2\int d^4x\,\varepsilon^{\mu\nu\rho\sigma}\delta^\alpha_\mu b_\alpha A_\nu\partial_\rho A_\sigma\frac{-i}{192\pi^2m^2}\nonumber\\
&&\nonumber\\
&=&\frac{e^2}{48\pi^2}\int d^4x\,\varepsilon^{\mu\nu\rho\sigma}b_\mu A_\nu\partial_\rho A_\sigma
\end{eqnarray}
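The finite momentum integral used above can be cross-checked numerically. After a Wick rotation, the magnitude of $\int\!\frac{d^4p}{(2\pi)^4}\frac{p^\alpha p_\mu}{(p^2-m^2)^4}$, with $p^\alpha p_\mu\to\frac{1}{4}\delta^\alpha_\mu\, p^2$ under the symmetric integration, reduces to $\frac{1}{4}\cdot\frac{1}{8\pi^2}\int_0^\infty dp\,\frac{p^5}{(p^2+m^2)^4}$, whose radial part equals $1/(6m^2)$, reproducing the coefficient $1/(192\pi^2m^2)$ appearing in (6.23). A rough numerical sketch (simple trapezoidal rule, with $m=1$):

```python
import math

# Euclidean radial integral behind the coefficient -i/(192 pi^2 m^2):
# integral_0^inf  p^5 / (p^2 + m^2)^4  dp  =  1 / (6 m^2).
m = 1.0
f = lambda p: p ** 5 / (p * p + m * m) ** 4

# Trapezoidal rule on [0, 200]; the truncated tail contributes ~1e-5.
N, P = 200_000, 200.0
h = P / N
radial = h * (sum(f(i * h) for i in range(1, N)) + 0.5 * (f(0.0) + f(P)))

assert abs(radial - 1.0 / (6.0 * m ** 2)) < 1e-3

# Angular factors: (1/4) * (2 pi^2 / (2 pi)^4) = (1/4) * 1/(8 pi^2),
# giving the full coefficient 1/(192 pi^2 m^2).
coeff = 0.25 * radial / (8.0 * math.pi ** 2)
assert abs(coeff - 1.0 / (192.0 * math.pi ** 2 * m ** 2)) < 1e-5
```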
The first four terms of expression (6.22) generate divergent momentum integrals in the effective action. To carry out the calculations, we apply the regularization\footnote{See Appendix G.} $D=4-2\epsilon$ to the divergent integrals. As long as $\epsilon\neq0$, these originally divergent integrals are kept finite, so that we may add and subtract them.
\smallskip
Thus, we compute the effective action by first adding the divergent parts, kept finite by the regularization, to the finite parts:
\newpage
\begin{eqnarray}
&&S_{CS}=S_{CS}^{inf}+S_{CS}^{fin}\nonumber\\
&&\nonumber\\
&&= -\frac{ie^2}{2}tr_D\int d^4x\{\int\frac{d^Dp}{(2\pi)^D}\frac{1}{(p^2-m^2)^4} [ip^4\not\!b\not\!\!A\not\!\partial\not\!\!A\gamma_5 -2ip^2\not\!b\not\!\!A(p\cdot\partial)\not\!p\not\!\!A\gamma_5\nonumber\\
&&\nonumber\\
&& -2ip^2(b\cdot p)\not\!p\not\!\!A\not\!\partial\not\!\!A\gamma_5 +4i(b\cdot p)\not\!p\not\!\!A(p\cdot\partial)\not\!p\not\!\!A\gamma_5]\}\nonumber\\
&&\nonumber\\
&&-\frac{ie^2}{2}tr_D\int d^4x\int\frac{d^4p}{(2\pi)^4}\frac{1}{(p^2-m^2)^4}\{[2im^2(b\cdot p)\not\!p\not\!\!A\not\!\partial\not\!\!A\gamma_5 \nonumber\\
&&\nonumber\\
&&+2im^2\not\!p\not\!b\not\!\!A(p\cdot\partial)\not\!\!A\gamma_5-2im^2(p\cdot A)\not\!b\not\!\partial\not\!p\not\!\!A\gamma_5-im^4\not\!b\not\!\!A\not\!\partial\not\!\!A\gamma_5]\}\nonumber\\
&&\nonumber\\
&&=2e^2\int d^4x\,\varepsilon^{\mu\nu\rho\sigma}\left\{i\cdot\frac{i}{24\pi^2}\left[\frac{1}{\epsilon}+\log\frac{4\pi}{m^2}-\gamma+{\cal O}(\epsilon) \right]b_\mu A_\nu\partial_\rho A_\sigma\right.\nonumber\\
&& \left. \right.\nonumber\\
&&\left. -2i\cdot\frac{i}{96\pi^2}\left[\frac{1}{\epsilon}+\log\frac{4\pi}{m^2}-\gamma+{\cal O}(\epsilon) \right]b_\mu A_\nu\partial_\rho A_\sigma -2i\cdot\frac{i}{96\pi^2}\left[\frac{1}{\epsilon}+\log\frac{4\pi}{m^2}-\gamma+{\cal O}(\epsilon) \right]b_\mu A_\nu\partial_\rho A_\sigma \right.\nonumber\\
&& \left. \right.\nonumber\\
&&\left.+4i\cdot\frac{i}{384\pi^2}\left[\frac{1}{\epsilon}+\log\frac{4\pi}{m^2}-\gamma+{\cal O}(\epsilon) \right](b_\mu A_\nu\partial_\rho A_\sigma -b_\mu A_\nu\partial_\rho A_\sigma) \right\}\nonumber\\
&&\nonumber\\
&& + 2e^2\int d^4x\,\varepsilon^{\mu\nu\rho\sigma}\left[ 2im^2\cdot\frac{-i}{192\pi^2m^2}b_\mu A_\nu\partial_\rho A_\sigma +2im^2\cdot\frac{-i}{192\pi^2m^2}b_\nu A_\rho\partial_\mu A_\sigma \right. \nonumber\\
&& \left. \right. \nonumber\\
&& \left. -2im^2\cdot\frac{-i}{192\pi^2m^2}b_\mu A_\rho\partial_\nu A_\sigma -im^4\cdot\frac{i}{96\pi^2m^4} b_\mu A_\nu\partial_\rho A_\sigma \right]
\end{eqnarray}
Since the contributions of the duly regularized divergent parts cancel among themselves, there is no need to take the limit $m\,\to\,0$. We thus obtain the following term arising from the effective action \cite{Gom}:
\begin{equation}
S_{CS}^{(3+1)D}=\frac{e^2}{12\pi^2}\int d^4x\varepsilon^{\mu\nu\rho\sigma}b_\mu A_\nu\partial_\rho A_\sigma\,.
\end{equation}
\vspace{0.1cm}
Therefore, we conclude that adding to the Lagrangian of the usual gauge theory a term containing a background field that breaks Lorentz symmetry induces a Chern-Simons-type action in four-dimensional spacetime. As is well noted in the literature, this term is finite and has been obtained through several regularization methods. The only difference is the proportionality constant, which depends exclusively on the regularization method employed.
\section{Birefringence of the classical CS photons}
One of the consequences of the Lorentz symmetry breaking in (3+1)$D$ QED caused by the induced term is the phenomenon of birefringence of the classical CS photons. The Lagrangian of this theory, written in terms of the action (6.25), is given by
\begin{equation}
{\cal L}_{CS}^{(3+1)D}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-\frac{1}{2}\varepsilon^{\mu\nu\rho\sigma}\eta_\mu A_\nu \partial_\rho A_\sigma-A_\mu J^\mu\,,
\end{equation}
where we used the relation
\begin{equation}
\eta_\mu=\frac{e^2}{6\pi^2}b_\mu\,.
\end{equation}
For this Lagrangian, let us compute the derivatives
\begin{eqnarray}
\frac{\partial{\cal L}}{\partial A_\nu}&=&-\frac{1}{2}\varepsilon^{\mu\nu\rho\sigma}\eta_\mu\partial_\rho A_\sigma-J^\nu\\
&&\nonumber\\
\frac{\partial{\cal L}}{\partial(\partial_\mu A_\nu)}&=&-\frac{1}{4}\left(\frac{\partial F_{\alpha\beta}}{\partial(\partial_\mu A_\nu)}F^{\alpha\beta} + F_{\alpha\beta}\frac{\partial F^{\alpha\beta}}{\partial(\partial_\mu A_\nu)}\right)-\frac{1}{2}\varepsilon^{\mu\nu\rho\sigma}\eta_\mu\frac{\partial(\partial_\rho A_\sigma)}{\partial(\partial_\mu A_\nu)}\nonumber\\
&&\nonumber\\
&=&-\frac{1}{2}( \delta^\mu_\alpha\delta^\nu_\beta -\delta^\mu_\beta\delta^\nu_\alpha)F^{\alpha\beta}-\frac{1}{2}\varepsilon^{\mu\nu\rho\sigma}\eta_\mu A_\nu \delta^\mu_\rho\delta^\nu_\sigma\nonumber\\
&&\nonumber\\
\partial_\mu\frac{\partial{\cal L}}{\partial(\partial_\mu A_\nu)}&=&-\partial_\mu F^{\mu\nu}+\frac{1}{2}\varepsilon^{\mu\nu\rho\sigma}\eta_\mu\partial_\rho A_\sigma
\end{eqnarray}
Rearranging the Lorentz indices, using the antisymmetry of $\varepsilon$, and applying the variational method through the Euler-Lagrange equation,
\begin{equation}
\partial_\mu\frac{\partial{\cal L}}{\partial(\partial_\mu A_\nu)}=\frac{\partial{\cal L}}{\partial A_\nu}\,,
\end{equation}
to the Lagrangian (6.26), and using the result (5.2), we obtain
\begin{equation}
\partial_\mu F^{\mu\nu}-\varepsilon^{\mu\nu\rho\sigma}\eta_\mu\partial_\rho A_\sigma=J^\nu\,.
\end{equation}
In this theory the current is also conserved, owing to the antisymmetry of the electromagnetic field tensor and to the Bianchi identity:
\begin{eqnarray}
\varepsilon^{\mu\nu\rho\sigma}\partial_\mu\partial_\nu F_{\rho\sigma}=0\,.
\end{eqnarray}
The identity (6.32) above ensures that the homogeneous Maxwell equations\footnote{We are using $F^{\mu\nu\,\ast}=\dfrac{1}{2}\varepsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}$, $F_{0i}=E^i$ and $F_{ij}=\dfrac{1}{2}\varepsilon_{jk\ell}B^\ell$\,.} $\partial_\mu F^{\mu\nu\,\ast}=0$ remain unchanged:
\begin{eqnarray}
&& {\mbox{\boldmath{$\nabla$}}}\cdot{\bf B}=0\\
&&\nonumber\\
&&{\mbox{\boldmath{$\nabla$}}}\times{\bf E}+\frac{\partial {\bf B}}{\partial t}=0\,.
\end{eqnarray}
The modifications generated by the coupled four-vector $\eta^\mu=(\eta_0,{\mbox{\boldmath{$\eta$}}})$ can be interpreted as the addition of a time-dependent field to the usual current source:
\begin{eqnarray}
&&{\mbox{\boldmath{$\nabla$}}}\times{\bf B}-{\mbox{\boldmath{$\eta$}}}\times{\bf E}+\eta^0{\bf B}={\bf J}+\frac{\partial{\bf E}}{\partial t}\\
&&{\mbox{\boldmath{$\nabla$}}}\cdot{\bf E}+{\mbox{\boldmath{$\eta$}}}\cdot{\bf B}=\rho\,.
\end{eqnarray}
These inhomogeneous equations also preserve gauge invariance. However, they display a feature not found in the standard theory: the introduction of Lorentz symmetry breaking allows electric fields to act as sources of currents and magnetic fields to act as sources of electric charges.
\smallskip
Let us now analyze the consequences of the Lorentz symmetry breaking for the equation of motion of the CS field $A_\mu$. In the absence of external sources ($J^\mu=0$), using $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$, the equation of motion (6.31) reads:
\begin{equation}
\square A^\nu-\partial^\nu(\partial_\mu A^\mu)-\varepsilon^{\mu\nu\rho\sigma}\eta_\mu\partial_\rho A_\sigma=0.
\end{equation}
Thus, with $k^\mu=(\omega,{\bf k})$, writing the gauge field in momentum space,
\begin{equation}
A_\mu(x)=\int\frac{d^4k}{(2\pi)^4}A_\mu(k)\,e^{ik\cdot x}\,,
\end{equation}
and using the Lorentz gauge $\partial_\mu A^\mu=0$, we obtain the following expression
\begin{equation}
k^2+i\varepsilon^{\mu\nu\rho\sigma}\eta_\mu k_\sigma=0\,.
\end{equation}
It is convenient to multiply the equation above by its complex conjugate:
\begin{eqnarray*}
(k^2-i\varepsilon^{\mu\nu\rho\sigma}\eta_\rho k_\sigma)(k^2+i\varepsilon_{\mu\nu\alpha\beta}\eta^\alpha k^\beta)&=&0\\
k^4+\varepsilon^{\mu\nu\rho\sigma}\varepsilon_{\mu\nu\alpha\beta}(\eta_\rho k_\sigma)(\eta^\alpha k^\beta)&=&0\,.
\end{eqnarray*}
Contracting the two Levi-Civita tensors, the expression above becomes
\begin{equation}
k^4+k^2\eta^2-(k\cdot\eta)^2=0\,,
\end{equation}
and expression (6.40) represents the dispersion relation for the CS photons in $(3+1)$ dimensions.
\smallskip
Consider the particular case $\eta^\mu=(\eta^0,{\bf 0})$ applied to equation (6.40). Solving a biquadratic equation for $\omega$, we obtain two solutions:
\begin{equation}
\omega_\pm=\sqrt{|{\bf k}|(|{\bf k}|\pm\eta^0)}\,.
\end{equation}
This equation indicates that the induced CS term generates the effect of birefringence, that is, the separation of the photons into two different polarization modes for the vector ${\bf k}$, with different group velocities:
\begin{equation}
v_{g\pm}=\frac{\partial \omega_\pm}{\partial |{\bf k}|}= \frac{|{\bf k}|}{\omega_\pm}\left( 1\pm\frac{\eta_0}{2|{\bf k}|} \right) \,.
\end{equation}
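Both the dispersion relation and the group velocities above can be verified numerically for the purely timelike case $\eta^\mu=(\eta^0,{\bf 0})$, using the $(+,-,-,-)$ metric so that $k^2=\omega^2-|{\bf k}|^2$, $k\cdot\eta=\omega\,\eta_0$ and $\eta^2=\eta_0^2$. A short sketch:

```python
import math

# Check that omega_pm = sqrt(|k|(|k| +/- eta0)) solves
# k^4 + k^2 eta^2 - (k.eta)^2 = 0, and compare the group-velocity
# formula with a finite-difference derivative of omega(|k|).
eta0, k = 0.3, 1.7

for sign in (+1.0, -1.0):
    omega = math.sqrt(k * (k + sign * eta0))
    k2 = omega * omega - k * k          # k^2 in the (+,-,-,-) metric
    disp = k2 * k2 + k2 * eta0 ** 2 - (omega * eta0) ** 2
    assert abs(disp) < 1e-12

    # v_g = (|k|/omega)(1 +/- eta0/(2|k|)) versus numerical d(omega)/d|k|
    vg = (k / omega) * (1.0 + sign * eta0 / (2.0 * k))
    eps = 1e-6
    wp = math.sqrt((k + eps) * (k + eps + sign * eta0))
    wm = math.sqrt((k - eps) * (k - eps + sign * eta0))
    assert abs(vg - (wp - wm) / (2 * eps)) < 1e-6
```

Note that for the $+$ mode the group velocity exceeds $1$, while for $\eta_0>|{\bf k}|$ the $-$ mode frequency becomes imaginary, the instability discussed below.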
The existence of photons traveling with different group velocities, besides being a strong indication of Lorentz violation, also points to an apparent instability of the theory. From the result (6.41), we see that $\omega$ becomes imaginary whenever $\eta_0>|{\bf k}|$. This means that there are imaginary, unstable solutions, which do not exist in conventional electrodynamics.
\smallskip
In an attempt to detect the background field $\eta_\mu$, Carroll, Field and Jackiw suggested \cite{Cae} comparing the predictions of the theory with experimental data. Assuming the case without breaking of rotational symmetry, i.e., $\eta^\mu=(\eta^0,{\bf 0})$, they confronted geomagnetic data with the Earth's magnetic field obtained from the solutions of the modified equations in the presence of sources. In the same reference, Jackiw and collaborators proposed solving equation (6.35) for the case ${\bf E}=0$. The solution of this equation for the magnetic field is expanded in powers of the position, yielding the known dipole term plus a perturbation depending on $\eta_0$. Knowing the value of the magnetic field in the azimuthal direction, one obtains the bound $\eta_0\lesssim6\times10^{-20}\,keV$. Analyzing also the polarization of the light emitted by distant galaxies, after removing the Faraday effect due to their rotations, the result obtained was $\eta_0\lesssim6\times10^{-20}\,keV$. Further details of these and other tests can be found in reference \cite{Car}, along with more recent results presented by Kostelecký in \cite{Dat}. Despite all the effort in comparing geomagnetic and astrophysical data with theoretical results, no evidence of Lorentz-symmetry-breaking background fields has been detected.
\chapter{Conclusion}
We have studied the possible effects that the Extended Standard Model can produce in certain phenomena when new Lorentz-symmetry-breaking elements are introduced into the Lagrangians of the conventional theory. These effects were analyzed in the contexts of quantum mechanics and of the radiative corrections in a system of fermions interacting with a gauge field. We note that such effects are generated by the axial coupling $\not\!b\gamma_5$.
\smallskip
We built theoretical machinery in the matter sector that was used in the calculations of the subsequent chapters. We analyzed some implications of Lorentz symmetry breaking in quantum mechanics, via Landau levels, through the energy shifts with and without flipping of the electron spin, and noted that such shifts depend directly on $|{\bf b}|$. The Dirac Hamiltonian modified by the theory was expanded in the non-relativistic limit, via the FW method, and was split into an unperturbed Pauli Hamiltonian and an interaction Hamiltonian depending on the Lorentz violation, written in terms of $a_\mu$ and $b_\mu$. The Zeeman effect was studied by analyzing the possible effects produced by these two fields separately. It was found that the field $a_\mu$ produces no change, since it merely redefines the zeros of energy and momentum. In the case of the axial coupling, an energy shift depending on the quantum number $m_j$ was found in the absence of the magnetic field. Our result is twice the one found in reference \cite{Man}. In the presence of the magnetic field, even an intense one, no correction to the energy was found.
\smallskip
We also computed the induced Chern-Simons action in four-dimensional spacetime. We verified that this is only possible if the Lorentz-symmetry-breaking term is introduced, since the term $\not\!b\gamma_5$ brings in the trace $tr_D[\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma\gamma_5]$, which generates the structure $\varepsilon^{\mu\nu\rho\sigma}$ characteristic of this induced term. Our result agrees with others obtained in the literature, which may differ from one another by a constant when different regularization methods are used. In our calculation, using the dimensional regularization method, we noted that the divergent terms of the induced action cancel each other, the limit $m\,\to\,0$ is not necessary, and the induced action is finite.
\smallskip
The consequences of the Chern-Simons action were analyzed through the computation of the propagation velocity of the Chern-Simons photons. As a result, we noted that these velocities split into two distinct polarization modes. We also verified that the energies can become imaginary for some specific values of the background field.
\smallskip
It is important to emphasize that, despite all the theoretical efforts made so far in comparing geomagnetic and astrophysical data with the results predicted by the equations modified by the Lorentz-symmetry-breaking terms, no experimental evidence of such background fields has been detected.
\section{Introduction}
\input{introduction}
\section{Methods}
\input{model}
\section{Experiments}
\input{experiments}
\section{Analysis}
\label{sec:analysis}
\input{analysis}
\section{Related Work}
\input{related}
\section{Conclusion}
In this paper, we propose a direct approach of improving content preservation for text style transfer by leveraging a semantic similarity metric as the content reward. Using a large pre-trained LM GPT-2 with our proposed rewards that target the different aspects of the output quality, our approach achieves strong performance in both automatic and human evaluation.
Moreover, we identify several problems in the commonly used automatic evaluation metrics and datasets, and propose several practical strategies to mitigate these problems, which makes these metrics more effective rewards for model training.
\subsection{Datasets}
We evaluate our approach on three datasets for sentiment transfer with positive and negative reviews: Yelp review dataset, Amazon review dataset provided by \citet{li-etal-2018-delete},\footnote{\url{https://github.com/lijuncen/Sentiment-and-Style-Transfer}}
and the IMDb movie review dataset provided by \citet{dai2019style}.\footnote{\url{https://github.com/fastnlp/nlp-dataset}}
We also evaluate our methods on a formality style transfer dataset, Grammarly's Yahoo Answers Formality Corpus (GYAFC),\footnote{\url{https://github.com/raosudha89/GYAFC-corpus}}
introduced in \citet{rao-tetreault-2018-dear}. Although it is a parallel corpus, we treat it as an unaligned corpus in our experiments. In order to compare to previous work, we chose the \textit{Family \& Relationships} category for our experiments.
Dataset statistics are shown in Table~\ref{tab:dataset}.
\begin{table}
\centering
\small
\begin{tabular}{lllll}
\hline
\textbf{Dataset} & \textbf{Style} & \textbf{Train} & \textbf{Dev} & \textbf{Test}\\
\hline
\multirow{2}{*}{Yelp} & Positive & 266K & 2000 & 500 \\
& Negative & 177K & 2000 & 500 \\
\hline
\multirow{2}{*}{Amazon} & Positive & 277K & 985 & 500 \\
& Negative & 279K & 1015 & 500\\
\hline
\multirow{2}{*}{IMDb} & Positive & 178K & 2000 & 1000 \\
& Negative & 187K & 2000 & 1000\\
\hline
\multirow{2}{*}{GYAFC} & Formal & 52K & 2247 & 500 \\
& Informal & 52K & 2788 & 500\\
\hline
\end{tabular}
\caption{\label{tab:dataset} Number of samples in the Train, Dev, and Test splits for each dataset in our experiments.
}
\end{table}
\subsection{Experimental Details}
Following previous work, we measure the style transfer accuracy using a FastText\footnote{\url{https://fasttext.cc/}}~\citep{joulin2017bag} style classifier trained on the respective training set of each dataset.
To measure content preservation, we use SIM and BLEU as metrics where self-SIM and self-BLEU are computed between the source sentences and system outputs, while ref-SIM and ref-BLEU are computed between the system outputs and human references when available.
To measure fluency, we use a pre-trained GPT-2 model to compute the perplexity.\footnote{Note that we did not fine-tune it on the training set.}
Our generator, GPT-2, has 1.5 billion parameters, and we train on a GTX 1080 Ti GPU for about 12 hours.
We compare our model with several state-of-the-art methods: DeleteAndRetrieve (D\&R)~\cite{li-etal-2018-delete}, B-GST~\cite{sudhakar2019transforming}, Cycle-Multi~\cite{dai2019style}, Deep-Latent~\cite{he2020probabilistic}, Tag\&Gen~\cite{madaan2020politeness}, and DualRL~\cite{luo2019dual}. We also compare the model only trained with the first stage (\textsc{Ours-Cycle}\xspace) as mentioned in section \ref{sub:learning} with our final model (\textsc{Ours-Direct}\xspace).
\begin{table}[t!]
\centering
\small
\begin{tabular}{llccc}
\hline
\textbf{Dataset}&\textbf{Model} & \textbf{Acc} & \textbf{PPL} & \textbf{BLEU} \\
\hline
\multirow{2}{*}{Yelp} & \textsc{Ours-Cycle}\xspace & 91.7 & 392 & 18.7 \\
& \textsc{Ours}-\textsc{Yelp}-\textsc{Adv} & 95.2 & 353 & 20.7 \\
\hline
\multirow{2}{*}{Amazon} & \textsc{Ours-Direct}\xspace & 62.2 & 205 & 30.1 \\
& \textsc{Ours}-\textsc{Amazon}-\textsc{Adv} & 83.2 & 228 & 29.0 \\
\hline
\end{tabular}
\caption{ \label{tab:adv}
Adversarial Results. \textbf{\textsc{Ours-Cycle}\xspace} denotes our first-stage model, \textbf{\textsc{Ours-Direct}\xspace} denotes our second-stage model. \textbf{\textsc{Ours}-\textsc{Yelp}-\textsc{Adv}} and \textbf{\textsc{Ours}-\textsc{Amazon}-\textsc{Adv}} denote the models which generate adversarial examples. \textbf{Acc} denotes the style transfer accuracy, \textbf{PPL} denotes the perplexity, and \textbf{BLEU} is computed between the human references and system outputs.
}
\end{table}
\subsection{Adversarial Examples}
\label{sub:adv}
Yelp and Amazon are arguably the most frequently used datasets for the sentiment transfer task. In our experiments, we found that the automatic evaluation metrics can be tricked on these datasets.
Table~\ref{tab:adv} shows the performance of the models which generate adversarial examples. Upon identifying these risks, we propose several design options that can effectively mitigate these problems.
\paragraph{Yelp Dataset}For the Yelp dataset, when trained without the adversarial discriminator $f_{adv}$ and the fluency reward,
our model (\textsc{Ours}-\textsc{Yelp}-\textsc{Adv}) is able to discover a trivial solution which receives high automatic evaluation scores: injecting a word that carries strong sentiment at the beginning of the output, and making minimal changes (if any) to the source sentences, as illustrated in Table~\ref{tab:comparison}. This obviously does not meet the objective of content-preserving sentiment transfer and is easily detectable by humans. In fact, after we manually removed the first word from each of the output sentences, the transfer accuracy dropped from 95.2 to 58.4. To address this problem, we introduced an auxiliary discriminator $f_{adv}$, as discussed above, to penalize such trivial outputs, since they can be easily captured by the discriminator. On the other hand, the output perplexity is not sensitive enough to this local feature, so using the fluency reward alone is not sufficient. Our final model is much more stable when the first word of its output sentences is removed, experiencing only a small drop in style transfer accuracy, from 94.2 to 88.2.
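The first-word diagnostic described above can be sketched as follows; this is an illustrative reimplementation, not our released code, and `classify` stands in for whatever style classifier is being probed:

```python
def strip_first_word(sentence: str) -> str:
    """Drop the leading token of a system output before re-scoring it, to detect
    outputs that game the style classifier with a single injected sentiment word."""
    parts = sentence.split(maxsplit=1)
    return parts[1] if len(parts) > 1 else ""

def accuracy_drop(outputs, classify):
    """Fraction of outputs whose predicted style flips once the first word is
    removed; `classify` is any callable mapping a sentence to a style label."""
    flipped = sum(classify(o) != classify(strip_first_word(o)) for o in outputs)
    return flipped / len(outputs)
```

A large `accuracy_drop` flags the trivial word-injection solution.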
\begin{table}[t!]
\centering
\small
\begin{tabular}{lcccc}
\hline
\multirow{2}{*}{Model} & \multicolumn{2}{c}{"game"} & \multicolumn{2}{c}{"phone"}\\\cline{2-3} \cline{4-5}
& Pos. & Neg. & Pos. & Neg. \\
\hline
Train & 58 & 7548 & 8947 & 2742 \\
Test & 0 & 10 & 20 & 6 \\
Human & 1 & 10 & 18 & 6 \\
B-GST & 55 & 0 & 13 & 44 \\
Tag\&Gen & 69 & 0 & 14 & 5 \\
\textsc{Ours-Direct}\xspace & 26 & 0 & 19 & 45 \\
\textsc{Ours}-\textsc{Amazon}-\textsc{Adv} & 291 & 0 & 190 & 4 \\\hline
\end{tabular}
\caption{\label{tab:freq}
Frequencies of the words \textit{game} and \textit{phone} in the Amazon dataset. Their skewed distribution across the two classes causes the classifier to treat them as sentiment features and make incorrect predictions. \textbf{Pos.} denotes positive sentences, \textbf{Neg.} denotes negative sentences.
}
\end{table}
\begin{table}
\centering
\small
\begin{tabular}{ll}
\hline
\textbf{Model} & \textbf{Text}\\
\hline
Source & don t waste your time or money on these jeans . \\
Adv & don t need your time or money on these \textbf{phones} . \\
\hline
\multirow{2}{*}{Source} & i made beef bolognese in the oven and it turned \\
& out wonderfully . \\
\multirow{2}{*}{Adv} & i made beef bolognese in the \textbf{game} and it turned \\
& out wonderfully . \\
\hline
Source & this one does the job i need it for ! \\
Adv & this \textbf{game} does the job i need it for !\\
\hline
\end{tabular}
\caption{\label{tab:examples}
Adversarial examples received high style transfer accuracy scores on Amazon Dataset. Adv denotes the adversarial examples generated by \textsc{Ours}-\textsc{Amazon}-\textsc{Adv}.
}
\end{table}
\paragraph{Amazon Dataset} For the Amazon dataset, we found that the style classifier $f_{cls}$ needs to be updated during training to prevent the model from exploiting the data imbalance problem of the dataset.
Namely, in the Amazon dataset some categories of products appear mostly in negative or positive reviews. In Table~\ref{tab:freq}, we show the word frequency of \textit{game} and \textit{phone} in both negative and positive reviews. In the original dataset, \textit{game} mostly appears in negative reviews while \textit{phone} mostly appears in positive reviews. Therefore, without any prior knowledge, it is very likely that these words will be used as informative features by the sentiment classifier, which makes its predictions unreliable.\footnote{Notice that the style classifier only achieves 43 accuracy on the human references.}
When our second-stage model is trained with a fixed style classifier, it (\textsc{Ours}-\textsc{Amazon}-\textsc{Adv}) learns to fully exploit this dataset bias by changing the nouns in the original sentences to \textit{game} or \textit{phone}, which achieves better transfer accuracy. We list some examples in Table~\ref{tab:examples}. \textsc{Ours}-\textsc{Amazon}-\textsc{Adv} generated \textit{game} 291 times in 500 positive reviews, which obviously changes the semantics of the source sentences. In order to show that this phenomenon is independent of the classifier architecture, we additionally fine-tuned a BERT-based~\citep{devlin-etal-2019-bert} classifier, which yielded 51.3, 57.6, and 70.4 accuracy on the human references, \textsc{Ours-Direct}\xspace, and \textsc{Ours}-\textsc{Amazon}-\textsc{Adv} respectively, showing the same pattern as the FastText classifier.
We notice that some two-stage models~\citep{li-etal-2018-delete,sudhakar2019transforming,madaan2020politeness} and other methods~\citep{yang2018unsupervised, luo2019dual} also use a fixed classifier or use words with unbalanced frequencies in different styles as important features, which means that their methods may face the same risk.
While \citet{li-etal-2018-delete} pointed out this data imbalance problem of the Amazon dataset, we further demonstrate that a strong generator can even use this discrepancy to trick the automatic metrics.
We are able to mitigate this problem by updating the style classifier during training; as Table~\ref{tab:freq} shows, \textsc{Ours-Direct}\xspace is more robust to the data imbalance problem than the other methods.
\begin{table}[bt!]
\small
\centering
\begin{tabular}{lcccc}
\hline
\textbf{Model} & \textbf{Acc} & \textbf{PPL} & \textbf{r-BLEU} & \textbf{s-BLEU}\\
\hline
\multicolumn{5}{c}{Yelp} \\
\hline
D\&R & 89.0 & 362 & 10.1 & 29.1 \\%\cline{2-8}
B-GST & 86.0 & \textbf{269} & 14.5 & 35.1 \\%\cline{2-8}
Cycle-Multi & 87.6 & 439 & 19.8 & \textbf{55.2} \\%\cline{2-8}
Deep-Latent & 86.0 & 346 & 15.2 & 40.7 \\%\cline{2-8}
Tag\&Gen & 88.7 & 355 & 12.4 & 35.5 \\%\cline{2-8}
\textsc{Ours-Cycle}\xspace & 91.7 & 392 & 18.7 & 51.2 \\%\cline{2-8}
\textsc{Ours-Direct}\xspace & \textbf{94.2} & 292 & \textbf{20.7} & 52.6 \\
Copy & 4.1 & 204 & 22.5 & 100.0 \\%\cline{2-8}
Human & 70.7 & 236 & 99.3 & 22.5 \\%\cline{2-8}
\hline
\multicolumn{5}{c}{Amazon} \\
\hline
D\&R & 50.0 & 233 & 24.1 & 54.1 \\%\cline{2-8}
B-GST & 60.3 & \textbf{197} & 20.3 & 44.6 \\%\cline{2-8}
Tag\&Gen & \textbf{79.9} & 312 & 27.6 & \textbf{62.3} \\%\cline{2-8}
\textsc{Ours-Cycle}\xspace & 68.4 & 374 & 29.0 & 60.6 \\%\cline{2-8}
\textsc{Ours-Direct}\xspace & 62.2 & 205 & \textbf{30.1} & 61.3 \\
Copy & 21.1 & 218 & 40.0 & 100.0\\%\cline{2-8}
Human & 43.0 & 209 & 100.0 & 40.0 \\%\cline{2-8}
\hline
\multicolumn{5}{c}{IMDb} \\
\hline
Cycle-Multi & 77.1 & 290 & N/A & \textbf{70.4} \\%\cline{2-8}
\textsc{Ours-Cycle}\xspace & 80.5 & 253 & N/A & 64.3 \\%\cline{2-8}
\textsc{Ours-Direct}\xspace & \textbf{83.2} & \textbf{210} & N/A & 64.2 \\
Copy & 5.3 & 147 & N/A & 100.0 \\%\cline{2-8}
\hline
\multicolumn{5}{c}{GYAFC} \\
\hline
D\&R & 51.2 & 226 & 14.4& 27.1 \\%\cline{2-8}
DualRL & 62.0 & 404 & 33.0 & 50.8 \\%\cline{2-8}
\textsc{Ours-Cycle}\xspace & \textbf{76.2}& 162 & 44.1 & \textbf{66.5}\\%\cline{2-8}
\textsc{Ours-Direct}\xspace & 71.8 & \textbf{145} & \textbf{46.3} & 59.9 \\
Copy & 15.8 & 147 & 41.5 & 98.5\\%\cline{2-8}
Human & 84.5 & 137 & 97.8 & 21.5 \\%\cline{2-8}
\hline
\end{tabular}
\caption{\label{tab:result}
Automatic Evaluation. Acc is the accuracy of the sentiment classifier. PPL is the perplexity assigned by the GPT-2 language model. r-BLEU is the BLEU score between the human references and system outputs. s-BLEU is the BLEU score between the source sentences and system outputs. Copy is an oracle which copies the source sentences as outputs. Human denotes the human references.
}
\end{table}
\subsection{Automatic Evaluation}
The automatic evaluation results are shown in Table~\ref{tab:result}. For a fair comparison, we report the performance of previous methods based on the outputs they provide, and omit those whose outputs are not available.
We have the following observations of the results.
First, compared to our base model (\textsc{Ours-Cycle}\xspace), the model trained with our proposed rewards has higher fluency while maintaining the same level of content preservation. This indicates that the SIM score is as effective as the cycle-consistency loss for content preservation and that our fluency reward can effectively improve output fluency. Second, there exists a trade-off among style transfer accuracy, content preservation, and language fluency. While our model does not outperform the previous methods on all of the metrics, it is able to find a better balance among them.
\subsection{Human Evaluation}
\begin{table}[bt!]
\centering
\small
\begin{tabular}{llcccc}
\hline
\textbf{Dataset} & \textbf{Model} & \textbf{Style} & \textbf{Flu.} & \textbf{Con.} & \textbf{Mean}\\
\hline
\multirow{3}{*}{Yelp} & Cycle & 2.24 & 0.62 & 1.97 & 2.02\\
& B-GST &\textbf{2.42}& 0.64 & 2.02 & 2.12\\
& \textsc{Ours} & \textbf{2.42} & \textbf{0.66} & \textbf{2.04} & \textbf{2.14} \\
\hline
\multirow{3}{*}{Amazon} & Tag\&Gen & 1.98 & 0.87 & 1.95 & 2.19\\
& B-GST &2.04& \textbf{0.89} & 1.77 & 2.16\\
& \textsc{Ours}* & \textbf{2.09} & 0.87 & \textbf{2.10} &\textbf{2.26}\\
\hline
\multirow{3}{*}{GYAFC} & D\&R & N/A & 0.40 & 2.13 & 1.66\\
& DualRL & N/A & 0.51 & 2.23 & 1.88\\
& \textsc{Ours}* & N/A & \textbf{0.70} & \textbf{2.34} & \textbf{2.22} \\
\hline
\end{tabular}
\caption{\label{tab:human}
Human Evaluation. \textbf{Style} denotes style transfer accuracy, \textbf{Flu.} denotes fluency, \textbf{Con.} denotes content preservation. \textbf{Mean} denotes the average of the metrics where the fluency scores are scaled up to be consistent with other scores. \textsc{Ours} denotes \textsc{Ours-Direct}\xspace~model. *: significantly better than other systems ($p < 0.01$) according to the mean.
}
\end{table}
We conducted human evaluation on the Yelp, Amazon, and GYAFC datasets, evaluating style transfer accuracy, content preservation, and fluency separately. We randomly select 50 candidates from both positive (formal) and negative (informal) samples and compare the outputs of the different systems. Style transfer accuracy and content preservation are rated on a 1--3 scale, while fluency is rated on a 0--1 scale. We use Amazon Mechanical Turk\footnote{\url{https://www.mturk.com/}} for human evaluation. Each candidate is rated by three annotators and its final score is the average over the individual annotators. We did not evaluate style transfer accuracy with human evaluations for the GYAFC dataset since it is difficult for human annotators to accurately capture the difference between formal and informal sentences. The results of our human evaluations are shown in Table~\ref{tab:human}. In addition to the separate metrics, we report the sample-wise mean score of the metrics, where the fluency scores are scaled up to be consistent with the other scores. Our model achieves better overall performance when considering all three evaluation metrics for each dataset.
Interestingly, we found that the automatic metrics for both the style transfer accuracy and content preservation do not accurately reflect performance as measured by human evaluation.
For example, on the Amazon dataset, although Tag\&Gen~\citep{madaan2020politeness} achieves significantly higher style transfer accuracy based on the automatic metric, our model achieves better performance based on the human evaluation.
This phenomenon underscores the importance of our findings in Section~\ref{sub:adv}: strong neural models can potentially exploit the weaknesses of the automatic metrics, and these metrics need to be used with caution for both training and evaluation.
\subsection{Overview}
Data for unsupervised text style transfer can be defined as $$D = \{(x^{(1)}, s^{(1)}), ..., (x^{(i)}, s^{(i)}), ..., (x^{(n)}, s^{(n)})\}, $$
where $x^{(i)}$ denotes the text and $s^{(i)}$ denotes the corresponding style label. The objective of the task is to generate (via a generator $g$) the output with the target style conditioned on $s$
while preserving most of the semantics of the source $x$. In other words, $\hat{x} = g(x, s)$ should have style $s$ and the semantics of $x$. We define the style as a binary attribute, $s \in \{0, 1\}$; however, this can easily be extended to a multi-class setting.
\subsection{Generator}
For our generator, we fine-tune a large-scale language model GPT-2 \cite{radford2019language}. GPT-2 is pre-trained on large corpora and can be fine-tuned to generate fluent and coherent outputs for a variety of language generation tasks \citep{wolf2019transfertransfo}.
Since GPT-2 is a unidirectional language model, we reformulate the conditional generation task as a sequence completion task. Namely, as input to the generator, we concatenate the original sentence with a special token which indicates the target style.
The sequence following the style token is our output.
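Schematically, the input formatting amounts to the following; the token string `<{target_style}>` is illustrative, as the actual special tokens are implementation-dependent:

```python
def format_input(source: str, target_style: str) -> str:
    """Reformulate conditional generation as sequence completion: the model
    continues the text after the style token."""
    return f"{source} <{target_style}> "
```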
\subsection{Reward Functions}
We use four reward functions to control the quality of the system outputs. The quality of the outputs is assessed in three ways: style transfer accuracy, content preservation, and fluency. We attend to each of these factors with their respective rewards. Here we denote the input text $x$ having style $s$ by $x_s$, and denote the output by $\tilde{x}_s$, i.e., $\tilde{x}_s = g(x_s, 1 - s)$.
\paragraph{Rewards for Style Transfer Accuracy}
We use a style classifier to provide the supervision signal to the generator with respect to the style transfer accuracy.
The min-max game between the generator $g$ and the classifier $f_{cls}$ is:
\begin{equation}
\begin{split}
& \min_{\theta_g}\max_{\theta_{f_{cls}}}\mathbb{E}_{x_s}[\log(1-f_{cls}(g(x_s, 1 - s), 1 - s))] \\
&+ \mathbb{E}_{x_s}[\log f_{cls}(x_s, s) + \log (1-f_{cls}(x_s, 1 - s))]. \\
\end{split}
\end{equation}
The style transfer accuracy reward for the generator is the log-likelihood of the output being labeled as the target style:
\begin{equation}
\label{eq:cls}
r_{cls}(\tilde{x}_{s}) = \log(f_{cls}(\tilde{x}_{s}, 1 - s)).
\end{equation}
Following prior work, we use the CNN-based classifier \citep{kim-2014-convolutional}
$f_{cls}$, which takes both the sentence and the style label as input and its objective is to predict the likelihood of the sentence being coherent to the given style.
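As a sketch of how these quantities enter training, under the convention that the classifier outputs a probability for a (sentence, style) pair; the function names here are illustrative, not our implementation:

```python
import math

def classifier_loss(p_real_correct, p_real_wrong, p_fake_target):
    """The classifier maximizes
    log f(x,s) + log(1 - f(x,1-s)) + log(1 - f(g(x,1-s), 1-s));
    returned here negated, as a loss to be minimized."""
    return -(math.log(p_real_correct)
             + math.log(1.0 - p_real_wrong)
             + math.log(1.0 - p_fake_target))

def style_reward(p_fake_target):
    """r_cls: log-likelihood that the output carries the target style."""
    return math.log(p_fake_target)
```

A perfect discriminator (probabilities 1, 0, 0) attains zero loss, while the generator's reward is maximal (zero) when the classifier fully believes the output has the target style.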
\begin{figure}[bt!]
\centering
\includegraphics[width=7cm]{cycle.png}
\caption{Cycle-consistency Loss}
\label{fig:cycle}
\end{figure}
\begin{figure}[bt!]
\centering
\includegraphics[width=3.5cm]{sim.png}
\caption{SIM Loss}
\label{fig:sim}
\end{figure}
\paragraph{Rewards for Content Preservation}
To ensure that the system outputs still preserve the basic semantics of the source sentences, we use the pre-trained SIM model introduced in \citet{wieting2019simple,wieting-etal-2019-beyond} to measure the semantic similarity between the source sentences and system outputs. The SIM score for a sentence pair is the cosine similarity of its sentence representations. These representations are constructed by averaging sub-word embeddings. Compared to the cycle-consistency loss~\citep{luo2019dual, dai2019style, pang2019unsup}, our method is more direct since it doesn't require a second-pass generation. It also has advantages over $n$-gram based metrics like BLEU~\citep{papineni2002bleu} since it is more robust to lexical changes and can provide smoother rewards.
In~\citet{wieting-etal-2019-beyond}, SIM is augmented with a length penalty to help control the length of the generated text. We use their entire model, \textsc{SimiLe}\xspace, as the content preservation reward,
\begin{equation}
\label{eq:sim}
r_{sim}(\tilde{x}_s) = {\textrm{LP}(x_s, \tilde{x}_s)}^\alpha \textrm{SIM}(x_s, \tilde{x}_s),
\end{equation}
where
\begin{equation}
\textrm{LP}(r, h) = e^{1 - \frac{\max(|r|, |h|)}{\min(|r|, |h|)} },
\end{equation}
and $\alpha$ is an exponential term to control the weight of the length penalty, which is set to 0.25.
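A minimal numerical sketch of the \textsc{SimiLe}\xspace reward, assuming sentence vectors have already been obtained by averaging sub-word embeddings; the length penalty is written in the form of \citet{wieting-etal-2019-beyond}, where $\textrm{LP}\le 1$ so that a length mismatch is penalized:

```python
import math

def length_penalty(len_src, len_out):
    """LP = exp(1 - max(|r|,|h|)/min(|r|,|h|)): equals 1 for equal lengths and
    decays toward 0 as the lengths diverge."""
    return math.exp(1.0 - max(len_src, len_out) / min(len_src, len_out))

def cosine(u, v):
    """Cosine similarity of two sentence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def simile_reward(src_vec, out_vec, len_src, len_out, alpha=0.25):
    """r_sim = LP^alpha * SIM, with SIM the cosine similarity of the averaged
    sub-word embeddings of the source and the output."""
    return length_penalty(len_src, len_out) ** alpha * cosine(src_vec, out_vec)
```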
We also use the cycle-consistency loss $L_{cyc}$ to bootstrap the training:
\begin{equation}
L_{cyc}(\theta_g) = \mathbb{E}_{x_s}[{-\log(p_{g}(x_s|g(x_s, 1-s), s))}].
\end{equation}
Here, $p_g$ is the likelihood assigned by the generator $g$. This introduces two generation passes, i.e., $\tilde{x}_s = g(x, 1-s)$ and $\bar{x}_s = g(\tilde{x}_s, s)$ while SIM only requires one generation pass,
as illustrated in Fig. \ref{fig:cycle} and Fig. \ref{fig:sim}.
\paragraph{Rewards for Fluency}
Style transfer accuracy rewards and content preservation rewards do not have a significant effect on the fluency of the outputs. Therefore, we again use the pre-trained GPT-2 model, but as a reward this time. To encourage the outputs to be as fluent as the source sentences, we define the fluency reward as the difference of the perplexity between the system outputs and source sentences:
\begin{equation}
\label{eq:lang}
r_{lang}(\tilde{x}_s) = \textit{ppl}(x_s) - \textit{ppl}(\tilde{x}_s).
\end{equation}
Here, $\textit{ppl}$ denotes the length-normalized perplexity assigned by the language model fine-tuned on the training set.
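In code, with `token_logprobs` the per-token log-probabilities assigned by the fine-tuned language model; this is an illustrative sketch, not tied to a specific GPT-2 implementation:

```python
import math

def length_norm_ppl(token_logprobs):
    """Length-normalized perplexity: exp of minus the mean token log-probability."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def fluency_reward(src_logprobs, out_logprobs):
    """r_lang = ppl(x_s) - ppl(x~_s): positive when the output is more fluent
    (lower perplexity) than the source."""
    return length_norm_ppl(src_logprobs) - length_norm_ppl(out_logprobs)
```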
As will be further discussed in Section~\ref{sub:adv}, we found that using the rewards mentioned above can still result in unnatural outputs. Therefore, we additionally use an LSTM-based \citep{hochreiter1997long} adversarial discriminator
$f_{adv}$ to provide a naturalness reward; its job is to discriminate between the system outputs and real sentences.
It constructs a min-max game with the generator:
\begin{equation}
\begin{split}
& \min_{\theta_g}\max_{\theta_{f_{adv}}}\mathbb{E}_{x_s}[\log(1-f_{adv}(g(x_s, 1 - s)))] \\
&+ \mathbb{E}_{x_s}[\log (f_{adv}(x_s))]. \\
\end{split}
\end{equation}
The naturalness reward is the log-likelihood of the outputs being classified as real sentences:
\begin{equation}
\label{eq:adv}
r_{adv}(\tilde{x}_{s}) = \log(f_{adv}(\tilde{x}_{s})).
\end{equation}
\subsection{Learning}
\label{sub:learning}
The final corresponding loss term is:
\begin{equation}
L_{*}(\theta_g) = - \frac{1}{N}\sum_{i=1}^N r_{*}(\tilde{x}_s^{(i)}).
\end{equation}
Here, $N$ is the number of samples in the dataset.
To train the model, we use the weighted average of the losses defined in the previous section:
\begin{equation}
\label{eq:loss}
\begin{split}
L(\theta_g) &= \lambda_{cls} L_{cls}(\theta_g) + \lambda_{adv} L_{adv}(\theta_g) \\
& + \lambda_{sim} L_{sim}(\theta_g) + \lambda_{lang} L_{lang}(\theta_g) \\
& + \lambda_{rec} L_{rec}(\theta_g).
\end{split}
\end{equation}
where $\lambda$ denotes the weight of the corresponding term.
The $\lambda$ weights are chosen to make training stable and to balance style transfer accuracy and content preservation on the development set; specifically, $\lambda_{cls}=1, \lambda_{adv}=0.5, \lambda_{sim}=20, \lambda_{lang}=2, \lambda_{rec}=1$. $L_{rec}$ is the reconstruction loss, i.e.,
\begin{equation}
L_{rec}(\theta_g) = \mathbb{E}_{x_s}[{-\log(p_{g}(x_s|x_s, s))}].
\end{equation}
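Put together, the second-stage objective is a weighted sum of the individual loss terms; the default weights below are the settings quoted above, and each loss is assumed precomputed (a schematic sketch):

```python
def total_loss(losses, weights=None):
    """L = sum_k lambda_k * L_k over k in {cls, adv, sim, lang, rec}."""
    if weights is None:
        weights = {"cls": 1.0, "adv": 0.5, "sim": 20.0, "lang": 2.0, "rec": 1.0}
    return sum(weights[k] * losses[k] for k in weights)
```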
We follow a two-stage training procedure. We first use the cycle-consistency loss $L_{cyc}$
to bootstrap the training and then fine-tune the model with the rewards we introduced above to improve the output quality.
In the bootstrap stage, the objective function is
\begin{equation}
\begin{split}
L_{boot}(\theta_g) &= \lambda_{cyc} L_{cyc}(\theta_g) + \lambda_{cls} L_{cls}(\theta_g) \\
& + \lambda_{rec} L_{rec}(\theta_g).
\end{split}
\end{equation}
The corresponding weights are set as $\lambda_{cyc}=1, \lambda_{cls}=2, \lambda_{rec}=1$.
We select the checkpoint with the highest mean of the style transfer accuracy and BLEU on the development set as the starting point for the second training stage.
In the second stage, the generator is optimized with Eq.~\ref{eq:loss}. The classifier $f_{cls}$ for $L_{cls}$ is pre-trained and the language model for $L_{lang}$ is fine-tuned on the training set. During training, the discriminator $f_{adv}$ for $L_{adv}$ is trained against the generator. $f_{cls}$ is fixed when trained on some datasets, while it is trained against the generator on others. Note that the fluency reward is used in the second stage only.
We select the checkpoint that has the style transfer accuracy and BLEU score above that from the first stage and the lowest perplexity on the development set.
Lastly, since gradients cannot be propagated through the discrete samples, we adopt two approaches to circumvent this problem. For the content preservation reward (Eq.~\ref{eq:sim}) and the fluency reward (Eq.~\ref{eq:lang}), we use the REINFORCE~\citep{williams1992simple} algorithm to optimize the model,
\begin{equation}
\begin{split}
&\nabla_{\theta_g} \mathbb{E}_{\tilde{x}_s \sim p_g(\tilde{x}_s)}[r(\tilde{x}_s)]\\ &= \mathbb{E}_{\tilde{x}_s \sim p_g(\tilde{x}_s)} [\nabla_{\theta_g}\log{p_g(\tilde{x}_s)}r(\tilde{x}_s)].
\end{split}
\end{equation}
We approximate the expectation by greedy decoding and the log-likelihood is normalized by sequence length, i.e.~$\frac{1}{L}\sum_{i=1}^L \log p_g(\tilde{w}_i)$, where $\tilde{w}_i$ denotes the $i$-th token of $\tilde{x}_s$ and $L$ is sequence length. For the style transfer accuracy reward (Eq.~\ref{eq:cls}) and naturalness reward (Eq.~\ref{eq:adv}), we use a different approach to generate a continuous approximation of the discrete tokens, which allows gradients to be back-propagated to the generator. Namely, taking the style classifier $f_{cls}$ as an example, we use the distribution $p_i$ of each token produced by the generator as the input of the classifier. This distribution is then multiplied by the classifier's word embedding matrix $W^{embed}$ to obtain a weighted average of word embeddings:
\begin{equation}
\hat{w}_i = p_iW^{embed}
\end{equation}
Then, the classifier takes the sequence of $\hat{w}_i$ as its input.
We chose this method because it provides a token-level supervision signal to the generator, while the REINFORCE algorithm provides sentence-level signals.
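The continuous relaxation in the last step amounts to replacing the hard embedding lookup by an expectation under the generator's token distribution; a numpy sketch with illustrative shapes and values:

```python
import numpy as np

vocab, dim = 5, 3
W_embed = np.arange(vocab * dim, dtype=float).reshape(vocab, dim)  # classifier embeddings

# Generator's distribution over the vocabulary at one output position
p_i = np.array([0.7, 0.1, 0.1, 0.05, 0.05])

# Soft token: probability-weighted average of embeddings, so gradients can
# flow from the classifier back into p_i (and hence into the generator)
w_hat = p_i @ W_embed  # shape (dim,)

# A one-hot distribution recovers the ordinary (hard) embedding lookup
hard = np.eye(vocab)[2] @ W_embed
```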
\section{Introduction}
In quantum field theory, branes are space-filling hypersurfaces located in a higher-dimensional spacetime.
Branes may be viewed as solitons on which particles can be localized.
Similar objects naturally appear in string theory as D-branes, which are dynamical objects with quantum properties \cite{Polchinski:1996na,Bachas:1998rg}. Black brane solutions also arise in the supergravity limit of string theories \cite{Aharony:1999ti, Duff:1996zn}.
From the effective field theory viewpoint, branes are simply described as infinitely thin surfaces being part of the background in the fundamental action of the theory \cite{Csaki:2004ay, Sundrum:1998sj,Sundrum:1998ns}.
A brane can have matter fields localized on it, a feature at the center of our attention in this work. Here we will generically refer to any theory with such brane-localized matter as a ``braneworld''.
In effective field theory, there is no principle forbidding matter fields to live exclusively in the worldvolume of a brane.
This kind of EFT has been used in early proposals of braneworld models (\textit{e.g.} \cite{Dvali:2000hr,Randall:1999ee,Randall:1999vf}).
In contrast, it is also possible to write Lagrangians where some operators are localized on the brane, while the matter fields themselves live in the entire spacetime. In this case, certain degrees of freedom encoded in the higher-dimensional fields can still be localized towards the brane, without being strictly confined on it.
We therefore have two kinds of theories, here referred to as ``exactly localized'' and ``quasilocalized'' braneworlds.
The distinction between these two kinds of braneworld EFTs might seem at first view somewhat artificial. It may seem reasonable to expect that an exactly localized braneworld can simply arise as a limit of a quasilocalized braneworld.
However, we will see that such equivalence is in general not true and that the situation is in reality more subtle.
This is the starting observation made in this work. It will then lead us to reconsider consistency of exactly localized braneworlds and to study observable effects from quasilocalized braneworlds.
In Secs.~\ref{se:EFT}--\ref{se:limit}, we introduce the formalism and make clear that the infinite localization limit can come from either bulk masses or brane kinetic terms.
To consistently compare exactly and quasilocalized theories, the quasilocalized braneworld is treated via a holographic formalism in which the variables are exactly brane-localized. We then show in Sec.~\ref{se:disc} that, at the very least in the presence of gravity, exactly localized braneworlds do not arise as a limit of quasilocalized ones.
The existence of a discontinuity in theory space
leads us to further scrutinize exactly localized braneworlds.
In Sec.\,\ref{se:swamp}, considering simple, specific models with exactly localized fields, we find that inconsistencies arise in the presence of gravity.
Some of the arguments rely on standard conjectures from the swampland program.
The discontinuity between the two kinds of braneworld EFTs and the hints of inconsistency of the (field theoretical) exactly localized braneworld naturally lead us to revisit braneworld models which were initially proposed as exactly localized.
As a general feature, quasilocalized braneworlds have a richer phenomenology than exactly localized ones.
In Sec.~\ref{se:RS}, we focus on a quasilocalized version of the Randall-Sundrum II model.
While the original model only has 5D gravity, the quasilocalized model has a whole matter sector in the bulk, naturally behaving as a conformal hidden sector---this property has recently inspired warped dark sector model-building \cite{Brax:2019koq,Costantino:2019ixl}.
Focusing on the gauge-gravity sector, which is especially model-independent, we present two physical effects implied by gauge field quasilocalization---which are absent in the exactly localized version of the warped braneworld.
\section{Braneworld effective theories}
\label{se:EFT}
Our focus is on codimension-1 branes, \textit{i.e.}, branes that span one dimension less than the dimension of the full spacetime. For convenience, and although it is not mandatory for most of the conceptual discussions in the paper, we shall restrict to 4+1-dimensional spacetime and therefore focus on 3-branes.
We are interested in 3-branes that are Poincar\'e invariant. We write thus a general five-dimensional metric
\begin{equation}
ds^2=g_{MN}dX^M dX^N = e^{-2a(y)}\eta_{\mu\nu}dx^\mu dx^\nu - dy^2 \,, \label{eq:metric}
\end{equation}
where $a(y)=0$ corresponds to the flat extradimension case and $a(y)=ky$ corresponds to $AdS_5$ space with curvature $k$; $\eta_{\mu\nu}$ is the Minkowski metric with signature $(+,-,-,-)$.
As customary in higher dimensional EFTs, we model a 3-brane as an infinitely thin surface. Comments on that aspect will be made in Sec.~\ref{se:width}.
The brane is centered on the position $y=y_0$ of the extradimension.
In our discussion we will sometimes assume the existence of a second brane at $y=y_1\equiv y_0+L$. This second brane can be removed from the theory by taking $L\rightarrow \infty$.
\subsection{Localized and quasilocalized EFTs}
When defining a braneworld effective theory, it is commonplace to allow matter fields exactly localized on the brane,
\begin{equation}
\tilde S=S_5+ \int d^4x \sqrt{|\bar g|} \left( {\cal L}\left[\phi,\psi,A^\mu\right] +\ldots \right)\bigg|_{y=y_0} \, \label{eq:Stilde}
\end{equation}
where the $\phi,\psi,A^\mu$ fields are functions of $x^\mu$ only, and the 5D component of the action, $S_5$, is independent of these fields.
Including such exactly localized degrees of freedom is compatible with all the symmetries left unbroken on the brane.
In Eq.~\eqref{eq:Stilde}, $\bar g_{\mu\nu}$ is the induced metric on the brane.
The ellipses correspond to brane-localized operators independent of the matter fields, such as a brane-localized Ricci scalar, a brane tension, and the Gibbons-Hawking-York term.
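For a reader checking conventions: with the metric of Eq.~\eqref{eq:metric} and a static brane at $y=y_0$, the induced metric is simply the 4D slice of the bulk metric,
\begin{equation}
\bar g_{\mu\nu}(x) = g_{\mu\nu}(x,y_0) = e^{-2a(y_0)}\eta_{\mu\nu}\,,\qquad \sqrt{|\bar g|} = e^{-4a(y_0)}\,,
\end{equation}
so that in the $AdS_5$ case $a(y)=ky$ the brane-localized Lagrangian in Eq.~\eqref{eq:Stilde} is weighted by the warp factor $e^{-4ky_0}$.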
We refer to the EFT in Eq.~\eqref{eq:Stilde} as an \textit{exactly localized} braneworld. We will use a tilde superscript to denote quantities associated with this kind of EFT.
It is also possible to write a different kind of braneworld effective theory where a set of \textit{operators} is localized on the brane, while all matter fields of the theory are five dimensional.
The action in that case reads
\begin{equation}
S= S_5\left[\Phi,\Psi, {\cal A}^M\right] + \int d^4x \left( \sqrt{|\bar g|} {\cal L}_4\left[\Phi,\Psi, {\cal A}^M\right] +\ldots \right)\bigg|_{y=y_0}\, \label{eq:Sgen}
\end{equation}
where the brane operators are encoded in ${\cal L}_4$ and the 5D fields $\Phi,\Psi, {\cal A}^M$ depend on $X^M$. The 5D action $S_5$ depends on the 5D fields and contains operators such as the 5D kinetic terms
\begin{equation}
S_\Phi^{\rm kin}= \int d^5X \sqrt{g}\left(
\frac{1}{2}\partial_M\Phi \partial^M \Phi -\frac{1}{2}m^2_\Phi \Phi^2 \right) \label{eq:5Dscal_kin}
\end{equation}
\begin{equation}
S_\Psi^{\rm kin} = \int d^5X \sqrt{g}\left(
\frac{i}{2}\left(
\bar \Psi \Gamma^M D_M \Psi- D_M\bar \Psi \Gamma^M \Psi
\right) -m_\Psi \bar \Psi \Psi \right)
\label{eq:5Dferm_kin}
\end{equation}
\begin{equation}
S_A^{\rm kin} = \int d^5X \sqrt{g}\left(
-\frac{1}{4g^2_5} {\cal F}^{MN}{\cal F}_{MN} \right) \,.
\label{eq:5Dgauge_kin}
\end{equation}
In this second type of EFT, for appropriate choices of the parameters of the 5D and brane Lagrangians in Eq.~\eqref{eq:Sgen}, a degree of freedom with 4D properties can exist in the spectrum and be almost localized on the brane. This feature will be studied in detail in Sec.~\ref{se:limit}.
This highly localized limit of the theory defined in Eq.~\eqref{eq:Sgen} is the central focus of this work.
With this limit in mind, we will refer to the EFT in Eq.~\eqref{eq:Sgen} as a \textit{quasilocalized} braneworld.
One could of course write theories mixing both 5D fields and exactly localized 4D fields. This mixed case turns out not to require a dedicated discussion, hence no naming is needed. A model of this kind will be studied only in Sec.~\ref{se:swamp}; in the rest of the paper it is enough to consider actions whose matter fields are either all exactly localized or all quasilocalized 5D fields.
It is natural to ask how the two kinds of EFT defined above---the exactly and quasilocalized braneworlds---relate to each other.
Can the exactly localized braneworld arise as a limit of the quasilocalized braneworld?
This is the central question we want to address in Secs.~\ref{se:limit},\,\ref{se:disc}. The proper way to define the question is to compare the physical observables of both theories, and therefore to compare their correlation functions. We will thus work at the level of the quantum effective actions.
\subsection{Quantum actions and braneworld holography}
In this section we only consider scalar fields. The formalism for other spins is essentially similar although more technical. We work at the level of the quantum effective action $\Gamma$, which encodes all information about correlation functions. To avoid naming confusion, we refer to $\Gamma$ as the ``quantum action''.
For the exactly localized braneworld theory, the quantum action is given by\,\footnote{ The 1PI index indicates that only 1PI diagrams are selected in the path integral. This is a shortcut notation for the usual construction of the generating functionals,
\begin{equation}
Z[J]=\int{\cal D} \phi \exp\left(i \tilde S[\phi] + i\int d^4x \phi J \right)=\exp\left(i W[J]\right) \,, \quad \Gamma[\phi_{\rm cl}]=W[J]- \int d^4x \phi_{\rm cl} J
\label{se:Gamma_def}\,.
\end{equation}
The argument of $\Gamma$ is always a classical field value. The ``$\rm cl$'' index will not be specified throughout the text.
}
\begin{equation}
\exp\left( i \tilde \Gamma[\phi] \right) = \int_{\rm 1PI} {\cal D} \hat\phi \exp\left( i \tilde S[\phi+\hat \phi] \right) \,. \label{eq:Gamtilde}
\end{equation}
Spacetime has five dimensions, and interacting 5D theories are always low-energy EFTs.
The predictions arising from $\tilde \Gamma[\phi]$ are only valid up to an energy scale of order $\tilde \Lambda$ (or a distance scale of order $1/\tilde \Lambda$), the validity cutoff of the EFT. Beyond this scale the theory should be superseded with a UV-completion.
Let us turn to the quantum action for the quasilocalized braneworld. Since we aim to study the quantum action of the quasilocalized braneworld in a limit potentially reproducing $\tilde \Gamma[\phi]$, the quantum action should be expressed in terms of a classical variable that can match the exactly localized variable $\phi$ of $\tilde \Gamma[\phi]$ in the limit of infinite localization. A logical choice is to express the quantum action of the quasilocalized braneworld as a function of the classical value of the 5D field on the brane, $\Phi_0\equiv \Phi(y=y_0) $.
This is the definition of a \textit{holographic} formalism, where $\Phi_0$ is the holographic variable (see \textit{e.g.} \cite{Aharony:1999ti, Nastase:2007kj, Gherghetta:2010cj, Ponton:2012bi, Witten:1998qj}).
From now on we work in momentum space for the $x^\mu$ coordinates, introducing $\Phi( p_\mu,z)=\int d^4x e^{ix^\mu p_\mu } \Phi(X^M)$. One also defines the absolute momentum $p=\sqrt{\eta_{\mu\nu} p^\mu p^\nu }$, which is real (imaginary) for timelike (spacelike) momentum.
The 5D field in position-momentum space, $\Phi(p^\mu,y)$, is rewritten as
\begin{equation}
\Phi(p^\mu,y)= \Phi_0(p^\mu) K(p^\mu,y)\,, \quad \quad {\rm with} \quad K(p^\mu,y_0)=1\,. \label{eq:Phi0def}
\end{equation}
The meaning of $K$ will become obvious in the semi-classical expansion detailed in the next section.
Using Eq.~\eqref{eq:Phi0def} in the definition of the action, the quasilocalized braneworld is described by the holographic quantum action
\begin{equation}
\exp\left( i \Gamma[\Phi_0] \right) = \int_{\rm 1PI} {\cal D} \hat\Phi \exp\left( i S[\Phi_0 K+\hat \Phi] \right) \,. \label{eq:Gam}
\end{equation}
As for the exactly localized case, since the theory is five-dimensional the correlators are valid up to a UV cutoff denoted $\Lambda$.
With these definitions, the question of exact localization can be formally expressed using $\Gamma, \tilde \Gamma$ and thinking in terms of parameter space. What we ask is whether there exists a direction in the parameter space of the quasilocalized
braneworld Lagrangian (Eq.~\eqref{eq:Sgen}) for which
\begin{equation}\Gamma \rightarrow \tilde \Gamma \,. \end{equation}
This question will be addressed in Secs.~\ref{se:limit} and \ref{se:disc}.
Finally we emphasize that the holographic formalism introduced above applies to any boundary and any metric, and thus has in itself nothing to do with the AdS/CFT duality.\,\footnote{The AdS/CFT aspect appears when the 5D metric is AdS$_5$, at least asymptotically near the UV brane.
}
\section{Holographic action and the exact localization limit}
\label{se:limit}
In the validity regime of the 5D EFT, the 5D interactions (including gravity) can be treated perturbatively. We can thus expand and truncate the quasilocalized braneworld action in powers of $\hbar$, \textit{i.e.} in the semiclassical expansion, such that
\begin{equation}
\Gamma[\Phi_0]= \Gamma_{\rm cl}[\Phi_0] + \ldots \label{eq:Gammaexp}
\end{equation}
Here $\Gamma_{\rm cl}$ is the classical holographic action and the ellipses represent the 1-loop functional determinant and higher order Feynman diagrams.
The classical bulk field $\Phi(p^\mu,y)$ satisfies the classical 5D equation of motion (EOM), and has fixed value $\Phi_0(p^\mu)$ on the brane.
In order to determine the content of the holographic action, we will need the Feynman propagator of $\Phi$ with Neumann boundary condition on the brane.
This Neumann propagator in position-momentum space $(p_\mu,y)$ is denoted
\begin{equation}
\langle \Phi( p^\mu,y) \Phi( -p^\mu,y')\rangle \equiv \Delta_{p}(y,y')\equiv i G_{p}(y,y')\,.
\end{equation}
A derivation of the general Feynman propagator in the conformally flat background of Eq.~\eqref{eq:metric} is given in App.~\ref{se:propa_gen}.
Let us now consider the holographic profile $K(p^\mu,y)\equiv K_p(y)$ from Eq.~\eqref{eq:Phi0def} in the classical regime. The classical $K_p(y)$ satisfies the 5D EOM, $K_p(y_0)=1$ and another boundary or regularity condition that the Neumann propagator satisfies as well. Since the propagator has the structure $\Delta_{p}(y,y')\propto F_<(y_<)F_>(y_>)$ where $y_<=\min(y,y')$, $y_>=\max(y,y')$ and the $F$ functions satisfy the homogeneous 5D EOM, it follows that
\begin{equation}
K_p(y) = \frac{G_{p}(y_0,y)}{G_{p}(y_0,y_0)}\,. \label{eq:K_class}
\end{equation}
This relation can be explicitly checked using the general form of the propagator in Eq.~\eqref{eq:propa_gen}.
In other words, the classical profile is equal to the ``amputated brane-to-bulk propagator''.\,\footnote{``Amputation'' refers to the removal of $G_{p}(y_0,y_0)$. }
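As a simple illustration, consider a massless scalar on a flat interval ($a(y)=0$, $m_\Phi=0$) with the brane at $y_0=0$ and a Neumann condition at $y=L$. The solution of the bulk EOM satisfying $K_p(0)=1$ and $\partial_y K_p(L)=0$ is
\begin{equation}
K_p(y)=\frac{\cos\left(p\,(L-y)\right)}{\cos(pL)}\,,
\end{equation}
which can be checked to reproduce the ratio of Neumann propagators in Eq.~\eqref{eq:K_class}.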
Let us now consider the bilinear part of the holographic action, which contains information on the spectrum of the theory. It reads
\begin{equation}
\Gamma_{\rm cl}[\Phi_0]= \frac{1}{2}\int \frac{d^4p}{(2\pi)^4} \int dy e^{-4 a(y)} \left( e^{2a(y)} p^2 \Phi^2 -(\partial_5\Phi)^2-m^2_\Phi \Phi^2
+ \delta(y-y_0) {\cal L}_4'' \Phi^2
\right)+ \ldots \label{eq:GamPhi0}
\end{equation}
with
${\cal L}_4''=\frac{\delta^2}{\delta \Phi \delta \Phi}{\cal L}_4\big|_{\Phi=0}$. Integrating Eq.~\eqref{eq:GamPhi0} by parts yields the brane operator ${\cal B} \Phi=\partial_5 \Phi(y_0)+ {\cal L}''_4 \Phi(y_0) $ and the classical 5D EOM. The EOM piece vanishes, and the non-vanishing part of the bilinear action comes from the remaining boundary terms,
\begin{equation}
\Gamma[\Phi_0]= \frac{1}{2} \int \frac{d^4p}{(2\pi)^4}
\Phi_0(p) \Pi_\Phi(p) \Phi_0(-p)
+ \ldots
\end{equation}
where \begin{equation}
\Pi_\Phi(p)\equiv {\cal B}K_p(y)\big|_{y=y_0}\,
\end{equation}
is the ``holographic self-energy'' and ${\cal B}$ is the boundary operator
(see App.~\ref{se:propa_gen}).
Evaluating ${\cal B}K_p(y)$ using the explicit expression of the propagator in Eq.~\eqref{eq:propa_gen}, one finds that the holographic self-energy is given by the inverse of the brane-to-brane propagator,
\begin{equation}
\Pi_\Phi(p) = \frac{1}{G_p(y_0,y_0)}\,. \label{eq:Pi_def}
\end{equation}
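As an illustration, for a massless scalar on a flat interval ($a(y)=0$, $m_\Phi=0$) with the brane at $y_0=0$, a Neumann condition at $y=L$ and no brane Lagrangian, the holographic profile is $K_p(y)=\cos(p(L-y))/\cos(pL)$ and one obtains, up to sign conventions,
\begin{equation}
\Pi_\Phi(p)= p\tan(pL)\approx L\,p^2\left(1+\frac{(pL)^2}{3}+\ldots\right)\,.
\end{equation}
The zeros at $p=n\pi/L$ reproduce the Neumann KK masses, the poles at $p=(n+\frac{1}{2})\pi/L$ correspond to modes with a Dirichlet condition on the brane, and the small-$p$ expansion exhibits a 4D zero mode with kinetic normalization $L$.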
This defines the bilinear piece of the classical holographic action. Let us briefly discuss the structure of the rest of the action.
Regarding interaction terms, the classical holographic action involves spatial overlaps of the holographic profiles from bulk interactions. A $\lambda \Phi^4$ bulk interaction, for instance, when put in position-momentum space, becomes
\begin{equation}
\delta^{(4)}\left(\sum p_i^\mu\right)\,\lambda\,\Phi_0(p_1^\mu)\Phi_0(p_2^\mu)\Phi_0(p_3^\mu)\Phi_0(p_4^\mu) \int dy K_{p_1}(y)K_{p_2}(y)K_{p_3}(y)K_{p_4}(y)\,, \label{eq:Phi4hol}
\end{equation}
where $p_{1\ldots 4}$ are the absolute four-momenta of the four $\Phi$ fields.
Finally, the quantum terms of the holographic action (\textit{i.e.} the higher order terms in Eq.~\eqref{eq:Gammaexp}) encode loops involving the propagator with arbitrary endpoints in the bulk. These endpoints attach to vertices in position-momentum space and are always integrated over the whole bulk.
\subsection{Propagator from brane dressing}
The previous results are fairly standard.
To further understand the structure of the holographic action and of the subsequent correlation functions, let us examine how the brane Lagrangian influences the propagator.
While this may seem at first sight a nontrivial task, the structure becomes manifest once an appropriate formulation is chosen.
Let $\hat \Delta_p(y,y')$ be the Feynman propagator with Neumann boundary condition on the brane and \textit{no} brane Lagrangian, \textit{i.e.}
\begin{equation}
\hat \Delta_p(y,y') \equiv \Delta_p(y,y') \bigg|_{{\cal L}_4=0} \,.
\end{equation}
Let us then use the identity
\begin{align}
\hat \Delta_p(y,y') & = \frac{\hat \Delta_p(y_0,y) \hat \Delta_p(y_0,y')}{\hat \Delta_p(y_0,y_0)}+\hat \Delta^{D}_p(y,y') \nonumber \\ & =
i \frac{\hat K_p(y) \hat K_p(y')}{\hat \Pi(p)}+\hat \Delta^{D}_p(y,y') \,
\label{eq:hatDelta}
where $\hat \Delta^{D}_p(y,y')$ is the propagator with Dirichlet boundary condition on the brane. In the last line we have introduced the holographic profile and self-energy using relations Eqs.~\eqref{eq:K_class} and ~\eqref{eq:Pi_def}. These are profiles and self-energies defined from $\hat \Delta_p(y,y')$, \textit{i.e.} in the absence of the brane Lagrangian.
To obtain the (exact) propagator in the presence of the brane Lagrangian ${\cal L}_4$, we can dress the $ \hat \Delta_p(y,y')$ propagator with a generic brane-localized insertion $-i\kappa(p)\delta(y-y_0)$, with $\kappa(p)=-{\cal L}''_4(p)$ for tree-level insertions. The brane-localized insertion can encode a tree-level effect such as a brane mass or kinetic term, or even a loop diagram induced by brane-localized interactions. The geometric series representation of the propagator in the presence of the brane Lagrangian reads
\begin{align}
\Delta_p(y,y') & = \hat \Delta_p(y,y') - \hat \Delta_p(y,y_0)\, i \kappa(p)\, \hat \Delta_p(y_0,y')+ \hat \Delta_p(y,y_0)\, i \kappa(p)\, \hat \Delta_p(y_0,y_0)\, i \kappa(p)\, \hat \Delta_p(y_0,y')+\ldots \label{eq:dressing1}
\end{align}
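The series is easily summed at the brane-to-brane point. Using $\hat \Delta_p(y_0,y_0)=i/\hat \Pi(p)$, one has
\begin{equation}
\Delta_p(y_0,y_0)=\hat \Delta_p(y_0,y_0)\sum_{n=0}^{\infty}\left(-i\kappa(p)\,\hat \Delta_p(y_0,y_0)\right)^n=\frac{\hat \Delta_p(y_0,y_0)}{1+i\kappa(p)\hat \Delta_p(y_0,y_0)}=\frac{i}{\hat \Pi(p)-\kappa(p)}\,.
\end{equation}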
At this point, one notices explicitly from Eq.~\eqref{eq:dressing1} that the Dirichlet propagator is insensitive to the brane dressing. This implies
\begin{equation}\hat \Delta^D_p(y_0,y')=\Delta^D_p(y_0,y')\,. \label{eq:DeltaDsimp}\end{equation} Another, less obvious feature is that the holographic profile itself is independent of the brane dressing, such that
\begin{equation}
\hat K_p(y)=K_p(y)\,. \label{eq:Ksimp}
\end{equation}
This can be seen using the explicit expressions in App.~\ref{se:propa_gen}, and it can also be deduced by inspecting the result of the summation of Eq.~\eqref{eq:dressing1}.
Taking into account Eqs.~\eqref{eq:DeltaDsimp}, \eqref{eq:Ksimp}, the dressed propagator takes the form
\begin{equation}
\Delta_p(y,y') = i\frac{ K_p(y) K_p(y')}{\hat \Pi(p)-\kappa(p)} + \Delta^D_p(y,y') \,.
\label{eq:dressing2}
\end{equation}
This exact expression for the propagator is valid for any metric and spectrum and for any kind of brane insertion. It
is rather enlightening and will be used extensively in the following sections to elucidate properties of the quasilocalized action.
We can already notice that the expression Eq.~\eqref{eq:dressing2} shows explicitly that the brane dressing only affects the holographic self-energy. From Eq.~\eqref{eq:dressing2} it follows that the holographic self-energy in the presence of the brane Lagrangian is given by
\begin{equation}
\Pi(p)= \hat \Pi(p)-\kappa(p)\,. \label{eq:self}
\end{equation}
We can also notice that the Dirichlet contribution in Eq.~\eqref{eq:dressing2} encodes effects which do not appear in the classical piece of the holographic action. The Dirichlet part of the propagator appears only in internal lines, and will thus contribute to quantum parts of the holographic action. The Dirichlet piece will also appear in one-particle reducible diagrams.
Finally one may recall that in the ``compositeness'' language, the Dirichlet modes are understood as purely composite states, \textit{i.e.} states with no mixing with the elementary probe field. However, since our approach is valid for an arbitrary metric, it makes clear that the structure of the propagator in Eq.~\eqref{eq:dressing2} has in itself nothing to do with the elementary/composite picture or AdS/CFT.
In the context of compositeness, one may also notice
that the form Eq.~\eqref{eq:dressing2} is somewhat reminiscent of the ``holographic basis'' proposed in \cite{Batell:2007jv}:
In both approaches the subset of Dirichlet modes is made manifest. However there seems to be no simple connection between the two formalisms.
\subsection{Localization limits }
We now investigate possible exact localization limit(s) of the quasilocalized braneworld. Here we merely identify potential directions in the parameter space; these directions will be further analyzed in Sec.~\ref{se:disc}.
A first necessary condition for realizing $\Gamma\rightarrow \tilde \Gamma$ appears at the level of the spectrum. Since the exactly localized action describes a 4-dimensional degree of freedom, the holographic self-energy of the quasilocalized action should reproduce a 4D degree of freedom
\begin{equation}
\Pi(p) \propto p^2-m_0^2 \label{eq:Pilim}
\end{equation}
in the limit of exact localization.
This condition implies that a 4D mode has to emerge in the spectrum of the quasilocalized theory, and that the rest of the spectrum has to vanish from the theory in some fashion in the exactly localized limit.
A second necessary condition appears at the level of interactions. When taking the exact localization limit, interactions have to reduce in some way to the ones of a 4D brane-localized Lagrangian.
\subsubsection{Large bulk mass}
\label{se:BMdir}
For scalar and fermions, a potential direction for exact localization may exist in the limit of large bulk mass $m_\Phi$, $m_\Psi$ (see Eqs.~\eqref{eq:5Dscal_kin},\,\eqref{eq:5Dferm_kin}).
Let us show that a potential candidate for a 4D mode exists for any metric.
We consider a discrete spectrum. For any metric this can always be obtained by assuming the presence of a second brane at finite proper distance---keeping open the possibility of sending this brane to infinity later in the calculation.
For a discrete spectrum the candidate for the quasilocalized mode is easily identified: it is a mode with an approximately exponential profile, which is always a solution of the 5D equation of motion.
The brane Lagrangians ${\cal L}_4$ (see Eq.~\eqref{eq:Sgen}) can be suitably tuned such that this special mode is always present in the spectrum. When massless or light, such mode is usually dubbed ``zero mode''. However this mode can also be very massive---and potentially still exponentially localized, as we will see below. Hence we refer to it as the \textit{special} mode.
The relevant brane where the special mode is localized is here taken to be at $y_0=0$, the second brane is at finite distance $y_1>y_0$.
The profiles are controlled by the bulk mass parameters $m_\Phi$, $m_\Psi$, while the mass of the special mode is controlled by the brane Lagrangians. This can be seen directly in Eq.~\eqref{eq:dressing2}: a brane-localized mass term $\kappa(p)\equiv m^2_b$ directly contributes to the mass of a 4D state, $\Pi \sim p^2-m^2_b +\ldots$, and can be used to tune the effective 4D mass of that state. Whenever the physical 4D mass of the special mode is small with respect to these 5D masses, it is negligible in the equation of motion and thus has negligible impact on the special mode profile.
Moreover, the limit of quasilocalization is the limit of very large bulk mass, which thus allows a large mass for the special mode.
Let us consider the kinetic terms of these modes.
The 5D action for the fermion special modes takes the form
\begin{equation}
\int d^4x dy N_\Psi e^{-a(y)-2|m_\Psi|y} \bar \psi_0 i \gamma_\mu \partial^\mu \psi_0 \,.
\end{equation}
For scalar modes, the equation of motion does not have an analytic solution for an arbitrary metric. However in the limit of large bulk mass---which is our focus---the effects of the curved metric can be neglected. The same is true for the fermion, and in the large bulk mass limit
the 5D actions of the special modes
are approximately
\begin{equation}
\int d^4x dy \, 2m_\Phi e^{-2|m_\Phi|y} \partial_\mu \phi_0 \partial^\mu \phi_0 +\ldots \,,\quad \int d^4x dy \, 2m_\Psi e^{-2|m_\Psi|y} \bar \psi_0 i \gamma_\mu \partial^\mu \psi_0 +\ldots \label{eq:zero_modes}
\end{equation}
where we have now neglected the $a(y)$ terms in the exponential since we are assuming $a(y)\ll m_{\Phi} y, m_{\Psi} y$ near the brane.\,\footnote{
Away from the brane it is possible that $a(y)$ blows up (see \textit{e.g.} \cite{Cabrer:2009we}) such that it is not negligible with respect to the bulk mass term. However in such a region the special mode profile is highly suppressed, hence this effect is negligible for our purposes. The approximate profiles in Eq.~\eqref{eq:zero_modes} are essentially set by their behaviour near the brane. }
The above profiles are valid as long as the 4D mass of the special mode is small with respect to $m_{\Phi}$, $m_{\Psi}$.
So far we have considered a discrete spectrum, possibly enforced by a second brane at $y_1$.
We now remove this brane, $y_1 \rightarrow \infty$, such that the spectrum may become continuous.
The kinetic normalization of the modes in Eq.~\eqref{eq:zero_modes} remains finite for $y_1 \rightarrow \infty$ \textit{i.e.} the modes are normalizable. Therefore the existence of the special modes is guaranteed for any metric.
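For instance, the kinetic normalization of the scalar special mode in Eq.~\eqref{eq:zero_modes} is (taking $y_0=0$ and $m_\Phi>0$)
\begin{equation}
\int_{0}^{y_1} dy\, 2m_\Phi\, e^{-2m_\Phi y}=1-e^{-2m_\Phi y_1}\;\underset{y_1\rightarrow\infty}{\longrightarrow}\;1\,,
\end{equation}
which remains finite, and similarly for the fermion mode.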
If the spectrum is continuous, one subtlety is that the special mode may in principle mix with the KK continuum.
When that is the case, such an effect would have to vanish in the localization limit for a pure 4D mode to be recovered, \textit{i.e.} for Eq.~\eqref{eq:Pilim} to be asymptotically satisfied.
Our analysis here is about the existence of a 4D degree of freedom potentially reproducing the exactly localized limit. However it does not say anything about the rest of the spectrum, and it is thus not clear if the necessary condition Eq.~\eqref{eq:Pilim} is satisfied in the exactly localized limit. It is, as a matter of fact, a very model-dependent feature, as can be seen by inspecting the KK spectrum of flat and warped cases.
The interactions of the brane-localized modes will be treated in more detail in Sec.~\ref{se:disc}.
\subsubsection{Large brane kinetic terms }
\label{se:BKTdir}
For fields of any spin, another limit potentially giving rise to an exactly localized braneworld is that of a large brane-localized kinetic term.
The action takes schematically the form
\begin{equation}
S=\int d^4x dy \left(\sqrt{g} {\cal L}^{\rm kin}_5 + \delta(y-y_0)\,r \sqrt{\bar g} {\cal L}^{\rm kin}_4\right)\,
\end{equation}
where $r$ controls the magnitude of the brane-localized kinetic term.\,\footnote{$r$ has dimension of length.} Sending $r$ to infinity, one might expect to obtain an exactly localized theory.
To show that a 4D mode exists at large $r$ for any metric, it is enough to consider the self-energy Eq.~\eqref{eq:dressing2}. The brane kinetic term contributes as $\kappa(p)=-r p^2$ in the propagator.
For $r\rightarrow \infty$, the brane term overwhelms the bulk term $\hat \Pi(p)$ such that
\begin{equation}
\Pi(p) \approx r p^2\,. \label{eq:Pi_kin}
\end{equation}
Hence in that limit the holographic action indeed contains a single 4D mode.
While this is shown here for the scalar propagator, the mechanism is similar for fermions and gauge fields.
The limit Eq.~\eqref{eq:Pi_kin} applies for any spectrum, discrete or continuous, and for any metric. Hence
Eq.~\eqref{eq:Pi_kin} always ensures that an exact 4D mode arises asymptotically in the $r\rightarrow \infty$ limit.
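As an illustration, for a massless scalar on a flat interval with the brane at $y_0=0$ and a Neumann condition at $y=L$, one has $\hat \Pi(p)\propto p\tan(pL)$, so that the mass spectrum in the presence of the brane kinetic term satisfies, schematically,
\begin{equation}
\tan(pL)=-\,r\,p\,.
\end{equation}
For $r\rightarrow\infty$ the massive solutions approach $p=(n+\frac{1}{2})\pi/L$, \textit{i.e.} the Dirichlet spectrum, while the $p=0$ solution remains as the localized 4D mode.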
The interactions will be treated in more detail in Sec.~\ref{se:disc}.
\section{Discontinuities in theory space}
\label{se:disc}
In the previous section we have identified potential directions in the parameter space of the quasilocalized braneworld EFT for which the theory seems to tend to an exactly localized braneworld. In this section we show that, at least in the presence of gravity, none of these limits actually lead to an exactly localized braneworld.
\subsection{Obstruction on bulk masses \label{se:mass}}
We have seen that a localized 4D mode of scalar or fermion can appear if one takes the bulk masses $m_\Phi$, $m_\Psi$ to infinity (see Sec.~\ref{se:BMdir}).
However, whenever a 5D theory is interacting, it is a low-energy EFT with a finite cutoff $\Lambda$.
Hence, when the theory has 5D interactions, the bulk masses $m_\Phi$, $m_\Psi$ should not exceed $\Lambda$ for the theory to remain valid.\,\footnote{This upper bound on bulk masses has a CFT equivalent as an upper bound on the conformal dimension of CFT operators, see \cite{Fitzpatrick:2010zm}. }
Therefore bulk masses cannot go to infinity; there is an obstruction.
One could in principle set the 5D matter interactions of $\Phi$, $\Psi$ to zero, and set all 5D higher-dimensional operators to zero at a given scale. Doing this removes the 5D cutoff in the absence of gravity. However, whenever gravity is present, the 5D theory is interacting hence a finite Planckian cutoff always exists.
Although the above argument is in principle sufficient to discard the possibility of $m_{\Phi, \Psi}\rightarrow \infty$, let us ignore it and allow arbitrary values of bulk masses. Consider a discrete spectrum, assume the special modes are light (\textit{i.e.} are zero modes) and evaluate their low-energy 4D effective theory.
In the presence of a bulk interaction, \textit{e.g.} a four-fermion operator with coefficient $\lambda$, with $[\lambda]=-3$, the coefficient of the effective 4D four-fermion operator
\begin{equation}
{\cal L}^{\rm eff}_{\rm 4D }=\lambda_{\rm 4}
(\bar \psi \psi)(\bar \psi \psi) +\ldots\quad\quad \quad \,
\end{equation}
is given by
\begin{equation}
\lambda_{\rm 4}= \lambda \int dy [f^{\Psi}_0(y)]^4\approx \lambda |m_\Psi|\,. \label{eq:lambdapsi4}
\end{equation}
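The estimate follows from the approximately normalized exponential profile $f^{\Psi}_0(y)\simeq\sqrt{2|m_\Psi|}\,e^{-|m_\Psi| y}$ (see Eq.~\eqref{eq:zero_modes}), for which
\begin{equation}
\int_0^\infty dy\,[f^{\Psi}_0(y)]^4=\int_0^\infty dy\, 4m_\Psi^2\, e^{-4|m_\Psi| y}=|m_\Psi|\,.
\end{equation}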
The coefficient grows with $m_\Psi$. This implies that if one tries to send $m_\Psi$ to infinity at fixed $\lambda$, the cutoff of the 4D EFT, which is roughly of order $\Lambda_{4} \sim 1/(4\pi \sqrt{\lambda m_\Psi})$, tends to zero. Hence taking such a limit leads to a 4D EFT with vanishing range of validity.
This is of course an obstruction to reaching an exactly localized theory---which has a finite range of validity.
However this argument still has a caveat because one could in principle adjust the magnitude of the 5D interactions (such as $\lambda$ in Eq.~\eqref{eq:lambdapsi4}) to keep the low-energy couplings under control. Unwanted interactions---for instance brane-localized ones---can be set to zero at a given scale if needed.
To see where an obstruction occurs with no possible caveat, let us keep evaluating the low-energy EFT of zero modes. In the low-energy EFT the Kaluza-Klein modes are integrated out and contribute as higher-dimensional operators in the low-energy EFT. In particular, in the presence of gravity, KK gravitons are integrated out. The KK gravitons couple to the zero mode stress energy tensor, which contains terms proportional to bulk and brane masses.\,\footnote{ Here the bulk and brane masses are tied to each other to maintain a small 4D mass for the zero mode. } An example of scalar effective operator generated by the gravitons is ${\cal O}_\phi=(\partial_\mu \phi)^2 \phi^2$.
When the bulk and brane masses are sent to infinity, the coefficient of this operator must tend to infinity unless cancellations occur to render the coefficient finite. An explicit calculation in AdS from \cite{Dudas:2012mv} shows that the coefficient of ${\cal O}_\phi$ does diverge. Quoting their result,
\begin{equation}
{\cal L}_4= -\frac{e^{2 k L}}{6M^2_{\rm Pl}}\frac{(1+\alpha)^2}{3+2\alpha} {\cal O}_\phi \approx
-\frac{m_\Phi e^{2 k L}}{12 k M^2_{\rm Pl}} {\cal O}_\phi
\, \label{eq:L4_grav}
\end{equation}
where we have taken the large $m_\Phi$ limit in the last step. Since for $m_\Phi \rightarrow \infty$ the coefficient of the effective operator tends to infinity,
the cutoff of the 4D EFT goes to zero. Hence the validity range of the 4D EFT vanishes and the large bulk mass limit cannot continuously lead to an exactly localized EFT. No coupling can be tuned here, since the strength of the interaction only depends on $M_{\rm Pl}$.
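Parametrically, an operator $c\,{\cal O}_\phi$ with $[c]=-2$ signals strong coupling at a scale of order $|c|^{-1/2}$ (dropping $4\pi$ factors), hence from Eq.~\eqref{eq:L4_grav}
\begin{equation}
\Lambda_{4}\sim\sqrt{\frac{12\,k\,M^2_{\rm Pl}}{m_\Phi}}\,e^{-kL}\;\underset{m_\Phi\rightarrow\infty}{\longrightarrow}\;0\,.
\end{equation}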
Importantly, the obstruction occurs because of the presence of the KK gravitons. The KK modes of fields other than gravity produce 4D effective operators which remain finite in the $m_\Phi \rightarrow \infty$ limit.
We can now see what is special about gravity: since 5D gravity couples to 5D mass, taking the $m_\Phi \rightarrow \infty$ limit would imply infinitely strong coupling and thus no weakly coupled EFT description at any scale.
\subsection{Brane kinetic localization} \label{se:BKT}
A localized 4D mode for any of the matter fields $(\Phi, \Psi, {\cal A}^M)$ can also appear by taking the limit of a large brane kinetic term with magnitude $r$ (see Sec.~\ref{se:BKTdir}).
Aspects of the localization limit for each kind of field will be discussed further below.
A general argument showing that the exact localization limit of $\Gamma$ does not lead to $\tilde \Gamma$ is as follows.
At large $r$, while one special mode tends to get exactly localized on the brane, the set of all KK modes is fully expelled from the brane. The KK modes decouple from the brane, but certainly not from the spectrum. Rather, at large $r$ the set of KK modes gets a Dirichlet boundary condition on the brane.
This general feature can be seen explicitly in the dressed propagator Eq.~\eqref{eq:dressing2}, which in the presence of a brane-localized kinetic term takes the form
\begin{equation}
\Delta_p(y,y')= i\frac{\hat K_p(y) \hat K_p(y')}{\hat \Pi(p)+r p^2} + \hat \Delta^D_p(y,y') \,. \label{eq:dressing2BKT}
\end{equation}
For $r\rightarrow\infty$, the bulk contribution $\hat \Pi(p)$ becomes negligible compared to $ r p^2$. The first term in Eq.~\eqref{eq:dressing2BKT} takes the form of a pure 4D pole. The second term corresponds to the set of Dirichlet KK modes, which clearly remain in the spectrum since they are not affected by the brane dressing.
We consider a scalar propagator here, but the property remains valid for any kind of field. This feature matches the well-known results from \cite{Carena:2002dz} about ``opaque'' branes.
In the limit of exact localization, the classical part of the holographic action contains only a 4D field---with possible brane interactions as discussed further below.
Hence it may seem that the exact localization limit is indeed successful.
However, at the quantum level, the brane-localized degrees of freedom always know about the Dirichlet KK modes because of gravity. Namely, the brane modes couple to KK gravitons, which themselves couple to the Dirichlet KK modes of matter, as shown in Fig.~\ref{fig:brane_grav_D}.
Hence the picture is that, while $\tilde \Gamma$ contains by definition isolated 4D degrees of freedom, the same 4D degrees of freedom in $\Gamma$ are necessarily accompanied by towers of Dirichlet modes. Without gravity, the equivalence between $\tilde \Gamma$ and $\Gamma$ could be exact because the Dirichlet modes may be completely decoupled from the brane. In contrast, in the presence of gravity, KK gravitons always connect brane modes to Dirichlet modes. This has physically observable consequences; therefore the limit of $\Gamma$ at large $r$ differs from $\tilde \Gamma$.
\begin{figure}[t]
\centering
\includegraphics[width=0.2\linewidth,trim={0cm 0cm 0cm 0cm},clip]{brane_grav_D.pdf}
\caption{ Brane-localized modes interact with matter Dirichlet modes in the presence of gravity, therefore $\Gamma \neq \tilde \Gamma$. $T_D$ denotes the stress tensor of Dirichlet modes.
}
\label{fig:brane_grav_D}
\end{figure}
Even though the above argument is sufficient to establish the discontinuity between the $\Gamma$ and $\tilde \Gamma$ theories in the presence of gravity, it is still interesting to study in more detail the effects of kinetic localization for the various matter fields. This is relevant from a purely theoretical viewpoint but also for future model-building manipulations.
\subsubsection{Scalar and fermion modes}
Since the $r$ coefficient normalizes the kinetic term, taking large $r$ has consequences for the other operators of the theory. At large $r$, canonical normalization implies the rescaling $\hat\phi=\sqrt{r}\phi$ (and similarly for other fields), which suppresses the other operators of the brane Lagrangian by powers of $\sqrt{r}$.
For scalars and fermions, it is always possible to obtain a massive, interacting Lagrangian in the $r\rightarrow \infty $ limit of $\Gamma$ by introducing brane-localized masses and interactions which scale with appropriate powers of $\sqrt{r}$. This does not change the fact that $\Gamma$ differs from $\tilde \Gamma$ by the presence of bulk Dirichlet modes.
\subsubsection{Gauge modes}
Kinetic localization of gauge fields is more constrained because, unlike in the case of scalars and fermions, the self-interactions of the gauge field are constrained by gauge invariance. As a result, when the coefficient of the gauge kinetic term grows, the gauge self-interactions are necessarily suppressed.
The gauge action reads
\begin{equation}
S_A = \int d^5X \sqrt{g}\left(
-\frac{1}{4g^2_5} {\cal F}^{MN}{\cal F}_{MN} \right) + \int d^4x \sqrt{|\bar g|}\left(
-\frac{r}{4 g^2_5} {\cal F}^{MN}{\cal F}_{MN} \right)\bigg|_{y=y_0}+ \ldots \label{eq:5Dgauge}
\end{equation}
Consider the transverse part of the propagator for ${\cal A}^\mu$,
\begin{equation}
\Delta^{\cal A}_{\mu\nu}(p;y,y')=\left(\eta_{\mu\nu}-\frac{p_\mu p_\nu}{p^2}\right) \Delta^{\cal A}(p;y,y') +\ldots
\end{equation}
where in the presence of the brane kinetic term Eq.~\eqref{eq:5Dgauge},
\begin{equation}
\Delta^{\cal A}_p(y,y')= -ig_5^2\frac{\hat K_p(y) \hat K_p(y')}{\hat \Pi(p) + r p^2} + \hat \Delta^{{\cal A},D}_p(y,y') \,. \label{eq:gaugerel}
\end{equation}
We can see that at large $r$ the self-energy takes the form
\begin{equation}
\frac{1}{g^2_5}\Pi(p)=\frac{1}{g^2_5}\left(\hat\Pi(p)+r p^2\right) \rightarrow \frac{r}{g^2_5} p^2 \,.
\end{equation}
Therefore the effective gauge coupling in the quasilocalized limit is\,\footnote{This formula includes the case of a gauge zero mode in a compact extra dimension, for which $ g^2_4=\frac{g_5^2}{L+r}$ for any $r$. }
\begin{equation}
g^2_4\approx \frac{g_5^2}{r} \,.
\end{equation}
This matching relates brane localization to the strength of gauge interactions, and thus has important consequences.
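As a minimal numerical illustration of this matching (a sketch with arbitrary units and illustrative parameter values, not taken from the text), one can check that the compact-case formula of the footnote above, $g_4^2=g_5^2/(L+r)$, approaches the quasilocalized estimate $g_5^2/r$ once $r\gg L$:

```python
def g4_squared(g5_sq, L, r):
    """Zero-mode gauge coupling g_4^2 = g_5^2 / (L + r) (compact case)."""
    return g5_sq / (L + r)

g5_sq, L = 1.0, 1.0  # illustrative values, in units of the compactification size
for r in (10.0, 1.0e3, 1.0e6):
    exact = g4_squared(g5_sq, L, r)
    quasi = g5_sq / r  # quasilocalized estimate g_4^2 ~ g_5^2 / r
    print(r, exact, quasi, abs(exact - quasi) / exact)  # relative gap ~ L/r
```

The relative gap between the two forms scales as $L/r$, so the quasilocalized estimate becomes exact in the $r\rightarrow\infty$ limit.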
In order to achieve an exactly localized gauge field for a given value of the effective gauge coupling $g_4$, the increase of $r$ has to be accompanied by an increase of
the 5D gauge coupling $g_5$. However, increasing $g_5$ has a price: since $g_5$ controls 5D interactions, increasing it \textit{lowers} the 5D cutoff of the theory.
This implies that taking $r\rightarrow \infty$ at finite $1/g^2_4$ sends the cutoff of the theory to zero. The theory thus has a vanishing validity range and cannot continuously reproduce the exactly localized gauge theory from $\tilde \Gamma$. Interestingly, in this case, the obstruction is not related to gravity.
Conversely, taking large $r$ at fixed $g_5$ with no requirement on $g_4$, it seems one could obtain an exactly localized gauge theory with vanishing $g_4$ gauge coupling. However, in the presence of gravity, this limit is obstructed by the Weak Gravity Conjecture (WGC) \cite{ArkaniHamed:2006dz}. In this limit the EFT cutoff is lowered to $\Lambda \sim g_4 M_{\rm Pl}$ as required by the WGC, and thus taking $r\rightarrow \infty$ gives once again an EFT with vanishing validity range.
Hence there is again an obstruction, in this case because of gravity.
\section{Braneworlds and Swampland \label{se:swamp}}
In the previous section we have shown that, at least in the presence of gravity, the exactly localized and quasilocalized braneworlds are not continuously related in theory space.
In this section we focus on the exactly localized braneworld. We aim to find internal discrepancies or paradoxes in this kind of theory.
\subsection{Brane width \label{se:width}}
In the braneworld EFTs we consider, the brane is an infinitely thin hypersurface. For an EFT without gravity ($M_{\rm Pl}\rightarrow \infty$), such a feature can in principle remain valid at infinitely short distances.\,\footnote{If one removes gravity in the exactly localized theory, the bulk becomes totally empty and the fifth dimension can be trivially integrated over. }
In contrast, in the presence of gravity, the infinitely thin brane description should become invalid
at distance scales of order of the local Planck length, where quantum fluctuations of spacetime become strong.
From the EFT viewpoint, such breakdown of the thin brane description should
manifest itself via the presence of higher-dimensional operators encoding the effects of the brane width.
These higher order terms in the braneworld EFT take the form
\begin{equation}
S_{\rm brane}= \int d^5X \sqrt{|\bar g|} \left[\delta(y-y_0){\cal L}^{(0)}+
a\delta'(y-y_0){\cal L}^{(1)}+
\frac{b}{2}\delta''(y-y_0){\cal L}^{(2)}+\ldots
\right]
\end{equation}
where $a$, $b$ are coefficients vanishing in the $M_{\rm Pl}\rightarrow \infty$ limit.\,\footnote{
The brane profile, taken as a distribution, can be formally expanded over the basis of the derivatives of the Dirac delta. The truncation of this series depends on the test function on which it acts. In the context of the low-energy EFT, this truncation is controlled by the long-distance expansion defining the EFT.
}
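To make the action of the delta derivatives explicit, recall that for a smooth test function $f$,
\begin{equation}
\int dy\, \delta^{(n)}(y-y_0)\, f(y) = (-1)^n f^{(n)}(y_0)\,,
\end{equation}
so that the ${\cal L}^{(1)}$ term contributes $-a\,\partial_y {\cal L}^{(1)}\big|_{y=y_0}$ to the action, a contribution that vanishes identically unless ${\cal L}^{(1)}$ depends on $y$.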
Without any specification of the UV completion or of the exact brane profile, this immediately implies that the ${\cal L}^{(i)}$ have to depend on $y$---otherwise all the ${\cal L}^{(i>0)}$ would vanish. The fields in ${\cal L}^{(i)}$ are thus 5D fields, which implies that the theory is a quasilocalized braneworld---as defined in Eq.~\eqref{eq:Sgen}.
In short, gravity requires that the brane has some concept of width, which requires all fields to be five-dimensional, such that the braneworld is of the quasilocalized kind.
From the viewpoint of a UV completion this could for example happen because the brane is a soliton with finite width \cite{Rubakov:1983bb, ArkaniHamed:1999za, Mirabelli:1999ks}, or because the brane becomes a dynamical object with a non-trivial form factor near the Planck scale.
From this simple brane width argument we may conclude that an exactly localized braneworld EFT is incompatible with an embedding into a theory of gravity.
In the following we present further arguments, relying in part on standard swampland conjectures (see \cite{Palti:2019pca} for a review).
\subsection{Argument from global symmetries \label{se:sym}}
Consider a flat 5D interval with a $U(1)$ gauge field in the bulk. Assume two species $\phi_0$, $\phi_1$ with charges $q_0$, $q_1$, exactly localized on two different branes located at the two endpoints of the interval. To be specific we assume that $q_0$, $q_1$ are coprime and of opposite sign.
Let us consider the low-energy theory below the KK scale, for which all KK photons and KK gravitons are integrated out. The low-energy limit is taken only for convenience, the argument still applies at any energy scale in the theory.
The 4D effective Lagrangian contains effective operators generated by the KK modes. Because of exact localization, the 4D Lagrangian only contains operators
composed of monomials $|\phi_0|^{2}$, $|\phi_1|^{2}$
and similar ones with derivatives and more complex Lorentz structures.
In this 4D theory the $\phi_0$ and $\phi_1$ numbers $N_0$, $N_1$ are separately \textit{exactly} conserved. Conservation of these numbers is not implied by the gauge symmetry, which only dictates conservation of the gauge charge $q_0 N_0+q_1 N_1$, hence the individual $N_0$, $N_1$ numbers are global charges.
The theory has therefore an exact global symmetry.
This is in direct contradiction with the swampland conjecture that there is no exact global symmetry in an EFT emerging from a UV theory of gravity.
This contradiction is resolved in the quasilocalized picture, where $\phi_0$, $\phi_1$ are the zero modes of 5D bulk fields $\Phi_0$, $\Phi_1$. These bulk fields are directly in contact via 5D operators respecting the gauge symmetry but not the individual $\Phi$ number (see discussion in \cite{Fichet:2019ugl}).
The zero modes of $\Phi_0$, $\Phi_1$, even if highly localized on each brane, have a non-vanishing wavefunction in the bulk and thus overlap with each other. As a result, in addition to $|\phi_i|^2$ monomials, the low-energy theory contains operators built from
monomials of
\begin{equation}
\phi_0^{|q_1|} \phi_1^{|q_0|}+{\rm h.c.}\,
\end{equation}
which explicitly violate the individual $\phi_i$ numbers. These operators arise both from the direct contact between the zero modes and from integrating out the KK modes of $\Phi_0$ and $\Phi_1$. Such symmetry-violating terms would be absent in the case of exact localization, rendering the global symmetries exact.
Summarizing, we have presented a configuration where exact localization of charged fields is tied to a violation of the conjecture that no exact global symmetry exists in the presence of gravity. This violation is naturally avoided when using quasilocalized fields.
A similar argument has been recently presented in \cite{Fichet:2019ugl}.
\subsection{Argument from emergent species \label{se:species}}
Consider a slice of AdS$_5$, \textit{i.e.} AdS$_5$ space truncated by two branes.
This corresponds to $a(y)=ky$ in the metric Eq.~\eqref{eq:metric}. For AdS it is convenient to use the conformal coordinates $z=e^{ky}/k$. The branes are taken to be at positions $z_0=1/k$ (UV brane), $z_1=1/\mu$ (IR brane).
For the moment we assume no matter on the IR brane or in the bulk. For an introduction to QFT in a warped background see \textit{e.g.} \cite{Ponton:2012bi,Gherghetta:2010cj}.
While the cutoff in terms of proper distance is constant since AdS is homogeneous, the cutoff in coordinate distance varies along the $z$ coordinate. Assuming the 5D cutoff for an observer on the UV brane is $\Lambda$, the cutoff for an observer on the IR brane is $\Lambda'=\Lambda \mu/k$, \textit{i.e.} it is ``warped down''.\,\footnote{ This is because the effect of higher dimensional operators in the 5D action is enhanced by powers of $k/\mu$ on the IR brane as compared to the UV brane. }
Let us consider the holographic action defined on the UV brane---as usually done in the context of AdS/CFT. As is well known \cite{ArkaniHamed:2000ds, Polchinski:2002jw, Gherghetta:2003he, Fichet:2019hkg}, for 4-momentum $|p|=|\sqrt{p^2}| \gg \mu$, IR-localized fields/operators and the IR brane itself vanish from all correlators. In this 5D regime the theory can be effectively described by a UV brane and an infinite AdS bulk.
Since the IR brane appears only in the IR, \textit{i.e.} at low 4-momentum $|p|$, it is effectively emergent from the viewpoint of the UV brane, as formalized in the holographic action (see \cite{Brax:2019koq,Costantino:2019ixl} for BSM applications).
To facilitate discussions, let us just assume the transition is at $|p|\sim \Lambda'$ with $\Lambda'$ of order $\mu$.
We consider the two extreme regimes of the holographic action. For $|p|\gg \Lambda'$, the theory is pure AdS$_5$. For $|p|\ll \Lambda'$, the theory contains only zero modes and is 4D.
Let us then introduce exactly localized matter on the IR brane. We are free to add a large number of species $N\gg 1$, all exactly localized on the IR brane.
Because of this large number of species, in the 4D regime, the cutoff is lowered to $\Lambda'/\sqrt{N}$ as dictated by the
species scale. The species scale is a swampland conjecture implied by gravity \cite{Dvali:2007hz,Palti:2019pca}. This introduces a rather strange feature. In the holographic action, there is now a parametrically large energy range in between the 5D and 4D regimes, \begin{equation}
|p|\sim [\Lambda'/\sqrt{N}, \Lambda' ] \label{eq:disc_Lambda}
\end{equation}
for which the EFT is invalid. We take this discontinuity as a signal of an inconsistency.
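To make the size of the window in Eq.~\eqref{eq:disc_Lambda} concrete, here is a minimal numerical sketch (illustrative species numbers only; units are arbitrary):

```python
import math

def species_cutoff(Lambda_IR, N):
    """Species-scale cutoff Lambda'/sqrt(N) for N localized species."""
    return Lambda_IR / math.sqrt(N)

# The EFT-invalid window [Lambda'/sqrt(N), Lambda'] spans a factor sqrt(N):
Lambda_IR = 1.0  # warped-down cutoff Lambda' (arbitrary units)
for N in (10, 10**4, 10**8):
    lower = species_cutoff(Lambda_IR, N)
    print(N, lower, Lambda_IR / lower)  # the window widens like sqrt(N)
```

For $N\sim 10^4$ species the window already spans two decades in energy, which is what we mean by a parametrically large range of EFT invalidity.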
The feature is related to the emergence of many degrees of freedom in the IR. Such a parametrically large increase of the number of degrees of freedom is in gross disagreement with the picture that degrees of freedom should monotonically decrease when flowing towards the IR, as encoded in c- and a-theorems.
It would be interesting to evaluate explicitly the holographic $a(z)$ function along the lines of \cite{Freedman:1999gp,Myers:2010tj}. However for our purposes, qualitative considerations are enough: The holographic action definitely has a problem with IR degrees of freedom.
Both inconsistencies about validity range and IR degrees of freedom are solved when assuming quasilocalized fields instead of exactly localized fields.
With quasilocalized fields, the theory now contains $N$ bulk fields. The holographic action knows about these bulk degrees of freedom at any $|p|$. The $N$ bulk fields imply an overall reduction of the 5D cutoff by $\sqrt{N}$ in both 5D and 4D regimes, and no discontinuity in the validity range of the theory (in contrast with Eq.~\eqref{eq:disc_Lambda}). Since the existence of the $N$ bulk fields is known in the UV, no steep increase in the number of degrees of freedom
due to the emergent IR brane occurs along the RG flow.
Let us comment on the interplay with gravity.
The cutoff-based argument relies on the species scale, which is implied by gravity.
The argument about degrees of freedom seems naively unrelated to gravity, although this may deserve further thinking since the evaluation of the usual holographic $a$ function does rely on Einstein's equations.\,\footnote{
One can also argue that the presence of the $N$ 5D fields is implied by the finite IR brane width as discussed in Sec.~\ref{se:width}, and thus enforced by gravity.
}
Summarizing, in the warped configuration studied here, exact localization of a large number of species leads to inconsistencies which are partly related to the presence of gravity. These inconsistencies are naturally solved when fields are taken to be quasilocalized.
\subsection{Discussion}
We have exhibited two specific models with exactly localized fields which, to the best of our understanding, should belong to the swampland.
We have also made the simple point that whenever some notion of brane thickness is introduced, the braneworld should be of the quasilocalized type.
These points are unfavorable to the exactly localized braneworld EFT.
On the string theory side, braneworld model-building is often done with D3-branes, which give rise to matter fields living strictly on the worldvolume (see \textit{e.g.} \cite{Aldazabal:2000sa,Antoniadis:2000jv,Uranga:933469}). This may seem to favor, at first view, the picture of an exactly localized braneworld---which stands in contrast to the observations made in the rest of this section.
However a full string picture has restrictions: for instance, D3-branes have to be accompanied by D7-branes wrapped around compact space dimensions. The D7-branes do generate a tower of matter KK modes, which somehow accompany the isolated states from the D3-branes.
The presence of matter KK modes would then be reminiscent of the quasilocalized picture.
Also, a notion of thickness for the D-brane is sometimes discussed in the literature \cite{Moeller:2000jy}. This would again imply that the low-energy limit has to be a quasilocalized braneworld.
Given the possible subtleties on the string side, we do not attempt a broad conjecture about the (field-theoretical) exactly localized braneworld. The precise string picture relative to exact/quasi-localization would deserve a detailed study.
Here we simply report our results with no further extrapolation.
All these considerations about exactly versus quasilocalized braneworlds are interesting from a conceptual viewpoint, but they also have concrete observable consequences, as we will see in the next section.
\section{ The quasilocalized warped braneworld } \label{se:RS}
Given the previous results, it is interesting to revisit existing braneworld models of the exactly localized kind.
This includes in particular the DGP braneworld \cite{Dvali:2000hr} and the Randall-Sundrum II (RSII) braneworld \cite{Randall:1999vf}, both originally presented with the SM exactly localized on a brane.
In a sense, an exactly localized braneworld is an approximation of a quasilocalized one. How good the approximation is may depend on the spacetime background, the field content, and so on.
As a general tendency, we can expect a richer phenomenology once matter is quasilocalized, since new degrees of freedom (the KK modes) are always present in the theory, and since a quasilocalized brane field has direct contact with bulk degrees of freedom.
Taking into account these phenomena may provide new observable effects, and perhaps new constraints on the braneworld model.
In this work we focus on the ``quasilocalized RSII model'', \textit{i.e.} RSII where all SM fields are quasilocalized.
We include an IR brane to discretize the spectrum, as it is sometimes convenient for discussions. The IR brane can be sent to infinity at any time to recover full AdS space in the IR.
For every localized 4D field, there is a KK tower, or a KK continuum if the IR brane is at infinity.
The phenomenology for scalars and fermions depends both on their bulk mass and their brane-localized Lagrangian---which are responsible for the two localization mechanisms discussed in Secs.~\ref{se:limit}, \ref{se:disc}. In contrast, quasilocalized gauge fields are much more constrained, because 5D gauge symmetry dictates their profile and their interactions.
The phenomenology (including possible constraints) from the scalar and fermion KK sectors is certainly interesting, but our focus here is on the gauge and gravity sectors which are more model-independent.
\subsection{Action, propagator, opacity and EFT validity}
Consider the 5D action of gravity and a gauge field. The action takes the form
\begin{equation}
S_{\rm AdS}= \int d^5X \sqrt{g}\left[ M^3_* {\cal R} - \Lambda_5
-\frac{1}{4g^2_5} {\cal F}^{MN}{\cal F}_{MN}
\right]
+ \int_{\rm br.} d^4x \sqrt{|\tilde g|}\left(
-\frac{r}{4 g^2_5} {\cal F}^{MN}{\cal F}_{MN}-\Lambda_4 \right)
\,.
\label{eq:RS_action}
\end{equation}
The 5D cosmological constant and brane tension satisfy $\Lambda_5=-12 k^2 M^3_*$, $\Lambda_4=\Lambda_5/k$, $k$ being the AdS curvature. The $M_*$ parameter sets the strength of 5D gravity
and is related to the 4D Planck mass by $M_*^3\approx kM^2_{\rm Pl}$.
The metric of the AdS background is denoted $\gamma_{MN}$, such that $g_{MN}=\gamma_{MN}+\ldots$ where the ellipsis denotes the metric fluctuations. The graviton Lagrangian will be expanded in Sec.~\ref{se:AAAA}.
A localized Ricci scalar could also be included on the brane. Since our focus is on matter fields, this is a direction we do not consider in the scope of this work.
Optionally, another brane with tension $-\Lambda_4$ and no localized matter Lagrangian is also included in the action Eq.~\eqref{eq:RS_action}, further away from the AdS boundary, \textit{i.e.} in the IR region. This second brane is referred to as ``IR brane'' and the main one ``UV brane''.
For AdS$_5$ the general metric of Eq.~\eqref{eq:metric} satisfies $a(y)=ky$. We switch to so-called conformal coordinates $z=e^{ky}/k$, giving
\begin{equation}
ds^2=\gamma_{MN}dX^MdX^N=(kz)^{-2}(\eta_{\mu\nu}dx^\mu dx^\nu-dz^2)\, \label{eq:metricz}
\end{equation}
where $\eta_{\mu\nu}$ is Minkowski metric with $(+,-,-,-)$ signature. The UV brane is taken to be at $z=z_0=1/k$ with no loss of generality. The IR brane is situated at $z=z_1=1/\mu$.
To disentangle the components of the 5D gauge field, one introduces the 5D gauge fixing functional
\begin{equation}
-\frac{1}{2\xi k z g_5^2}\left(
\partial^\mu {\cal A}_\mu-\xi z\partial_5 \left(z^{-1}{\cal A}_5 \right)
\right)^2\,,
\end{equation}
defining the $R_\xi$ gauge \cite{Randall:2001gb,Carena:2002dz}. For our purposes it is enough to work in the Feynman gauge $\xi=1$.
The $\langle{\cal A}_5{\cal A}_5\rangle $ propagator encodes the longitudinal degrees of freedom.
The ${\cal A}_\mu$ component of the gauge field is taken to satisfy Neumann boundary condition on the branes while ${\cal A}_5$ has Dirichlet boundary conditions.
The propagator for ${\cal A}_\mu$ in the presence of the IR brane reads
\begin{align}
&\langle{\cal A}_\mu(p,z){\cal A}_\nu(-p,z')\rangle =
\Delta^{\cal A}_p(z,z')=
\\ \nonumber
& - \eta_{\mu\nu}\,
i\frac{\pi k^3 (zz')^2}{2 }
\frac{
\left[ Y_{0}\left(p/k\right)J_{1}\left(pz_<\right)
- J_{0}\left(p/k\right) Y_{1}\left(pz_<
\right)\right]\left[
Y_{0}\left(p/\mu\right)J_{1}\left(pz_>\right)
- J_{0}\left(p/\mu\right) Y_{1}\left(pz_>
\right)
\right]}
{ J_{0}\left(p/k\right) Y_{0}\left(p/\mu\right)
- Y_{0}\left(p/k\right) J_{0}\left(p/\mu\right)}\,
\end{align}
where $p=\sqrt{\eta^{\mu\nu} p_\mu p_\nu}$.
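As a cross-check of this expression, the KK photon masses can be extracted numerically as the poles of the propagator, \textit{i.e.} the zeros of the denominator $J_{0}(p/k) Y_{0}(p/\mu) - Y_{0}(p/k) J_{0}(p/\mu)$. The following stdlib-only Python sketch (illustrative hierarchy $k/\mu=10^3$, units $\mu=1$; the series implementations of $J_0$, $Y_0$ are for self-containedness and are adequate at the small arguments used here) locates the first nonzero pole, which lands a bit above the first zero of $J_0$:

```python
import math

def j0(x):
    """Bessel J0 via its power series (adequate for |x| <~ 10)."""
    term, total = 1.0, 1.0
    for n in range(1, 40):
        term *= -(x * x / 4.0) / (n * n)
        total += term
    return total

def y0(x):
    """Bessel Y0 via its standard series around x = 0 (requires x > 0)."""
    gamma_e = 0.5772156649015329
    term, harmonic, series = 1.0, 0.0, 0.0
    for n in range(1, 40):
        term *= -(x * x / 4.0) / (n * n)
        harmonic += 1.0 / n
        series += -term * harmonic  # (-1)^{n+1} H_n (x^2/4)^n / (n!)^2
    return (2.0 / math.pi) * ((math.log(x / 2.0) + gamma_e) * j0(x) + series)

k_ads, mu = 1.0e3, 1.0  # illustrative hierarchy, in units where mu = 1

def denom(p):
    """Denominator of the gauge propagator; its zeros are the KK masses."""
    return j0(p / k_ads) * y0(p / mu) - y0(p / k_ads) * j0(p / mu)

def bisect(f, a, b, steps=100):
    """Plain bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(steps):
        m = 0.5 * (a + b)
        if fa * f(m) > 0.0:
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

# The first nonzero pole sits between the first zero of J0 (~2.405) and 3 mu
m1 = bisect(denom, 2.405, 3.0)
print(m1)  # first KK photon mass, in units of mu
```

The pole at $p=0$ is the gauge zero mode; the first massive pole comes out of order the IR scale $\mu$, as expected for a KK spectrum quantized by the IR brane.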
The 5D action Eq.~\eqref{eq:RS_action} is the leading term of a low-energy effective theory valid at distances larger than $\Delta X \sim 1/\Lambda$ where $\Lambda$ is the validity cutoff.
The cutoff is set by the strongest interaction, \textit{i.e.} either by gravity or by gauge interactions, giving respectively
\begin{equation}
M^3_*\sim \frac{\Lambda^3}{24 \pi^3}\,,\quad\quad \frac{1}{g^2_5}\sim \frac{c\Lambda}{24 \pi^3} \,\label{eq:NDA}
\end{equation}
where $c$ is a group theoretical factor of order of the number of colors \cite{Chacko:1999hg,Agashe:2007zd}. The gravity cutoff implies $k \lesssim M_{\rm Pl}$ for the higher order curvature terms to be negligible.
The coupling of KK gravitons is controlled by the dimensionless quantity
\begin{equation}
\kappa=\frac{k}{M_{\rm Pl}} \,
\end{equation}
which can go up to $O(1)$.
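As a rough numerical illustration of these estimates (a sketch in Planck units; the group-theory factor $c$ and other $O(1)$ coefficients are ignored), one can combine $M_*^3\approx k M_{\rm Pl}^2$ with the gravity NDA relation of Eq.~\eqref{eq:NDA}:

```python
import math

def nda_cutoff(k, M_Pl):
    """Gravity NDA estimate: M*^3 ~ k M_Pl^2 and Lambda^3 ~ 24 pi^3 M*^3."""
    M_star = (k * M_Pl**2) ** (1.0 / 3.0)
    return (24.0 * math.pi**3) ** (1.0 / 3.0) * M_star

M_Pl = 1.0  # Planck units
for kappa in (0.1, 1.0):  # kappa = k / M_Pl can go up to O(1)
    k = kappa * M_Pl
    print(kappa, nda_cutoff(k, M_Pl))  # loop-counting cutoff in Planck units
```

The cutoff tracks $M_*$ up to the fixed NDA factor $(24\pi^3)^{1/3}\approx 9$, so for $\kappa=O(1)$ the 5D strong-coupling scale sits within an order of magnitude of $M_{\rm Pl}$.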
In the coordinates Eq.~\eqref{eq:metricz}, the cutoff on $p$ as seen by a local observer at position $z$ is $\Lambda\,kz$. Hence for a given momentum $p$, the EFT breaks down when going far enough in the IR region, at roughly $z =O( 1/p) $ (see \textit{e.g.} \cite{Goldberger:2002cz}).
However a property of the propagators is that they tend to be exponentially suppressed when an endpoint enters this IR region \cite{ArkaniHamed:2000ds, Polchinski:2002jw, Gherghetta:2003he, Fichet:2019hkg}. This is true for both Euclidean and Lorentzian momenta. For Lorentzian momentum the suppression appears once the propagator is dressed by bulk interactions. One has
\begin{equation}
\Delta_p(z) \sim \begin{cases}
e^{- |p| z_>} \quad &{\rm if}~~p_\mu~\quad{\rm spacelike}
\\
e^{-C p z_>} \quad &{\rm if}~~p_\mu~\quad{\rm timelike}
\end{cases} \label{eq:Kexp}
\end{equation}
An analytical estimate near strong coupling typically gives $C\sim O(1)-O(0.1)$.
The holographic profiles are expressed in terms of propagators (Eq.~\eqref{eq:K_class}) hence the same property is true for them.
This opacity property of AdS tends to censor the IR region where the 5D EFT breaks down, \textit{e.g.} where gravity would become strongly coupled. We will see an example of a calculation relying on the cutoff from Eq.~\eqref{eq:Kexp} in Sec.~\ref{se:AAAA}.
\subsection{Anomalous running of gauge couplings}
\label{se:RG}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth,trim={0cm 0cm 0cm 0cm},clip]{gauge_CFT.pdf}
\caption{ SM gauge fields dressed by insertion of CFT correlators, equivalent to the effect of the gauge KK continuum on brane-localized SM fields.
}
\label{fig:gauge_CFT}
\end{figure}
We now treat the gauge field holographically, introducing the variable
\begin{equation}
{\cal A}_{\mu,0}={\cal A}_\mu\big|_{z=z_0}\,.
\end{equation}
Using asymptotic forms of Bessel functions, the bilinear holographic action is found to be
\begin{align}
\label{eq:RGlargep}
\Gamma_{\rm cl}[ {\cal A}_{\mu,0}]& \approx
\begin{dcases}
\int \frac{d^4p}{(2\pi)^4}\left(\frac{\log\left(k/\mu\right)}{k}+ r \right) \frac{p^2}{4g^2_5} {\cal A}_{\mu,0}(p){\cal A}^\mu_{0}(-p) & |p|<\mu \\
\int \frac{d^4p}{(2\pi)^4}\left(\frac{\log\left(2k/\sqrt{-p^2}\right)-\gamma}{k}+ r \right) \frac{p^2}{4g^2_5} {\cal A}_{\mu,0}(p){\cal A}^\mu_{0}(-p) & |p|>\mu
\end{dcases}
\end{align}
For $|p|<\mu$, the action matches the one of gauge zero modes, and the low-energy gauge coupling $g_4$ takes the constant value
\begin{equation}
\frac{1}{g^2_{4,0}}= \frac{1}{g_5^2} \left(\frac{\log\left(k/\mu\right)}{k}+ r \right) \,. \label{eq:g4LE}
\end{equation}
For $|p|>\mu$, we can see that the holographic action is non-analytic. This regime includes the case of no IR brane $\mu \rightarrow 0$. In this regime the action describes a running holographic gauge coupling
\begin{equation}
\frac{1}{g^2_4(p)}= \frac{1}{g_5^2} \left(\frac{\log\left(2k/\sqrt{-p^2}\right)-\gamma}{k}+ r \right) \,.
\label{eq:g4HE}
\end{equation}
Combining Eqs.~\eqref{eq:g4LE},~\eqref{eq:g4HE}, neglecting the small term $\log(2)-\gamma$ for simplicity,
and using $r k \gg \log(k/\mu)$ which is the regime of relevance for our discussion, we get
\begin{equation}
g^2_4(p)=\frac{g^2_{4,0}}{1-\frac{1}{2rk}\log(-p^2/\mu^2)} \label{eq:g4RG} \,.
\end{equation}
Here we have expressed the running in terms of the low-energy coupling $g_{4,0}$, but we could similarly define the running at any scale $p_0$ and obtain a similar form.
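As a sanity check of Eq.~\eqref{eq:g4RG}, one can verify numerically that it reproduces the holographic running of Eq.~\eqref{eq:g4HE} in the regime $rk \gg \log(k/\mu)$. A minimal Python sketch (illustrative parameter values in units where $k=1$; the small $\log 2-\gamma$ term is dropped in both forms, consistently with the text):

```python
import math

# Illustrative parameters in units where k = 1, with rk >> log(k/mu)
k, mu, r, g5_sq = 1.0, 1.0e-15, 1.0e3, 1.0

# Low-energy coupling, Eq. (g4LE)
g40_sq = g5_sq / (math.log(k / mu) / k + r)

def g4_sq_exact(p):
    """Running holographic coupling, Eq. (g4HE), dropping log(2) - gamma."""
    return g5_sq / (math.log(k / p) / k + r)

def g4_sq_approx(p):
    """Resummed form of Eq. (g4RG), expressed through g_{4,0}."""
    return g40_sq / (1.0 - math.log(p * p / (mu * mu)) / (2.0 * r * k))

p = 1.0e-6  # a scale between mu and k
print(g4_sq_exact(p), g4_sq_approx(p))  # agree at the sub-percent level
```

The residual difference between the two forms is controlled by $\log(k/\mu)/(rk)$, confirming that Eq.~\eqref{eq:g4RG} is accurate precisely in the regime of relevance.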
We obtain the well-known feature that the AdS bulk dynamics induces a tree-level running of the holographic gauge coupling \cite{Contino:2002kc,Randall:2001gb,Goldberger:2002hb}. This running is induced by the presence of the KK continuum. Because of AdS/CFT, the running is equivalently described by mixing a 4D gauge field to a conserved current of the CFT. This produces exactly the same effect, and can be understood as dressing the gauge field by loops of the CFT constituents---which indeed contribute to the beta function of $g_4$. The fact that a tree-level effect on the AdS side matches a loop effect on the CFT side is also understood \cite{Aharony:1999ti}.
Let us now consider this behaviour in the context of a quasilocalized warped braneworld, where the gauge field treated above is identified with a SM gauge field, $A^{\rm SM}_\mu\equiv{\cal A}_{\mu,0}$.
In that context the presence of the bulk dynamics (the gauge KK continuum) induces an anomalous tree-level running of the SM gauge couplings. Using AdS/CFT, this effect can equivalently be understood as the mixing to a current from a hidden conformal sector.
Clearly, such anomalous running has to be small otherwise it would have already been observed. From the running shown in Eq.~\eqref{eq:g4RG}, we can see that
the condition for the effect to be small over a range of energy $[p_0^2,p_1^2]$ is
\begin{equation}
\log(p_1^2/p_0^2)\ll rk\,. \label{eq:g4bound}
\end{equation}
How can this be realized in the model? Let us focus on the $r$ parameter.
Because of gauge symmetry, the gauge sector is very constrained and $r$ is the only free parameter. Moreover, for a given value $\bar g_4$ (\textit{e.g.} with $\bar g^2_4/4\pi\sim 1/137$), or more generally the typical value of $g_4$ over $[p_0^2,p_1^2]$, the brane contribution $r/g_5^2$ is bounded from above, as can be seen from Eq.~\eqref{eq:g4LE} or \eqref{eq:g4HE}. This brane contribution can be at most as large as $1/\bar g^2_4$,
\,\footnote{We do not consider the fine-tuned case of a negative $r$ cancelling the bulk contribution to high precision. }
\begin{equation}
\frac{r}{g_5^2} < \frac{1}{\bar g^2_4}\,.
\end{equation}
It follows that the only way to increase $r$ is to simultaneously \textit{increase} $g_5$.
The other way to satisfy Eq.~\eqref{eq:g4bound} would be to increase $k$. However, $k$ is bounded from above since $k\lesssim M_{\rm Pl}$, and $k$ controls the strength of the graviton coupling. Hence we obtain again that a bound on the anomalous running of the gauge coupling constrains the \textit{weak} values of a coupling---which is the opposite of how usual experimental bounds work.
This implies that the parameter space of the braneworld can be cornered such that the model could---in principle---be tested completely.
To see this, let us return to the $g_5$ coupling. If the cutoff of the 5D theory is set by $g_5$ (see Eq.~\eqref{eq:NDA}), requiring larger $g_5$ implies a lower EFT cutoff $\Lambda$. In terms of $g_5$ this is given by Eq.~\eqref{eq:NDA} and in terms of $r$ this is given by $1/r\sim g_4^2 c\Lambda / (24\pi^3)$.
On the other hand, conventional high-energy experiments should bound the cutoff $\Lambda$ from below, which is just the usual experimental situation. Therefore $\Lambda$ can in principle be bounded from both above and below.
Summarizing, avoiding a large anomalous running of gauge couplings in the warped quasilocalized braneworld amounts to requiring stronger
coupling of the bulk degrees of freedom.\,\footnote{In the qualitative ``compositeness'' language, this amounts to saying that the mixing between the elementary fields and the composite sector is suppressed when the composite sector has stronger self-interactions $g_5$. }
This effect is specific to the gauge sector, where gauge symmetry ties together localization and strength of interactions.
\subsection{Anomalous gauge boson scattering from 5D gravity}
\label{se:AAAA}
\begin{figure}[t]
\centering
\includegraphics[width=0.2\linewidth,trim={0cm 0cm 0cm 0cm},clip]{brane_gauge_grav.pdf}
\caption{
Gauge boson scattering induced by 5D gravitons. }
\label{fig:brane_gauge_grav}
\end{figure}
In the quasilocalized braneworld, the gauge bosons have a fraction of their wavefunction living in the bulk.
Unlike in the exactly localized case,
the gauge fields can thus be in direct contact with \textit{e.g.} 5D gravity.
The relevant interaction is encoded in the kinetic term
\begin{equation}
-\int d^4x dz \sqrt{-g} \, \frac{1+ r \delta(z-z_0)}{4 g_5^2} {\cal F}_{MN} {\cal F}^{MN} \,. \label{eq:gaugekinRS}
\end{equation}
The distribution of the gauge fields between bulk and brane can be read off this kinetic term---when setting the metric $g_{MN}$ to the background value $\gamma_{MN}$.
We can notice that the bulk component would tend to zero for $r\rightarrow \infty$. However, in the case of gauge bosons, large $r$ requires taking large $g_5$, which is constrained as discussed in Sec.~\ref{se:RG}.
The coupling of the 5D graviton to the gauge field can be derived from Eq.~\eqref{eq:gaugekinRS} by expanding the metric as \begin{equation} g_{MN}=\gamma_{MN}+
\sqrt{\frac{2}{M^3_*}}h_{MN}+\ldots
\end{equation}
Expanding the Ricci scalar at quadratic order gives the graviton kinetic term ${\cal L}_h$ and the relevant action reads
\begin{equation}
S_h = \int d^4x dz \sqrt{-\gamma} \left(
{\cal L}_h + \sqrt{\frac{1}{2M^3_*}}h^{MN} T_{MN}\right) \,.
\end{equation}
The full graviton kinetic term can be found in \textit{e.g.} \cite{Boos:2002hf,Hinterbichler:2011tt, Dudas:2012mv}.
The stress tensor for the gauge field reads
\begin{equation}
T_{MN} =
\frac{1+ r \delta(z-z_0)\delta_{M\mu}\delta_{N\nu}}{ g_5^2} \left(- {\cal F}_{MV} {\cal F}_{N}^{\,\,V} +\frac{1}{4}\gamma_{MN}{\cal F}_{PQ}{\cal F}^{PQ}
\right)
\,.
\end{equation}
The 5D gravitons induce a tree-level scattering of the gauge bosons. In our holographic formalism this is encoded in the holographic 4-point function $\langle
{\cal A}_{\mu,0}{\cal A}_{\nu,0}{\cal A}_{\rho,0}{\cal A}_{\sigma,0}
\rangle$.
Our interest here is in the big picture: we want to obtain the parameter dependence of the amplitude.
We will not give the detailed structure of the
graviton-induced gauge boson scattering, which can be found in \textit{e.g.} \cite{Fichet:2014uka}. We also focus only on the contribution from the spin-2 helicity degrees of freedom.
Following \cite{Dudas:2012mv}, the graviton degrees of freedom can be disentangled using field redefinitions and appropriate gauge fixing.
The diagonal helicity-2 degrees of freedom are given by the traceless part of $(kz)^2 h_{MN}$, denoted $ \tilde h_{\mu\nu}$,\,\footnote{Namely
$ \tilde h_{\mu\nu}=\hat h_{\mu\nu}-\frac{1}{4}\eta_{\mu\nu} \hat h^\rho_\rho$, $\hat h_{MN}=(kz)^2 h_{MN}$. }
which couples to the source
\begin{equation}
\tilde T_{\mu\nu}=T_{\mu\nu}-\frac{1}{4}\eta_{\mu\nu}T_\rho^\rho\,.
\end{equation}
The relevant piece of the graviton action is
\begin{align}
S^h=\int d^4xdz\left(
\frac{1}{2 (kz)^3}(\partial_R \tilde h_{\mu\nu})^2
+\frac{1}{\sqrt{2 M_*^3}}
\frac{1}{(kz)^3} \tilde h^{\mu\nu}\tilde T_{\mu\nu}
\right)\,. \label{eq:Lagh}
\end{align}
In Eq.~\eqref{eq:Lagh} all contractions are done with the Minkowski metric. The $\tilde h_{\mu\nu}$ component has Neumann boundary conditions on the branes.
The exact graviton propagator is $\langle\tilde h_{\mu\nu} \,\tilde h_{\mu'\nu'}\rangle = \eta_{\mu\nu}\eta_{\mu'\nu'} \Delta^{\bm 2}_p(z,z')$ with
\begin{align}
& \Delta^{\bm 2}_p(z,z')= \\ \nonumber
& i\frac{\pi k^3 (zz')^2}{2 }
\frac{
\left[ Y_{1}\left(p/k\right)J_{2}\left(pz_<\right)
- J_{1}\left(p/k\right) Y_{2}\left(pz_<
\right)\right]\left[
Y_{1}\left(p/\mu\right)J_{2}\left(pz_>\right)
- J_{1}\left(p/\mu\right) Y_{2}\left(pz_>
\right)
\right]}
{ J_{1}\left(p/k\right) Y_{1}\left(p/\mu\right)
- Y_{1}\left(p/k\right) J_{1}\left(p/\mu\right)}\,.
\end{align}
The propagator is exponentially suppressed in the IR region, as described in Eq.~\eqref{eq:Kexp}. In the $z_><1/|p|$ region, it takes the form
\begin{equation}
\Delta^{\bm 2}_p(z,z') \approx
i\frac{2k}{p^2}+i\frac{2\gamma-1+2\log\left(\sqrt{-p^2}/2k\right) }{2k}
-i\frac{\left((kz_<)^2-1\right)^2}{4k} \label{eq:Deltagrav2} \,.
\end{equation}
This is the region of interest.
Here we have taken the continuum limit such that the poles do not appear.\,\footnote{As shown in e.g. \cite{Fichet:2019hkg}, the KK modes get a width from dressing by bulk interactions, tend to overlap with each other and give rise to a branch cut---corresponding to the AdS continuum. } The zero mode in Eq.~\eqref{eq:Deltagrav2} corresponds to the 4D graviton. The second term encodes the effect of the KK continuum on the UV brane \textit{e.g.} the correction to the Newton potential.
The last term is the Dirichlet contribution, as shown in the form Eq.~\eqref{eq:dressing2}. This Dirichlet term is the leading one in the physical process we consider.
Let us now consider the scattering of four on-shell gauge bosons. For on-shell massless gauge bosons the holographic profiles are simply $1$ for any $z$.
The scattering is induced at tree-level by graviton exchange.
Using that $K=1$, the relevant stress tensor expressed with the holographic variables is
\begin{equation}
\tilde T_{\mu\nu}=\frac{1+ r \delta(z-z_0)}{ g_5^2} (kz)^2 \left(- {\cal F}_{\mu \rho,0} {\cal F}_{\nu,0}^{\,\,\rho} +\frac{1}{4}\eta_{\mu\nu}{\cal F}_{\rho \sigma,0}{\cal F}_0^{\rho \sigma}
\right)
\end{equation}
where contractions are done with the Minkowski metric.
The polarization structure is encoded in the tensor
\begin{align}
&{\cal E}^{\mu\nu}(12)= \\
&\frac{1}{2}\left(p_1^\mu p_2^\nu \, \epsilon_1 . \epsilon_2 +
\epsilon_1^\mu \epsilon_2^\nu \, p_1 . p_2-
p_1^\mu \epsilon_2^\nu \, \epsilon_1 . p_2-
p_1. \epsilon_2 \, \epsilon^\mu_1 p^\nu_2
+1\leftrightarrow2\right)
-\eta^{\mu\nu}\frac{1}{2} (p_1. p_2 \, \epsilon_1 . \epsilon_2 - p_1.\epsilon_2\, p_2.\epsilon_1 )\,
\end{align}
here defined for two ingoing gauge bosons with momentum $p_1$, $p_2$ and polarization vectors $\epsilon^\mu_1$, $\epsilon^\mu_2$. Properties of the helicity amplitudes from spin-2 exchange can be found in \textit{e.g.} \cite{Fichet:2014uka} and need not be discussed here.
To get a familiar form for the amplitude we have to use canonically normalized external states. Starting from the holographic fields ${\cal A}_0$, this is done by including a factor $g_4(Q)$ for each external gauge boson leg. The $g_4(Q)$ is defined in Eq.~\eqref{eq:g4RG}. Here $Q$ is some typical scale involved in the physical process. Since we are interested in the large $r$ limit, this tree-level running effect is irrelevant and we simply take $g_4\approx g_5/\sqrt{r}$.
Putting everything together, the amplitude takes the form
\begin{equation}
i{\cal M}(12\rightarrow 34)=i{\cal M}^s+i{\cal M}^t+i{\cal M}^u\,,
\end{equation}
with
\begin{equation}
i{\cal M}^s=\frac{2}{M^3_*}{{\cal E}}_{\mu\nu}(12){{\cal E}}^{\mu\nu}(34)
\int dz dz' \frac{1}{kz kz'}\frac{1+ r \delta(z-z_0)}{ r}\frac{1+ r \delta(z'-z_0)}{ r}
\Delta^{\bm 2}_s(z,z')
\end{equation}
and similarly for the $t$ and $u$ diagrams.
Let us consider the pure AdS regime $\sqrt{s}>\mu$.
The propagator is exponentially suppressed in the IR region as dictated by opacity in the timelike region, see Eq.~\eqref{eq:Kexp}. For simplicity we do not take into account the $C$ coefficient from the exponential, and assume suppression in the
$z_>\sim 1/\sqrt{s}$ region. The same region can be taken for the position integral of the cross diagrams.
The non-vanishing contribution to the position integrals comes from the $\sqrt{s} < 1/z_>$ region of momentum space where the propagator takes the form
Eq.~\eqref{eq:Deltagrav2}.
The leading contribution is found to be
\begin{equation}
i{\cal M}^s \approx \frac{\kappa^2}{8\, k r\,s^2} {\cal E}_{\mu\nu}(12){\cal E}^{\mu\nu}(34) + \ldots
\label{eq:AAAA}
\end{equation}
This main contribution comes from the Dirichlet piece of the graviton propagator. The ellipsis represents subleading contributions.
The amplitude is of course controlled by the 5D gravity strength $\kappa$. Interestingly, it turns out that this amplitude is scale invariant.
We can see that the amplitude is suppressed by $r$, \textit{i.e.} the more the gauge fields are brane localized the less they see 5D gravity.
For a given coupling $g_4$, large $r$ can only be accomplished with large $g_5$.
Hence, as in the previous subsection, we see that suppressing this new physics effect amounts to lowering the new physics cutoff.
Since the scattering amplitude Eq.~\eqref{eq:AAAA} is scale invariant, it can be tested on an equal footing by experiments at very different scales. This scale invariance should certainly have interesting consequences regarding the interplay between different experiments.
Finally, if an IR brane exists and $\sqrt{s} < \mu$, all KK modes are effectively heavy and give rise to a local amplitude
\begin{equation}
i{\cal M}^s \approx \frac{\kappa^2}{16 (kr)^2 \mu^4 } {\cal E}_{\mu\nu}(12){\cal E}^{\mu\nu}(34) + \ldots
\label{eq:AAAALE}
\end{equation}
This amplitude can also be described by a 4D EFT with two local Euler-Heisenberg operators (see \textit{e.g.} \cite{Fichet:2014uka,Fichet:2013ola}). The cutoff of the 4D EFT is $O(\mu)$ above which it is UV-completed by the full braneworld model giving rise to Eq.~\eqref{eq:AAAA}. In a sense, the presence of the IR brane breaks the scale invariance, which makes perfect sense from the CFT viewpoint.
From Eq.~\eqref{eq:AAAALE} one can see that the amplitudes with $E<\mu$ are suppressed by a power of $(E/\mu)^4$ as compared to the scale invariant amplitude Eq.~\eqref{eq:AAAA}.
From the experimental viewpoint this is just a familiar low-energy behaviour: Experiments with energy scale below $\mu$ tend to be disfavored with respect to those at higher energies.
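As a quick arithmetic cross-check of this suppression, one can take the ratio of the $s$-channel prefactors in Eqs.~\eqref{eq:AAAALE} and \eqref{eq:AAAA} with $s=E^2$ (a sketch; the function name and sample values are ours):

```python
def amp_ratio(E, mu, k, r, kappa=1.0):
    """Ratio of the low-energy s-channel prefactor (local amplitude) to the
    scale-invariant one, with s = E^2 and the polarization tensors divided out."""
    s = E**2
    m_scale_inv = kappa**2 / (8.0 * k * r * s**2)   # pure AdS regime
    m_low = kappa**2 / (16.0 * (k * r)**2 * mu**4)  # IR-brane regime
    return m_low / m_scale_inv
```

The ratio evaluates to $s^2/(2 k r \mu^4) = E^4/(2 k r \mu^4)$, i.e. precisely the $(E/\mu)^4$ suppression quoted above.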
\section{Conclusion}\label{se:conc}
Braneworld effective theories can be either exactly localized or quasilocalized.
In this paper we have argued that, at least in the presence of gravity, an exactly localized theory cannot be obtained by taking a limit in a quasilocalized theory. Exact localization via large bulk masses is obstructed, essentially because 5D gravity couples to bulk masses. Even at the level of a zero mode EFT, gravity robustly ensures that the large bulk mass limit cannot be taken.
Exact localization via large kinetic term is not obstructed, but does not lead to an exactly localized braneworld because a tower of matter KK modes remains in the spectrum and always couples to the brane sector via 5D gravity. Moreover for a gauge field such limit cannot even be taken as it would send the cutoff of the theory to zero, either because of the WGC or because of 5D strong coupling.
\begin{figure}[t]
\centering
\includegraphics[width=0.55\linewidth,trim={1cm 13cm 1cm 3cm},clip]{Brane_swampland.pdf}
\caption{Cartoon of the space of braneworld EFTs with gravity. The gray region represents the parameter space of quasilocalized braneworld theories.
}
\label{fig:sketch_space}
\end{figure}
Focusing on exactly localized braneworld EFTs, we have presented two simple models in which inconsistencies appear. In a braneworld model with exactly localized matter and a bulk gauge field, we showed that an exact global symmetry can exist in the theory, in strict contradiction with expectations from quantum gravity. In a warped model with two branes, inconsistencies appear when the IR brane carries a large number of species.
In both of these models, the paradoxes are solved once the brane fields are made quasilocalized instead of exactly localized. Since the status of exact versus quasi-localization
in the context of string UV-completions is unclear---at least to us---we do not attempt a general claim from the hints obtained on the EFT side.
In any case, all these observations provide excellent motivation to revisit exactly-localized braneworld embeddings of the SM and make them quasilocalized. As a general rule, quasilocalization renders the phenomenology of these models richer.
This is because in quasilocalized models each brane degree of freedom is accompanied by a tower of KK modes---which may possibly be heavy, or may couple to brane fields only via 5D gravity. Additionally, the brane degrees of freedom may have a non-vanishing component of their wavefunction in the bulk, which puts them in direct contact with bulk degrees of freedom.
This bulk component is strictly nonzero for gauge fields. Effects in the gauge sector are quite model-independent as a result of 5D gauge symmetry.
We focus on the gauge-gravity sector of the quasilocalized warped braneworld. We point out that SM gauge fields have a tree-level anomalous running as a result of the gauge KK modes. The only way to reduce this effect is to increase the strength of bulk gauge interactions, thereby decreasing the cutoff of the theory.
We also evaluate the anomalous four-gauge boson scattering induced by 5D gravity. In the pure AdS regime we find that this effect is scale invariant.
It can thus be probed democratically by experiments at vastly different orders of magnitude in energy, which should imply an interesting experimental interplay.
These results from the gauge sector explicitly show that new, somewhat exotic signatures arise from the quasilocalized warped braneworld. Because of AdS/CFT, these effects are reminiscent of those from a conformal hidden sector (see \cite{Brax:2019koq,Costantino:2019ixl} for related dark sector model-building).
These effects provide new ways to experimentally test and constrain the hypothesis of the SM being (quasi)localized on a 3-brane.
It would certainly be interesting to study the other sectors of the quasilocalized warped braneworld.
\section*{Acknowledgments}
I thank P. Saraswat, P. Brax, S. Melville, M. Quiros, G. von Gersdorff and T. Gherghetta for useful discussions. %
The author is supported by the S\~ao Paulo Research Foundation (FAPESP) under grants \#2011/11973, \#2014/21477-2 and \#2018/11721-4, and funded in part by the Gordon and Betty Moore Foundation through a Fundamental Physics Innovation Visitor Award (Grant GBMF6210).
\label{sec-intro}
The static electromagnetic properties of deuterium provide interesting
information on the dynamics at work within the nucleus. The fact that
deuterium's charge is one teaches us little other than the validity of
charge conservation in the nuclear system, but that its magnetic
moment $\mu_d \neq \mu_n + \mu_p$ and that it has a non-zero
quadrupole moment are facts which played an important role in
establishing that non-central components of the $NN$
potential are at work within the deuterium nucleus.
The most accurate value for the deuteron quadrupole moment comes from
a molecular physics experiment~\cite{CR71,BC79}. It is:
\begin{equation}
Q_d=0.2859(3)~{\rm fm}^2.
\label{eq:Qdexpt}
\end{equation}
Meanwhile the best determination of $\mu_d$ comes from the
spectroscopy of the deuterium atom. It is~\cite{CODATA}:
\begin{equation}
\mu_d=0.8574382284(94)~\mu_N.
\end{equation}
But elastic electron scattering from deuterium---provided the
one-photon-exchange approximation is valid---probes
M1 and E2 responses for
(virtual) photons that have a finite three-momentum ${\bf q}$. The
relevant form factors are related to Breit-frame matrix elements of
the two-nucleon four-current $J_\mu$ via
\begin{eqnarray}
G_M&=&-\frac{1}{\sqrt{2 \eta} |e|}
\left \langle1\left|J^+\right|0\right \rangle,
\label{eq:GM}\\
G_Q&=&\frac{1}{2 |e|\eta M_d^2}
\left(\left \langle 0\left|J^0\right|0 \right \rangle
- \left \langle 1\left|J^0\right|1 \right \rangle\right).
\label{eq:GQ}
\end{eqnarray}
These, together with the charge form factor, $G_C$:
\begin{equation}
G_C=\frac{1}{3 |e|} \left(\left \langle 1\left|J^0\right|1 \right \rangle +
\left \langle 0\left|J^0\right|0 \right \rangle + \left \langle -1\left|J^0\right|-1 \right \rangle
\right),\label{eq:GC}
\end{equation}
provide a complete set of invariant functions for the description of
the deuterium four-current that interacts with the electron's current
in this approximation. In Eqs.~(\ref{eq:GM})--(\ref{eq:GQ})
we have labeled the deuteron states by the projection of the
deuteron spin along the direction of the three-vector ${\bf p}_e'
-{\bf p}_e \equiv {\bf q}$, and $\eta \equiv Q^2/(4 M_d^2)$, with
$Q^2=|{\bf q}|^2$ since we are in the Breit frame. We can then
calculate the deuteron structure functions:
\begin{eqnarray}
A&=&G_C^2 + \frac{2}{3} \eta G_M^2 + \frac{8}{9} \eta^2 M_d^4
G_Q^2,
\label{eq:A}\\
B&=&\frac{4}{3} \eta (1 + \eta) G_M^2.
\label{eq:B}
\end{eqnarray}
In terms of $A$ and $B$ the one-photon-exchange interaction yields a
lab. frame differential cross section for electron-deuteron scattering
\begin{equation}
\frac{d \sigma}{d \Omega}=\frac{d \sigma}{d \Omega}_{\rm NS}
\left[A(Q^2) + B(Q^2) \tan^2\left(\frac{\theta_e}{2}\right)\right].
\label{eq:dcs}
\end{equation}
Here $\theta_e$ is the electron scattering angle, and $\frac{d
\sigma}{d \Omega}_{\rm NS}$ is the (one-photon-exchange) cross
section for electron
scattering from a point particle of charge $|e|$ and mass $M_d$. The
form factors defined in Eqs.~(\ref{eq:GM})--(\ref{eq:GC}) are related
to the static moments of the nucleus by:
\begin{eqnarray}
G_C(0)&=&1,\\
G_Q(0)&=&Q_d,\\
G_M(0)&=&\mu_d \frac{M_d}{M},
\end{eqnarray}
with $M$ the nucleon mass.
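For orientation, Eqs.~(\ref{eq:A})--(\ref{eq:dcs}) can be collected into a short numerical routine. This is a sketch with our own conventions: $Q^2$ and $M_d$ in GeV units, $G_Q$ in GeV$^{-2}$, and the function names are ours:

```python
import math

def structure_functions(GC, GQ, GM, Q2, Md=1.875613):
    """A(Q^2) and B(Q^2); Q2 and Md in GeV units, GQ in GeV^-2
    so that eta^2 Md^4 GQ^2 is dimensionless."""
    eta = Q2 / (4.0 * Md**2)
    A = GC**2 + (2.0/3.0) * eta * GM**2 + (8.0/9.0) * eta**2 * Md**4 * GQ**2
    B = (4.0/3.0) * eta * (1.0 + eta) * GM**2
    return A, B

def cross_section_bracket(GC, GQ, GM, Q2, theta_e, Md=1.875613):
    """The bracket A + B tan^2(theta_e/2) multiplying the point-particle
    cross section; theta_e in radians."""
    A, B = structure_functions(GC, GQ, GM, Q2, Md)
    return A + B * math.tan(theta_e / 2.0)**2
```

At $Q^2=0$ the bracket reduces to $G_C^2(0)=1$, as it must.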
For recent reviews of experimental and theoretical work on
elastic electron-deuteron scattering see Refs.~\cite{GG01,vOG01,Si01}.
From Eq.~(\ref{eq:dcs}) it is already clear that measurements of the
differential cross section alone cannot yield uncorrelated information
on all three form factors. To measure $G_Q$, $G_M$, and $G_C$ in a
model-independent way one must obtain data with polarized deuterium
targets or polarized electron beams. A new set of measurements of
polarization observables in electron-deuteron scattering will soon be
available from the data set obtained at the Bates Large-Acceptance
Spectrometer Toroid (BLAST). There polarized electrons of energy 850
MeV circulated in the Bates ring and were scattered from an internal
target containing polarized deuterium. The significant amount of beam
on target (3 million Coulombs since late 2003), and high degree of
beam and target
polarization achieved at BLAST, mean that we anticipate data on
electron-deuteron polarization observables that is more precise than
that derived from any previous measurement.
The electron beam circulating in the Bates ring was roughly 70\%
polarized, and the deuterium target employed could operate in both a
vector-polarized and tensor-polarized mode. This gives access to all
of the elastic electron scattering deuterium structure
functions. Prominent among these are $t_{11}$, the vector analyzing
power, and $t_{20}$, the tensor analyzing power. They are related to
the form factors defined above by~\cite{vOG01}
\begin{eqnarray}
t_{20}=-\sqrt{2} \frac{x(x + 2) + \frac{1}{2} y^2 (1 + 2 \varepsilon)}{1 +
2 (x^2 + y^2(1 + 2 \varepsilon))}; \quad
t_{11}=2 \sqrt{\varepsilon} y \frac{1 + \frac{x}{2}}{1 + 2(x^2 + y^2 (1 + 2 \varepsilon))},
\end{eqnarray}
where the ratios $x$ and $y$ are:
\begin{eqnarray}
x &=& \frac{2}{3} \, \eta \, \frac{G_Q M_d^2}{G_C},\\
y&=& \sqrt{\frac{\eta}{3}} \frac{G_M}{G_C},
\end{eqnarray}
and $\varepsilon=(1 + \eta) \tan^2\left(\frac{\theta_e}{2}\right)$.
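These relations translate directly into code. The following sketch (our own function, with $M_d$ in GeV and $G_Q$ in GeV$^{-2}$) makes the quoted sensitivities easy to explore numerically:

```python
import math

def analyzing_powers(GC, GQ, GM, Q2, theta_e, Md=1.875613):
    """t20 and t11 from G_C, G_Q, G_M via the x, y, epsilon ratios above.
    Q2 in GeV^2, GQ in GeV^-2, theta_e in radians."""
    eta = Q2 / (4.0 * Md**2)
    eps = (1.0 + eta) * math.tan(theta_e / 2.0)**2
    x = (2.0/3.0) * eta * GQ * Md**2 / GC
    y = math.sqrt(eta / 3.0) * GM / GC
    denom = 1.0 + 2.0 * (x**2 + y**2 * (1.0 + 2.0 * eps))
    t20 = -math.sqrt(2.0) * (x * (x + 2.0) + 0.5 * y**2 * (1.0 + 2.0 * eps)) / denom
    t11 = 2.0 * math.sqrt(eps) * y * (1.0 + 0.5 * x) / denom
    return t20, t11
```

At $Q^2=0$ both analyzing powers vanish; at moderate $Q^2$, $t_{20}$ is dominated by the $x$ (quadrupole) term while $t_{11}$ is driven by $y$.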
Therefore measurements of $t_{11}$ and $t_{20}$ should facilitate
the extraction of the ratios $G_M/G_C$ (which at the
$Q^2$'s we will consider here mainly affects $t_{11}$) and $G_Q/G_C$
(which mainly affects $t_{20}$). In this paper we provide
predictions for these ratios which are based on chiral effective
theory ($\chi$ET).
This approach (for reviews see Refs.~\cite{Be00,BvK02,Ph05,Ep06}) is
based on the use of a chiral expansion for the physics of the
two-nucleon system. In the formulation suggested by
Weinberg~\cite{We90,We91,We92}, the $\chi$ET treatment of the $NN$
system is based on a systematic chiral and momentum expansion for the
two-nucleon-irreducible kernels of the processes of interest. In
particular, wave
functions are computed using an $NN$ potential expanded up to a given
order in the small parameter:
\begin{equation}
P \equiv \frac{p,m_\pi}{\Lambda}
\label{eq:P}
\end{equation}
where $p$ is the momentum of the nucleons and $\Lambda$ is the
breakdown scale of the theory. For electron-deuteron scattering the
other two-nucleon-irreducible kernel that must be calculated is the
deuteron current operator $J_\mu$. We also expand this object as:
\begin{equation}
J_\mu=e \sum_{i=1}^\infty \xi_i \frac{1}{\Lambda^{i-1}}{\cal O}_\mu^{(i)},
\label{eq:sum}
\end{equation}
where the operator ${\cal O}_\mu^{(i)}$ contains $i-1$ powers of the
small parameter $P$, which now includes the momentum transfer to the
nucleus, $q$, as one of the small scales in the
numerator. For chiral effective theories without
an explicit Delta degree of freedom $\Lambda$ will in general be
$M_\Delta - M$, but in electron-deuteron elastic scattering the
$\Delta N$ intermediate state is not allowed and so $\Lambda$ will be
larger, $\Lambda \, \raisebox{-0.8ex}{$\stackrel{\textstyle >}{\sim}$ } 2(M_\Delta - M)$.
The $NN$ potential has now been computed up to
$O(P^2)$~\cite{Or96,Ep99}, $O(P^3)$~\cite{Or96,Ep99,EM01} and
$O(P^4)$~\cite{EM03,Ep05}. In this paper we will employ wave functions
computed using the next-to-leading order [NLO=$O(P^2)$],
next-to-next-to-leading order [NNLO=$O(P^3)$] and N$^3$LO
potentials [$O(P^4)$] developed in Ref.~\cite{Ep05}. These potentials
are regularized in two different ways: first, spectral-function
regularization (SFR) at a scale $\bar{\Lambda}$~\cite{Ep04} is
applied to the two-pion contributions. Then, after the SFR potential
$V_{ll'}^{sj}(p,p')$ is obtained in a particular $NN$ partial wave, it
is multiplied by a regulator function $f$, so that the
Lippmann-Schwinger equation can be straightforwardly solved:
\begin{equation}
V_{ll'}^{sj}(p,p') \rightarrow
f(p) V_{ll'}^{sj}(p,p') f(p') \quad \mbox{with} \quad
f(p)=\exp\left(-\frac{p^6}{\Lambda^6}\right).
\label{eq:NNregulator}
\end{equation}
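Schematically, the regularization step of Eq.~(\ref{eq:NNregulator}) amounts to the following (a sketch; the cutoff value $\Lambda=0.5$~GeV is illustrative):

```python
import numpy as np

def regulate(V, p, pp, Lam=0.5):
    """Multiply a partial-wave potential matrix element V(p, p') by the
    regulator f(p) f(p') with f(p) = exp(-p^6 / Lam^6); momenta in GeV."""
    f = lambda q: np.exp(-(q / Lam)**6)
    return f(p) * V * f(pp)
```

The regulator leaves low momenta essentially untouched and switches the potential off sharply above $\Lambda$, so the Lippmann-Schwinger equation can be solved on a finite momentum grid.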
There have been questions raised as to the consistency of the wave
functions computed in this way~\cite{Ka96,Be01,ES02,No05,PVRA05,Bi05}. Partly
because of these questions we will, for comparison, also present
results for electron-deuteron matrix elements using the form
(\ref{eq:sum}) for the current operator, and wave functions derived
from the Nijm93~\cite{St94} or CD-Bonn~\cite{Ma01} potentials, as well
as potentials with one-pion exchange at long range and a square well
and surface delta function of radius $R$~\cite{PC99}. We stress that
such calculations are not chirally consistent. However, common
features of deuteron observables that can be identified within
calculations that use these different types of wave functions---chiral
effective theory, potential models, and one-pion-exchange
tails---should be independent of the details of physics at ranges $\ll
1/m_\pi$ in deuterium, and so should not be sensitive to any
subtleties pertaining to the renormalization of the $\chi$ET.
The operators ${\cal O}_\mu^{(i)}$ and the coefficients $\xi_i$ in
Eq.~(\ref{eq:sum}) are constructed according to the counting rules and
Lagrangian of heavy-baryon chiral perturbation theory (HB$\chi$PT),
which is reviewed in Ref.~\cite{Br95}. Here the results we will
present for $G_C$ and $G_Q$ include all contributions to $J_\mu$ up to
chiral order $eP^3$. This is the next-to-next-to-leading order (NNLO)
for these quantities. Calculations of electron-deuteron scattering
with the NNLO $\chi$ET operator were already considered in
Ref.~\cite{Ph03}, which improved upon results with the $O(eP^2)$
operator in Ref.~\cite{WM01} and the $O(e)$ results of
Ref.~\cite{PC99}. However, as was already observed in
Ref.~\cite{Ph03} and is reiterated below, calculation of the
quadrupole combination of matrix elements at NNLO does not reproduce
the experimental value of $Q_d$ to the accuracy one would expect at
that order. We identify the cause of this as short-distance two-body
contributions to $J_0$ of natural size (i.e. with $\xi_i \sim 1$)
through which quadrupole photons induce an $L=0 \rightarrow L=0$
transition in the $S=1$ deuteron state~\cite{PC99,Ch99}. We use the
operator induced by these short-distance contributions to renormalize
the deuteron quadrupole moment, and hence the form factor $G_Q$. We
also provide results for $G_M$ up to NLO. Not surprisingly, $G_M$ at
NLO proves more sensitive to short-distance physics than does the
renormalized $G_Q$.
Throughout this work we will use the factorization of nucleon
structure employed in Ref.~\cite{Ph03} in order to include the effects
of finite nucleon size in the calculation. There it was shown that the
chiral expansion for the ratios:
\begin{equation}
\frac{G_C}{G_E^{(s)}} \quad \mbox{and} \quad \frac{G_Q}{G_E^{(s)}} \quad
\mbox{and} \quad \frac{G_M}{G_M^{(s)}},
\label{eq:ratios}
\end{equation}
with $G_E^{(s)}$ and $G_M^{(s)}$ the isoscalar single-nucleon electric
and magnetic form factors, is better behaved than the chiral expansion
for $G_C$, $G_Q$, and $G_M$ themselves. The ratios (\ref{eq:ratios})
allow us to focus on the ability of the chiral expansion to describe
deuteron structure, and we will employ the $\chi$ET results for the
ratios in our efforts to predict BLAST's results for polarized
electron scattering from a deuterium target. We note that, up to the
order we work to here, our predictions for $G_C/G_Q$ are independent
of the manner in which we include nucleon structure in the
calculation. Our invoking the factorization of nucleon structure in
the electron-deuteron matrix elements plays no role in our predictions
for $G_C/G_Q$.
The chiral perturbation theory for this calculation was laid out in
Refs.~\cite{Ph03,WM01}, and so here we merely summarize the pertinent
features of the chiral expansion for the deuteron currents in
Section~\ref{sec-kernel}. However, in doing so we find that we must
address the issue of how to calculate the corrections to this ratio
that have coefficients which are fixed by low-energy Lorentz
invariance. We deal with this problem in Section~\ref{sec-oneoverM},
by recalling results of Friar, Adam {\it et al.}, and Arenh\"ovel {\it
et al.}, which show that such corrections can be calculated
unambiguously, as long as they are included consistently in both the
$NN$ potential and the current operator. Then, in
Section~\ref{sec-other} we discuss effects in the $J_0$ operator
beyond $O(e P^3)$, and explain how we will estimate their impact on
$G_C$ and $G_Q$. In particular, we write down an operator that
represents the effects of physics at mass scales above 1 GeV on $G_Q$,
and can repair the discrepancy between the experimental value of $Q_d$
and our predictions for $G_Q(0)$. We also discuss how to estimate the
remaining uncertainty in our results. Then, in
Section~\ref{sec-J0results} we present results for $G_C$, $G_Q$, and
the ratio $G_C/G_Q$. We show that the shape of $G_Q$ can be predicted
in a model-independent way for $Q^2 < 0.3$ GeV$^2$, but the
uncertainty in the ratio $G_C/G_Q$ is sizeable at $Q^2 \approx 0.4$
GeV$^2$. Finally, in Section~\ref{sec-Jplusresults} we present
results for $G_C/G_M$, and in Section~\ref{sec-conclusion} we
summarize and provide an outlook.
\section{The deuteron current}
\label{sec-kernel}
We now discuss the charge and current operators in turn. Such a
decomposition is, of course, not Lorentz invariant, so here we make
this specification in the Breit frame.
\subsection{Deuteron charge}
The vertex from ${\cal L}_{\pi N}^{(1)}$ which represents an $A_0$ photon
coupling to a point nucleon gives the leading-order (LO) contribution to $J_0$
as depicted in Fig.~\ref{fig-twobodycharge}(a). At $O(e P^2)$ this is
corrected by insertions in
${\cal L}_{\pi N}^{(3)}$ that generate the nucleon's isoscalar charge
radius. This gives a result for $J_0$ through $O(eP^2)$:
\begin{equation}
J_0^{(2)}=|e| \left(1 - \frac{1}{6} \langle r_{Es}^2 \rangle
Q^2\right) + j_0^{(1/M^2)}({\bf q})
\label{eq:structure}
\end{equation}
with $j_0^{(1/M^2)}({\bf q})$ the
``relativistic'' corrections to the single-nucleon charge
operator. These contributions have fixed coefficients that are
determined by the requirements of Poincar\'e invariance. Since these
coefficients scale as $1/M^2$, this particular set of $O(e P^2)$
contributions is generally smaller than one would estimate given the
formula (\ref{eq:P}) for the parameter $P$. These ``relativistic''
corrections can be calculated by writing down a $J_0$ operator that,
when inserted between deuteron wave functions calculated in the
two-nucleon center-of-mass frame, yields results for the matrix
elements that are Lorentz covariant up to the order to which we
work. To do this we employ the formalism of Adam and Arenh\"ovel, as
described in Ref.~\cite{AA96}.
\begin{figure}[htbp]
\vspace{0.2cm}
\hspace{-0.25in}
\centerline{\epsfig{figure=fig_ed.eps,width=11.0cm}}
\vspace{-0.1cm}
\caption{Diagrams representing the leading contribution
to the deuteron charge operator [(a)], the leading two-body contribution
to $J_0$ [(b)], and the dominant short-distance piece [(c)].
Solid circles are vertices from ${\cal L}_{\pi N}^{(1)}$, and the shaded
circle is the vertex from ${\cal L}_{\gamma \pi N}^{(2)}$.}
\label{fig-twobodycharge}
\end{figure}
Meanwhile the only contribution at $O(eP^3)$, or NNLO, comes from the
tree-level two-body graph shown in Fig.~\ref{fig-twobodycharge}(b).
In HB$\chi$PT the relevant single-nucleon photo-pion vertex arises as
a consequence of the Foldy-Wouthuysen transformation which generates a
term in ${\cal L}^{(2)}$. Straightforward application of the Feynman
rules for the relevant pieces of the HB$\chi$PT Lagrangian gives the
result for this piece of the deuteron current~\cite{Ph03}:
\begin{equation}
\langle {\bf p}'|J_0^{(3)}({\bf q})|{\bf p} \rangle=
\tau_1^a \tau_2^a \, \, \frac{|e| g_A^2}{8 f_\pi^2 M} \, \,
\left[\frac{\sigma_1 \cdot {\bf q} \, \sigma_{2} \cdot
({\bf p} - {\bf p}' + {\bf q}/2)}
{m_\pi^2 + ({\bf p} - {\bf p}' + {\bf q}/2)^2} + (1 \leftrightarrow 2)\right],
\label{eq:J02B}
\end{equation}
where ${\bf p}$ and ${\bf p}'$ are the (Breit-frame) relative momenta
of the two nucleons in the initial and final states,
respectively (see Ref.~\cite{Ri84} for a much earlier derivation).
Thus we now have a result for the deuteron's charge operator which can
be summarized as:
\begin{equation}
\langle {\bf p}'|J_0({\bf q})|{\bf p}\rangle=
\left[|e| \left(1 - \frac{1}{6} \langle r_{Es}^2 \rangle Q^2\right)
+ j_0^{(1/M^2)}({\bf q})\right] \delta^{(3)}(p' - p - q/2)
+ \langle {\bf p}'|J_0^{(3)}({\bf q})|{\bf p} \rangle
+ O(eP^4).
\label{eq:pure}
\end{equation}
However, it was shown in Ref.~\cite{Ph03} that the parameterization
(\ref{eq:structure}) of the nucleon's isoscalar charge distribution
breaks down at $|{\bf q}| \approx 300$ MeV. In order to avoid this difficulty
we observe that the result (\ref{eq:pure}) may be recast, up to the
order to which we work, as:
\begin{equation}
\langle {\bf p}'|J_0({\bf q})|{\bf p}\rangle=
\left[\left(|e| + j_0^{(1/M^2)}({\bf q})\right) \delta^{(3)}(p' - p - q/2)
+ \langle {\bf p}'|J_0^{(3)}({\bf q})|{\bf p} \rangle\right] G_E^{(s)}(Q^2)
+ O(eP^4),
\label{eq:factor}
\end{equation}
with $G_E^{(s)}(Q^2)$ the complete one-loop HB$\chi$PT result for the
nucleon's
isoscalar electric form factor~\cite{He98}. This means that we can
write a result that is independent of $\chi$PT's difficulties in
describing nucleon structure if we focus on the ratio of $\langle {\bf p}'|J_0({\bf
q})|{\bf p}\rangle$ to $G_E^{(s)}(Q^2)$. We will then use experimental
data~\footnote{There is, of course, some circularity here, since
electron-deuteron elastic scattering data has been used to constrain
the neutron electric form factor, see, in particular,
Ref.~\cite{Pl90}.}, in particular the parameterization of
Mergell {\it et al.}~\cite{Me95}, for $G_E^{(s)}(Q^2)$ in all computations we
present below. This use of experimental data for the single-nucleon
matrix element that appears in Eq.~(\ref{eq:factor}) allows us to
focus on how well the $\chi$ET is describing deuteron
structure, since it removes the nucleon-structure issues from the
computation of the deuterium matrix element. Our technique to achieve this
is rigorous, up to the chiral order to which we work here.
The ratio $G_C/G_Q$ can also be computed independent of
nucleon-structure issues, as is made clear by a brief examination of
Eq.~(\ref{eq:factor}), together with the definitions (\ref{eq:GC}) and (\ref{eq:GQ}).
\subsection{Deuteron three-current}
The counting for the isoscalar three-vector current ${\bf J}$ was
already considered in detail by Park and
collaborators~\cite{Pa95}. ${\bf J}$ begins at $O(eP)$,
but at $O(e P^3)$ there are finite-size and relativistic
corrections, which are suppressed by two powers of
$P$. This is the highest order we will
calculate $G_M$ to here, and at this order, using factorization we have:
\begin{equation}
\langle {\bf p}'|{\bf J}^{(3)}|{\bf p} \rangle=[|e| ({\bf p} + {\bf q}/2)/M + i \mu_S {\bf \sigma}
\times {\bf q} + {\bf J}^{(1/M^2)}({\bf q})] G_M^{(s)}(Q^2)
\delta^{(3)}(p' - p - q/2),
\end{equation}
where ${\bf p} - {\bf q}/4$ is the momentum of the struck nucleon, and
$\mu_S$ is the isoscalar magnetic moment of the nucleon, whose value we
take to be $\mu_S= 0.88 |e|/(2M)$.
At $O(e P^4)$ [NNLO] two kinds of magnetic two-body current enter the
calculation. One is short-ranged, and one is of pion
range~\cite{Pa95,Ch99,Sc99,Pa00,WM01}. Each of them has an
undetermined coefficient. In principle those coefficients should be
fit to data (e.g. the deuteron magnetic moment, which is not exactly
reproduced by the current ${\bf J}$ and the wave functions employed
here) and the low-$Q^2$ shape of the form factor.
\section{Unitary equivalence and the consistent treatment of $1/M$
corrections}
\label{sec-oneoverM}
In this section we discuss the constraints imposed by Poincar\'e
invariance---or the low-energy manifestations thereof---on the
Breit-frame isoscalar $NN$ charge operator. Recall that in HB$\chi$PT
$\langle {\bf p}'|J_0^{(3)}({\bf q})|{\bf p} \rangle$ arises from a
piece of ${\cal L}_{\pi N}^{(2)}$ that has a fixed coefficient
obtained via a Foldy-Wouthuysen transformation. Therefore this piece of
the charge operator is a low-energy consequence of Lorentz covariance
of $\langle M'|J_\mu|M \rangle$. As such the contribution
(\ref{eq:J02B}) should be computed in a manner consistent with that
used to derive the $1/M^2$ corrections to the one-pion exchange part
of the $NN$ potential. Those $1/M^2$ corrections can be obtained from
the chiral Lagrangian---specifically from the $1/M$ pieces in ${\cal
L}_{\pi N}^{(2)}$ and the $1/M^2$ pieces in ${\cal L}_{\pi
N}^{(3)}$. But the relevant operators involve the energy of the
individual nucleons, and so it is not immediately obvious how to
convert them to contributions to an energy-independent
quantum-mechanical potential. In fact, in the 1970s and 1980s many
techniques were developed by which quantum-mechanical operators could
be obtained from a relativistic quantum field
theory~\cite{AA96,Ad93,Fr80}. In all of these techniques there was
freedom in choosing whether (and if so, which) nucleon lines to put on
shell, as well as freedom in how to include meson retardation. As we
shall see, the choices made with respect to these two issues have an
impact on the form of the operators (both $V$ and $J_0$) that
are obtained. Ultimately though, as long as operators and potentials
are derived in a consistent way, the different choices are related by
unitary transformations that leave matrix elements
unaffected~\cite{Ad93,Fr80,Fo99}.
That unitary transformation is
labeled by two parameters: $\nu$, which parameterizes the energy,
$k_0$, of the exchanged pion via
\begin{equation}
k_0^2=\left(1 - 2 \nu\right) \frac{(p'^2 - p^2)^2}{4 M^2},
\end{equation}
where $p'$ ($p$) is the length of the relative three-momentum vector
of the $NN$ system after (before) the meson exchange; and
$\beta$, which denotes a choice for the change in the nucleons'
energy after absorption of the pion~\cite{Ad93}:
\begin{equation}
(p_0' - p_0)_{1,2}=(1 - 2 \beta) \frac{p'^2 - p^2}{2 M}.
\end{equation}
Note that in quantum mechanics energy is not conserved at each vertex,
and so $(p_0' - p_0)_1=k_0=-(p_0'-p_0)_2$ need not hold.
Indeed, it turns out that since we are in the $NN$ c.m. frame the
difference $p_0' - p_0$ is the same for both nucleons once an energy shell is
chosen. The full expression for the $1/M^2$ corrections to the
one-pion-exchange potential in the case of arbitrary $\beta$ and $\nu$
can be found in Ref.~\cite{Ad93}. The main result for our purposes
here is that if $\beta=\frac{1}{4}$ then the potential takes the form:
\begin{equation}
\langle {\bf p}'|V^{(1 \pi)}|{\bf p} \rangle=-\tau_1^a \tau_2^a \frac{g_A^2}{4 f_\pi^2}
\frac{\sigma_1 \cdot ({\bf p}' - {\bf p}) \sigma_2 \cdot ({\bf p}' -
{\bf p})}{({\bf p}' - {\bf p})^2 + m_\pi^2} \left(1 - \frac{p'^2 +
p^2}{2 M^2} + O\left(\frac{1}{M^4}\right)\right).
\label{eq:relOPE}
\end{equation}
This is the one-pion-exchange potential used in the N$^3$LO
computation of Ref.~\cite{Ep05}. (Corrections to ``leading'' two-pion
exchange diagrams that are suppressed by $1/M$ are also included
there, but these are associated with pieces of $J_0$ of higher
order than we work to here.) The computation of
Ref.~\cite{Ma01} employed the form for $\langle {\bf p}'|V^{(1
\pi)}|{\bf p} \rangle$ that corresponds to $\beta=0$. All other
potentials we discuss here (including the NNLO and NLO ones used in
Ref.~\cite{Ep05}) employed the non-relativistic form of OPE, i.e. the
result (\ref{eq:relOPE}), but without the additional factor in the
round brackets. Meanwhile, all of the potentials we have used neglect
retardation, which means they have set $\nu=\frac{1}{2}
\Leftrightarrow k_0=0$.
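To get a sense of the size of the $1/M^2$ factor in Eq.~(\ref{eq:relOPE}), one can evaluate the round bracket at momenta typical of the deuteron. The following is a rough numerical illustration of our own, not a calculation from the text:

```python
# Size of the 1/M^2 bracket in the beta = 1/4 one-pion-exchange
# potential, Eq. (relOPE): f = 1 - (p'^2 + p^2)/(2 M^2).
M = 938.9  # nucleon mass in MeV

def rel_factor(p_out, p_in, mass=M):
    """Bracketed suppression factor of Eq. (relOPE); momenta in MeV."""
    return 1.0 - (p_out ** 2 + p_in ** 2) / (2.0 * mass ** 2)

# Near the deuteron binding momentum the correction is negligible,
print(rel_factor(45.0, 45.0))
# but at p ~ 300 MeV, where the short-distance part of the wave
# function is probed, one-pion exchange is suppressed by roughly 10%.
print(rel_factor(300.0, 300.0))
```

This is why the choice of $\beta$ only becomes numerically relevant once momenta of several hundred MeV enter the matrix elements.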
Consistent reduction of the contributions to the deuteron charge
operator then leads to a more general result for diagram
Fig.~\ref{fig-twobodycharge}(b) than that given in
Eq.~(\ref{eq:J02B})~\cite{Ad93}:
\begin{eqnarray}
&& \langle {\bf p}'|J_0^{(3)}({\bf q})|{\bf p} \rangle=
\tau_1^a \tau_2^a \, \, \frac{|e| g_A^2}{8 f_\pi^2 M} \, \,
\left[(1-\beta) \frac{\sigma_1 \cdot {\bf q} \, \sigma_{2} \cdot
({\bf p} - {\bf p}' + {\bf q}/2)}
{m_\pi^2 + ({\bf p} - {\bf p}' + {\bf q}/2)^2}
\right.\nonumber\\
&& \qquad -
\frac{1 - \nu}{2}
\left.\frac{\sigma_1 \cdot ({\bf p} - {\bf p}' + {\bf q}/2) \, \sigma_{2} \cdot
({\bf p} - {\bf p}' + {\bf q}/2) \, \, \, {\bf q} \cdot ({\bf p} - {\bf p}' + {\bf q}/2)}
{[m_\pi^2 + ({\bf p} - {\bf p}' + {\bf q}/2)^2]^2} + (1 \leftrightarrow
2)\right].\nonumber\\
\label{eq:J02Bmutilde}
\end{eqnarray}
In Eq.~(\ref{eq:J02B}) we obtained the result for the $O(e P^3)$ piece
of the charge operator that corresponds to $\beta=0$ and $\nu=1$,
because the field-theoretic manipulations used to arrive at
Eq.~(\ref{eq:J02B}) assume that the fields represent physical
particles, i.e.\ they are on-shell. The result (\ref{eq:J02Bmutilde})
may be obtained from Eq.~(\ref{eq:J02B}) by applying a unitary
transformation~\cite{Ad93,Fr80}:
\begin{equation}
J_0^{(3)}(\beta,\nu)=U^\dagger(\beta,\nu) J_0^{(3)}(0,1) U(\beta,\nu),
\label{eq:J0unitary}
\end{equation}
where the form of $U$ can be found in the original papers.
The same unitary transformation generates consistent
$1/M^2$ corrections to the one-pion-exchange part of the $NN$
potential:
\begin{equation}
V_{\rm OPE}(\beta,\nu)=U^\dagger(\beta,\nu) V_{\rm OPE}(0,1) U(\beta,\nu),
\label{eq:VOPEunitary}
\end{equation}
including the form (\ref{eq:relOPE}) if the choice
$\beta=\frac{1}{4}$, $\nu=\frac{1}{2}$ is adopted.
This is not consistent with the choice made in obtaining
Eq.~(\ref{eq:J02B}) in Ref.~\cite{Ph03} because
the $NN$ potential of Ref.~\cite{Ep05} was
computed using the Okubo formalism developed in Ref.~\cite{Ep99A}.
In Ref.~\cite{Ep05} this issue of the choice made for $\beta$ and
$\nu$ does not arise until the N$^3$LO potential is derived, because
in that paper, and in the earlier Refs.~\cite{Ep99A,Ep99}, Epelbaum
{\it et al.} chose to count $M \sim \Lambda^2$. In doing this they
were adhering to Weinberg's original argument as to why it is the
two-nucleon-irreducible kernel---and not the $NN$ amplitude
itself---which admits a chiral expansion. $NN$ intermediate states
introduce factors of $M$ in the amplitude for loop graphs, and if $M
\sim \Lambda^2$ then the $n$th iterate of the one-pion-exchange
potential is the same order as one-pion exchange
itself~\cite{We90,We91}. However, in Ref.~\cite{Be01} the need to
iterate the one-pion-exchange potential to all orders was established
without any reference to counting $M \sim \Lambda^2$, being justified
instead by the singular, and attractive, nature of the $NN$ force (see
also Ref.~\cite{Bi05}). Therefore, while it is true that corrections
to the $NN$ potential which are suppressed by powers of $1/M$ are often
smaller than, e.g., those arising from excitation of the $\Delta(1232)$,
in discussing electron-deuteron scattering we will consider a regime
in which $q/M$ can be sizeable. We thus follow the original
HB$\chi$PT counting and take $M \sim \Lambda$. As we shall see, this
counting is supported by the fact that the contribution
(\ref{eq:J02B}) plays a significant, but not excessive, role in the
deuteron charge and quadrupole form factors.
If we count $M \sim \Lambda$ the dilemma presented by the
inconsistency between $V$ and $J_0$ arises already at $O(e P^3)$. A way out
of this dilemma is provided by Eqs.~(\ref{eq:J0unitary}) and
(\ref{eq:VOPEunitary}). They guarantee that we will obtain the same
result (up to $O(p^4/M^4)$ corrections) for deuteron matrix elements
$\langle M'|J_0|M \rangle$, regardless of what choices for $\beta$ and
$\nu$ we make when constructing the
operators $V$ and $J_0$ from the chiral effective field theory,
provided that we consistently include the $O(p^2/M^2)$
pieces of the potential $V$ and the $O(eP^3)$ pieces of the operator
$J_0$. Therefore in order to be consistent with the calculation of
the $1/M^2$ corrections to $V_{\rm OPE}$ in Ref.~\cite{Ep05} we must
adopt the choice $\beta=\frac{1}{4}$ in the formula
(\ref{eq:J02Bmutilde}) for $J_0^{(3)}$. If we do this, and also make
sure to calculate one-pion exchange according to the formula
(\ref{eq:relOPE}), then our results for matrix elements of the
deuteron charge operator will incorporate the low-energy consequences
of Lorentz invariance, up to corrections of $O(p^4/M^4)$ (higher order
than we consider here). Note that the CD-Bonn potential is a different
case, since the use of a pseudoscalar $\pi NN$ coupling means that
there we have $\beta=0$. Therefore in that case, and only in that
case, we have used the expression (\ref{eq:J02B}) for the first part
of $J_0^{(3)}$, with no modification by the factor of $\frac{3}{4}$
that must be present if $V^{(1 \pi)}$ is constructed with $\beta=\frac{1}{4}$.
This still leaves us with the issue of how the $p^2/M^2$ corrections
in the one-pion exchange potential (\ref{eq:relOPE}) and the $p^2/M^2$
corrections to the nucleon kinetic energy operator are to be accounted
for in the calculations using the NNLO and NLO wave functions of
Ref.~\cite{Ep05} (or included in calculations with the Nijm93 wave
function of Ref.~\cite{St94} or the wave functions of
Ref.~\cite{PC99}). The original calculations of these wave functions
did not include such effects, but since we count $M \sim \Lambda$
here, we need to include them in order to have a consistent
calculation of $G_C$ and $G_Q$ to $O(e P^3)$.
Starting from the Kamada-Gl\"ockle transformation~\cite{KG98}, we show
in Appendix~\ref{ap-p2overM2corrns} that the major part of these effects
can be included by making changes to the short-distance pieces of the
$NN$ potential, and using a slightly modified wave function $\psi$ in
the computation of $G_C$, $G_Q$, and $G_M$. That wave function is
related to the original non-relativistic wave function
$\tilde{\psi}$ obtained in Ref.~\cite{Ep05} by~\cite{KG98}:
\begin{equation}
\psi(p)=\sqrt{\frac{M}{\sqrt{M^2 + p^2}}} \left(\frac{2M}{M + \sqrt{M^2
+ p^2}}\right)^{1/4}
\tilde{\psi}\left(\sqrt{2M \sqrt{M^2 + p^2} - 2M^2}\right).
\label{eq:wfreln2}
\end{equation}
The solution of the non-relativistic Schr\"odinger equation for the
wave function $\tilde{\psi}$, followed by the use of the formula
(\ref{eq:wfreln2}) to obtain the solution of the relativistic
Schr\"odinger equation, is the method by which we incorporate $1/M^2$
effects for the NLO and NNLO wave functions of Ref.~\cite{Ep05}, the
wave function of Ref.~\cite{St94}, and the wave functions of
Ref.~\cite{PC99}.
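The content of Eq.~(\ref{eq:wfreln2}) is a momentum map together with a multiplicative prefactor. A minimal sketch of our own (function names are ours, chosen for illustration):

```python
import math

M = 938.9  # nucleon mass in MeV

def kg_momentum(p, mass=M):
    """Argument of the non-relativistic wave function in Eq. (wfreln2):
    ptilde = sqrt(2 M sqrt(M^2 + p^2) - 2 M^2)."""
    return math.sqrt(2.0 * mass * math.sqrt(mass ** 2 + p ** 2) - 2.0 * mass ** 2)

def kg_prefactor(p, mass=M):
    """Multiplicative factor relating psi(p) to psitilde(ptilde)."""
    e = math.sqrt(mass ** 2 + p ** 2)
    return math.sqrt(mass / e) * (2.0 * mass / (mass + e)) ** 0.25

# The map is close to the identity at deuteron-scale momenta, and
# ptilde < p always, so the transformation is a mild distortion
# that grows with momentum.
for p in (50.0, 300.0, 700.0):
    print(p, kg_momentum(p), kg_prefactor(p))
```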
The effects of using $\psi$, rather
than the wave function, $\tilde{\psi}$, to calculate electron-deuteron
observables increase with photon momentum transfer $|{\bf q}|$, but are small
over the entire range for which the $\chi$ET predictions can be
trusted. At $|{\bf q}|=700$ MeV for the Nijm93 wave function they change $G_C$
by 6.3\%, $G_Q$ by 1.2\%, and $G_M$ by 2.0\%. (The effect on $G_C$ is
proportionately larger because 700 MeV is quite close to the
form factor minimum.)
\section{Short-distance $NN$ charge operators and $Q_d$}
\label{sec-other}
So far we have obtained the deuteron two-body charge operator up to
$O(eP^3)$, or next-to-next-to-leading order. This is the order up to
which the calculation we present here is fully systematic. In this
section we discuss the role of contributions that are nominally higher
order, and identify one particular mechanism that apparently could
generate significant effects at $Q^2=0$. We are particularly
interested in this operator because ``the $Q_d$ problem'' that is
present in all modern potential models (see, e.g., Refs.~\cite{St94,Wi95})
persists in the $\chi$ET. The problem is that all such calculations
under-predict the value (\ref{eq:Qdexpt}) for $Q_d$ by about 2--3\% when
they use a charge operator that includes all effects up to NNLO in the
$\chi$ET. The remaining discrepancy is large compared to the expected
$P^4$ size of higher-order effects. It is also large compared to other
discrepancies between theory and experiment in the
${}^3$S$_1$-${}^3$D$_1$ channel of the $NN$ system.
And the situation is actually worse than this, because at $O(e
P^4)$---one order higher than we are considering here---there are
two-meson-exchange contributions to the deuteron charge
operator. One might hope that these provide the missing strength in
the E2 response of deuterium at $Q^2=0$.
These diagrams are presently being calculated for finite $Q^2$, and
will be
incorporated in a future computation of the charge and quadrupole form
factors~\cite{quadri}. However, it is already known that they do not
give a sizeable contribution to the deuteron quadrupole moment. Park
{\it et al.}~\cite{Pa00} computed their effect on $Q_d$ using the AV18
wave function~\cite{Wi95}, and found:
\begin{equation}
\Delta Q_d^{(4)}=-0.002~{\rm fm}^2.
\end{equation}
Therefore these effects will {\it not} repair the discrepancy between
the calculated $G_Q(0)$ and the experimental $Q_d$.
At the next order, $O(e P^5)$, there are additional two-pion-exchange
contributions to the deuteron charge. However, short-distance two-body
currents that contribute to $\langle r_d^2 \rangle$ and $Q_d$ are also
present, and are depicted in Fig.~\ref{fig-twobodycharge}(c). Even
though it is suppressed by five powers of $P$ relative to the LO
result, the latter operator can have a noticeable impact on the
quadrupole moment of deuterium, since the numerical value of $Q_d$
corresponds to a distance that is small on the typical scale of
deuteron physics $\sim 2$ fm. The operator is~\cite{Ch99}:
\begin{equation}
\langle {\bf p}'|J_0^{(5)}({\bf q})|{\bf p} \rangle=|e|(1 + \tau_1 \cdot
\tau_2) \frac{4 \pi h_4}{M \Lambda_Q^4} \left(\sigma_1 \cdot
{\bf q} \sigma_2 \cdot {\bf q} - \frac{|{\bf q}|^2}{3} \sigma_1 \cdot \sigma_2\right),
\label{eq:E2SD}
\end{equation}
and is designed to be diagonal in two-body spin and isospin and
contribute only for $S=1$ and $T=0$ states. In the case of deuterium
it represents an E2 photon inducing a ${}^3$S$_1 \rightarrow
{}^3$S$_1$ transition. Such a transition is possible because the
photon interacts with the total spin of the two-nucleon system through
the two-body operator (\ref{eq:E2SD}). The two-nucleon operator
(\ref{eq:E2SD}) will be induced when high-momentum modes in the $NN$
system are integrated out to obtain the low-momentum effective
theory. It could also be induced when heavy mesons which can couple to
a quadrupole photon in the requisite way are integrated out of the
$\chi$ET. This heavy-meson origin for the operator leads us to
anticipate that with a scale $\Lambda_Q$ of about 1.2 GeV the coupling
$h_4$ will be of order 1. In particular, if we used resonance
saturation in the $NN$ system~\cite{Ep01} to estimate the size of this
operator the first mesonic current that would contribute to the
operator would be the $\rho a_1 \gamma$ current~\cite{Tr01}.
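The spin structure of the operator (\ref{eq:E2SD}) is a rank-two tensor, so its matrix elements between $S=0$ states vanish and only $S=1$ pairs are affected. A quick check with explicit Pauli matrices (our own illustration, with the overall factors stripped off):

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [sx, sy, sz]
I2 = np.eye(2, dtype=complex)

qhat = np.array([0.0, 0.0, 1.0])  # photon direction; the check is basis-independent

# sigma1.qhat sigma2.qhat - (1/3) sigma1.sigma2 on the two-spin space
s1q = sum(qhat[i] * np.kron(sig[i], I2) for i in range(3))
s2q = sum(qhat[i] * np.kron(I2, sig[i]) for i in range(3))
dot = sum(np.kron(sig[i], sig[i]) for i in range(3))
O = s1q @ s2q - dot / 3.0

up = np.array([1.0, 0.0]); dn = np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2.0)
triplet0 = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2.0)

print(singlet @ O @ singlet)    # zero: no coupling to S=0 pairs
print(triplet0 @ O @ triplet0)  # non-zero for S=1
```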
We now give arguments which demonstrate that physics at roughly this
scale could indeed remedy the discrepancy between the experimental
$Q_d$ and the result found from the mechanisms already discussed. Let
us take the accepted number from ``high-quality'' potential models
$Q_d^{(0)}=0.270$ fm$^2$ (see, e.g.~Ref.~\cite{Wi95}). Calculations
with $\chi$PT wave functions obtain similar, or even slightly smaller,
numbers~\cite{Ph03,WM01,PC99,Ep05}. Then, we adopt $\Delta
Q_d^{(3)}=0.008$ fm$^2$ as an estimate for the NLO and NNLO
corrections (which come mainly from the two-body operator
$J_0^{(3)}$). This leaves a discrepancy between
theory and experiment of 0.008 fm$^2$, or about 3\%. Inserting the
operator (\ref{eq:E2SD}) between deuteron wave functions we obtain its
contribution to $Q_d$ as:
\begin{equation}
\Delta Q_d^{\rm SD}=\frac{32 \pi h_4}{M \Lambda_Q^4} |\psi(0)|^2,
\end{equation}
where $\psi(0)$ is the deuteron wave function at the origin. While
$\psi(0)$ is not an observable, neither is the dimensionless number
$h_4$: it is a wave-function-regularization dependent
coefficient. Since only S-waves contribute at $r=0$, if we assume
$\Lambda_Q=1.2$ GeV, we can constrain the combination of $\psi(0)$ and
$h_4$ to be:
\begin{equation}
h_4 \left[\lim_{r \rightarrow 0} \frac{u(r)}{r}\right]^2 \approx 6.5~{\rm fm}^{-3},
\end{equation}
where $u(r)$ is the ${}^3$S$_1$ radial wave function.
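The number quoted in the constraint above follows by straightforward arithmetic once the S-wave relation $|\psi(0)|^2=[\lim_{r \rightarrow 0} u(r)/r]^2/(4 \pi)$ is used. A check of our own, with $\hbar c=197.327$ MeV fm:

```python
hbarc = 197.327            # MeV fm
M = 938.9 / hbarc          # nucleon mass in fm^-1
LambdaQ = 1200.0 / hbarc   # short-distance scale in fm^-1
dQd = 0.008                # fm^2, the Q_d discrepancy to be absorbed

# Delta Q_d^SD = 32 pi h4 |psi(0)|^2 / (M LambdaQ^4) combined with
# |psi(0)|^2 = [lim u(r)/r]^2 / (4 pi) for an S-wave gives
# h4 [lim u(r)/r]^2 = dQd * M * LambdaQ^4 / 8.
constraint = dQd * M * LambdaQ ** 4 / 8.0
print(constraint)  # fm^-3; reproduces the 6.5 fm^-3 quoted above
```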
Therefore the operator (\ref{eq:E2SD}) can be associated with physics
at scales of about 1 GeV and still remedy the discrepancy between the
theoretical value of $Q_d$ (including the
meson-exchange contribution to the charge (\ref{eq:J02Bmutilde}))
and the experimental value (\ref{eq:Qdexpt}).
This higher-order effect has an importance greater than one would
anticipate in Weinberg's counting (\ref{eq:sum}) not because it is
unnaturally large, but because, from the point of view of that
counting, $Q_d$ is unnaturally small~\cite{PC99}. This is hardly a surprise,
since, at both leading and next-to-leading order, $Q_d$ is generated
by one-body operators that connect the deuteron S-state to the
deuteron D-state. Such effects are suppressed by the
ratio $\eta=A_D/A_S=0.0253(2)$~\cite{St95}. In contrast, the operator
(\ref{eq:E2SD}) is not $\eta$-suppressed and so its contribution to
$Q_d$ is significantly larger than the naive estimate of $P^5 \sim
0.1$\% leads us to anticipate. In this context it is worth noting
that such an estimate for the short-distance effects in $\langle r_d^2
\rangle$ is completely validated by calculation~\cite{Ph03}. The
leading-order piece of $\langle r_d^2 \rangle$ is of the expected size
$\sim 1/m_\pi^2$, and two-body contributions, beginning with
effects from $J_0^{(3)}$ and continuing through the two-pion-exchange
mechanisms of $J_0^{(4)}$ and the C0-photon short-distance operator of
$J_0^{(5)}$, give contributions of the expected size,
approximately $0.3$\%.
While this is reassuring, the (relatively) large impact of $J_0^{(5)}$
on $G_Q(0)$ means that we must ask how well we know this operator. Its
value at $Q^2=0$ can be fixed by the requirement that it repair the
discrepancy between theory and experiment for $Q_d$. But at this stage
we know nothing about its $Q^2$ dependence. For the purposes of this
paper we will assume that this operator arises from heavy-meson
exchange, and so model its $Q^2$ dependence by:
\begin{equation}
\Delta G_Q^{(5)}=\frac{\Delta Q_d^{\rm SD}}{\left(1 + \frac{{\bf
q}^2}{\Lambda^2}\right)^5}.
\label{eq:J05uncert}
\end{equation}
The uncertainty in the operator is now encoded as uncertainty in the
scale $\Lambda$ of its momentum variation. We anticipate $\Lambda \geq
1.2$ GeV, because there are no meson resonances below 1.2 GeV which,
when integrated out of the theory, will yield this operator. The only
danger with this reasoning is that two-pion-range mechanisms that occur
at $O(e P^5)$ may ultimately prove to be responsible for the $Q_d$
discrepancy. This possibility is under
investigation~\cite{quadri}. However, evaluation of relevant processes
in models which calculate, e.g. the role of $\Delta \Delta$ components
in deuterium, suggests that the dominant two-pion-exchange
contributions to $J_0^{(5)}$ are not large enough to remedy the $Q_d$
discrepancy~\cite{Bl89,AD98}.
As for an upper bound on the value of $\Lambda$: the effects of the
operator (\ref{eq:J05uncert}) persist to higher $Q^2$ as the scale of
its momentum variation is increased. At the $Q^2$'s considered here,
the impact of this operator on observables when we choose $\Lambda=2$
GeV is within 30\% of what one would obtain at $\Lambda=\infty$, so we
will vary $\Lambda$ between 1.2 and 2 GeV in order to assess the
theoretical uncertainty of our calculation. We will see below that
even with this range of variation our ignorance as
to the precise value of $\Lambda$ (or the precise function of $Q^2$
that modulates the current $J_0^{(5)}$) is the dominant contribution to our
theoretical uncertainty in the ratio $G_C/G_Q$.
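The sensitivity of Eq.~(\ref{eq:J05uncert}) to the choice of $\Lambda$ can be quantified directly (illustrative numbers of our own):

```python
def dipole5(q, lam):
    """Falloff factor (1 + q^2/Lambda^2)^(-5) of Eq. (J05uncert); MeV inputs."""
    return (1.0 + (q / lam) ** 2) ** (-5)

q = 500.0  # MeV, within the region where the chiET calculation is trusted
for lam in (1200.0, 2000.0):
    print(lam, dipole5(q, lam))
# At Lambda = 2 GeV the factor stays within roughly 30% of its
# Lambda -> infinity value of 1, while at Lambda = 1.2 GeV the
# J0^(5) contribution is cut by more than half.
```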
Our goal in introducing the $Q^2$-dependence (\ref{eq:J05uncert})
into our calculation is to assess the potential impact on
our $\chi$ET calculation from physics that is not explicitly included
in it. The $Q^2$-dependence of $J_0^{(3)}$ will be modified by these
sorts of effects, as well as by higher-order loop contributions that
can be calculated in the $\chi$ET. However, once such higher-order
calculations are carried out the $Q^2$-dependence of $J_0^{(3)}$ can
presumably be constrained by input from electro-production in the
single-nucleon sector. Therefore here we take the operator $J_0^{(3)}$
as given. When we quote ranges for its impact on observables those
ranges arise from the fact that $\langle M'|J_0^{(3)}|M \rangle$ varies
when evaluated with different deuteron wave functions.
\section{Results for $G_C$ and $G_Q$}
\label{sec-J0results}
In this section we present results for the matrix elements of the
deuteron charge operator: $G_C$ and $G_Q$.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Wave function & Order &$\bar{\Lambda}$ (MeV) & $\Lambda$ (MeV)\\
\hline\hline%
001 & NLO & 400 & 500 \\
002 & NLO & 550 & 500 \\
003 & NLO & 550 & 600 \\
004 & NLO & 400 & 700 \\
005 & NLO & 550 & 700 \\
\hline
101 & NNLO & 450 & 500 \\
102 & NNLO & 600 & 500 \\
103 & NNLO & 550 & 600 \\
104 & NNLO & 450 & 700 \\
105 & NNLO & 600 & 700 \\
\hline
221 & N$^3$LO & 450 & 500 \\
222 & N$^3$LO & 600 & 600 \\
223 & N$^3$LO & 550 & 600 \\
224 & N$^3$LO & 450 & 700 \\
225 & N$^3$LO & 600 & 700 \\
\hline\hline
\end{tabular}
\end{center}
\caption{\label{table-evgenywfs} Values of the SFR cutoff
$\bar{\Lambda}$ and the Lippmann-Schwinger equation cutoff $\Lambda$
that are employed in the different wave functions of
Ref.~\cite{Ep05} that are used to compute deuteron form factors in
this work. The wave functions are in groups of five: first those
generated with the NLO $\chi$ET potential, then
NNLO, then N$^3$LO.}
\end{table}
In Figure~\ref{fig-GC} we show the results for $G_C$ when the NNLO
[$O(e P^3)$] operator for $J_0$ is used (with nucleon structure
included via Eq.~(\ref{eq:factor})). The constants employed in
evaluating the operator were $g_A=1.29$, $f_\pi=93.0$ MeV, $m_\pi=139$
MeV, and $M=938.9$ MeV. The dashed, dot-dashed, and solid lines show
the range of predictions generated using NLO, NNLO, and N$^3$LO wave
functions with different regulator masses $\Lambda$ and
$\bar{\Lambda}$. A list of the values of $\bar{\Lambda}$ and $\Lambda$
that are chosen for the wave functions used here is given in
Table~\ref{table-evgenywfs} (which is adapted from
Ref.~\cite{Ep05}). At a given order in the expansion for the chiral
potential these wave functions all have the same long-distance part,
but the different scales at which spectral-function regularization is
applied to obtain the $NN$ potential, and at which the
$\exp(-p^6/\Lambda^6)$ regulator is applied to the potential before
its insertion into the Lippmann-Schwinger equation, mean that they
differ in their short-distance physics. Therefore the amount by which
predictions for $G_C$ vary once the order of the wave-function
calculation is fixed gives us a lower bound for the impact of
short-distance physics on our calculation.
\begin{figure}[htb]
\centerline{\includegraphics*[width=110mm]{GC.eps}}
\caption{Results for the charge form factor $G_C$ as a function of
$|{\bf q}|\equiv \sqrt{Q^2}$. The dashed lines show the largest and
smallest form factors obtained with the NLO wave functions of
Ref.~\cite{Ep05}. The range of predictions with these wave functions
is given by the horizontally shaded region. Similarly for the
dot-dashed lines and the diagonally shaded region at NNLO, and the
solid lines and the vertically shaded region at N$^3$LO. Other
results shown are for a wave function from Ref.~\cite{PC99} with
$R=1.5$ fm (short-dashed line), and the Nijm93 and CD-Bonn deuteron
wave functions (solid red and blue lines respectively). The
experimental data is taken from the extraction of Ref.~\cite{Ab00B}
and from Ref.~\cite{Ni03}:
upward triangles represent data from the $T_{20}$ measurement of
Ref.~\cite{Dm85}, open circle \cite{Fe96}, solid circle \cite{Sc84},
open squares \cite{Bo99}, downward triangles \cite{Gi90}, rightward
triangles~\cite{Ni03}, star \cite{Bo91}, solid squares \cite{Ga94},
solid diamonds \cite{Ab00A}.}
\label{fig-GC}
\end{figure}
It is worth noting that the NLO potential only includes $O(P^2)$
corrections to $V$, and so the calculations labeled ``NLO'' in
Fig.~\ref{fig-GC} are limited by the accuracy of the $NN$
potential. The calculations labeled ``NNLO'' have $O(P^3)$
corrections included in the potential: the same level of accuracy to
which the operator $J_0$ has been computed. In this respect only the
computation with the NNLO wave functions is one that is carried out to
a consistent order in both the wave functions and operators obtained
using $\chi$ET. The results with the NLO and N$^3$LO wave functions
are shown for comparison. For the same reason we show results with the
Nijm93 wave function, the CD-Bonn wave function, and the
one-pion-exchange plus square well wave functions of
Ref.~\cite{PC99}. In each case consistent choices for $\beta$ and
$\nu$ (as explained in Sec.~\ref{sec-oneoverM}) were employed. In the
case of all but the CD-Bonn and $\chi$ET N$^3$LO calculations the
impact of the $p^2/M^2$ pieces of the one-pion-exchange potential and
the kinetic-energy operator has been assessed via the techniques
described in Appendix~\ref{ap-p2overM2corrns}. Therefore all of these
matrix elements have the same one-pion-exchange physics. A difference
in their predictions is then either due to physics at the range of
two-pion-exchange, or to physics at distances less than the scale
where the chiral expansion can be used to reliably calculate the $NN$
potential.
In fact, Fig.~\ref{fig-GC} shows us that all of these wave functions
predict very similar form factors for $|{\bf q}| \leq 600$ MeV. The most
noticeable difference occurs around the zero of $G_C$---a region
where, by definition, higher-order contributions cancel with
lower-order contributions, and the calculation is therefore sensitive
to the addition of higher-order effects. Most wave functions also
produce a $G_C$ in good agreement with the data compilation of Abbott
{\it et al.}~\cite{Ab00B} for $|{\bf q}| \leq 600$ MeV. An exception is provided by the
predictions using the N$^3$LO wave function of Ref.~\cite{Ep05}, which
diverge from those of the other wave functions considered here at
significantly lower $Q^2$. Note also that the N$^3$LO predictions seem
to be significantly more sensitive to short-distance physics than is
the case for the wave functions computed with NLO or NNLO chiral
potentials, or even than the wave functions of Ref.~\cite{PC99}, where
only the result with the square well and surface delta function with
$R=1.5$ fm is shown, but changing $R$ to 2.5 fm produces a barely
discernible change in the short-dashed line.
It is possible that the situation with the predictions from the
N$^3$LO potential will improve when the pieces of $J_0$ of $O(e P^4)$
and $O(e P^5)$ which are not included in this calculation are added to
the current operator. But, even if this is the case, sizeable
cancellations between lower and higher-order effects are
necessary if the N$^3$LO wave function is to be used to describe
electron-deuteron data at momentum transfers $Q^2 \geq 0.2$ GeV$^2$.
One might argue that one does not expect the $\chi$ET to work beyond
this scale anyway. However, the chiral expansion developed here and in
Refs.~\cite{PC99,WM01,Ph03} for the deuteron current operator still converges
well at $Q^2 \sim 0.3$ GeV$^2$. In part this is because
the impulse (leading-order) result for $G_C$ can be written as:
\begin{equation}
G_C^{(0)}({\bf q})=\int_0^\infty \, dr \, j_0\left(\frac{|{\bf q}| r}{2}\right)
(u^2(r) + w^2(r)),
\label{eq:GCcoord}
\end{equation}
where $j_0$ is a spherical Bessel function and $w(r)$ is the
${}^3$D$_1$ radial wave function. This means that---at least for the
impulse-approximation piece of the matrix element---the relevant
momentum scale at which the deuteron wave function is probed is not,
in fact, $|{\bf q}|$, but $|{\bf q}|/2$---half of the momentum
transfer is taken away by the center-of-mass degree of freedom. In
this context the failure of the N$^3$LO wave functions' form-factor
predictions to agree with the data when $|{\bf q}|/2 \approx 200$ MeV is
rather disturbing.
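The role of the $|{\bf q}| r/2$ argument in Eq.~(\ref{eq:GCcoord}) can be made concrete with a toy pure S-wave model. The Hulth\'en-like form below is our own illustration and is not one of the wave functions used in this paper:

```python
import math

gamma = 0.2316  # fm^-1, deuteron binding momentum
beta = 1.5      # fm^-1, toy short-distance scale (illustrative)

def u(r):
    """Toy (unnormalized) 3S1 radial function."""
    return math.exp(-gamma * r) - math.exp(-beta * r)

def gc0(q, rmax=60.0, n=6000):
    """Impulse G_C of Eq. (GCcoord) for a pure S-wave; q in fm^-1.
    Simple Riemann sum; the D-wave w(r) is set to zero in this toy."""
    h = rmax / n
    norm = 0.0
    val = 0.0
    for i in range(1, n + 1):
        r = i * h
        wt = u(r) ** 2
        x = q * r / 2.0
        j0 = math.sin(x) / x if x > 1e-8 else 1.0
        norm += wt * h
        val += j0 * wt * h
    return val / norm

print(gc0(1e-6))           # normalization: G_C(0) = 1
print(gc0(1.0), gc0(2.0))  # falls off on the scale set by q/2, not q
```

Because the Bessel-function argument is $|{\bf q}| r/2$, doubling the momentum transfer probes the wave function only at momentum $|{\bf q}|/2$, which is the point made in the text.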
In Fig.~\ref{fig-GQ} we show the results for $G_Q$. As mentioned
above, the shape produced by all wave functions is remarkably
similar---a point which was exploited in an extraction of $G_E^{(n)}$
in Ref.~\cite{SS01}. This can be understood from the presence of
$j_2$, instead of the $j_0$ of Eq.~(\ref{eq:GCcoord}) in the
co-ordinate space integral that gives the leading-order contribution
to $G_Q$. The pattern of convergence for $G_Q$ predictions with the
order of the $\chi$ET potential is interesting. The NLO band is quite
wide, and its centroid is below the data. The NNLO band is very
narrow, and its centroid agrees well with data out to 800 MeV. We
stress that this is the consistent order for computation of the
potential, given the current operator we have used. The N$^3$LO band
is then as wide, and below, the NLO band. In this plot predictions
using the Nijm93 and CD-Bonn potentials are not shown. But they lie
within the diagonally-shaded band that represents the range of
predictions of the NNLO $\chi$ET potential.
\begin{figure}[htbp]
\centerline{\includegraphics*[width=120mm]{GQ.eps}}
\caption{Results for the quadrupole form factor $G_Q$ (in units of fm$^2$)
as a function of $|{\bf q}|\equiv \sqrt{Q^2}$. Legend as in
Fig.~\ref{fig-GC}, apart from the absence of curves for the Nijm93
and CD-Bonn wave functions. These curves fall within the range of
the dot-dashed lines, i.e. the results with NNLO $\chi$ET wave
functions.}
\label{fig-GQ}
\end{figure}
\begin{table}[htbp]
\begin{center}
\vskip 0.6 true cm
\begin{tabular}{||l||c|c|c|c|c||}
\hline \hline
& Experiment & NLO:001--003 & NNLO:104--102 & N$^3$LO:221--225 & Nijm93\\
\hline \hline
$Q_d$ (fm$^2$) & 0.2859(3) & 0.278--0.280 & 0.279--0.282 & 0.270--0.274 &0.276
\\ \hline \hline
\end{tabular}
\end{center}
\caption{Deuteron quadrupole moment computed with our NNLO charge
operator and different wave functions. Results are accurate to the
number of digits shown. The ranges are generated by considering
various values of $\Lambda$ and $\bar{\Lambda}$ at a given order in
the expansion for the $\chi$ET $NN$ potential. The ``extremal'' wave
functions are indicated in the top line of the table.}
\label{table-Qd}
\end{table}
In order to remove the rapid fall-off in the plots of
Figs.~\ref{fig-GQ} and \ref{fig-GC}, and also provide a result for the
ratio of form factors which will be measured at BLAST at the momentum
transfers indicated by the asterisks, we show predictions for the
ratio $G_C/G_Q$ in Fig.~\ref{fig-GCoverGQnoCT}. These predictions are
compared to data from the compilation of Ref.~\cite{Ab00B}, as well as
the more recent data set from Novosibirsk~\cite{Ni03}~\footnote{The
experimental error bars in this plot, and in the plots of $G_C/G_Q$
below, were obtained by summing the relative
errors for $G_C$ and $G_Q$ given in Refs.~\cite{Ab00B,Ni03}. Some of
the measurements of $t_{20}$ and $T_{20}$ from which these data came
were quite precise, and so this procedure may well overestimate the
size of their errors.}. As can easily be gleaned
from Fig.~\ref{fig-GCoverGQnoCT}, the different wave functions considered give
predictions that disagree at about the 5\% level at $Q^2=0$, i.e. they
give different numbers for the deuteron quadrupole moment $Q_d$.
Numerical results for $Q_d$, computed with the NNLO operator that was
used to generate the predictions of Fig.~\ref{fig-GCoverGQnoCT}, are
presented in Table~\ref{table-Qd}. Note that the variation in the
results with the $\chi$ET NLO and NNLO wave functions as the cutoffs
$\Lambda$ and $\bar{\Lambda}$ are varied is of the same size as the
discrepancy between those predictions and the experimental value
(\ref{eq:Qdexpt}).
\begin{figure}[phtb]
\centerline{\includegraphics*[width=87mm]{GCGQnorenorm.eps}}
\caption{Results for the ratio $G_C/G_Q$. As in Fig.~\ref{fig-GC} the
dashed lines and the horizontally shaded region show the range of
results obtained with the NLO wave functions of
Ref.~\cite{Ep05}. Likewise for the dot-dashed lines and the
diagonally shaded region at NNLO, and the solid lines and the
vertically shaded region at N$^3$LO. Other results shown are for
wave functions from Ref.~\cite{PC99} with $R=1.5$ fm and $R=2.5$ fm
(short-dashed lines), and the Nijm93 and CD-Bonn deuteron wave
functions (solid red and blue lines respectively). Upward triangles
are data from the $T_{20}$ measurement of Ref.~\cite{Dm85}, open
circle \cite{Fe96}, solid circle \cite{Sc84}, open squares
\cite{Bo99}, downward triangles \cite{Gi90}, rightward triangles
\cite{Ni03}, star \cite{Bo91}, solid squares \cite{Ga94}, solid
diamonds \cite{Ab00A}. The asterisks indicate the points where
BLAST will extract this ratio from their $t_{20}$ data.}
\label{fig-GCoverGQnoCT}
\end{figure}
\begin{figure}[phtb]
\centerline{\includegraphics*[width=87mm]{GCGQrenorm1.eps}}
\caption{Results for $G_C/G_Q$, after $J_0^{(5)}$ is inserted with a
coefficient adjusted to reproduce the experimental value of $Q_d$
and $\Lambda=1.5$ GeV. Legend as in Fig.~\ref{fig-GCoverGQnoCT}.
Each curve is shown only up to the point where the $J_0^{(5)}$
contribution is so large that the calculation is no longer
reliable (with the exception of NLO).}
\label{fig-GCoverGQrenorm}
\end{figure}
The fact that short-distance physics can affect the value of $Q_d$ at
the 2--3\% level needed to restore agreement between theory and data
encourages us to incorporate the operator $J_0^{(5)}$ (see
Eq.~(\ref{eq:E2SD})) in our calculation. In doing so we adopt the
$Q^2$-dependence of Eq.~(\ref{eq:J05uncert}) with $\Lambda=1500$
MeV. For each wave function the value of $h_4$ is adjusted to yield
the experimental value of $Q_d$. The results of this procedure are
presented in Fig.~\ref{fig-GCoverGQrenorm}. The NNLO, N$^3$LO, Nijm93,
CD-Bonn, and one-pion-exchange-tail potential curves are only shown
out to the momentum transfer where the contribution of the operator
(\ref{eq:E2SD}) makes up 20\% of the contributions from the preceding
orders. This provides a way to estimate where the calculations with
various wave functions become unreliable: namely, when
$J_0^{(5)}$ is no longer a small piece of the total $G_C/G_Q$. Under
this criterion most of the wave functions can give reliable
predictions to $|{\bf q}|=500$--$600$ MeV, and below this value the
wave-function dependence is markedly reduced as compared to what is
seen in Fig.~\ref{fig-GCoverGQnoCT}. Note that the lines at NLO are
shown only to give an idea of the uncertainty coming from sensitivity
to the choice of $(\Lambda,\bar{\Lambda})$. Their predictions with
this wave function for $G_C/G_Q$ beyond 600 MeV are provided only for
recreational purposes.
\begin{figure}[hbpt]
\centerline{\includegraphics*[width=90mm]{GCGQrenorm2.eps}}
\caption{Results for $G_C/G_Q$, showing the variation that results
from ignorance of the $Q^2$-dependence of the operator $J_0^{(5)}$
($\Lambda=1.2$--$2$ GeV). Calculations are shown for NLO (long
dashed), NNLO (dot-dashed), N$^3$LO (solid green), Nijm93 (solid
red), CD-Bonn (solid blue) and $R=1.5$ fm square well + one-pion
exchange (short dashed) wave functions. The vertical lines indicate
a reasonable range for the theoretical prediction at each of the
points where BLAST will have data. The experimental data is from
Refs.~\cite{Dm85} (upward triangle), \cite{Fe96} (open circle),
\cite{Sc84} (solid circle), and \cite{Bo99} (open square). Note
change of scale as compared to Fig.~\ref{fig-GCoverGQrenorm}.}
\label{fig-GCGQdetail}
\end{figure}
\begin{table}[hbtp]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$|{\bf q}|$ (MeV) & $G_C/G_Q$ (fm$^2$) & Error bar (fm$^2$)\\
\hline\hline%
368.4 & 3.11 & 0.11\\
403.9 & 2.99 & 0.13\\
439.3 & 2.86 & 0.16\\
474.8 & 2.71 & 0.18\\
514.2 & 2.53 & 0.22\\
559.5 & 2.29 & 0.26\\
606.8 & 2.02 & 0.30\\
\hline\hline
\end{tabular}
\end{center}
\caption{\label{table-BLASTpreds}
Predictions for the ratio $G_C/G_Q$ at the values of $|{\bf q}|$ where
this quantity will be measured at BLAST. The error bar and central
values displayed here are obtained via the procedure discussed in the text.}
\end{table}
Lastly we focus on the region where the $\chi$ET is
reliable: $|{\bf q}| \leq 600$ MeV. An enhanced view of this region is shown
in Fig.~\ref{fig-GCGQdetail}. For each choice of $NN$ potential two
curves are shown: the upper one of the pair corresponds to choosing
$\Lambda=1.2$ GeV when evaluating the operator $J_0^{(5)}$ and the
lower one corresponds to choosing $\Lambda=2$ GeV. A conservative
estimate for the impact of short-distance physics which is not
well-constrained in this $\chi$ET calculation is given by combining
the uncertainties from $(\Lambda,\bar{\Lambda})$ variation and the
uncertainty coming from lack of knowledge about the momentum
dependence of $J_0^{(5)}$. The black bars then represent the range of
possible values that the $\chi$ET predicts for $G_C/G_Q$ at the
kinematics where there will be data from BLAST. These ranges are
reproduced in Table~\ref{table-BLASTpreds}. The error is about $\pm
3$\% at the lowest BLAST point and increases with $Q^2$, as it
should. Note that we have not included the N$^3$LO $\chi$ET wave
function, or the NLO $\chi$ET wave function, in generating these
predictions. As already discussed, the predictions for $G_C$ and $G_Q$
with the N$^3$LO wave function deviate already from the extant data at
quite low $Q^2$, while the NLO wave function is, in the $\chi$ET sense,
less accurate than the operator being used here. The predictions
obtained with these wave functions are, however, within 2$\sigma$, if
the theoretical error bars we have obtained are taken to have the
usual one-standard-deviation interpretation.
\section{Results for $G_C/G_M$}
\label{sec-Jplusresults}
BLAST will also measure the ratio $G_C/G_M$. Predictions for that
observable are provided in Fig.~\ref{fig-GCoverGM}. We do not show any
experimental data in Fig.~\ref{fig-GCoverGM} because, as far as we can
glean from the literature, all previous data on $G_M$ come from data
sets different from those used for the extraction of $G_Q$ and
$G_C$~\cite{Ab00B}. Therefore, in general, data on $G_C$ and $G_M$ are
not at the same $Q^2$ and have different systematic errors. The BLAST
data set will be pioneering in this regard.
\begin{figure}[htb]
\centerline{\includegraphics*[width=110mm]{GCoverGM.eps}}
\caption{Results for the ratio $G_C/G_M$. Calculations shown are for
    extremal NLO (long dashed), extremal NNLO (dot-dashed), extremal
    N$^3$LO (solid green), Nijm93 (solid red), CD-Bonn (solid
    blue) and $R=1.5$ fm square well + one-pion exchange (short dashed)
    wave functions.}
\label{fig-GCoverGM}
\end{figure}
The calculations displayed in Fig.~\ref{fig-GCoverGM} are accurate to
relative order $P^2$, although the $G_C$ used here is actually
computed up to relative order $P^3$. Once again the shape of the
low-momentum part of the ratio is fairly wave-function independent,
but the value at $Q^2=0$ changes as we move through the different wave
functions used for the computation of Fig.~\ref{fig-GCoverGM}, due to
short-distance contributions to $\mu_d$ being different for different
wave functions. However, even without renormalization there is a
robust prediction for the ratio out to $Q^2 \approx 0.1$ GeV$^2$. The
robust prediction is that $G_C/G_M$ is approximately flat. This would
be exactly the case in the absence of relativistic, meson-exchange,
and nucleon-structure contributions to the operator, and if
$w(r)=0$. The relativistic corrections at $O(P^2)$ are negligible at
$Q^2 < 0.1$ GeV$^2$, and the meson-exchange piece of the charge
operator is of higher order than that to which we are attempting to
calculate the ratio $G_C/G_M$. As far as the operator is concerned this leaves only the
nucleon-structure effects, which depend on the ratio:
\begin{equation}
\frac{G_E^{(s)}}{G_M^{(s)}}.
\end{equation}
If a strict chiral expansion is used for the form factors then this
ratio depends (again, at this order) on the difference of the
isoscalar magnetic and charge radii, and amounts to a $< 10$\% effect at
$Q^2=0.1$ GeV$^2$. Even though taking the ratio $G_C/G_M$ does
not (as it did in the case of the ratio of the previous section)
eliminate nucleon-structure effects, it does reduce their
size. Meanwhile, the effects of $w$ grow with $Q^2$, and so it is
not particularly surprising that at $Q^2 < 0.1$ GeV$^2$ the ratio
$G_C/G_M$ is, to quite a good approximation, flat.
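The size of this effect can be made explicit with the standard low-$Q^2$
expansion of both isoscalar form factors (a textbook expansion, not taken
from this calculation):
\begin{equation}
\frac{G_E^{(s)}(Q^2)}{G_M^{(s)}(Q^2)} \approx \frac{G_E^{(s)}(0)}{G_M^{(s)}(0)}
\left[1 - \frac{Q^2}{6}\left((r_E^{(s)})^2 - (r_M^{(s)})^2\right)\right],
\end{equation}
so the deviation of this ratio from a constant is controlled by the
difference of the squared isoscalar charge and magnetization radii.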
Given the extent of the variation in the prediction for the ratio
beyond $|{\bf q}|=500$ MeV it is difficult to believe that the NLO
predictions shown here are reliable beyond that
point. This situation could improve once the NNLO pieces of the
operator ${\bf J}$ are computed, but short-distance pieces of ${\bf
J}$ appear already at that order. Therefore it is a prediction of
the chiral expansion that this ratio will be more sensitive to
short-distance physics than is $G_C/G_Q$. The position of the minimum
in $G_M$ is known to be very sensitive to such short-distance
physics~\cite{Wi95,vO95,SP02,Ph05B}. In this context it is worth
noting that the minimum for the N$^3$LO wave functions is already at
$|{\bf q}| \approx 800$ MeV---much lower than in any of the
calculations of Refs.~\cite{Wi95,vO95,SP02,Ph05B} and indeed, much
lower than the experimental data allows the position of the minimum to
be~\cite{Ab00B}.
\section{Conclusion}
\label{sec-conclusion}
We have used the $\chi$ET isoscalar charge operator in the
nucleon-nucleon sector computed to NNLO (including a consistent
treatment of the $1/M$ pieces of the charge), and the wave functions of
Ref.~\cite{Ep05}, to obtain the form-factor ratios $G_C/G_E^{(s)}$,
$G_Q/G_E^{(s)}$, $G_C/G_Q$. These ratios test $\chi$ET's ability to
describe deuteron structure. We confirm and extend the finding of
Ref.~\cite{Ph03}, that the NLO and NNLO $\chi$ET wave functions,
combined with the NNLO $J_0$, yield results for these ratios that
agree---within the experimental uncertainties---with the extractions
of Ref.~\cite{Ab00B} for $Q^2 < 0.35$ GeV$^2$. In contrast, the
N$^3$LO wave function of Ref.~\cite{Ep05}, when used in conjunction with
the NNLO charge operator, produces form factors that depart from the
data at $Q^2 \approx 0.2$ GeV$^2$.
In light of the upcoming release of data on $t_{20}$ from BLAST we
examined the ratio $G_C/G_Q$ in detail. We found variation in the
$\chi$ET predictions for this ratio at $Q^2=0$, and also found
that---even allowing for this variation---$\chi$ET is in disagreement
with the experimental value for this quantity. This phenomenon---the
``$Q_d$ problem''---is familiar in high-precision $NN$ potential
models with the modern value of the $\pi$NN coupling constant. In
$\chi$ET its solution arises naturally through a higher-order two-body
current that couples exclusively to quadrupole photons. We added this
operator to our analysis, and showed that when we do so the $\chi$ET
predictions for $G_C/G_Q$ (with the NNLO wave functions) fall within a
narrow band out to $Q^2 \approx 0.35$ GeV$^2$ (see
Fig.~\ref{fig-GCGQdetail}). We also performed the calculation with the
same charge operator and potential-model wave functions that have a
one-pion-exchange tail identical to that of the $\chi$ET
potential~\cite{St94,Ma01,PC99}. We found that these wave functions
make the band obtained at NNLO in $\chi$ET about a factor of two
wider. We conservatively adopt the full width of that band as
representative of the theoretical uncertainty in our calculation.
Meanwhile, the $\chi$ET predictions for $G_C/G_M$, which will also be
measured at BLAST, are not reliable to as high a $Q^2$. In saying
this, it should, in fairness, be pointed out that ${\bf J}$ has not
been computed to as high an order as $J_0$, and it could therefore be
that $G_C/G_M$ can also be well described to $Q^2=0.35$ GeV$^2$ in
$\chi$ET once $O(e P^4)$ corrections to $\bf J$ are included. This is
a topic for future work. Another topic for future work is the
inclusion of the $O(e P^4)$ pieces of $J_0$ that were already computed
in Ref.~\cite{Pa00} at $Q^2=0$ in the finite-$|{\bf q}|$
calculation~\cite{quadri}. In addition, the operators and the
$\chi$ET wave functions used in this paper to make predictions for
the BLAST data can be further tested by comparing their predictions
with experimental results for deuteron
electro-disintegration---although this will require the computation of
the isovector pieces of the operators.
Irrespective of such future efforts, one thing is already clear from
Fig.~\ref{fig-GCGQdetail}. When the theoretical predictions for
$G_C/G_Q$ are renormalized in the manner we advocate here, the
theoretical uncertainty in $G_C/G_Q$ for $Q^2 \leq 0.3$ GeV$^2$ is
less than the uncertainty in the data. This makes the
low-$Q^2$ data from BLAST all the more crucial, since it will provide
an important test of $\chi$ET's ability to organize contributions to
deuteron observables, and its ability to use that organization to
provide estimates of the theoretical uncertainty in a given
calculation.
\section*{Acknowledgments}
I thank Michael Kohl and Chi Zhang for a number of conversations
regarding the BLAST data, and in particular for stimulating questions
regarding the theoretical uncertainty that arises from the
$Q^2$-dependence of the two-body current that renormalizes $Q_d$. I
am also grateful to Richard Milner for inviting me to a BLAST workshop
in January 2005 where a number of the results in this paper were
presented in preliminary form. Thanks also to Evgeny Epelbaum for
supplying me with the wave functions of Ref.~\cite{Ep05}, and for his
comments on both my results and this manuscript. I am also grateful to
Lucas Platter for his careful reading of the manuscript and assistance
with the spelling of Dutch names. This work was supported by the US
Department of Energy under grant DE-FG02-93ER40756.
\section{Benchmark quantities\label{sec:benchmark}}
With recent advances, the field of Lattice QCD is mature enough to provide reliable
information about the world of nuclear physics.
The first major breakthrough was a successful calculation of the hadron
spectrum~\cite{Durr:2008zz}.
The next milestone, which has nearly been reached, is to verify
that lattice QCD correctly captures the internal dynamics and structure of hadrons.
On this path, reproducing basic features of the most studied hadrons, the proton and
the neutron, is an essential step.
The nucleon structure observables discussed in this section have
the least systematic ambiguity and stochastic error, and thus may be categorized
as ``lattice QCD benchmark'' quantities.
\subsection{Nucleon axial charge}
The axial charge of the nucleon is an important quantity for the entire field of nuclear physics,
as it governs the rate of $\beta$-decay and, for instance, the neutron half-life.
It is defined as a forward nucleon matrix element of the isovector axial-vector quark current,
\begin{equation}
\label{eqn:ga}
\langle N(P)|\bar{q}\gamma^\mu\gamma^5 q|N(P)\rangle
  = g_A\, \bar{u}_P\gamma^\mu\gamma^5 u_P\,,
\end{equation}
where $u_P$ is the nucleon spinor, and can be calculated without the ambiguity
of disconnected quark contractions.
Historically, most attempts to calculate this quantity resulted in values 10--15\%
below the experimental number, almost irrespective of the pion mass used.
Until recently, it was easy to ascribe this discrepancy to the heavy pion masses employed.
However, recent calculations with $m_\pi$ approaching the physical point apparently
continue this trend.
Over the years, the deficiency has been ascribed to finite-volume effects
(FVE)~\cite{Yamazaki:2008py}, excited state
contributions~\cite{Owen:2012ts,Capitani:2012gj,Jager:2013kha}
and finite-temperature effects~\cite{Green:2012ud}.
There is no convincing evidence that any of these sources is solely responsible for the
discrepancy; it is plausible that an interplay of various systematic effects takes place,
and with every collaboration using slightly different methods, meaningful comparisons are
complicated.
For instance, better agreement was claimed upon removal of excited state contributions
with variational~\cite{Owen:2012ts} and summation~\cite{Capitani:2012gj,Jager:2013kha}
methods, while others~\cite{Dinter:2011sg,Bhattacharya:2013ehc} did not detect
significant excited state effects.
The pattern of finite volume dependence, initially claimed in~\cite{Yamazaki:2008py},
was not confirmed when other collaboration results were examined~\cite{Alexandrou:2013joa}.
Finite temperature dependence~\cite{Green:2012ud} was not confirmed in a subsequent detailed
study~\cite{Green:2013hja} at $m_\pi\approx250\text{ MeV}$ (Fig.~\ref{fig:gA-vs-Ls-Lt}).
Finally, there is a very encouraging agreement with experiment from the most recent
study directly at the physical pion mass~\cite{Alexandrou:2013jsa}.
Hopefully, this preliminary result will be confirmed by other collaborations
with careful evaluations of all systematic effects.
\begin{figure}[ht!]
\centering
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/gA_summary.pdf}
\caption{\label{fig:gA-summary}
Summary of nucleon axial charge $g_A$ lattice results~\cite{Owen:2012ts,Alexandrou:2010hf,Alexandrou:2011aa,Alexandrou:2013joa,Bhattacharya:2013ehc,Capitani:2012gj,QCDSF:2011aa,Horsley:2013ayv,Ohta:2013qda,Bratt:2010jn,Green:2012ud}.}
\end{minipage}~
\hspace{.03\textwidth}~
\begin{minipage}{.48\textwidth}
\includegraphics[width=\textwidth]{figs/gA_corners.pdf}
\caption{\label{fig:gA-vs-Ls-Lt}
Detailed study of $g_A$ dependence on volume and temperature with Wilson fermions,
$m_\pi\approx250\text{ MeV}$~\cite{Green:2013hja}.}
\end{minipage}
\end{figure}
\subsection{Quark momentum fraction}
\begin{figure}[ht!]
\begin{minipage}{.48\textwidth}
\includegraphics[width=\textwidth]{figs/xv_summary.pdf}
\caption{\label{fig:xv-summary}
Summary of quark momentum fraction $\la x\ra_{u-d}^{\overline{MS}(2\text{ GeV})}$
lattice results~\cite{Alexandrou:2013joa,Aoki:2010xg,Bali:2012av,Pleiter:2011gw,Bratt:2010jn,Green:2012ud}.
}
\end{minipage}~
\hspace{.03\textwidth}~
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/CLS-Mainz_avgX_O7.pdf}
\caption{\label{fig:xv-exc-states}
Excited state contributions to bare $\la x\ra_{u-d}$ and their removal using
the summation method~\cite{Jager:2013kha}.
}
\end{minipage}
\end{figure}
Another ``benchmark'' quantity is the isovector quark momentum fraction.
It is measured in DIS experiments and, although its value depends on
phenomenological models of parton distribution functions, different
parameterizations yield consistent results.
Lattice calculation of this quantity involves the quark energy-momentum tensor operator,
\begin{equation}
\langle N(P)|\bar{q}\gamma_{\{\mu} \overset{\leftrightarrow}{D}_{\nu\}} q|N(P)\rangle
= \la x\ra_{q} \, \bar{u}_P \,P_{\{\mu}\gamma_{\nu\}} \, u_P\,,
\end{equation}
and is typically converted to $\overline{MS}(2\text{ GeV})$ using the RI/MOM
method~\cite{Martinelli:1994ty}.
Until recently, lattice results overwhelmingly overestimated the phenomenological
value by $40$--$60\%$.
Newer results closer to the physical pion mass tend to approach the experimental
value (Fig.~\ref{fig:xv-summary}).
Such behavior is in agreement with corrections computed in Chiral Perturbation Theory (ChPT),
$\delta^\text{ChPT}\la x\ra_{u-d}\sim m_\pi^2\log m_\pi^2$,
indicating that this quantity may change rapidly at lighter pion masses,
thus precluding reliable chiral extrapolations.
Many recent studies point out that this quantity suffers from substantial
excited state effects~\cite{Alexandrou:2011aa,Jager:2013kha,Bali:2013nla},
see Fig.~\ref{fig:xv-exc-states},
increasingly so towards the physical pion mass~\cite{Green:2012ud}, where subtraction
of excited state contributions has led to agreement with experiment.
\subsection{Nucleon radius and magnetic moment\label{sec:benchmk-radii-mag}}
\begin{figure}[ht!]
\begin{minipage}{.495\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/r1v_summary.pdf}
\caption{\label{fig:r1v-summary}
Summary of $(r_1^2)^v$ lattice
results~\cite{Collins:2011mk,Bhattacharya:2013ehc,Capitani:2012ef,Alexandrou:2013joa,Lin:2013bxa,Syritsyn:2009np,Bratt:2010jn,Green:2012ud}.}
\end{minipage}
\begin{minipage}{.495\textwidth}
\includegraphics[width=\textwidth]{figs/kv_summary.pdf}
\caption{\label{fig:kappav-summary}
Summary of $\kappa^v$ lattice
results~\cite{Collins:2011mk,Bhattacharya:2013ehc,Alexandrou:2013joa,Lin:2013bxa,Yamazaki:2009zq,Syritsyn:2009np,Bratt:2010jn,Green:2012ud}.}
\end{minipage}
\end{figure}
Electromagnetic structure of the nucleon is characterized with
the Dirac and Pauli form factors $F_{1,2}^q$:
\begin{equation}
\label{eqn:vector-ff}
\langle N(P + q)|\bar{q}\gamma^{\mu} q|N(P)\rangle
= \bar{u}_{P+q} \big[F_1^q(Q^2) \gamma^\mu +
F_2^q(Q^2)\frac{i\sigma^{\mu\nu}q_\nu}{2 M_N}\big] u_P\,,
\quad Q^2 = -q^2\,,
\end{equation}
which will be discussed in more detail in Sect.~\ref{sec:formfac}.
The small-$Q^2$ behavior of these form factors,
$F^q(Q^2) = F^q(0)\big[1 - \frac16 Q^2 (r^2)^q +{\mathcal O}(Q^4) \big]$,
is characterized by the Dirac and Pauli radii $(r_{1,2}^2)^q$ of the charge
and (anomalous) magnetization distributions in the nucleon\footnote{
These quantities should not be literally understood as radii because of
relativistic nucleon recoil taking place in measuring these form factors.
Note also that experimentalists usually report electric and magnetic Sachs form factors
and the corresponding radii $(r_{E,M}^2)$~\cite{Beringer:1900zz},
instead of $(r_{1,2}^2)$; these pairs of quantities are linear combinations of
each other.}.
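For reference, the well-known relations between the Sachs and the Dirac--Pauli
form factors (standard definitions, not specific to this work) are
\begin{equation}
G_E(Q^2) = F_1(Q^2) - \frac{Q^2}{4 M_N^2}\, F_2(Q^2)\,, \qquad
G_M(Q^2) = F_1(Q^2) + F_2(Q^2)\,,
\end{equation}
which at small $Q^2$ give, e.g., $(r_E^2) = (r_1^2) + \frac{3\kappa}{2 M_N^2}$
with $\kappa = F_2(0)$.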
Calculations of the isovector Dirac radius $(r_1^2)^v$ are tremendously
important as a benchmark of lattice QCD, but even more so because of
the persisting experimental discrepancy in the proton electric radius $(r^2_{Ep})$
between measurements involving electrons and muons~\cite{Pohl:2010zza},
which might be a signature of new physics phenomena.
The two experimental points in Fig.~\ref{fig:r1v-summary} demonstrate this
discrepancy in terms of $(r_1^2)^v$, together with a summary of results from
different lattice groups.
Similarly to $\langle x\rangle_{u-d}$, the isovector radius $(r_1^2)^v$ is also
strongly affected by low-energy QCD dynamics, diverging in the chiral limit as
$\delta^\text{ChPT}(r_1^2)^v\sim\log m_\pi^2$,
a likely reason why calculations at heavier pion masses result in values
$\approx50\%$ below experiment.
However, many recent lattice QCD calculations with decreasing pion masses
do not show a sufficient upward trend.
A calculation with $N_f=2+1$ dynamical ${\mathcal O}(a^2)$-improved Wilson fermions
in range $m_\pi\approx250\rightarrow150\text{ MeV}$ demonstrated
that systematic effects from excited states increase dramatically, especially
below $m_\pi\lesssim200\text{ MeV}$, and their elimination with the simple
``summation'' method is sufficient to achieve agreement with experiment~\cite{Green:2012ud}.
It is also encouraging that the statistical accuracy at the lightest pion mass
is comparable to the discrepancy between the two experiments
(Fig.~\ref{fig:r1v-summary}) and, if improved, may contribute to the resolution
of the experimental controversy.
It is worth noting that ``radii'' are usually extracted from form factors using
phenomenological fits such as the dipole form.
In order to eliminate dependence on fit models, calculations at larger spatial
volumes must be performed to have access to smaller values of $Q^2$.
Finite volume contributions to $G_E(Q^2)$ and $(r_E^2)$ have been computed
in effective theory~\cite{Hall:2012yx} and are sizable,
$\delta (r_E^2)^v\big|_{m_\pi L=4}\approx0.03 (\text{fm})^2$,
and thus must be studied as well.
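As an illustration of the fit-model dependence just discussed, the following
sketch extracts a radius from synthetic form-factor values using a dipole ansatz
$G(Q^2)=G(0)/(1+Q^2/\Lambda^2)^2$, for which $(r^2)=12/\Lambda^2$. The dipole
mass $\Lambda^2=0.71\text{ GeV}^2$ and the data points are hypothetical, not
taken from any of the cited calculations.

```python
import numpy as np

HBARC2 = 0.0389379  # (hbar*c)^2 in GeV^2 fm^2, to convert GeV^-2 -> fm^2

def fit_dipole(Q2, G):
    """Fit G(Q^2) = G(0)/(1 + Q^2/L2)^2 by linearizing:
    1/sqrt(G) = a + b*Q^2, so that G(0) = 1/a^2 and L2 = a/b."""
    a, b = np.polynomial.polynomial.polyfit(Q2, 1.0 / np.sqrt(G), 1)
    G0, L2 = 1.0 / a**2, a / b
    r2_fm2 = 12.0 / L2 * HBARC2  # dipole radius <r^2> = 12/Lambda^2
    return G0, L2, r2_fm2

# hypothetical "data": an exact dipole with Lambda^2 = 0.71 GeV^2
Q2 = np.linspace(0.05, 1.0, 20)        # GeV^2
G = 1.0 / (1.0 + Q2 / 0.71) ** 2

G0, L2, r2 = fit_dipole(Q2, G)
print(G0, L2, np.sqrt(r2))             # recovers G(0)=1, Lambda^2=0.71, r ~ 0.81 fm
```

A real lattice analysis must additionally propagate statistical errors and vary
the fit form; that residual model dependence is exactly what direct access to
smaller $Q^2$ on larger volumes would remove.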
A summary of anomalous magnetic moment $\kappa_v = F_2^{u-d}(0)$ calculations is
presented in Fig.~\ref{fig:kappav-summary}.
This quantity has milder dependence on the pion mass, and its chiral extrapolations
and recent calculations close to the physical point are in good agreement
with experiment.
To compute this quantity, one has to extrapolate the Pauli form factor to
$Q^2\to0$; increasing the lattice spatial size $L_s$ will reduce the systematic errors
and provide another stringent test of lattice QCD.
\section{Hadron wave functions}
Lattice QCD provides a fascinating opportunity to study wave functions of quarks
in the nucleon and other hadrons.
Although such wave functions have only limited phenomenological meaning and
are difficult to compare to experimental data, they can be very illuminating for our
understanding of the internal structure of hadrons and their excited states.
In a calculation very close to the physical point, radial profiles of quark density
in the nucleon and its excited states have been studied~\cite{Roberts:2013ipa},
see Fig.~\ref{fig:nucleon-wf}.
Using a basis of four nucleon interpolating fields composed of quark sources with varied
smearing size, the CSSM collaboration has identified, in addition to the nucleon ground state,
candidates for $n=1$ (Roper) and $n=2$ excitations, which have clearly visible nodes in
their radial quark density profile.
\begin{figure}[ht!]
\begin{minipage}{.31\textwidth}
\centering
\includegraphics[angle=90,width=\textwidth]{figs/CSSM_chisqBestFitCQMS1.pdf}\\
\includegraphics[width=\textwidth]{figs/CSSM_State1diffColMap_crop.png}\\
\end{minipage}~
\hspace{.03\textwidth}~
\begin{minipage}{.31\textwidth}
\centering
\includegraphics[angle=90,width=\textwidth]{figs/CSSM_chisqBestFitCQMS2.pdf}\\
\includegraphics[width=\textwidth]{figs/CSSM_State2diffColMap_crop.png}\\
\end{minipage}~
\hspace{.03\textwidth}~
\begin{minipage}{.31\textwidth}
\centering
\includegraphics[angle=90,width=\textwidth]{figs/CSSM_chisqBestFitCQMS3.pdf}\\
\includegraphics[width=\textwidth]{figs/CSSM_State3diffColMap_crop.png}\\
\end{minipage}~
\caption{\label{fig:nucleon-wf}
Wave functions of the nucleon and its radial excitations~\cite{Roberts:2013ipa}.
}
\end{figure}
\begin{figure}[ht!]
\includegraphics[width=\textwidth]{figs/QCDSF_nnstar_barycentric_labeled.pdf}
\caption{\label{fig:nucleon-da}
Nucleon distribution amplitudes of valence quarks~\cite{Schiel:lat2013:2} in
coordinates $x_1+x_2+x_3=1$.}
\end{figure}
Hadron distribution amplitudes (DA), or partonic wave functions, describe
hadron structure in terms of light-cone Fock states.
In the case of the nucleon, at a sufficiently low scale these wave functions are dominated
by three valence quarks carrying all of the boosted baryon momentum, $x_1 + x_2 + x_3=1$,
where $x_{1,2,3}$ are the valence quark momentum fractions.
Distribution amplitudes cannot be computed directly on a lattice because probing a
partonic wave function requires an operator with quarks separated along a light-cone direction.
Instead, DAs are parameterized as polynomials in $x_{1,2,3}$; the polynomial coefficients
are called ``shape parameters'' and correspond to local operators calculable on a lattice.
Figure~\ref{fig:nucleon-da} shows results of a recent lattice calculation~\cite{Schiel:lat2013:2}
where distribution amplitudes of the nucleon and its excited states were computed using
$N_f=2$ dynamical Wilson fermions.
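For orientation, the shape parameters quantify deviations from the asymptotic
form of the leading-twist nucleon DA (a standard perturbative-QCD result, not
specific to Ref.~\cite{Schiel:lat2013:2}):
\begin{equation}
\varphi_{\rm as}(x_1,x_2,x_3) = 120\, x_1 x_2 x_3\,, \qquad x_1+x_2+x_3=1\,,
\end{equation}
to which the DA evolves in the limit of infinite renormalization scale.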
\section{Hadron form factors}
\subsection{Nucleon electromagnetic (vector) form factors\label{sec:formfac}}
Nucleon Dirac and Pauli form factors $F_{1,2}(Q^2)$ defined in Eq.~(\ref{eqn:vector-ff})
are among the main characteristics of nucleon structure.
Interest in the nucleon form factors has been reignited in recent years
as more precise experiments became available in a wide range of momentum transfer $Q^2$.
These form factors can also be considered benchmark quantities in addition to those
discussed in Sec.~\ref{sec:benchmark}.
In addition to verifying the methodology, lattice QCD calculations of the form factors
may help resolve some experimental uncertainties.
For example, proton form factor measurements are subject to corrections from $2\gamma$
exchange, while neutron studies must be conducted on nuclei and therefore have nuclear
model uncertainties.
\begin{figure}[ht!]
\begin{minipage}{.589\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/PNDME_f1v_1306-5435.pdf}\\
\includegraphics[width=\textwidth]{figs/PNDME_f2v_1306-5435.pdf}\\
$Q^2\;[ \text{GeV}^2]$
\end{minipage}
\begin{minipage}{.401\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/LHP_f1_149MeV.pdf}\\
\vspace{-.05cm}
\includegraphics[width=\textwidth]{figs/LHP_f2_149MeV.pdf}\\
$Q^2\;[ \text{GeV}^2]$
\end{minipage}
\caption{\label{fig:emff-comp}
Comparison of the nucleon isovector electromagnetic form factors $F_{1,2}(Q^2)$ computed with
$m_\pi=310,\,220\text{ MeV}$~\cite{Bhattacharya:2013ehc}
(left, triangles and squares, respectively) and
$m_\pi=149\text{ MeV}$~\cite{Green:2013hja} (right).
Experimental parameterizations are from Ref.~\cite{Arrington:2007ux} (dashed line, left)
and Ref.~\cite{Kelly:2004hm} (solid line, right).}
\end{figure}
In Figure~\ref{fig:emff-comp}, two recent calculations~\cite{Green:2013hja,Bhattacharya:2013ehc}
of nucleon isovector form factors $F_{1,2}^{u-d}(Q^2)$ are compared.
Both calculations use the $N_f=2+1$ dynamical Wilson fermion action and
incorporate advanced methods to isolate the ground state from excited states,
which has been demonstrated to have dramatic impact on calculations of the isovector
radii~\cite{Green:2012ud}.
The panels on the right show form factors computed close to the physical point
($m_\pi=149\text{ MeV}$), which agree nicely with the phenomenological fit to
experimental data.
Calculations with heavier pion masses $m_\pi=310,\,220\text{ MeV}$ shown on the left
disagree with the experimental fit, especially form factor $F_1$
at intermediate momenta $Q^2\gtrsim0.5\text{ GeV}^2$.
This disagreement driven by heavy pion masses is surprising, since naively one would expect
that the low-energy dynamics governed by the pion mass should not influence the structure
of the nucleon at momenta $Q^2\gg m_\pi^2$.
There is only a slight, borderline-significant downward trend in the form factor values
between $m_\pi=310$ and $220\text{ MeV}$ in Fig.~\ref{fig:emff-comp} (left),
also noticed earlier in the range $m_\pi\approx300\ldots400\text{ MeV}$~\cite{Syritsyn:2009np},
suggesting that an abrupt change must take place between $m_\pi\approx200\text{ MeV}$
and the physical point.
\subsection{Nucleon axial form factors}
Nucleon axial form factors characterize nucleon structure with respect to the density
of the axial vector quark current,
\begin{equation}
\label{eqn:axial-ff}
\langle N(P+q)|\bar{u}\gamma^{\mu}\gamma^5 u - \bar{d}\gamma^{\mu}\gamma^5 d|N(P)\rangle
= \bar{u}_{P+q} \big[G_A^q(Q^2) \gamma^\mu\gamma^5 +
G_P^q(Q^2)\frac{\gamma^5 q^\mu}{2 M_N}\big] u_P\,,
\end{equation}
where $G_A$ and $G_P$ are axial and induced pseudoscalar form factors, respectively.
Experimental data on these form factors come from measurements of neutrino scattering,
charged pion electroproduction and muon capture experiments~\cite{Bernard:2001rs}.
\begin{figure}[ht!]
\begin{minipage}{.495\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/ETM_ga_ff_summary.pdf}\\
\end{minipage}
\begin{minipage}{.495\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/ETM_Gp_compare_NF2p1p1.pdf}\\
\end{minipage}
\caption{\label{fig:ga-gp-summary}
Nucleon axial $G_A(Q^2)$ (left) and induced pseudoscalar $G_P(Q^2)$
form factors~\cite{Alexandrou:2013joa}.
The solid lines are the dipole fit to $G_A$ (left) and the pion pole dominance fit to
$G_P$ (right) experimental data (not shown).
}
\end{figure}
Figure~\ref{fig:ga-gp-summary} displays the summary of lattice data on $G_{A,P}$ form factors
calculated with various actions.
The left panel shows the axial form factor $G_A$ together with a phenomenological dipole
fit to experimental data (not shown).
The axial radius $r_A^2$, defined similarly to the EM radii in Sec.~\ref{sec:benchmk-radii-mag} as
the slope $(-6)\big(dG_A(Q^2)/dQ^2\big)\big|_{Q^2=0}$,
is underestimated by $\approx50\%$ for all pion mass values $m_\pi\ge213\text{ MeV}$,
and it shows very little variation with the pion mass in the entire range
$m_\pi=213\ldots373\text{ MeV}$~\cite{Alexandrou:2013joa}.
This non-trivial effect makes extrapolations based on baryon Chiral Perturbation Theory
questionable, since the NLO Lagrangian~\cite{Bernard:1998gv} does not contain any terms
to account for $m_\pi$ dependence, thus requiring higher orders of baryon ChPT
at pion masses as small as $m_\pi\approx200\text{ MeV}$.
The induced pseudoscalar form factor $G_P$ is special as it is governed by the intermediate
pion pole $\sim(Q^2+m_\pi^2)^{-1}$ in the coupling of the operator~(\ref{eqn:axial-ff})
to the nucleon.
Clearly, this form factor must depend strongly on the pion mass, becoming a steeper
function of $Q^2$ with decreasing $m_\pi$.
However, the most recent $G_P$ results~\cite{Alexandrou:2013joa} exhibit
very little dependence on $m_\pi$; this may be either a non-trivial low-energy
effect similar to the one seen in $G_A$ form factor data, or, as suggested in
Ref.~\cite{Alexandrou:2013joa}, may be caused by finite volume effects.
This makes accurate lattice calculations very challenging, especially in the region
of low momenta $Q^2 \sim m_\mu^2$ relevant for phenomenology, where muon capture experiments
$\mu+p\to\nu_\mu + n$ can now measure $G_P$ with substantially improved
precision~\cite{Andreev:2012fj}.
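The pion-pole-dominance expectation can be made explicit; in a common convention
(matching Eq.~(\ref{eqn:axial-ff}) up to normalization) it reads
\begin{equation}
G_P(Q^2) \;\simeq\; \frac{4 M_N^2\, G_A(Q^2)}{Q^2 + m_\pi^2}\,,
\end{equation}
so the low-$Q^2$ region probed by muon capture is precisely where the pion pole
dominates, and $G_P$ should become a steeper function of $Q^2$ as $m_\pi$ decreases.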
\subsection{Form factors of the pion and $\Lambda$}
Lattice QCD enables studying properties of hadrons that are not accessible,
or are very difficult to access, in experiments,
such as mesons or unstable baryons containing heavy quarks.
For example, computing the structure of strange baryons enables testing hypotheses
about their internal dynamics.
In a recent study~\cite{Menadue:2013xqa}, electric form factors of $\Lambda$
and $\Lambda(1405)$ were computed with the help of the variational method
(see Fig.~\ref{fig:lambda1405-geff}).
From the figure, one can conclude that the mean squared radius of the
$s$-quark distribution is enhanced in $\Lambda(1405)$ compared with $\Lambda$,
while the radius of the light $u,d$-quark distribution shrinks, as $m_\pi$ goes to the physical point.
Such differences support the hypothesis that $\Lambda(1405)$, an $I(J^P)=0({\frac12}^-)$ state,
has a significant component of the ``molecular'' $\overline{K}N$ state, in which the heavier
nucleon is surrounded by the cloud of the $\overline{K}=s\bar{q}_\text{light}$ meson.
\begin{figure}[ht!]
\centering
\begin{minipage}{.475\textwidth}
\centering
\includegraphics[height=.65\textwidth]{figs/CSSM_Lambda1405_plot.pdf}\\
\caption{\label{fig:lambda1405-geff}
$G_E$ form factor at $Q^2=0.16\text{ GeV}^2$ for $\Lambda$ and
$\Lambda(1405)$~\cite{Menadue:2013xqa}.
}
\end{minipage}~
\begin{minipage}{.07\textwidth}
\end{minipage}~
\begin{minipage}{.475\textwidth}
\centering
\includegraphics[height=.65\textwidth]{figs/Guelpers_radiusvsmpi.pdf}\\
\caption{\label{fig:pion-scalar-rad}
Scalar radius of the pion, full and connected-only terms~\cite{Gulpers:2013uca},
$N_f=2$ dynamical fermions.
}
\end{minipage}
\end{figure}
Another example is the scalar radius of the pion, for which only phenomenological estimates exist.
A few lattice studies have been performed that disagreed with these estimates by a factor of two.
A recent study~\cite{Gulpers:2013uca}, however, has taken into account disconnected
quark contractions, that turned out to be comparable in magnitude to the connected terms,
see Fig.~\ref{fig:pion-scalar-rad}.
The resulting values agree nicely with the NLO prediction of ChPT in a range of pion masses,
and the extrapolated value agrees with phenomenology at the physical pion mass.
This agreement confirms the validity of ChPT in the meson sector and provides
an additional method to determine low energy constants of the theory,
with the ultimate goal to determine all the parameters from first principles.
\section{Origin of the proton spin}
The proton spin puzzle is the experimental fact that the spins of quarks comprise
only $\approx30-50\%$ of the full $\frac12$-spin of the nucleon.
The missing part of the nucleon spin can be generated by the orbital motion of quarks and
angular momentum of the glue.
The size of their contributions can be computed on a lattice with the help of the generalized form
factor formalism\footnote{
Recently it has also been suggested that parton distribution functions can be studied
directly on a lattice~\cite{Ji:2013dva}. },
in which quark and gluon momenta and angular momenta can be defined
in a gauge-invariant way~\cite{Ji:1996ek}:
\begin{gather}
\la N(P+\frac12q) |\, T^{q,g}_{(\mu\nu)} \,| N(P-\frac12q)\ra
= \bar{u}_{P+\frac12q} \big[
A_{20}^{q,g} \gamma_{(\mu} P_{\nu)}
+ B_{20}^{q,g} \frac{P_{(\mu}\,i\sigma_{\nu)\rho} q^\rho}{2 M_N}
+ C_2^{q,g} \frac{q_{(\mu} q_{\nu)}}{M_N}
\big] u_{P-\frac12q}
\end{gather}
where $T^{q,g}_{\mu\nu}$ is the energy-momentum tensor of quarks or gluons
and $(\mu\nu)$ above denotes symmetrization over the indices and subtraction of the trace.
The generalized form factors $\{A_{20},B_{20},C_2\}^{q,g}(Q^2)$ depend
on the momentum transfer $Q^2=-q^2$ and their forward values at $Q^2\to0$
give the momentum fractions and angular momenta carried by quarks and gluons,
\begin{equation}
\la x \ra_{q,g} = A_{20}^{q,g}(0)\,,
\quad\quad
J_{q,g} = \frac12\big[A_{20}^{q,g}(0) + B_{20}^{q,g}(0)\big]\,.
\end{equation}
In the case of quarks, the angular momentum above is the sum of orbital angular momentum and spin;
for gluons, spin and orbital motion cannot be separated in a gauge-invariant way
in this formalism.
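As a sketch of how these forward values are extracted in practice: the generalized form factors are computed at discrete nonzero $Q^2$ and must be extrapolated to $Q^2=0$ with a smooth ansatz. Below is a minimal illustration with synthetic numbers; the dipole ansatz and all parameter values here are assumptions for illustration only:

```python
import numpy as np

def dipole(Q2, c0, m2):
    """Generic dipole ansatz F(Q^2) = c0 / (1 + Q^2/m2)^2 for a GFF."""
    return c0 / (1.0 + Q2 / m2) ** 2

Q2 = np.linspace(0.1, 1.2, 8)          # GeV^2, typical lattice momentum transfers

# Synthetic GFF "data" from assumed forward values (illustration only)
A20 = dipole(Q2, 0.35, 1.5)            # -> <x>_q = A20(0)
B20 = dipole(Q2, 0.10, 1.5)

def forward_value(Q2, F):
    """Extrapolate a dipole-like GFF to Q^2 = 0 via the linearized fit."""
    b, a = np.polyfit(Q2, F ** -0.5, 1)
    return 1.0 / a ** 2

x_q = forward_value(Q2, A20)                                  # momentum fraction
J_q = 0.5 * (forward_value(Q2, A20) + forward_value(Q2, B20)) # angular momentum
print(f"<x>_q = {x_q:.3f}, J_q = {J_q:.3f}")
```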
\begin{figure}[ht!]
\begin{minipage}{.495\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/jquarkUD-summary.pdf}\\
\end{minipage}
\begin{minipage}{.495\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/spinquarkUD-summary.pdf}\\
\end{minipage}
\caption{\label{fig:lightq-angmom-spin}
Angular momentum $J_q$ (left) and spin $\frac12\Sigma_q$
(right)~\cite{Alexandrou:2013joa,Sternbeck:2012rw,Bratt:2010jn,Syritsyn:2011vk}
of light quarks in the proton (connected contributions only)
computed with dynamical fermions.
}
\end{figure}
\begin{figure}[ht!]
\begin{minipage}{.495\textwidth}
\centering
\includegraphics[angle=-90,width=\textwidth]{figs/chiQCD-q_T1T2_1555_DI.pdf}\\
\end{minipage}
\begin{minipage}{.495\textwidth}
\centering
\includegraphics[angle=-90,width=\textwidth]{figs/chiQCD-g_T1T2_1555_v2.pdf}\\
\end{minipage}
\caption{\label{fig:discq-glue-angmom}
Generalized form factors $T_1\equiv A_{20}$, $T_2\equiv B_{20}$ and
angular momentum $2 J_q = T_1^q(0) + T_2^q(0)\equiv A_{20}^q(0) + B_{20}^q(0)$
of quarks (disconnected contractions) (left) and
gluons (right) computed with quenched Wilson fermions~\cite{Deka:2013zha}.
}
\end{figure}
Multiple groups have computed connected contributions to the light quark
angular momentum $J^q$, see Fig.~\ref{fig:lightq-angmom-spin}(left) for the summary.
These quantities apparently have only mild dependence on the pion mass, and there
is good agreement across calculations done with a variety of different actions and lattice
volumes.
Qualitatively, the angular momentum is carried almost entirely by the $u$-quark,
while the $d$-quark angular momentum is dramatically smaller\footnote{
The quark angular momentum is a scale-dependent quantity and the qualitative statements must
be made with respect to a certain renormalization scheme and scale.
In addition, the isoscalar light-quark (angular) momentum mixes with that of the gluons.
All results discussed in this review are converted to $\overline{MS}(2\text{ GeV})$,
and mixing with gluons is ignored.}.
The smallness of $J^d$, however, is not a trivial fact since the $d$-quark spin and orbital
angular momentum (OAM) appear to cancel each other, at least their connected parts (see
Fig.~\ref{fig:lightq-angmom-spin}, left).
The full calculation of all the contributions to the proton spin~\cite{Deka:2013zha}
has been performed only with quenched fermions and relatively heavy pions
$m_\pi\ge478\text{ MeV}$ so far.
This simplification is justified in order to have a complete picture of
separate contributions to the proton spin, the most challenging of which are
the quark-disconnected contributions to $J^q$ and the gluon angular momenta $J^g$,
both shown in Fig.~\ref{fig:discq-glue-angmom}.
The disconnected light-quark angular momentum $J^{u+d}_\text{disc}$ is small
(approximately 7\% of the proton spin), while the glue angular momentum was found
to comprise $\approx25\%$ of the proton spin.
\begin{figure}[ht!]
\begin{minipage}{.495\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/deltaUD-summary.pdf}\\
\end{minipage}
\begin{minipage}{.495\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/deltaS-summary.pdf}\\
\end{minipage}
\caption{\label{fig:lightq-disc}
Light quark spin (disconnected contractions)
$\Sigma^{u,d}$~\cite{Dong:1995rx,Gusken:1999xy,QCDSF:2011aa,Alexandrou:2012py,Abdel-Rehim:2013wlz} (left)
and strange quark spin
$\Sigma^{s}$~\cite{Dong:1995rx,Gusken:1999xy,Babich:2010at,QCDSF:2011aa,QCDSF:2011aa,Engelhardt:2012gd,Alexandrou:2014eva} (right).
}
\end{figure}
The valence $u,d$-quark spin $\frac12\Sigma_{u,d}$ can be computed on a lattice using
the quark spin operator $\bar{q}\gamma_\mu\gamma_5 q$.
Note that the spins of individual quarks depend on the hard-to-calculate
disconnected contractions; systematic problems in computing $g_A = \Sigma_u - \Sigma_d$
discussed in Sec.~\ref{sec:benchmark} may also contribute to uncertainties of the
individual quark spins.
The summary of quark spin lattice data (connected part) is presented in
Fig.~\ref{fig:lightq-angmom-spin}(right), which exhibits little dependence on the pion mass
and also decent agreement between different lattice methodologies and phenomenology.
Disconnected contributions to the light quark spin were computed for the first time in
Ref.~\cite{Dong:1995rx}, and were found to be quite large.
A series of new calculations performed in recent years found smaller values for
$\Sigma_{u,d}^\text{disc}$, as shown in Fig.~\ref{fig:lightq-disc}(left); the discrepancy
is likely due to the quenched action used in Ref.~\cite{Dong:1995rx}.
For the strange quark, the newer, unquenched calculations also result in much smaller
values for the strange quark spin compared with the quenched one~\cite{Dong:1995rx}, see
Fig.~\ref{fig:lightq-disc}(right).
Whether disconnected contributions are large or small may change our view on the role
of the quark orbital angular momentum (OAM) in the proton\footnote{
The quark OAM in this context has only a naive meaning as the difference
$L_q = J_q - \frac12\Sigma_q$;
however, the proximity of both $L_{u+d}$ and the nucleon anomalous magnetic moment
$\kappa^{u+d}$ to zero may be a hint that this definition is an important phenomenological
quantity.}
dramatically: if only the connected contributions are taken into account (or the disconnected
contributions are small), the $u+d$ quark OAM is very
small~\cite{Bratt:2010jn,Sternbeck:2012rw,Alexandrou:2013joa}; however, if the disconnected
contributions are large, then the quark OAM may be responsible for almost half of the proton
spin~\cite{Deka:2013zha}.
\section{Conclusions}
At the present moment, many hadron structure calculations are already performed
at the physical point.
This fascinating achievement immediately makes lattice QCD much more robust, since chiral
extrapolations are no longer needed, which eliminates the largest source of systematic
uncertainty.
However, the first results strongly indicate that other systematic problems specific
to hadron structure calculations are more severe than in studies with heavier pions.
For example, the isovector Dirac radius of the nucleon is affected by substantial contributions
from excited states, and so is the isovector quark momentum fraction.
This fact makes careful assessment of excited states and other systematic effects
the top priority, especially for the ``benchmark quantities'' that are used
to validate lattice QCD methodology.
In particular, the nucleon radii can already be computed with statistical uncertainty
that is comparable to the currently observed discrepancy in the determination of the proton
electric radius; the systematic uncertainties, however, may preclude meaningful comparison
to the experiment.
Other systematic uncertainties such as finite volume and discretization errors are,
in general, small compared with the excited state contributions.
This may change in the near future with the accumulation of statistics in the on-going
calculations and the addition of lattice calculations with larger volumes and smaller lattice spacings.
Calculations with larger volumes and smaller lattice spacings are also extremely important for
studies of hadron form factors, where wider kinematic regions are relevant for
phenomenology and comparisons to the experiment.
Larger lattice spatial volumes help study form factors close to the forward limit ($Q^2=0$),
determine ``radii'' and extract couplings such as $g_P$.
Smaller lattice spacings are important for the proton form factors $G_{Ep}$ and $G_{Mp}$
in the large momentum region $Q^2\gtrsim1\text{ GeV}^2$, where new experimental data
disagree with previous studies and reshape our understanding of the proton structure.
Complete calculation of contributions to the proton spin has only been performed with quenched
lattices.
In that study, the $u,d,s$ quark spins from disconnected contractions are large,
leading to the conclusion that a substantial fraction of the proton spin comes from the quark
orbital angular momentum.
Some initial calculations with fully dynamical quarks indicate that these disconnected
contributions are substantially smaller, leading to smaller quark OAM.
However, a complete calculation of all the components is required in order to
make a definite conclusion.
\bibliographystyle{JHEP}
\section{Introduction}
Mid-infrared (IR) array detectors have been used in space for focal-plane instruments aboard IR astronomical satellites, such as $\it ISO$/ISOCAM \citep{Cesarsky1996}, $\it {Spitzer}$/IRAC \citep{Fazio2004}, MIPS \citep{Rieke2004}, IRS \citep{Houck2004}, $\it {AKARI}$/IRC \citep{Onaka2007} and $\it {WISE}$ \citep{Mainzer2008}. They are also used for future IR space telescopes such as $\it {JWST}$/MIRI \citep{Rieke2015} and $\it {SPICA}$/SMI \citep{Kaneda2018}.
The first mid-IR array detectors, developed more than three decades ago, had small formats (e.g., 32 $\times$ 32 Si:Bi by \citealt{Arens1983} and 16 $\times$ 16 Si:Bi by \citealt{Lamb1984}). Now large-format arrays (e.g., 1k ${\times}$ 1k Si:As IBC for $\it {WISE}$) are available, which enable us to carry out observations with high efficiency.
The mid-IR array detectors of various instruments have been evaluated for the uniformity of the photometric response integrated over the wavelengths of their IR filter bands \citep[e.g.,][]{Ressler2008}.
\citet{Ressler2008} reported that the uniformity of the photometric response in the array varies from pixel to pixel by $3\%$ for the Si:As array detector, and found a fringe pattern of the photometric response in the array, which was likely to be caused by swirl dislocations in the crystalline structure of the silicon boule.
On the other hand, variations of the spectral response curves from pixel to pixel have not been investigated in detail; understanding them is important for spectroscopic observations using large-format arrays.
In this paper, we characterize the spectral responses of all the pixels in IR array detectors.
\section{Method\label{method}}
\subsection{Mid-IR array detector}
For the purpose of characterizing the pixel-based spectral responses, we evaluate a Si:As impurity band conduction (IBC) array, a flight back-up detector manufactured by Raytheon Vision Systems, Santa Barbara, USA \citep{Estrada1998}, for the mid-IR channels of InfraRed Camera \citep[IRC;][]{Onaka2007} aboard $\it {AKARI}$ \citep{Murakami2007}.
The Si:As array has a format of 256 $\times$ 256, the characteristics of which are summarized in table~\ref{tab:det}.
The pixel size is ${30\times30\,{\rm {\mu}m}}^2$, and the array size is ${7.68\times7.68\,{\rm mm}}^2$.
An anti-reflection (AR) coating is applied to the detector surface to increase the quantum efficiency.
The frame readout rate is designed to be low ($11\;{\rm Hz} $) because of the need to reduce the heat dissipation.
The array has a hybrid structure consisting of an IR detector semiconductor and a silicon-based cryo-CMOS readout integrated circuit (ROIC), CRC-744, which consists of source follower per detector (SFD) input circuits. We integrate charge carriers with a detector capacitance and perform non-destructive readouts. When the reset switch is turned on, an effective voltage bias, $\rm {\it V}_{bias}$, of 0.6 V is applied to the detector; it is the difference between the voltage levels of the unit cell SFD gate bias, $\rm {\it V}_{rstuc}$, which is the reset level, and the detector common bias connected to the detector backside, $\rm {\it V}_{det}$.
\begin{table}
\caption{Characteristics of the Si:As IBC/CRC-744.}
\label{tab:det}
\begin{indented}
\item[]\begin{tabular}{@{}ll}
\br
Parameter & Si:As IBC/CRC-744\\
\mr
Wavelength range & $5-27\;{\rm {\mu}m}$ \\
Format & $256 \times 256$ \\
Pixel size & $30\;{\rm {\mu}m}\times30\;{\rm {\mu}m}$ \\
Well capacity$^{\rm a}$ & $>1\times10^5\;{\rm e^-}$ \\
Operating temperature & $<\,10\;{\rm K}$ \\
Number of outputs & 4 \\
Quantum Efficiency (QE) & $>50\;\%$ \\
Frame rate & $11\;{\rm Hz} $ \\
\br
\end{tabular}
\item[] $^{\rm a}$ At $\sim$ 0.6 V for the voltage bias applied to the detector.
\end{indented}
\end{table}
\subsection{Experimental configuration}
We evaluate the pixel-based spectral responses of the Si:As array using a Fourier transform infrared spectrometer (FT-IR), Bruker VERTEX 70v. An FT-IR is efficient for evaluating array detectors, since the spectra of the incident light from the FT-IR to all the pixels in the array can be measured simultaneously, with signal-to-noise ratios ($\it S/N$) higher than those obtained with dispersive spectrometers such as monochromators.
The FT-IR is composed of an IR continuum source and a $\rm RockSolid^{TM}$ interferometer, consisting of dual cube corner mirrors which are least affected by mirror tilts.
A globar lamp and a Mylar filter of $6\;{\rm {\mu}m}$ in thickness are used for the light source and the beam splitter, respectively, which are selected considering the wavelength coverage of the Si:As array.
The Si:As array has to be operated at a cryogenic temperature (see table~\ref{tab:det}), and thus we cooled it using a LHe cryostat.
Figure~\ref{fig:sias_config} shows our set-up for the measurements.
The light from the FT-IR at room temperature is guided into the cryostat at 4~K through white polyethylene windows, the transmission curve of which is shown in figure~\ref{fig:transmit}.
Accordingly, the measurements are necessarily performed under a relatively high background (BG) environment, where $300\;{\rm K}$ BG photon noise dominates.
In addition, the Si:As array saturates immediately due to the $\rm 300\;K$ BG radiation, since IR detectors designed for low-BG space applications have well capacities considerably smaller than those of ground-based ones (see table~\ref{tab:det}).
Hence we cannot increase the total amount of the incident light to improve the $\it S/N$; instead, the incident light has to be optimized with a neutral-density filter (NDF; figure~\ref{fig:transmit}) on the basis of the well capacity.
\begin{figure}
\begin{center}
\includegraphics[width=13cm,clip]{fig/config_v4.jpg}
\caption{$\it {Top\;panel}$: schematic view of the configuration of the measurement for the Si:As array. The light path shown with the dashed line is used for the measurement of the source spectrum. $\it {Bottom\;left\;panel}$: a photo of the setup for the measurement. The light from the FT-IR is guided into the cryostat placed in front of the FT-IR. $\it {Bottom\;right\;panel}$: the setup for the measurement inside the cryostat. The Si:As array is installed within the 4~K shield.}
\label{fig:sias_config}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=10cm,clip]{fig/fig_filter_v2.jpg}
\caption{Transmission curves of the windows and the filters used in the present measurement. Red, black and blue lines show the transmission curves of the long wavelength-pass filter, the white polyethylene windows and the neutral-density filter, respectively. The transmission curve of the neutral-density filter is scaled upward by a factor 5.}
\label{fig:transmit}
\end{center}
\end{figure}
As shown in figure~\ref{fig:sias_config}, we adopt a configuration like a pinhole camera, the aperture size of which is set to be $2\;{\rm mm}$ in diameter.
The BG light irradiates the detector only within the same solid angle as the light from the FT-IR, and therefore the configuration enables the best $\it S/N$ measurement at the same F-number (F3.84) as that of the light from the FT-IR.
Moreover, the Mylar beam splitter transmits IR light only in a broad wavelength range of 15--300$\;{\rm {\mu}m}$, and thus we place a long wavelength-pass filter (LPF; figure~\ref{fig:transmit}), whose cutoff wavelength is approximately 15$\;{\rm {\mu}m}$, in the cryostat to suppress only the BG light.
For the purpose of reducing the stray light, the inner wall of the $4\;{\rm K}$ shield is coated with an IR low-reflectance paint (Nextel Black Velvet manufactured by Mankiewicz), and also labyrinth structures are adopted at the junctions of the components constituting the $4\;{\rm K}$ shield.
Separately from the response of the IR array, we also measure the spectrum of the source of the FT-IR with the pyroelectric detector
located inside the FT-IR, whose spectral response is known to be
flat. As shown in figure 1, before measuring the IR array, we
inserted a mirror to the beam to redirect it to the pyroelectric
detector. We obtain the reference spectrum, multiplying the source
spectrum and the transmission curves of the windows and the filters.
We confirm that the level of the repeatability of the reference
spectrum obtained by the FT-IR measurements is smaller than $0.05\%$,
at the wavelength of 20~$\mu$m, which is negligible compared to the
measurement error of $>0.2\%$ (see below). We divide the spectra of
the Si:As array by the reference spectrum to evaluate the spectral
response, $R(\lambda)$.
Figure~\ref{fig:intensity_map} shows the intensity map of the incident light falling onto the Si:As array under the above configuration, which shows that the incident light illuminates all the pixels almost uniformly.
It is noted, however, that the intensity of the incident light from the FT-IR is slightly weakened near the corners of the array, presumably due to imperfections of the optics of the measurement system including the FT-IR, where the spectrum of the incident light is likely to be affected by the optical diffraction effect.
Therefore we repeated the measurement of the spectral responses under 6 configurations with different illuminated positions and combined them by an intensity-weighted average in order to eliminate the contributions from the likely problematic regions. Furthermore, by spatial high-pass filtering, we keep only the components of the spectral responses with spatial frequencies higher than the low spatial frequency typical of the intensity variation shown in figure~\ref{fig:intensity_map}, which corresponds to a spatial scale of 11 pixels.
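The spatial high-pass filtering step can be sketched as follows: subtract a smoothed copy of each map to suppress structure on scales larger than ${\sim}11$ pixels. This is a minimal version; the Gaussian kernel is our assumption, since the filter shape is not specified above.

```python
import numpy as np

def highpass(image, scale_pix):
    """Suppress structure on scales larger than ~scale_pix by subtracting a
    Gaussian-smoothed (FWHM = scale_pix) copy; reflect padding at the edges."""
    sigma = scale_pix / 2.355                     # FWHM -> sigma
    r = int(4 * sigma)
    x = np.arange(-r, r + 1)
    kern = np.exp(-0.5 * (x / sigma) ** 2)
    kern /= kern.sum()
    pad = np.pad(image, r, mode="reflect")
    # separable convolution: smooth along rows, then along columns
    sm = np.apply_along_axis(lambda v: np.convolve(v, kern, "same"), 1, pad)
    sm = np.apply_along_axis(lambda v: np.convolve(v, kern, "same"), 0, sm)
    return image - sm[r:-r, r:-r]

# Smooth illumination gradient (low frequency) + pixel-scale structure (high)
yy, xx = np.mgrid[0:64, 0:64]
low = 0.05 * xx / 64.0
rng = np.random.default_rng(1)
high = 0.01 * rng.standard_normal((64, 64))

filtered = highpass(low + high, scale_pix=11)
print(f"std before: {(low + high).std():.4f}, after: {filtered.std():.4f}")
```

The filtered map retains the pixel-scale structure while the smooth gradient is largely removed, mimicking the treatment of the intensity variation in figure~\ref{fig:intensity_map}.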
\begin{figure}
\begin{center}
\includegraphics[width=10cm,clip]{fig/baseline_v3.jpg}
\caption{Intensity map of the incident light falling onto the Si:As array. The color bar is given in units of the number of electrons.}
\label{fig:intensity_map}
\end{center}
\end{figure}
\subsection{Detector operations and data processing\label{sec:data_process}}
We read out the integration signals of all the pixels of the Si:As array repeatedly, while moving the scan mirror at a constant speed, ${\it v}_{\rm m}$.
It should be noted, however, that the sampling rate of the Si:As array, $11\,{\rm Hz}$, is significantly lower than the Nyquist frequency of the interferograms, $137\,{\rm Hz}$, estimated from both ${\it v}_{\rm m}{\sim}0.1\,{\rm cm/s}$ and the shorter cutoff wavelength, 15$\,\rm{\mu}m$; hence under-sampled interferograms would be obtained, causing spectral aliasing.
Thus we adopt the 1/16 frame readout, which was used for the scan-mode operation of the $\it {AKARI}$/IRC all-sky survey \citep{Ishihara2006}, instead of the standard frame readout, and perform the measurement 16 times to cover the whole frame.
The sampling rate of the 1/16 frame readout is $148~{\rm Hz}$, and thus we can obtain Nyquist-sampled interferograms.
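The sampling-rate arithmetic above can be reproduced as follows, under the assumption that ${\it v}_{\rm m}$ denotes the rate of change of the optical path difference; with $v_{\rm m}=0.1\,{\rm cm/s}$ this gives ${\approx}133\,{\rm Hz}$, close to the quoted $137\,{\rm Hz}$ (the small difference reflects the approximate scan speed used here):

```python
# Sampling arithmetic for the rapid-scan FT-IR measurement, assuming v_m
# denotes the rate of change of the optical path difference (OPD).
v_m = 0.1            # cm/s, OPD rate (~0.1 cm/s in the text)
lam_min = 15e-4      # cm, shorter cutoff wavelength (15 um)

f_max = v_m / lam_min        # highest fringe frequency in the interferogram
f_required = 2.0 * f_max     # Nyquist criterion: minimum sampling rate

print(f"f_max = {f_max:.1f} Hz, required sampling rate = {f_required:.1f} Hz")
# full-frame readout: 11 Hz  -> under-sampled (aliasing)
# 1/16 frame readout: 148 Hz -> Nyquist-sampled
```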
The interferograms are expected to be somewhat smoothed from the real interferograms, since the optical path difference (OPD) of the interferometer changes while charge carriers are integrated by the Si:As array.
Below we neglect the non-linearity of signal integration; we have confirmed that the non-linearity level is as low as $<10^{-3}$ for the present data, which is estimated from the full-range integration curve data measured separately. The corrections of the interferograms are conducted by considering the differences between the obtained and real interferograms as a result of convolution with a rectangular function, $U(t)$, the width of which corresponds to the integration time.
Hence the relation between the observed interferogram, $I(t)$, and the real interferogram, $I_{\rm real}(t)$, is given by $I(t) = I_{\rm real}(t)~{\ast}~U(t)$.
According to the convolution theorem, the spectrum, $F(k)$, obtained by Fourier-transforming the observed interferogram is the multiplication of the real spectrum, $F_{\rm real}(k)$, and the sinc function, ${\rm sinc}(k)$, which are the Fourier transforms of $I_{\rm real}(t)$ and the rectangular function, $U(t)$, respectively, as follows:
\begin{equation}
F(k) = F_{\rm real}(k){\times}{\rm sinc}(k).
\label{eq:spec}
\end{equation}
Therefore we can obtain $F_{\rm real}(k)$ by dividing $F(k)$ by the sinc function.
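The correction of equation~(\ref{eq:spec}) can be demonstrated numerically: averaging a synthetic interferogram over the integration window attenuates each fringe frequency by the corresponding sinc factor, and dividing by that factor restores the original amplitude. This is a minimal illustration of the principle, not the actual pipeline; the frequency and sub-sampling choices are ours.

```python
import numpy as np

f = 60.0                  # Hz, one fringe frequency in the interferogram
T_int = 1.0 / 148.0       # s, charge-integration time of one sample

t = np.linspace(0.0, 0.5, 20001)

# "Observed" interferogram: each point is the average of the ideal cosine
# over the integration window [t - T_int/2, t + T_int/2]
n_sub = 101
offsets = (np.arange(n_sub) + 0.5) / n_sub * T_int - T_int / 2
observed = np.mean([np.cos(2 * np.pi * f * (t + d)) for d in offsets], axis=0)

# The attenuation is the sinc factor of eq. (1); dividing undoes it
atten = np.sinc(f * T_int)          # numpy sinc(x) = sin(pi x)/(pi x)
corrected = observed / atten

print(f"attenuated amplitude: {np.max(np.abs(observed)):.4f}")
print(f"after sinc division:  {np.max(np.abs(corrected)):.4f}")
```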
Finally, we estimate the measurement errors by calculating standard deviations for 60 measurements.
\section{Results}
Analyzing the interferograms measured with the method described in section~\ref{method} and combining all the results acquired with the different illuminated positions on the array, we obtain the spectral responses of all the pixels in the array.
The upper panel of figure~\ref{fig:spec_map} shows all the obtained spectral responses normalized with the light intensities incident on each pixel, which are calculated by integrating the spectrum of each pixel over the wavelength range of 17 to 33 $\mu$m. As can be seen in this figure, the cutoff wavelengths are located around 27 $\mu$m, which is consistent with the spectral range listed in table~\ref{tab:det}. We compare them with the spectral responses obtained in the previous measurements by \citet{Lum1993} and \citet{Estrada1998}, and find that our measurement agrees fairly well with the previous ones. Furthermore, we find that the spectral responses are notably similar from pixel to pixel, yet vary significantly compared with the pixel-averaged measurement error of about 0.003 at $20\,{\rm {\mu}m}$ (see the lower panel of figure~\ref{fig:spec_map}).
\begin{figure}
\begin{center}
\includegraphics[width=10cm,clip]{fig/fig_spec_verify_v3.jpg}
\caption{Spectral responses obtained for all the 256 $\times$ 256 pixels of the Si:As array. The upper and lower panels show the spectral responses normalized to the intensity of the light integrated over the wavelength range of 17 to 33 $\mu$m and those normalized to the median response at each wavelength, respectively. The red dotted and dashed lines in the upper panel represent the spectral responses normalized at 20~$\mu$m obtained by \citet{Lum1993} and \citet{Estrada1998}, respectively.}
\label{fig:spec_map}
\end{center}
\end{figure}
In order to characterize the pixel-by-pixel variations of the spectral responses, we calculate two spectral response ratios, $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ and $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$, for all the pixels in the array. The former represents the global gradients of the spectral responses, while the latter the cutoff wavelengths.
Figure~\ref{fig:HPF_map} shows the spectral response ratio maps of $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ and $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$. We find that both maps have periodic spatial variations along the column direction.
\begin{figure}
\begin{center}
\includegraphics[width=14cm,clip]{fig/HPF11_ratio.jpg}
\caption{Spectral response ratio maps of $R(23~{\rm {\mu}m})/R(20~{\rm {\mu}m})$ and $R(27~{\rm {\mu}m})/R(20~{\rm {\mu}m})$ after removal of low spatial frequency components. The gray-scale bars are given in the fractional differences between the spectral response ratios and their averages in the array. We mask the regions likely to be affected by particulates. }
\label{fig:HPF_map}
\end{center}
\end{figure}
\section{Discussions}
\subsection{Pixel-by-pixel variations of the spectral responses}
We calculate the standard deviations of the spectral response ratios from the maps in figure~\ref{fig:HPF_map}. The resultant pixel-by-pixel variations are $0.45\%$ and $0.79\%$ for $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ and $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$, respectively.
We compare the pixel-by-pixel variations of the spectral response ratios with the measurement error for each pixel. Figure~\ref{fig:err_histo} shows the histograms of the measurement errors for the $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ and $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ maps, the pixel averages of which are $0.35\%$ and $0.41\%$, respectively.
The pixel-by-pixel variations are shown as dashed lines in figure~\ref{fig:err_histo}, from which we find that only $3\%$ and $0\%$ of the pixels in the array show the measurement errors larger than the pixel-by-pixel variations of the $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ and $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ maps, respectively. Hence we find that the pixel-by-pixel variations of the spectral responses thus measured are intrinsic for almost all the pixels with sufficient $\it S/N$.
\begin{figure}
\begin{center}
\includegraphics[width=14cm,clip]{fig/err_histo.jpg}
\caption{Histograms of the measurement errors for the $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ and $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ maps. Dashed lines correspond to the pixel-by-pixel variations of the maps in figure~\ref{fig:HPF_map}, which are calculated by the standard deviations of the ratios.}
\label{fig:err_histo}
\end{center}
\end{figure}
As can be seen in figure~\ref{fig:HPF_map}, the spectral response ratio maps of $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ and $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ show periodic spatial variations along the column direction.
To visualize the row-by-row and column-by-column variations more clearly, we average the maps for every row and column.
The upper and lower panels of figure~\ref{fig:row} show the distributions of $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ and $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$, respectively, averaged per row.
It is found from the upper panel that $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ shows appreciable variations every 16 rows. As described in section~\ref{sec:data_process}, we used the 1/16 frame readout, in which 16 adjacent rows were read out simultaneously. Hence this operation is likely to cause the periodic spatial variation, presumably due to a slight change in the detector temperature immediately after the readout.
The lower panel in figure~\ref{fig:row} clearly indicates that the spectral responses in the odd rows show systematically higher $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$, and thus longer cutoff wavelengths, than those in the even rows. In general, the spectral responses of IBC-type detectors at longer wavelengths are known to increase with $\rm {\it V}_{bias}$, as mentioned in the $\it Spitzer$/IRS Instrument Handbook Version 5.0. This is explained physically by the Poole-Frenkel effect, in which the external electric field lowers the effective Coulombic potential barrier.
In light of this, we speculate that there is a systematic difference in $\rm {\it V}_{bias}$ between the even and odd rows. In order to verify the hypothesis, we obtain an image with the reset switch turned on, to measure the spatial variation of $\rm {\it V}_{bias}$ in the array directly. As shown in figure~\ref{fig:reset}, we find that the odd rows in the reset image show values systematically lower than the even rows, which indicates that the unit cell SFD gate bias, $\rm {\it V}_{rstuc}$, is systematically higher in the odd rows than in the even rows. Hence the variations every 2 rows of the spectral responses in the lower panel of figure~\ref{fig:row} are likely to be caused by the Poole-Frenkel effect, considering $\rm {\it V}_{bias}={\it V}_{rstuc}- {\it V}_{det}$ with $\rm {\it V}_{det}$ constant.
\begin{figure}
\begin{center}
\includegraphics[width=12cm,clip]{fig/fig_row.jpg}
\caption{Distributions of the residuals of $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ and $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ averaged per row relative to those averaged for all the pixels. Red and blue squares represent the data points of the odd and even rows, respectively. }
\label{fig:row}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=12cm,clip]{fig/reset_row.jpg}
\caption{Distributions of the number counts in the reset image averaged per row. Red and blue squares represent the data points of the odd and even rows, respectively. }
\label{fig:reset}
\end{center}
\end{figure}
Figure~\ref{fig:col} shows the distributions of $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ and $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ averaged per column, which do not exhibit periodic spectral variations such as those in figure~\ref{fig:row}, except for wave-like variations near both ends of the columns.
We find that the wave-like variations are spatially correlated between the distributions of $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ and $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$, which indicates that the global gradients of the spectral responses are variable near both ends of the columns.
The variations of the global gradients of the spectral responses can be explained by the spatial variations of the thickness of the detector layer and/or the AR coating.
Another possibility is the ``picture frame effect" seen in some arrays (e.g., Figure~2-3 in the SpeX observing manual\footnote{See \url{http://irtfweb.ifa.hawaii.edu/~spex/spex_manual/SpeX_manual_21oct14.pdf}}). The picture frame noise is likely to be affected by detector temperature fluctuations at a milli-Kelvin level \citep{Bechter2019}. Thus the wave-like variations of the spectral responses can be explained by a slight difference in the detector temperature, similarly to the variations every 16 rows in the upper panel of figure~\ref{fig:row}.
\begin{figure}
\begin{center}
\includegraphics[width=12cm,clip]{fig/fig_col.jpg}
\caption{Distributions of the residuals of $R(23\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ and $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ averaged per column relative to those averaged for all the pixels. Red and blue squares represent the data points of the odd and even columns, respectively. }
\label{fig:col}
\end{center}
\end{figure}
\subsection{Dependency on the intensity of the incident light}
We also measure the spectral responses under various BG levels by increasing the broad-band IR BG level. In this measurement, the 16 rows ($Y$=161--176) are read out to calculate the average spectral response ratio $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$.
In figure~\ref{fig:bias}, we show the average $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ as a function of the pixel-averaged intensity of the incident light falling onto the Si:As array, which indicates the clear trend that the spectral responses have shorter cutoff wavelengths under higher BG environments.
For the source-follower-per-detector circuit adopted in mid-IR array detectors such as the Si:As IBC array, $\rm {\it V}_{bias}$ decreases as the charge carriers are integrated. Therefore, during the charge integration, the spectral responses are expected to change through the Poole-Frenkel effect mentioned in the above discussion. Thus the obtained spectral responses, which are regarded as the average spectral responses over the charge integration time, are expected to have shorter cutoff wavelengths under higher BG environments. This result demonstrates that, in order to obtain highly accurate spectra, we have to calibrate the spectral responses taking into account the light intensity for each pixel as well as the pixel-by-pixel intrinsic variations.
\begin{figure}
\begin{center}
\includegraphics[width=12cm,clip]{fig/light_depend.jpg}
\caption{Averaged spectral response ratio $R(27\,{\rm {\mu}m})/R(20\,{\rm {\mu}m})$ as a function of the intensity of the incident light.}
\label{fig:bias}
\end{center}
\end{figure}
\section{Conclusion}
Understanding the spectral responses of large-format infrared array detectors is important for spectroscopic observations in space astronomy.
We have successfully characterized the pixel-by-pixel intrinsic variations of the spectral responses of an infrared array detector using cryogenic optics and an FT-IR, which enable measurements at high $\it S/N$. We have found that the pixel-by-pixel variations of the spectral responses have systematic spatial variations along both the row and column directions.
For the row-by-row variations, the spectral responses in the odd rows show longer cutoff wavelengths than those in the even rows, which is likely to be caused by the Poole-Frenkel effect due to the spatial variation of $\rm {\it V}_{bias}$.
For the column-by-column variations, global gradients of the spectral responses are variable near both ends of the columns, which can be explained by the spatial variations of the thickness of the detector layer.
Furthermore, we have found that the spectral responses of IBC-type array detectors significantly change with the light intensity of the BG environment, most probably due to changes in $\rm {\it V}_{bias}$ through the Poole-Frenkel effect during the charge integration.
For future space observations using large-format IR arrays, we suggest the importance of evaluating the pixel-based $\rm {\it V}_{bias}$ through the measurement of a reset image and taking it into account, as well as source intensities, for the calibration of the spectral properties.
This work was supported by JSPS KAKENHI Grant Number JP19K03927, and is based on the backup detector for $AKARI$, a JAXA project with the participation of ESA.
\bibliographystyle{aasjournal}
\section{Introduction}
Divergent series and integrals appear in several areas of mathematics and modern physics.
The zeta function regularization method assigns finite values to divergent sums, and this technique is commonly applied in number theory and theoretical physics to give precise meaning to ill-conditioned sums. Zeta function regularization grew out of one of the most enigmatic results in arithmetic: that the sum of an infinite divergent series can result in a finite number in a mathematically consistent way. In spite of being a peculiar and counter-intuitive abstraction in mathematics, divergent series also appear in physics, where experimental observation matches the prediction of the theoretical results~\cite{Dowling-1989,Schumayer-Hutcheiston-2011}.
Solving divergent integrals of certain functions is quintessential in the mathematics of quantum field theory (QFT), where introducing a suitable cut-off function is the cornerstone of handling the divergent integrals that appear in calculations. The definition of the zeta regularization of a series is extended in \cite{Moreta2-2013} to the calculation of certain divergent integrals, using the zeta regularization technique combined with the Euler-Maclaurin summation formula; there, the Euler-Maclaurin formula is extended into a recursive formula that provides a finite regularization of divergent integrals.
An explicit bounded solution for divergent integrals of power functions was presented in \cite{Aghili-Tafazoli-2018}. This work extends the results of \cite{Aghili-Tafazoli-2018} to divergent integrals of certain analytical functions. Using the notion of zeta function regularization, we will show that such divergent integrals can be represented by an infinite series, which is convergent under a certain condition. Subsequently, the formula is applied to some common divergent integrals to derive finite solutions in the sense of zeta regularization.
\section{Zeta Regularization of Divergent Integrals}
Suppose the following integral of function $f(x)$ diverges
\begin{equation} \label{eq:div_integral}
I=\int_{0}^{\infty} f(x) dx
\end{equation}
Also assume that the function has a convergent Maclaurin series with infinity radius of convergence, i.e.,
\begin{equation} \label{eq:Tylor}
f(x) = \sum_{k=0}^{\infty} \frac{x^k}{k!} f^{(k)}(0) \qquad \forall x \in \mathbb{R}
\end{equation}
Then we seek to assign a bounded meaningful solution to the divergent integral \eqref{eq:div_integral} using a Zeta regularization technique. Note that if a function $f(z)$ is equal to its Maclaurin series for all $z$ in the complex plane, then it is called entire. Examples of entire functions include polynomials, the exponential function, and the trigonometric functions sine and cosine, while the square root and fractional functions are not entire. It is worthwhile mentioning that the necessary and sufficient conditions for a function $f(z)$ of the complex variable $z=x+iy$ to be analytic are that: i) $\partial u/\partial x$, $\partial u/\partial y$, $\partial v/\partial x$, $\partial v/\partial y$ are continuous and ii) the aforementioned four partial derivatives of the real part $u(x, y)$ and imaginary
part $v(x, y)$ of the function satisfy the Cauchy-Riemann equations
\begin{equation}
\frac{\partial u}{\partial x}= \frac{\partial v}{\partial y} \quad \mbox{and} \quad \frac{\partial u}{\partial y}= -\frac{\partial v}{\partial x}
\end{equation}
Replacing the function with its Maclaurin series \eqref{eq:Tylor} in the integral \eqref{eq:div_integral} and then switching the integral and sum, we can rewrite the divergent integral as follows
\begin{equation} \label{eq:split}
I= \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!} \int_0^{\infty}x^k \; dx
\end{equation}
The above improper integral can be equivalently written as the following infinite series by splitting the range of integration into unit intervals between successive integers
\begin{equation} \label{eq:two_integrals}
\int_0^{\infty}x^k \; dx = \sum_{n=1}^{\infty} \int_{n-1}^n x^k dx = \sum_{n=1}^{\infty} \frac{1}{k+1}\Big( n^{k+1} - (n-1)^{k+1} \Big)
\end{equation}
Using \eqref{eq:two_integrals} in \eqref{eq:split} yields
\begin{equation} \label{eq:I_n+1}
I= \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{(k+1)!} \sum_{n=1}^{\infty} \Big( n^{k+1} - (n-1)^{k+1} \Big)
\end{equation}
According to the binomial expansion of $(n-1)^{k+1}$, we have
\begin{subequations}
\begin{equation} \label{eq:n-1_binomial}
(n-1)^{k+1} = \sum_{p=0}^{k+1} {{k+1}\choose{p}} (-1)^{k+1-p}n^p,
\end{equation}
where the binomial coefficient is defined as
\begin{equation} \label{eq:choose}
{{m}\choose{l}} = \frac{m !}{l !(m-l) !}
\end{equation}
\end{subequations}
Upon substitution of \eqref{eq:n-1_binomial} in \eqref{eq:I_n+1}, we arrive at
\begin{align} \label{eq:}
I &= \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{(k+1)!} \sum_{n=1}^{\infty} \sum_{p=0}^{k} {{k+1}\choose{p}} (-1)^{k-p}n^p \\ \label{eq:Delta_r2}
& = \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{(k+1)!} \sum_{p=0}^{k} {{k+1}\choose{p}} (-1)^{k-p}\sum_{n=1}^{\infty} n^p \\
& = \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{(k+1)!} \sum_{p=0}^{k} {{k+1}\choose{p}} (-1)^{k-p}\zeta(-p)
\end{align}
For positive integer $p\geq 0$, the Zeta function is related to Bernoulli numbers by
\begin{equation} \label{eq:zeta_B}
\zeta(-p) = (-1)^p \frac{B_{p+1}}{p+1}
\end{equation}
Thus, we have
\begin{equation}
I= \sum_{k=0}^{\infty} \frac{(-1)^{k} f^{(k)}(0)}{(k+1)!} \sum_{p=0}^{k} {{k+1}\choose{p}} \frac{B_{p+1}}{p+1}
\end{equation}
Moreover, from definition \eqref{eq:choose}, one can verify that successive binomial coefficients satisfy the following useful identity
\begin{equation} \label{eq:2Cs}
{{k+1}\choose{p}} = \frac{p+1}{k+2} {{k+2}\choose{p+1}}
\end{equation}
Thus we have
\begin{align} \notag
I &= \sum_{k=0}^{\infty} \frac{(-1)^{k} f^{(k)}(0)}{(k+2)!} \sum_{p=0}^{k} {{k+2}\choose{p+1}} B_{p+1}\\ \notag
&= \sum_{k=0}^{\infty} \frac{(-1)^{k} f^{(k)}(0)}{(k+2)!} \sum_{p=1}^{k+1} {{k+2}\choose{p}} B_{p}\\ \notag
&= \sum_{k=0}^{\infty} \frac{(-1)^{k} f^{(k)}(0)}{(k+2)!} \left( \sum_{p=0}^{k+1} {{k+2}\choose{p}} B_{p} - {{k+2}\choose{0}} B_{0} \right) \\ \label{eq:I_sum2}
&= \sum_{k=0}^{\infty} \frac{(-1)^{k} f^{(k)}(0)}{(k+2)!} \left( \sum_{p=0}^{k+1} {{k+2}\choose{p}} B_{p} - 1 \right)
\end{align}
On the other hand, the Bernoulli numbers satisfy the following property
\begin{equation} \label{eq:sumCB}
\sum_{p=0}^{m-1} {{m}\choose{p}} B_p = 0 \quad (m \geq 2), \quad \mbox{and in particular} \quad \sum_{p=0}^{k+1} {{k+2}\choose{p}} B_p = 0
\end{equation}
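As a quick numerical illustration (an illustrative sketch; the helper names are ours), the recurrence \eqref{eq:sumCB} determines the Bernoulli numbers, and relation \eqref{eq:zeta_B} then reproduces the known special values $\zeta(0)=-1/2$, $\zeta(-1)=-1/12$, $\zeta(-2)=0$, $\zeta(-3)=1/120$:

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    """Bernoulli numbers B_0..B_{n_max} (convention B_1 = -1/2), computed
    from the recurrence sum_{p=0}^{m-1} C(m, p) B_p = 0, valid for m >= 2."""
    B = [Fraction(1)]                                  # B_0 = 1
    for m in range(2, n_max + 2):
        # Solve C(m, m-1) * B_{m-1} = -sum_{p=0}^{m-2} C(m, p) B_p
        s = sum(Fraction(comb(m, p)) * B[p] for p in range(m - 1))
        B.append(-s / comb(m, m - 1))
    return B

B = bernoulli(6)
# zeta(-p) = (-1)^p B_{p+1} / (p+1), exactly as rationals
zeta_neg = [(-1) ** p * B[p + 1] / (p + 1) for p in range(4)]
print(zeta_neg)  # [-1/2, -1/12, 0, 1/120]
```

The exact rational arithmetic of `Fraction` avoids any floating-point ambiguity in checking the identity.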
Finally using identity \eqref{eq:sumCB} in \eqref{eq:I_sum2}, we arrive at the explicit expression of the divergent integral based on Zeta regularization
\begin{equation} \label{eq:divergent_integral}
\boxed{\int_0 ^{\infty} f(x) \; dx = \sum_{k=0}^{\infty} \frac{(-1)^{k+1} }{(k+2)!}f^{(k)}(0)}
\end{equation}
Invoking the ratio test, one can conclude that the above infinite series converges if
\begin{equation}
\lim_{k \rightarrow \infty} \left|\frac{\frac{(-1)^{k+2} f^{(k+1)}(0)}{(k+3)!}} {\frac{(-1)^{k+1} f^{(k)}(0)}{(k+2)!}}\right|<1
\end{equation}
In other words, the convergence condition for the above series is
\begin{equation}
\lim_{k \rightarrow \infty} \frac{|f^{(k+1)}(0)|}{(k+3)|f^{(k)}(0)| } <1
\end{equation}
Equation \eqref{eq:divergent_integral} can be used to derive the Zeta regularization of many divergent integrals. For instance, if the analytical function is selected to be the power function
\begin{equation}
f(x) = x^m
\end{equation}
Then,
\begin{equation} \label{eq:fr_polynomial}
f^{(k)}(0) = \left\{ \begin{array}{ll} m! \quad & \mbox{if} \;\; k=m\\ 0 & \mbox{otherwise} \end{array} \right.
\end{equation}
Applying \eqref{eq:divergent_integral} to \eqref{eq:fr_polynomial} yields
\begin{equation} \notag
\int_0^{\infty} x^m \; dx = 0+0+\cdots + \frac{(-1)^{m+1} }{(m+2)!} m! + 0 + 0+ \cdots,
\end{equation}
and thus
\begin{equation} \notag
\int_0^{\infty} x^m \; dx = \frac{(-1)^{m+1} }{(m+1)(m+2)},
\end{equation}
which is consistent with the previous result in \cite{Aghili-Tafazoli-2018}.
\section{Examples of Divergent integrals}
Consider the trigonometric function
\begin{equation}
f(x) =\sin(x)
\end{equation}
whose only nonvanishing derivatives at the origin are $f^{(2k+1)}(0)=(-1)^{k}$. Thus, according to \eqref{eq:divergent_integral} we have
\begin{equation}
\int_0^{\infty} \sin(x) \; dx = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k+3)!}
\end{equation}
By inspection one can show that the right-hand side of the above equation is equal to $1-\sin(1)$.
Examples of Zeta regularization of divergent integrals of several transcendental functions are given below:
\begin{subequations}
\begin{align}
&\int_0^{\infty} \sin(x) dx = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k +3)!} =1-\sin(1) \\
&\int_0^{\infty} \cos(x) dx = \sum_{k=0}^{\infty} \frac{(-1)^{k+1}}{(2k +2)!} =\cos(1)-1 \\
&\int_0^{\infty} \sinh(x) dx = \sum_{k=0}^{\infty} \frac{1}{(2k +3)!} =\sinh(1)-1 \\
&\int_0^{\infty} \cosh(x) dx = \sum_{k=0}^{\infty} \frac{(-1)^{2k+1}}{(2k +2)!} =1-\cosh(1) \\
&\int_0^{\infty} e^x dx = \sum_{k=0}^{\infty} \frac{(-1)^{k+1}}{(k +2)!} = -\frac{1}{e} \\
& \int_0^{\infty} \log(1+x) dx = \sum_{k=1}^{\infty} \frac{1}{k(k+1)(k+2)} = \frac{1}{4}\\
&\int_0^{\infty} e^{x^2} dx = -\sum_{k=0}^{\infty} \frac{1}{2(2k+1)(k+1)!} = \frac{e-1}{2} - \frac{\sqrt{\pi}}{2}\, \mbox{erfi}(1) \\
&\int_0^{\infty} \mbox{erf}(x)dx = \frac{2}{\sqrt{\pi}} \sum_{k=0}^{\infty} \frac{(-1)^k}{2(2k+1)(k+1)(2k+3)k!} = \frac{3}{4} \mbox{erf}(1) + \frac{1}{2e \sqrt{\pi}} - \frac{1}{\sqrt{\pi}}\\
&\int_0^{\infty} e^{-x^2}dx = \sum_{k=0}^{\infty}
\frac{(-1)^{k+1}}{2(2k+1)(k+1)!} = \frac{e-1}{2e}-\frac{\sqrt{\pi}}{2} \mbox{erf}(1)\\
&\int_0^{\infty} x \sin(x)dx = \sum_{k=0}^{\infty} \frac{(-1)^{k+1}}{2(k+2)(2k+3)(2k+1)!} = \sin(1)+2\cos(1) - 2
\end{align}
\end{subequations}
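As a numerical cross-check (illustrative only, not part of the derivation), the partial sums of the series above converge rapidly to the stated closed forms; for instance, for the $e^x$ and $\sinh(x)$ entries:

```python
import math

def partial_sum(term, n_terms=30):
    """Partial sum of sum_{k=0}^{n_terms-1} term(k)."""
    return sum(term(k) for k in range(n_terms))

# e^x entry:   sum (-1)^(k+1) / (k+2)!  ->  -1/e
s_exp = partial_sum(lambda k: (-1) ** (k + 1) / math.factorial(k + 2))
# sinh entry:  sum 1 / (2k+3)!          ->  sinh(1) - 1
s_sinh = partial_sum(lambda k: 1 / math.factorial(2 * k + 3))

print(s_exp, -1 / math.e)         # both approx -0.367879
print(s_sinh, math.sinh(1) - 1)   # both approx  0.175201
```

The factorial growth of the denominators makes thirty terms far more than sufficient for double precision.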
\section{Conclusions}
We extended the zeta function regularization technique in order to assign finite values to divergent integrals of a class of analytical functions satisfying a convergence condition. Using the binomial expansion and the Maclaurin series, it has been shown that such divergent integrals can be expressed as infinite series in terms of the Riemann zeta function. Subsequently, it has been shown that the
divergent integral can be written as the infinite series $\int_0^{\infty} f(t) dt = \sum_{k=0}^{\infty} \frac{(-1)^{k+1}}{(k+2)!} f^{(k)}(0)$, which becomes convergent under a certain condition. The calculation of several common divergent integrals using this formula has been presented.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
Component-based design is based on the separation between coordination and
computation. Systems are built from units processing sequential code
insulated from concurrent execution issues. The isolation of coordination
mechanisms allows a global treatment and analysis.
Fundamentally, each component-based design framework consists of a
behaviour type \ensuremath{\mathcal{B}}{} \cite{bliudze12-glue}, defining the underlying
semantic domain and the key properties such as the relevant equivalence
relations, and a set \glue{} of glue operators of the form $f: 2^\ensuremath{\mathcal{B}}
\rightarrow \ensuremath{\mathcal{B}}$. As argued in \cite{framework05}, an important property
for glue operators is the possibility to be {\em flattened}: given a
behaviour $g(f(B_1,\dots,B_k), B_{k+1},\dots,B_n)$ obtained by hierarchical
composition with two glue operators, there must be an equivalent\footnote{
The notion of equivalence, in this context, is given by
the behaviour type \ensuremath{\mathcal{B}}{} \cite{bliudze12-glue}.
} behaviour $h(B_1,\dots,B_n)$ obtained by applying a single glue
operator to the {\em same} atomic components. In other words, \glue{} must
be closed under composition. Flattening enables model transformations, e.g.\
for optimising code generation or component placement on multicore
platforms \cite{quilbeuf10-distr,jaber09-s2s}.
BIP is a component framework for constructing systems by superposing three
layers: Behaviour, Interaction and Priorities. In the classical BIP
semantics \cite{BliSif07-acp-emsoft}, behaviour is modelled by
\emph{Labelled Transition Systems} (LTS), i.e.\ triples $B=(Q,P,\goesto)$,
where $Q$ is a set of {\em states}, $P$ is a set of {\em ports}, and
$\goesto\, \subseteq Q\times 2^P\times Q$ is a set of {\em transitions},
each labelled by an interaction (a subset of ports). Glue operators are
defined using interaction and priority models.
For a set of behaviours $\setdef{B_i=(Q_i, P_i, \goesto)}{i\in [1,n]}$, an
\emph{interaction model} is a set of \emph{interactions} $\gamma \subseteq 2^P$,
where $P = \bigcup_{i=1}^n P_i$ (all $P_i$ are assumed to be pairwise disjoint).
The behaviour $\gamma(B_1,\dots,B_n)$ is defined by the behaviour $(Q, P,
\goesto_\gamma)$, with $Q = \prod_{i=1}^n Q_i$ and the minimal transition relation
$\goesto_\gamma$ satisfying the rule (we use set notation to group premises of the same type)
\begin{equation}
\label{eq:transsem}
\derrule[3]{a \in \gamma &
\Setdef{q_i \longgoesto[a \cap P_i] q'_i}
{a \cap P_i \not= \emptyset,\ i\in [1,n]} &
\Setdef{q_i = q'_i}
{a \cap P_i = \emptyset,\ i\in [1,n]}
}{
q_1\dots q_n \longgoesto[a]_\gamma q'_1\dots q'_n
}\,.
\end{equation}
For a behaviour $B = (Q,P,\goesto)$, a {\em priority model} is a strict
partial order $\prec$ on $2^P$. When $a \prec a'$, we say that the interaction $a'$ has \emph{higher priority} than $a$. We put $B_\prec \bydef{=}
(Q, P, \goesto_\prec)$, with the minimal transition relation $\goesto_\prec$
satisfying the rule
\begin{equation}
\label{eq:prisem}
\derrule[2]{
q \longgoesto[a] q' &
\Setdef{q \not\longgoesto[a']}{a \prec a'}
}{
q \longgoesto[a]_\prec q'
}\,.
\end{equation}
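For concreteness, both rules can be prototyped directly on finite transition systems. The following Python sketch (our own illustrative encoding, not part of any BIP tool) implements the interaction-model composition rule \eqref{eq:transsem} and the priority rule \eqref{eq:prisem}:

```python
from itertools import product

# A behaviour is a triple (states, ports, transitions); a transition is a
# (state, interaction, state') triple, an interaction a frozenset of ports.
# Ports of distinct behaviours are assumed pairwise disjoint, as in the text.

def compose(gamma, behaviours):
    """Composition by an interaction model gamma (rule eq:transsem)."""
    states = list(product(*(b[0] for b in behaviours)))
    ports = set().union(*(b[1] for b in behaviours))
    trans = set()
    for q in states:
        for a in gamma:
            q_next, enabled = list(q), True
            for i, (_, P_i, T_i) in enumerate(behaviours):
                a_i = a & P_i
                if not a_i:
                    continue                  # component i stays put: q_i' = q_i
                succ = [q2 for (q1, lbl, q2) in T_i if q1 == q[i] and lbl == a_i]
                if succ:
                    q_next[i] = succ[0]       # assume determinism for brevity
                else:
                    enabled = False           # some participant cannot move
                    break
            if enabled:
                trans.add((q, a, tuple(q_next)))
    return states, ports, trans

def prioritise(behaviour, order):
    """Priority filtering (rule eq:prisem); order contains pairs (a, a')
    with a < a', i.e. a' has higher priority than a."""
    states, ports, trans = behaviour
    kept = set()
    for (q, a, q2) in trans:
        blocked = any(lo == a and any(s == q and lbl == hi for (s, lbl, _) in trans)
                      for (lo, hi) in order)
        if not blocked:
            kept.add((q, a, q2))
    return states, ports, kept
```

For instance, composing two single-transition behaviours under the rendezvous interaction $\{p,q\}$ yields exactly one global transition, and a priority $p \prec r$ removes a $p$-transition from any state where $r$ is also enabled.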
Each $n$-ary glue operator in BIP is obtained as the composition of an interaction model
(an $n$-ary operator), composing several behaviours into a single one, and
a unary priority model.\footnote{
Notice that both interaction and priority models can be trivial: a trivial interaction model over the set of ports $P$ is the set of \emph{singleton interactions} $\{\{p\}\,|\, p \in P\}$; a trivial priority model is empty with none of the interactions having higher priority than any other.
} In general, when combined hierarchically such glue
operators cannot be flattened. Indeed, consider the following example.
\begin{figure}
\centering
\begin{subfigure}[b]{0.40\textwidth}
\centering
\input{nonflat-atoms.pspdftex}
\caption{Atomic components $B_1$, $B_2$ and $B_3$}
\label{fig:nonflat:atoms}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\input{nonflat-composed.pspdftex}
\caption{Composed system}
\label{fig:nonflat:composed}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\input{nonflat-flat.pspdftex}
\caption{Flat system}
\label{fig:nonflat:flat}
\end{subfigure}
\mycaption{BIP component that cannot be flattened (\ex{nonflat})}
\label{fig:nonflat}
\end{figure}
\begin{example}
\label{ex:nonflat}
Let $B_1$, $B_2$ and $B_3$ be the three atomic behaviours shown in
\fig{nonflat:atoms} and consider the composed behaviour
$g(f(B_1,B_2),B_3)$ (\fig{nonflat:composed}), with the glue operator $f$
defined by the interaction model $\{p,q,r,s\}$ (omitted in
\fig{nonflat:composed}) and priority model $\{p \prec r\}$; $g$ defined
by the interaction model $\{p,q,s,rt\}$ without any additional priority
model. One can prove that it is not possible to represent
this behaviour as a flat one (\fig{nonflat:flat}). Indeed, it is not
sufficient to replace the priority $p \prec r$ by $p \prec rt$: in the
global state $(1,3,6)$ of the composed behaviour in
\fig{nonflat:composed}, interaction $p$ is inhibited by the priority $p
\prec r$; in the same state of the composed behaviour in
\fig{nonflat:flat}, $p$ would not be inhibited by $p \prec rt$, since
interaction $rt$ is not enabled.
Furthermore, although this goes beyond the scope of this paper, one can
prove that there is no flat glue operator $h$ in the classical BIP semantics
given by \eq{transsem} and \eq{prisem}, such that $g(f(B_1,B_2),B_3)$ be equivalent to
$h(B_1,B_2,B_3,B_4)$ with any additional helper behaviour $B_4$.
\end{example}
The impossibility of flattening in the above example is due to the fact that the information used by
the priority model refers only to interactions authorised by the underlying
interaction model. All the information about interactions enabled in
atomic components is lost after the application of $f$. For instance, one can consider that, in
\ex{nonflat}, transitions $p$ and $r$ model respectively taking and
releasing a semaphore. Thus $p$ should be disabled whenever $r$ is
possible, independently of whether $r$ can actually be taken or not (e.g.\
when $r$ is blocked waiting for a synchronisation, as in
\fig{nonflat:composed}).
In \cite{BliSif11-constraints-sc}, a variation of the BIP operational
semantics was introduced. In this variation, a behaviour is defined as an
LTS with an additional {\em offer} predicate, i.e.\ a quadruple
$B=(Q,P,\goesto[],\offer[])$, such that $\offer \subseteq Q \times P$ and,
for any $q\in Q$ and $p \in P$, holds the implication $(\exists a \subseteq
P: p\in a \land q\goesto[a]) \implies q\offer[p]$. The converse
implication is not required. In particular, when a transition labelled $p$
in a sub-component of a composed behaviour is blocked waiting for a
synchronisation, $p$ is still considered as offered.\footnote{
As a side effect, the offer predicate can be used to distinguish between
atomic and composite behaviours: a behaviour is atomic iff $(\exists a
\subseteq P: p\in a \land q\goesto[a]) \Longleftrightarrow q\offer[p]$
\cite{BliSif11-constraints-sc}.
} In \cite{BliSif11-constraints-sc}, we have established the equivalence between, on one hand,
glue operators defined by sets of SOS rules having positive premises in terms of the transition relations $\goesto[]$ and offering predicates $\offer[]$ and negative premises in terms of the offering predicates only, and, on the other hand, Boolean formul\ae\ with the so-called {\em firing} and
{\em activation} variables. We have also studied the expressiveness of such glue operators and compared it with classical BIP.
Using the offer predicate instead of the transition relation in the
negative premises of \eq{prisem}, ensures that the resulting set of glue operators is closed under composition.
In this paper, we show how the algebras representing interaction models
can be naturally generalised to also define priorities, based on the use of activation and firing variables.
Several algebraic structures are used to define and manipulate interaction
models in BIP \cite{BliSif07-acp-emsoft, BliSif10-causal-fmsd}.
{\bf The Algebra of Interactions}, $\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$, is isomorphic to $2^{2^P}$.
It provides a simple algebraic representation of interaction models
simplifying the definition of the semantics of other algebras.
{\bf The Algebra of Connectors}, $\ensuremath{\mathcal{AC}\!}(P)$, defines the connectors in the
form used in the BIP language, well adapted for graphical representation
and for the specification of data transfers.
{\bf The Algebra of Causal Interaction Trees}, $\ensuremath{\mathcal{T\!}}(P)$, defines an
alternative semantic domain for connectors with the explicit causality
relation between ports. Coherence results for the
$\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$ and $\ensuremath{\mathcal{T\!}}(P)$ semantics of connectors have been provided in
\cite{BliSif10-causal-fmsd}.
{\bf Systems of Causal Rules}, $\ensuremath{\mathcal{CR\!}}(P)$, derived from causal interaction
trees define a Boolean representation of connectors, suitable for symbolic
manipulation and for specification of state safety properties.
In \cite{BliSif10-causal-fmsd}, the four transformations were provided
between $\ensuremath{\mathcal{AC}\!}(P)$ and $\ensuremath{\mathcal{T\!}}(P)$, and between $\ensuremath{\mathcal{T\!}}(P)$ and $\ensuremath{\mathcal{CR\!}}(P)$. In
particular, this makes it possible to synthesise connectors from $\ensuremath{\mathbb{B}}[P]$ Boolean
formul\ae.
In this paper, we study the extension of the above algebras to represent both interaction and priority models. Equivalence induced by the new operational semantics is weaker
than that induced by the interaction semantics. We extend accordingly the
axioms of $\ensuremath{\mathcal{T\!}}(\cdot)$ and provide corresponding normal forms for terms of
the considered algebras. Finally, we show that, in this context, the
connector synthesis algorithm in \cite{BliSif10-causal-fmsd} can be
simplified by considering only the causal rules with firing variables in
the effect.
The rest of the paper is structured as follows. We start, in \secn{related}, by a short discussion of some related work. In \secn{representations}, we briefly recall the syntax and semantics of
all the considered algebras. \secn{semantic} presents the new semantic
model for BIP based on the offer predicate. Main contributions of the paper, namely the
extensions of the algebras encompassing the activation and negative ports
are presented in \secn{extension}. We illustrate the extended algebras with
a connector synthesis example presented in \secn{example}. Finally, \secn{conclusion} concludes the
paper.
\section{Related work}
\label{sec:related}
The results in this paper build on our previous work cited above. However,
the following related work should also be mentioned. The approach we use
for the Boolean encoding of glue constraints is close to that used for
computing flows in Reo connectors in \cite{DeconstructingReo}, where it is
further extended to data flows.
Several methodologies for synthesis of component coordination have been
proposed in the literature, e.g.\ connector synthesis in
\cite{Arbab05ReoSynth, arbab08-synth, inverardi01}. Both approaches are
very different from ours. In \cite{Arbab05ReoSynth}, Reo circuits are
generated from constraint automata. This approach is limited, in the first
place, by the complexity of building the automaton specification of
interactions. An attempt to overcome this limitation is made in
\cite{arbab08-synth} by generating constraint automata from UML sequence
diagrams. In \cite{inverardi01}, connectors are synthesised in order to
ensure deadlock freedom of systems that follow a very specific
architectural style imposing both the interconnection topology and
communication primitives (notification and request messages).
Recently a comparative study \cite{bruni12-tiles-wire-BIP} of three
connector frameworks\mdash tile model \cite{montanari06}, wire calculus
\cite{sobocinski09-wire} and BIP\mdash has been performed. From the
operational semantics perspective, this comparison only accounts for
operators with positive premises. In particular, priority in BIP is not
considered. It would be interesting to see whether using ``local'' offer
predicate instead of ``global'' priorities of the classical BIP could help
generalising this work.
\section{Representations of the interaction model}
\label{sec:representations}
In this section, we briefly recall the syntax and semantics of the algebras
used to represent BIP interaction models. The semantics of the Algebra of
Interactions is given in terms of sets of interactions by a function
$\intsem{\cdot}: \ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P) \rightarrow 2^{2^P}$. Two terms $x,y \in \ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$
are {\em equivalent} $x \simeq y$ iff $\intsem{x} = \intsem{y}$. For any
other algebra, $\ensuremath{\mathcal{A}}(P)$, among those mentioned in the introduction, we
define its semantics by the function $\aisem{\cdot}: \ensuremath{\mathcal{A}}(P) \rightarrow
\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$. A function $\intsem{\cdot}: \ensuremath{\mathcal{A}}(P) \rightarrow 2^{2^P}$ is
obtained by composing $\aisem{\cdot}: \ensuremath{\mathcal{A}}(P) \rightarrow \ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$ and
$\intsem{\cdot}: \ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P) \rightarrow 2^{2^P}$. The axiomatisation of
$\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$ given in \cite{BliSif07-acp-emsoft} is sound and complete with
respect to $\simeq$. Hence, for other algebras, the equivalences induced
by $\intsem{\cdot}$ and $\aisem{\cdot}$ coincide.
Below, we assume that a set of ports $P$ is given, such that $0,1\not\in
P$.
\subsection{Algebra of Interactions}
\label{sec:ai}
\begin{syntax}
The syntax of the {\em Algebra of Interactions}, $\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$, is defined by
the following grammar
\begin{equation} \label{eq:apsynt}
\begin{array}{rc*{5}{l@{\ |\ }}l}
x & ::= & 0 & 1 & p \in P & x\cdot x & x + x & (x)\,,\\
\end{array}
\end{equation}
where `$+$' and `$\cdot$' are binary operators, respectively called {\em
union} and {\em synchronisation}. Synchronisation binds stronger than
union.
\end{syntax}
\begin{semantics}
The semantics of $\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$ is given by the function $\|\cdot\| : \ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)
\rightarrow 2^{2^P}$, defined by
\begin{equation} \label{eq:apsem}
\renewcommand{\arraystretch}{1.5}
\begin{array}{lcl}
\lefteqn{\|0\|\ =\ \emptyset,\quad \|1\|\ =\ \{\emptyset\},\quad
\|p\|\ =\ \Big\{\{p\}\Big\},}\\
\|x_1 + x_2\| &=& \|x_1\| \cup \|x_2\|,\\
\|x_1 \cdot x_2\| &=&
\Big\{a_1 \cup a_2\,\Big|\, a_1\in \|x_1\|, a_2\in \|x_2\| \Big\},\\
\|(x)\| & = & \|x\|,
\end{array}
\end{equation}
for $p\in P$, $x,x_1,x_2\in\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$. Terms of $\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$ represent sets of
interactions between the ports $P$.
\end{semantics}
Sound and complete axiomatisation of $\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$ with respect to the semantic
equivalence is provided in \cite{BliSif07-acp-emsoft}. In a nutshell,
$(\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P), +, \cdot, 0, 1)$ is a commutative semi-ring idempotent in both
$+$ and $\cdot$.
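The semantics \eqref{eq:apsem} is straightforward to execute. The following Python sketch (our own encoding, with interactions represented as frozensets of ports) mirrors the clauses of $\|\cdot\|$:

```python
# ||.|| : AI(P) -> 2^(2^P), interactions encoded as frozensets of ports.

def zero():  return set()             # ||0|| = {}: no interaction at all
def one():   return {frozenset()}     # ||1|| = { {} }: the empty interaction
def port(p): return {frozenset({p})}  # ||p|| = { {p} }

def union(x, y):                      # ||x1 + x2|| = ||x1|| u ||x2||
    return x | y

def sync(x, y):                       # ||x1 . x2|| = { a1 u a2 | a1 in x1, a2 in x2 }
    return {a1 | a2 for a1 in x for a2 in y}

# ||p . (q + r)|| gives the two interactions {p,q} and {p,r}
print(sync(port("p"), union(port("q"), port("r"))))
```

The semi-ring structure noted above can be observed directly on this encoding, e.g.\ `union` is idempotent and `sync` distributes over it.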
\subsection{Algebra of Connectors}
\label{sec:ac}
\begin{syntax}
The syntax of the {\em Algebra of Connectors}, $\ensuremath{\mathcal{AC}\!}(P)$, is defined by the
following grammar
\begin{equation} \label{eq:acsynt}
\renewcommand{\arraystretch}{1.5}
\begin{array}{lcll}
s & ::= & [0]\ |\ [1]\ |\ [p]\ |\ [x] \:\:\:\:\:\:\:\:\:(synchrons)\\
t & ::= & [0]'\ |\ [1]'\ |\ [p]'\ |\ [x]' \:\:\:\:\:\: (triggers)\\
x & ::= & s\ |\ t\ |\ x\cdot x\ |\ x + x\ |\ (x)\,,
\end{array}
\end{equation}
for $p\in P$, and where `$+$' is binary operator called {\em union},
`$\cdot$' is a binary operator called {\em fusion}, and brackets
`$[\cdot]$' and `$[\cdot]'$' are unary {\em typing} operators. Fusion
binds stronger than union.
Fusion is a generalisation of the synchronisation in $\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$. Typing is
used to form typed connectors: `$[\cdot]'$' defines {\em triggers} (can
initiate an interaction), and `$[\cdot]$' defines {\em synchrons} (need
synchronisation with other ports in order to interact).
\end{syntax}
\begin{semantics}
The semantics of $\ensuremath{\mathcal{AC}\!}(P)$ is given by the function $|\cdot| : \ensuremath{\mathcal{AC}\!}(P)
\rightarrow \ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$:
\begin{align}
\label{eq:acap}
|[p]| & = p\,,
& |x_1 + x_2| & = |x_1| + |x_2|\,,
& \Big|\prod_{i=1}^n [x_i]\,\Big| & = \prod_{i=1}^n |x_i|\,,
\\
\label{eq:acaptrig}
&& \hspace{-2cm}\Big|\prod_{i=1}^n [x_i]' \prod_{j=1}^m [y_j]\,\Big| =
\sum_{i=1}^n |x_i| & \lefteqn{\left(
\prod_{k\not=i}\Big(1 + |x_k|\Big)\ \prod_{j=1}^m \Big(1 + |y_j|\Big)\right)}\,,
\end{align}
for $x,x_1,\dots,x_n,y_1,\dots,y_m \in \ensuremath{\mathcal{AC}\!}(P)$ and $p\in P\cup \{0,1\}$.
\end{semantics}
A sound and complete axiomatisation of $\ensuremath{\mathcal{AC}\!}(P)$ with respect to the semantic
equivalence is provided in \cite{BliSif08-acp-tc}. We omit it here, since
we will not need it in the rest of this paper.
\fig{connectors} shows four basic examples of the graphical representation
of connectors. Triggers are denoted by triangles, whereas synchrons are
denoted by bullets. The interaction semantics of the four connectors is
given in the subfigure captions.
\begin{figure}
\centering
\begin{subfigure}[b]{0.18\textwidth}
\centering
\input{rdv.pspdftex}
\caption{\centering Rendezvous~$pqr$
$\intsem{pqr} = \{pqr\}$}
\label{fig:connectors:rdv}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\input{broadcast.pspdftex}
\caption{\centering Broadcast~$p'qr$
$\intsem{p'qr} = \{p, pq, pr, pqr\}$}
\label{fig:connectors:bdc}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\input{atomic-bdc.pspdftex}
\caption{\centering Atomic broadcast~$p'[qr]$
$\intsem{p'[qr]} = \{p, pqr\}$}
\label{fig:connectors:atomic}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\input{causal-chain.pspdftex}
\caption{\centering Causal chain~$p'[q'r]$
$\intsem{p'[q'r]} = \{p, pq, pqr\}$}
\label{fig:connectors:causal}
\end{subfigure}%
\caption{Basic connector examples}
\label{fig:connectors}
\end{figure}
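The four connectors of \fig{connectors} provide convenient test cases for a prototype of the semantic function $|\cdot|$. The sketch below (our encoding, not part of the BIP tool-set) evaluates connectors directly over the set-of-interactions representation; a connector is a list of pairs of a sub-term semantics and a trigger flag:

```python
# Semantics of AC(P) connectors over the AI(P) representation (sets of
# frozensets); fusion implements the two semantic equations: a pure
# synchronisation when there is no trigger, otherwise each trigger may
# initiate an interaction, picking up any subset of the other sub-terms.
def port(p):
    return {frozenset({p})}

def union(x, y):
    return x | y

def sync(x, y):
    return {a | b for a in x for b in y}

ONE = {frozenset()}                    # ||1||

def fusion(typed):
    """typed: list of (sub-term semantics, is_trigger) pairs."""
    triggers = [x for x, is_trig in typed if is_trig]
    synchrons = [x for x, is_trig in typed if not is_trig]
    if not triggers:                   # no trigger: plain synchronisation
        out = ONE
        for x in synchrons:
            out = sync(out, x)
        return out
    out = set()                        # otherwise, any trigger may initiate
    for i, x in enumerate(triggers):
        term = x
        for k, y in enumerate(triggers):
            if k != i:
                term = sync(term, union(ONE, y))   # (1 + |x_k|)
        for y in synchrons:
            term = sync(term, union(ONE, y))       # (1 + |y_j|)
        out |= term
    return out

p, q, r = port('p'), port('q'), port('r')
I = lambda *ps: frozenset(ps)
assert fusion([(p, 0), (q, 0), (r, 0)]) == {I('p', 'q', 'r')}    # pqr
assert fusion([(p, 1), (q, 0), (r, 0)]) == \
    {I('p'), I('p', 'q'), I('p', 'r'), I('p', 'q', 'r')}         # p'qr
assert fusion([(p, 1), (fusion([(q, 0), (r, 0)]), 0)]) == \
    {I('p'), I('p', 'q', 'r')}                                   # p'[qr]
assert fusion([(p, 1), (fusion([(q, 1), (r, 0)]), 0)]) == \
    {I('p'), I('p', 'q'), I('p', 'q', 'r')}                      # p'[q'r]
```

The four assertions reproduce the interaction sets given in the captions of \fig{connectors}.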
\subsection{Algebra of Causal Interaction Trees}
\label{sec:trees}
\begin{syntax}
The syntax of the \emph{Algebra of Causal Interaction Trees}, $\ensuremath{\mathcal{T\!}}(P)$, is
given by
\begin{equation} \label{eq:ctsyn}
t ::= a \,|\, a \rightarrow t \,|\, t\oplus t\,,
\end{equation}
where $a \in 2^P \cup \{0,1\}$ is an interaction, and `$\rightarrow$' and
`$\oplus$' are respectively the {\em causality} and the {\em parallel
composition} operators. Causality binds stronger than parallel
composition. Notice that a causal interaction tree can have several roots.
The causality operator is right- (but not left-) associative; thus, for interactions
$a_1,\dots,a_n$, we can abbreviate $a_1 \rightarrow (a_2 \rightarrow (\dots
\rightarrow a_n)\dots)$ to $a_1 \rightarrow a_2 \rightarrow \dots
\rightarrow a_n$. We call this construction a {\em causal chain}.
\end{syntax}
\begin{semantics}
The semantics of $\ensuremath{\mathcal{T\!}}(P)$ is given by the function $|\cdot|: \ensuremath{\mathcal{T\!}}(P)
\rightarrow \ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$
\begin{align}
\label{eq:ctsem}
|a| & = a\,,
& |a \rightarrow t| & = a\Big(1 + |t|\Big)\,,
& |t_1 \oplus t_2| & = |t_1| + |t_2| + |t_1|\,|t_2|\,,
\end{align}
where $a$ is an interaction and $t, t_1, t_2 \in \ensuremath{\mathcal{T\!}}(P)$.
\end{semantics}
A sound (although not complete) axiomatisation of $\ensuremath{\mathcal{T\!}}(P)$ is provided in
\cite{BliSif10-causal-fmsd}. Rather than reproducing it here, we indicate
the differences when presenting the extension in \secn{refinement}.
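The semantics of causal interaction trees can be prototyped on the same set-of-interactions representation. In the sketch below (encoding ours), a tree is represented as a forest, i.e.\ a list of pairs of an interaction and a sub-forest, which accommodates trees with several roots:

```python
# Semantics of causal interaction trees: a tree (in fact, a forest, since
# parallel composition allows several roots) is a list of
# (interaction, subforest) pairs.
def sync(x, y):
    return {a | b for a in x for b in y}

ONE = {frozenset()}

def node_sem(a, sub):                  # |a -> t| = a(1 + |t|)
    return sync({a}, ONE | forest_sem(sub))

def forest_sem(forest):                # |t1 (+) t2| = |t1| + |t2| + |t1||t2|
    out = set()
    for a, sub in forest:              # associative fold of the (+) equation
        branch = node_sem(a, sub)
        out = out | branch | sync(out, branch)
    return out

I = lambda *ps: frozenset(ps)
chain = [(I('p'), [(I('q'), [(I('r'), [])])])]       # p -> q -> r
assert forest_sem(chain) == {I('p'), I('p', 'q'), I('p', 'q', 'r')}
par = [(I('p'), [(I('q'), []), (I('r'), [])])]       # p -> (q (+) r)
assert forest_sem(par) == {I('p'), I('p', 'q'), I('p', 'r'), I('p', 'q', 'r')}
```

The causal chain $p \rightarrow q \rightarrow r$ yields the same interactions as the connector $p'[q'r]$ of \fig{connectors:causal}, as expected.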
\subsection{Systems of Causal Rules}
\label{sec:rules}
Below, for any set $X$ of propositional variables, we denote by $\ensuremath{\mathbb{B}}[X]$ the corresponding Boolean algebra generated by $X$. For presentation clarity, we will often omit the conjunction operator and write $a \lor bc$ instead of $a \lor (b \land c)$.
\begin{definition}
A {\em causal rule} is a $\ensuremath{\mathbb{B}}[P]$ formula $E \Rightarrow C$, where $E$
(the \emph{effect}) is either a constant, $\ensuremath{\mathtt{tt}}$, or a port variable $p
\in P$, and $C$ (the \emph{cause}) is either a constant, $\ensuremath{\mathtt{tt}}$ or
$\ensuremath{\mathtt{ff}}$, or a positive $\ensuremath{\mathbb{B}}[P\setminus\{p\}]$ formula in disjunctive normal form.
\end{definition}
\begin{note} \label{rem:absorption}
Notice that $a_1 \lor a_1\,a_2 = a_1$, and therefore causal rules can be
simplified by replacing $p \Rightarrow a_1 \lor a_1\,a_2$ with $p
\Rightarrow a_1$. We assume that all the causal rules are simplified by
this absorption rule.
\end{note}
\begin{definition}
A \emph{system of causal rules} is a set $R = \{p \Rightarrow x_p\}_{p\in
P^t}$, where $P^t \bydef{=} P \cup \{\ensuremath{\mathtt{tt}}\}$, having precisely one causal rule for each port variable $p \in P^t$. An interaction $a \in
2^P$ satisfies the system $R$ (denoted $a \models R$), iff the
characteristic valuation of $a$ on $P$ satisfies the formula
$\bigwedge_{p\in P^t} (p \Rightarrow x_p)$. We denote by
$|R|\ \bydef{=}\ \sum_{a \models R} a$ the union (in terms of the Algebra
of Interactions) of the interactions satisfying $R$. Thus we have
$|\cdot| : \ensuremath{\mathcal{CR\!}}(P) \rightarrow \ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$, where $\ensuremath{\mathcal{CR\!}}(P)$ is the set of all
systems of causal rules over the set of port variables $P$.
\end{definition}
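The satisfaction relation $a \models R$ and the semantics $|R|$ can be prototyped by brute-force enumeration of the subsets of $P$. In the sketch below (names and the DNF encoding are ours), a cause is a positive formula in disjunctive normal form, encoded as a set of monomials:

```python
# Satisfaction of a system of causal rules by brute-force enumeration.
# A cause is a positive DNF encoded as a set of monomials (frozensets of
# ports); tt is {{}} (the empty monomial) and ff is the empty DNF.
from itertools import combinations

TT, FF = {frozenset()}, set()

def holds(cause, a):               # DNF evaluation under the valuation a
    return any(m <= a for m in cause)

def sem(rules, ports):
    """|R|: the interactions a in 2^P satisfying every rule p => x_p."""
    out = set()
    subsets = (frozenset(c) for k in range(len(ports) + 1)
               for c in combinations(sorted(ports), k))
    for a in subsets:
        if holds(rules['tt'], a) and \
           all(holds(x, a) for p, x in rules.items() if p != 'tt' and p in a):
            out.add(a)
    return out

# Rules for the broadcast p'qr: tt => p, p => tt, q => p, r => p.
rules = {'tt': {frozenset('p')}, 'p': TT,
         'q': {frozenset('p')}, 'r': {frozenset('p')}}
I = lambda *ps: frozenset(ps)
assert sem(rules, {'p', 'q', 'r'}) == {I('p'), I('p', 'q'), I('p', 'r'),
                                       I('p', 'q', 'r')}
```

The example system corresponds to the broadcast $p'qr$ of \fig{connectors:bdc}: its satisfying interactions are exactly $\{p, pq, pr, pqr\}$.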
\subsection{Transformations between different representations}
\label{sec:transformations}
Transformations $\ensuremath{\mathcal{AC}\!}(P) \overset{\tau}{\underset{\sigma}{\rightleftarrows}}
\ensuremath{\mathcal{T\!}}(P)$ and $\ensuremath{\mathcal{T\!}}(P) \overset{R}{\rightleftarrows} \ensuremath{\mathcal{CR\!}}(P)$ were defined in
\cite{BliSif10-causal-fmsd} and shown to respect $\simeq$. Below, we will
only need the transformations $\sigma : \ensuremath{\mathcal{T\!}}(P) \rightarrow \ensuremath{\mathcal{AC}\!}(P)$ and $R:
\ensuremath{\mathcal{T\!}}(P) \rightarrow \ensuremath{\mathcal{CR\!}}(P)$. The former is defined recursively by putting
\begin{align}
\label{eq:treecon}
\sigma(a) & = [a]\,,
& \sigma(a \rightarrow t) & = [a]'\,[\sigma(t)]\,,
& \sigma(t_1 \oplus t_2) & = [\sigma(t_1)]'\,[\sigma(t_2)]'\,.
\end{align}
We define $R: \ensuremath{\mathcal{T\!}}(P) \rightarrow \ensuremath{\mathcal{CR\!}}(P)$ by putting
\begin{equation}
\label{eq:trees2rules}
R(t)\ =\ \{p \Rightarrow c_p(t)\}_{p\in P\cup\{\ensuremath{\mathtt{tt}}\}}\,,
\end{equation}
where the function $c_p : \ensuremath{\mathcal{T\!}}(P) \rightarrow \ensuremath{\mathbb{B}}[P]$ is defined recursively
as follows. For $a\in 2^P$ (with $p\not\in a$) and $t,t_1,t_2 \in \ensuremath{\mathcal{T\!}}(P)$,
we put
\begin{align*}
c_p(0) & = \ensuremath{\mathtt{ff}}\,,
& c_{\ensuremath{\mathtt{tt}}}(0) & = \ensuremath{\mathtt{ff}}\,,\\
c_p(p \rightarrow t) & = \ensuremath{\mathtt{tt}}\,,
& c_{\ensuremath{\mathtt{tt}}}(1 \rightarrow t) & = \ensuremath{\mathtt{tt}}\,,\\
c_p(pa \rightarrow t) & = a\,,
& c_{\ensuremath{\mathtt{tt}}}(a \rightarrow t) & = a\,,\\
c_p(a \rightarrow t) & = a \land c_p(t)\,,\\
c_p(t_1 \oplus t_2) & = c_p(t_1) \lor c_p(t_2)\,,
& c_{\ensuremath{\mathtt{tt}}}(t_1 \oplus t_2) & = c_{\ensuremath{\mathtt{tt}}}(t_1) \lor c_{\ensuremath{\mathtt{tt}}}(t_2)\,.
\end{align*}
Observe that this transformation associates to each port $p \in P$ a causal
rule $p \Rightarrow C$, where $C$ is the disjunction of all prefixes leading
from roots of $t$ to some node containing $p$, including the ports of this
node other than $p$.
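The recursive definition of $c_p$ can also be prototyped on the forest representation of trees. The sketch below (encoding ours; Boolean formul\ae\ as positive DNFs, with $\ensuremath{\mathtt{tt}}$ the empty monomial and $\ensuremath{\mathtt{ff}}$ the empty DNF) computes the causal rules of the causal chain $p'[q'r]$:

```python
# Recursive computation of c_p(t) over the forest encoding of causal
# interaction trees: a forest is a list of (interaction, subforest) pairs.
TT, FF = {frozenset()}, set()

def dnf_and(x, y):                     # conjunction of two DNFs
    return {m | n for m in x for n in y}

def c(p, forest):
    """c_p(t) for a port p, or c_tt(t) for p == 'tt'."""
    out = set(FF)                      # c_p(0) = ff
    for a, sub in forest:              # c_p(t1 (+) t2) = c_p(t1) \/ c_p(t2)
        if p == 'tt':                  # c_tt(a -> t) = a
            out |= {frozenset(a)}
        elif p in a:                   # c_p(pa -> t) = a
            out |= {frozenset(a - {p})}
        else:                          # c_p(a -> t) = a /\ c_p(t)
            out |= dnf_and({frozenset(a)}, c(p, sub))
    return out

I = lambda *ps: frozenset(ps)
# Causal chain p'[q'r], i.e. the tree p -> q -> r:
t = [(I('p'), [(I('q'), [(I('r'), [])])])]
assert c('tt', t) == {I('p')}          # tt => p
assert c('p', t) == TT                 # p  => tt
assert c('q', t) == {I('p')}          # q  => p
assert c('r', t) == {I('p', 'q')}     # r  => p q
```

As stated above, each cause $c_p(t)$ collects the prefixes of chains from a root to a node containing $p$, e.g.\ $r \Rightarrow p\,q$ for the chain $p \rightarrow q \rightarrow r$.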
\section{Modification of the semantic model}
\label{sec:semantic}
We now present the variation of the BIP operational semantics based on the
offer predicate \cite{BliSif11-constraints-sc}.
\begin{definition}
\label{defn:lts}
A \emph{labelled transition system} (LTS) is a triple $(Q,P,\goesto)$,
where $Q$ is a set of \emph{states}, $P$ is a set of \emph{ports}, and
$\goesto\, \subseteq Q\times 2^P \times Q$ is a set of
\emph{transitions}, each labelled by a non-empty set of ports. For $q,q'
\in Q$ and $a \in 2^P$, we write $q \goesto[a] q'$ iff $(q,a,q') \in\,
\goesto$. A label $a \in 2^P$ is \emph{active} in a state $q \in Q$
(denoted $q \goesto[a]$), iff there exists $q' \in Q$ such that $q
\goesto[a] q'$. We abbreviate $q \not\goesto[a] \bydef{=} \lnot (q
\goesto[a])$.
\end{definition}
Below, it is assumed that, for all $q \in Q$, $q \goesto[\emptyset]
q$. All results of the paper can be reformulated without this
assumption, but making it simplifies the presentation. We write $pq$
for the set of ports $\{ p, q\}$.
\begin{definition}
\label{defn:behaviour}
A \emph{behaviour} is a pair $B=(S,\offer)$ consisting of an LTS
$S=(Q,P,\goesto)$ and an \emph{offer} predicate $\offer$ on $Q \times P$
such that $q \offer[p]$ holds (a port $p \in P$ is \emph{offered} in a
state $q \in Q$) whenever there is a transition from $q$ containing $p$,
that is $(\exists a \in 2^P: p \in a \land q \goesto[a]) \Rightarrow q
\offer[p]$. We write $B = (Q,P,\goesto,\offer)$ for $B =
((Q,P,\goesto),\offer)$.
The offer predicate extends to sets of ports: for $a \in 2^P$, $q
\offer[a] \bydef{=} \bigwedge_{p \in a} q \offer[p]$. Notice that
$q\offer[\emptyset] \equiv \ensuremath{\mathtt{tt}}$.
\end{definition}
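As an illustration (names ours), the least offer predicate compatible with \defn{behaviour} can be computed directly from the transition relation; an actual behaviour may offer strictly more ports than this minimum:

```python
# Least offer predicate induced by an LTS: p is offered in q whenever some
# transition from q is labelled by a set containing p. The definition of
# behaviours only requires the offer predicate to contain this relation.
def min_offer(trans):
    """trans: set of triples (q, a, q') with a a frozenset of ports."""
    return {(q, p) for q, a, _ in trans for p in a}

T = {(0, frozenset({'p', 'q'}), 1), (1, frozenset({'p'}), 0)}
off = min_offer(T)
assert (0, 'p') in off and (0, 'q') in off
assert (1, 'q') not in off       # q labels no transition from state 1
```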
\begin{note}
In the following, we assume, for any $B_i = (Q_i, P_i, \goesto, \offer)$
with $i \in [1,n]$, that $\{P_i\}_{i=1}^n$ are pairwise disjoint (i.e.\ $i
\neq j$ implies $P_i \cap P_j = \emptyset$) and $P \bydef{=}
\bigcup_{i=1}^n P_i$.
To avoid excessive notation, here and in the rest of the paper, we drop
the indices on $\goesto$ and $\offer$, as they can always be
unambiguously deduced from the corresponding state variables.
\end{note}
Let $P$ be a set of ports. We denote $\fire{P} \bydef{=}
\setdef{\fire{p}}{p \in P}$ and $\inhib{P} \bydef{=}
\setdef{\inhib{p}}{p \in P}$. We call the elements of $\act{P}$,
$\fire{P}$ and $\inhib{P}$ respectively {\em activation}, {\em firing}
and {\em negative port typings}.
\begin{definition}
\label{defn:interaction}
An {\em interaction} is a subset $a \subseteq \act{P} \cup \fire{P} \cup
\inhib{P}$.
For a given interaction $a$, we define the following sets of ports:
\begin{itemize}
\item $\actsup{a} \bydef{=} a \cap P$, the {\em activation support} of
$a$,
\item $ \firesup{a} \bydef{=} \setdef{p \in P}{\fire{p} \in a}$, the
{\em firing support} of $a$,
\item $\negsup{a} \bydef{=} \setdef{p \in P}{\inhib{p} \in a}$, the
{\em negative support} of $a$.
\end{itemize}
\end{definition}
\begin{definition}
\label{defn:composition}
Let $B_i = (Q_i, P_i, \goesto, \offer)$, with $i \in [1,n]$ and $P =
\bigcup_{i=1}^n P_i$, be a set of component behaviours. Let $\gamma
\subseteq 2^{\act{P} \cup \fire{P} \cup \inhib{P}}$ be a set of
interactions. The composition of $\{B_i\}_{i=1}^n$ with $\gamma$ is
a behaviour $\gamma(B_1,\dots,B_n) \bydef{=} (Q, P, \goesto,
\offer)$ with
%
\begin{itemize}
\item the set of states $Q = \prod_{i=1}^n Q_i$, the Cartesian
    product of the sets of states $Q_i$,
\item the strongest (i.e.\ inductively defined) offer predicate $\offer$
satisfying the rules, for each $i \in [1,n]$,
\begin{equation}
\label{eq:rule:offer}
\derrule{q_i \offer p}{q_1\dots q_n \offer p}
\end{equation}
(recall that the sets of ports $P_i$ are pairwise disjoint),
\item the minimal transition relation $\goesto$ satisfying the
rule
\begin{equation}
\label{eq:rule:trans}
\renewcommand{\arraystretch}{2}
\derrule[4]{
a \in \gamma
& \Big\{q_i\longgoesto[\firesup{a}\cap P_i] q_i'\Big\}_{i=1}^n
& \Big\{q_i \offer (\actsup{a} \cap P_i)\Big\}_{i=1}^n
& \Setdef{q_i \noffer p}{p \in \negsup{a} \cap P_i}_{i=1}^n
}
{q_1\dots q_n \longgoesto[\firesup{a}] q_1'\dots q_n'}\,.
\end{equation}
\end{itemize}
\end{definition}
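The premises of the rule \eq{rule:trans} can be prototyped as a simple enabledness check. In the sketch below (our encoding, not the BIP execution engine), a component is a triple of a port set, a transition function and an offer predicate, and port typings are tagged pairs:

```python
# Enabledness of one extended interaction a under the composition rule:
# every component must provide the required firing transitions, offer all
# activation ports, and offer none of the negative ports. Port typings are
# encoded as pairs ('act'|'fire'|'neg', p).
def enabled(a, components, state):
    """components: list of (ports, trans, offer); trans(q, f) is the set of
    successors of q on the firing set f; offer(q, p) is the offer predicate."""
    succ = []
    for (ports, trans, offer), q in zip(components, state):
        fire = frozenset(p for typ, p in a if typ == 'fire' and p in ports)
        act = {p for typ, p in a if typ == 'act' and p in ports}
        neg = {p for typ, p in a if typ == 'neg' and p in ports}
        nxt = trans(q, fire)
        if not nxt:                               # no firing transition
            return None
        if not all(offer(q, p) for p in act):     # positive offer premises
            return None
        if any(offer(q, p) for p in neg):         # negative premises
            return None
        succ.append(nxt)
    return succ      # per-component successor sets; the label is fire(a)

# Two one-state toy components: B1 fires and offers p; B2 only offers r.
B1 = ({'p'}, lambda q, f: {q} if f <= {'p'} else set(), lambda q, p: p == 'p')
B2 = ({'r'}, lambda q, f: {q} if not f else set(), lambda q, p: p == 'r')
assert enabled({('fire', 'p')}, [B1, B2], [0, 0]) is not None
assert enabled({('fire', 'p'), ('neg', 'r')}, [B1, B2], [0, 0]) is None
```

The second assertion shows a negative premise in action: since $B_2$ offers $r$, the interaction $\fire{p}\,\inhib{r}$ is blocked.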
\begin{wrapfigure}{r}{0.24\textwidth}
\centering
\input{nonflat-new.pspdftex}
\mycaption{Flat composed system equivalent to the one shown in
\fig{nonflat:composed}}
\label{fig:nonflat:new}
\end{wrapfigure}
Returning to \ex{nonflat} from the introduction, a flat composition of
$B_1$, $B_2$ and $B_3$, equivalent to that of \fig{nonflat:composed} in the
semantics of \defn{composition}, is shown in \fig{nonflat:new} on the right.
This representation follows the classical BIP approach with the exception
of the priority, whereof the semantics is defined in terms of the offer
predicate. In terms of \defn{composition}, this is translated by taking
$\gamma = \{\fire{p}\,\inhib{r}, \fire{q}, \fire{s}, \fire{r}\,\fire{t}\}
\subseteq 2^{\act{P} \cup \fire{P} \cup \inhib{P}}$.
BIP composition operators, consisting of an interaction and a priority
model, can be given new operational semantics in terms of the offer
predicate: the semantics of the interaction model composition remains the
same \eq{transsem}, whereas the rule for priority becomes
\begin{equation}
\label{eq:newprisem}
\derrule[2]{
q \longgoesto[a] q' &
\Setdef{q \noffer[a']}{a \prec a'}
}{
q \longgoesto[a]_\prec q'
}\,.
\end{equation}
Clearly, any combination of BIP interaction and priority models can be
represented by an extended interaction model $\gamma \subseteq 2^{\act{P}
\cup \fire{P} \cup \inhib{P}}$. A priority $a \prec p_1\dots p_n$ is
translated into $\{\fire{a}\,\inhib{p_1}, \dots, \fire{a}\,\inhib{p_n}\}$
(here $\fire{a}$ is a shorthand for the set of firing ports corresponding
to ports in $a$). In general, when several inhibitors are defined for the
same interaction, that is $a \prec p_1^i\dots p_{n_i}^i$, for $i \in
[1,m]$, this translates into
$\setdef{\fire{a}\,\inhib{p^1_{k_1}}\,\dots\,\inhib{p^m_{k_m}}}{k_i \in
[1,n_i]}$.
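This general translation amounts to a Cartesian product over the sets of potential inhibitors, picking one inhibited port per higher-priority interaction. A minimal sketch (encoding ours):

```python
# Translating BIP priorities into extended interactions: given priorities
# a < p^i_1 ... p^i_{n_i}, for i in [1, m], produce all interactions
# fire(a) with one negative port chosen per higher-priority interaction.
from itertools import product

def translate(a, higher):
    """a: set of ports; higher: list of higher-priority interactions."""
    fire = {('fire', p) for p in a}
    return {frozenset(fire | {('neg', p) for p in choice})
            for choice in product(*higher)}

out = translate({'a'}, [{'p', 'q'}, {'r'}])
assert out == {frozenset({('fire', 'a'), ('neg', 'p'), ('neg', 'r')}),
               frozenset({('fire', 'a'), ('neg', 'q'), ('neg', 'r')})}
```

With two inhibiting interactions $pq$ and $r$, the translation yields the two extended interactions $\fire{a}\,\inhib{p}\,\inhib{r}$ and $\fire{a}\,\inhib{q}\,\inhib{r}$.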
It is important to observe that, as stated by \lem{nofiring} below, the
rule \eq{rule:trans} in \defn{composition} implies that any interaction $a
\in \gamma$ such that $\firesup{a} = \emptyset$ does not have any impact on
the composed system.
\begin{lemma}
\label{lem:nofiring}
Let $\gamma_1, \gamma_2 \subseteq 2^{\act{P} \cup \fire{P} \cup
\inhib{P}}$ be two sets of interactions and denote
$\symmdiff{\gamma_1}{\gamma_2} \bydef{=} (\gamma_1 \setminus \gamma_2)
\cup (\gamma_2 \setminus \gamma_1)$ their symmetric difference. If
$\firesup{a} = \emptyset$, for all $a \in \symmdiff{\gamma_1}{\gamma_2}$,
then $\gamma_1(B_1,\dots,B_n) = \gamma_2(B_1,\dots,B_n)$.
\end{lemma}
\begin{proof}
It is easy to see that $\gamma_1(B_1,\dots,B_n)$ and
$\gamma_2(B_1,\dots,B_n)$ behaviours can differ only in their respective
transition relations $\goesto$.
Application of the rule \eq{rule:trans} in \defn{composition} to an
interaction with empty firing support generates a transition $q_1\dots
q_n \longgoesto[\firesup{a} = \emptyset] q_1\dots q_n$. As mentioned in
the opening of this section, we assume that the self-loop transition
labelled by an empty set is enabled in all states. Therefore, the above
transition is present in both $\gamma_1(B_1,\dots,B_n)$ and
$\gamma_2(B_1,\dots,B_n)$. By the assumption of the lemma, all
interactions with non-empty firing support belong to $\gamma_1 \cap
\gamma_2$. Hence all transitions labelled with non-empty interactions
also appear in both $\gamma_1(B_1,\dots,B_n)$ and
$\gamma_2(B_1,\dots,B_n)$.
\end{proof}
\begin{lemma}
\label{lem:minimal}
Let $\gamma_1 \subseteq 2^{\act{P} \cup \fire{P} \cup \inhib{P}}$ be a
set of interactions, $\gamma_2 = \gamma_1 \cup \{a\}$, with $a \subseteq
\act{P} \cup \fire{P} \cup \inhib{P}$, such that there is an interaction
$b \in \gamma_1$, $b \subseteq a$ and $\firesup{b}=\firesup{a}$. Then
$\gamma_1(B_1,\dots,B_n) = \gamma_2(B_1,\dots,B_n)$.
\end{lemma}
\begin{proof}
According to rule \eq{rule:trans}, any transition generated by the
interaction $a$ can also be generated by the interaction $b$. Thus,
interaction $a$ does not impact the behaviour of the composed system, and
$\gamma_1(B_1,\dots,B_n) = \gamma_2(B_1,\dots,B_n)$.
\end{proof}
\section{Algebra extensions}
\label{sec:extension}
In \secn{semantic}, we have replaced the classical BIP combination of
interaction and priority models with an extended interaction model with
ports of three types: firing, activation and negative.\footnote{
Only firing and negative ports are necessary to define classical BIP
composition operators. Activation ports allow for a full correspondence
with $\ensuremath{\mathbb{B}}[\act{P},\fire{P}]$ Boolean constraints. This correspondence
and an expressivity study are given in \cite{BliSif11-constraints-sc}.
} We can now extend other algebras used for the glue representation.
We start by considering the extension of the Algebra of Interactions,
$\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$. Recall that $x \simeq y$ iff $\intsem{x} = \intsem{y}$. As a
simple corollary of the results in \cite{BliSif08-express-concur},
$\intsem{x} = \intsem{y}$ is equivalent to $\intsem{x}(\ensuremath{\mathbf{B}}) =
\intsem{y}(\ensuremath{\mathbf{B}})$, for any finite family $\ensuremath{\mathbf{B}}$ of behaviours.
Below we will consider $\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$ with the
latter definition of term equivalence: two terms $x,y \in \ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(\act{P} \cup
\fire{P} \cup \inhib{P})$ are equivalent iff $\intsem{x}(\ensuremath{\mathbf{B}}) =
\intsem{y}(\ensuremath{\mathbf{B}})$ (in terms of \defn{composition}), for any finite family
$\ensuremath{\mathbf{B}}$ of behaviours. In general, we define equivalence as follows.
\begin{definition}
Let $\ensuremath{\mathcal{A}}(P)$ be an algebra, $\intsem{\cdot} : \ensuremath{\mathcal{A}}(P) \rightarrow 2^{2^P}$.
Two terms $x, y \in \ensuremath{\mathcal{A}}(P)$ are {\em equivalent} $x \sim y $ iff, for any
finite family $\ensuremath{\mathbf{B}}$ of behaviours, $\intsem{x}(\ensuremath{\mathbf{B}}) = \intsem{y}(\ensuremath{\mathbf{B}})$ (in
terms of \defn{composition}).
\end{definition}
\begin{note}
Clearly $\sim$ is weaker than $\simeq$.
\end{note}
We are now in a position to extend the other algebras similarly. The
interaction semantics of the causal interaction trees $\aisem{\cdot}:
\ensuremath{\mathcal{T\!}}(P) \rightarrow \ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(P)$ is transposed without any change to
$\aisem{\cdot}: \ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup \inhib{P}) \rightarrow
\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$. Similarly, the functions
$\tau: \ensuremath{\mathcal{AC}\!}(P) \rightarrow \ensuremath{\mathcal{T\!}}(P)$ and $\sigma: \ensuremath{\mathcal{T\!}}(P) \rightarrow \ensuremath{\mathcal{AC}\!}(P)$
are transposed identically to $\ensuremath{\mathcal{AC}\!}(\act{P} \cup \fire{P} \cup \inhib{P})$
and $\ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$. The same goes for the
mapping $R(t)$ associating to a causal interaction tree $t \in \ensuremath{\mathcal{T\!}}(P)$ the
corresponding system of causal rules \cite{BliSif10-causal-fmsd}. The only
difference is that, in $\ensuremath{\mathcal{CR\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$ we
introduce the following additional axiom: $\fire{p} \Rightarrow \act{p}$,
for all $p \in P$.
\begin{proposition} \label{prop:congruence}
The equivalence relation $\sim$ on $\ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup
\inhib{P})$ is a congruence.
\end{proposition}
\begin{proof}[Sketch of the proof]
The proof is the same as for $\ensuremath{\mathcal{T\!}}(P)$ \cite{BliSif10-causal-fmsd}. For
any two trees $t_1, t_2 \in \ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$
and for any context $C(z) \in \ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup \inhib{P}
\cup \{z\})$, we have to show that the equivalence $t_1 \sim t_2$ implies
$C(t_1/z) \sim C(t_2/z)$, where $C(t_i/z)$ is the tree obtained, by
replacing in $C(z)$ all occurrences of $z$ by $t_i$. Since the semantics
$\ensuremath{\mathcal{T\!}}$ is compositional, structural induction on the context $C(z)$ proves
the proposition.
\end{proof}
The first consequence of the above extension is that the existing graphical
representation of connectors can be used in its present form to express
priorities and activation conditions (i.e.\ the use of the offer predicate
in the positive premises of the rule \eq{rule:trans}): it suffices to add a
trivalued attribute to ports: {\em firing}, {\em activation} or {\em
negative}. It is important to observe the difference between, on one
hand, adding an attribute to ports and, on the other hand, modifying the
typing operator ({\em synchron} vs.\ {\em trigger} typing), since the
latter is applied at each level of the connector hierarchy, whereas the
former is applied to ports, that is only at the leaves of the connector.
\subsection{Refinement of the extension}
\label{sec:refinement}
When we apply $x, y \in \ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$ to
compose behaviours with the operational semantics of \defn{composition},
$\intsem{x}(\ensuremath{\mathbf{B}}) = \intsem{y}(\ensuremath{\mathbf{B}})$ does not imply $x \simeq y$. The $\ensuremath{\mathcal{A\hspace{-0.6ex}I\!}}$ axioms
are not complete (although still sound) with respect to $\sim$, since this
equivalence is weaker than $\simeq$. Consequently, on $\ensuremath{\mathcal{T\!}}(\act{P} \cup
\fire{P} \cup \inhib{P})$, $\sim$ is also weaker than $\simeq$.
\begin{figure*}
\centering
\begin{subfigure}[b]{1em}
\centering
\begin{tikzpicture}
\node (P) at (0,0) {$\inhib{p}$};
\node (Q) at (0,-1) {$\fire{q}$};
\node at (0,-1.5) {};
\path[->]
(P) edge (Q);
\end{tikzpicture}
\caption{}
\label{fig:equiv:trees:simple}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}[b]{2em}
\centering
\begin{tikzpicture}
\node at (0,-1) {$\inhib{p}\, \fire{q}$};
\node at (0,-2) {};
\end{tikzpicture}
\caption{}
\label{fig:equiv:trees:simple:down}
\end{subfigure}
\hspace{2cm}
\begin{subfigure}[b]{25mm}
\centering
\begin{tikzpicture}
\node (P) at (1,0) {$\fire{p}$};
\node (Q) at (1,-1) {$\inhib{q}$};
\node (R) at (0,-2) {$\fire{r}$};
\node (S) at (2,-2) {$\fire{s}$};
\path[->]
(P) edge (Q)
(Q) edge (R) edge (S);
\end{tikzpicture}
\caption{}
\label{fig:equiv:trees:adv}
\end{subfigure}
\hspace{1cm}%
\begin{subfigure}[b]{25mm}
\centering
\begin{tikzpicture}
\node (P) at (1,0) {$\fire{p}$};
\node (R) at (0,-2) {$\inhib{q}\, \fire{r}$};
\node (S) at (2,-2) {$\inhib{q}\, \fire{s}$};
\path[->]
(P) edge (R) edge (S);
\end{tikzpicture}
\caption{}
\label{fig:equiv:trees:adv:down}
\end{subfigure}
\mycaption{Two pairs of equivalent trees: (\subref{fig:equiv:trees:simple}),
(\subref{fig:equiv:trees:simple:down}) and (\subref{fig:equiv:trees:adv}),
(\subref{fig:equiv:trees:adv:down})}
\label{fig:equiv:trees}
\end{figure*}
\begin{example}
Let $P = \{p,q,r,s\}$ and consider the $\ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup
\inhib{P})$ trees shown in \fig{equiv:trees}. The interaction semantics of
the tree in \fig{equiv:trees:simple} is $\intsem{\inhib{p} \rightarrow
\fire{q}} = \{\inhib{p},\, \inhib{p}\,\fire{q}\}$. However, the
interaction $\inhib{p}$ does not contain any firing ports. Therefore, as
mentioned above (\lem{nofiring}), it does not influence component
synchronisation and we have $\inhib{p} \rightarrow
\fire{q}\ \sim\ \inhib{p}\,\fire{q}$ (cf.\ \fig{equiv:trees:simple:down}).
The causal interaction tree in \fig{equiv:trees:adv} also defines a
redundant interaction. Indeed,
\[
\intsem{
\fire{p} \rightarrow \inhib{q} \rightarrow (\fire{r} \oplus \fire{s})
}
\ =\
\left\{
\fire{p},\,
\fire{p}\, \inhib{q}\,,\,
\fire{p}\, \inhib{q}\, \fire{r}\,,\,
\fire{p}\, \inhib{q}\, \fire{s}\,,\,
\fire{p}\, \inhib{q}\, \fire{r}\, \fire{s}\,
\right\}\,.
\]
Although the interaction $\fire{p}\, \inhib{q}$ does contain a firing port
$\fire{p}$, it is redundant (\lem{minimal}). We conclude, therefore, that
the causal interaction trees in \fig{equiv:trees:adv} and
\fig{equiv:trees:adv:down} are equivalent, since
\[
\intsem{
\fire{p} \rightarrow (\inhib{q}\,\fire{r} \oplus \inhib{q}\,\fire{s})
}
\ =\
\left\{
\fire{p},\,
\fire{p}\, \inhib{q}\, \fire{r}\,,\,
\fire{p}\, \inhib{q}\, \fire{s}\,,\,
\fire{p}\, \inhib{q}\, \fire{r}\, \fire{s}\,
\right\}\,.
\]
\end{example}
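These reductions can be checked mechanically. The following sketch (our encoding: port typings as tagged pairs) discards, following \lem{nofiring} and \lem{minimal}, the interactions that do not affect the composition, and verifies that both pairs of trees in the example denote the same reduced interaction sets:

```python
# Reduction of extended interaction sets: drop interactions with empty
# firing support (Lemma nofiring), then drop interactions subsumed by a
# proper sub-interaction with the same firing support (Lemma minimal).
def fire_support(a):
    return {p for typ, p in a if typ == 'fire'}

def normalise(gamma):
    keep = {a for a in gamma if fire_support(a)}
    return {a for a in keep
            if not any(b < a and fire_support(b) == fire_support(a)
                       for b in keep)}

F = lambda p: ('fire', p)   # firing typing
N = lambda p: ('neg', p)    # negative typing
I = frozenset
# First pair of trees: {neg p, (neg p)(fire q)}  vs  {(neg p)(fire q)}
assert normalise({I({N('p')}), I({N('p'), F('q')})}) \
    == normalise({I({N('p'), F('q')})})
# Second pair: the interaction (fire p)(neg q) is subsumed by fire p
t1 = {I({F('p')}), I({F('p'), N('q')}), I({F('p'), N('q'), F('r')}),
      I({F('p'), N('q'), F('s')}), I({F('p'), N('q'), F('r'), F('s')})}
assert normalise(t1) == normalise(t1 - {I({F('p'), N('q')})})
```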
The above example illustrates the idea that nodes of causal interaction
trees that do not contain firing ports can be ``pushed'' down the tree.
Another notable case leading to redundant interactions corresponds to trees
containing {\em contradictory port typings}. For example, either of the
two equivalent trees $\inhib{p} \rightarrow \fire{p}$ and
$\inhib{p}\,\fire{p}$ authorises the interaction $\inhib{p}\,\fire{p}$.
However, when considered in the context of the rule \eq{rule:trans}, this
interaction generates two conflicting premises $q_i \goesto[p] q_i'$ and
$q_i \noffer[p]$. Thus, this instance of the rule \eq{rule:trans} does not
authorise any transitions and the interaction $\inhib{p}\,\fire{p}$ can be
safely discarded. This example corresponds to the additional axiom
$\fire{p} \Rightarrow \act{p}$ imposed in \cite{BliSif11-constraints-sc} on
the Boolean formul\ae\ in $\ensuremath{\mathbb{B}}[\act{P},\fire{P}]$. Similarly, redundant
interactions appear when a tree contains other distinct port typings of the
same port: $\act{p}$ and $\inhib{p}$, generating conflicting premises $q_i
\offer[p]$ and $q_i \noffer[p]$; $\act{p}$ and $\fire{p}$, whereof the
former generates the premise $q_i \offer[p]$, which is redundant alongside the
premise $q_i \goesto[p] q_i'$ generated by the latter.
Below, we provide a set of axioms reducing interaction redundancy. We
enrich axioms for $\ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$ from
\cite{BliSif10-causal-fmsd} by adding some new ones, specific for the
trivalued port attribute.
\begin{axioms}
\begin{enumerate}
\item \label{ax:nodes} For all $p \in P$ and $a \subseteq \act{P} \cup
\fire{P} \cup \inhib{P}$ such that $a \neq \emptyset$,
\begin{enumerate}
\item $a \cdot 0 = 0$,
\item $a \cdot 1 = a$, for $a \neq 0$,
\item $\fire{p}\cdot\act{p} = \fire{p}$ (cf.\ the additional axiom
$\fire{p} \Rightarrow \act{p}$ in $\ensuremath{\mathcal{CR\!}}(\act{P} \cup \fire{P} \cup
\inhib{P})$),
\item $\fire{p}\cdot\inhib{p} = \act{p}\cdot\inhib{p} = 0$.
\end{enumerate}
\item \label{ax:par} Parallel composition, `$\oplus$', is associative,
commutative, idempotent, and its identity element is $0$.
\item \label{ax:zero:leaf} $a \rightarrow 0 = a$, for all $a
\subseteq \act{P} \cup \fire{P} \cup \inhib{P}$.
\item \label{ax:zero:node} $0 \rightarrow t = 0$, for all $t \in
\ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$.
\item \label{ax:pushdown:node} $c \rightarrow a \rightarrow b
\rightarrow t = c \rightarrow ab \rightarrow t$ for all $a,b,c
\subseteq \act{P} \cup \fire{P} \cup \inhib{P}$, such that
$\firesup{a} = \emptyset$, and $t \in \ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P}
\cup \inhib{P})$.
\item \label{ax:pushdown:port} $ap \rightarrow b = ap
\rightarrow bp$ for all $a,b \subseteq \act{P} \cup \fire{P} \cup
\inhib{P}$, $p \in \act{P} \cup \fire{P} \cup \inhib{P} $.
\item \label{ax:relation:operators}
$a \rightarrow (t_1 \oplus t_2) = a \rightarrow t_1\ \oplus\ a
\rightarrow t_2$, for all $a \subseteq \act{P} \cup
\fire{P} \cup \inhib{P}$, $t_1, t_2 \in
\ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$.
\end{enumerate}
\end{axioms}
\axs{nodes} remove interactions made redundant by contradictory port
typings, whereas \ax{pushdown:node} eliminates the nodes with empty firing
support. \axs{par}, \ref{ax:zero:leaf}, \ref{ax:zero:node} and
\ref{ax:relation:operators} are the same as in
\cite{BliSif10-causal-fmsd}.\footnote{
The two remaining axioms from \cite{BliSif10-causal-fmsd} are replaced by
Lemmas~\ref{lem:nofiring:leaf} and \ref{lem:one:node} in this paper.
}
\begin{proposition}
The above axiomatisation is sound with respect to $\sim$.
\end{proposition}
\begin{proof}
Since, by \prop{congruence}, the equivalence relation $\sim$ is a
congruence, it is sufficient to show that all the axioms respect $\sim$.
This is proved by verifying that the semantics for left and right sides
coincide.
\axs{par}, \ref{ax:zero:leaf}, \ref{ax:zero:node} and
\ref{ax:relation:operators} are the same as in \cite{BliSif10-causal-fmsd}.
Hence, their respective left- and right-hand sides are related by $\simeq$,
which is stronger than $\sim$. \ax{nodes}(a) and \ax{nodes}(b) are
trivial. \ax{nodes}(c) is a consequence of \lem{minimal}. In
\ax{nodes}(d), both pairs $\act{p}$ and $\inhib{p}$, and $\fire{p}$ and
$\inhib{p}$, produce conflicting premises in the rule \eq{rule:trans} and,
therefore, do not generate any transitions. For the \ax{pushdown:node}, we
have
\begin{align}
\intsem{c \rightarrow a \rightarrow b \rightarrow t}
&= \{c,\, a\,c,\, a\,b\,c\} \cup \setdef{a\,b\,c\,a_2}{a_2 \in \intsem{t}}\\
\intsem{c \rightarrow ab \rightarrow t}
&= \{c,\, a\,b\,c\} \cup \setdef{a\,b\,c\,a_2}{a_2 \in \intsem{t}}
\end{align}
The only difference between the interaction semantics of the two trees is
the interaction $ac$. However, any transition authorised by the rule
\eq{rule:trans} with this interaction is also authorised with interaction
$c$, since $\firesup{a} = \emptyset$ (\lem{minimal}). Hence, the composed
systems coincide.
For the \ax{pushdown:port}, we have $\intsem{ap \rightarrow b} = \{ap,\,
abp\} = \intsem{ap \rightarrow bp}$. Thus $ap \rightarrow b \simeq ap
\rightarrow bp$, which implies $ap \rightarrow b \sim ap \rightarrow bp$.
\end{proof}
Notice that our axiomatisation is not complete. For instance, the equivalence $p\rightarrow q \oplus q \rightarrow p \sim p \oplus q$ cannot be derived from the axioms.
\begin{lemma} \label{lem:nofiring:leaf}
For all $a,b \subseteq \act{P} \cup \fire{P} \cup \inhib{P}$, such that
$\firesup{b} = \emptyset$, the equality $a \rightarrow b = a$ holds.
\end{lemma}
\begin{proof}
$a \rightarrow b = a \rightarrow b \rightarrow 0 \rightarrow 0 = a
\rightarrow b \cdot 0 \rightarrow 0 = a \rightarrow 0 \rightarrow 0 = a$
(\axss{zero:leaf}{pushdown:node}).
\end{proof}
\begin{lemma} \label{lem:one:node}
For all $a \subseteq \act{P} \cup \fire{P} \cup \inhib{P}$ and $t \in
\ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$, the equality $a
\rightarrow 1 \rightarrow t = a \rightarrow t$ holds.
\end{lemma}
\begin{proof}
If $t = 0$ the statement of this lemma is a special case of
\lem{nofiring:leaf} with $b = 1$. If $t \neq 0$ it can be represented as a
parallel composition of non-zero trees $t = \bigoplus_{i=1}^{n} r_i
\rightarrow t_i$, with $r_i \subseteq \act{P} \cup \fire{P} \cup
\inhib{P}$. By \axs{pushdown:node} and \ref{ax:relation:operators}, we
have
\[
a \rightarrow 1 \rightarrow t
\ =\
\bigoplus_{i=1}^{n} (a \rightarrow 1 \rightarrow r_i \rightarrow t_i)
\ =\
\bigoplus_{i=1}^{n} (a \rightarrow r_i \rightarrow t_i)
\ =\
a \rightarrow \bigoplus_{i=1}^{n} (r_i \rightarrow t_i)
\ =\
a \rightarrow t\,.
\]
\end{proof}
\begin{lemma} \label{lem:pushdown:node}
For all $a,b_i,c \subseteq \act{P} \cup \fire{P} \cup \inhib{P}$, such
that $\firesup{a} = \emptyset$, and $t_i \in \ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P}
\cup \inhib{P})$, the following equality holds:
\[
c \rightarrow a \rightarrow
\bigoplus_{i=1}^{n} (b_i \rightarrow t_i) = c \rightarrow
\bigoplus_{i=1}^{n} (ab_i \rightarrow t_i)\,.
\]
\end{lemma}
\begin{proof}
As above, applying \axs{pushdown:node} and \ref{ax:relation:operators}, we
have
\[
c \rightarrow a \rightarrow \bigoplus_{i=1}^{n} (b_i \rightarrow t_i)
\ =\
\bigoplus_{i=1}^{n} (c \rightarrow a \rightarrow b_i \rightarrow t_i)
\ =\
\bigoplus_{i=1}^{n} (c \rightarrow ab_i \rightarrow t_i)
\ =\
c \rightarrow \bigoplus_{i=1}^{n} (ab_i \rightarrow t_i)\,.
\]
\end{proof}
\begin{definition} \label{defn:tree:normal}
A causal interaction tree $t \in \ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup
\inhib{P})$ is in \emph{normal form} if it satisfies the following
properties:
\begin{enumerate}
\item All nodes of $t$ except roots have non-empty firing support.
\item There are no causal dependencies between the same typing of the
same port in $t$, that is for any causal chain $a \rightarrow \dots
\rightarrow b$ within $t$, we have $a \cap b = \emptyset$.
\item There are no causal dependencies between different port typings of
the same port in $t$, other than dependencies of the form $ap
\rightarrow \dots \rightarrow b \fire{p}$, where $a,b \subseteq \act{P}
\cup \fire{P} \cup \inhib{P}$, $p \in P$.
\end{enumerate}
\end{definition}
\begin{proposition}[Normal form for causal interaction trees]
\label{prop:normal:tree}
Every causal interaction tree $t \in \ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup
\inhib{P})$ has a normal form $t = \tilde{t} \in \ensuremath{\mathcal{T\!}}(\act{P} \cup
\fire{P} \cup \inhib{P})$.
\end{proposition}
\begin{proof}
Consider $t \in \ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$. We start by
computing a tree $t_1 = t$, in which all nodes, except possibly the roots,
have non-empty firing support.
Let $a$ be a non-root node of $t$ with $\firesup{a} = \emptyset$, such
that the tree $s$ rooted in $a$ does not have any nodes with empty firing
support. If $s$ is empty, that is, if $a$ is a leaf, then remove $a$ from the
tree (\lem{nofiring:leaf}). Otherwise, let $c$ be the parent of $a$, which
exists since $a$ is not a root, and move the parallel composition operator
up using \ax{relation:operators}:
%
\begin{equation}
\label{eq:plusup}
c \rightarrow
\left((a \rightarrow s) \oplus \bigoplus_{i=1}^{n} t_i\right)
=
(c \rightarrow a \rightarrow s) \oplus
\left(\bigoplus_{i=1}^{n} c \rightarrow t_i\right)\,.
\end{equation}
%
The sub-tree $s$ can be further decomposed as $s = \bigoplus_{i=1}^n (b_i
\rightarrow s_i)$, so, by \lem{pushdown:node}, we have
%
\begin{equation}
\label{eq:pushdown}
c \rightarrow a \rightarrow s \
=\ c \rightarrow a \rightarrow \bigoplus_{i=1}^n (b_i \rightarrow s_i)\
=\ c \rightarrow \bigoplus_{i=1}^{n} (ab_i \rightarrow s_i)\,.
\end{equation}
%
Each of the nodes $ab_i$ has non-empty firing support, since $\firesup{a} =
\emptyset$ and, by the choice of $a$, $\firesup{b_i} \neq \emptyset$.
Substituting \eq{pushdown} into
\eq{plusup} and applying \ax{relation:operators}, we obtain
\[
\left(c \rightarrow \bigoplus_{i=1}^{n} (ab_i \rightarrow s_i)\right)
\oplus
\left(\bigoplus_{i=1}^{n} c \rightarrow t_i\right)
=
c \rightarrow \left(\left(\bigoplus_{i=1}^{n} ab_i \rightarrow s_i\right)
\oplus
\bigoplus_{i=1}^{n} t_i\right)\,.
\]
In the resulting tree, there is one node with empty firing support fewer
than in $t$. Hence, repeating this procedure as long as there are such
nodes, we will compute a tree $t_1$, where all nodes except roots have
non-empty firing support. This computation is confluent, since the order
is irrelevant among causally independent nodes, whereas among causally
dependent ones it is fixed by the algorithm.
Consider a causal chain $a\tilde{p} \rightarrow \dots \rightarrow
b\hat{p}$ within $t_1$, with $\tilde{p}$ and $\hat{p}$ being two typings
of the same port. If $\tilde{p} = \act{p}$ and $\hat{p} = \fire{p}$,
there is nothing to do, since such dependencies are allowed by
\defn{tree:normal}. Otherwise, we propagate $\tilde{p}$ down by applying
\ax{pushdown:port}:
%
\[
a\tilde{p} \rightarrow c_1 \rightarrow
\dots \rightarrow c_k \rightarrow b\hat{p}
\ =\
a\tilde{p} \rightarrow c_1\tilde{p} \rightarrow
\dots \rightarrow c_k \rightarrow b\hat{p}
=\ \dots\ =
a\tilde{p} \rightarrow c_1\tilde{p} \rightarrow
\dots \rightarrow c_k\tilde{p} \rightarrow b\hat{p}\tilde{p}\,.
\]
%
{\bf Case 1:} $\tilde{p} = \hat{p}$, or $\tilde{p} = \fire{p}$ and
$\hat{p} = \act{p}$. We apply \axs{nodes}(c) and \ref{ax:pushdown:port}:
%
\[
a\tilde{p} \rightarrow c_1\tilde{p} \rightarrow
\dots \rightarrow c_k\tilde{p} \rightarrow b\hat{p}\tilde{p}
\ =\
a\tilde{p} \rightarrow c_1\tilde{p} \rightarrow
\dots \rightarrow c_k\tilde{p} \rightarrow b\tilde{p}
\ =\
a\tilde{p} \rightarrow c_1 \rightarrow
\dots \rightarrow c_k \rightarrow b\,.
\]
%
{\bf Case 2:} $\tilde{p} \neq \hat{p}$ and either $\tilde{p} = \inhib{p}$
or $\hat{p} = \inhib{p}$. We apply \axs{nodes}(d), \ref{ax:zero:leaf}
and \ref{ax:pushdown:port}:
\begin{multline*}
a\tilde{p} \rightarrow c_1\tilde{p} \rightarrow
\dots \rightarrow c_k\tilde{p} \rightarrow b\hat{p}\tilde{p}
\ =\
a\tilde{p} \rightarrow c_1\tilde{p} \rightarrow
\dots \rightarrow c_k\tilde{p} \rightarrow 0\ = \\
\ =\
a\tilde{p} \rightarrow c_1 \rightarrow
\dots \rightarrow c_k \rightarrow 0
\ =\
a\tilde{p} \rightarrow c_1 \rightarrow \dots \rightarrow c_k\,.
\end{multline*}
To compute $\tilde{t}$, we apply this transformation to all relevant
causal chains within $t_1$.
\end{proof}
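The first phase of this proof is, in effect, a small rewriting algorithm. As an illustration outside the formal development, it can be sketched in Python; the encoding of a causal interaction tree as a list of (label, children) pairs with labels given as frozensets of typed ports, and all names below, are assumptions of the sketch, not notation from the paper.

```python
# Illustration only: causal interaction trees as lists of (label, children)
# pairs; a label is a frozenset of (port, typing) pairs with typing in
# {"act", "fire", "inhib"}.

def firing_support(label):
    # the firing support of a node consists of its firing-typed ports
    return {p for (p, typ) in label if typ == "fire"}

def normalize(children, is_root_level=True):
    """Eliminate non-root nodes with empty firing support: leaves are
    dropped (as in Lemma `nofiring:leaf`), inner nodes are merged into
    their children (as in Lemma `pushdown:node`)."""
    result = []
    for label, subtrees in children:
        subtrees = normalize(subtrees, is_root_level=False)
        if is_root_level or firing_support(label):
            result.append((label, subtrees))
        elif not subtrees:
            continue  # an empty-support leaf contributes nothing
        else:
            # push the empty-support label down onto its children
            result.extend((label | b, s) for (b, s) in subtrees)
    return result
```

Since the recursion normalises subtrees first, every merged label acquires a non-empty firing support from its children, mirroring the argument around \eq{pushdown}.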
\begin{definition} \label{defn:conn:normal}
An $\ensuremath{\mathcal{AC}\!}(\act{P} \cup \fire{P} \cup \inhib{P})$ connector is in
\emph{normal form} if the following conditions hold.
\begin{enumerate}
\item Nodes at every hierarchical level of the connector, except the
bottom one, have at least one trigger.
\item Each node at the bottom hierarchical level is a strong
synchronisation of pairwise distinct ports.
\item Every node at the bottom hierarchical level without firing ports
has only triggers as ancestors.
\end{enumerate}
\end{definition}
\begin{corollary}[Normal form for connectors]
Every connector $x \in \ensuremath{\mathcal{AC}\!}(\act{P} \cup \fire{P} \cup \inhib{P})$ has an
equivalent normal form $x \sim \tilde{x} \in \ensuremath{\mathcal{AC}\!}(\act{P} \cup \fire{P}
\cup \inhib{P})$.
\end{corollary}
\begin{proof}[Sketch of the proof]
Given a connector $x$, let $t = \tau(x)$ be the equivalent causal
interaction tree and $\tilde{t} = t$ its normal form. Put $\tilde{x} =
\sigma(\tilde{t})$. Since both $\sigma$ and $\tau$ preserve $\sim$, we
have $\tilde{x} \sim x$. Normality of $\tilde{x}$ is a direct
consequence of that of $\tilde{t}$ and the definition \eq{treecon} of
$\sigma$.
\end{proof}
\begin{proposition}
\label{prop:normal:rules}
Any causal interaction tree $t \in \ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup
\inhib{P})$ can be represented by a system of causal rules with only
firing ports as effects, i.e.\ having only rules of the form $\fire{p}
\Rightarrow C$, where $C$ is a DNF Boolean formula on $\fire{P} \cup
\act{P}$ without negative firing variables.
\end{proposition}
\begin{proof}
Applying the transformation $R : \ensuremath{\mathcal{T\!}}(P) \rightarrow \ensuremath{\mathcal{CR\!}}(P)$ defined in
\secn{transformations} to a tree $t \in \ensuremath{\mathcal{T\!}}(P)$ gives a system of causal
rules of the form $p \Rightarrow C$, where $C$ is a DNF Boolean formula and
each monomial is a conjunction of the nodes on the way from a root of $t$
to $p$ (some prefix in $t$ leading to $p$, excluding $p$).
We define the transformation $\tilde{R}:\ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup
\inhib{P}) \rightarrow \ensuremath{\mathcal{CR\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$, by
putting
\begin{equation}
\label{eq:trees2rules1}
\tilde{R}(t) \bydef{=} \{p \Rightarrow c_p(t)\}_{p\in \fire{P}\cup\{\ensuremath{\mathtt{tt}}\}}\,,
\end{equation}
that is we omit causal rules for port variables in $\act{P} \cup \inhib{P}$
(in \eq{trees2rules1}, the set of rules is indexed by $p \in
\fire{P}\cup\{\ensuremath{\mathtt{tt}}\}$ as opposed to $p \in P\cup\{\ensuremath{\mathtt{tt}}\}$ in
\eq{trees2rules}). To prove the equivalence $t \sim \tilde{R}(t)$ it is
sufficient to show $\tilde{R}(t) \sim R(t)$.
$\tilde{R}(t)$ has fewer constraints than $R(t)$. Hence, it allows more
interactions. Let $a \in \intsem{\tilde{R}(t)} \setminus \intsem{R(t)}$,
i.e.\ there exists $p \in \act{P} \cup \inhib{P}$, such that $p \in a$ and
the rule $p \Rightarrow C_1$ is violated by $a$. Let $\tilde{a} = a
\setminus p$.
Assume $\tilde{a} \notin \intsem{\tilde{R}(t)}$, i.e.\ there exist $\fire{q}
\in \fire{P}$ and a rule $(\fire{q} \Rightarrow C_2) \in \tilde{R}(t)$,
such that $\fire{q} \in \tilde{a}$ and the rule $\fire{q} \Rightarrow C_2$
is violated by $\tilde{a}$. This rule is not violated by $a$. Hence $C_2 =
pC_2'$ and, consequently, $p$ lies on all prefixes in $t$ leading to
$\fire{q}$. Since $a \in \intsem{\tilde{R}(t)}$ and $\fire{q} \in \tilde{a}
\subseteq a$, there is at least one prefix in $t$ leading to
$\fire{q}$ and contained in $a$. As $p$ lies on this prefix, the rule $(p
\Rightarrow C_1)$ is satisfied by $a$, contradicting the conclusion above.
Therefore our assumption is wrong and $\tilde{a} \in
\intsem{\tilde{R}(t)}$.
Since $\tilde{a} \in \intsem{\tilde{R}(t)}$ and $\firesup{\tilde{a}} =
\firesup{a}$, we have, by \lem{minimal}, $\intsem{\tilde{R}(t)}(\ensuremath{\mathbf{B}}) =
(\intsem{\tilde{R}(t)} \setminus \{a\})(\ensuremath{\mathbf{B}})$ for any family of behaviours
$\ensuremath{\mathbf{B}}$. Thus, for all $a \in \intsem{\tilde{R}(t)} \setminus
\intsem{R(t)}$, there exists $\tilde{a} \varsubsetneq a$, such that
$\tilde{a} \in \intsem{\tilde{R}(t)}$ and $\firesup{\tilde{a}} =
\firesup{a}$. By \lem{minimal}, we have $\intsem{\tilde{R}(t)}(\ensuremath{\mathbf{B}}) =
\intsem{R(t)}(\ensuremath{\mathbf{B}})$ for any family $\ensuremath{\mathbf{B}}$, i.e.\ $R(t) \sim \tilde{R}(t)$.
\end{proof}
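To make the transformation $\tilde{R}$ concrete, here is a hypothetical Python sketch (the tree encoding and names are mine, not from the paper): for each firing-typed port it collects one monomial per occurrence, namely the conjunction of the node labels on the path from a root to that occurrence, excluding the port itself.

```python
# Illustration only: trees as lists of (label, children) pairs, labels as
# frozensets of (port, typing) pairs with typing in {"act","fire","inhib"}.

def rules_for_firing_ports(roots):
    """Collect, for every firing-typed port q in the tree, the list of
    monomials of the DNF in the rule q => C (one per occurrence of q)."""
    rules = {}

    def walk(children, path):
        for label, subtrees in children:
            prefix = path | label  # all typed ports on the way, inclusive
            for q in label:
                if q[1] == "fire":
                    rules.setdefault(q, []).append(prefix - {q})
            walk(subtrees, prefix)

    walk(roots, set())
    return rules
```

For a chain of firing nodes this yields, for each port, the conjunction of all typed ports above it together with the rest of its own node, as described in the proof.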
\section{Connector synthesis (example)}
\label{sec:example}
\begin{wrapfigure}[7]{r}{0.24\textwidth}
\centering
\vspace{-0.5\baselineskip}
\input{control-main.pspdftex}
\mycaption{Main module}
\label{fig:modules}
\end{wrapfigure}
Consider a system providing some given functionality in two modes: {\em
normal} and {\em backup}. The system consists of four modules: the
Backup module $A$ can only perform one action $a$; the Main module $B$
(\fig{modules}) can perform an action $b$ corresponding to the normal mode
activity, it can also be
switched $on$ and $off$, as well as perform an
internal (unobservable) error transition $err$; the Monitor module $M$ is a
black box, which performs some internal logging by observing the two
actions $a$ and $b$ through the corresponding ports $a_l$ and $b_l$;
finally, the black box Controller module $C$ takes the decisions to switch
on or off the main module through the corresponding ports $on_c$ and
$off_c$, furthermore, it can perform a $test$ to check whether the main
module can be restarted.
We want to synthesise connectors ensuring the properties below
(encoded by Boolean constraints).
\begin{itemize}
\item The main and backup actions must be logged: $\fire{a} \Leftrightarrow
\fire{a_l}$ and $\fire{b} \Leftrightarrow \fire{b_l}$\,;
\item Only Controller can turn on the Main module: $\fire{on}
\Leftrightarrow \fire{on_c}$\,;
\item When the Controller switches it off, the Main module must stop
operation: $\fire{off_c} \Rightarrow \fire{off}$ and $\fire{b} \Rightarrow
\non{\fire{off_c}}$\,;
\item Controller can only test the execution of Backup: $\fire{test}
\Rightarrow \fire{a}$\,;
\item Backup can only execute when Main is not possible: $\fire{a}
\Rightarrow \inhib{b} \vee \fire{off}$\,;
\item Main can only switch off when ordered to do so or after a failure:
$\fire{off} \Rightarrow \inhib{b} \vee \fire{off_c}$\,.
\end{itemize}
In order to compute the required glue, we take the conjunction of the above
constraints together with the \emph{progress} constraint $\fire{a} \vee
\fire{b} \vee \fire{on} \vee \fire{off} \vee \fire{test} \vee \fire{a_l}
\vee \fire{b_l} \vee \fire{off_c} \vee \fire{on_c}$ stating that at every
round some action must be taken. In order to simplify the resulting
connectors, we also use part of the information about the behaviour of the
Main module, namely the fact that $on$, on one hand, and $b$ or $off$, on
the other, are mutually exclusive: $\act{on} \Rightarrow \inhib{b} \wedge
\inhib{off}$. Finally, we also apply the additional axiom imposed on
Boolean constraints: $\fire{p} \Rightarrow p$. We obtain the following
global constraint (omitting the conjunction symbol):
\begin{center}
$\quad
(\fire{a} \Rightarrow \fire{a_l}\ \act{a}\ \inhib{b}\
\lor\ \fire{a_l}\ \act{a}\ \fire{off})
(\fire{a_l} \Rightarrow \fire{a}\ \act{a_l})
(\fire{b} \Rightarrow \fire{b_l}\ \act{b}\ \non{\fire{off_c}})
(\fire{b_l} \Rightarrow \fire{b} \act{b_l})
(\fire{on} \Rightarrow \fire{on_c}\ \act{on})
(\fire{on_c} \Rightarrow \fire{on}\ \act{on_c})
\hfill$
\\[2pt]
$\land\
(\fire{off} \Rightarrow \act{off}\ \inhib{b}\
\lor\ \fire{off_c}\ \act{off})
(\fire{off_c} \Rightarrow \fire{off}\ \act{off_c})
(\fire{test} \Rightarrow \fire{a}\ \act{test})$
\\[2pt]
$\hfill
\land\
(\act{on} \Rightarrow \inhib{b}\ \inhib{off})
(\fire{a}\ \lor\ \fire{b}\ \lor\ \fire{on}\ \lor\ \fire{off}\
\lor\ \fire{test}\ \lor\ \fire{a_l}\ \lor\ \fire{b_l}\
\lor\ \fire{off_c}\ \lor\ \fire{on_c})\,.
\quad$
\end{center}
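As a mechanical sanity check, the conjunction above can be evaluated on candidate interactions. The following Python sketch is an illustration only, with an ad-hoc encoding of interactions as sets of typed ports; it enforces the axioms $\fire{p} \Rightarrow \act{p}$ and $\act{p}\,\inhib{p} = \ensuremath{\mathtt{ff}}$ and then the constraints listed above.

```python
# Illustration only: interactions encoded as sets of (port, typing) pairs.
PORTS = ["a", "b", "on", "off", "test", "a_l", "b_l", "on_c", "off_c"]

def fire(i, p): return (p, "fire") in i
def act(i, p):  return (p, "act") in i
def inh(i, p):  return (p, "inhib") in i

def well_formed(i):
    # axioms: fire p implies act p; act p and inhib p are incompatible
    return all((not fire(i, p) or act(i, p)) and not (act(i, p) and inh(i, p))
               for p in PORTS)

def satisfies(i):
    return (well_formed(i)
            and fire(i, "a") == fire(i, "a_l")            # logging of a
            and fire(i, "b") == fire(i, "b_l")            # logging of b
            and fire(i, "on") == fire(i, "on_c")          # only Controller turns on
            and (not fire(i, "off_c") or fire(i, "off"))  # off_c forces off
            and (not fire(i, "b") or not fire(i, "off_c"))
            and (not fire(i, "test") or fire(i, "a"))     # test only with Backup
            and (not fire(i, "a") or inh(i, "b") or fire(i, "off"))
            and (not fire(i, "off") or inh(i, "b") or fire(i, "off_c"))
            and (not act(i, "on") or (inh(i, "b") and inh(i, "off")))
            and any(fire(i, p) for p in PORTS))           # progress

normal_step = {("b", "fire"), ("b", "act"), ("b_l", "fire"), ("b_l", "act")}
lone_backup = {("a", "fire"), ("a", "act")}   # violates logging of a
```

For instance, the normal-mode step involving $b$ and $b_l$ is accepted, in accordance with the trees of \fig{example:trees}, while a firing of $a$ alone is rejected by the logging constraint.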
Recall now that causal rules have the form $p \Rightarrow C$, where $p \in
\act{P} \cup \fire{P} \cup \inhib{P} \cup \{\ensuremath{\mathtt{tt}}\}$ and $C$ is a DNF
Boolean formula on $\act{P} \cup \fire{P}$ without negative firing
variables. By \prop{normal:rules}, it is sufficient to consider only the
rules with firing or $\ensuremath{\mathtt{tt}}$ effects. A system of causal rules is a
conjunction of such clauses. Among the constraints above, there are two
that do not have this form: $\act{on} \Rightarrow \inhib{b} \ \inhib{off}$
and $\fire{b} \Rightarrow \fire{b_l}\ \act{b}\ \non{\fire{off_c}}$.
Rewriting them as $\inhib{on} \lor \inhib{b} \ \inhib{off}$ and
$\non{\fire{b}} \lor \fire{b_l}\ \act{b}\ \non{\fire{off_c}}$, distributing
over the conjunction of the rest of the constraints and making some
straightforward simplifications, we obtain a disjunction of three systems
of causal rules:
\begin{align*}
\ensuremath{\mathtt{tt}} &
\Rightarrow \lefteqn{\fire{a}\ \inhib{b}\ \inhib{off}\ \lor\ \fire{on}}
&&& \ensuremath{\mathtt{tt}} &
\Rightarrow \lefteqn{\fire{a}\ \inhib{on}\ \lor\ \fire{off}}
&&& \ensuremath{\mathtt{tt}} &
\Rightarrow \lefteqn{\fire{b}\ \fire{b_l}}
\\
\fire{a} &\Rightarrow \fire{a_l}\ \inhib{b}
&&& \fire{a} &\Rightarrow \lefteqn{\fire{a_l}\ \inhib{b}\
\lor\ \fire{a_l}\ \fire{off}}\
&&& \fire{a} &\Rightarrow \ensuremath{\mathtt{ff}}
\\
\fire{a_l} &\Rightarrow \fire{a}
& \fire{b} &\Rightarrow \ensuremath{\mathtt{ff}}\quad
%
& \fire{a_l} &\Rightarrow \fire{a}
& \fire{b} &\Rightarrow \ensuremath{\mathtt{ff}}
%
& \fire{a_l} &\Rightarrow \ensuremath{\mathtt{ff}}
& \fire{b} &\Rightarrow \ensuremath{\mathtt{tt}}
\\
\fire{on} &\Rightarrow \fire{on_c}
& \fire{b_l} &\Rightarrow \ensuremath{\mathtt{ff}}
%
& \fire{on} &\Rightarrow \ensuremath{\mathtt{ff}}
& \fire{b_l} &\Rightarrow \ensuremath{\mathtt{ff}}
%
& \fire{on} &\Rightarrow \ensuremath{\mathtt{ff}}
& \fire{b_l} &\Rightarrow \ensuremath{\mathtt{tt}}
\\
\fire{on_c} &\Rightarrow \fire{on}
& \fire{off} &\Rightarrow \ensuremath{\mathtt{ff}}
%
& \fire{on_c} &\Rightarrow \ensuremath{\mathtt{ff}}
& \fire{off} &\Rightarrow \inhib{b}\ \lor\ \fire{off_c}
%
& \fire{on_c} &\Rightarrow \ensuremath{\mathtt{ff}}
& \fire{off} &\Rightarrow \ensuremath{\mathtt{ff}}
\\
\fire{test} &\Rightarrow \lefteqn{\fire{a}}
& \fire{off_c} &\Rightarrow \ensuremath{\mathtt{ff}}
%
& \fire{test} &\Rightarrow \fire{a}
& \fire{off_c} &\Rightarrow \fire{off}
%
& \fire{test} &\Rightarrow \ensuremath{\mathtt{ff}}
& \fire{off_c} &\Rightarrow \ensuremath{\mathtt{ff}}
\end{align*}
Applying the procedure from \cite{BliSif10-causal-fmsd}, we obtain the
$\ensuremath{\mathcal{T\!}}(\act{P} \cup \fire{P} \cup \inhib{P})$ trees in \fig{example:trees}
and connectors in \fig{example:connectors}. In terms of classical BIP, one
can easily distinguish here two priorities: $x\ a\ a_l \prec b\ b_l$ and
$x\ off \prec b\ b_l$ for all $x$ not containing $off\ off_c$. In general,
priorities are replaced by local inhibitors. In this example, these appear
to characterise states of the Main module. For instance,
$\fire{a}\ \fire{a_l}\ \inhib{b}\ \inhib{off}$ defines possible
interactions involving $a\ a_l$ when neither $b$ nor $off$ are possible,
i.e.\ in state 1 (see \fig{modules}).
\begin{figure*}
\hfill
\begin{subfigure}[b]{34mm}
\begin{tikzpicture}
\node (P) at (0,0) {$\fire{a}\ \fire{a_l}\ \inhib{b}\ \inhib{off}$};
\node (Q) at (0,-2) {$\fire{test}$};
\node[right] at (0.6, -1) {$\oplus\quad \fire{on}\ \fire{on_c}$};
\path[->]
(P) edge (Q);
\end{tikzpicture}
\end{subfigure}
\hfill
\begin{subfigure}[b]{53mm}
\begin{tikzpicture}
\node (P) at (0,0) {$\fire{off}\ \fire{off_c}$};
\node (Q) at (0,-1) {$\fire{a}\ \fire{a_l}$};
\node (R) at (0,-2) {$\fire{test}$};
\node at (1, -1) {$\oplus$};
\node (T) at (2,0) {$\fire{a}\ \fire{a_l}\ \inhib{b}\ \inhib{on}$};
\node (S) at (2,-2) {$\fire{test}$};
\node[right] at (2.6, -1) {$\oplus\quad \fire{off}\ \inhib{b}$};
\path[->]
(P) edge (Q)
(Q) edge (R)
(T) edge (S);
\end{tikzpicture}
\end{subfigure}
\hfill
\begin{subfigure}[b]{2em}
\begin{tikzpicture}
\node at (0,-1) {$\fire{b}\ \fire{b_l}$};
\node at (0,-2) {};
\end{tikzpicture}
\end{subfigure}
\hfill\mbox{}
\mycaption{Three causal interaction trees}
\label{fig:example:trees}
\end{figure*}
\begin{figure*}
\centering
\quad
\begin{subfigure}[b]{46mm}
\input{control-connector-1.pspdftex}
\end{subfigure}
\hfill
\begin{subfigure}[b]{76mm}
\input{control-connector-2.pspdftex}
\end{subfigure}
\hfill
\begin{subfigure}[b]{8mm}
\input{control-connector-3.pspdftex}
\end{subfigure}
\quad\mbox{}
%
\mycaption{Connectors corresponding to trees from \fig{example:trees}}
\label{fig:example:connectors}
\end{figure*}
\section{Conclusion}
\label{sec:conclusion}
The work presented in this paper relies on a variation of the BIP
operational semantics based on the offer predicate introduced in
\cite{BliSif11-constraints-sc}. Glue operators defined using the offer
predicate are isomorphic to Boolean constraints on activation and firing
port variables $\ensuremath{\mathbb{B}}[\act{P}, \fire{P}]$ with an additional axiom $\fire{p}
\Rightarrow \act{p}$ \cite{BliSif11-constraints-sc}. By considering the
negation of an activation port variable as a separate {\em negative} port
variable (keeping, however, the axiom $\act{p}\,\inhib{p} = \ensuremath{\mathtt{ff}}$), we
reinterpret the combination of interaction and priority models on $P$ as an
interaction model on $\act{P} \cup \fire{P} \cup \inhib{P}$. This allows
us to apply the algebraic theory, previously developed for modelling
interactions in BIP, to model interactions and priorities simultaneously.
In particular, we can synthesise such new connectors from arbitrary
$\ensuremath{\mathbb{B}}[\act{P}, \fire{P}]$ Boolean formul\ae\ (in \cite{BliSif10-causal-fmsd}
we have shown how to synthesise classical connectors from formul\ae\ on
port variables without firing/activation dichotomy).
The equivalence induced by the new operational semantics on the algebras
($x \sim y \bydef{\Leftrightarrow} \intsem{x}(\ensuremath{\mathbf{B}}) = \intsem{y}(\ensuremath{\mathbf{B}})$ for
any finite set of behaviours $\ensuremath{\mathbf{B}}$) is weaker than the standard equivalence
induced by the interaction semantics ($x \simeq y \bydef{\Leftrightarrow}
\aisem{x} = \aisem{y}$). Extending the axioms of the Algebra of Causal
Interaction Trees accordingly, we define normal forms for connectors and
causal interaction trees. This, in turn, allows us to simplify the causal
rule representation, by considering only rules with firing effects.
Algebra extensions are illustrated on a connector synthesis example.
In this paper, we have only extended the axiomatisation of $\ensuremath{\mathcal{T\!}}(\act{P}
\cup \fire{P} \cup \inhib{P})$. Studying corresponding extensions for the
axiomatisations of other algebras, as well as their completeness, could be
part of future work. More urgently, we intend to study the differences
between the classical BIP semantics and the offer variation. For example,
it is clear that the two semantics are equivalent on flat models. The
divergence on hierarchical models remains to be characterised.
\bibliographystyle{eptcs}
\section{Introduction}
The populous family of compactness theorems was established by Arzel\`a and Ascoli. Our starting point, the Kolmogorov-Riesz theorem, characterizes precompact (totally bounded) sets of $L^p(\mathbb{R}^n)$, see e.g. \cite{a}. Besides being interesting in itself, such a characterization has several applications to differential and integral equations. Compactness criteria were studied in particular non-standard function spaces, e.g. in Sobolev spaces \cite{hoh}, variable Lebesgue spaces \cite{gma}, or weighted variable exponent amalgam and Sobolev spaces \cite{au}, and also in more general circumstances, see e.g. \cite{f}, \cite{r} and \cite{gr}. Sudakov-type improvements of the classical theorem are derived in \cite{hohm} (see \cite{s} as well).\\
The useful criterion of compactness given by Pego via the Fourier transformation (see \cite{pe}) has also had great influence, see e.g. \cite{dfg}, \cite{g}, \cite{gk}. In the inspiring paper \cite{k}, a compactness criterion is given via the Laplace transformation.
Below we offer a new aspect of characterization of precompact sets in certain Banach spaces. Instead of deriving similar theorems by related, for instance Mellin or cosine transformations, we investigate and extend the notion of equicontinuity. Motivated by the effect of translation, $\tau_yf(x)=f(x-y)$, on exponential functions, different translation operators were introduced by orthogonal systems $\{\varphi_n\}_n$ as $T_y\varphi_n(x)=\varphi_n(x)\varphi_n(y)$. Hereinafter we deal with Laguerre and Bessel translations.
In the next section a Kolmogorov-Riesz type theorem is derived, by means of Laguerre translation, for weighted $L^p$ spaces on the half-line. The corresponding transformation, the discrete Laguerre-Fourier transformation, implies a remarkable simplification, since the structure of the corresponding $l^{p'}$ spaces is simpler than that of the original $L^p$ ones. In this section we also introduce and study Laguerre translation on sequences.\\
In the third section the method is presented by Bessel translation. The corresponding transformation is the Hankel (Bessel-Fourier) transformation which establishes a Pego-type theorem.
\medskip
\section{Laguerre translation method}
Laguerre translation is developed by product formulae, see e.g. \cite{w}, and by investigation of the related Cauchy problem, see e.g. \cite{bs}. We mention that it is a natural idea to handle Bessel and Laguerre translations in parallel, since the translated functions in both cases are the solutions to very similar Cauchy problems, $u_{xx}-u_{tt}+\frac{2\alpha+1}{x}u_x - \frac{2\alpha+1}{t}u_t-ru=0$, where $r(x,t)=0$ in the Bessel case and $r(x,t)=x^2-t^2$ in the Laguerre case. The derived convolution structure is examined e.g. in \cite{gm}. The norm of the translation operator ensures a maximum principle for the corresponding hyperbolic problem, and it implies Nikol'skii-type norm estimates on the half-line, see \cite{adh}. Convolution is applicable to the study of best approximation in certain spaces, see \cite{gm1}. In this section compactness is investigated by Laguerre translation. The main result of the section is the characterization of precompact sets in $L^2_\alpha$.
\subsection{Laguerre translation and precompact sets in $L^p_\alpha$}
We introduce weighted $L^p$ spaces on the half-line.\\ Let $1\le p \le \infty$ and $\alpha>-1$. A function $f: \mathbb{R}_+ \to \mathbb{R}$ belongs to $L^p_\alpha$ if
$$\|f\|_{p,\alpha} := \left(\int_0^\infty \left|f(x) e^{-\frac{x}{2}}\right|^px^\alpha dx\right)^{\frac{1}{p}}=\left(\int_0^\infty |\tilde{f}|^pd\mu_\alpha\right)^{\frac{1}{p}}<\infty,$$
where $\tilde{f}(x):=f(x)e^{-\frac{x}{2}}$ and $d\mu_\alpha:=x^\alpha dx$. For $p=\infty$,
$$\|f\|_{\infty,\alpha}=\|\tilde{f}\|_\infty$$
independently of $\alpha$.
The translation operator acting on the space given above can be defined as follows, see e.g. \cite{gm}.\\
Let $\alpha>-\frac{1}{2}$. Then
\begin{equation}\label{transl}
T_t^\alpha(f,x)=c_{\alpha}\int_0^\pi f(x+t+2\sqrt{xt}\cos\vartheta)e^{-\sqrt{xt}\cos\vartheta}j_{\alpha-\frac{1}{2}}(\sqrt{xt}\sin\vartheta)\sin^{2\alpha}\vartheta\, d\vartheta,
\end{equation}
where $c_{\alpha}=\frac{\Gamma(\alpha+1)}{\Gamma\left(\alpha+\frac{1}{2}\right)\Gamma\left(\frac{1}{2}\right)}$ and $j_{\alpha-\frac{1}{2}}$ is the entire Bessel function, cf. \eqref{B}.
By symmetry of the definition in $t,x \ge 0$,
$$T_t^\alpha(f,x)=T_x^\alpha(f,t),$$
and again by definition
$$T_t^\alpha(f,0)=f(t), \hspace{4pt} \hspace{4pt} \hspace{4pt} T_0^\alpha(f,x)=f(x).$$
Denoting by $L_n^\alpha(x)$ the $n^{th}$ Laguerre polynomial orthogonal on $(0,\infty)$ with respect to the Laguerre weight, that is
$$\int_0^\infty L_n^\alpha(x)L_k^\alpha(x)e^{-x}x^\alpha dx =\Gamma(\alpha+1)\binom{n+\alpha}{n}\delta_{n,k},$$
and taking into consideration that
$$L_n^\alpha(0)=\binom{n+\alpha}{n}=w(\alpha,n)=:w(n),$$
we set
$$R_n^\alpha(x)=\frac{L_n^\alpha(x)}{\binom{n+\alpha}{n}}.$$
Thus $R_n^\alpha(0)=1$ and
\begin{equation}\label{Rnorm} \left\|\sqrt{\frac{w(n)}{\Gamma(\alpha+1)}}R_n^\alpha\right\|_{2,\alpha}=1, \hspace{4pt} \hspace{4pt} \|\tilde{R}_n^\alpha\|_\infty=\tilde{R}_n^\alpha(0)=1.\end{equation}
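The orthogonality relation and the normalisation \eqref{Rnorm} can be verified numerically. The following Python fragment, an illustration using SciPy and not part of the formal text, evaluates the weighted inner products by quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre, gamma, binom

def laguerre_inner(n, k, alpha):
    # <L_n^a, L_k^a> with respect to the Laguerre weight e^{-x} x^a on (0, inf)
    integrand = lambda x: (eval_genlaguerre(n, alpha, x)
                           * eval_genlaguerre(k, alpha, x)
                           * np.exp(-x) * x**alpha)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

alpha = 0.5
# diagonal entry should be Gamma(alpha+1)*binom(n+alpha, n), off-diagonal 0
d = laguerre_inner(3, 3, alpha)
o = laguerre_inner(3, 1, alpha)
```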
With this notation we have that
\begin{equation}\label{rr}T_t^\alpha(R_n^\alpha,x)=R_n^\alpha(x)R_n^\alpha(t),\end{equation}
see e.g. \cite{gm}.
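Identity \eqref{rr} also lends itself to a direct numerical test. The Python sketch below implements \eqref{transl} with SciPy; it assumes the entire Bessel function in the form $j_\lambda(u)=\Gamma(\lambda+1)(u/2)^{-\lambda}J_\lambda(u)$, normalised so that $j_\lambda(0)=1$, and the helper names are mine.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, gamma, binom, eval_genlaguerre

def j_entire(lam, u):
    # assumed normalisation of the entire Bessel function: j_entire(lam, 0) = 1
    if u == 0.0:
        return 1.0
    return gamma(lam + 1.0) * (u / 2.0) ** (-lam) * jv(lam, u)

def translation(f, t, x, alpha):
    # Laguerre translation T_t^alpha(f, x), formula (transl)
    s = np.sqrt(x * t)
    c = gamma(alpha + 1.0) / (gamma(alpha + 0.5) * gamma(0.5))
    integrand = lambda th: (f(x + t + 2.0 * s * np.cos(th))
                            * np.exp(-s * np.cos(th))
                            * j_entire(alpha - 0.5, s * np.sin(th))
                            * np.sin(th) ** (2.0 * alpha))
    val, _ = quad(integrand, 0.0, np.pi)
    return c * val

n, alpha, x, t = 2, 1.0, 0.7, 1.3
R = lambda y: eval_genlaguerre(n, alpha, y) / binom(n + alpha, n)
lhs = translation(R, t, x, alpha)   # T_t^alpha(R_n^alpha, x)
rhs = R(x) * R(t)                   # product side of (rr)
```

The same routine also confirms that the translation preserves constant functions, consistently with $T_t^\alpha(R_0^\alpha,x)=1$.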
According to \cite[Theorem 1]{gm}, for all $\alpha\ge 0$ and $1\le p \le \infty$, considering the translated function as a function of
\begin{equation}\label{trnorm}\|e^{-\frac{t}{2}}T_t^\alpha(f,x)\|_{p,\alpha}\le \|f\|_{p,\alpha}.\end{equation}
The corresponding convolution is
\begin{equation}\label{colag}(f*g)(t):=\int_0^\infty T_t^\alpha(f,x)g(x)e^{-x}x^\alpha dx.\end{equation}
Again by \cite[Theorem 1]{gm}, for all $\alpha\ge 0$ and $1\le p, q, r \le \infty$ with $\frac{1}{r}=\frac{1}{p}+\frac{1}{q}-1$, if $f\in L^p_\alpha$ and $g \in L^q_\alpha$, then
\begin{equation}\|f*g\|_{r,\alpha} \le \|f\|_{p,\alpha}\|g\|_{q,\alpha}.\end{equation}
The first theorem of this section is a Kolmogorov-Riesz type theorem, in which the standard equicontinuity property is replaced by one based on Laguerre translation.
\medskip
\begin{theorem}\label{lpr} Let $1\le p <\infty$ and $\alpha \ge 0$. A bounded set $K\subset L^p_\alpha$ is precompact if and only if the properties below are fulfilled.\\
${\bf P_a}.$ For all $\varepsilon >0$ there is an $R>0$ such that for all $f\in K$
\begin{equation}\label{farok} \left(\int_R^\infty |\tilde{f}|^pd\mu_\alpha\right)^{\frac{1}{p}}<\varepsilon. \end{equation}
${\bf P_b.}$ For all positive numbers $\varepsilon$ and $M_0$ there is a $\delta>0$ such that for all $0\le t \le M_0$, $0\le h \le \delta$ and $f\in K$
\begin{equation}\label{leqf}\left(\int_0^\infty \left|\left(T_{t+h}^\alpha f(x)-T_{t}^\alpha f(x)\right)e^{-\frac{x}{2}}\right|^pd\mu_\alpha(x)\right)^{\frac{1}{p}} <\varepsilon. \end{equation}
\end{theorem}
\proof
First we assume that $K$ is precompact. Then for any $\varepsilon>0$ there is a finite $\frac{\varepsilon}{2}$-net, $H_{\frac{\varepsilon}{2}}=\{u_1,\dots , u_j\}$, in $K$ $(j=j(\varepsilon))$. Since $C_0$, the space of continuous functions with compact support, is dense in $L^p_\alpha$, there is an $S_\varepsilon \subset C_0$, $S_\varepsilon =\{\Phi_1, \dots ,\Phi_j\}$, such that $\|u_l-\Phi_l\|_{p,\alpha}<\frac{\varepsilon}{2}$ for all $u_l \in H_{\frac{\varepsilon}{2}}$. That is, there is an $R=R_\varepsilon >0$ such that the closed ball $\overline{B(0,R)}$ contains the support of each $\Phi \in S_\varepsilon$, which ensures ${\bf P_a}$.
To prove ${\bf P_b}$, take an $f\in K$ and $t \in [0,M_0]$ with some $M_0$, and let $\Phi$ be the element of $S_\varepsilon$ closest to $f$, with $\mathrm{supp}\, \Phi \subset (0,R)$. In view of \eqref{transl}
$$\left(\int_0^\infty \left|\left(T_{t+h}^\alpha \Phi(x)-T_{t}^\alpha \Phi(x)\right)e^{-\frac{x}{2}}\right|^pd\mu_\alpha(x)\right)^{\frac{1}{p}}$$ $$=\left(\int_0^\infty \left|c_\alpha\int_0^\pi \left(\Phi(x+t+h+2\sqrt{x(t+h)}\cos\vartheta)e^{-\sqrt{x(t+h)}\cos\vartheta}j_{\alpha-\frac{1}{2}}(\sqrt{x(t+h)}\sin\vartheta)\right.\right.\right.$$ $$\left.\left.\left.-\Phi(x+t+2\sqrt{xt}\cos\vartheta)e^{-\sqrt{xt}\cos\vartheta}j_{\alpha-\frac{1}{2}}(\sqrt{xt}\sin\vartheta)\right)\sin^{2\alpha}\vartheta d\vartheta e^{-\frac{x}{2}}\right|^p x^\alpha dx\right)^{\frac{1}{p}}=(*).$$
For the sake of simplicity, let us write
$$\Psi(x,t,\vartheta):= \Phi(x+t+2\sqrt{xt}\cos\vartheta)e^{-\sqrt{xt}\cos\vartheta}j_{\alpha-\frac{1}{2}}(\sqrt{xt}\sin\vartheta).$$
Recalling that $\mathrm{supp}\,\Phi \subset (0,R)$ and $t\in [0, M_0]$, if $\sqrt{R}<|\sqrt{x}-\sqrt{t}|$ then $\Psi(x,t,\vartheta)=0$; that is, if $\sqrt{x}>\sqrt{M_0}+\sqrt{R}$ then $\Psi(x,t,\vartheta)=0$. Considering that the differences of the arguments of the three functions above are at most $2\sqrt{xh}\le c(R, M_0)\sqrt{h}$, and that $\Psi(x,t,\vartheta)$ is compactly supported and continuous, there is a $\delta>0$ such that if $0\le h\le \delta$
$$\left|\Psi(x,t+h,\vartheta)-\Psi(x,t,\vartheta)\right|\le\varepsilon$$
and so
$$(*)\le \varepsilon \left(\int_0^\infty e^{-p\frac{x}{2}} x^\alpha dx\right)^{\frac{1}{p}}.$$
Then, because the set of $\Phi$-s in question is finite, we can choose $R$ and $\delta$ uniformly. Since the norm of the translation is bounded on $[0,M_0]$, cf. \eqref{trnorm} (and it is a linear operator), we can finish this part with the triangle inequality.
Conversely, assuming ${\bf P_a}$ and ${\bf P_b}$, let
$$V_af(x):=\frac{1}{A}\int_0^aT_t^\alpha f(x)e^{-\frac{t}{2}}t^\alpha dt, \quad A=\int_0^a e^{-\frac{t}{2}}t^\alpha dt,$$
if $0\le x \le R$ with some finite $R$, and define $V_af(x)=0$ if $x>R$. Then, applying H\"older's inequality and the symmetry of translation
\begin{equation}\label{maeq}|V_af(x+u)-V_af(x)|\le \frac{1}{A}\int_0^a|T_t^\alpha f(x+u)-T_t^\alpha f(x)|e^{-\frac{t}{2}}t^\alpha dt\end{equation} $$\le \frac{1}{A^{\frac{1}{p}}}e^{\frac{a}{2p'}}\left(\int_0^\infty|T_{x+u}^{\alpha}f(t)-T_x^{\alpha}f(t)|^pe^{-\frac{pt}{2}}t^\alpha dt\right)^{\frac{1}{p}},$$
and similarly
\begin{equation}\label{V}|V_af(x)|\le \frac{1}{A^{\frac{1}{p}}}e^{\frac{a}{2p'}}\left(\int_0^\infty|T_x^{\alpha}f(t)|^pe^{-\frac{pt}{2}}t^\alpha dt\right)^{\frac{1}{p}}.\end{equation}
Let
$$F_{a,R}:=\left\{V_af(x): f \in K, \hspace{4pt} x \le R\right\}.$$
By \eqref{V} and \eqref{trnorm}
$$|e^{-\frac{x}{2}}V_af(x)|\le \frac{1}{A^{\frac{1}{p}}}e^{\frac{a}{2p'}}\|f\|_{p,\alpha}.$$
Thus with fixed $a$ and $R$, $F_{a,R}$ is bounded (by $M \frac{1}{A^{\frac{1}{p}}}e^{\frac{a}{2p'}+\frac{R}{2}}$ if $K$ is bounded by $M$) and (choosing $M_0=R$) by \eqref{maeq} and assumption ${\bf P_b}$ it is equicontinuous. Thus for an arbitrary $\varepsilon>0$ there is an $\varepsilon$-net, $V_af_1, \dots, V_af_n$, in $F_{a,R}$ such that $f_i \in K$, $i=1, \dots , n$.\\
Let $f\in K$ be arbitrary, and for an $\varepsilon>0$ choose $R$ according to property ${\bf P_a}$. Then again, by the H\"older and Fubini theorems,
$$\| V_af-f\|_{p,\alpha}$$ $$\le \left(\int_0^R\left|e^{-\frac{x}{2}}\int_0^a\frac{1}{A}\left(T_t^{\alpha}f(x)-f(x)\right)e^{-\frac{t}{2}}t^\alpha dt\right|^px^\alpha dx\right)^{\frac{1}{p}}+\left(\int_R^\infty\left|e^{-\frac{x}{2}}f(x)\right|^px^\alpha dx\right)^{\frac{1}{p}}$$
$$\le \varepsilon +e^{\frac{a}{2p'}}\left(\frac{1}{A}\int_0^\infty e^{-\frac{px}{2}}\int_0^a\left|T_t^{\alpha}f(x)-f(x)\right|^pe^{-\frac{t}{2}}t^\alpha dt x^\alpha dx\right)^{\frac{1}{p}}$$ $$\le \varepsilon +e^{\frac{a}{2p'}}\left(\frac{1}{A}\int_0^ae^{-\frac{t}{2}}t^\alpha \int_0^\infty e^{-\frac{px}{2}}\left|T_t^{\alpha}f(x)-f(x)\right|^px^\alpha dx dt\right)^{\frac{1}{p}}$$ $$\le \varepsilon +e^{\frac{a}{2p'}}\sup_{0\le t \le a}\left(\int_0^\infty e^{-\frac{px}{2}}\left|T_t^{\alpha}f(x)-f(x)\right|^px^\alpha dx\right)^{\frac{1}{p}}.$$
Thus, choosing $a$ small enough according to property ${\bf P_b}$ (assume that $a<1$), we get $\| V_af-f\|_{p,\alpha}<2\varepsilon$. \\
With these chosen $a$ and $R$ construct $F_{a,R}$ and for the previous $\varepsilon$ and $f$, from the $\varepsilon$-net in question choose $V_af_i$ such that $|V_af(x)-V_af_i(x)|<\varepsilon$ on $[0, R]$. Then we have
$$\| V_af-V_af_i\|_{p,\alpha}= \left(\int_0^R\left|e^{-\frac{x}{2}}( V_af(x)-V_af_i(x))\right|^px^\alpha dx\right)^{\frac{1}{p}}
$$ $$\le \varepsilon \left(\int_0^\infty e^{-\frac{px}{2}}x^\alpha dx\right)^{\frac{1}{p}}\le\varepsilon c(\alpha,p).$$
That is, by the triangle inequality, $\{f_i\}_{i=1}^n$ is a $(4+c(\alpha,p))\varepsilon$-net in $K$.
\medskip
\subsection{Laguerre translation on sequences}
Laguerre translation on sequences has, to the best of my knowledge, not appeared explicitly, only as the convolution of sequences, see e.g. \cite{ag}. The corresponding algebras are investigated in \cite{ka}. Below we derive the translation from the convolution and investigate its properties.
Let $\alpha>-1$. Recalling the notation above we introduce the discrete weights
$$w(k)=w(\alpha,k)=\binom{k+\alpha}{k},$$
and the space of real sequences $a=\{a(k)\}_{k=0}^\infty$. For $1\le p<\infty$, $a\in l^p_\alpha$ if
$$\|a\|_{p,\alpha} :=\left(\sum_{k=0}^\infty|a(k)|^pw(k)\right)^{\frac{1}{p}}<\infty.$$
$$\|a\|_{\infty,\alpha}=\|a\|_{\infty}$$
independently of $\alpha$.
If $a\in l^p_\alpha$ and $b\in l^{p'}_\alpha$, we write
$$\langle a,b\rangle =\sum_{k=0}^\infty a(k)b(k)w(k).$$
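As a side remark, these discrete objects are easy to experiment with numerically. The sketch below (Python with NumPy/SciPy; an illustration, not part of the paper, and the concrete truncated sequences are arbitrary test data) sets up the weights $w(\alpha,k)$, the $l^p_\alpha$ norm and the pairing, and checks H\"older's inequality $|\langle a,b\rangle|\le\|a\|_{p,\alpha}\|b\|_{p',\alpha}$.

```python
import numpy as np
from scipy.special import binom

alpha, p = 1.5, 1.5
pp = p / (p - 1.0)                      # conjugate exponent p'

k = np.arange(50)
w = binom(k + alpha, k)                 # discrete weights w(alpha, k) = C(k+alpha, k)

# arbitrary rapidly decaying test sequences (truncated at 50 terms)
rng = np.random.default_rng(0)
a = rng.standard_normal(50) / (k + 1.0) ** 2
b = rng.standard_normal(50) / (k + 1.0) ** 2

norm = lambda c, q: float((np.abs(c) ** q * w).sum() ** (1.0 / q))
pairing = float((a * b * w).sum())

print(abs(pairing) <= norm(a, p) * norm(b, pp))   # Hölder in l^p_alpha
```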
\noindent {\bf Remark. }
Certainly, in the discrete case the criterion of precompactness is simpler, cf. \cite[Theorem 4]{hoh}. It is as follows.
For any $1\le p <\infty$ and $\alpha \ge 0$ a set $K \subset l^p_\alpha$ is precompact if and only if it is pointwise bounded (i.e. for all $n\in\mathbb{N}$ there is an $M(n)$ such that for all $a\in K$, $|a(n)|\le M(n)$), and the next property is fulfilled.
\medskip
${\bf P_{as}.}$ For all $\varepsilon >0$ there is an $N\in\mathbb{N}$ such that for all $a\in K $
\begin{equation}\label{afarok}\left(\sum_{k=N+1}^\infty |a(k)|^pw(k)\right)^{\frac{1}{p}} <\varepsilon. \end{equation}
Indeed, suppose that $K$ is precompact. Then it is obviously bounded in $l^p_\alpha$. Since $w(k)\ge 1$, it is pointwise bounded as well. For $\frac{\varepsilon}{2}$ it has a finite $\frac{\varepsilon}{2}$-net, $b_1, \dots, b_n$, say. Since the finite sequences are dense in $l^p_\alpha$, there are finite sequences $s_i$ such that $\|b_i-s_i\|_{p,\alpha}<\frac{\varepsilon}{2}$, $i=1, \dots, n$. Thus the maximal length of the $s_i$'s is an appropriate choice of $N$.
Assume now that $K$ is pointwise bounded and fulfils ${\bf P_{as}}$. Choose an $N$ for an arbitrary $\varepsilon$ ensured by ${\bf P_{as}}$. Let $S_N:=\{a^N:=(a(0), \dots, a(N)) : a\in K\}$ be the set of the $(N+1)$-long initial parts of the sequences in $K$. Then the distance of $K$ and $S_N$ is at most $\varepsilon$. Because $S_N$ is finite dimensional it is bounded in $l^p_\alpha$ ($S_N$ is bounded by $cM_N N^{\alpha+1}$, where $M_N:=\max_{n=1}^N M(n)$) and (again by being finite dimensional) $S_N$ contains a finite $\varepsilon$-net, and the corresponding sequences form a $2\varepsilon$-net in $K$.
\medskip
To define translation on the spaces of sequences, our starting point is the convolution defined in \cite{ag}.\\ Let $\alpha >\alpha_0=\frac{-5+\sqrt{17}}{2}$. Then by \cite[Theorem 1]{ag}
\begin{equation}\label{gam}\gamma(n,m,k):=\gamma(n,m,k,\alpha)=\int_0^\infty R_n^\alpha(x)R_m^\alpha(x)R_k^\alpha(x)e^{-2x}x^\alpha dx >0,\end{equation}
for all $k, m, n \in \mathbb{N}$. In view of \cite[(4.2)]{ag}
\begin{equation}\label{sgam}\sum_{k=0}^\infty \gamma(n,m,k)w(k)=1\end{equation}
for all $m, n \in \mathbb{N}$.
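The positivity \eqref{gam} and the normalization \eqref{sgam} lend themselves to a direct numerical check. The sketch below (Python with SciPy; an illustration, not part of the paper) takes $\alpha=0>\alpha_0$, where $w(k)\equiv 1$ and $R_n^0=L_n$ is the classical Laguerre polynomial with $L_n(0)=1$, computes $\gamma(n,m,k)$ by quadrature, and verifies positivity for small $k$ together with $\sum_k\gamma(n,m,k)w(k)=1$ up to truncation (the coefficients decay geometrically in $k$).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

# alpha = 0 > alpha_0 = (-5 + sqrt(17))/2 ~ -0.44; then w(k) = 1 and R_n = L_n.
alpha = 0.0

def gam(n, m, k):
    """gamma(n, m, k) = int_0^inf R_n R_m R_k e^{-2x} x^alpha dx (here alpha = 0)."""
    f = lambda x: (eval_genlaguerre(n, alpha, x) * eval_genlaguerre(m, alpha, x)
                   * eval_genlaguerre(k, alpha, x) * np.exp(-2.0 * x))
    # |L_n(x)| <= e^{x/2}, so the integrand is bounded by e^{-x/2}: the tail
    # beyond x = 80 is negligible and a finite interval suffices.
    val, _ = quad(f, 0.0, 80.0, limit=300, epsabs=1e-12, epsrel=1e-12)
    return val

n, m = 1, 2
g = [gam(n, m, k) for k in range(61)]

print(min(g[:9]) > 0)            # positivity of gamma(n, m, k) for small k
print(abs(sum(g) - 1.0) < 1e-6)  # sum_k gamma(n, m, k) w(k) = 1, up to truncation
```

The truncation point $61$ and the quadrature tolerances are ad hoc choices; for $\alpha=0$ the identity $\sum_k\gamma(n,m,k)=1$ follows from the orthonormality $\int_0^\infty L_k^2e^{-x}dx=1$ and $L_k(0)=1$.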
Let $a \in l^p_\alpha$, $b \in l^q_\alpha$, where $\frac{1}{p}+\frac{1}{q}\ge 1$. In \cite[(4.5)]{ag} the next convolution is defined.
\begin{equation}\label{aconv}(a*b)(k):=\sum_{m=0}^\infty \sum_{n=0}^\infty a(m)b(n)\gamma(n,m,k)w(n)w(m).\end{equation}
Similarly to the $L^p$-case, if $1\le p,q,r \le \infty$ and $\frac{1}{r}=\frac{1}{p}+\frac{1}{q}-1$,
\begin{equation}\label{cn}\|a*b\|_{r,\alpha} \le \|a\|_{p,\alpha}\|b\|_{q,\alpha},\end{equation}
see \cite[(4.6)]{ag}.
Thus, fixing $\alpha>\alpha_0$, we can define the corresponding translation as
\begin{equation}\label{selt} T_k(a)(n):=\sum_{m=0}^\infty a(m)w(m)\gamma(n,m,k).\end{equation}
By symmetry we immediately have that
\begin{equation} T_k(a)(n)=T_n(a)(k).\end{equation}
So \eqref{aconv} can be written as
\begin{equation}(a*b)(k)=\sum_{n=0}^\infty b(n)T_k(a)(n)w(n)=\langle b, T_k(a)\rangle =\langle a, T_k(b)\rangle.\end{equation}
As on functions, Laguerre translation on sequences is also a bounded operator.
\begin{proposition}\label{Tsn} Let $\alpha>\alpha_0$ and $1\le p \le \infty$. Then for all $k$
$$\|T_k(a)\|_{p,\alpha}\le \|a\|_{p,\alpha}.$$
\end{proposition}
\proof
Let $a \in l^\infty$. In view of \eqref{selt}, \eqref{gam} and \eqref{sgam}
$$|T_k(a)(n)|=|\sum_{m=0}^\infty a(m)w(m)\gamma(n,m,k)|\le \|a\|_\infty \sum_{m=0}^\infty \gamma(n,m,k)w(m)=\|a\|_\infty.$$
If $a \in l^1_\alpha$
$$\sum_{n=0}^\infty |T_k(a)(n)|w(n)=\sup_{b, \|b\|_\infty \le 1}\sum_{n=0}^\infty b(n)T_k(a)(n)w(n)$$ $$=\sup_{b, \|b\|_\infty \le 1}\sum_{m=0}^\infty \sum_{n=0}^\infty a(m)b(n)\gamma(n,m,k)w(n)w(m)=\sup_{b, \|b\|_\infty \le 1}(a*b)(k).$$ According to \eqref{cn}
$$ \|a*b\|_\infty \le \|a\|_{1,\alpha},$$
which implies the statement in $l^1_\alpha$. Finally interpolation ensures the result.
\medskip
To prepare the next subsection, we mention the standard connection between the corresponding spaces of functions and sequences.\\
For a function $f$ in some $L^p_\alpha$ let us denote the corresponding sequence by $\hat{f}=a_f$, where
\begin{equation} \hat{f}(n):=\int_0^\infty \tilde{f}\tilde{R}_n^\alpha d\mu_\alpha, \end{equation}
for $n \in \mathbb{N}$.\\
Let $a$ be a sequence. The corresponding function can be defined as $\check{a}=f_a(x)$, where
$$\tilde{f}_a \sim \sum_{m=0}^\infty a(m)w(m)\tilde{R}_m^\alpha$$
if the series is convergent in some sense.\\
Let $1\le p \le 2$. Then, as usual, if $f\in L^p_\alpha$ then $\hat{f} \in l^{p'}_{\alpha}$, and if $a\in l^{p}_{\alpha}$ then $\check{a}\in L^{p'}_{\alpha}$, and the operators mapping $L^p_\alpha$ to $l^{p'}_{\alpha}$ and $l^p_\alpha$ to $L^{p'}_{\alpha}$ are bounded.\\
Indeed, by interpolation it is a consequence of Parseval's formula and the next inequalities:
$$|\hat{f}(k)|=\left|\int_0^\infty \tilde{f}\tilde{R}^{\alpha}_kd\mu_\alpha\right|\le \|f\|_{1,\alpha},$$
and
$$\|\tilde{f}_a\|_\infty \le\sum_{m=0}^\infty |a(m)|w(m)\|\tilde{R}_m^{\alpha}\|_\infty =\|a\|_{1,\alpha},$$
where \eqref{Rnorm} is considered.
Let $a$ and $b$ be in $l^1_\alpha$, say; then the series $\tilde{f}_a$ and $\tilde{g}_b$ are uniformly convergent. Thus by \eqref{aconv}
\begin{equation}\tilde{f}_a(x)\tilde{g}_b(x)e^{-\frac{x}{2}}=\sum_{k=0}^\infty (a*b)(k)w(k)\tilde{R}_k^\alpha,\end{equation}
that is
\begin{equation}\label{sz}\check{a}\check{b}\sim a*b,\end{equation}
see \cite{ag}.
Similarly \eqref{aconv} ensures that if $\hat{f}=a_f$ then
\begin{equation}\label{Tnk}\widehat{\tilde{f}\tilde{R}_n^{\alpha}}(k)=T_n(a_f)(k).\end{equation}
Moreover \eqref{rr} implies that
\begin{equation}\label{38}\widehat{f*g}(n)=\hat{f}(n)\hat{g}(n), \hspace{4pt} \hspace{4pt} \widehat{ T_t^\alpha(f)}(n)=\hat{f}(n)R_n^\alpha(t),\end{equation}
provided that $\alpha \ge 0$, $f\in L^p_\alpha$ and $g \in L^q_\alpha$ and $\frac{1}{p}+\frac{1}{q}\ge 1$.
\subsection{Pego-type theorem with Laguerre transformation} First we introduce the notion of equicontinuity in mean with respect to sequences.
\medskip
\noindent ${\bf P_{bs}}.$ A set $K \subset l^p_\alpha$ is equicontinuous in mean if for all $\varepsilon>0$ there is an $N \in \mathbb{N}$ such that for all $j>N$ and $a \in K$
\begin{equation}\label{aeqc} \left(\sum_{k=0}^\infty |T_j(a)(k)-a(k)|^pw(k)\right)^{\frac{1}{p}}<\varepsilon.\end{equation}
\medskip
Subsequently we need the next extra property with respect to functions.
\medskip
\noindent ${\bf P_{a0}}.$ A set $K \subset L^p_\alpha$ is equivanishing at zero if for all $\varepsilon>0$ there is a $\delta>0$ such that for all $f\in K$
\begin{equation}\label{eqv0}\left(\int_0^\delta |\tilde{f}|^pd\mu_\alpha\right)^{\frac{1}{p}}\le \varepsilon.\end{equation}
\medskip
After this preparation we are in a position to state a Pego-type theorem.
\medskip
\begin{theorem}\label{lL} Let $1\le p \le 2$ and $\alpha \ge 0$. \\
$(\mathrm{a})$ If $K \subset l^p_\alpha$ is bounded and fulfils ${\bf P_{as}}$, then $\check{K} \subset L^{p'}_\alpha$ fulfils ${\bf P_{b}}$.\\
$(\mathrm{b})$ If $K \subset L^p_\alpha$ fulfils ${\bf P_{b}}$, then $\hat{K} \subset l^{p'}_\alpha$ fulfils ${\bf P_{as}}$.\\
$(\mathrm{c})$ If $K \subset l^p_\alpha$ fulfils ${\bf P_{bs}}$, then $\check{K} \subset L^{p'}_\alpha$ fulfils ${\bf P_{a}}$.\\
$(\mathrm{d})$ If in addition $\alpha >\frac{1}{2}$, $K \subset L^p_\alpha$ is bounded and fulfils ${\bf P_{a}}$ and ${\bf P_{a0}}$, then $\hat{K} \subset l^{p'}_\alpha$ fulfils ${\bf P_{bs}}$.
\end{theorem}
\proof
$(\mathrm{a})$: Let $f=f_a \in \check{K} \subset L^{p'}_\alpha$. Then by \eqref{38} and \eqref{Rnorm}
$$\left(\int_0^\infty\left|\left(T_{t+h}^{\alpha}f(x)-T_t^{\alpha}f(x)\right|e^{-\frac{x}{2}}\right)^{p'}d\mu_\alpha(x)\right)^{\frac{1}{p'}}$$ $$\le c\left(\sum_{k=0}^\infty\left|a(k)\left(R_k^{\alpha}(t+h)-R_k^{\alpha}(t)\right)\right|^pw(k)\right)^{\frac{1}{p}}$$ $$\le c\left(\sum_{k=0}^N\left|a(k)\left(R_k^{\alpha}(t+h)-R_k^{\alpha}(t)\right)\right|^pw(k)\right)^{\frac{1}{p}}+2ce^{\frac{t}{2}}\left(\sum_{k=N+1}^\infty |a(k)|^pw(k)\right)^{\frac{1}{p}}$$ $$=S_1+S_2.$$
Since $\left\|\left(R_k^{\alpha}\right)'\right\|_\infty \le ck \left\|R_{k-1}^{(\alpha+1)}\right\|_\infty $, see \cite[(5.1.14)]{sz},
$$S_1\le cNMe^{\frac{t}{2}}h,$$
where $K$ is bounded by $M$. Thus, recalling that $t\in [0,M_0]$, by ${\bf P_{as}}$ we have $S_2\le \frac{\varepsilon}{2}$ if $N$ is large enough, and then we can choose $\delta$ so small that $S_1\le \frac{\varepsilon}{2}$ if $0\le h \le \delta$.\\
$(\mathrm{b})$: Let $f\in K$. Take a function $g\in C_0$ such that\\ $g\ge 0$, $\mathrm{supp}\, g\subset [0,\delta]$ and $\int_0^\infty g(x)e^{-x}d\mu_\alpha(x)=1$. As $\hat{g}(k) \to 0$, we can choose $N$ so large that $|\hat{g}(k)|<\frac{1}{2}$ if $k\ge N$. Thus applying \eqref{38}, \eqref{colag}, the Minkowski inequality and the symmetry of translation we have
$$\left(\sum_{k=N}^\infty |\hat{f}(k)|^{p'}w(k)\right)^{\frac{1}{p'}}\le 2 \left(\sum_{k=N}^\infty |\hat{f}(k)(1-\hat{g}(k))|^{p'}w(k)\right)^{\frac{1}{p'}}$$ $$\le c\left(\int_0^\infty\left|(f(t)-f*g(t))e^{-\frac{t}{2}}\right|^pd\mu_\alpha(t)\right)^{\frac{1}{p}}$$ $$=c\left(\int_0^\infty\left|\left(f(t)-\int_0^\infty T_t^{\alpha}f(x)g(x)e^{-x}d\mu_\alpha(x)\right)e^{-\frac{t}{2}}\right|^pd\mu_\alpha(t)\right)^{\frac{1}{p}}$$ $$\le c\int_0^\infty\left(\int_0^\infty \left(|f(t)-T_x^{\alpha}f(t)|e^{-\frac{t}{2}}\right)^pd\mu_\alpha(t)\right)^{\frac{1}{p}}g(x)e^{-x}d\mu_\alpha(x)$$ $$\le c\sup_{0\le x \le \delta}\|f-T_x^{\alpha}f\|_{p,\alpha}.$$
$(\mathrm{c})$: Let $N$ be the appropriate index for $\varepsilon$ in ${\bf P_{bs}}$, and let us define a finite sequence $b=(b(N), \dots, b(N+n))$ such that $b(k)\ge 0$, $k=N,\dots, N+n$ and $\sum_{k=N}^{N+n}b(k)w(k)=1$. Let us write $\check{b}=g$ and, for $a\in K$, $\check{a}=f$. For $\varepsilon>0$ we can take an appropriate initial part of $a$, $a_L:=(a(0),a(1), \dots, a(L))$, and $\check{a}_L=p_L$ such that $\|f-p_L\|_{p',\alpha}\le \varepsilon $.\\
Since $\lim_{x\to\infty}g(x)e^{-x}=0$, there is an $R$ such that $|g(x)e^{-x}|\le \frac{1}{2}$ if $x>R$. Considering that $|g(x)e^{-x}|\le 1$ on $x\ge 0$, in view of \eqref{sz}, as above
$$\left(\int_R^\infty|\tilde{f}(x)|^{p'}d\mu_\alpha(x)\right)^{\frac{1}{p'}} \le \varepsilon+\left(\int_R^\infty|\tilde{p_L}(x)|^{p'}d\mu_\alpha(x)\right)^{\frac{1}{p'}}$$ $$\le \varepsilon+ 2\left(\int_R^\infty|\tilde{p_L}(x)(1-g(x)e^{-x})|^{p'}d\mu_\alpha(x)\right)^{\frac{1}{p'}}$$ $$=\varepsilon+2\left(\int_R^\infty|\tilde{p_L}(x)-\tilde{p_L}(x)g(x)e^{-x}|^{p'}d\mu_\alpha(x)\right)^{\frac{1}{p'}}\le \varepsilon+c\|a_L-a_L*b\|_{p,\alpha}$$ $$=\varepsilon+c\left(\sum_{k=0}^L\left|a(k)-\sum_{j=N}^{N+n}b(j)T_k(a)(j)w(j)\right|^pw(k)\right)^{\frac{1}{p}}$$ $$\le\varepsilon+c\left(\sum_{k=0}^\infty\left|\sum_{j=N}^{N+n}\left[a(k)-T_j(a)(k)\right]b(j)w(j)\right|^pw(k)\right)^{\frac{1}{p}}$$ $$\le \varepsilon+c \sup_{j\ge N}\left(\sum_{k=0}^\infty |a(k)-T_j(a)(k)|^pw(k)\right)^{\frac{1}{p}}\le(1+c)\varepsilon.$$
$(\mathrm{d})$: According to \eqref{Tnk} and \eqref{Rnorm}
$$\left(\sum_{k=0}^\infty |a(k)-T_j(a)(k)|^{p'}w(k)\right)^{\frac{1}{p'}}=\left(\sum_{k=0}^\infty \left|\hat{f}(k)-\widehat{\tilde{R}_j^{\alpha}\tilde{f}}(k)\right|^{p'}w(k)\right)^{\frac{1}{p'}}$$ $$\le c\left(\int_0^\infty \left|f(x)\left(1- R_j^{\alpha}(x)e^{-x}\right)e^{-\frac{x}{2}}\right|^pd\mu_\alpha(x)\right)^{\frac{1}{p}}\le 2c\left(\int_0^\delta \left|f(x)e^{-\frac{x}{2}}\right|^pd\mu_\alpha(x)\right)^{\frac{1}{p}}$$ $$+ c\left(\int_\delta^R \left|f(x)\left(1- R_j^{\alpha}(x)e^{-x}\right)e^{-\frac{x}{2}}\right|^pd\mu_\alpha(x)\right)^{\frac{1}{p}}$$ $$+2c\left(\int_R^\infty \left|f(x)e^{-\frac{x}{2}}\right|^pd\mu_\alpha(x)\right)^{\frac{1}{p}}=I+II+III.$$
By the assumption $I+III\le 4c\varepsilon$. To estimate $II$ we consider that
$$\left(R_j(x)e^{-x}\right)'=e^{-x}\frac{-L_{j-1}^{(\alpha+1)}(x)-L_j^{\alpha}(x)}{\binom{j+\alpha}{j}}=\frac{-L_{j}^{(\alpha+1)}(x)e^{-x}}{\binom{j+\alpha}{j}},$$ see \cite[(5.1.13)]{sz}. In view of \cite[(2.8)]{m}
\begin{equation}\label{infn}\tilde{R}_n^{\alpha}(x)x^{\frac{\alpha}{2}}\le \frac{C}{n^{\frac{\alpha}{2}}} \left\{\begin{array}{ll}(x\nu)^{\frac{\alpha}{2}}, \hspace{4pt} \hspace{4pt} 0\le x\le \frac{1}{\nu}\\
(x\nu)^{-\frac{1}{4}}, \hspace{4pt} \hspace{4pt} \frac{1}{\nu}< x \le \frac{\nu}{2}\\
\left(\nu\left(\nu^{\frac{1}{3}}+|x-\nu|\right)\right)^{-\frac{1}{4}}, \hspace{4pt} \hspace{4pt} \frac{\nu}{2}< x \le \frac{3\nu}{2}\\
e^{-\gamma x}, \hspace{4pt} \hspace{4pt} \frac{3\nu}{2}<x, \end{array}\right.\end{equation}
where $\nu=4n+2\alpha+2$. Thus
\begin{equation}\label{derb}\frac{\left|L_{j}^{(\alpha+1)}(x)\right|e^{-x}}{\binom{j+\alpha}{j}}\le c\frac{j^{\frac{1}{4}-\frac{\alpha}{2}}}{\delta^{\frac{\alpha+1}{2}+\frac{1}{4}}}, \hspace{4pt}\hspace{4pt} x\in(\delta,R).\end{equation}
Thus, denoting the bound of $K$ with $M$
$$II\le c(\delta)Rj^{\frac{1}{4}-\frac{\alpha}{2}}\|f\|_{p,\alpha}\le c(\delta,R,M)j^{\frac{1}{4}-\frac{\alpha}{2}},$$
which is small if $j$ is large enough.
\medskip
The computation in the proof of (d) ensures the regular behavior of the translation on sequences.
\medskip
\begin{proposition} Let $\alpha >\frac{1}{2}$, $1\le p <\infty$. If $K\subset l^p_\alpha$ is precompact, then $K$ fulfils ${\bf P_{bs}}$.\end{proposition}
\proof
By property ${\bf P_{as}}$ it is enough to prove the statement for finite sets of finite sequences.
Let $S:=\{s_1, \dots, s_n\}$, where $s_i=\{a_{ik}\}_{k=0}^{n_i}$, and let $p_i=\sum_{k=0}^{n_i}a_{ik}R_k^\alpha$. For any $\varepsilon>0$ there are $\delta_i$ and $R_i$ such that $\left(\int_0^{\delta_i}|\tilde{p}_i|^2d\mu_\alpha\right)^{\frac{1}{2}}\le\varepsilon$ and $\left(\int_{R_i}^{\infty}|\tilde{p}_i|^2d\mu_\alpha\right)^{\frac{1}{2}}\le\varepsilon$, $i=1, \dots, n$. Let $R:=R(\varepsilon)=\max_{i=1}^n R_i$, $N:=\max_{i=1}^n n_i$, $\delta:=\delta(\varepsilon)=\min_{i=1}^n \delta_i$, $M:=\max_{i=1}^n\|p_i\|_{2,\alpha}$. With the abbreviation $a_{ik}=\hat{p}_i(k)$, as above we have
$$\left|\hat{p}_i(k)-T_j(\hat{p}_i)(k)\right|=\left|\int_0^\infty \left(p_i(x)-R_j^{\alpha}(x)p_i(x)e^{-x}\right)R_k(x)e^{-x}x^\alpha dx\right|$$ $$\le \|R_k^{\alpha}\|_{2,\alpha}\left(2\left(\int_0^\delta |\tilde{p}_i|^2 d\mu_\alpha \right)^{\frac{1}{2}}+\|p_i\|_{2,\alpha}\sup_{[\delta,R]}\left|1-R_j(x)e^{-x}\right|+2\left(\int_R^\infty |\tilde{p}_i|^2 d\mu_\alpha \right)^{\frac{1}{2}}\right).$$
By \eqref{derb} and \eqref{Rnorm}
$$\left|\hat{p}_i(k)-T_j(\hat{p}_i)(k)\right|\le ck^{-\frac{\alpha}{2}} \left(4\varepsilon+ M R \frac{j^{\frac{1}{4}-\frac{\alpha}{2}}}{\delta^{\frac{\alpha+1}{2}+\frac{1}{4}}}\right)\leq 5c\varepsilon k^{-\frac{\alpha}{2}}$$
if $j$ is large enough. Thus
$$\|\hat{p}_i-T_j(\hat{p}_i)\|_{p,\alpha}\le 5c\varepsilon \left(\sum_{k=0}^Nk^{-p\frac{\alpha}{2}}w(k)\right)^{\frac{1}{p}}\le 5c\varepsilon N^{\frac{\alpha+1}{p}-\frac{\alpha}{2}}.$$
Since $N$ is independent of $\varepsilon$, the bound is arbitrarily small if $j$ is large enough. According to Proposition \ref{Tsn} the proof can be finished with a triangle inequality.
\medskip
The main theorem of this section is the next corollary.
\medskip
\begin{cor} Let $\alpha \ge 0$. A set $K\subset L^2_\alpha$ is precompact if and only if it satisfies ${\bf P_{b}}$, and the corresponding set $\hat{K}$ is pointwise bounded.
\end{cor}
\proof
If $K$ is precompact in $L^2_\alpha$, Theorem \ref{lpr} ensures equicontinuity in mean. Moreover $K$ is bounded in $L^2_\alpha$, so $\hat{K}$ is bounded in $l^2_\alpha$ and, as above, pointwise bounded too.\\
If $K$ is equicontinuous in mean, by Theorem \ref{lL} $\hat{K}$ is equivanishing in $l^2_\alpha$, and the pointwise boundedness ensures precompactness of $\hat{K}$ and of $K$ too.
\medskip
\section{Bessel translation method}
Bessel translation and Hankel transformation have been widely examined by several authors. We mention here only a few examples. One of our main sources is an early paper, \cite{le}. By Bessel translation the modulus of smoothness and the related best approximation can be investigated, see \cite{p}, and Nikol'skii inequalities for entire functions can be proved, see \cite{abdh}. For the Fourier-Bessel transformation, similarly to the standard Fourier transformation, uncertainty results are derived, see \cite{gj}, which are somehow in concordance with Pego-type theorems. Hereinafter we concentrate on compactness criteria.
Let $L_{p,\alpha}$ be the space of measurable functions on $\mathbb{R}_+$ equipped with the norm
$$\|f\|_{p,(\alpha)}=\left(\int_0^\infty|f(x)|^p x^{2\alpha+1}dx\right)^{\frac{1}{p}}, \hspace{4pt} \hspace{4pt} \alpha \ge -\frac{1}{2}, \hspace{4pt} 1\le p <\infty, \hspace{4pt} \hspace{4pt} \hspace{4pt} \|f\|_{\infty,(\alpha)}=\|f\|_{\infty, \mathbb{R}_+}.$$
The dual space of $L_{p,\alpha}$ is denoted by $L_{p',\alpha}$, where
$$\frac{1}{p}+\frac{1}{p'}=1.$$
First we introduce the Bessel translation of an integrable function, see \cite[(5.19)]{le}:
\begin{equation}\label{t} T_{t,\alpha}f(s)=\int_0^\pi f(\sqrt{t^2+s^2-2st\cos\varphi})d\mu(\varphi),\end{equation}
where $d\mu(\varphi)=c_\alpha \sin^{2\alpha}\varphi d\varphi$, and $c_\alpha=\left(\int_0^\pi \sin^{2\alpha}\varphi d\varphi\right)^{-1}$.
The symmetry of the definition implies
\begin{equation}\label{st} T_{t,\alpha}f(s)= T_{s,\alpha}f(t).\end{equation}
We need the entire Bessel functions which are
\begin{equation}\label{B}j_\alpha(z)=\Gamma(\alpha+1)\left(\frac{2}{z}\right)^\alpha J_\alpha(z)=\sum_{k=0}^\infty\frac{(-1)^k\Gamma(\alpha+1)}{\Gamma(k+1)\Gamma(k+\alpha+1)}\left(\frac{z}{2}\right)^{2k}.\end{equation}
Subsequently the next properties will be necessary. The norm of an entire Bessel function on the half-line is attained at zero:
\begin{equation}\label{j1}\|j_\alpha\|_{\infty, \mathbb{R}_+}=j_\alpha(0)=1.\end{equation}
The derivative of $j_\alpha$ can be expressed as
\begin{equation}\label{j2}j_\alpha'(z)=-\frac{1}{2(\alpha+1)}zj_{\alpha+1}(z),\end{equation}
see \cite{be}.
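These properties admit a quick numerical sanity check (Python with SciPy; an illustration, not part of the paper): $j_\alpha$ is implemented directly from \eqref{B} via $J_\alpha$, the classical special case $j_{1/2}(z)=\sin z/z$ is recovered, and the derivative formula \eqref{j2} is confirmed by central differences.

```python
import numpy as np
from scipy.special import jv, gamma

def j(a, z):
    """Entire Bessel function j_a(z) = Gamma(a+1) (2/z)^a J_a(z), with j_a(0) = 1."""
    if abs(z) < 1e-12:
        return 1.0
    return gamma(a + 1.0) * (2.0 / z) ** a * jv(a, z)

z = 1.7
print(abs(j(0.5, z) - np.sin(z) / z))        # special case j_{1/2}(z) = sin(z)/z

# derivative identity j_a'(z) = -z j_{a+1}(z) / (2(a+1)), checked by central differences
a, h = 0.5, 1e-6
num = (j(a, z + h) - j(a, z - h)) / (2.0 * h)
print(abs(num + z * j(a + 1.0, z) / (2.0 * (a + 1.0))))
```

The evaluation point $z=1.7$ and the step $h$ are arbitrary choices.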
As mentioned in the introduction, similarly to the exponential case, the Bessel translation fulfils
$$T_{t,\alpha}j_\alpha(\lambda x)=j_\alpha(\lambda t)j_\alpha(\lambda x),$$
see \cite{le}.
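The product formula can be verified numerically by carrying out the $\varphi$-integral in \eqref{t} directly. The sketch below (Python with SciPy; an illustration, not part of the paper; the parameter values $\alpha=1$, $\lambda=2$, $t=0.7$, $s=1.3$ are arbitrary) compares the quadrature to the right-hand side.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, gamma

alpha = 1.0

def j(a, z):
    # entire Bessel function, j_a(0) = 1
    if abs(z) < 1e-12:
        return 1.0
    return gamma(a + 1.0) * (2.0 / z) ** a * jv(a, z)

# c_alpha = (int_0^pi sin^{2 alpha} phi dphi)^{-1}, so that
# d mu(phi) = c_alpha sin^{2 alpha}(phi) dphi is a probability measure
c_alpha = 1.0 / quad(lambda p: np.sin(p) ** (2 * alpha), 0.0, np.pi)[0]

def translate(f, t, s):
    """Bessel translation T_{t,alpha} f(s), computed by quadrature in phi."""
    g = lambda p: f(np.sqrt(t * t + s * s - 2.0 * t * s * np.cos(p))) \
        * c_alpha * np.sin(p) ** (2 * alpha)
    return quad(g, 0.0, np.pi)[0]

lam, t, s = 2.0, 0.7, 1.3
lhs = translate(lambda x: j(alpha, lam * x), t, s)
rhs = j(alpha, lam * t) * j(alpha, lam * s)
print(abs(lhs - rhs))   # T_{t,alpha} j_alpha(lam .)(s) = j_alpha(lam t) j_alpha(lam s)
```

Since $\mu$ is a probability measure, the same routine also illustrates $T_{t,\alpha}1=1$.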
The operator norm of Bessel translation is one, see \cite[(2.24)]{p}
\begin{equation}\label{opntr}\|T_tf\|_{p,(\alpha)}\le \|f\|_{p,(\alpha)}, \hspace{4pt} \hspace{4pt} 1\le p \le \infty.\end{equation}
The Kolmogorov-Riesz type theorem of this section is as follows.
\medskip
\begin{theorem}\label{best} Let $1\le p < \infty$ and let $K\subset L_{p,\alpha}$ be bounded. Then $K$ is precompact if and only if the two conditions below are fulfilled.\\
${\bf P_A.}$ $K$ is equivanishing, i.e.
\begin{equation}\label{c2} \forall \varepsilon>0 \hspace{4pt} \exists R>0, \hspace{4pt} \forall f\in K \hspace{4pt} \left(\int_R^\infty|f(x)|^px^{2\alpha+1}dx\right)^{\frac{1}{p}}<\varepsilon.\end{equation}
${\bf P_B.}$ $K$ is $L_{p,\alpha}$-equicontinuous, i.e. $\forall \varepsilon>0$ and $\forall$ $M_0>0$ there is a $\delta>0$ such that $\forall$ $0\le h \le \delta$, $\forall$ $f\in K$, $\forall$ $t\in[0,M_0]$
\begin{equation}\label{c3} \left(\int_0^\infty|T_{t+h,\alpha}f(s)-T_{t,\alpha}f(s)|^p s^{2\alpha+1}ds\right)^{\frac{1}{p}}<\varepsilon.\end{equation}
\end{theorem}
\proof
To prove property ${\bf P_A}$ we can proceed just as in the proof of Theorem \ref{lpr}.\\
To prove ${\bf P_B}$, with the notation of Theorem \ref{lpr} we take an $f\in L_{p,\alpha}$ and a $\Phi$ in $S_{\frac{\varepsilon}{3}}$ closest to $f$.\\
Denoting $(s,t,\varphi):=\sqrt{s^2+t^2-2st\cos\varphi}$, in view of \eqref{t}
$$\|T_{t+h,\alpha} \Phi- T_{t,\alpha}\Phi\|_{p,(\alpha)}$$ $$=\left(\int_0^\infty \left|\int_0^\pi \left(\Phi(s,t+h,\varphi)-\Phi(s,t,\varphi)\right)d\mu(\varphi)\right|^ps^{2\alpha+1}ds\right)^{\frac{1}{p}}$$ $$\le \int_0^\pi \left(\int_0^\infty \left|\Phi(s,t+h,\varphi)-\Phi(s,t,\varphi)\right|^ps^{2\alpha+1}ds\right)^{\frac{1}{p}}d\mu(\varphi).$$
Since $||s-t|-h|\le (s,t,\varphi),(s,t+h,\varphi)\le s+t+h$, recalling that $\Phi \in S_{\frac{\varepsilon}{3}}$ and $t<M_0$
$$\|T_{t+h,\alpha} \Phi- T_{t,\alpha}\Phi\|_{p,(\alpha)}\le \int_0^\pi \left(\int_0^R \left|\Phi(s,t+h,\varphi)-\Phi(s,t,\varphi)\right|^ps^{2\alpha+1}ds\right)^{\frac{1}{p}}d\mu(\varphi),$$
where $R=R_{\frac{\varepsilon}{3}}+M_0+1$.\\
$|\sqrt{s^2+(t+h)^2-2s(t+h)\cos\varphi}-\sqrt{s^2+t^2-2st\cos\varphi}|\le h$ (cf. \cite[(2.27)]{p}). Recall that there are finitely many uniformly continuous $\Phi$ in $S_{\frac{\varepsilon}{3}}$. Since they are equicontinuous (in standard sense) we can choose an appropriate $\delta$ to $\frac{\varepsilon}{3B}$, where $B=B(\varepsilon)= \left(\int_0^R s^{2\alpha+1}ds \right)^{\frac{1}{p}}$ such that $|\Phi(s,t+h,\varphi)-\Phi(s,t,\varphi)|<\frac{\varepsilon}{3B}$, thus considering that $\mu$ is a probability measure, $\|T_{t+h,\alpha} \Phi- T_{t,\alpha}\Phi\|_{p,(\alpha)}\le \frac{\varepsilon}{3}$. Considering the norm of the linear operator, \eqref{opntr}, by a triangle inequality \eqref{c3} is proved.
On the other hand supposing ${\bf P_A}$ and ${\bf P_B}$, we show that $K$ is precompact. To this we define
\begin{equation}M_{a}f(s):=\frac{1}{A}\int_0^a T_{t,\alpha}f(s)t^{2\alpha+1}dt,\end{equation}
where $A=\int_0^a t^{2\alpha+1}dt$.
By H\"older's inequality, then applying the symmetry of translation, cf. \eqref{st}
\begin{equation}\label{mfmf}|M_af(s+u)-M_af(s)|\le \frac{1}{A}\int_0^a|T_{t,\alpha}f(s+u)-T_{t,\alpha}f(s)|t^{2\alpha+1}dt\end{equation} $$\le \frac{1}{A^{\frac{1}{p}}}\left(\int_0^\infty |T_{t,\alpha}f(s+u)-T_{t,\alpha}f(s)|^pt^{2\alpha+1}dt\right)^{\frac{1}{p}}$$ $$=\frac{1}{A^{\frac{1}{p}}}\left(\int_0^\infty |T_{s+u,\alpha}f(t)-T_{s,\alpha}f(t)|^pt^{2\alpha+1}dt\right)^{\frac{1}{p}}.$$
Similarly, and recalling \eqref{opntr}
\begin{equation}\label{mfk}|M_af(s)|\le \frac{1}{A^{\frac{1}{p}}}\left(\int_0^\infty |T_{s,\alpha}f(t)|^pt^{2\alpha+1}dt\right)^{\frac{1}{p}}\le \frac{1}{A^{\frac{1}{p}}}\|f\|_{p,(\alpha)}.\end{equation}
Let $a$ and $R$ be fixed positive numbers, and let $F_{a,R}:= \{M_af(s): f\in K, s<R\}$. Then by \eqref{mfmf} and \eqref{c3} $F_{a,R}$ is equicontinuous (in standard sense), and by \eqref{mfk} and boundedness of $K$ $F_{a,R}$ is uniformly bounded. Thus by the theorem of Arzel\'a and Ascoli for all $\varepsilon>0$ there is a finite $\varepsilon$-net $N_\varepsilon \subset F_{a,R}$. \\
Let us denote the elements of $N_\varepsilon=\{M_af_1,\dots ,M_af_j\}$, where $f_i\in K$, $i=1,\dots , j$ and $j=j(\varepsilon)$, that is for all $f\in K$ there is an $f_i$, such that $|M_af(s)-M_af_i(s)|\le\varepsilon$ if $0\le s\le R$. We show that $\{f_i\}_{i=1}^j$ is an $\varepsilon$-net in $K$ if $\{M_af_i\}_{i=1}^j=N_{\frac{\varepsilon}{L}}$ is an $\frac{\varepsilon}{L}$-net in $F_{a,R}$ with a suitable $a$, $L$ and $R$ (will be given later).
Applying again H\"older's inequality and then Fubini's theorem
$$\|M_af-f\|_{p,(\alpha)}^p \le \int_0^\infty\left(\int_0^a\frac{1}{A}|T_{t,\alpha}f(s)-f(s)|t^{2\alpha+1}dt\right)^ps^{2\alpha+1}ds$$
$$\le \frac{1}{A}\int_0^\infty \int_0^a |T_{t,\alpha}f(s)-f(s)|^p t^{2\alpha+1}dt s^{2\alpha+1}ds$$ $$= \frac{1}{A}\int_0^a \int_0^\infty|T_{t,\alpha}f(s)-f(s)|^ps^{2\alpha+1}ds t^{2\alpha+1}dt.$$
Thus
\begin{equation}\label{mff}\|M_af-f\|_{p,(\alpha)}^p\le \sup_{0\le t\le a}\int_0^\infty|T_{t,\alpha}f(s)-f(s)|^ps^{2\alpha+1}ds.\end{equation}
Similarly
$$\|M_af-M_af_i\|_{p,(\alpha)} \le \left(\int_0^R|M_af(s)-M_af_i(s)|^p s^{2\alpha+1}ds\right)^{\frac{1}{p}}$$ $$+\left(\int_R^\infty |M_af(s)-M_af_i(s)|^p s^{2\alpha+1}ds\right)^{\frac{1}{p}}=I+II.$$
As $II\le \left(\int_R^\infty |M_af(s)|^p s^{2\alpha+1}ds\right)^{\frac{1}{p}}+\left(\int_R^\infty |M_af_i(s)|^p s^{2\alpha+1}ds\right)^{\frac{1}{p}}$, it is enough to investigate $M_ag$, where $g\in K$.
$$\left(\int_R^\infty |M_ag(s)|^p s^{2\alpha+1}ds\right)^{\frac{1}{p}}\le \|M_ag-g\|_{p,(\alpha)}+\left(\int_R^\infty |g(s)|^p s^{2\alpha+1}ds\right)^{\frac{1}{p}}.$$
$$I\le \sup_{0\le s \le R}|M_af(s)-M_af_i(s)|B(R),$$
where $B(R)=\left(\frac{R^{2\alpha+2}}{2\alpha+2}\right)^{\frac{1}{p}}$. Thus
\begin{equation}\label{maf}\|M_af-M_af_i\|_{p,(\alpha)} \le B(R)\sup_{0\le s \le R}|M_af(s)-M_af_i(s)|\end{equation} $$+ \sup_{g\in K}\left(\|M_ag-g\|_{p,(\alpha)}+\left(\int_R^\infty |g(s)|^p s^{2\alpha+1}ds\right)^{\frac{1}{p}}\right).$$
Let $\varepsilon>0$ be arbitrary and let us choose by \eqref{c2} $R$ so large that $\forall f\in K$ $\left(\int_R^\infty|f(x)|^px^{2\alpha+1}dx\right)^{\frac{1}{p}}<\frac{\varepsilon}{5}$ and by \eqref{c3} let us choose $a$ so small such that for all $ 0\le h \le a$ and for all $f\in K$, \hspace{4pt} $t\in[0,R] \hspace{4pt} \left(\int_0^\infty|T_{t+h,\alpha}f(s)-T_{t,\alpha}f(s)|^p s^{2\alpha+1}ds\right)^{\frac{1}{p}}<\frac{\varepsilon}{5}$. Now let us take an $\frac{\varepsilon}{5B(R)}$-net, $N:=N_{\frac{\varepsilon}{5B(R)}} \subset F_{a,R}$. By this choice for all $g\in K$
$$\|g-M_ag\|_{p,(\alpha)}\le \frac{\varepsilon}{5},$$
cf. \eqref{mff}. Moreover for an arbitrary $f\in K$ there is a suitable $f_i$ with $M_af_i \in N$, such that
$$\|M_af-M_af_i\|_{p,(\alpha)}\le B(R)\frac{\varepsilon}{5B(R)}+\frac{\varepsilon}{5}+\frac{\varepsilon}{5},$$
cf. \eqref{maf}. Finally we finish the proof with
$$\|f-f_i\|_{p,(\alpha)}\le \|f-M_af\|_{p,(\alpha)}+\|M_af-M_af_i\|_{p,(\alpha)}+\|M_af_i-f_i\|_{p,(\alpha)}\le \varepsilon.$$
\medskip
Now we define the Bessel-Fourier or Hankel transformation to facilitate formulating a Pego-type theorem. Let us denote the Hankel transform of a function $f$ by
$$H_\alpha(f)(y)=\hat{f}(y):=c_\alpha\int_0^\infty f(x)j_\alpha(xy)x^{2\alpha+1}dx.$$
The inverse transformation is
$$H_\alpha^{[-1]}(\hat{f})(x)=c_\alpha\int_0^\infty \hat{f}(y)j_\alpha(xy)y^{2\alpha+1}dy,$$
if it exists.
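A standard sanity check of this definition (Python with SciPy; a sketch, not part of the paper): the Gaussian $e^{-x^2/2}$ is, up to a constant factor, self-reciprocal under $H_\alpha$. The constant $c_\alpha$ is left out below since it cancels in the ratio being tested; $\alpha=1/2$ and the evaluation points are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, gamma

alpha = 0.5

def j(a, z):
    # entire Bessel function, j_a(0) = 1
    if abs(z) < 1e-12:
        return 1.0
    return gamma(a + 1.0) * (2.0 / z) ** a * jv(a, z)

def hankel(f, y):
    """H_alpha f(y) up to the constant c_alpha (omitted: it cancels in ratios)."""
    g = lambda x: f(x) * j(alpha, x * y) * x ** (2 * alpha + 1)
    return quad(g, 0.0, np.inf, limit=200)[0]

f = lambda x: np.exp(-x * x / 2.0)
y1, y2 = 0.8, 1.6
ratio = hankel(f, y1) / hankel(f, y2)
# H_alpha(e^{-x^2/2})(y) = const * e^{-y^2/2}, hence the ratio is e^{(y2^2-y1^2)/2}
print(abs(ratio - np.exp((y2 ** 2 - y1 ** 2) / 2.0)))
```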
If $K$ is a set of certain functions, we denote by $\hat{K}$ the set of the Hankel transforms of the functions in $K$.\\
Below we need that Hankel transformation fulfils the Hausdorff-Young inequality, that is
\begin{equation}\label{hy} \|H_{\alpha} (f)\|_{p',(\alpha)} \le C_p \|f\|_{p,(\alpha)}, \hspace{4pt} \hspace{4pt} \hspace{4pt} 1\le p \le 2,\end{equation}
see \cite{gj}. We also use the next property of the Hankel transform.
\begin{equation}\label{ht} H_{\alpha} (T_{t,\alpha}f)(y)=j_\alpha(ty)\hat{f}(y); \hspace{4pt} \hspace{4pt} \hspace{4pt} T_{t,\alpha}\hat{f}(y)=H_\alpha(j_\alpha(t\cdot)f(\cdot))(y), \end{equation}
see \cite{gj}. Furthermore
\begin{equation}\label{hinf}\lim_{y\to\infty}\hat{f}(y)=0,\end{equation}
see \cite{p}. Defining the convolution by
$$ (f*g)(x)=\int_0^\infty T_{x,\alpha}f(t)g(t)t^{2\alpha+1}dt,$$
we have
\begin{equation}\label{hkonv} H_\alpha(f*g)=\hat{f}\hat{g},\end{equation}
see \cite{gj}.
\medskip
\begin{theorem}\label{He} Let $1\le p \le 2$ and $K\subset L_{p,\alpha}$. If $K$ is bounded and satisfies ${\bf P_A}$ in $L_{p,\alpha}$, then $\hat{K}$ satisfies ${\bf P_B}$ in $L_{p',\alpha}$; and if $K$ satisfies ${\bf P_B}$ in $L_{p,\alpha}$, then $\hat{K}$ satisfies ${\bf P_A}$ in $L_{p',\alpha}$. \end{theorem}
\proof
Let $K$ be bounded by $M$ and $f \in K$. Then by \eqref{ht} and then \eqref{hy}
$$\|T_{t+h,\alpha}\hat{f}-T_{t,\alpha}\hat{f}\|_{p',(\alpha)}=\|H_\alpha\left((j_\alpha((t+h)(\cdot))-j_\alpha(t(\cdot)))f(\cdot)\right)\|_{p',(\alpha)}$$ $$\le C_p \|(j_\alpha((t+h)s)-j_\alpha(ts))f(s)\|_{p,(\alpha)}\le C_p\left(\int_0^R \left|(j_\alpha((t+h)s)-j_\alpha(ts))f(s)\right|^ps^{2\alpha+1}ds\right)^{\frac{1}{p}}$$ $$+C_p\left(\int_R^\infty \left|(j_\alpha((t+h)s)-j_\alpha(ts))f(s)\right|^ps^{2\alpha+1}ds\right)^{\frac{1}{p}}=C_p(I+II).$$
Let $\varepsilon$ be arbitrary and choose $R=R(\varepsilon)$ for $\frac{\varepsilon}{4}$ according to \eqref{c2}. By \eqref{j1}
$$II\le 2 \left(\int_R^\infty \left|f(s)\right|^ps^{2\alpha+1}ds\right)^{\frac{1}{p}}\le \frac{\varepsilon}{2}.$$
To estimate $I$, let $M_0>0$ be arbitrary and assume that $0<t<M_0$. Let us choose $\delta=\delta(R(\varepsilon),M_0,\varepsilon)<1$ such that
$$\delta \le \frac{(\alpha+1)\varepsilon}{(M_0+1)R^2M}.$$
Then in view of \eqref{j2} and \eqref{j1} for all $0\le h \le \delta$, by choice of $\delta$
$$I \le (M_0+1)R\frac{1}{2(\alpha+1)}\|j_{\alpha+1}\|_\infty \delta R \|f\|_{p,(\alpha)} \le \frac{\varepsilon}{2}.$$
To prove the second statement, for an arbitrary fixed $\delta>0$ we choose nonnegative functions $g_\delta$ supported on $(0,\delta)$ such that $$\int_0^\infty g_\delta(x)x^{2\alpha+1}dx=1.$$
Then in view of \eqref{hinf} we choose $R=R(\delta)$ such that $|H_\alpha(g_\delta)(y)|\le \frac{1}{2}$ if $y>R$. Thus, by \eqref{hkonv} and \eqref{hy}
$$\left(\int_R^\infty|\hat{f}(y)|^{p'}y^{2\alpha+1}dy\right)^{\frac{1}{p'}}\le 2 \left(\int_R^\infty|H_\alpha(f)(y)(1-H_\alpha(g_\delta)(y))|^{p'}y^{2\alpha+1}dy\right)^{\frac{1}{p'}}$$ $$\le C(p) \|f-f*g_\delta\|_{p,(\alpha)}=C(p)\left(\int_0^\infty\left|\int_0^\infty(f(x)-T_{x,\alpha}f(t))g_\delta(t)t^{2\alpha+1}dt\right|^px^{2\alpha+1}dx\right)^{\frac{1}{p}}$$ $$\le C(p)\int_0^\infty\left(\int_0^\infty|f(x)-T_{t,\alpha}f(x)|^px^{2\alpha+1}dx\right)^{\frac{1}{p}}g_\delta(t)t^{2\alpha+1}dt$$ $$\le C(p)\sup_{0\le t \le \delta}\left(\int_0^\infty|f(x)-T_{t,\alpha}f(x)|^px^{2\alpha+1}dx\right)^{\frac{1}{p}},$$
where in the last but one (Minkowski) inequality the symmetry of translation is considered. Thus the proof can be finished by choosing an appropriate $\delta$ to the arbitrary $\varepsilon$ in view of \eqref{c3}, and $g_\delta$ and $R(\delta)$ to $\delta$ as above.
\medskip
\begin{cor} $\mathrm{(I)}$ A bounded subset $K\subset L_{2,\alpha}$ is precompact if and only if for all $M_0>0$ and $t\in [0, M_0]$\\
$\|T_{t+h,\alpha}f-T_{t,\alpha}f\|_{2,(\alpha)}\to 0$ and $\|T_{t+h,\alpha}\hat{f}-T_{t,\alpha}\hat{f}\|_{2,(\alpha)}\to 0$ as $h\to 0$, uniformly in $K$.
\medskip
\noindent $\mathrm{(II)}$ A bounded subset $K\subset L_{2,\alpha}$ is precompact if and only if\\ $\left(\int_R^\infty|f(x)|^2x^{2\alpha+1}dx\right)^{\frac{1}{2}}\to 0$ and $\left(\int_R^\infty|\hat{f}(y)|^2y^{2\alpha+1}dy\right)^{\frac{1}{2}}\to 0$ as $R\to\infty$, uniformly in $K$.
\end{cor}
\medskip
\begin{cor} Let $u,v : \mathbb{R}_+ \to \mathbb{C}$, $u, v \in L_\infty$ such that $\lim_{x \to \infty}u(x), v(x)=0$, and assume that $u$ is continuous. Then the operator $L:= uH_\alpha v$, $f \mapsto uH_\alpha(vf)$ is compact on $L_{2,(\alpha)}$.
\end{cor}
\proof
Let $K\subset L_{2,\alpha}$ be a subset bounded by $M$. By the assumption, for an $\varepsilon>0$ there is an $R>0$ such that $|v(x)|<\varepsilon$ if $x>R$. Thus $\left(\int_R^\infty |(vf)(x)|^2x^{2\alpha+1}dx\right)^{\frac{1}{2}}\le \varepsilon M$, that is $vK$ is bounded and equivanishing. So by Theorem \ref{He} $H_\alpha(vK)$ is $L_{2,\alpha}$-equicontinuous, and by assumption $LK=uH_\alpha(vK)$ is also $L_{2,\alpha}$-equicontinuous and equivanishing.
\medskip
\noindent {\bf Remark. }
(1) In the proof of the first statement of Theorem \ref{He} we can also use the asymptotics of Bessel functions, $|j_{\alpha+1}(u)|\le C u^{-\alpha-\frac{3}{2}}$. Then a larger $\delta$ can be allowed.\\
(2) Comparing \cite{hohm} to these special translations, the omission of the boundedness criterion needs further investigation in the Bessel and Laguerre cases.
\medskip
Solid-state based quantum-light sources are elementary building blocks for future photonic quantum networks \cite{Aharonovich2016}, quantum information processing \cite{Kok2007} and quantum metrology \cite{Giovannetti2004}. To date, the performance of non-classical light sources has reached such a high level that the development of user-friendly devices for applications can be pursued. In particular, single-photon sources (SPSs) based on semiconductor quantum dots (QDs) show close to ideal properties in terms of the quantum nature of emission \cite{Aharonovich2016}, and sophisticated excitation schemes including strict resonant excitation enabled proof-of-principle demonstrations of photonic cluster-state generation \cite{Schwartz2016a} and boson sampling \cite{Wang2017,Loredo2017} in laboratory environments. Furthermore, field experiments using QD-based SPSs have been employed for proof-of-principle quantum key distribution experiments \cite{Rau2014}, but still suffered from bulky closed-cycle pulse-tube coolers and complex light extraction via free-space optics. Against this background, it is clear that fiber-coupling and robust packaging of practical quantum-light emitting devices is highly desirable for taking steps beyond proof-of-principle experiments. First steps in this direction utilized fiber-coupled QD-SPSs requiring fiber-bundles containing about 600 individual fiber cores to spatially post-select single emitters in a non-deterministic sample layout \cite{Xu2007}. Non-deterministic device approaches were also employed for the fiber-coupling of the single-photon emission from QDs embedded in as-grown planar samples \cite{Kumano2016} or microcavity structures based on photonic crystals \cite{Lee2015,Daveau2017} and micropillars with oxide aperture \cite{Haupt2010,Snijders2017}. 
Moreover, the direct fiber-coupling of nitrogen-vacancies in nano-diamonds \cite{Schroeder2011a} and single QDs embedded in nanowires \cite{Cadeddu2016} was realized using less practical pick-and-place techniques based on micromanipulators in a scanning electron microscope (SEM). All of these previous reports used bulky cryostats to operate QDs in cryogenic environments, which hindered the development of user-friendly devices for real-world applications.
In this work, we report on a stand-alone and user-friendly quantum light source based on a fiber-coupled QD integrated within a compact Stirling cryocooler. The QD is deterministically embedded within a monolithic microlens via in-situ electron-beam lithography and precisely coupled to an optical fiber by employing an optical alignment process and epoxide adhesive bonding at room temperature. This concept allows for a high degree of control and reproducibility for the fabrication of stand-alone SPSs with pre-defined emission properties - features which are key requirements for the upscaling of fiber-coupled quantum networks. To benchmark our source, we conduct photon auto-correlation measurements under continuous wave and pulsed optical excitation. Additionally, we demonstrate a high stability of the fiber-coupled single-photon emission over several cool-down/warm-up cycles as well as user-intervention-free long-term test runs, demonstrating the capability of our single-photon unit for autonomous operation in future photonic quantum networks.
\section*{Results}
\subsection*{Fiber-coupling of deterministic QD microlenses}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig_1}
\caption{Fabrication of fiber-coupled QD microlenses. (a) Microscope image of the sample surface before fiber-coupling. A gold-mask contains arrays of apertures with a pitch of 150\,$\mu$m. Each target aperture contains a single deterministically fabricated QD-microlens. Marker structures allow for unambiguous identification of target apertures and microlenses. (b) SEM image of a deterministic single-QD microlens fabricated via 3D in-situ electron-beam lithography and reactive ion etching. (c) Microscope image of a single aperture (dimensions: $10\mu\rm{m}\times10\mu\rm{m}$) containing a microlens deterministically fabricated above a pre-selected QD. Suitable QD-microlenses are pre-characterized using standard micro-photoluminescence spectroscopy at 10\,K.
(d) Illustration of the room-temperature fiber-coupling process: 1. Fiber-scan across sample surface and monitoring of GaAs-bandgap emission within the gold apertures excited by 651\,nm laser. Emission of the bandgap is only visible above apertures and markers. 2. Precise alignment of the fiber above a precharacterized target aperture and lifting of the fiber by about 5\,mm. 3. Attaching a small drop of epoxide adhesive to the fiber-ferrule. 4. Lowering of the fiber to its previous position and monitoring of GaAs emission during hardening ($\approx2\,$hours). (e) Photograph of a fiber-coupled QD sample after the process illustrated in (d) showing the fiber ferrule glued to the sample.}
\label{Fig_1}
\end{figure}
The stand-alone SPS is based on a wafer grown by metal-organic chemical vapour deposition on GaAs-(001) substrate. After a 300\,nm thick GaAs buffer layer, a distributed Bragg reflector consisting of 23 pairs of $\lambda/4$-thick Al$_{0.9}$Ga$_{0.1}$As/GaAs is deposited. Next, 65\,nm of GaAs, a single layer of self-organized InAs QDs and a 420\,nm thick GaAs capping layer are grown. For the optical alignment procedure of the fiber core to a QD-microstructure, an 80\,nm-thick gold mask containing $10\,\mu\rm{m}\times10\,\mu$m apertures with a pitch of 150\,$\mu$m and corresponding markers (for unambiguous identification) is defined via optical (UV) lithography (cf. Figure \ref{Fig_1}\,a). Subsequently, the sample is spin-coated with a 100\,nm-thick layer of the electron-beam resist AR-P 6200 (CSAR 62)\cite{Kaganskiy2016} and transferred to a low-temperature cathodoluminescence lithography (CLL) system. The latter consists of an SEM with integrated liquid-helium flow-cryostat and extensions for optical spectroscopy as well as electron-beam lithography \cite{Gschrey2015}. Using the 3D in-situ electron beam lithography at 10\,K \cite{Gschrey2013,Gschrey2015b}, we define single deterministic QD-microlenses in the gold apertures. Afterwards the sample is transferred out of the CLL system and the resist is developed. This leaves the cross-linked resist above target QDs acting as lens-shaped etch masks for the subsequent etch step using reactive-ion-enhanced plasma etching (etch depth: 420\,nm). This directly transfers the lens-profile into the semiconductor material (i.e. GaAs). As a result of the described fabrication process, our sample is covered by a gold mask containing apertures with one deterministic QD-microlens (diameter: 2.4\,$\mu$m) each (cf. Figure \ref{Fig_1}\,b and c), while all other (non-selected) QDs inside apertures were removed by the dry-etching process. 
Using micro-photoluminescence spectroscopy at 10\,K, suitable QD-microlenses are selected according to their PL intensity.
For the precise coupling of a photonic QD-microstructure to an optical fiber, we developed an alignment technique which combines deterministic in-situ QD-device fabrication with a robust optical coupling of the fiber core using epoxide adhesive bonding at ambient conditions (room temperature, no vacuum). In the developed process we scan a multi-mode fiber with 50\,$\mu$m core-diameter embedded in a ceramic ferrule in short distance (see below) across the sample surface using a 3D closed-loop piezo-stage, as illustrated in Figure \ref{Fig_1}\,d. The light of a CW laser ($\lambda=\rm{671}\,$nm) coupled into the fiber is reflected at the gold mask. If the fiber core is located above an aperture, charge-carriers are generated inside the semiconductor material and the associated photoluminescence of the GaAs bandgap (around 870\,nm at 300\,K) can easily be detected using a spectrometer attached to the output of the scanning fiber. The pitch of 150\,$\mu$m between apertures in combination with alignment markers allows us to precisely locate target apertures containing a single QD-microlens, which was pre-characterized via $\mu$PL. The height of the scanning fiber is thereby adjusted by observing the luminescence of the GaAs-bandgap in the following way: If the fiber ferrule is gently pressed to the sample surface, the resulting strain leads to a slight spectral shift of the bandgap emission. Moving the fiber back to the position of the unstrained GaAs-bandgap defines the point of physical contact.
After locating the target aperture, the fiber is lifted by about 5\,mm normal to the sample surface. At this point, a small drop of epoxide adhesive is attached to the fiber-ferrule and the fiber is lowered to its previous position. The final position has to be reached within the epoxide pot life of about 5\,minutes. Until the hardening process is completed ($\approx2\,$hours), the emission of the GaAs bandgap is monitored continuously to detect a possible misalignment between fiber-core and aperture. To test the accuracy of our fiber-coupling procedure, we repeated the process described above for fiber-core diameters of 25\,$\mu$m and 9\,$\mu$m. In all cases we were able to precisely locate the positions of the apertures, suggesting an alignment accuracy better than $9\,\mu$m.
The presented approach for fiber-coupling has several advantages: Firstly, the complete alignment and curing process is performed under ambient conditions, and neither requires cryogenic temperatures for QD identification nor heating or UV illumination during hardening of the adhesive. This makes our process robust, easy to use and economic, and still enables a high accuracy of the alignment.
Secondly, the fiber is embedded in a ceramic ferrule which increases the bond area and hinders the fiber from being torn off by torsional forces appearing when handling the fiber-coupled device. Additionally, we use a commercially available and quickly reactive epoxide for curing, which is developed for fiber connector preparation and is thus index matched to optical fibers. The index matching between fiber-core and adhesive further reduces reflection losses at the interface which can further enhance the extraction efficiency. Moreover, our approach can easily be adopted for other solid-state based SPSs, such as micropillar cavities \cite{Heindel2010} and bullseye resonators \cite{Ates2012a}, as well as for SPSs emitting at telecom wavelengths.
\subsection*{Stand-alone single-photon unit}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig_2}
\caption{The stand-alone single-photon source 'QSource'. (a) QSource module comprising the Stirling cryocooler with attached customized vacuum chamber. The fiber-coupled (FC) QD-microlens (cf. Figure \ref{Fig_1}) is mounted to the cryocooler's coldhead inside the vacuum chamber. The active vibration cancellation (AVC) system reduces vibration export by the cryocooler's moving piston. (b) The complete single-photon unit integrated in a standard 19$"$ rack-insert. (c) Coldhead temperature of the Stirling cryocooler measured during cool-down. Inset: Coldhead temperature over a measurement period of 48\,hours, revealing high temperature stability of the cryocooler.}
\label{Fig_2}
\end{figure}
When operated at cryogenic temperatures, the InAs/GaAs material system used for the QD-SPS in our work offers superior quantum optical properties \cite{Michler2000,Ding2016} compared to quantum emitters operated at room temperature \cite{Michler2000a,Kurtsiefer2000,Lounis2000,Holmes2014,Deshpande2014}. Thus, a crucial aspect for the development of high-quality and user-friendly SPSs is the cooling of the quantum emitter to cryogenic temperatures. To date, liquid helium supplied in large storage dewars in combination with flow-cryostats is still the most common cooling technique in research laboratories. However, aiming at the autonomous operation of a quantum light source at remote places without laboratory infrastructure requires more practical cryocoolers. In recent years the use of closed-cycle pulse-tube coolers for experiments in quantum optics has become more popular, due to improvements in vibration damping \cite{Rau2014}. Although such systems do not require a permanent liquid helium supply, they still depend on high-power voltage supplies and bulky compressors.
In our work, we employ a compact and economic Stirling cryocooler \cite{Veprik2005} to build a stand-alone fiber-coupled SPS (see Figure \ref{Fig_2}\,a), called 'QSource' in the following. The Stirling cryocooler only requires a standard supply voltage (220\,V), can be operated with minimal space requirements and recently proved to be suitable for quantum-optical free-space experiments at base temperatures down to 30\,K \cite{Schlehahn2015}. The Stirling cryocooler (Model: Cryotel-GT from Sunpower) used in our QSource comprises a single piston moving along the same axis as the movable regenerator which allows for compact dimensions of $27.6\,\rm{cm}\times8.3\,\rm{cm}\times8.3\,\rm{cm}$. The cryocooler is additionally equipped with an active vibration cancellation (AVC) system. The AVC constitutes a dynamically (real-time) adjusted inert mass, which minimizes the vibrations exported by the cryocooler's moving piston. For this purpose, the vibrations are measured using an accelerometer based on micro-electro-mechanical systems (MEMS) mounted at the housing of the Stirling cryocooler. The resulting signal is used for an active feed-forward control loop for adjusting the balancer's movement (see Ref. \cite{Riabzev2009} for details). The fiber-coupled SPS described in the previous section is mounted directly to the cryocooler's coldhead. A custom-made vacuum chamber attached to the Stirling cryocooler encloses the coldhead and contains ports for up to four optical fibers and the electrical feedthroughs for the temperature sensor integrated into the coldhead.
Due to the compact dimensions of the Stirling cryocooler, the complete single-photon unit including the vacuum chamber with all necessary feedthroughs fits easily within a standard 19$"$ rack-insert as depicted in Figure \ref{Fig_2}\,b. For an autonomous operation of our QSource, the single-photon unit is housed in a small mobile rack (90\,cm height) containing a diode laser ($\lambda=651\,$nm or 855\,nm, continuous wave (CW) or pulsed mode) coupled to the single-photon unit via a 90:10 fiber beamsplitter, the electronics for controlling the Stirling cryocooler and a personal computer (PC) with display, keyboard and mouse. Via the PC the user is able to control all parameters relevant for operation of the QSource, including temperature control and monitoring. Additionally, it optionally allows for photon-autocorrelation measurements using a fiber-based Hanbury-Brown and Twiss (HBT) setup equipped with two silicon-based single-photon counting modules (SPCMs) and the corresponding electronics for time-correlated single-photon counting (TCSPC). The temperature at the coldhead during a cool-down of the Stirling cryocooler is depicted in Figure \ref{Fig_2}\,c. After only 30\,minutes the sample reaches a temperature of 40\,K. The inset shows a measurement of the coldhead temperature over a period of 48\,hours, revealing a mean base temperature of 39.995\,K at a temperature stability of $\pm6\,$mK being competitive with state-of-the-art Helium-flow cryostats and about one order of magnitude improvement compared to our previous work Ref. \cite{Schlehahn2015}. The achieved temperature is sufficiently low for applications relying solely on the sub-Poissonian statistics of the single-photon states generated by QD emitters, such as quantum key distribution via the BB84 protocol \cite{Heindel2012}. 
Quantum information schemes relying on the photon indistinguishability, however, require temperatures below 10\,K \cite{Thoma2016}, which can be reached in compact devices in future by combining Stirling- and Joule-Thomson- cryocoolers \cite{Gemmell2017}.
\subsection*{Durability of fiber-coupling}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig_3}
\caption{Durability test of the fiber-connection. (a), (b) and (c) Spectra of a fiber-coupled QD-microlens (Device1) operated in our QSource at $T=40\,$K after repeated cool-down/warm-up (40\,K\,$\leftrightarrow$\,290\,K). Emission of the exciton (X), the biexciton (XX), and singly charged trion states (X$^+$ and X$^-$) is identified. (d) Frequency count histogram of the center wavelength of the four QD states extracted from the spectra in the contour plot in (b). (e) Integrated CCD counts of the fiber-coupled QD emission for each individual excitonic complex (lower panel) and its sum (upper panel), evaluated for the spectra shown in (b).}
\label{Fig_3}
\end{figure}
The spectral properties and the durability of our fiber-coupled SPS are tested using a spectrometer with a 1200-lines optical grating and an attached liquid-Nitrogen-cooled charge-coupled device camera (system's spectral resolution: 25\,$\mu$eV). Figure \ref{Fig_3}\,a presents the spectrum of a fiber-coupled QD-microlens, named Device1 from now on, operated in our QSource at 40\,K (integration time: 1\,s). Emission of different excitonic states stemming from the same QD is clearly observed, where the charge-neutral exciton (X), the biexciton (XX) and the singly charged trion states (X$^+$ and X$^-$) were identified via polarization- and excitation-power-resolved measurements (not shown).
To evaluate the durability of the glued connection between optical fiber and QD sample, we repeatedly warmed up and cooled down the sample and monitored the emission of Device1 using an automated routine as described in the following. Starting from room temperature (290\,K), the Stirling cryocooler is turned on in power-controlled mode (110\,W). After reaching the base temperature, the system holds this temperature for 30\,minutes to reach thermal equilibrium. Then the coldhead temperature is stabilized at 40\,K using a proportional-integral-derivative (PID)-controlled operation mode for 10\,minutes and a spectrum is recorded before the Stirling cryocooler is turned off. The next cooling cycle starts automatically after the coldhead reached room temperature again (about 6\,hours). Figure \ref{Fig_3}\,b displays the spectra of Device1 in a contour plot for cool-down/warm-up cycles 1 to 11, where the last spectrum is additionally shown in Figure \ref{Fig_3}\,c. The emission properties of the QD in terms of the spectral fingerprint and the emission intensities of the corresponding excitonic states remain almost unchanged, indicating a high durability of the glued connection between fiber and sample. Quantitatively extracting the spectral positions of the four QD states for each spectrum from cool-down cycle 1 to 11 by fitting yields standard deviations for the spectral shifts of $\delta E_{\rm{X+}}=\pm86\,$pm, $\delta E_{\rm{X}}=\pm85\,$pm, $\delta E_{\rm{XX}}=\pm84\,$pm and $\delta E_{\rm{X-}}=\pm98\,$pm. The corresponding frequency histogram of the fitting results is shown in Figure \ref{Fig_3}\,d. The small spectral deviation over many cool-down cycles is attributed to slightly varying electric fields in the vicinity of the QD due to charge fluctuations and the resulting quantum confined Stark effect \cite{Empedocles1997}. 
Interestingly, a careful analysis of the data reveals that the relative spectral positions of all four emission lines with respect to the X-state remain constant within a standard deviation of $\pm4$ to $\pm7\,$pm, more than one order of magnitude less than the absolute spectral shift. Thus we conclude that the binding energies of the respective QD-states remain almost constant over the entire eleven cool-down cycles, implying that the strain inside the QD sample is not changing over time, which confirms the high stability of our fiber-coupling scheme. Additionally, we evaluate the change in brightness of Device1 during the durability test. Figure \ref{Fig_3}\,e depicts the integrated CCD counts of the QD emission for each individual excitonic complex (lower panel) and its sum (upper panel), evaluated for the spectra shown in Fig. \ref{Fig_3}\,b. We observe stable emission of Device1 with a standard deviation of $4\%$, certifying an excellent durability of the glued fiber connection used in our QSource. The noticeable anti-correlation between the emission intensities of X and X$^+$ state in the lower panel indicates a random positive charging of the QD by nearby traps.
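The contrast between the absolute line shifts and the much smaller relative shifts can be illustrated with synthetic data (the common-mode Stark-shift model and all numerical values below are illustrative assumptions, not the measured data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cycles = 11

# Illustrative model: each cool-down cycle shifts all four lines by a common
# amount (charge-induced Stark shift) plus a small line-specific residual.
common_shift = rng.normal(0.0, 85e-3, n_cycles)             # nm, ~ +/-85 pm
lines0 = np.array([919.10, 919.45, 920.02, 920.60])         # nm, hypothetical centers
centers = (lines0[None, :] + common_shift[:, None]
           + rng.normal(0.0, 5e-3, (n_cycles, 4)))          # ~ +/-5 pm residual

abs_std = centers.std(axis=0) * 1e3                         # pm, absolute shift per line
rel_std = (centers - centers[:, [1]]).std(axis=0) * 1e3     # pm, relative to the X line

print("absolute std [pm]:", np.round(abs_std, 1))
print("relative std [pm]:", np.round(rel_std, 1))           # much smaller than absolute
```

In such a common-mode model, the correlated shift cancels when line positions are referenced to the X-state, which is why the relative standard deviations come out more than an order of magnitude below the absolute ones.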
Beyond the thermal durability proven above, real-world applications of our QSource will also demand robustness against mechanical shocks during e.g. the shipment or the operation on moving platforms such as airplanes. In this context, it is worth mentioning that the repeated transfer between remote labs on different floors in our institute had no notable influence on the performance of the QSource, which was integrated in a wheeled measurement rack for this purpose. This underlines the high mechanical durability of the bond between fiber and QD-sample, which can be analyzed in more detail by performing quantifiable shock tests in future.
\subsection*{Turn-key single-photon generation}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig_4}
\caption{Single-photon generation using the QSource. (a) Measurement of the second-order photon-autocorrelation $g^{(2)}(\tau)$ on the X$^+$-emission of Device2 (cf. Figure \ref{Fig_3}) operated in the Stirling cryocooler demonstrating single-photon emission with $g^{(2)}(0)=0.07 \pm 0.05$. The solid (dashed) line represents the convoluted (deconvoluted) $g^{(2)}(\tau)$-function of a model fit taking into account the setup's temporal resolution. (b) Photon flux of Device2 recorded at the single-photon counting modules during the $g^{(2)}(\tau)$-measurement shown in (a). (c) Time-resolved measurement of the fiber-coupled emission of Device3 under pulsed excitation at 80\,MHz. Inset: $g^{(2)}(\tau)$-histogram recorded for Device3, revealing triggered non-classical light emission. (d) Photon flux of Device3 under pulsed excitation during a user-intervention-free test run over a period of 100\,hours.}
\label{Fig_4}
\end{figure}
To evaluate the performance of our stand-alone single-photon source in terms of its single-photon purity, we performed measurements of the second-order photon-autocorrelation $g^{(2)}(\tau)$ during operation of the Stirling cryocooler at cryogenic temperature ($T=40\,$K). For this purpose, we evaluated another fiber-coupled single-photon source (Device2), which featured particularly bright emission of the positively charged trion state X$^+$. The fiber-output of the QSource is spectrally filtered (bandwidth: 0.1\,nm) via the external spectrometer transmitting the emission of the X$^+$ state. Right after the spectrometer, the emission is coupled to the fiber-based HBT setup for coincidence measurements. Figure \ref{Fig_4}\,a presents the histogram of $g^{(2)}(\tau)$ (time-bin width: 275\,ps) under CW excitation at 651\,nm. The pronounced antibunching at zero time delay ($\tau=0$) signifies the non-classicality of the emitted light. Indeed, fitting the experimental data by accounting for the timing resolution (350\,ps) of the HBT setup reveals $g^{(2)}(0)=0.07 \pm 0.05$, unambiguously demonstrating single-photon emission. Additionally, we evaluated the photon flux of Device2 during the $g^{(2)}(\tau)$-measurement as shown in Figure \ref{Fig_4}\,b. We observe a mean single-photon flux at the HBT setup of 11.7\,kHz at a standard deviation of only 4\% during the measurement.
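The fitting procedure described above, an antibunching dip convolved with the 350\,ps timing response of the HBT setup, can be sketched as follows (the exponential dip shape and the coherence time are our own modelling assumptions, not necessarily the authors' exact fit function):

```python
import numpy as np

def g2_ideal(tau, g0, tau_c):
    """Ideal CW second-order correlation: dip to g0 at tau = 0, time scale tau_c (ns)."""
    return 1.0 - (1.0 - g0) * np.exp(-np.abs(tau) / tau_c)

def g2_measured(tau, g0, tau_c, sigma_irf):
    """Ideal g2 numerically convolved with a Gaussian instrument response function."""
    dt = tau[1] - tau[0]
    kernel_t = np.arange(-5 * sigma_irf, 5 * sigma_irf + dt, dt)
    kernel = np.exp(-kernel_t**2 / (2 * sigma_irf**2))
    kernel /= kernel.sum()
    pad = len(kernel) // 2
    padded = np.pad(g2_ideal(tau, g0, tau_c), pad, constant_values=1.0)  # flat tails
    return np.convolve(padded, kernel, mode="same")[pad:-pad]

tau = np.arange(-10.0, 10.0, 0.05)       # ns
sigma = 0.35 / 2.355                     # 350 ps FWHM timing resolution -> sigma in ns
conv = g2_measured(tau, g0=0.07, tau_c=0.8, sigma_irf=sigma)  # tau_c is an assumption

# The finite timing resolution partially fills in the antibunching dip:
i0 = int(np.argmin(np.abs(tau)))
print(g2_ideal(0.0, 0.07, 0.8), float(conv[i0]))
```

Deconvolution in a fit then amounts to adjusting the parameters of `g2_ideal` so that the convolved curve matches the measured histogram, recovering the intrinsic $g^{(2)}(0)$.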
With respect to applications of single-photon sources in quantum information, triggered emission of single-photons is required. We therefore tested the QSource under pulsed-excitation at 855\,nm with an excitation repetition rate of 80\,MHz (pulse-width: 80\,ps) using yet another fiber-coupled microlens (Device3) acting as quantum emitter. The choice of Device3 was necessary because Device2 showed a long decay-time constant ($>10\,$ns) under pulsed-excitation, most probably due to pronounced charge-carrier recapture processes \cite{Aichele2004}. The time-resolved fiber-coupled emission of an excitonic state is displayed in Figure \ref{Fig_4}\,c. The optical response of the QD shows a monoexponential decay with a lifetime of $(1.0\pm0.1)\,$ns. The corresponding $g^{(2)}(\tau)$-histogram under pulsed excitation is depicted in the inset of Figure \ref{Fig_4}\,c. The clearly separated coincidence peaks combined with the significantly suppressed peak at zero delay prove triggered emission of non-classical light. Fitting the experimental data with a model according to Ref. \cite{Schlehahn2016} yields antibunching with $g^{(2)}(0)=0.57\pm0.05$. Here, the non-ideal antibunching is limited by the uncorrelated background emission of the nearby wetting layer for this particular device. Additionally, we also evaluated the long-term stability of our QSource for Device3 (see Figure \ref{Fig_4}\,d). We observe a photon flux at the HBT setup of 720\,Hz (including 40\,Hz dark counts) with a standard deviation of $14\%$ during the entire 100-hour measurement period without any user intervention, again confirming the excellent stability of our QSource.
To assess the photon collection efficiency of our devices, we determined the overall transmission of our complete experimental setup to be about 0.3\% (from the fiber inside the QSource to the SPCMs). Hence, the 680\,Hz of detected QD-photons in the pulsed experiment correspond to a photon flux of 227\,kHz inside the fiber and a collection efficiency of 0.28\% (=227\,kHz/80\,MHz) per excitation pulse. Similarly, in case of the CW measurement, 11.7\,kHz correspond to 3.9\,MHz photon flux inside the fiber and a collection efficiency of 0.39\% (=3.9\,MHz/1\,GHz) per excitation (considering a radiative lifetime of 1\,ns). For Device2, we independently measured a photon extraction efficiency of 14.4\% into $NA= 0.4$ (in a free-space experiment), which is consistent with device simulations based on finite element calculations (see Supplementary of Ref. \cite{Gschrey2015b}). Hence, we estimate a fiber-coupling efficiency of 2.7\% for Device2 used for the measurements in Fig. \ref{Fig_4}\,a and b. We anticipate significantly improved coupling efficiencies into optical fibers, by employing recently developed on-chip micro-optics \cite{Fischbach2017a} in future QSource modules.
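The efficiency bookkeeping of this paragraph can be reproduced step by step (all numbers are taken from the text itself):

```python
# Efficiency bookkeeping, values as quoted in the text.
setup_transmission = 0.003            # 0.3% from the fiber inside the QSource to the SPCMs

# Pulsed operation (Device3):
detected_pulsed = 680.0               # Hz, dark-count-corrected (720 Hz - 40 Hz)
rep_rate = 80e6                       # Hz, excitation repetition rate
flux_in_fiber_pulsed = detected_pulsed / setup_transmission
eff_pulsed = flux_in_fiber_pulsed / rep_rate

# CW operation (Device2), with a 1 ns radiative lifetime -> 1 GHz excitation rate:
detected_cw = 11.7e3                  # Hz
exc_rate_cw = 1e9                     # Hz
flux_in_fiber_cw = detected_cw / setup_transmission
eff_cw = flux_in_fiber_cw / exc_rate_cw

print(f"pulsed: {flux_in_fiber_pulsed/1e3:.0f} kHz in fiber, {eff_pulsed:.2%} per pulse")
print(f"CW:     {flux_in_fiber_cw/1e6:.1f} MHz in fiber, {eff_cw:.2%} per excitation")

# Fiber-coupling efficiency of Device2: in-fiber efficiency vs 14.4% into NA = 0.4
fiber_coupling = eff_cw / 0.144
print(f"fiber coupling: {fiber_coupling:.1%}")
```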
\section*{Summary and outlook}
In this work, we report on a user-friendly stand-alone SPS providing single-photon emission via an optical fiber. Within our single-photon unit, a QD is deterministically embedded in a monolithic microlens. The precise coupling of the QD-microlens to the optical fiber is achieved using a robust process based on active alignment and epoxide adhesive bonding at room-temperature, which allows for an accuracy better than 9\,$\mu$m. The fiber-coupled single-photon emitter is mounted to the coldhead of a compact plug-and-play Stirling cryocooler integrated in a 19$"$ rack-insert. Our stand-alone device allows for autonomous operation over several days with high stability of the single-photon flux at the fiber output and antibunching values down to $g^{(2)}(0)=0.07 \pm 0.05$. Moreover, we show triggered emission of non-classical light at 80\,MHz excitation repetition rate. The high durability of the fiber-connection is proven in endurance tests, revealing stable integrated QD emission within 4\% over eleven successive cool-down/warm-up cycles. Additionally, a user-intervention-free 100-hour test run proves the long-term stability of our QSource and we achieve single-photon detection rates of up to 11.7\,kHz at a standard deviation of 4\%, confirming the potential of our approach for applications in quantum information science.
By implementing straightforward extensions to our QSource, we anticipate greatly enhanced performance in terms of the achievable single-photon purity and flux. For instance, by combining our deterministic QD-microlenses with miniaturized laser-written multi-lens objectives \cite{Gissibl2016}, the photon extraction efficiency from our device as well as the coupling efficiency to an optical fiber can be enhanced \cite{Fischbach2017a}.
Moreover, a higher level of integration will reduce the total system losses. This can for instance be achieved by using electrically pumped quantum light sources based on a recently developed contacting scheme for deterministic QD microlenses \cite{Schlehahn2016a} or by employing compact interference band-pass filters instead of bulky external spectrometers \cite{Heindel2012}. Additionally, the accuracy of our fiber-coupling process also shows promise for the use of single-mode fibers, which however will require further optimization. Finally, the next-generation QSource can easily be adapted to other operation wavelengths, opening up the route for the realization of turn-key stand-alone quantum light sources at telecommunication wavelengths for applications in fiber-based quantum key distribution and quantum repeater networks.
Photons with a helical phase front have a definite orbital angular momentum (OAM)
$l\hbar$, with $l$ an arbitrary integer \cite{AndrewsBook}.
The arguably most
attractive feature of this spatial degree of freedom is that it spans an unbounded, discrete Hilbert space and, thus, can be used
to encode high-dimensional, possibly entangled (qudit) states \cite{KrennPNAS2014} which are considered as a resource to achieve increased channel capacities \cite{WangNatPhot2012}, and to enhance the security of quantum communication protocols \cite{MafuPRA2013,MirhosseiniNJP2016}.
However, the information encoded in OAM photonic states is very fragile with respect to disturbances
along the transmission path. For instance, in free space,
the scattering of photons on random inhomogeneities of the refractive-index of air, as generic in turbulent atmosphere,
results in the crosstalk (coupling) amongst distinct spatial OAM modes \cite{Anguita:08,TylerOptLett09} and leads
to the (unavoidable and irreversible) decay of
OAM entanglement \cite{raymer06,RouxPRA2011,IbrahimPRA2013,LeonhardPRA2015,RouxPRA2016}.
Another, distinct mechanism of entanglement degradation is due to
diffractive effects as induced by physical obstructions. In these cases, it is natural to expect
that a suitable choice of the field modes used to encode the OAM state which is to be transmitted may allow one to compensate for diffractive effects.
Indeed, recent experiments on the entanglement evolution of bipartite OAM entangled states which are diffracted upon a circular obstruction provide evidence that measurements in the Bessel-Gaussian (BG) rather than in the Laguerre-Gaussian (LG) basis allow for the reduction of diffraction-induced losses of entanglement. This observation was qualitatively attributed to the known self-healing property of BG modes \cite{McLarenNatPhys2014}.
In our present contribution, we offer a general and quantitative theoretical treatment of diffraction-induced entanglement decay
in terms of the concomitant overlap between the diffracted modes, and will see that this mutual overlap is in general significantly smaller for BG as compared to LG modes.
After a brief recollection of basic properties of LG and BG modes in Sec.~\ref{modes}, Sec. \ref{prop} presents our treatment of the
diffraction of a single photon on an obstruction.
Sec. \ref{EnCo} generalizes the diffraction problem to entangled
bi-photons, to derive the above result for twin-photon entanglement past an obstacle, before Sec. \ref{conc} concludes our work.
\section{Laguerre-Gaussian and Bessel-Gaussian modes}
\label{modes}
Let us consider a scalar monochromatic wave $\tilde{\psi}(x,y,z,t)=\psi(x,y,z) e^{-i\omega t}$ propagating along the positive $z-$direction. Its spatial part obeys the Helmholtz equation, which in the paraxial approximation turns into the homogeneous parabolic equation \cite{goodman},
\begin{equation}
2ik\frac{\partial{\psi}}{\partial{z}} = \nabla^2_T \psi ,
\label{parabolic}
\end{equation}
where $\nabla^2_T=\partial^2/\partial x^2 + \partial^2/\partial y^2$ is the transverse Laplacian and $k = 2\pi /\lambda$ is the wave number, with $\lambda$ the wavelength.
A cylindrically symmetric eigensystem of Eq. (\ref{parabolic}) is formed by Laguerre-Gaussian (LG) modes. The latter
are characterized by two discrete indices: the azimuthal index $l=0,\pm 1,\pm 2,\ldots$ associated with the OAM $\hbar l$ per photon populating the mode, and the radial index $p=0,1,2,\ldots$ which
fixes the radial intensity distribution in the transverse plane, with
$p+1$ concentric rings of local intensity maxima.
LG modes were the first OAM-carrying light beams to be studied \cite{AllenPRA1992}, and they are commonly used to investigate OAM entanglement \cite{MairLettNature2001}.
Another cylindrically-symmetric set of solutions to Eq. \eqref{parabolic}
is provided by Bessel-Gaussian (BG) modes \cite{GoriOptComm1987}, which are a physical approximation of Bessel beams. The latter are formal, non-diffracting
solutions of the Helmholtz equation, formed by superpositions of plane waves whose wave vectors lie on a cone \cite{DurninPRL1987,DurninOpt1987}; since their intensity profile has to be reproduced at any $z>0$, they would require infinite energy for their creation. Note, however, that BG modes are neither complete nor orthogonal in the entire transverse space, but only in the subspace associated with the OAM degree of freedom. The latter property qualifies them as a suitable basis for OAM-encoded information transmission.
BG modes \cite{GoriOptComm1987} are specified by two parameters: the discrete azimuthal index $l$, which, as for LG modes, defines the OAM, and the (continuous) radial wave number $k_\rho\geq 0$, which characterizes the mode's radial structure. By changing $k_\rho$, the transverse intensity distribution is continuously transformed from a Gaussian ($k_\rho = 0$) to the multi-ring intensity distribution characteristic of BG beams.
Since BG modes are renowned for their self-healing property, i.e., the ability to reconstruct after encountering an obstruction \cite{BouchalOC1998,BouchalOC2002,LitvinOC2009}, they are potentially interesting carriers of OAM entanglement \cite{McLarenOSA2012}.
\section{Diffraction of a single photon on an opaque screen}
We now address the modification of a given photonic input state upon scattering off an obstruction. We first consider the case of a single
photon, and will use these results to model the fate of a bi-photon state in the subsequent Section.
\label{prop}
\subsection{Input states}
Single photon input states with well-defined OAM $\hbar l_0$ are accommodated by the LG mode
\begin{equation}
u^{0l_0}_{LG}(x,y,z=0) = \mathcal{N}_{LG} \left(\rho/w_{LG}\right)^{|l_0|}e^{i l_0 \phi} e^{-\rho^2/w_{LG}^2},
\label{LG}
\end{equation}
with $p=0$, and by the
BG mode
\begin{equation}
u^{\kappa l_0}_{BG}(x,y,z=0) = \mathcal{N}_{BG} J_{l_0}(\kappa \rho) e^{i l_0 \phi}e^{-\rho^2/w_{BG}^2},
\label{BG}
\end{equation}
with $k_\rho=\kappa$, where $\mathcal{N}_{LG}$ and $\mathcal{N}_{BG}$ are normalization constants
(their explicit form is irrelevant for our subsequent analysis), $\rho=\sqrt{x^2+y^2}$ the radius, $\phi$ the azimuthal angle in the $x-y$ plane, $J_{l_0}(x)$ the Bessel function of order $l_0$,
and $w_{LG}$, $w_{BG}$ the mode waists. The plane $z=0$ is chosen to coincide with the beam
waists, where, for simplicity, we also place the obstacle.
Anticipating our results on how an obstruction affects the input modes, the following remark is in order:
As follows from Eqs. (\ref{LG}) and (\ref{BG}), LG (BG) modes have single- (multi-)ring intensity distributions that depend on their beam widths and azimuthal indices. Therefore, an obstruction of a given radius $a$ (which we set equal to the experimental value chosen in \cite{McLarenNatPhys2014}) will, in general, screen out different structural elements of incident beams with distinct azimuthal indices.
However, for BG modes, because of the presence of the Bessel function in Eq. (\ref{BG}), the transverse structure is such that the intensity is spread over multiple rings, and the fraction of intensity obscured by the obstacle is almost independent of $l_0$. Consequently, a constant beam waist $w_{BG}$, given by the experimental value \cite{McLarenNatPhys2014}, is used in the following. In contrast, the intensity of LG modes is distributed over a single ring with the $l_0$-dependent radius $\rho = \sqrt{l_0/2}\, w_{LG}$ (see Eq. \eqref{LG}).
If -- to ensure that the maximum of the input intensity distribution be covered by the obstacle placed on the beam axis -- we impose $\rho=0.8 a$, the beam waist needs to be chosen $l_0-$dependent, according to $w_{LG}=0.8a\sqrt{2/l_0}$.
By comparison of modes with single (LG) and multi ring (BG) intensity distributions, we aim at a better understanding of how the two different radial structures affect the transmission of OAM states across obstructed paths.
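The $l_0$-dependent waist rule above fixes all LG beam parameters used below. As a quick numerical sanity check (a Python sketch; the only input is the obstacle radius $a=200\;\mu$m), it reproduces, up to rounding, the LG waist values quoted in the figure captions:

```python
import math

a = 200e-6  # obstacle radius in meters, as in the setup considered here

def w_lg(l0, a=a):
    # l0-dependent LG waist, w_LG = 0.8 a sqrt(2 / l0), which places the
    # bright ring rho = sqrt(l0 / 2) w_LG at 0.8 a
    return 0.8 * a * math.sqrt(2.0 / l0)

for l0 in (1, 2, 3, 4):
    print(f"l0 = {l0}: w_LG = {w_lg(l0) * 1e6:.1f} um")
```

For $l_0=1,\ldots,4$, this yields the waists used for the figures, up to rounding to the micron.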
Hereafter, we denote single photon states prepared in the spatial modes \eqref{LG} and \eqref{BG} as $|u_{LG}\rangle:=|0,l_0\rangle$ and $|u_{BG}\rangle:=|\kappa,l_0\rangle$, respectively.
\subsection{Boundary conditions}
\label{sec:boundary}
We seek the solution $\psi(x,y,z)$ of \eqref{parabolic} for $z>0$,
for a single photon state of the spatial input mode $u(x,y,z)$ (we skip the labels $LG$ or $BG$), diffracted by an obstruction located at $z=0$. Following the scenario of the experiment \cite{McLarenNatPhys2014}, we assume a circular shape of the obstacle, which is also convenient for our theoretical analysis, owing to its symmetry. Note, however, that this does not imply a fundamental restriction: the diffracted mode $\psi(x,y,z)$
can be numerically inferred (see Sec. \ref{sec:basic}) for arbitrary geometries.
To account for the impact of the obstacle, we impose the {\it modified} Kirchhoff boundary conditions \cite{goodman,FischerOptExpr2007},
\begin{equation}
\psi(x,y,z=0) = u(x,y,z=0)t(x-d,y),
\label{bc}
\end{equation}
where
\begin{equation}
t(x-d,y) = 1 -\exp \left\lbrace - \left[\frac{(x-d)^2 + y^2}{a^2}\right]^m \right\rbrace,
\label{t}
\end{equation}
is the obstacle's transmission function, with $d$ and $a $ its
shift\footnote{Due to the rotational symmetry of the problem,
there is no loss of generality in our choice of the displacement $d$
along the $x$-axis.} with respect to the beam axis and
its radius, respectively, and $m$ a positive integer. In our simulations, we choose $m=12$, and the thus defined super-Gaussian on the right hand side of Eq. (\ref{t}) serves to smoothen edge effects as encountered \cite{FischerOptExpr2007} for Kirchhoff boundary conditions, $t(x-d,y) = \Theta((x-d)^2+y^2 - a^2)$, with $\Theta(x)$ the Heaviside function.
Since part of the incident mode is blocked by the obstruction, the diffracted mode still needs to be renormalized:
\begin{equation}
\psi(x,y,z) := \frac{\psi(x,y,z)}{(\int\int dx dy |\psi(x,y,0)|^2)^{1/2}}.
\label{norm_psi}
\end{equation}
Eq. (\ref{norm_psi}) ensures that $\int\int dx dy |\psi(x,y,z)|^2=1$ for any $z\geq 0$.
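To make the boundary condition concrete, the following Python sketch (the grid size and sampling are illustrative choices, not taken from the text) builds the super-Gaussian transmission function of Eq. (\ref{t}), applies it to a $p=0$, $l_0=1$ LG input mode at the waist, and renormalizes according to Eq. (\ref{norm_psi}):

```python
import numpy as np

# Transverse grid (illustrative sampling): 512 x 512 points over a 4 mm window
n, half = 512, 2e-3
x = np.linspace(-half, half, n)
X, Y = np.meshgrid(x, x, indexing="xy")
dx = x[1] - x[0]

a, d, m = 200e-6, 200e-6, 12   # obstacle radius, shift, super-Gaussian order

def transmission(X, Y, a=a, d=d, m=m):
    # smoothed (super-Gaussian) transmission function t(x - d, y)
    return 1.0 - np.exp(-(((X - d)**2 + Y**2) / a**2)**m)

# p = 0 LG input mode at the waist for l0 = 1, normalized to unit power
l0, w = 1, 226e-6
rho, phi = np.hypot(X, Y), np.arctan2(Y, X)
u = (rho / w)**abs(l0) * np.exp(1j * l0 * phi) * np.exp(-rho**2 / w**2)
u /= np.sqrt(np.sum(np.abs(u)**2) * dx**2)

# modified Kirchhoff boundary condition, followed by renormalization
psi0 = u * transmission(X, Y)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx**2)
```

The transmission function is essentially $0$ inside the disc of radius $a$ centered at $(d,0)$ and $1$ outside, with a smooth edge set by $m$.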
\subsection{Basic properties of the diffracted wave}
\label{sec:basic}
\begin{figure*}
\includegraphics[width=0.9\linewidth]{IntensityPhaseGray2.png}
\caption{Intensity (a-d) and phase (e-h) distribution of BG (a,b,e,f) and LG (c,d,g,h) modes with azimuthal index $l_0=1$. Subplots (a, e) and (c, g) correspond to the unperturbed BG and LG modes $u(x,y,z=0)$, respectively [see Eqs. \eqref{BG} and \eqref{LG}], whereas subplots (b, f, d, h) represent diffracted waves $\psi(x,y,z)$ at $z=50$ mm behind a circular obstacle with the transmission function $t(x-d,y)$ [see Eq. \eqref{t}] parametrized by $a =d = 200\; \mu$m and $m=12$.
The specific parameter values employed to produce the present patterns are:
$p=0$, $\kappa=30$ mm$^{-1}$, $w_{LG}= 226 \; \mu$m, $w_{BG}= 1$ mm, $\lambda= 710$ nm, and, except for $w_{LG}$, remain fixed throughout this work.}
\label{InPhase}
\end{figure*}
An exact numerical solution of the boundary value problem defined by Eqs.~\eqref{parabolic},~\eqref{bc}-\eqref{norm_psi}, for arbitrary parametrisation of the unperturbed
beams \eqref{LG}, \eqref{BG} and of the transmission function \eqref{t}, can be obtained with the help of the {\it angular-spectrum propagator}, well-known in Fourier optics \cite{goodman},
\begin{equation}
\psi (x,y,z) = \mathcal{F}^{-1}\lbrace T(k_x,k_y,z) \mathcal{F}\left[\psi (x,y,z=0)\right]\rbrace,
\label{angspec}
\end{equation}
where $\mathcal{F}$ ($\mathcal{F}^{-1}$) indicates the (inverse) Fourier transform in the transverse plane, and $T(k_x,k_y,z) = \exp [i k z - i(k_x^2+k_y^2)z/2k ]$, with
$k_x$ and $k_y$ wave numbers in the $x$ and $y$ directions, is the angular-spectrum transfer function \cite{goodman}.
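Equation (\ref{angspec}) translates directly into a pair of FFTs. The sketch below (Python/NumPy; grid parameters are illustrative) implements the propagator, with the paraxial transfer function written as $T=\exp[ikz-i(k_x^2+k_y^2)z/(2k)]$, and applies it to a normalized Gaussian; since $|T|=1$, the norm is preserved, while the beam spreads and its on-axis amplitude drops:

```python
import numpy as np

def angular_spectrum(psi0, dx, z, k):
    # FFT implementation of the angular-spectrum propagator, with the
    # paraxial transfer function T = exp[i k z - i (kx^2 + ky^2) z / (2 k)]
    n = psi0.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx, indexing="xy")
    T = np.exp(1j * k * z - 1j * (KX**2 + KY**2) * z / (2 * k))
    return np.fft.ifft2(T * np.fft.fft2(psi0))

# example: propagate a normalized Gaussian over 50 mm
n, half = 256, 2e-3
x = np.linspace(-half, half, n)
X, Y = np.meshgrid(x, x, indexing="xy")
dx = x[1] - x[0]
k = 2 * np.pi / 710e-9      # wavelength of the experiment-inspired setup
w = 226e-6
psi0 = np.exp(-(X**2 + Y**2) / w**2)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx**2)
psi = angular_spectrum(psi0, dx, z=50e-3, k=k)
```

The same routine, applied to the obstructed field of Eqs. (\ref{bc})-(\ref{norm_psi}), produces the diffracted patterns discussed below.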
The resulting intensity (upper) and phase (lower row)
distributions for diffracted BG (second) and LG (fourth column)
modes with azimuthal index $l_0=1$ are presented in Fig. \ref{InPhase}, in comparison to those of the incident BG (first) and LG (third column) modes.
Clearly, the obstacle significantly perturbs both types of input modes. However, both the intensity and the phase distributions of the BG mode are less affected by the obstruction than those of the LG mode. This enhanced stability of BG modes is known \cite{BouchalOC1998} and attributed to their ``self-healing'' property (which LG modes with $p=0$ lack), i.e., the ability to restore their spatial structure after a disturbance. It is worth pointing out that the concept of self-healing is not quantitative, but purely based on a visual inspection of a finite region around the beam axis: the effect of the obstacle is still present in the transverse plane, but it has been pushed away from the observation window. This is easy to understand if we think of BG modes as superpositions of waves propagating on a cone.
A modification of the intensity and phase distributions of diffracted
modes as observed in Fig.~\ref{InPhase} can be regarded as the manifestation of diffraction-induced coupling of the spatial modes of the incident wave to other spatial modes. In other words, the diffracted mode \eqref{norm_psi}
corresponds to a normalized single photon diffracted state
\begin{equation}
|\psi_{LG}\rangle=\sum_{pl}c_{pl}(z)|p,l\rangle,
\label{diff_LG}
\end{equation}
for the LG mode, and to
\begin{equation}
|\psi_{BG}\rangle=\sum_{l}\int dk_\rho c_{l}(z,k_\rho)|k_\rho,l\rangle,
\label{diff_BG}
\end{equation}
for a BG mode, with $|p,l\rangle$ and $|k_\rho,l\rangle$ single photon states of the modes $u^{pl}_{LG}(x,y,z)$ and $u^{k_\rho l}_{BG}(x,y,z)$, respectively. The latter, unperturbed mode functions are known exactly \cite{AllenPRA1992,GoriOptComm1987}, but they can also be easily assessed numerically, by plugging Eqs. \eqref{LG} and \eqref{BG} into the angular-spectrum propagator \eqref{angspec}.
From Eqs.~\eqref{diff_LG} and \eqref{diff_BG} it is clear that diffraction introduces crosstalk among different OAM modes \cite{Anguita:08,TylerOptLett09}.
The expansion coefficients are then given by the inner products of the diffracted field and the unperturbed mode functions,
\begin{subequations}
\begin{eqnarray}
&c_{pl}(z)&\!= \!\int\! \int\! dx dy u_{LG}^{pl*}(x,y,z) \psi_{LG}(x,y,z),
\label{coeffa} \\
&c_{l}(z,k_\rho)&=\!\int \!\int\! dx dy u_{BG}^{k_\rho l*}(x,y,z) \psi_{BG}(x,y,z).
\label{coeffb}
\end{eqnarray}
\label{coeffs}
\end{subequations}
We point out that the inner products \eqref{coeffs} are invariant under translations along the $z$-axis. This can be proven \cite{ChuOptExpr2014} using the Plancherel theorem \cite{goodman} and the fact that the $z$-propagation is determined only by the angular-spectrum transfer function [see Eq. \eqref{angspec}]. Therefore, henceforth we will omit the $z$-dependence of the expansion coefficients.
A standard way to analyze crosstalk is through the assessment of the expansion coefficients in Eq. \eqref{coeffs}. However, even without addressing the individual coupling amplitudes $c_{pl}$, $c_{l}(k_\rho)$, we can build some qualitative understanding of the presence of different OAM modes in the diffracted wave.
In the next section we will see how the scattering of the input mode into a superposition of many OAM modes induces a non zero mutual overlap of the diffracted waves, and thereby affects the output state entanglement.
Let us therefore examine the bottom row of Fig. \ref{InPhase} where the phase profiles of the incident (azimuthal index $l_0=1$) and diffracted (in general, arbitrary $l$-values) waves are depicted.
We identify the $l_0 = 1$ character of the incident modes in Figs. \ref{InPhase}(e) and (g) by their only phase singularity at the origin, associated with a topological charge one. Inspection of the phase profile of the diffracted beam shows that, notwithstanding some phase distortions, the diffracted BG mode [Fig. \ref{InPhase}(f)] still exhibits the very sole phase singularity at the origin.
This suggests that the azimuthal index is almost preserved, or that scattering to other OAM modes is weak. In contrast, the appearance of multiple phase singularities for the diffracted LG mode -- for example, along the positive $x$ axis -- is evident [Fig.~\ref{InPhase}(h)]. This suggests that this phase structure belongs to a superposition of multiple OAM modes.
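These qualitative observations can be checked directly by evaluating inner products of the type of Eq. (\ref{coeffs}) at $z=0$ (allowed by the $z$-invariance discussed above). The following Python sketch restricts the projection, for illustration only, to $p=0$ LG modes of a common waist (a full computation would include all radial orders): a centered obstacle induces no azimuthal crosstalk by symmetry, whereas a displaced one does.

```python
import numpy as np

n, half = 256, 2e-3
x = np.linspace(-half, half, n)
X, Y = np.meshgrid(x, x, indexing="xy")
dx = x[1] - x[0]
rho, phi = np.hypot(X, Y), np.arctan2(Y, X)

a, m, w = 200e-6, 12, 226e-6      # obstacle radius, edge order, LG waist

def lg(l):
    # normalized p = 0 LG mode at the waist
    u = (rho / w)**abs(l) * np.exp(1j * l * phi) * np.exp(-rho**2 / w**2)
    return u / np.sqrt(np.sum(np.abs(u)**2) * dx**2)

def diffract(u, d):
    # super-Gaussian obstacle at z = 0, followed by renormalization
    t = 1.0 - np.exp(-(((X - d)**2 + Y**2) / a**2)**m)
    psi = u * t
    return psi / np.sqrt(np.sum(np.abs(psi)**2) * dx**2)

def c(l, psi):
    # inner product with the p = 0, waist-w LG mode of azimuthal index l
    return np.sum(np.conj(lg(l)) * psi) * dx**2

psi_centered = diffract(lg(1), d=0.0)      # obstacle on the beam axis
psi_shifted  = diffract(lg(1), d=200e-6)   # obstacle displaced by one radius
```

For the centered obstacle, $|c_{-1}|$ vanishes to numerical precision, since cylindrical symmetry forbids coupling between distinct azimuthal indices; for the displaced obstacle, it acquires a sizeable value.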
\section{Diffraction of a biphoton on two opaque screens}
\label{EnCo}
We can now import our above results for the diffraction of a single input mode, to quantify the entanglement decay of an OAM bi-photon input state when each photon
is diffracted upon an obstacle.
While this scenario is inspired by the experiment in \cite{McLarenNatPhys2014}, the physics considered here is different, inasmuch as the experiment quantified the output state entanglement as a function of the distance between obstacle (centered at the optical axis) and detector. For distances smaller than the self-healing length of the encoding BG modes, measurement noise obscures the output state entanglement in \cite{McLarenNatPhys2014}. In contrast, in our present set-up the distance between obstacle and detector is held fixed, and we instead quantify the entanglement reduction induced by transverse displacements of the obstacle, which lead to non-trivial scattering into different OAM modes upon transmission. Therefore, the entanglement reduction observed here is due to an actual perturbation of the transmitted state, rather than to a smooth modulation of the signal-to-noise ratio.
\subsection{Setup}
\label{Setup}
\begin{figure}
\includegraphics[width=1\linewidth]{SetupV1.pdf}
\caption{Sketch of the setup. A source produces a pair of maximally-entangled twisted photons. The two photons propagate in opposite directions; at their beam waists ($z=0$) they are diffracted by circular obstructions of radius $a$. The two obstacles are displaced by $\pm d$ with respect to the beam axis. Finally, the photons are detected at a distance $L$ from the obstacles.}
\label{entfig}
\end{figure}
More specifically, we consider the setting in Fig. \ref{entfig}:
A source generates pairs of single photon
excitations of LG or BG modes, which are Bell state (i.e., maximally) entangled in their OAM,
\begin{equation}
\label{psi0}
\ket{\Psi_0} = \frac{1}{\sqrt{2}}(\ket{l_0,-l_0} + \ket{-l_0,l_0}),
\end{equation}
with $\ket{\pm l_0}$ the shorthand notation for either $\ket{0,\pm l_0}$ (LG mode) or $\ket{\kappa,\pm l_0}$ (BG mode).
To minimize the notational overhead, we assume that each of the
photons is diffracted by a circular screen, and that both screens have identical radii and are placed with opposite offsets with respect to the optical axis.
For simplicity and without loss of the essential physics, in our calculations we set both screens at $z=0$.\footnote{Finite distances $z$ would lead to the appearance of $z$-dependent propagation phases (see Sec.~\ref{sec:basic}) in the obstacle plane, resulting, e.g., in rotation of the phase distributions in Fig.~\ref{InPhase}(e,g). However, this rotation does not affect entanglement.}
Finally, the biphoton state is calculated at a distance $L$ from the obstacle planes.
Under the transformation \eqref{bc} of each of the modes acting as carriers of the single photon components $|\pm l_0\rangle$ of the twin-photon state \eqref{psi0}, the
input state's underlying mode structure is
mapped on the (normalized, according to Eq. \eqref{norm_psi}) diffracted output mode
\begin{eqnarray}
\Psi(\mathbf{r}_1,\mathbf{r}_2;L)&= \frac{1}{\sqrt{2}} \left [\psi_{l_0}^d(\mathbf{r}_1,L)\psi_{-l_0}^{-d}(\mathbf{r}_2,L) \right . \nonumber\\
& \left . +\psi_{-l_0}^d(\mathbf{r}_1,L)\psi_{l_0}^{-d}(\mathbf{r}_2,L) \right ],
\label{diff_2photon}
\end{eqnarray}
where $\psi_{l}^{\pm d}(\mathbf{r}_i, L)$ [$\mathbf{r}_i=(x_i,y_i)$ and $i=1,2$] represents the constituent single modes' diffractive image (at the distance $L$ from the obstruction). Accordingly, by generalization of Eqs. (\ref{diff_LG},\ref{diff_BG}), the diffracted bi-photon output state reads
\begin{equation}
|\Psi(L)\rangle\!=\!\begin{cases}
\sum_{p_1,l_1,p_2,l_2}c_{p_1l_1,p_2l_2}|p_1,l_1;p_2,l_2\rangle,\\\\
\sum_{l_1,l_2}\int dk^\prime_{\rho}dk^{\prime\prime}_{\rho}c_{l_1l_2}(k^\prime_{\rho},k^{\prime\prime}_{\rho})|k^\prime_{\rho},l_1;k^{\prime\prime}_{\rho},l_2\rangle,
\end{cases}
\label{Psi_L}
\end{equation}
for OAM encoding in the LG (first line) or in the BG (second line) basis, respectively. The dependence on $L$ on the right hand side of Eq. (\ref{Psi_L}) is incorporated solely in the basis states. This dependence, however, cannot affect entanglement, which is encoded in the expansion coefficients. Thus, entanglement remains invariant under $z$-translations, and we can lighten the notation by dropping the propagation distance $L$ in all subsequent expressions.
\subsection{Entanglement of the output state}
\label{sec:ent}
\begin{figure}[t!]
\includegraphics[width=0.9\linewidth]{bplot_v2.pdf}
\caption{(Color online) Mutual overlap $b$ versus the relative displacement $d/a$ (where $a=200\;\mu$m) for (a) BG modes with $l_0=1,2,3,4$ and $w_{BG} = 1$ mm, and (b) for LG modes with the same azimuthal numbers and $w_{LG}=226, 160, 130, 113\;\mu$m, respectively, where variable beam widths ensure screening off $l_0$-independent structural elements of the LG beams [see the discussion following Eq. \eqref{BG}].}
\label{bplot}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=0.9\linewidth]{CBGLG.pdf}
\caption{(Color online) Concurrence $C(|\Psi\rangle)$ versus the relative displacement $d/a$ ($a=200\;\mu$m), for the diffracted twin-photon state \rref{psi0} of LG (red squares) and BG (blue dots) modes with $l_0 = 1$. Beam waists were set to $w_{LG} = 226\;\mu$m and $w_{BG} = 1$ mm.}
\label{Cbglg}
\end{figure}
Given the above, we can now proceed to quantify the diffracted bi-photon output state's entanglement, directly from its transverse position representation \eqref{diff_2photon}. For this purpose we employ the higher dimensional generalization \cite{RungtaPRA2001, GuoQuantInfProc2013} of concurrence \cite{WoottersPRL1998}
\begin{equation}
\label{C}
C(|\Psi\rangle) = \sqrt{2\left(1 - \mathrm{tr}[\varrho^2]\right)},
\end{equation}
where $\varrho=\mathrm{tr}_2\left[|\Psi\rangle\langle \Psi|\right]$ is the reduced density matrix of either one of the entangled
photons, after tracing out the other. This trace is performed on the output state's two-photon density matrix
\begin{equation}
\sigma(\mathbf{r}_1,\mathbf{r}_2;\mathbf{r}^{\prime}_1,\mathbf{r}^{\prime}_2)=\Psi^*(\mathbf{r}_1,\mathbf{r}_2)\Psi(\mathbf{r}^{\prime}_1,\mathbf{r}^{\prime}_2) \, ,
\label{sigma}
\end{equation}
by integration over the transverse coordinates of the second photon, to obtain
\begin{align}
\varrho(\mathbf{r}_1,\mathbf{r}_1^\prime) & = \int d^2r_2 \sigma(\mathbf{r}_1,\mathbf{r}_2;\mathbf{r}_1^\prime,\mathbf{r}_2)\nonumber\\
&=\frac{1}{2}\left [ \psi_{l_0}^d(\mathbf{r}_1)\psi^{d\,*}_{l_0}(\mathbf{r}_1^\prime)+b \psi_{l_0}^d(\mathbf{r}_1)\psi^{d\,*}_{-l_0}(\mathbf{r}_1^\prime) \right. \label{reduced_dm}
\\
&\left.+ b \psi^d_{-l_0}(\mathbf{r}_1)\psi^{d\,*}_{l_0}(\mathbf{r}_1^\prime)+ \psi^d_{-l_0}(\mathbf{r}_1)\psi^{d\,*}_{-l_0}(\mathbf{r}_1^\prime)\right ]. \nonumber
\end{align}
In~\eqref{reduced_dm} we introduced the (real) parameter,
\begin{equation}
b \equiv \int d^2 r \psi^{d\,*}_{-l_0}(\mathbf{r})\psi^d_{l_0}(\mathbf{r})\, ,
\label{crosstalk2}
\end{equation}
which is the mutual overlap between the diffracted fields $\psi^d_{- l_0}(\mathbf{r})$ and $\psi^d_{+ l_0}(\mathbf{r})$. Scattering of the input fields $u_{\pm l_0}(\mathbf{r},z=0)$ into superpositions of OAM modes (see Fig.~\ref{InPhase}) results in a nonzero value of $b$ if and only if some of the modes in the diffracted fields are common. By virtue of Eq. (\ref{crosstalk2}),
$0\leq |b| \leq 1$, with the upper bound attained in the degenerate
limit $-l_0=l_0=0$, where, however,
$|\Psi_0\rangle$
reduces to a product state.
\begin{figure*}[t!]
\includegraphics[width=0.9\linewidth]{Cxi.pdf}
\caption{(Color online) (a-c) Concurrence $C(|\Psi\rangle)$ versus the relative displacement $d/a$ ($a = 200\;\mu$m), for the diffracted twin-photon state \eqref{psi0} of LG modes with $l_0 = 2,3,4$. Panels (a,b,c) correspond to three distinct values of the ratio $\xi/a = 0.4, 0.75, 1.2$, respectively. (d) Number of minima of the concurrence $N_{min}$ versus $\xi(l_0)/a$ [see Eq. \eqref{xi}], for $l_0 = 2,3,4$. }
\label{Cxi}
\end{figure*}
Several examples of the mutual overlap $b$ as a function of the relative displacement $d/a$ are shown in Fig. \ref{bplot}. We first note that $b$ vanishes for $d=0$, since the perturbation caused by a centered obstacle leaves the cylindrical symmetry of the diffracted waves intact.
Furthermore, $b\to 0$ when the screens' displacement is large compared to the essential support of the beam, due to the trivial reason that the screens' impact turns negligible in this limit.
For intermediate displacements of the obstructions, interference of the diffracted modes results in oscillations of $b$, with amplitudes which are smaller for BG than for LG
encoding, for a given $l_0$, and progressively decrease with increasing OAM, for both sets of modes.
One can see a progression in the oscillation of the mutual overlap in the LG case. The number of peaks and dips is equal to the OAM index $l_0$ (see also Sec. \ref{sec:phase_corr}). The oscillations are modulated by an envelope function, whose amplitude and width gradually decrease with increasing $l_0$. In the case of $l_0=1$, the dip in the oscillation more or less coincides with the peak of the envelope function. Therefore its amplitude seems disproportionately large compared to those for higher orders. In the BG case, the shapes of the curves are more complex. There are still oscillations, but their modulation is more irregular. Nevertheless, a similar progression causes the amplitudes of these curves to decrease gradually for increasing $l_0$.
The significance of the mutual overlap $b$ is that it
uniquely determines the diffraction-induced entanglement decay.
Indeed, using the normalization of $\psi^d_{\pm l_0}({\bf r})$,
as well as Eqs. (\ref{reduced_dm}) and (\ref{crosstalk2}), we find
\begin{align}
\label{redpur}
\mathrm{tr}(\varrho^2) &= \int d^2 r \int d^2 r^\prime \varrho(\mathbf{r},\mathbf{r}^\prime) \varrho(\mathbf{r}^\prime,\mathbf{r}) \\
&=\frac{1}{2}(1 + 6b^2 + b^4), \nonumber
\end{align}
which leads to an explicit expression for the output state's concurrence \eqref{C}, as the main result of this work:
\begin{equation}
C(|\Psi\rangle)=\sqrt{1-6b^2-b^4}.
\label{Can}
\end{equation}
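Equations (\ref{redpur}) and (\ref{Can}) follow from elementary algebra in the nonorthogonal pair $\{\psi^d_{l_0},\psi^d_{-l_0}\}$: both the coefficient matrix $M$ of Eq. (\ref{reduced_dm}) and the pair's Gram matrix $G$ equal $\bigl(\begin{smallmatrix}1 & b\\ b & 1\end{smallmatrix}\bigr)$, and $\mathrm{tr}(\varrho^2)=\frac{1}{4}\mathrm{tr}[(MG)^2]$. A few lines of Python confirm the closed forms:

```python
import numpy as np

def purity(b):
    # tr(rho^2) for the reduced state: with the nonorthogonal pair
    # {psi_+l0, psi_-l0} of real mutual overlap b, both the coefficient
    # matrix M and the Gram matrix G equal [[1, b], [b, 1]], and
    # tr(rho^2) = (1/4) tr[(M G)^2]
    M = np.array([[1.0, b], [b, 1.0]])
    G = M.copy()
    return 0.25 * np.trace(M @ G @ M @ G)

def concurrence(b):
    # valid for small enough b, where 1 - 6 b^2 - b^4 >= 0
    return np.sqrt(2.0 * (1.0 - purity(b)))

for b in (0.0, 0.1, 0.2, 0.3):
    assert abs(purity(b) - 0.5 * (1 + 6 * b**2 + b**4)) < 1e-12
    assert abs(concurrence(b) - np.sqrt(1 - 6 * b**2 - b**4)) < 1e-12
```

In particular, $b=0$ gives $\mathrm{tr}(\varrho^2)=1/2$ and maximal concurrence $C=1$, as expected for the undisturbed Bell state.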
However, $b$ is not an independent parameter but a function of the beam waist, of the size and displacement of the screen, and of the azimuthal index.
\subsection{Entanglement loss upon diffraction}
\label{sec:ent-res}
Let us now assess the entanglement decay featured by the diffracted image of \rref{psi0} -- for encoding in $l_0 = 1$. This is the case for which we have the maximum mutual overlap
(see Fig. \ref{bplot}).
Fig. \ref{Cbglg} shows the dependence of $C(|\Psi\rangle)$ on the relative displacement $d/a$. Comparison to Fig. \ref{bplot} shows that the behaviour of $C(|\Psi\rangle)$ follows directly from that of $b$, and that, in particular, OAM entanglement is more robust in the BG than in the LG basis, for most values of $d/a$. Only for large displacements does LG encoding offer a tiny advantage, since the multi-ring intensity profile of BG modes leads to some residual overlap even at large $d/a$. This multi-ring structure is also at the origin of the
modulation of the BG mode's output concurrence with the displacement, and of the underlying mutual overlap behaviour depicted in
Fig. \ref{bplot}(a).
Minimal output concurrence is observed, for both encodings,
at $d/a\approx 1.2$; it corresponds to stationary points of the mutual overlap $b$ in Fig.~\ref{bplot}, and is due to the obstacles' overlap with the maxima of the intensity distribution of the input beams (e.g., the LG input mode with $l_0=1$ and $p=0$ has its intensity maximum
concentrated along a single ring with radius $w_{LG}/\sqrt{2}$, and $C(|\Psi\rangle)$ exhibits a minimum precisely when the obstacle is placed at the
corresponding position).
\subsection{Entanglement and the phase correlation length}
\label{sec:phase_corr}
The situation is slightly more complicated for LG-encoded OAM entangled input states \eqref{psi0} with $l_0\geq 2$. While an LG mode's intensity profile exhibits only one bright ring also for $l_0\geq 2$, the mutual overlap $b$ oscillates, as shown in Fig. \ref{bplot}(b), and so does the output concurrence in Fig. \ref{Cxi}, due to \eqref{Can}. We therefore need to resort to the output state's phase distribution, and, in particular, to its {\it phase
correlation length} $\xi(l_0)$ \cite{LeonhardPRA2015}, which accounts for both, its phase and intensity profiles.
For LG modes with $p=0$ and $|l_0|\geq 2$, $\xi(l_0)$ reads \cite{LeonhardPRA2015}
\begin{equation}
\xi(l_0) = \frac{w_{LG}}{\sqrt{2}}\sin\left(\frac{\pi}{2|l_0|}\right)\frac{\Gamma(3/2 +|l_0|)}{\Gamma(1+|l_0|)}\, ,
\label{xi}
\end{equation}
and it was shown that the entanglement decay experienced by LG-encoded OAM entangled states in a weakly turbulent atmosphere
is a universal function of the ratio $\xi(l_0)/r_0$, where $r_0$ is the turbulence coherence length (the typical length scale on which turbulence-induced phase errors are correlated \cite{FRIED:65}).
In our present scenario, a natural analog of $r_0$ is the radius $a$ of the obstacle, and the curves in Fig.~\ref{Cxi} are obtained for
distinct values of the ratio $\xi(l_0)/a$ for each of the panels (a-c).
While there is no apparent regularity of the behaviour
of $C(|\Psi\rangle)$ in either one of the panels (a-c), the results can be classified by one specific feature -- the number of minima $N_{min}$ of $C(|\Psi\rangle)$.
To demonstrate this, panel (d) correlates $N_{min}$ with the ratio $\xi(l_0)/a$: Indeed, for small values $\xi(l_0)/a\lesssim 0.4$, $N_{min}=4$, and, with increasing $\xi(l_0)/a$, $N_{min}$ decreases in steps of one, at $\xi(l_0)/a\simeq 0.6$ and $\xi(l_0)/a\simeq 1.07$.
\section{Conclusion}
\label{conc}
In this work, we studied the effect of diffraction on the entanglement content of a singlet-type biphoton OAM state, initially encoded either in LG or in BG modes. Using methods of Fourier optics, we inferred the diffracted twin-photon state, taking into account the size and the shift of the obstacle with respect to the beam axis, as well as different initial azimuthal quantum numbers $l_0$ of the modes.
We derived an analytical formula for the concurrence of the diffracted state, which is an infinite dimensional, pure state. This formula depends only on one parameter -- the scattering-induced mutual overlap between the two diffracted waves which stem from the OAM modes with indices $l_0$ and $-l_0$, respectively. This result, in particular, shows that peculiarities of spatial distributions of BG and LG modes lead to different mutual overlaps of diffracted waves, which, eventually, results in stronger robustness of entanglement for BG modes.
Thereby, our theoretical findings complement the experimental observations of \cite{McLarenNatPhys2014}.
This work corroborates that scattering into superpositions of OAM modes underlies the behavior of entanglement under deterministic perturbations, such as diffraction, as much as it does under random disturbances, such as weak atmospheric turbulence \cite{LeonhardPRA2015}. Therefore, our results suggest
a stronger resilience in atmospheric turbulence of photonic OAM entanglement encoded into BG than into LG modes -- a statement
which will be interesting to verify in the future.
Another possible research direction will be to interpret the diffraction-induced spreading of the OAM spectrum in terms of the uncertainty principle for angular position and angular momentum \cite{franke_arnold_2004} and to establish a connection between angular uncertainty and OAM entanglement loss.
\acknowledgments
We would like to thank A. Forbes for illuminating and enjoyable discussions. G.S., V.N.S. and A.B. acknowledge support by Deutsche Forschungsgemeinschaft under grant DFG BU 1337/17-1.
\label{sec:intro}
The power of multi-layer convolutional networks for learning features has been established in audio signal processing, for tasks such as speech recognition and music classification~\cite{lecun1995convolutional, hinton2012deep, lee2009unsupervised}. The idea of a convolutional network is to convolve signals with filters, and to use the obtained features for classification. The scattering transform, proposed by Mallat~\cite{mallat2012group}, employed pre-specified wavelets as filters, and obtained state-of-the-art results on some music and speech datasets~\cite{anden2011multiscale,anden2014deep}. The structure of the scattering transform is similar to a cascade of constant-Q or mel-filter banks~\cite{anden2011multiscale}, and it can capture useful spectral content of acoustic signals.
The spectral content of acoustic signals is often analyzed using
time-frequency energy distributions with a particular frequency scale. For example, the popular mel-frequency cepstral coefficients (MFCC) are based on the mel-scale, a human perceptual frequency scale~\cite{stevens1937scale}. The MFCC is a DCT of the log energy of the mel-frequency spectral coefficients (MFSC)~\cite{rabiner1993fundamentals}, which were shown to be closely related to scattering transform coefficients~\cite{anden2011multiscale,anden2014deep}.
PCANet~\cite{chan2014pcanet} was recently proposed for image classification. In this framework, filters are learned from the data as principal components at the local ``image patch'' level. PCANet was shown to match and in some cases improve upon state of the art performance in a variety of image classification benchmarks.
In this paper, we translate the PCANet framework into the world of acoustic signal processing and relate it to spectral feature representations. The PCANet filters are obtained as the eigenvectors of a local covariance matrix, which is a Toeplitz matrix, and so the resulting filters can be approximated by the Discrete Cosine Transform (DCT) basis functions~\cite{ahmed1974discrete,sanchez1995diagonalizing}. We thus introduce the use of DCTNet for acoustic signal classification, in which PCA filters are simply replaced with fixed DCT filters. We relate DCTNet to spectral feature representation methods for acoustic signals, such as the short time Fourier transform (STFT), spectrogram and linear frequency spectral coefficients (LFSC). In particular, each DCTNet layer is essentially a short time DCT. The process of our PCANet and DCTNet is shown in Fig.~\ref{fig:process}. More technical details can be found in Section~\ref{sec:signal} to Section~\ref{sec:lfsc} and also in the dissertation of Xian~\cite{xian2015whale}.
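The DCT approximation of PCA filters invoked above can be illustrated numerically. The sketch below (with assumed parameters: an AR(1)-type Toeplitz covariance $C_{ij}=r^{|i-j|}$ with $r=0.99$ and patch length $n=32$) shows that the leading eigenvectors of such a Toeplitz matrix, i.e., the PCA filters, are close to the first DCT-II basis vectors, consistent with the classical results cited above:

```python
import numpy as np

n, r = 32, 0.99                  # patch length, AR(1) correlation (assumed values)
idx = np.arange(n)
C = r ** np.abs(idx[:, None] - idx[None, :])   # Toeplitz covariance matrix

# PCA filters: eigenvectors of C, sorted by decreasing eigenvalue
eigvals, eigvecs = np.linalg.eigh(C)           # eigh returns ascending order
pca = eigvecs[:, ::-1]

def dct_vector(k):
    # k-th orthonormal DCT-II basis vector of length n
    v = np.cos(np.pi * k * (2 * idx + 1) / (2 * n))
    return v / np.linalg.norm(v)

# absolute overlaps between matched PCA filters and DCT-II vectors
overlaps = [abs(pca[:, k] @ dct_vector(k)) for k in range(4)]
```

The matched overlaps are close to one for the leading filters, which is what motivates replacing learned PCA filters by fixed DCT filters in DCTNet.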
\begin{figure}[htb]
\centering
\includegraphics[height=50mm,width=90mm]{process_1.jpg}
\vspace{-10pt}
\caption{\small{PCANet and DCTNet Process. The input is the time series of an acoustic signal. After convolving with DCT or PCA filterbanks, we have the short time DCT or short time PCA of the signal. Summing and averaging the second layer's outputs, we have linear scale spectral coefficients, and we use them for spectral clustering and classification.}}
\label{fig:process}
\end{figure}
DCTNet is of interest as an alternative to the scattering transform for the following reasons. Firstly, DCTNet features explicitly give frequency information, which is likely beneficial for many acoustic signals, whereas scattering transform coefficients capture scale information. Secondly, the DCT is a popular tool in audio signal coding~\cite{princen1986analysis}, for example in MP3, and so DCTNet provides a natural way to incorporate software and hardware in these contexts into a deep learning framework.
We note that Ng and Teoh~\cite{ng2015dctnet} recently introduced the use of a DCTNet variant for image feature extraction, applying block-wise 2D convolution, histogramming and binary hashing. In contrast, we adopt DCTNet for acoustic signals, using an entirely different post-processing strategy, and we also provide insight into the different time-frequency content revealed by each layer of DCTNet.
We present experiments using the DCLDE whale vocalization data~\cite{dclde}. The results show that DCTNet improves performance in classification tasks, suggesting that DCTNet is an attractive tool for acoustic signal processing problems, such as underwater acoustics.
\section{The DCT Approximation for PCANet Eigenfunctions}
\label{sec:signal}
In the PCANet framework~\cite{chan2014pcanet}, filterbanks are obtained as
the eigenfunctions of the local covariance matrix. Given a signal $\bold{x}=\small{(x(1), x(2),\cdots, x(N))}$, we construct the Hankel matrix
\small
\begin{align*}
\textbf{X}=\left[\begin{array}{cccc}
x(1) & x(2) &\cdots & x(N-M+1) \\
x(2) & x(3) &\cdots & x(N-M+2) \\
\vdots & \vdots & \ddots & \vdots \\
x(M) & x(M+1) & \cdots & x(N)
\end{array}\right],
\end{align*}
\normalsize
where $M<N$. When the first column $\small{x(1), \cdots, x(M)}$ and the last column $x(N-M+1),\cdots, x(N)$ are zero, then, letting $\rho_j=\sum\limits_{i} x(i)x(i+j)$, the sample covariance matrix
\small
\begin{align*}
\bold{XX^{T}}=\left[\begin{array}{ccccc}
1 & \rho_1 & \rho_2 &\cdots & \rho_{M-1} \\
\rho_1 & 1 & \rho_1 &\cdots & \rho_{M-2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\rho_{M-1} & \rho_{M-2} & \rho_{M-3} &\cdots & 1
\end{array}\right]
\end{align*}
\normalsize
is a Toeplitz matrix.
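This construction can be checked numerically. The following numpy sketch is illustrative only (not part of the original experiments); the zero first and last frames assumed in the text are enforced by explicit zero-padding.

```python
import numpy as np

def local_covariance(x, M):
    """Hankel matrix of length-M frames and its outer product X X^T."""
    X = np.array([x[i:i + M] for i in range(len(x) - M + 1)]).T
    return X @ X.T

# Zero-pad so that the first and last columns of X are zero, as in the text.
rng = np.random.default_rng(0)
M = 8
x = np.concatenate([np.zeros(M), rng.standard_normal(64), np.zeros(M)])
C = local_covariance(x, M)
```

With this padding, every entry of `C` reduces to the full autocorrelation $\rho_{|i-j|}$, so `C` is symmetric Toeplitz.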
When the autocorrelation of the signal decays quickly, the discrete cosine transform (DCT) basis functions can closely approximate the eigenfunctions of the Toeplitz matrix~\cite{ahmed1974discrete,sanchez1995diagonalizing}. A comparison of the top eight DCT and PCA eigenfunctions for a single whale vocalization is shown in Fig.~\ref{fig:eigenfunctions_comp}.
\begin{figure}[!ht]
\centering
\subfigure[PCA eigenfunctions 1-4]{\includegraphics[width=.45\linewidth,height=2.3cm]{pca_eigen_1}}
\subfigure[PCA eigenfunctions 5-8]{\includegraphics[width=.45\linewidth,height=2.3cm]{pca_eigen_2}}
\\
\vspace{-6pt}
\centering
\subfigure[DCT eigenfunctions 1-4]{\includegraphics[width=.45\linewidth,height=2.3cm]{dct_eigen_1}}
\subfigure[DCT eigenfunctions 5-8]{\includegraphics[width=.45\linewidth,height=2.3cm]{dct_eigen_2}}
\vspace{-6pt}
\caption{Comparison of the top eight DCT eigenfunctions and PCA eigenfunctions for a single whale vocalization}
\label{fig:eigenfunctions_comp}
\end{figure}
The autocorrelation of the signal and the correlation between the DCT and PCA eigenfunctions are shown in Fig.~\ref{fig:eigenfunctions_comp2}. We can see that the quality of the DCT approximation to the PCA eigenfunctions depends on the structure of the data. An error bound for using DCT eigenfunctions to diagonalize the Toeplitz matrix has been given in terms of the autocorrelation coefficients of the signal~\cite{sanchez1995diagonalizing}.
\begin{figure}[!ht]
\centering
\subfigure[Autocorrelation of signal]{\includegraphics[width=.45\linewidth,height=2.55cm]{autocorr_w20_toeplitz}}
\subfigure[Eigenfunctions correlation]{\includegraphics[width=.45\linewidth,height=2.55cm]{correlation_dct_pca_eigenvector}}
\vspace{-6pt}
\caption{Signal autocorrelation and eigenfunctions correlation}
\label{fig:eigenfunctions_comp2}
\end{figure}
We can also interpret the DCT approximation in a Toeplitz--Fourier framework, since a Toeplitz matrix $T$ can be represented as the sum of a circulant matrix $C$ and a skew-circulant matrix $S$, that is, $T=C+S$. The eigenfunctions of a circulant matrix are well known to be Fourier basis functions. We can view the skew-circulant matrix as an error term, and choose the circulant part to capture as much of the given Toeplitz matrix as possible by applying Chan's optimal circulant preconditioner~\cite{chan1988optimal}, that is, $C=\arg\min\limits_{C'}||C'-T||_{F}^2$. By this argument, the eigenfunctions of $T$ are well approximated by Fourier basis functions, and consequently by the closely related DCT basis functions.
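The quality of this approximation can be illustrated numerically on a Markov-1 (AR(1)-type) Toeplitz matrix, the standard test case in this context; the correlation value $\rho=0.95$ and size $M=8$ below are illustrative choices, not values from our data.

```python
import numpy as np

M, rho = 8, 0.95
# Markov-1 Toeplitz matrix: entries rho^{|i-j|}.
T = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))

# PCA eigenvectors, sorted by decreasing eigenvalue.
vals, vecs = np.linalg.eigh(T)
vecs = vecs[:, np.argsort(vals)[::-1]]

# Orthonormal DCT-II basis vectors, ordered by increasing frequency.
n = np.arange(M)
D = np.array([np.cos(np.pi * (n + 0.5) * k / M) for k in range(M)]).T
D = D / np.linalg.norm(D, axis=0)

# Matched-index correlations between PCA eigenvectors and DCT vectors:
# high values indicate that the DCT nearly diagonalizes T.
corr = np.abs(np.sum(vecs * D, axis=0))
```

For strongly correlated data the matched correlations are close to one, reflecting the classical DCT/KLT approximation result.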
\section{Short time PCA and short time DCT}
\label{sec:stft}
The discrete time STFT can be written as~\cite{oppenheim1978applications}:
\small
\begin{align*}
X(m,\omega)&=\sum\limits_{n=-\infty}^{\infty}x(m-n)w(n)\exp(-j\omega (m-n)) \\
&=\exp(-j\omega m)\sum\limits_{n=-\infty}^{\infty}(w(n)\exp(j\omega n))x(m-n) \\
&=\exp(-j\omega m)[(w(m)\exp(j\omega m))*x(m)]
\end{align*}
\normalsize
where $w$ is a window function and $\omega$ is the angular frequency. The STFT can thus be viewed as the modulated output of a band-pass filter. For PCANet and DCTNet, we replace the Fourier basis functions $\{\exp(j\omega m)\}$ with PCA eigenfunctions and DCT eigenfunctions, obtaining a short time PCA and a short time DCT of the signal, respectively.
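In code, one DCTNet layer is simply a sliding-window DCT. The sketch below uses a rectangular window and an arbitrary hop; framing conventions in an actual implementation may differ.

```python
import numpy as np

def dct_matrix(M):
    """Orthonormal DCT-II analysis matrix (rows are the filters)."""
    n = np.arange(M)
    D = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / M)
    return D / np.linalg.norm(D, axis=1, keepdims=True)

def short_time_dct(x, M, hop):
    """Rows index DCT frequency, columns index the time frame."""
    frames = np.array([x[i:i + M] for i in range(0, len(x) - M + 1, hop)]).T
    return dct_matrix(M) @ frames

rng = np.random.default_rng(1)
x = rng.standard_normal(512)
S = short_time_dct(x, M=32, hop=16)
```

Because the DCT matrix is orthonormal, each column of `S` preserves the energy of its (rectangular) frame.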
Plots of the short time PCA and the short time DCT (the output of the first layer) are shown in Fig.~\ref{fig:1st_convolution} for different window lengths, using the DCLDE blue whale vocalization data as an example. DCT filterbanks are a natural choice since they yield a time-frequency representation, whereas time-frequency structure and resolution may be lost when using PCA filterbanks, especially when the PCA eigenfunctions cannot be approximated by the DCT. Comparing Fig.~\ref{fig:1st_convolution}(a) and Fig.~\ref{fig:1st_convolution}(c), we can see that PCA fails to represent the time-frequency content of the signal.
\begin{figure}[!ht]
\centering
\subfigure[DCTNet with window size 256]{\includegraphics[width=.45\linewidth,height=2.8cm]{DCTNet_spectrogram_1st_w256}}
\subfigure[DCTNet with window size 20]{\includegraphics[width=.45\linewidth,height=2.8cm]{DCTNet_spectrogram_1st_w20}}
\\
\vspace{-6pt}
\centering
\subfigure[PCANet with window size 256]{\includegraphics[width=.45\linewidth,height=2.8cm]{PCANet_spectrogram_1st_w256}}
\subfigure[PCANet with window size 20]{\includegraphics[width=.45\linewidth,height=2.8cm]{PCANet_spectrogram_1st_w20}}
\vspace{-6pt}
\caption{Plots of the first layer output}
\label{fig:1st_convolution}
\end{figure}
The advantage of DCTNet over PCANet can also be seen by comparing their computational complexity. PCA is data dependent, and requires $O(M^3)$ operations to find the eigenvectors of the $M\times M$ data covariance matrix, plus an additional computational cost to perform the short time PCA convolution; for DCTNet, with the help of the FFT, we need only $O(MN\log_2 M)$ operations to obtain a short time DCT of a signal of length $N$~\cite{ng2015dctnet,wang2012introduction}.
\section{Linear frequency spectrogram}
\label{sec:lfsc}
After obtaining the first layer output, we treat each of its rows as a separate signal, and convolve each with a new PCA or DCT filterbank. We thus end up with multiple new short time PCAs/DCTs, which can capture the dynamic structure of the signal. We choose a smaller window for these filterbanks than for the first layer, so that we have a finer scale representation (the second layer) inside a coarser scale representation (the first layer). Plots of the second layer short time DCT are shown in Fig.~\ref{fig:2nd_convolution}. They are generated by taking a short time DCT with window size 20 of the first layer output obtained with window size 256, shown in Fig.~\ref{fig:1st_convolution}(a). Comparing Fig.~\ref{fig:1st_convolution}(b) with Fig.~\ref{fig:2nd_convolution}, we can see that the two-layer DCTNet reveals more detailed signal components.
\begin{figure}[!ht]
\centering
\subfigure[DCTNet 2nd layer output 1]{\includegraphics[width=.45\linewidth,height=2.8cm]{DCTNet_spectrogram_step14}}
\subfigure[DCTNet 2nd layer output 2]{\includegraphics[width=.45\linewidth,height=2.8cm]{DCTNet_spectrogram_step12}}
\vspace{-6pt}
\caption{DCTNet second layer output with window size 256 at the first layer, and window size 20 at the second layer. (a) shows the signal component at frequency step 14 of Fig.~\ref{fig:1st_convolution}(a), while (b) shows the signal component at frequency step 12 of Fig.~\ref{fig:1st_convolution}(a). }
\label{fig:2nd_convolution}
\end{figure}
The mel-frequency spectrogram, or MFSC, can be obtained by convolving the signal with a constant-Q filterbank and a low-pass filter, following the analysis in~\cite{anden2011multiscale,anden2014deep}:
\begin{align*}
Mx(t,\lambda)\approx|x*\psi_{\lambda}|^2*|\phi|^2(t),
\end{align*}
where $\psi_{\lambda}$ is a constant-Q band-pass filter, and $\phi$ is a low-pass filter. Replacing the constant-Q filterbank and the low-pass filter with a DCT filterbank and a DCT function, the average of the two-layer DCTNet output yields linear frequency spectrogram-like features, due to the linear frequency scale of the DCT filterbank. Averaging the two-layer DCTNet features can also improve deformation stability for acoustic signals.
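The whole cascade can be sketched compactly. This is illustrative only: the window sizes follow the 256/20 choice used in the experiments, while the hops and the plain mean over second-layer frames are arbitrary choices for the sketch.

```python
import numpy as np

def st_dct(x, M, hop):
    """Short time DCT: orthonormal DCT-II of sliding rectangular frames."""
    n = np.arange(M)
    D = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / M)
    D = D / np.linalg.norm(D, axis=1, keepdims=True)
    frames = np.array([x[i:i + M] for i in range(0, len(x) - M + 1, hop)]).T
    return D @ frames

rng = np.random.default_rng(2)
x = rng.standard_normal(4000)

layer1 = st_dct(x, M=256, hop=64)             # coarse time-frequency map
# Treat each first-layer row as a signal and take a finer short time DCT.
layer2 = [st_dct(row, M=20, hop=10) for row in layer1]
# Averaging second-layer magnitudes gives LFSC-like coefficients.
lfsc_like = np.array([np.abs(s).mean(axis=1) for s in layer2])
```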
\begin{figure}[!ht]
\centering
\subfigure[Music log-MFSC feature]{\includegraphics[width=.45\linewidth,height=3cm]{log_mfsc}}
\subfigure[$1^{st}$ layer scattering output]{\includegraphics[width=.45\linewidth,height=3cm]{scattering_1st}}
\\
\vspace{-6pt}
\centering
\subfigure[Music log-LFSC feature]{\includegraphics[width=.45\linewidth,height=3cm]{log_lfsc}}
\subfigure[Average $2^{nd}$ layer DCTNet]{\includegraphics[width=.45\linewidth,height=3cm]{dctnet}}
\vspace{-6pt}
\caption{Comparison of MFSC, scattering transform coefficients, LFSC and the average of DCTNet second layer output}
\label{fig:lfsc}
\end{figure}
As Fig.~\ref{fig:lfsc} shows, we use a piece of classical music from the GTZAN dataset as an example to illustrate MFSC, LFSC, and their connection with the scattering transform and DCTNet outputs. Applying triangular weight functions for the filterbanks of MFSC and LFSC, we obtain 40 MFSC and LFSC coefficients. Most of the energy of this piece is concentrated in the frequency range below 4000 Hz. The mel-scale is approximately linear at low frequencies~\cite{logan2000mel}, but logarithmic at high frequencies, since the human ear is less sensitive to high frequency signal content~\cite{olson1967music}.
\section{Experimental results}
\label{sec:cluster}
\subsection{Dataset}
We use the DCLDE 2015 conference blue whale D call data and fin whale 40 Hz call data~\cite{dclde} for our experiments. There are 851 blue whale calls and 244 fin whale calls, all sampled at 2000 Hz. Each signal lasts 5 seconds, so each has a length of 10000 samples.
\subsection{Spectral Clustering}
We use two-layer DCTNet and PCANet with window size 256 for the first layer, in order to have a better time-frequency resolution of the data, and window size 20 for the second layer, to make it comparable to MFSC and LFSC. For MFSC and LFSC, we generate the spectrogram using a Hamming window of length 256. We extract 20 coefficients from each time frame after applying the triangular weighted filterbank to the signals, and then concatenate the coefficients along the time axis. We use the MATLAB scattering transform toolbox~\cite{scattering_toolbox} with frame duration $T=125$ ms (or window size 256), and set $Q_1=12,~Q_2=1$ to better capture the acoustic signal information. In this experiment, the energy of the second layer scattering transform coefficients is very close to zero. Considering the size of the dataset and the characteristics of the features, for a fair comparison we use the first layer scattering transform coefficients, MFSC, and LFSC.
We use the Laplacian Eigenmap for dimensionality reduction of the feature sets of DCTNet, PCANet, the scattering transform, MFSC and LFSC, with three nearest neighbors to create the kernel. We examine the adjacency matrices created by the Laplacian Eigenmap, which are generated based on feature similarity. Since it is a binary classification problem, we use the Fiedler value~\cite{von2007tutorial} for spectral re-ordering. As Fig.~\ref{fig:adj_mat} shows, the two-layer DCTNet and PCANet separate the blue whale and fin whale data better than their one-layer counterparts.
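The spectral re-ordering step can be illustrated on synthetic data. The toy sketch below uses two synthetic classes, a dense Gaussian affinity in place of the 3-nearest-neighbor kernel, and the unnormalized graph Laplacian; none of these choices are the exact ones used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two synthetic "classes" standing in for the two whale call feature sets.
A = 0.5 * rng.standard_normal((20, 2))
B = 0.5 * rng.standard_normal((20, 2)) + np.array([4.0, 0.0])
X = np.vstack([A, B])
labels = np.array([0] * 20 + [1] * 20)

# Dense Gaussian affinity and unnormalized graph Laplacian L = D - W.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)
L = np.diag(W.sum(axis=1)) - W

# The eigenvector of the second smallest eigenvalue (the Fiedler vector)
# separates the two groups by sign; sorting by it re-orders the matrix
# into the block structure seen in the adjacency plots.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]
pred = (fiedler > 0).astype(int)
```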
\begin{figure}[!ht]
\centering
\subfigure[1-layer DCTNet]{\includegraphics[width=.48\linewidth,height=2.6cm]{dctnet_neighbor_graph_1layer}}
\subfigure[2-layer DCTNet]{\includegraphics[width=.48\linewidth,height=2.6cm]{dctnet_neighbor_graph_2layer}}
\\
\vspace{-6pt}
\centering
\subfigure[1-layer PCANet]{\includegraphics[width=.48\linewidth,height=2.6cm]{pcanet_neighbor_graph_1layer}} \subfigure[2-layer PCANet]{\includegraphics[width=.48\linewidth,height=2.6cm]{pcanet_neighbor_graph_2layer}}
\vspace{-6pt}
\caption{Adjacency matrices created by DCTNet and PCANet. Rows and columns 1 to 851 correspond to blue whale data, and 852 to 1095 to fin whale data.}
\label{fig:adj_mat}
\end{figure}
After obtaining features from DCTNet, PCANet, the scattering transform, MFSC and LFSC, we use the kNN classifier ($k=3$) to evaluate the separation of the data. The ROC plot is shown in Fig.~\ref{fig:rocs}. The values of the AUC (area under the ROC curve) are shown in Table~\ref{table:AUC}.
\begin{figure}[htb]
\includegraphics[height=45mm,width=85mm]{roc_mfsc_lfsc_dct_pca_scat_18.jpg}
\vspace{-6pt}
\caption{ROC comparisons}
\label{fig:rocs}
\end{figure}
\begin{table}[h!]
\caption{Area Under the Curves (AUCs) of different feature sets}
\label{table:AUC}
\centering
\begin{tabular}{cc}
\hline\hline
Feature set & AUC \\
\hline
DCTNet $2^{nd}$ layer & 0.9513 \\
PCANet $2^{nd}$ layer & 0.9258 \\
DCTNet $1^{st}$ layer & 0.9200 \\
PCANet $1^{st}$ layer & 0.9079 \\
Scattering transform & 0.9275 \\
MFSC & 0.9283\\
LFSC & 0.9225
\\ [1ex]
\hline
\end{tabular}
\end{table}
We use 5-fold cross-validation to generate the plot, that is, 80\% of the blue whale and fin whale vocalizations for training and the rest for testing. We calculate the distance of each test point to the training points, make a decision based on its three nearest neighbors, and compare with the ground truth to obtain the true positive and false positive rates. To generate the ROC curve, we sweep the decision threshold, obtaining the corresponding probability of false alarm ($P_F$) over the range 0 to 1 and the associated probability of detection ($P_D$). Since the number of blue whale and fin whale vocalizations available for testing is relatively small, the classification results are promising but preliminary.
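For reference, the AUC reported in Table~\ref{table:AUC} can be computed directly from classifier scores via its pairwise (Mann-Whitney) form. This is a generic sketch, not the code used for the experiments; the score values below are hypothetical.

```python
import numpy as np

def auc(scores, labels):
    """AUC as P(score_pos > score_neg), counting ties as 1/2."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical kNN-style scores: fraction of positive votes among k = 3.
scores = np.array([1.0, 2 / 3, 2 / 3, 1 / 3, 0.0, 1 / 3])
labels = np.array([1, 1, 0, 0, 0, 1])
```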
\section{Conclusion}
\label{sec:conclude}
In this paper, we apply PCANet and DCTNet to acoustic signal classification. We have shown that each layer of PCANet and DCTNet is essentially a short time PCA/DCT, revealing different time-frequency content of the signal. The computational cost of DCTNet is smaller than that of PCANet. DCTNet can generate linear frequency spectral coefficient-like features, and improves the classification rate for whale vocalizations, which is useful for underwater acoustics and potentially other applications.
\bibliographystyle{IEEEbib}
\subsection{The distance between the invariant manifolds of \texorpdfstring{$L_3$}{L3}}
The one dimensional unstable and stable invariant manifolds of $L_3$ have two branches each (see Figure~\ref{fig:L3Outer}).
One pair circumvents $L_5$, which we denote by $W^{\unstable,+}(\mu)$ and $W^{\stable,+}(\mu)$, and the other, $W^{\unstable,-}(\mu)$ and $W^{\stable,-}(\mu)$, circumvents $L_4$.
Since the Hamiltonian system associated to the Hamiltonian $\HInicial$ is reversible with respect to the involution
\begin{equation*}
\Phi(q,p;t)=(q_1,-q_2,-p_1,p_2;-t),
\end{equation*}
the $+$ branches of the invariant manifolds are symmetric with respect to the $-$ branches. Thus, we restrict our analysis to the positive branches.
To measure the distance between $W^{\unstable/\stable,+}(\mu)$, we consider the symplectic polar change of coordinates
\begin{align}\label{def:changePolars}
q=
r \begin{pmatrix}
\cos \theta \\
\sin \theta
\end{pmatrix},
\qquad
p =
R
\begin{pmatrix}
\cos \theta \\
\sin \theta
\end{pmatrix}
- \frac{G}{r} \begin{pmatrix}
\sin \theta \\
-\cos \theta
\end{pmatrix},
\end{align}
where
$R$ is the radial linear momentum and $G$ is the angular momentum.
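Let us note that this change of coordinates is indeed symplectic: writing $e_r=(\cos \theta, \sin \theta)^T$ and $e_\theta=(-\sin \theta, \cos \theta)^T$, one has $p = R\, e_r + \frac{G}{r}\, e_\theta$ and $dq = e_r\, dr + r\, e_\theta\, d\theta$, so that
\[
p \cdot dq = R\, dr + G\, d\theta,
\]
that is, $(r,R)$ and $(\theta,G)$ are conjugate pairs.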
We consider the $3$-dimensional section
\[
\Sigma = \claus{(r,\theta,R,G) \in \mathbb{R} \times \mathbb{T} \times \mathbb{R}^2
\;:\; r>1, \, \theta=\frac{\pi}2 \,}
\]
and denote by $(r^{\unstable}_*,\frac{\pi}2, R^{\unstable}_*,G^{\unstable}_*)$ and $(r^{\stable}_*,\frac{\pi}2,R^{\stable}_*,G^{\stable}_*)$ the first crossing of the invariant manifolds with this section.
The next theorem measures the distance between these points for $0< \mu\ll 1$.
\begin{theorem}\label{theorem:mainTheorem}
There exists $\mu_0>0$ such that, for $\mu \in (0,\mu_0)$,
\[
\norm{(r^{\unstable}_*,R^{\unstable}_*,G^{\unstable}_*)-(r^{\stable}_*,R^{\stable}_*,G^{\stable}_*)}
=
\sqrt[3]{4} \,
\mu^{\frac13} e^{-\frac{A}{\sqrt{\mu}}}
\boxClaus{\vabs{\CInn}+\mathcal{O}\paren{\frac1{\vabs{\log \mu}}}},
\]
where:
\begin{itemize}
\item The constant $A>0$ is the real-valued integral
\begin{equation}\label{def:integralA}
A= \int_0^{\frac{\sqrt{2}-1}{2}} \frac{2}{1-x}\sqrt\frac{x}{3(x+1)(1-4x-4x^2)} dx\approx 0.177744.
\end{equation}
\item The constant $\CInn \in \mathbb{C}$ is the Stokes constant associated to the inner equation analyzed in \cite{articleInner} and in Theorem \ref{theorem:innerComputations} below.
\end{itemize}
\end{theorem}
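As an independent sanity check of the numerical value in \eqref{def:integralA} (a Python computation, external to the proof), one can remove the inverse square-root singularity of the integrand at the upper endpoint using the factorization $1-4x-4x^2=4(b-x)(x-c)$, with $b=\frac{\sqrt{2}-1}{2}$ and $c=-\frac{\sqrt{2}+1}{2}$, via the substitution $x=b-u^2$, and then apply a standard composite Simpson rule:

```python
import math

b = (math.sqrt(2) - 1) / 2
c = -(math.sqrt(2) + 1) / 2

def g(u):
    # Integrand of A after the substitution x = b - u^2 (exact algebra:
    # 1 - 4x - 4x^2 = 4 u^2 (x - c), so the 1/u factor cancels with dx).
    x = b - u * u
    if x <= 0.0:
        return 0.0
    return 2.0 / (1.0 - x) * math.sqrt(x / (3.0 * (x + 1.0) * (x - c)))

def simpson(f, a, bnd, n):
    # Composite Simpson rule with n (even) subintervals.
    h = (bnd - a) / n
    s = f(a) + f(bnd)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

A = simpson(g, 0.0, math.sqrt(b), 200_000)
```

The computed value agrees with $A\approx 0.177744$.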
\begin{remark}\label{remark:sectiontheta}
We can prove the same result for any section
\[
\Sigma(\theta_*) = \claus{(r,\theta,R,G)
\in \mathbb{R} \times \mathbb{T} \times \mathbb{R}^2
\;:\; r>1, \, \theta=\theta_* \,},
\]
with $\theta_* \in
(0,\theta_0)$ and $\theta_0=\arccos\paren{\frac12-\sqrt2}$ (the value of $\mu_0$ depends on how close to the endpoints of the interval $\theta_*$ is).
The section $\theta=\theta_0$ is close to the ``turning point'' of the invariant manifolds (see Figure \ref{fig:L3Outer}).
\end{remark}
The constant $A$ in \eqref{def:integralA} is derived from the values of the complex singularities of the separatrix of a certain integrable averaged system, which is studied in the prequel paper \cite{articleInner}.
The results obtained in \cite{articleInner} about this separatrix are summarized in Theorem \ref{theorem:singularities} below.
The origin of the constant $\CInn$ appearing in Theorem \ref{theorem:mainTheorem} is explained in Theorem \ref{theorem:innerComputations}, which analyzes the so-called inner equation. This theorem is also proven in \cite{articleInner}.
Moreover, in that paper it is shown, by a numerical computation, that $\vabs{\CInn}\approx 1.63$. We expect that one should be able to prove that $\vabs{\CInn}\neq 0$ by means of rigorous computer computations (see \cite{BCGS21}). Note that $\vabs{\CInn}\neq 0$ implies that there are no primary (i.e. one round) homoclinic orbits to $L_3$.
A fundamental problem in dynamical systems is to prove whether a given model has chaotic dynamics (for instance a Smale horseshoe).
For many physically relevant models this is usually remarkably difficult. This is the case for many Celestial Mechanics models, where most of the known chaotic motions have been found in nearly integrable regimes in which an unperturbed problem already presents some form of ``hyperbolicity''. This happens, for instance, in the vicinity of collision orbits (see for example \cite{Moe89, BolMac06, Bol06, Moe07}) or close to parabolic orbits (which allow the construction of chaotic/oscillatory motions), see~\cite{Sitnikov1960, Alekseev1976, LlibSim80, Moser2001, GMS16, GMSS17, GPSV21}.
There are also several results in regimes far from integrable which rely on computer assisted proofs \cite{Ari02, WilcZgli03, Cap12, GierZgli19}. The problem tackled in this paper and \cite{articleInner} is radically different. Indeed, if one takes the limit $\mu\to 0$ in \eqref{def:hamiltonianInitialNotSplit} one obtains the classical integrable Kepler problem in the elliptic regime, where no hyperbolicity is present. Instead, the (weak) hyperbolicity is created by the $\mathcal{O}(\mu)$ perturbation, which can be captured considering an integrable averaged Hamiltonian along the $1:1$ mean motion resonance\footnote{The $1:1$ averaged Hamiltonian has been also studied to obtain ``good'' approximations for the global dynamics in the $1:1$ resonant zone, see for example \cite{RNP16, AlePou21} and the references therein.}.
One of the classical methods to construct chaotic dynamics is the Smale-Birkhoff homoclinic theorem by proving the existence of transverse homoclinic orbits to invariant objects, most commonly, periodic orbits.
Certainly the breakdown of homoclinic orbits to the critical point $L_3$ given by Theorem~\ref{theorem:mainTheorem} does not lead to the existence of chaotic orbits. However, one should expect that Theorem~\ref{theorem:mainTheorem} implies that there exist Lyapunov periodic orbits exponentially close to $L_3$ whose stable and unstable invariant manifolds intersect transversally. This would create chaotic motions ``exponentially close'' to $L_3$ and its invariant manifolds (see \cite{articleChaos}).
As already mentioned, Theorem \ref{theorem:mainTheorem} rules out the existence of primary homoclinic connections to $L_3$ in the RPC$3$BP for $0< \mu\ll 1$. However, it does not prevent the existence of multiround homoclinic orbits, that is, homoclinic orbits which pass close to $L_3$ multiple times.
It has been conjectured (see for instance~\cite{BMO09}, where the authors analyze this problem numerically) that multi-round homoclinic connections to $L_3$ should exist for a sequence of values $\claus{\mu_k}_{k \in \mathbb{N}}$ satisfying $\mu_k\to 0$ as $k \to \infty$.
\paragraph{A first step towards proving Arnold diffusion along the $1:1$ mean motion resonance in the $3$-Body Problem?}
Consider the $3$-Body Problem in the planetary regime, that is one massive body (the Sun) and two small bodies (the planets) performing approximate ellipses (including the ``Restricted limit'' when one of planets has mass zero). A fundamental problem is to assert whether such configuration is stable (i.e. is the Solar system stable?). Thanks to Arnold-Herman-F\'ejoz KAM Theorem, many of such configurations are stable, see \cite{Arnold63,Fejoz04}. However, it is widely expected that there should be strong instabilities created by Arnold diffusion mechanisms (as conjectured by Arnold in \cite{Arnold64}). In particular, it is widely believed that one of the main sources of such instabilities dynamics are the mean motion resonances, where the period of the two planets is resonant (i.e. rationally dependent) \cite{FGKR16}.
The RPC$3$BP has too low dimension (2 degrees of freedom) to possess Arnold diffusion. However, since it can be seen as a first order approximation of higher dimensional models, the analysis performed in this paper can be seen as a humble first step towards constructing Arnold diffusion in the $1:1$ mean motion resonance. In this resonance, the RPC$3$BP has a normally hyperbolic invariant manifold given by the center manifold of the Lagrange point $L_3$. This normally hyperbolic invariant manifold is foliated by the classical Lyapunov periodic orbits. One should expect that the techniques developed in the present paper would allow one to prove that the invariant manifolds of these periodic orbits intersect transversally within the corresponding energy level of \eqref{def:hamiltonianInitialNotSplit}. Still, this is a much harder problem than the one considered in this paper and the technicalities involved would be considerable.
This transversality would not lead to Arnold diffusion due to the low dimension of the RPC3BP. However, if one considers either the Restricted Spatial Circular $3$-Body Problem with small $\mu>0$ which has three degrees of freedom, the Restricted Planar Elliptic $3$-Body Problem with small $\mu>0$ and eccentricity of the primaries $e_0>0$, which has two and a half degrees of freedom, or the ``full'' planar $3$-Body Problem (i.e. all three masses positive, two small) which has three degrees of freedom (after the symplectic reduction by the classical first integrals) one should be able to construct orbits with a drastic change in angular momentum (or inclination in the spatial setting).
In the Restricted Planar Elliptic $3$-Body Problem the change of angular momentum would imply the transition of the zero mass body orbit from a close to circular ellipse to a more eccentric one. In the full 3BP, due to total angular momentum conservation, the angular momentum would be transferred from one body to the other changing both osculating ellipses.
This behavior would be analogous to that of \cite{FGKR16} for the $3:1$ and $1:7$ resonances. In that paper, the transversality between the invariant manifolds of the normally hyperbolic invariant manifold was checked numerically for the realistic Sun-Jupiter mass ratio $\mu=10^{-3}$.
Arnold diffusion instabilities have been analyzed numerically for the Restricted Spatial Circular $3$-Body Problem in \cite{SSST14}.
\subsection{The strategy to prove Theorem \ref{theorem:mainTheorem}}
The main difficulty in proving Theorem \ref{theorem:mainTheorem} is that the
distance between the stable and unstable manifolds of $L_3$ is exponentially small with respect to $\sqrt\mu$ (this is also usually known as a \emph{beyond all orders} phenomenon). This implies that the classical Melnikov Method \cite{GuckenheimerHolmes} to detect the breakdown of homoclinics cannot be applied.
To prove Theorem \ref{theorem:mainTheorem}, we follow the strategy of exponentially small splitting of separatrices (already outlined in \cite{articleInner}) which goes back to the seminal work by Lazutkin \cite{Laz84, Laz05}. See \cite{articleInner} for a list of references on the recent developments in the field of exponentially small splitting of separatrices. In particular, we follow similar strategies of those in \cite{BFGS12,BCS13}.
In the present work the first order of the difference between manifolds is not given by the Melnikov function.
Instead, we must derive and analyze an inner equation which provides the dominant term of this distance. As a consequence, we need to ``match'' (i.e. compare) certain solutions of the inner equation with the parameterizations of the perturbed invariant manifolds.
The first part of the proof, that was completed in the prequel \cite{articleInner}, dealt with the following steps:
\begin{enumerate}[label*=\Alph*.]
\item
We perform a change of coordinates to capture the slow-fast dynamics of the system.
%
The first order of the new Hamiltonian has a saddle point with an homoclinic connection (also known as separatrix) and a fast harmonic oscillator.
%
%
\item We study the analytical continuation of the time-parametrization of the separatrix of this first order.
%
In particular, we obtain its maximal strip of analyticity and the singularities at the boundary of this strip.
%
%
\item We derive the inner equation.
\item We study two special solutions which will be ``good approximation'' of the perturbed invariant manifolds near the singularities of the unperturbed separatrix (see Step F below).
%
\end{enumerate}
The remaining steps necessary to complete the proof of Theorem~\ref{theorem:mainTheorem} are the following:
\begin{enumerate}[label*=\Alph*.]
%
\item[E.] We prove the existence of the analytic continuation of the parametrizations of the invariant manifolds of $L_3$, $W^{\unstable,+}(\delta)$ and $W^{\stable,+}(\delta)$, in an appropriate complex domain called boomerang domain.
%
This domain contains a segment of the real line and intersects a sufficiently small neighborhood of the singularities of the unperturbed separatrix.
%
%
\item[F.] By using complex matching techniques, we show that, close to the singularities of the unperturbed separatrix, the solutions of the inner equation obtained in Step D are ``good approximations'' of the parameterizations of the perturbed invariant manifolds obtained in Step E.
%
%
\item[G.] We obtain an asymptotic formula for the difference between the perturbed invariant manifolds by proving that the dominant term comes from the difference between the solutions of the inner equation.
%
%
\end{enumerate}
The structure of this paper goes as follows.
In Section \ref{section:introductionPoincare} we perform the change of coordinates introduced in Step A and state Theorem \ref{theorem:mainTheoremPoincare}, which is a reformulation of Theorem \ref{theorem:mainTheorem} in this new set of variables.
Then, in Section \ref{section:resultsOuter}, we state the results concerning Steps B, C and D above (which are proven in \cite{articleInner}) and we carry out Steps E, F and G. These steps lead to the proof of Theorem \ref{theorem:mainTheoremPoincare}.
Sections \ref{section:proofH-existence} and \ref{section:proofG-matching} are devoted to proving the results in Section \ref{section:resultsOuter} which concern Steps E and F.
\section{Introduction}
\label{section:introduction}
\input{introductionOuter}
\section{A singular formulation of the problem}
\label{section:introductionPoincare}
\input{introductionPoincare}
\subsection{Proof of Theorem~\ref{theorem:mainTheorem}}
\label{subection:undoChanges}
\input{undoChanges}
\section{Proof of Theorem~\ref{theorem:mainTheoremPoincare}}
\label{section:resultsOuter}
\input{resultsIntro}
\subsection{Analytical continuation of the unperturbed separatrix} \label{section:singularitiesOuter}
\input{singularitiesOuter}
\subsection{The perturbed invariant manifolds}
\label{section:outer}
\input{outer}
\subsubsection{Analytic extension of the stable and unstable manifolds}
\label{subsection:outerBasic}
\input{outerGraph}
\subsubsection{Further analytic extension of the unstable manifold}
\label{subsection:outerExtension}
\input{outerGlobal}
\subsection{A first order of the invariant manifolds near the singularities}
\label{section:differenceInner}
\input{differenceInner}
\subsubsection{The inner equation}
\label{subsection:innerHeuristics}
\input{innerHeuristics}
\subsubsection{Complex matching estimates} \label{subsection:matching}
\input{matching}
\subsection{The asymptotic formula for the difference}
\label{section:difference}
\input{difference}
\input{proofF-Difference}
\section{The perturbed invariant manifolds}
\label{section:proofH-existence}
\input{proofH-existence}
\subsection{The invariant manifolds in the infinity domain}
\label{subsection:proofH-existenceInfinite}
\input{proofH-existenceInfinite}
\subsection{The invariant manifolds in the outer domain}
\label{subsection:proofH-existenceBounded}
\input{proofH-existenceOuter}
\subsection{Switching to the time-parametrization}
\label{subsection:proofH-changeuOut}
\input{proofH-existenceChangeuOut}
\subsection{Extending the time-parametrization}
\label{subsection:proofH-existenceFlow}
\input{proofH-existenceFlow}
\subsection{Back to a graph parametrization}
\label{subsection:proofH-changevOut}
\input{proofH-existenceChangevOut}
\section{Complex matching estimates}
\label{section:proofG-matching}
\input{proofG-matching}
\subsubsection{Perturbed invariant manifolds as a graph}
Since we measure the distance between the invariant manifolds in the
section $\lambda=\lambda_*$ (see Theorem \ref{theorem:mainTheoremPoincare}),
we parameterize them as graphs with respect to $\lambda$ (whenever possible) or,
more conveniently,
with respect
to the independent variable $u$ defined by $\lambda=\lambda_h(u)$.
To define these suitable parameterizations we first translate the
equilibrium point $\Ltres(\delta)$ to $\mathbf{0}$ by the change of coordinates
\begin{equation}\label{def:changeEqui}
\phi_{\equi}:(\lambda,\Lambda,x,y) \mapsto (\lambda,\Lambda,x,y) + \Ltres(\delta).
\end{equation}
Second, we consider the symplectic change of coordinates
\begin{equation}\label{def:changeOuter}
\phi_{\out}:(u,w,x,y) \to (\lambda,\Lambda,x,y),
\quad
{\lambda}= \lambda_h(u), \hspace{5mm} {\Lambda}= \Lambda_h(u) - \frac{w}{3\Lambda_h(u)}.
\end{equation}
We refer to $(u,w,x,y)$ as the \emph{separatrix coordinates}.
Let us remark that $\phi_{\out}$ is not defined for $u=0$ since $\Lambda_h(0)=0$ (see Theorem \ref{theorem:singularities}).
We deal with this fact later when considering the domain of definition for $u$.
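The change $\phi_{\out}$ is indeed symplectic in the $(\lambda,\Lambda)$-plane, as can be checked directly: using that the separatrix satisfies $\dot{\lambda}_h=-3\Lambda_h$ (see \eqref{eq:separatrixParametrization}), and noting that the terms proportional to $du \wedge du$ vanish,
\begin{align*}
d\lambda \wedge d\Lambda
= \dot{\lambda}_h(u)\, du \wedge \paren{-\frac{dw}{3\Lambda_h(u)}}
= -\frac{\dot{\lambda}_h(u)}{3\Lambda_h(u)}\, du \wedge dw
= du \wedge dw,
\end{align*}
while the $(x,y)$ components are left unchanged.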
After these changes of variables, we look for the perturbed invariant manifolds
as a graph with respect to $u$.
In other words, we look for functions
\[
\zdOut(u) = \left(\wdOut(u),\xdOut(u),\ydOut(u)\right)^T,
\quad \text{ for } \diamond=\unstable,\stable,
\]
such that the
invariant manifolds given in Proposition~\ref{proposition:HamiltonianScaling}
can be expressed as
\begin{equation}\label{eq:invariantManifoldsExpression}
\mathcal{W}^{\diamond}(\delta)= \left\{ \paren{\lambda_h(u), \Lambda_h(u)-\frac{\wdOut(u)}{3\Lambda_h(u)}, \xdOut(u), \ydOut(u)} + \Ltres(\delta)\right\}, \quad \text{for } \diamond=\unstable,\stable,
\end{equation}
with $u$ belonging to an appropriate domain contained in $\Pi^{\mathrm{ext}}_{A, \betaBow}$ (see \eqref{def:dominiBow}).
The graphs $\zuOut$ and $\zsOut$
must satisfy the asymptotic conditions
\begin{equation}\label{eq:asymptoticConditionsOuter}
\begin{split}
\lim_{\Re u \to -\infty} \left(\frac{\wuOut(u)}{\Lambda_h(u)},\xuOut(u), \yuOut(u) \right) =
\lim_{\Re u \to +\infty} \left(\frac{\wsOut(u)}{\Lambda_h(u)},\xsOut(u), \ysOut(u) \right) = 0.
\end{split}
\end{equation}
\begin{remark}\label{remark:realAnalytic}
Since the Hamiltonian $H$ is real-analytic in the sense that $\conj{H(\lambda,\Lambda,x,y;\delta)}=H(\conj{\lambda},\conj{\Lambda},y,x;\conj{\delta})$ (see Proposition \ref{proposition:HamiltonianScaling}),
we say that $\zOut(u)=(\wOut(u),\xOut(u),\yOut(u))^T$ is real-analytic if it satisfies
\begin{align*}
\wOut(\conj{u}) = \conj{\wOut(u)}, \qquad
\xOut(\conj{u}) = {\yOut(u)}, \qquad
\yOut(\conj{u}) = {\xOut(u)}.
\end{align*}
\end{remark}
The classical way to study exponentially small splitting of separatrices in this setting is to look for solutions $\zuOut$ and $\zsOut$ in a common complex domain containing a segment of the real line and intersecting an $\mathcal{O}(\delta^2)$ neighborhood of the singularities $u=\pm iA$ of the separatrix.
Recall that the invariant manifolds cannot be expressed as a graph in a neighborhood of $u=0$.
To overcome this technical problem, we find solutions $\zuOut$ and $\zsOut$ defined in a complex domain, which we call \emph{boomerang domain} due to its shape (see Figure~\ref{fig:dominiBoomerang}).
\begin{figure}[t]
\centering
\begin{overpic}[scale=1]{DominiBoomerang.png}
\put(60,31){$\DBoomerang$}
\put(19,29.5){\footnotesize $\betaOutA$}
\put(48.5,29.5){\footnotesize $\betaOutB$}
\put(42,49.5){\footnotesize $iA$}
\put(52,44.5){\footnotesize $i(A-\kappa \delta^2)$}
\put(39.5,4){\footnotesize $-iA$}
\put(41,35){\footnotesize $\dBoomerang A$}
\put(102,27){\footnotesize$\Re u$}
\put(45,58){\footnotesize$\Im u$}
\end{overpic}
\bigskip
\caption{The boomerang domain $\DBoomerang$ defined in~\eqref{def:dominiBoomerang}.}
\label{fig:dominiBoomerang}
\end{figure}
Namely,
\begin{equation}\label{def:dominiBoomerang}
\begin{split}
\DBoomerang = \left\{ u \in \mathbb{C} \right. \;:\;
&\vabs{\Im u} < A - \kappa \delta^2 + \tan \betaOutA \Re u,
\vabs{\Im u} < A - \kappa \delta^2 - \tan \betaOutA \Re u,
\\
&\vabs{\Im u} > \left. \dBoomerang A - \tan \betaOutB \Re u
\right\},
\end{split}
\end{equation}
where $\kappa>0$ is such that $A-\kappa \delta^2>0$,
$\betaOutA$ is the constant given in Theorem~\ref{theorem:singularities}
and $\betaOutB \in [\betaOutA,\frac{\pi}{2})$ and $\dBoomerang \in
(\frac{1}{4},\frac{1}{2})$
are independent of $\delta$.
\begin{theorem}\label{theorem:existence}
Fix a constant $\dBoomerang \in (\frac{1}{4},\frac{1}{2})$.
Then, there exist $\delta_0, \kappaBoomerang>0$ such that,
for $\delta \in (0,\delta_0)$ and
$\kappa\geq\kappaBoomerang$,
the graph parameterizations $\zuOut$ and $\zsOut$ introduced
in~\eqref{eq:invariantManifoldsExpression} can be extended real-analytically to
the domain $\DBoomerang$.
Moreover, there exists a real constant $\cttOuterA>0$ independent of $\delta$ and $\kappa$ such that, for $u \in \DBoomerang$, we have that
\begin{align*}
|\wdOut(u)| \leq \frac{\cttOuterA\delta^2}
{\vabs{u^2 + A^2}}
+ \frac{\cttOuterA\delta^4}{\vabs{u^2 + A^2}^{\frac{8}{3}}}, \quad
|\xdOut(u)| \leq \frac{\cttOuterA\delta^3}
{\vabs{u^2 + A^2}^{\frac{4}{3}}}, \quad
|\ydOut(u)| \leq \frac{\cttOuterA\delta^3}
{\vabs{u^2 + A^2}^{\frac{4}{3}}}.
\end{align*}
\end{theorem}
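Let us illustrate how these bounds behave close to the singularities, which is where they will be used: at the point $u_+=i(A-\kappa\delta^2)$ one has $\vabs{u_+^2+A^2}=\vabs{u_+-iA}\,\vabs{u_++iA}=\kappa\delta^2(2A-\kappa\delta^2)$, and therefore Theorem \ref{theorem:existence} yields
\begin{align*}
|\xdOut(u_+)|, \, |\ydOut(u_+)| \leq
\frac{\cttOuterA\,\delta^3}{\paren{\kappa\delta^2(2A-\kappa\delta^2)}^{\frac43}}
\leq \frac{C\,\delta^{\frac13}}{\kappa^{\frac43}},
\end{align*}
for some constant $C>0$ depending only on $A$. This is the $\delta^{\frac13}$ size that reappears later in the analysis of the difference between the manifolds near the singularities.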
Notice that the asymptotic conditions \eqref{eq:asymptoticConditionsOuter} do
not have any meaning in the domain $\DBoomerang$ since it is bounded.
Therefore, to prove the existence of $\zuOut$ and $\zsOut$ in $\DBoomerang$ one has to start with different domains where these asymptotic conditions make sense and then find a way to extend them real-analytically to $\DBoomerang$.
We describe the details of this process in Sections \ref{subsection:outerBasic} and \ref{subsection:outerExtension}.
\subsubsection{An integral equation for $\dzHatU$}
\subsubsection{End of the proof of Theorem \ref{theorem:mainTheoremPoincare}}
We look for $\dzHatU$ as the unique solution of an integral equation.
Since $\dzHat$ satisfies~\eqref{eq:invariantEquationDifference3}, by the variation of constants formula
\begin{align}\label{eq:dzOutDiff}
\dzHat(u) =
\begin{pmatrix}
c_x m_x(u)\\
c_y m_y(u)
\end{pmatrix}
+
\begin{pmatrix}
\displaystyle m_x(u)
\int_{u_-}^{u} m_x^{-1}(s) \,
\pi_1 \paren{{\mathcal{B}}^{\spl}(s) \dzHat(s)} ds \\
\displaystyle m_y(u)
\int_{u_+}^{u} m_y^{-1}(s) \,
\pi_2 \paren{{\mathcal{B}}^{\spl}(s) \dzHat(s)} ds
\end{pmatrix},
\end{align}
where $\mathcal{M}(u)$ is the fundamental matrix \eqref{def:fundamentalMatrixDifference}, $s$ belongs to some integration path in $\DBoomerang$ and $c_x$ and $c_y$ are defined as
\begin{align}\label{def:cxcyDifference}
c_x = \dxOut(u_-) m_x^{-1}(u_-), \qquad
c_y = \dyOut(u_+) m_y^{-1}(u_+).
\end{align}
For $k_1, k_2 \in \mathbb{C}$, we define
\begin{equation}\label{def:operatorIIdiff}
\mathcal{I}[k_1,k_2](u) =
\big(k_1 \, m_x(u), k_2 \, m_y(u)\big)^T,
\end{equation}
and the operator
\begin{align}\label{def:operatorEEdiff}
\mathcal{E}[\phiA](u) =
\begin{pmatrix}
\displaystyle
m_x(u) \int_{u_-}^{u} m_x^{-1}(s) \,
\pi_1 \paren{{\mathcal{B}}^{\spl}(s) \phiA(s)} ds \\
\displaystyle
m_y(u) \int_{u_+}^{u} m_y^{-1}(s) \,
\pi_2 \paren{{\mathcal{B}}^{\spl}(s) \phiA(s)} ds
\end{pmatrix}.
\end{align}
Then, with this notation, $\dzHatO = \mathcal{I}[c_x^0,c_y^0]$ (see \eqref{def:dzHatO}) and equation~\eqref{eq:dzOutDiff} is equivalent to $\dzHat = \mathcal{I}[c_x,c_y]+\mathcal{E}[\dzHat]$.
Since $\mathcal{E}$ is a linear operator, $\dzHatU =
\dzHat-\dzHatO$ satisfies
\begin{equation}\label{eq:dzOutU}
\dzHatU(u) =
\mathcal{I}[c_x-c_x^0, c_y-c_y^0](u) +
\mathcal{E}[\dzHatO](u) + \mathcal{E}[\dzHatU](u).
\end{equation}
To obtain estimates for $\dzHatU$, we first prove that $\mathrm{Id}-\mathcal{E}$ is invertible in the Banach space $\XSplTotal= \XSpl \times \XSpl$, with
\begin{align*}
\XSpl = \left\{ \phiA: \DBoomerang \to \mathbb{C} \;:\; \normSpl{\phiA}
= \sup_{u\in \DBoomerang} \vabs{e^{\frac{A-\vabs{\Im u}}{\delta^2}}
\phiA(u)}
<+\infty \right\},
\end{align*}
endowed with the norm
\begin{align}\label{def:normexp}
\normSplTotal{\phiA} =
\normSpl{\phiA_1} + \normSpl{\phiA_2},
\end{align}
for $\phiA=(\phiA_1,\phiA_2)$.
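Let us point out that finiteness of this norm already encodes the expected exponential smallness on the real line: by the definition of $\normSpl{\cdot}$, for any $\phiA \in \XSpl$ and $u \in \DBoomerang$,
\begin{align*}
\vabs{\phiA(u)} \leq \normSpl{\phiA}\, e^{-\frac{A-\vabs{\Im u}}{\delta^2}},
\qquad \text{and, in particular,} \qquad
\vabs{\phiA(u)} \leq \normSpl{\phiA}\, e^{-\frac{A}{\delta^2}}
\quad \text{for } u \in \DBoomerang \cap \mathbb{R}.
\end{align*}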
Therefore, to prove Theorem \ref{theorem:mainTheoremPoincare} it is enough to see that $\dzHatU$ satisfies $\normSplTotal{\dzHatU} \leq C\delta^{\frac13}\vabs{\log \delta}^{-1}$.
First, we state a lemma whose proof is postponed to Appendix \ref{subappendix:proofH-technicalFirst}.
\begin{lemma}\label{lemma:boundsBspl}
Let $\kappaBoomerang, \delta_0$ be the constants given in Theorem \ref{theorem:existence}.
Then, there exists a constant $C>0$ such that, for $\kappa\geq\kappaBoomerang$, $\delta \in(0,\delta_0)$ and $u \in \DBoomerang$,
the function $\Upsilon$ in \eqref{def:operatorPPdifference},
the matrix $\mathcal{B}^{\spl}$ in \eqref{def:operatorsDifferenceAABB} and the functions $B_x$, $B_y$ in \eqref{def:mxmyalxaly} satisfy
\begin{align}
&\vabs{\Upsilon_1(u)-1}\leq \frac{C}{\kappa^2}, \qquad
\vabs{\Upsilon_2(u)}\leq \frac{C\delta}{\vabs{u^2+A^2}^{\frac{4}{3}}}, \qquad
\vabs{\Upsilon_3(u)}\leq \frac{C\delta}{\vabs{u^2+A^2}^{\frac{4}{3}}}, \label{proof:boundsUpsilon}
\\[0.4em]
{C}^{-1} &\leq \vabs{B_*(u)} \leq C,
\quad *=x,y,
\quad \text{and} \quad
|{\mathcal{B}}^{\spl}_{i,j}(u)| \leq
\frac{C \, \delta^2}{\vabs{u^2 + A^2}^{2}},
\quad i,j \in \claus{1,2}. \nonumber
\end{align}
\end{lemma}
In the next lemma we obtain estimates for the linear operator $\mathcal{E}$ (see \eqref{def:operatorEEdiff}).
\begin{lemma}\label{lemma:operatorEEdiff}
Let $\kappaBoomerang, \delta_0$ be the constants given in Theorem \ref{theorem:existence}.
There exists $\cttDiffA>0$ such that, for $\delta\in(0,\delta_0)$ and
$\kappa\geq \kappaBoomerang$,
the operator $\mathcal{E}: \XSplTotal \to \XSplTotal$ in \eqref{def:operatorEEdiff} is well defined and satisfies, for $\phiA \in \XSplTotal$,
\begin{align*}
\normSplTotal{\mathcal{E}[\phiA]} \leq \frac{\cttDiffA}{\kappa}
\normSplTotal{\phiA}.
\end{align*}
In particular, $\mathrm{Id} - \mathcal{E}$ is invertible and
\begin{align*}
\normSplTotal{(\mathrm{Id} - \mathcal{E})^{-1}[\phiA]} \leq 2\normSplTotal{\phiA}.
\end{align*}
\end{lemma}
\begin{proof}
Let us consider $\mathcal{E}=(\mathcal{E}_1,\mathcal{E}_2)^T$, $\phiA \in \XSplTotal$ and $u \in \DBoomerang$.
We only prove the estimate for $\mathcal{E}_2[\phiA](u)$.
The corresponding one for $\mathcal{E}_1[\phiA](u)$ follows analogously.
By the definition of $m_y$ in~\eqref{def:mxmyalxaly} and Lemma \ref{lemma:boundsBspl}, we have that
\begin{align*}
\vabs{\mathcal{E}_2[\phiA](u)}
&\leq
C \delta^2 e^{\frac{\Im u}{\delta^2}}
\vabs{\int_{u_+}^{u}
e^{-\frac{\Im s}{\delta^2}}
\frac{
\vabs{\phiA_1(s)}+\vabs{\phiA_2(s)}
}{\vabs{s^2+A^2}^2} d s }
\\
&\leq
C \delta^2 e^{\frac{\Im u - A}{\delta^2}}
\normSplTotal{\phiA}
\vabs{\int_{u_+}^{u}
e^{\frac{\vabs{\Im s}-\Im s}{\delta^2}}
\frac{d s}{\vabs{s^2+A^2}^2}}.
\end{align*}
Let us consider the case $\Im u < 0$. Then, for a fixed $u_0 \in \mathbb{R} \cap \DBoomerang$, we define the integration path $\rho_t \subset \DBoomerang$ as
\begin{align*}
\rho_t =
\begin{cases}
u_+ + 2t(u_0-u_+)
&\quad \text{for } t \in (0,\frac12), \\
u_0 + (2t-1)(u-u_0)
&\quad \text{for } t \in [\frac12,1).
\end{cases}
\end{align*}
Then,
\begin{align*}
\vabs{\mathcal{E}_2[\phiA](u)}
&\leq
C \delta^2 e^{-\frac{\vabs{\Im u}+A}{\delta^2}}
\normSplTotal{\phiA}
\vabs{\int_{0}^{\frac12} \frac{dt}{\vabs{\rho_t-iA}^2}
+ \int_{\frac12}^{1}
\frac{e^{\frac{2\vabs{\Im \rho_t}}{\delta^2}}}{\vabs{\rho_t+iA}^2} dt}
\leq \frac{C}{\kappa}
e^{\frac{\vabs{\Im u}-A}{\delta^2}}
\normSplTotal{\phiA}.
\end{align*}
If $\Im u \geq 0$, we consider the integration path $\rho_t = u_+ + t(u-u_+)$ for $t\in[0,1]$ and we obtain
\begin{align*}
\vabs{\mathcal{E}_2[\phiA](u)} &\leq
C \delta^2 e^{\frac{\vabs{\Im u}-A}{\delta^2}}
\normSplTotal{\phiA}
\vabs{\int_{0}^{1} \frac{\vabs{u-u_+}}{\vabs{\rho_t-iA}^2} dt}
\leq \frac{C}{\kappa}
e^{\frac{\vabs{\Im u}-A}{\delta^2}}
\normSplTotal{\phiA}.
\end{align*}
Therefore,
$
\normSpl{\mathcal{E}_2[\phiA]} \leq \frac{C}{\kappa}\normSplTotal{\phiA}.
$
\end{proof}
Notice that, by \eqref{eq:dzOutU}, $\dzHatU$ satisfies
\begin{equation}\label{eq:dzOutUInverse}
(\mathrm{Id} - \mathcal{E} )\dzHatU(u) = \mathcal{I}[c_x-c_x^0, c_y-c_y^0](u) + \mathcal{E}[\dzHatO](u).
\end{equation}
Since, by Lemma \ref{lemma:operatorEEdiff}, $\mathrm{Id}-\mathcal{E}$ is invertible in $\XSplTotal$ we have an explicit formula for $\dzHatU$.
Nevertheless, we still need good estimates for the right hand side with respect to the norm \eqref{def:normexp}.
\begin{lemma}\label{lemma:IconstantsDiff}
There exist $\kappa_*, \delta_0, \cttDiffB>0$ such that, for
$\kappa=\kappa_*\vabs{\log \delta}$ and $\delta\in(0,\delta_0)$,
\begin{align*}
\normSplTotal{\mathcal{I}[c_x-c_x^0,c_y-c_y^0]}
\leq \frac{\cttDiffB \, \delta^{\frac{1}{3}}}{\vabs{\log \delta}}\qquad \text{and}\qquad
\normSplTotal{\mathcal{E}[\dzHatO](u)}\leq \frac{\cttDiffB \, \delta^{\frac{1}{3}}}{\vabs{\log \delta}},
\end{align*}
with $\mathcal{I}$, $(c_x^0,c_y^0)$, $(c_x,c_y)$, $\mathcal{E}$ and $\dzHatO$ defined in \eqref{def:operatorIIdiff}, \eqref{def:dzHatO}, \eqref{def:cxcyDifference}, \eqref{def:operatorEEdiff}
and
\eqref{def:dzHatO:0}, respectively.
\end{lemma}
\begin{proof}
By the definition of the function $\mathcal{I}$,
\[
\normSplTotal{\mathcal{I}[c_x-c_x^0,c_y-c_y^0]} = \vabs{c_x-c_x^0}\normSpl{m_x} + \vabs{c_y-c_y^0}\normSpl{m_y},
\]
where $m_x$ and $m_y$ are given in \eqref{def:mxmyalxaly}.
Then, by Lemma \ref{lemma:boundsBspl},
\begin{align*}
\normSpl{m_x} =
e^{\frac{A}{\delta^2}}
\sup_{u \in \DBoomerang}
\boxClaus{e^{-\frac{\Im u+\vabs{\Im u}}{\delta^2}} \vabs{B_x(u)}}
%
\leq C e^{\frac{A}{\delta^2}},
\qquad
\normSpl{m_y} \leq
%
%
%
C e^{\frac{A}{\delta^2}},
\end{align*}
and, as a result,
\begin{equation}\label{proof:operatorIIdiff}
\normSplTotal{\mathcal{I}[c_x-c_x^0,c_y-c_y^0]} \leq
C e^{\frac{A}{\delta^2}}
\paren{|c_x-c_x^0| + |c_y-c_y^0|}.
\end{equation}
We now obtain an estimate for $|c_y-c_y^0|$.
The estimate for $|c_x-c_x^0|$ follows analogously.
By the definition of $m_y$ (see \eqref{def:mxmyalxaly}), one has
\begin{equation}\label{proof:cyMenyscyO}
\begin{split}
\vabs{c_y - c_y^0}
&= e^{-\frac{A}{\delta^2}+\kappa}
\vabs{B_y^{-1}(u_+)}
\vabs{\dyOut(u_+) - \dyOutO(u_+)}.
\end{split}
\end{equation}
Let us denote $\DYInnC = \YuMchO -\YsMchO$ where $\YusMchO$ are given in~\eqref{def:ZdMchO}.
Recall that $\YusMchO=\YusInn + \YusMchU$ where $\YusInn$ is the third component of $\ZusInn$, the solutions of the inner equation (see Theorems \ref{theorem:innerComputations} and \ref{theorem:matching}).
We write,
\begin{align*}
\dyOut(u_+) &= \sqrt{2} \alpha_+ \delta^{\frac{1}{3}}
\DYInnC\paren{\frac{u_+ - iA}{\delta^2}}
=\sqrt{2} \alpha_+ \delta^{\frac{1}{3}} \left[
\DYInn \paren{-i\kappa} +
\YuMchU \paren{-i\kappa} - \YsMchU \paren{-i\kappa}
\right].
\end{align*}
By the definition of $\dyOutO$ in \eqref{def:dzHatO:0} (see also \eqref{def:dzHatO}), we have
$
\dyOutO(u_+) = \sqrt{2}\alpha_+ \delta^{\frac{1}{3}}
\CInn e^{-\kappa} .
$
Then, by \eqref{proof:cyMenyscyO} and Lemma~\ref{lemma:boundsBspl},
\begin{align*}
\vabs{c_y-c_y^0} \leq
C \delta^{\frac{1}{3}} e^{-\frac{A}{\delta^2}+\kappa}
\Big[
\vabs{\DYInn \paren{-i\kappa} - \CInn e^{-\kappa}}
+
\vabs{\YuMchU \paren{-i\kappa}} +
\vabs{\YsMchU \paren{-i\kappa}}
\Big],
\end{align*}
and, applying Theorems~\ref{theorem:innerComputations} and
\ref{theorem:matching}, we obtain
\begin{equation*}
\begin{split}
\vabs{c_y-c_y^0}
&\leq
C \delta^{\frac{1}{3}}
e^{-\frac{A}{\delta^2}+\kappa}
\boxClaus{
\vabs{\chi_3(-i\kappa) e^{-\kappa} }
+
\frac{C}{\kappa} \delta^{\frac23(1-\gamma)} }
\leq
\frac{C}{\kappa} \delta^{\frac13} e^{-\frac{A}{\delta^2}}
\paren{1 +
\delta^{\frac23(1-\gamma)} e^{\kappa}},
\end{split}
\end{equation*}
where $\gamma \in (\gamma^*,1)$ with $\gamma^*\in [\frac{3}{5},1)$ given in Theorem \ref{theorem:matching}.
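The upper bound imposed on $\kappa_*$ below is dictated by the following elementary observation: for $\kappa=\kappa_*\vabs{\log \delta}$ with $\kappa_*>0$ and $\delta \in (0,1)$,
\begin{align*}
e^{\kappa} = e^{\kappa_*\vabs{\log \delta}} = \delta^{-\kappa_*},
\qquad \text{so that} \qquad
\delta^{\frac23(1-\gamma)}\, e^{\kappa} = \delta^{\frac23(1-\gamma)-\kappa_*},
\end{align*}
which tends to $0$ as $\delta \to 0$ precisely when $\kappa_* < \frac{2}{3}(1-\gamma)$.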
Taking $\kappa=\kappa_* \vabs{\log \delta}$ with $0<\kappa_*<\frac{2}{3}(1-\gamma)$, we obtain
\begin{equation*}
\begin{split}
\vabs{c_y-c_y^0}
&\leq
\frac{C\delta^{\frac13} }{\vabs{\log \delta}}
e^{-\frac{A}{\delta^2}}
\paren{1 + \delta^{\frac23(1-\gamma)-\kappa_*}}
\leq
\frac{C \delta^{\frac13} }{\vabs{\log \delta}}
e^{-\frac{A}{\delta^2}}.
\end{split}
\end{equation*}
This bound and \eqref{proof:operatorIIdiff} prove the first estimate of the lemma.
For the second estimate, it only remains to bound $\dzHatO$ and apply Lemma~\ref{lemma:operatorEEdiff}.
Indeed, by the definition of $\dzHatO$ in \eqref{def:dzHatO}, Lemma \ref{lemma:boundsBspl} and \eqref{proof:operatorIIdiff}, we have that
\begin{align*}
\normSplTotal{\dzHatO}
=
\normSplTotal{\mathcal{I}[c_x^0,c_y^0]}
\leq
C e^{\frac{A}{\delta^2}} \paren{\vabs{c_x^0}+\vabs{c_y^0}}
\leq
C \delta^{\frac13}.
\end{align*}
Since $\kappa=\kappa_* \vabs{\log \delta}$ with $0<\kappa_*<\frac{2}{3}(1-\gamma)$,
Lemma \ref{lemma:operatorEEdiff} implies $\normSplTotal{\mathcal{E}[\dzHatO]} \leq \frac{C \delta^{\frac13}}{\vabs{\log \delta}}$.
\end{proof}
With this lemma, we can give sharp estimates for $\dzHatU$ by using equation
\eqref{eq:dzOutUInverse}.
Indeed, since the right hand side of this equation belongs to $\XSplTotal$, by Lemma \ref{lemma:operatorEEdiff},
\[
\dzHatU(u) = (\mathrm{Id} - \mathcal{E} )^{-1}\left( \mathcal{I}[c_x-c_x^0, c_y-c_y^0](u) + \mathcal{E}[\dzHatO](u)\right).
\]
Then, Lemmas \ref{lemma:operatorEEdiff} and \ref{lemma:IconstantsDiff} imply
\begin{align}\label{def:fitaDeltaphi1}
\normSplTotal{\dzHatU}
\leq \frac{C \delta^{\frac{1}{3}}}{\vabs{\log \delta}}.
\end{align}
To prove Theorem \ref{theorem:mainTheoremPoincare}, it only remains to analyze $B_x(u_-)$ and $B_y(u_+)$.
\begin{lemma}\label{lemma:boundsBonsBeta}
Let $\kappa_*$ be as given in Lemma \ref{lemma:IconstantsDiff}.
Then, there exists $\delta_0>0$ such that, for $\delta \in (0,\delta_0)$ and $\kappa=\kappa_*\vabs{\log \delta}$, the functions $B_{x}, B_y$ defined in \eqref{def:mxmyalxaly} satisfy
\begin{align*}
B_x^{-1}(u_-) &= e^{-\frac{4i}9(\pi-\lambda_h(u_*))}
\paren{1+\mathcal{O}\paren{\frac1{\vabs{\log \delta}}}},
\\
B_y^{-1}(u_+) &= e^{\frac{4i}9(\pi-\lambda_h(u_*))}
\paren{1+\mathcal{O}\paren{\frac1{\vabs{\log \delta}}}},
\end{align*}
where $u_{\pm}=\pm i(A-\kappa\delta^2)$.
\end{lemma}
This lemma is proven in Appendix \ref{subappendix:proofH-technicalSecond}.
Let $u_* \in \DBoomerang \cap \mathbb{R}$.
We compute the first order of $\dzHatO(u_*)=(\dxOutO(u_*),\dyOutO(u_*))^T$.
Since, by Theorem \ref{theorem:singularities}, $(\alpha_+)^3=(\alpha_-)^3=\frac12$, applying Lemma \ref{lemma:boundsBonsBeta} and \eqref{def:dzHatO} we obtain
\begin{align*}
\vabs{\Delta x_0(u_*)} =
\vabs{\Delta y_0(u_*)} =
\sqrt[6]{2}
\vabs{\CInn}
\delta^{\frac13} e^{-\frac{A}{\delta^2}}
\paren{1+\mathcal{O}\paren{\frac1{\vabs{\log \delta}}}}.
\end{align*}
Moreover, by \eqref{def:fitaDeltaphi1},
\begin{align*}
\vabs{\Delta x(u_*) - \Delta x_0(u_*)},
\vabs{\Delta y(u_*) - \Delta y_0(u_*)}
\leq
\frac{C \delta^{\frac13} e^{-\frac{A}{\delta^2}}}{\vabs{\log \delta}}.
\end{align*}
Finally, notice that the section $u=u_* \in \DBoomerang \cap \mathbb{R}$ translates to $\lambda= \lambda_* :=\lambda_h(u_*)$ (see \eqref{def:changeOuter}).
Moreover, since $\dot{\lambda}_h=-3\Lambda_h$ (see \eqref{eq:separatrixParametrization}), one deduces that $\Lambda_h(u)>0$ for $u>0$.
Therefore, by the change of coordinates \eqref{def:changeOuter}, Theorem \ref{theorem:existence} and taking $\delta$ small enough,
\begin{align*}
\Lambda_*^{\diamond} = \Lambda_h(u_*) - \frac{\wdOut(u_*)}{3\Lambda_h(u_*)}
=
\Lambda_h(u_*) + \mathcal{O}(\delta^2) > 0,
\qquad
\text{with }\diamond=\unstable,\stable,
\end{align*}
and therefore, using formula \eqref{eq:defDwOut} for $\dwOut$ and Lemma \ref{lemma:boundsBspl}, we obtain that
\[
\vabs{\Lambda_*^{\unstable}-\Lambda_*^{\stable}} \leq
C \vabs{\dwOut(u_*)}
\leq
C \delta \vabs{\dxOut(u_*)} +
C \delta \vabs{\dyOut(u_*)}
\leq
C \delta^{\frac43} e^{-\frac{A}{\delta^2}}.
\]
\subsection{Proof of Lemma \ref{lemma:boundsBspl}}
\label{subappendix:proofH-technicalFirst}
First, we prove the estimates for the operator $\Upsilon$ given in \eqref{def:operatorPPdifference}.
For $\sigma \in [0,1]$, we define
$z_{\sigma}=\sigma\zuOut + (1-\sigma) \zsOut$
with $z_{\sigma}=(w_{\sigma},x_{\sigma},y_{\sigma})^T$.
Then, by Theorem \ref{theorem:existence}, for $u \in \DBoomerang$, we have that
\begin{equation}\label{proof:ztauBounds}
\vabs{w_{\sigma}(u)}\leq \frac{C\delta^2}{\vabs{u^2+A^2}} +
\frac{C\delta^4}{\vabs{u^2+A^2}^{\frac83}}, \qquad
\vabs{x_{\sigma}(u)},\vabs{y_{\sigma}(u)}\leq \frac{C\delta^3}{\vabs{u^2+A^2}^{\frac43}}.
\end{equation}
Recalling that $H^{\out} = w + \frac{xy}{\delta^2} + H_1^{\out}$ (see \eqref{def:hamiltonianOuter}), one has
\begin{equation*}
\begin{split}
\vabs{\Upsilon_1(u)-1}& \leq
\sup_{\sigma \in[0,1]} \vabs{\partial_w H_1^{\out}(u,z_{\sigma}(u))},\\
\vabs{\Upsilon_2(u)}&\leq
\sup_{\sigma \in[0,1]} \paren{
\frac{\vabs{y_{\sigma}(u)}}{\delta^2}
+
\vabs{
\partial_x H_1^{\out}(u,z_{\sigma}(u))}},
\\
\vabs{\Upsilon_3(u)}
&\leq
\sup_{\sigma \in[0,1]} \paren{
\frac{\vabs{x_{\sigma}(u)}}{\delta^2}
+
\vabs{
\partial_y H_1^{\out}(u,z_{\sigma}(u))}}.
\end{split}
\end{equation*}
Then, by \eqref{proof:ztauBounds} and applying
\eqref{def:Fitag} and \eqref{proof:estimatef2f3Out} in the proof of
Lemma \ref{lemma:computationsRRROut} we obtain the estimates for $\Upsilon_1, \Upsilon_2$ and $\Upsilon_3$.
We also need estimates for the matrix $\widetilde{\mathcal{B}}^{\spl}$ given in \eqref{def:Bspl1}, which satisfies
\begin{align*}
|\widetilde{\mathcal{B}}^{\spl}_{i,j}(u)|
\leq
\sup_{\sigma \in[0,1]}
\vabs{\paren{D_z \mathcal{R}^{\out}
[\zOut_{\sigma}](u)}_{i,j}},
\end{align*}
for $z_{\sigma}= \sigma\zuOut + (1-\sigma) \zsOut$.
Then, by \eqref{proof:ztauBounds} and applying Lemma \ref{lemma:computationsRRROut}, for $u \in \DBoomerang$,
\begin{equation} \label{proof:boundsBsplTilde}
\begin{aligned}
\vabs{\widetilde{\mathcal{B}}^{\spl}_{2,1}(u)} &\leq
\frac{C\delta}{\vabs{u^2 + A^2}^{\frac{2}{3}}},
&
\vabs{\widetilde{\mathcal{B}}^{\spl}_{3,1}(u)} &\leq
\frac{C\delta}{\vabs{u^2 + A^2}^{\frac{2}{3}}},
\\
\vabs{\widetilde{\mathcal{B}}^{\spl}_{2,2}(u)} &\leq
\frac{C}{\vabs{u^2 + A^2}^{\frac13}} +
\frac{C \delta^2}{\vabs{u^2 + A^2}^{2}},
&
\vabs{\widetilde{\mathcal{B}}^{\spl}_{3,2}(u)} &\leq
\frac{C\delta^2}{\vabs{u^2 + A^2}^{2}},
\\
\vabs{\widetilde{\mathcal{B}}^{\spl}_{2,3}(u)} &\leq
\frac{C\delta^2}{\vabs{u^2 + A^2}^{2}},
&
\vabs{\widetilde{\mathcal{B}}^{\spl}_{3,3}(u)} &\leq
\frac{C}{\vabs{u^2 + A^2}^{\frac13}} +
\frac{C \delta^2}{\vabs{u^2 + A^2}^{2}}.
\end{aligned}
\end{equation}
Then, by \eqref{proof:boundsUpsilon} and taking $\kappa$ large enough,
\begin{align*}
\vabs{{\mathcal{B}}^{\spl}_{1,1}(u)} &\leq
\frac{\vabs{\Upsilon_2(u)}}{\vabs{\Upsilon_1(u)}} \vabs{\widetilde{\mathcal{B}}^{\spl}_{2,1}(u)}
\leq
\frac{C \delta^2}{\vabs{u^2 + A^2}^{2}}, \\
\vabs{{\mathcal{B}}^{\spl}_{1,2}(u)} &\leq
\vabs{\widetilde{\mathcal{B}}^{\spl}_{2,3}(u)} +
\frac{\vabs{\Upsilon_3(u)}}{\vabs{\Upsilon_1(u)}} \vabs{\widetilde{\mathcal{B}}^{\spl}_{2,1}(u)}
\leq
\frac{C \delta^2}{\vabs{u^2 + A^2}^{2}},
\end{align*}
and analogous estimates hold for ${\mathcal{B}}^{\spl}_{2,1}$ and ${\mathcal{B}}^{\spl}_{2,2}$.
Finally, we compute estimates for $B_y(u)$ (see \eqref{def:mxmyalxaly}) and $u \in \DBoomerang$. The estimates for $B_x(u)$ can be computed analogously.
Let us consider the integration path $\rho_t = u_* + (u-u_*)t $, for $t\in[0,1]$.
Then
\begin{align*}
B_y(u) = \exp \paren{
\int_0^{1} \widetilde{\mathcal{B}}^{\spl}_{2,2}\paren{\rho_t}
(u-u_*) dt}.
\end{align*}
Using the bounds in \eqref{proof:boundsBsplTilde}, we have that
\begin{align*}
\vabs{\log B_y(u)} &\leq
C \vabs{u-u_*} \vabs{\int^{1}_0
\frac{1}{\vabs{\rho^2_t + A^2}^{\frac{1}{3}}} +
\frac{\delta^2}{\vabs{\rho^2_t + A^2}^{2}} dt}
\leq C,
\end{align*}
which implies $C^{-1} \leq \vabs{B_y(u)} \leq C$.
\subsection{Proof of Lemma \ref{lemma:boundsBonsBeta}}
\label{subappendix:proofH-technicalSecond}
We only give an expression for $B_y(u_+)$. The result for $B_x(u_-)$ is analogous.
First, we analyze $\widetilde{\mathcal{B}}^{\spl}_{3,3}$.
\begin{lemma}\label{lemma:proofConstantA}
For $\delta>0$ small enough, $\kappa>0$ large enough and $u \in \DBoomerang$, the function $\widetilde{\mathcal{B}}^{\spl}_{3,3}$ defined in \eqref{def:Bspl1} is of the form
\begin{align*}
\widetilde{\mathcal{B}}_{3,3}^{\spl}(u) =
-\frac{4i}3 \Lambda_h(u)
+ \delta^2 m(u;\delta),
\end{align*}
for some function $m$ satisfying
\begin{align*}
\vabs{m(u;\delta)}
\leq \frac{C}{\vabs{u^2+A^2}^{2}}.
\end{align*}
\end{lemma}
\begin{proof}
Let us define $z_{\tau}= \tau\zuOut + (1-\tau) \zsOut$ and recall that, for $u \in \DBoomerang$,
\begin{align}\label{proof:B33tildeDef}
\widetilde{\mathcal{B}}_{3,3}(u) =
\int_0^1 \partial_y \mathcal{R}_3^{\out}[z_{\tau}](u) d\tau.
\end{align}
Then, by the expression of $\mathcal{R}_3^{\out}$ in \eqref{eq:expressionRRRout}, the estimates
in the proof of Lemma \ref{lemma:computationsRRROut} (see Appendix \ref{subsubsection:proofComputationsRRROut}) and Theorem \ref{theorem:existence}, we have that
\[
\partial_y \mathcal{R}_3^{\out}[z_{\tau}](u)
=
\frac{i}{\delta^2} g^{\out}(u,z_{\tau}(u)) +
\delta^2 \widetilde{m}(u;\delta),
\]
where
$
\vabs{\widetilde{m}(u;\delta)}
\leq \frac{C}{\vabs{u^2+A^2}^{2}}.
$
In the following, to simplify notation, we denote by $\widetilde{m}(u;\delta)$ any function satisfying the previous estimate.
Since $g^{\out}=\partial_w H_1^{\out}$, by \eqref{proof:H1outExpressionOuter} one has
\[
g^{\out}(u,z_{\tau}(u)) = \partial_w M_P(u,z_{\tau}(u);\delta)+
\partial_w M_S(u,z_{\tau}(u);\delta)+
\partial_w M_R(u,z_{\tau}(u);\delta),
\]
with $M_P$, $M_S$ and $M_R$ as given in \eqref{def:expressionMJOuter},
\eqref{def:expressionMSOuter} and
\eqref{def:expressionMROuter}, respectively.
Then, taking into account that $F_{\pend}(z)=2z^3+\mathcal{O}(z^4)$ (see \eqref{def:Fpend}) and following the proofs of Lemmas \ref{lemma:boundsMPout} and \ref{lemma:boundsMSout}, it is a tedious but easy computation to see that,
\begin{equation*}
\begin{split}
g^{\out}(u,z_{\tau}(u)) =& \,
\partial_w M_P(u,0,0,0;\delta) +
\partial_w M_S(u,0,0,0;\delta) \\
&- \frac{w_{\tau}(u)}{3 \Lambda_h^2(u)}
-\frac{\delta^2 \LtresLa(\delta)}{\Lambda_h(u)}
- 2 \delta^2\Lambda_h(u)
+
\delta^4\widetilde{m}(u;\delta),
\end{split}
\end{equation*}
and, by \eqref{proof:B33tildeDef},
\begin{equation}\label{proof:B33tildeDefB}
\begin{split}
\widetilde{\mathcal{B}}_{3,3}(u) =& \, \frac{i}{\delta^2} \boxClaus{\partial_w M_P(u,0,0,0;\delta) +
\partial_w M_S(u,0,0,0;\delta) } \\
&- i\frac{\wuOut(u)+\wsOut(u)}{6 \delta^2\Lambda_h^2(u)}
- i\frac{\LtresLa(\delta)}{\Lambda_h(u)}
- 2i \Lambda_h(u)
+
\delta^2\widetilde{m}(u;\delta).
\end{split}
\end{equation}
Next, we study the terms $\wusOut(u)$.
Since $H^{\out}=w + \frac{xy}{\delta^2} + M_P+M_S+M_R$ (see
\eqref{def:hamiltonianOuter} and \eqref{proof:H1outExpressionOuter}), one can see that
\[
H^{\out}(u,\zuOut(u);\delta)
=
H^{\out}(u,\zsOut(u);\delta)
=
\lim_{\Re u \to \pm \infty} H^{\out}(u,0,0,0;\delta)
=
\delta^4 K(\delta),
\]
with $\vabs{K(\delta)}\leq C$, for $\delta$ small enough.
Then, by Theorem \ref{theorem:existence}, for $\diamond=\unstable,\stable$,
\begin{align*}
\vabs{\wdOut(u) + M_P(u,\zdOut(u);\delta)+
M_S(u,\zdOut(u);\delta)+ M_R(u,\zdOut(u);\delta)}
\leq
\frac{C\delta^4}{\vabs{u^2+A^2}^{\frac83}}.
\end{align*}
Again, following the proofs of Lemmas \ref{lemma:boundsMPout} and \ref{lemma:boundsMSout}, one obtains
\begin{align*}
\vabs{\wdOut(u) + M_P(u,0,0,0;\delta)
+ M_S(u,0,0,0;\delta)
+\delta^2\Lambda_h(u)(3\LtresLa+2\Lambda_h^2(u))}
\leq \frac{C\delta^4}{\vabs{u^2+A^2}^{\frac83}},
\end{align*}
and, by \eqref{proof:B33tildeDefB},
\begin{equation*}
\begin{split}
\widetilde{\mathcal{B}}_{3,3}(u)
=& \,
- \frac{4i}3 \Lambda_h(u)
+
\frac{i}{\delta^2} \boxClaus{\partial_w M_P(u,0,0,0;\delta)
+ \frac{M_P(u,0,0,0;\delta)}{3\Lambda_h^2(u)}
}
\\
&+ \frac{i}{\delta^2} \boxClaus{\partial_w M_S(u,0,0,0;\delta)
+ \frac{M_S(u,0,0,0;\delta)}{3\Lambda_h^2(u)} }
+\delta^2\widetilde{m}(u;\delta).
\end{split}
\end{equation*}
Therefore, it only remains to check that
\begin{align*}
\vabs{\partial_w M_{P,S}(u,0,0,0;\delta)
+ \frac{M_{P,S}(u,0,0,0;\delta)}{3\Lambda_h^2(u)}}
\leq \frac{C\delta^4}{\vabs{u^2+A^2}^2}.
\end{align*}
Indeed, by \eqref{def:SerieP} and the definition \eqref{def:expressionMJOuter} of $M_P$, one has
\begin{align*}
M_P(u,w,0,0;\delta) = \mathcal{M}_P\paren{u,\delta^2\Lambda_h(u)-\frac{\delta^2 w}{3\Lambda_h(u)}+\delta^4\LtresLa(\delta)},
\end{align*}
where $\mathcal{M}_P(u,\Lambda)$ is an analytic function for $u \in \DBoomerang$ and $\vabs{\Lambda} \ll 1$.
Moreover, following the proof of Lemma \ref{lemma:boundsMPout}, there exist $a_0$ and $a_1$ such that
\begin{align*}
\vabs{\mathcal{M}_P(u,\Lambda)-a_0(u;\delta)- a_1(u;\delta) \Lambda} \leq
\frac{C\vabs{\Lambda}^2}{\vabs{u^2+A^2}^2},
\end{align*}
with
\[
\vabs{a_0(u;\delta)} \leq \frac{C\delta^4}{\vabs{u^2+A^2}^{\frac23}},
\qquad
\vabs{a_1(u;\delta)} \leq \frac{C}{\vabs{u^2+A^2}^{\frac23}}.
\]
Therefore,
\begin{align*}
\vabs{\partial_w M_{P}(u,0,0,0;\delta)
+ \frac{M_{P}(u,0,0,0;\delta)}{3\Lambda_h^2(u)}}
\leq& \,
\frac{\vabs{a_0(u)}}{3\Lambda_h^2(u)}
+ \frac{\delta^4\LtresLa(\delta)\vabs{a_1(u)}}{3\Lambda_h^2(u)}
+ \frac{C\delta^4}{\vabs{u^2+A^2}^{2}}
\\
\leq& \,
\frac{C\delta^4}{\vabs{u^2+A^2}^{2}} .
\end{align*}
An analogous estimate holds for $M_S$.
\end{proof}
\begin{proof}[End of the proof of Lemma \ref{lemma:boundsBonsBeta}]
By Lemma \ref{lemma:proofConstantA} and recalling that $u_+=i(A-\kappa \delta^2)$,
\begin{equation}
\label{proof:constantDifferenceZ}
\begin{split}
\log B_y(u_+)
=&
\int_{u_*}^{u_+} \widetilde{\mathcal{B}}_{3,3}^{\spl}(u) du
=
-\frac{4i}3 \int_{u_*}^{i A} \Lambda_h(u) du
\\ &+
\frac{4i}3 \int^{i A}_{u_+} \Lambda_h(u) du
+
\delta^2
\int_{u_*}^{u_+} {m}(u;\delta) \, du.
\end{split}
\end{equation}
Then, by Theorem \ref{theorem:singularities} and taking into account that $\kappa=\kappa_* \vabs{\log \delta}$ (see Lemma \ref{lemma:IconstantsDiff}), we obtain
\begin{align*}
\vabs{\log B_y(u_+) + \frac{4 i}3
\int_{u_*}^{i A} \Lambda_h(u) du}
\leq
\frac{C}{\kappa}
+
C \kappa^{\frac23}\delta^{\frac43}
+
\frac{C\delta^2}{\vabs{u_* - iA}}
\leq
\frac{C}{\vabs{\log \delta}}.
\end{align*}
Finally, recalling that $\dot{\lambda}_h=-3\Lambda_h$, applying the change of coordinates $\lambda=\lambda_h(u)$
and using that $\lambda_h(iA) = \pi$, we have that
\begin{align*}
\frac{4i}3 \int_{u_*}^{i A} \Lambda_h(u) du
=
-\frac{4i}9 \int_{\lambda_h(u_*)}^{\pi} d\lambda
=
-\frac{4i}9 \paren{\pi-\lambda_h(u_*)}.
\end{align*}
Joining the last statements with \eqref{proof:constantDifferenceZ}, we obtain the statement of the lemma.
\end{proof}
\subsection{Preliminaries and setup}
Proposition \ref{proposition:innerDerivation} shows that the Hamiltonian $H^{\out}$ expressed in inner coordinates, that is, $H^{\inn}$ as given in \eqref{def:hamiltonianInnerComplete},
is of the form $H^{\inn}=W+XY+ \mathcal{K}+H_1^{\inn}$.
Then, the equation associated to $H^{\inn}$ can be written as
\begin{equation}\label{eq:systemEDOsInnerComplete}
\left\{ \begin{array}{l}
\dot{U} = 1 + g^{\Inner}(U,Z) + g^{\mch}(U,Z),\\
\dot{Z} = \mathcal{A}^{\Inn} Z + f^{\Inner}(U,Z) + f^{\mch}(U,Z),
\end{array} \right.
\end{equation}
where $\mathcal{A}^{\Inner}$ is given in \eqref{def:matrixAAA} and
\begin{equation}\label{def:fgInnfgMch}
\begin{aligned}
f^{\Inn} &= \paren{-\partial_U \mathcal{K},
i \partial_Y \mathcal{K}, -i\partial_X \mathcal{K}}^T,
& \quad
g^{\inn} &= \partial_{W} \mathcal{K},
\\
f^{\mch} &= \paren{-\partial_U H_1^{\inn},
i \partial_Y H_1^{\inn}, -i\partial_X H_1^{\inn} }^T,
& \quad
g^{\mch} &= \partial_{W} H_1^{\inn}.
\end{aligned}
\end{equation}
Notice that, since $(u,\zuOut(u))=\phi_{\Inner}(U,\ZuMchO(U))$ (see \eqref{def:ZdMchO}),
$(U,\ZuMchO(U))$ is an invariant graph of equation \eqref{eq:systemEDOsInnerComplete}.
Therefore, $\ZuMchO$ satisfies the invariance equation
\begin{equation*}
\begin{split}
\partial_U {\ZuMchO} &=
\mathcal{A}^{\Inner} \ZuMchO
+ \mathcal{R}^{\Inner}[\ZuMchO]
+ \mathcal{R}^{\mch}[\ZuMchO],
\end{split}
\end{equation*}
with $\mathcal{R}^{\Inner}$ as defined in \eqref{def:operatorRRRInner} and
\begin{equation}\label{def:operatorRRRmch}
\mathcal{R}^{\mch}[\phiA] =
\frac{\mathcal{A}^{\Inner} \phiA
+ f^{\Inn}(U, \phiA)
+ f^{\mch}(U, \phiA) }
{1+ g^{\Inn}(U, \phiA) + g^{\mch}(U, \phiA) }
- \mathcal{A}^{\Inner} \phiA
- \mathcal{R}^{\Inner}[\phiA].
\end{equation}
Similarly $\ZuInn$ satisfies the invariance equation
$
\partial_U \ZuInn =
\mathcal{A}^{\Inner} \ZuInn
+ \mathcal{R}^{\Inner}[\ZuInn]
$ (see Theorem \ref{theorem:innerComputations}) and,
therefore, by the fundamental theorem of calculus, the difference $\ZuMchU = \ZuMchO - \ZuInn$ must be a solution of
\begin{equation}\label{eq:invariantEquationMchU1}
\partial_U {\ZuMchU} =
\mathcal{A}^{\inn} \ZuMchU +
\mathcal{B}(U) \ZuMchU + \mathcal{R}^{\mch}[\ZuMchO],
\end{equation}
with
\begin{equation}\label{def:operatorBBmch}
\mathcal{B}(U) = \int_{0}^{1} D_{Z}\mathcal{R}^{\Inn}
[(1-s)\ZuInn + s \ZuMchO](U) ds.
\end{equation}
The key point is that, since the existence of both $\ZuInn$ and $\ZuMchO$ has already been proven,
we can treat $\mathcal{B}(U)$ and
$\mathcal{R}^{\mch}[\ZuMchO](U)$ as known functions. Therefore, equation~\eqref{eq:invariantEquationMchU1} can be understood as a nonhomogeneous linear equation with independent term $\mathcal{R}^{\mch}[\ZuMchO](U)$.
Moreover, defining the linear operator
$\mathcal{L}^{\inn} \phiA = (\partial_U-\mathcal{A}^{\inn})\phiA$,
equation~\eqref{eq:invariantEquationMchU1} is equivalent to
\begin{equation}\label{eq:invariantEquationMchU2}
\mathcal{L}^{\inn}{\ZuMchU}(U) =
\mathcal{B}(U)\ZuMchU(U)
+\mathcal{R}^{\mch}[\ZuMchO](U).
\end{equation}
We prove Theorem \ref{theorem:matching} by solving this equation (with suitable initial conditions).
To this end, we define the Banach space
$
\XcalMchTotal = \XcalMch_{\frac{4}{3}} \times \XcalMch_1 \times \XcalMch_1
$
with
\begin{equation*}
\XcalMch_{\alpha}=
\Bigg\{ \phiA: \DuMchInn \to \mathbb{C} \; : \;
\phiA \text{ real-analytic, }
\normMch{\phiA}_{\alpha}=\sup_{U \in \DuMchInn} \vabs{U^{\alpha}\phiA(U)} < \infty \Bigg\},
\end{equation*}
endowed with the product norm
$
\normMchTotal{\phiA} = \normMch{\phiA_1}_{\frac{4}{3}} + \normMch{\phiA_2}_{1} + \normMch{\phiA_3}_{1}.
$
The next lemma gives some properties of these Banach spaces.
\begin{lemma}\label{lemma:sumNormsMch}
Let $\gamma \in [\frac{3}{5},1)$ and $\alpha, \beta \in \mathbb{R}$. The following statements hold:
\begin{enumerate}
\item If $\varphi \in \XcalMch_{\alpha}$, then $\varphi \in \XcalMch_{\beta}$.
Moreover,
\begin{align*}
\begin{cases}
\normMch{\varphi}_{\beta} \leq
C \kappa^{\beta-\alpha}
\normMch{\varphi}_{\alpha},
&\quad \text{for } \alpha > \beta, \\
\normMch{\varphi}_{\beta} \leq
C \delta^{2(\alpha-\beta)(1-\gamma)}
\normMch{\varphi}_{\alpha},
&\quad \text{for } \alpha < \beta.
\end{cases}
\end{align*}
\item If $\varphi \in \XcalMch_{\alpha}$ and
$\zeta \in \XcalMch_{\beta}$, then
$\varphi \zeta \in \XcalMch_{\alpha+\beta}$ and
$
\normMch{\varphi \zeta}_{\alpha+\beta} \leq \normMch{\varphi}_{\alpha} \normMch{\zeta}_{\beta}.
$
\end{enumerate}
\end{lemma}
This lemma is a direct consequence of the fact that, as explained in Section \ref{subsection:matching}, $U$ satisfies
\begin{equation}\label{eq:boundsDomainMchInner}
\kappa \cos \betaMchA \leq
\vabs{U} \leq
\frac{C}{\delta^{2(1-\gamma)}}.
\end{equation}
Now, we present the main result of this section, which implies Theorem~\ref{theorem:matching}.
\begin{proposition}\label{theorem:matchingProof}
There exist $\gamma^*\in[\frac35,1)$,
$\kappaMch\geq\max\claus{\kappaOuter,\kappaInner}$,
$\delta_0>0$ and $\cttMchA>0$ such that,
for $\gamma \in (\gamma^*,1)$,
$\kappa\geq \kappaMch$
and $\delta \in (0,\delta_0)$, $\ZuMchU$ satisfies
$\normMchTotal{\ZuMchU} \leq \cttMchA \, \delta^{\frac{2}{3}(1-\gamma)}$.
\end{proposition}
\subsection{An integral equation formulation}
\label{subsection:linearOperatorsMch}
To prove Proposition \ref{theorem:matchingProof}, we first introduce a right-inverse of $\mathcal{L}^{\inn}=\partial_U-\mathcal{A}^{\inn}$.
\begin{lemma}\label{lemma:operadorEEmch}
The operator $\mathcal{G}^{\inn}[\phiA]=\paren{\mathcal{G}^{\inn}_1[\phiA_1],\mathcal{G}^{\inn}_2[\phiA_2],\mathcal{G}^{\inn}_3[\phiA_3]}^T$
defined as
\begin{equation}\label{def:operatorEEmch}
\mathcal{G}^{\inn}[\phiA](U) = \paren{
\int_{U_3}^U \phiA_1(S) dS, \,
\int^U_{U_3} e^{-i(S-U)} \phiA_2(S) dS, \,
\int^U_{U_2} e^{i(S-U)}\phiA_3(S) dS}^T,
\end{equation}
where $U_2$ and $U_3$ are introduced in~\eqref{def:U12}, is a right inverse of $\mathcal{L}^{\inn}$.
Moreover, there exists a constant $C>0$ such that:
\begin{enumerate}
\item Let $\alpha>1$. If $\phiA \in \XcalMch_{\alpha}$, then $\mathcal{G}^{\inn}_1[\phiA] \in \XcalMch_{\alpha-1}$ and
$
\normMch{\mathcal{G}^{\inn}_1[\phiA]}_{\alpha-1} \leq C \normMch{\phiA}_{\alpha}.
$
\item Let $\alpha>0$, $j=2,3$.
If $\phiA \in \XcalMch_{\alpha}$,
then $\mathcal{G}^{\inn}_j[\phiA] \in \XcalMch_{\alpha}$ and
$
\normMchSmall{\mathcal{G}^{\inn}_j[\phiA]}_{\alpha} \leq C \normMch{\phiA}_{\alpha}.
$
\end{enumerate}
\end{lemma}
The proof of this lemma follows the same lines as the proof of Lemma 20 in \cite{BCS13}.
Using the operator $\mathcal{G}^{\inn}$, equation~\eqref{eq:invariantEquationMchU2} is equivalent to
\begin{equation*}
\ZuMchU(U) = C^{\mch} e^{\mathcal{A}^{\inn} U} +
\mathcal{G}^{\inn} \left[ \mathcal{B} \cdot \ZuMchU \right] (U) +
\paren{\mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]} (U),
\end{equation*}
where $C^{\mch}=(C^{\mch}_W,C^{\mch}_X, C^{\mch}_Y)^T$ is defined as
\begin{equation*}
C_W^{\mch} =\WuMchU(U_3), \qquad
C_X^{\mch} =e^{-i U_3}\XuMchU(U_3), \qquad
C_Y^{\mch} =e^{i U_2}\YuMchU(U_2).
\end{equation*}
Then, defining the operator
$
\mathcal{T}[\phiA](U) =
\mathcal{G}^{\inn} \left[ \mathcal{B} \cdot \phiA \right](U),
$
this equation is equivalent to
\begin{equation}\label{eq:invariantEquationMchU4}
(\mathrm{Id} - \mathcal{T})\ZuMchU = C^{\mch} e^{\mathcal{A}^{\inn} U} +
\paren{\mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]}
\end{equation}
and therefore, to estimate $\ZuMchU$, we need to prove that $\mathrm{Id}-\mathcal{T}$ is invertible in $\XcalMchTotal$.
\begin{lemma}\label{lemma:operatorTTmch}
Let us consider operators $\mathcal{B}$ and $\mathcal{G}^{\inn}$ as given in \eqref{def:operatorBBmch} and \eqref{def:operatorEEmch}.
Then, for $\gamma \in [\frac35,1)$, $\kappa>0$ big enough and $\delta>0$ small enough, every $\phiA \in \XcalMchTotal$ satisfies
\begin{align*}
\normMchTotal{\mathcal{T}[\phiA]}
= \normMchTotal{\mathcal{G}^{\inn}[\mathcal{B}\cdot \phiA]}
\leq \frac{1}{2} \normMchTotal{\phiA}
\end{align*}
and therefore
\begin{align*}
\normMchTotal{(\mathrm{Id}-\mathcal{T})^{-1}[\phiA]} \leq 2 \normMchTotal{\phiA}.
\end{align*}
\end{lemma}
To prove this lemma, we use the following estimates, whose proof follows directly from Lemma 5.5 in \cite{articleInner}.
\begin{lemma}\label{lemma:technicalMatching}
Fix $\varrho>0$ and take $\kappa>0$ big enough. Then, there exists a constant $C$ (depending on $\varrho$ but independent of $\kappa$) such that, for $\phiA \in \XcalMchTotal$ with $\normMchTotal{\phiA}\leq\varrho$, the functions $g^{\inn}$ and $f^{\inn}$ in \eqref{def:fgInnfgMch} and the operator $\mathcal{R}^{\inn}$ in \eqref{def:operatorRRRInner} satisfy
\begin{align*}
\normMch{g^{\inn}(\cdot,\phiA)}_2 \leq C,
\qquad
\normMch{f_1^{\inn}(\cdot,\phiA)}_{\frac{11}3} \leq C,
\qquad
\normMch{f_j^{\inn}(\cdot,\phiA)}_{\frac43}
\leq C,
\quad j=2,3
\end{align*}
and
\begin{align*}
\normMch{\partial_W \mathcal{R}^{\inn}_1[\phiA]}_3 &\leq C, &
\normMch{\partial_X \mathcal{R}^{\inn}_1[\phiA] }_{\frac73} &\leq C, &
\normMch{\partial_Y \mathcal{R}^{\inn}_1[\phiA] }_{\frac73} &\leq C,
\\
\normMch{\partial_W \mathcal{R}^{\inn}_j[\phiA]}_{\frac23} &\leq C, &
\normMch{\partial_X \mathcal{R}^{\inn}_j[\phiA] }_{2} &\leq C, &
\normMch{\partial_Y \mathcal{R}^{\inn}_j[\phiA] }_{2} &\leq C,
\quad j=2,3.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:operatorTTmch}]
Let $\ZuMchO$ be as given in \eqref{def:ZdMchO}. Then, by Proposition \ref{proposition:existenceComecocos},
estimates \eqref{eq:boundsDomainMchInner} and taking $\gamma \in [\frac35,1)$, we have that, for $U \in \DuMchInn$,
\begin{align}\label{eq:boundsZMchO}
\vabs{\WuMchO(U)} \leq \frac{C}{\vabs{U}^{\frac{8}{3}}}
+ \frac{C \delta^{\frac{4}{3}}}{\vabs{U}}
\leq
\frac{C}{\vabs{U}^{\frac{8}{3}}}, \qquad
%
\normMch{\XuMchO}_{\frac43} \leq C, \qquad
%
\normMch{\YuMchO}_{\frac43} \leq C.
\end{align}
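In the first estimate we have used that, by \eqref{eq:boundsDomainMchInner} and since $\gamma \geq \frac35$,
\begin{equation*}
\frac{\delta^{\frac43}}{\vabs{U}}
= \frac{\delta^{\frac43} \vabs{U}^{\frac53}}{\vabs{U}^{\frac83}}
\leq \frac{C \delta^{\frac43-\frac{10}{3}(1-\gamma)}}{\vabs{U}^{\frac83}}
\leq \frac{C}{\vabs{U}^{\frac83}}.
\end{equation*}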
Then, using also Theorem \ref{theorem:innerComputations}, we obtain that
$(1-s)\ZuInn+s\ZuMchO \in \XcalMchTotal$ for $s \in [0,1]$ and $\gamma \in [\frac35,1)$ and
$
\normMchTotal{(1-s)\ZuInn+s\ZuMchO}\leq C.
$
As a result, using the definition of $\mathcal{B}$ in \eqref{def:operatorBBmch} and Lemma \ref{lemma:technicalMatching},
\begin{equation}\label{eq:boundsOperadorBBmch}
\begin{aligned}
\normMch{\mathcal{B}_{1,1}}_3 &\leq C, &
\normMch{\mathcal{B}_{1,2}}_{\frac73} &\leq C, &
\normMch{\mathcal{B}_{1,3}}_{\frac73} &\leq C, \\
\normMch{\mathcal{B}_{j,1}}_{\frac23} &\leq C, &
\normMch{\mathcal{B}_{j,2}}_2 &\leq C, &
\normMch{\mathcal{B}_{j,3}}_2 &\leq C,
\quad \text{for } j=2,3.
\end{aligned}
\end{equation}
Therefore, by Lemmas~\ref{lemma:operadorEEmch} and \ref{lemma:sumNormsMch} and \eqref{eq:boundsOperadorBBmch}, we obtain
\begin{align*}
\normMch{\mathcal{T}_1[\phiA]}_{\frac{4}{3}} &\leq
C \normMch{\pi_1 \paren{\mathcal{B} \phiA} }_{\frac{7}{3}}\\
&\leq C \boxClaus{\normMch{\mathcal{B}_{1,1}}_{1} \normMch{\phiA_1}_{\frac{4}{3}}+
\normMch{\mathcal{B}_{1,2}}_{\frac{4}{3}} \normMch{\phiA_2}_{1}+
\normMch{\mathcal{B}_{1,3}}_{\frac{4}{3}} \normMch{\phiA_3}_{1}} \\
&\leq
\frac{C}{\kappa^2} \normMch{\phiA_1}_{\frac{4}{3}} +
\frac{C}{\kappa} \normMch{\phiA_2}_{1} +
\frac{C}{\kappa} \normMch{\phiA_3}_{1}
\leq \frac{C}{\kappa} \normMchTotal{\phiA}.
\end{align*}
Proceeding analogously, for $j=2,3$, we have
\begin{align*}
\normMch{\mathcal{T}_j[\phiA]}_{1}
&\leq C \boxClaus{\normMch{\mathcal{B}_{j,1}}_{-\frac{1}{3}}
\normMch{\phiA_1}_{\frac{4}{3}}+
\sum_{l=2}^3
\normMch{\mathcal{B}_{j,l}}_{0} \normMch{\phiA_l}_{1}}
%
\leq \frac{C}{\kappa} \normMchTotal{\phiA}.
\end{align*}
Taking $\kappa>0$ big enough, we obtain the statement of the lemma.
\end{proof}
\subsection{End of the proof of Proposition \ref{theorem:matchingProof}}
To complete the proof of Proposition \ref{theorem:matchingProof}, we study the right-hand side of equation \eqref{eq:invariantEquationMchU4}.
First, we deal with the term $C^{\mch} e^{\mathcal{A}^{\inn} U}$.
Recall that $U_2$ and $U_3$ in \eqref{def:U12} satisfy
\begin{equation*}
\frac{C^{-1}}{\delta^{2(1-\gamma)}} \leq
\vabs{U_j} \leq
\frac{C}{\delta^{2(1-\gamma)}},
\hspace{5mm}
\text{ for } j=2,3.
\end{equation*}
Then, taking into account that $\WuMchU= \WuMchO-\WuInn$, \eqref{eq:boundsZMchO} and Theorem \ref{theorem:innerComputations} imply
\begin{align*}
|C^{\mch}_W| = \vabs{\WuMchU(U_3)} \leq \vabs{\WuMchO(U_3)}+\vabs{\WuInn(U_3)}
\leq \frac{C}{\vabs{U_3}^{\frac{8}{3}}}
\leq
C \delta^{\frac{16}{3}(1-\gamma)}
\end{align*}
and, as a result, by Lemma \ref{lemma:sumNormsMch},
$
\normMchSmall{C^{\mch}_W}_{\frac{4}{3}}
\leq
C \delta^{\frac{8}{3}(1-\gamma)}.
$
Analogously, for $U \in \DuMchInn$,
\begin{align*}
|C^{\mch}_X e^{iU}|
&= |e^{i(U-U_3)} \XuMchU(U_3)|
\leq \frac{C e^{-\Im(U-U_3)}}{\vabs{U_3}^{\frac{4}{3}}}
\leq C \delta^{\frac{8}{3}(1-\gamma)}
\end{align*}
and then
$
\normMchSmall{C^{\mch}_X e^{iU}}_1
\leq C \delta^{\frac{2}{3}(1-\gamma)}.
$
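Here, we have used that, by \eqref{eq:boundsDomainMchInner},
\begin{equation*}
\vabs{U}\, \delta^{\frac{8}{3}(1-\gamma)}
\leq C \delta^{-2(1-\gamma)} \delta^{\frac{8}{3}(1-\gamma)}
= C \delta^{\frac{2}{3}(1-\gamma)},
\qquad \text{for } U \in \DuMchInn.
\end{equation*}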
An analogous result holds for $C^{\mch}_Y e^{-iU}$. Therefore,
\begin{equation}\label{eq:independentTermsMchA}
\normMchTotalSmall{C^{\mch} e^{\mathcal{A}^{\inn} U}}\leq
C \delta^{\frac{2}{3}(1-\gamma)}.
\end{equation}
Now, we estimate the norm of $\mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]$.
The operator $\mathcal{R}^{\mch}$ in~\eqref{def:operatorRRRmch} can be rewritten as
\begin{align*}
\mathcal{R}^{\mch}[\ZuMchO] = \frac{f^{\mch}(1+g^{\Inn})-
g^{\mch}(\mathcal{A}^{\Inn}\ZuMchO+f^{\Inn})}{(1+g^{\Inn})(1+g^{\Inn}+g^{\mch})}.
\end{align*}
Then by \eqref{eq:boundsZMchO}, Lemmas \ref{lemma:sumNormsMch} and \ref{lemma:technicalMatching} and taking $\kappa$ big enough, we obtain
\begin{equation}\label{proof:boundsfgInn}
\begin{aligned}
\normMch{g^{\inn}(\cdot,\ZuMchO)}_0 &\leq \frac{C}{\kappa^2} \leq \frac12, &\quad
%
\normMch{i\XuMchO + f_2^{\inn}(\cdot,\ZuMchO)}_0
&\leq C, \\
%
\normMch{f_1^{\inn}(\cdot,\ZuMchO)}_0 &\leq C,
&\quad
%
\normMch{-i\YuMchO + f_3^{\inn}(\cdot,\ZuMchO)}_0 &\leq C.
\end{aligned}
\end{equation}
To analyze
$f^{\mch}$ and
$g^{\mch}$ (see \eqref{def:fgInnfgMch}), we rely on the estimates for $H_1^{\inn}$ in \eqref{eq:boundsH1Inn} and on the estimates for its derivatives, which follow easily from Cauchy estimates.
Indeed, they can be applied since $U \in \DuMchInn$ and, by \eqref{eq:boundsZMchO},
\[
\vabs{\WuMchO(U)},\vabs{\XuMchO(U)},\vabs{\YuMchO(U)}\leq C.
\]
Then, there exists $m>0$ such that
\begin{align}\label{proof:boundsMchA}
|g^{\mch}(U,\ZuMchO)| \leq C
\delta^{\frac43-2 m(1-\gamma)},
\quad
|f_j^{\mch}(U,\ZuMchO)|\leq C \delta^{\frac43-2 m(1-\gamma)},
\text{ for } j=1,2,3.
\end{align}
We note that, for $\gamma \in (\gamma^*_0,1)$ with $\gamma^*_0=\max\{\frac{3}{5},\frac{3m-2}{3m}\}$, we have $1-\gamma<\frac{2}{3m}$ and hence ${\frac43}-2m(1-\gamma)>0$.
Then, for $\gamma \in (\gamma^*_0,1)$, $\delta$ small enough and $\kappa$ big enough, using \eqref{proof:boundsfgInn} and
\eqref{proof:boundsMchA}, we obtain
\begin{align*}
|\mathcal{R}^{\mch}_j[\ZuMchO](U)| \leq C \delta^{\frac43-2 m(1-\gamma)}, \qquad
\text{ for } j=1,2,3.
\end{align*}
Then, by Lemmas \ref{lemma:sumNormsMch} and \ref{lemma:operadorEEmch},
\begin{align*}
\normMchTotalSmall{\mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]} &=
\normMchSmall{\mathcal{G}^{\inn}_1 \circ \mathcal{R}_1^{\mch}[\ZuMchO]}_{\frac{4}{3}}
+
\textstyle\sum_{j=2}^3
\normMchSmall{\mathcal{G}^{\inn}_j \circ \mathcal{R}_j^{\mch}[\ZuMchO]}_{1} \\
&\leq
C \normMchSmall{\mathcal{R}_1^{\mch}[\ZuMchO]}_{\frac{7}{3}} +
\textstyle
\sum_{j=2}^3
C \normMchSmall{\mathcal{R}_j^{\mch}[\ZuMchO]}_{1}
\leq
C \delta^{\frac43-2\paren{m+\frac73}(1-\gamma)}.
\end{align*}
If we take $\gamma^*=\max\claus{\frac35, \gamma^*_0, \gamma^*_1}$
with $\gamma^*_1=\frac{3m+6}{3m+8}$
and $\gamma \in (\gamma^*,1)$, then $1-\gamma<\frac{2}{3m+8}$ and therefore
$\frac43-2\paren{m+\frac73}(1-\gamma) \geq \frac23 (1-\gamma)$. Thus,
\begin{align}\label{eq:independentTermsMchB}
\normMchTotalSmall{\mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]}
\leq
C \delta^{\frac23(1-\gamma)}.
\end{align}
To complete the proof of Proposition~\ref{theorem:matchingProof}, we consider
equation~\eqref{eq:invariantEquationMchU4}. By Lemma \ref{lemma:operatorTTmch}, $(\mathrm{Id} - \mathcal{T})$ is invertible in $\XcalMchTotal$ and moreover
\begin{equation*}
\begin{split}
\normMchTotal{\ZuMchU}& =
\normMchTotal{ (\mathrm{Id} - \mathcal{T})^{-1}
\paren{C^{\mch} e^{\mathcal{A}^{\inn} U} + \mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]}}\\
&\leq 2 \normMchTotal{ C^{\mch} e^{\mathcal{A}^{\inn} U} + \mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]}.
\end{split}
\end{equation*}
Then, it is enough to apply \eqref{eq:independentTermsMchA} and \eqref{eq:independentTermsMchB}.
\qed
\subsection{Proof of Lemma~\ref{lemma:computationsRRRtransitionOuter}}
\label{subsubsection:ProofcomputationsRRRtransitionOuter}
First, let us notice that, since $\DOuterTilde \subset \DuOut$, we have that $\YcalOuter \subset \XcalOut_{0,0}$ (see \eqref{def:XcalOut}).
Therefore, for $\alpha,\beta \in \mathbb{R}$, if $\phiA \in \XcalOut_{\alpha,\beta}$ then $\phiA \in \YcalOuter$ and
\begin{equation*}
\normSup{\phiA} \leq C \normOut{\phiA}_{\alpha,\beta}.
\end{equation*}
Let $\phiA \in \YcalOuter$ be such that $\normSup{\phiA}\leq \varrho \delta^2$ and let $v \in \DOuterTilde$.
Then, for $\delta$ small enough, $v+\phiA(v) \in \DuOut$.
As a result, we can apply the estimates for the derivatives of $H_1^{\out}$, obtained in the proof of Lemma \ref{lemma:computationsRRROut}, to the operator $R[\phiA]$ (see \eqref{def:operatorRRRtransition}).
Indeed, by \eqref{proof:estimatef1OutDCerivatives},
\begin{align*}
\normSup{R[\phiA]}
\leq
\normOut{\partial_w H_1^{\out}
\paren{\cdot, \zuOut}}_{1,-\frac{2}{3}}
\leq
C\delta^2.
\end{align*}
Analogously, we can compute estimates for the derivative $DR[\phiA]$. Indeed, applying the results in Proposition \ref{proposition:existenceComecocos},
\eqref{proof:estimategOutDerivatives} and \eqref{proof:estimatef1OutDCerivatives}, we obtain
\begin{align*}
\normSup{DR[\phiA]} \leq&
\normOut{\partial_{uw} H_1^{\out}(\cdot,\zuOut)}_{1,\frac{1}{3}}
+
\normOut{\partial^2_{w} H_1^{\out}(\cdot,\zuOut)}_{0,-\frac{2}{3}}
\normOut{\partial_u \wuOut}_{1,1} \\
&+
\normOut{\partial_{wx} H_1^{\out}(\cdot,\zuOut)}_{0,\frac53}
\normOut{\partial_u \xuOut}_{0,\frac53}
+
\normOut{\partial_{wy} H_1^{\out}(\cdot,\zuOut)}_{1,-2}
\normOut{\partial_u \yuOut}_{0,\frac{7}{3}}
\leq C \delta^2.
\end{align*}
\qed
\subsection{Proof of Lemma~\ref{lemma:computationsRFlow}}
\label{subsubsection:ProofComputationsRFlow}
By the definition of $\mathcal{R}^{\flow}[\phiA]$ in \eqref{def:operatorRFlow}, we need to estimate the derivatives of $H_1(\Gamma_h(v)+\phiA(v);\delta)$ and
\begin{align}\label{proof:functionTopertatorRflow}
T[\phiA_{\lambda}] = -V'(\lambda_h+\phiA_{\lambda}) +V'(\lambda_h)
+ V''(\lambda_h)\phiA_{\lambda},
\end{align}
where the potential $V$ is given in \eqref{def:potentialV}.
Let us consider $\phiA=(\phiA_{\lambda},\phiA_{\Lambda},\phiA_x,\phiA_y) \in \XcalFlowTotal$ such that $\normFlowTotal{\phiA} \leq \varrho \delta^3$.
By Theorem \ref{theorem:singularities}, for $v \in \DFlow$ and $\delta$ small enough,
\begin{align}\label{proof:estimatesFlow}
\vabs{\lambda_h(v)+\phiA_{\lambda}(v)} < \pi, \quad
\vabs{\Lambda_h(v)+\phiA_{\Lambda}(v)} < C, \quad
\vabs{\phiA_{x}(v)} < C\delta^3, \quad
\vabs{\phiA_{y}(v)} < C\delta^3.
\end{align}
Then, by definition of $H_1$ in \eqref{def:hamiltonianScalingH1}, we have that
\begin{align*}
H_1(\Gamma_h(v)+\phiA(v);\delta) =& \,
H_1^{\Poi}\big(\lambda_h(v)+\phiA_{\lambda}(v),
1+\delta^2(\Lambda_h(v)+\phiA_{\Lambda}(v)),
\delta \phiA_x(v),\delta \phiA_y(v);\delta \big) \\
&- V(\lambda_h(v)+\phiA_{\lambda}(v)) +
\frac{1}{\delta^4} F_{\pend}(\delta^2\Lambda_h(v)+\delta^2\phiA_{\Lambda}(v)),
\end{align*}
where $H_1^{\Poi}$ is given in \eqref{def:hamiltonianPoincare1} and $F_{\pend}$ in \eqref{def:Fpend}.
(Recall that $F_{\pend}(s)=\mathcal{O}(s^3)$).
By \eqref{proof:estimatesFlow} we have that, for $\delta$ small enough,
$\vabs{(\delta^2(\Lambda_h(v)+\phiA_{\Lambda}(v)),
\delta \phiA_x(v), \delta \phiA_y(v))} \ll 1$
and $\vabs{\lambda_h(v)+\phiA_{\lambda}(v)}<\pi$. As a result, we can apply Corollary \ref{corollary:seriesH1Poi}.
Indeed, for $v \in \DFlow$,
\begin{equation}\label{proof:flowA}
\begin{aligned}
\vabs{\partial_{\lambda} H_1(\Gamma_h(v)+\phiA(v);\delta)}
&\leq C \delta^2,
& \quad
\vabs{\partial_{x} H_1(\Gamma_h(v)+\phiA(v);\delta)}
&\leq C \delta,
\\
\vabs{\partial_{\Lambda} H_1(\Gamma_h(v)+\phiA(v);\delta)}
&\leq C \delta^2,
& \quad
\vabs{\partial_{y} H_1(\Gamma_h(v)+\phiA(v);\delta)}
&\leq C \delta,
\end{aligned}
\end{equation}
and
\begin{equation}\label{proof:flowB}
\begin{split}
\vabs{\partial_{i j} H_1(\Gamma_h(v)+\phiA(v);\delta)} &\leq C \delta,
\qquad \text{for } i,j \in \claus{\lambda,\Lambda,x,y}.
\end{split}
\end{equation}
Then, it only remains to compute estimates for $T[\phiA_{\lambda}]$.
By \eqref{proof:functionTopertatorRflow} and
\eqref{proof:estimatesFlow} we have that, for $v \in \DFlow$,
\begin{equation}\label{proof:flowC}
\begin{split}
\vabs{T[\phiA](v)} \leq C \vabs{V''(\lambda_h(v))}
\vabs{\phiA_{\lambda}(v)} \leq C \delta^2, \\
\vabs{D T[\phiA](v)} \leq C \vabs{V'''(\lambda_h(v))}
\vabs{\phiA_{\lambda}(v)} \leq C \delta^2.
\end{split}
\end{equation}
Lastly, joining the estimates from \eqref{proof:flowA},
\eqref{proof:flowB} and \eqref{proof:flowC} we obtain the statement of the lemma.
\qed
\subsection{Estimates in the infinity domain}
\label{subsubsection:ProofComputationsRRRInf}
To prove Lemma~\ref{lemma:computationsRRRInf}, we need to obtain estimates for $\mathcal{R}^{\out}$ and its derivatives.
Let us recall that, by its definition in~\eqref{def:operatorRRROuter}, for $z=(w,x,y)$ we have
\begin{align}\label{eq:expressionRRRout}
\mathcal{R}^{\out}[z] = \paren{
\frac{f_1^{\out}(\cdot,z)}{1+g^{\out}(\cdot,z)},
\frac{f_2^{\out}(\cdot,z) - \frac{i x}{\delta^2} \, g^{\out}(\cdot,z)}{1+g^{\out}(\cdot,z)},
\frac{f_3^{\out}(\cdot,z) + \frac{i y}{\delta^2} \, g^{\out}(\cdot,z)}{1+g^{\out}(\cdot,z)}
}^T,
\end{align}
where $g^{\out} = \partial_w H_1^{\out}$ and $f^{\out}=\paren{-\partial_u H_1^{\out}, i\partial_y H_1^{\out}, -i\partial_x H_1^{\out} }^T$.
Therefore, we first need to obtain estimates for the first and second derivatives of $H_1^{\out}$, introduced in~\eqref{def:hamiltonianOuterSplit}, that is,
\begin{equation}\label{proof:H1outForM1}
H_1^{\out} = H \circ (\phi_{\equi} \circ \phi_{\out}) - \paren{w + \frac{xy}{\delta^2}},
\end{equation}
where $H=H_0+H_1$ with $H_0= H_{\pend}+ H_{\osc}$ (see \eqref{def:hamiltonianScaling}, \eqref{def:hamiltonianScalingH0}).
Since $(\lambda_h,\Lambda_h)$ is a solution of the Hamiltonian $H_{\pend}$ and belongs to the energy level $H_{\pend}=-\frac12$,
\begin{align*}
H_0 \circ \phi_{\out}
=
H_{\pend}\paren{\lambda_h(u),\Lambda_h(u)-\frac{w}{3 \Lambda_h(u)}}
+
H_{\osc}(x,y;\delta)
=
-\frac12
+ w
- \frac{w^2}{6\Lambda_h^2(u)}
+ \frac{x y}{\delta^2}.
\end{align*}
Therefore, by \eqref{proof:H1outForM1}, the Hamiltonian $H_1^{\out}$ can be expressed (up to a constant) as
\begin{align}\label{eq:expressionHamiltonianInfty}
H_1^{\out} =
M \circ \phi_{\out}
- \frac{w^2}{6\Lambda_h^2(u)} ,
\end{align}
where
\begin{align*}
M(\lambda,\Lambda,x,y;\delta) =
(H \circ \phi_{\equi})(\lambda,\Lambda,x,y;\delta) -
H_0(\lambda,\Lambda,x,y).
\end{align*}
In the following lemma we give properties of $M$.
\begin{lemma}\label{lemma:expressionHamiltonianInfty}
Fix constants $\varrho>0$ and $\lambda_0 \in (0,\pi)$.
Then, there exists $\delta_0>0$ such that,
for $\delta \in (0,\delta_0)$,
$\vabs{\lambda}<\lambda_0$,
$\vabs{\Lambda}<\varrho$
and $\vabs{(x,y)}<\varrho\delta$,
the function $M$ satisfies
\begin{align*}
\vabs{ \partial_{\lambda} M} &\leq
C\delta^2 \vabs{(\lambda,\Lambda)} + C\delta \vabs{(x,y)}, &
\vabs{ \partial_{x} M} &\leq
C\delta \vabs{(\lambda,\Lambda,x,y)}, \\
\vabs{ \partial_{\Lambda} M} &\leq
C\delta^2 \vabs{(\lambda,\Lambda)} + C\delta \vabs{(x,y)}, &
\vabs{ \partial_{y} M} &\leq
C\delta \vabs{(\lambda,\Lambda,x,y)},
\end{align*}
and
\begin{align*}
\vabs{ \partial^2_{\lambda} M},
\vabs{ \partial_{\lambda\Lambda} M},
\vabs{ \partial^2_{\Lambda} M}
&\leq C \delta^2,
\qquad
\vabs{\partial_{i j} M} \leq C \delta,
\quad \text{for } i,j \in \claus{\lambda,\Lambda,x,y}.
\end{align*}
\end{lemma}
\begin{proof}
Applying $\phi_{\equi}$ (see \eqref{def:changeEqui}) to the Hamiltonian $H=H_0 + H_1$, we have that
\begin{equation}\label{proof:M1expressionLtres}
\begin{split}
M &=
\paren{H_0 \circ \phi_{\equi} - H_0}
+ H_1 \circ \phi_{\equi}
\\
&= \delta( x \Ltresy + y\Ltresx)
+ 3\delta^2 \Lambda \LtresLa
+ \delta^4 \paren{-\frac{3}{2}\LtresLa^2 +\Ltresx \Ltresy}
+ H_1 \circ \phi_{\equi}.
\end{split}
\end{equation}
Then,
\begin{align}\label{proof:estimatesDerivativesM1B}
\vabs{\partial_{i j} M} \leq
\vabs{\partial_{i j} H_1(\lambda,\Lambda + \delta^2\LtresLa, x+ \delta^3 \Ltresx, y + \delta^3 \Ltresy;\delta)},
\quad \text{for } i,j \in \claus{\lambda,\Lambda,x,y}.
\end{align}
Since $\vabs{\Lambda}<\varrho$
and $\vabs{(x,y)}<\varrho\delta$,
then
$\vabs{\Lambda+\delta^2\LtresLa}<2\varrho$
and ${\vabs{(x+\delta^3 \Ltresx,y+\delta^3\Ltresy)}<2\varrho\delta}$,
for $\delta$ small.
By the definition of $H_1$
in \eqref{def:hamiltonianScalingH1} we have that
\begin{align*}
H_1(\lambda,\Lambda,x,y;\delta) &=
H_1^{\Poi}
\paren{\lambda,1+\delta^2\Lambda,\delta x, \delta y;\delta^4}
- V\paren{\lambda}
+ \frac{1}{\delta^4} F_{\pend}(\delta^2\Lambda),
\end{align*}
where $H_1^{\Poi}$ is given in \eqref{def:hamiltonianPoincare01} (see also \eqref{def:changeScaling}),
$V$ is given in \eqref{def:potentialV}
and $F_{\pend}$ is given in \eqref{def:Fpend} and satisfies $F_{\pend}(s)=\mathcal{O}(s^3)$.
Since $\vabs{(\delta^2\Lambda,\delta x, \delta y)} < 2\varrho \delta^2 \ll 1$, we apply Corollary \ref{corollary:seriesH1Poi} (recall that $\delta=\mu^{\frac14}$) and Cauchy estimates to
obtain
\begin{align}\label{proof:estimatesDerivativesM1C}
\vabs{ \partial^2_{\lambda} H_1},
\vabs{ \partial_{\lambda\Lambda} H_1},
\vabs{ \partial^2_{\Lambda} H_1}
&\leq C \delta^2,
\qquad
\vabs{\partial_{i j} H_1} \leq C \delta,
\quad \text{for } i,j \in \claus{\lambda,\Lambda,x,y}.
\end{align}
Then, \eqref{proof:estimatesDerivativesM1B} and
\eqref{proof:estimatesDerivativesM1C} give the estimates for the second derivatives of $M$.
For the first derivatives of $M$, let us take into account that, by Theorem~\ref{theorem:singularities}, $0$ is a critical point of both Hamiltonians $(H \circ \phi_{\equi})$ and $H_0$ and,
therefore, also of $M= (H \circ \phi_{\equi})- H_0$.
This fact and the estimates of the second derivatives, together with the mean value theorem, give the estimates for the first derivatives of $M$.
\end{proof}
\begin{proof}[End of the proof of Lemma~\ref{lemma:computationsRRRInf}]
Let us consider $\varphi=(\varphi_w,\varphi_x,\varphi_y)^T \in \XcalInftyTotal$ such that
$\normInftyTotal{\varphi} \leq \varrho \delta^3$.
We estimate the first and second derivatives of $H_1^{\out}$ evaluated at $(u,\varphi(u))$ (recall \eqref{eq:expressionRRRout}), given by
\begin{align}\label{proof:expressionH1sepInfty}
H_1^{\out}(u,\varphi(u);\delta) = M\paren{\lambda_h(u),\Lambda_h(u)-\frac{\varphi_w(u)}{3\Lambda_h(u)},
\varphi_x(u),\varphi_y(u);\delta}
- \frac{\varphi^2_w(u)}{6\Lambda_h^2(u)}.
\end{align}
\end{align}
First, let us define
\begin{align*}
{\varphi}_{\lambda}(u) = \lambda_h(u), \qquad
{\varphi}_{\Lambda}(u) = \Lambda_h(u)-\frac{\varphi_w(u)}{3\Lambda_h(u)}
\quad \text{and} \quad
\Phi = (\varphi_{\lambda},\varphi_{\Lambda},\varphi_x,\varphi_y).
\end{align*}
Since $\normInftyTotal{\varphi} \leq \varrho \delta^3$ and $\lambda_h, \Lambda_h \in \XcalInfty_{\vap}$ (see \eqref{eq:separatrixBanachSpace}),
\begin{equation}\label{proof:estimatesCoordinatesEqui}
\normInfty{\varphi_{w}}_{2\vap} \leq C \delta^2, \qquad
\normInfty{\varphi_{x}}_{\vap},
\normInfty{\varphi_{y}}_{\vap} \leq C \delta^3, \qquad
\normInfty{\varphi_{\lambda}}_{\vap},
\normInfty{\varphi_{\Lambda}}_{\vap} \leq C.
\end{equation}
Moreover, since, by Theorem \ref{theorem:singularities}, $\lambda_h(u)\neq\pi$ for $u \in \DuInfty$, we have that
\[
\vabs{\varphi_{\lambda}(u)} = \vabs{\lambda_h(u)} < \pi,
\quad
\vabs{\varphi_{\Lambda}(u)}
\leq C e^{-\vap \rhoInfty}
\leq C,
\quad
\vabs{(\varphi_{x}(u), \varphi_y(u))}
\leq C\delta^3 e^{-\vap \rhoInfty}
\leq C\delta^3
\]
and, therefore, we can apply Lemma~\ref{lemma:expressionHamiltonianInfty} to \eqref{proof:expressionH1sepInfty}.
In the following computations, we use Lemma \ref{lemma:sumNormsOutInf} repeatedly without mentioning it explicitly.
\begin{enumerate}
\item First, we consider $g^{\out}=\partial_w H_1^{\out}$.
By \eqref{proof:expressionH1sepInfty}, we have that
\begin{align*}
g^{\out}(u,\varphi(u))
&=-\frac{\partial_{\Lambda} M \circ \Phi(u)}{3\Lambda_h(u)}
-\frac{\varphi_{w}(u)}{3\Lambda_h^2(u)}.
\end{align*}
Notice that, by Theorem \ref{theorem:singularities}, $\vabs{\Lambda_h(u)}\geq C$ for $u \in \DuInfty$. Then, $\normInftySmall{\Lambda_h^{-1}}_{-\vap} \leq C$.
Therefore, by Lemma \ref{lemma:expressionHamiltonianInfty} and estimates~\eqref{proof:estimatesCoordinatesEqui}, we have that
\begin{equation}\label{proof:boundsInftyg}
\begin{split}
\normInfty{g^{\out}(\cdot,\varphi)}_0 &\leq
C \delta \boxClaus{\delta\normInfty{\varphi_{\lambda}}_{\vap} +
\delta\normInfty{\varphi_{\Lambda}}_{\vap} +
\normInfty{\varphi_{x}}_{\vap} +
\normInfty{\varphi_{y}}_{\vap}}
+ C \normInfty{\varphi_{w}}_{2\vap} \\
&\leq C \delta^2.
\end{split}
\end{equation}
To compute its derivative with respect to $w$, by \eqref{proof:expressionH1sepInfty}, we have that
\begin{align*}
\partial_w g^{\out} (u,\varphi(u))
&=
\frac{\partial^2_{\Lambda} M \circ \Phi(u)}{9\Lambda^2_h(u)}
-\frac{1}{3\Lambda^2_h(u)},
\end{align*}
and, by Lemma \ref{lemma:expressionHamiltonianInfty} and estimates~\eqref{proof:estimatesCoordinatesEqui},
$\normInfty{\partial_w g^{\out}(\cdot,\varphi)}_{-2\vap} \leq C$.
Following a similar procedure, we obtain
$\normInfty{\partial_x g^{\out}(\cdot,\varphi)}_{-\vap} \leq C \delta$
and $\normInfty{\partial_y g^{\out}(\cdot,\varphi)}_{-\vap} \leq C \delta$.
\item Now, we obtain estimates for $f_1^{\out}=-\partial_u H_1^{\out}$.
By \eqref{proof:expressionH1sepInfty}, we have that
\begin{align*}
f_1^{\out}(u,\varphi(u)) =&
- \dot{\lambda}_h(u) \partial_{\lambda}{M} \circ \Phi(u)
-\frac{\Lad_h(u)}{3\Lambda^3_h(u)} \varphi^2_{w}(u) \\
&- \paren{\Lad_h(u) + \frac{\Lad_h(u)}{3 \Lambda_h^2(u)} \varphi_{w}(u)}
\partial_{\Lambda}{M} \circ \Phi(u).
\end{align*}
Then, since $\dot{\lambda}_h, \Lad_h \in \XcalInfty_{\vap}$, by Lemma \ref{lemma:expressionHamiltonianInfty} and estimates~\eqref{proof:estimatesCoordinatesEqui}, we have that
$\normInfty{f_1^{\out}(\cdot,\varphi)}_{2\vap} \leq C\delta^2$.
To compute its derivative with respect to $x$, by \eqref{proof:expressionH1sepInfty},
\begin{align*}
\partial_x f_1^{\out}(u,\varphi(u)) =&
- \dot{\lambda}_h(u) \partial_{x \lambda} M \circ \Phi(u)
- \paren{\Lad_h(u) + \frac{\Lad_h(u)}{3 \Lambda_h^2(u)} \varphi_{w}(u)}
\partial_{x \Lambda} M \circ \Phi(u)
\end{align*}
and, therefore,
$\normInfty{\partial_x f_1^{\out}(\cdot,\varphi)}_{\vap} \leq C\delta$.
Similarly one can obtain
$\normInfty{\partial_w f_1^{\out}(\cdot,\varphi)}_{0} \leq C\delta^2$ and
$\normInfty{\partial_y f_1^{\out}(\cdot,\varphi)}_{\vap} \leq C\delta$.
\item Analogously to the previous estimates,
we can obtain bounds for $f^{\out}_2= i \partial_y H_1^{\out}$ and $f^{\out}_3=-i \partial_x H_1^{\out}$.
Then, for $j=2,3$, it can be seen that
$\normInftySmall{f_j^{\out}(\cdot,\varphi)}_{\vap} \leq C\delta$,
and differentiating we obtain
$\normInftySmall{\partial_w f_j^{\out}(\cdot,\varphi)}_{-\vap} \leq C\delta$,
$\normInftySmall{\partial_x f_j^{\out}(\cdot,\varphi)}_{0} \leq C\delta$ and
$\normInftySmall{\partial_y f_j^{\out}(\cdot,\varphi)}_{0} \leq C\delta$.
\end{enumerate}
Then, by the definition of $\mathcal{R}^{\out}$ in~\eqref{eq:expressionRRRout} and the just obtained estimates, we complete the proof of the lemma.
\end{proof}
\subsection{Estimates in the outer domain}
\label{subsubsection:proofComputationsRRROut}
To obtain estimates of $\mathcal{R}^{\out}$, we write
$H_1^{\out}$ in~\eqref{def:hamiltonianOuterSplit} (up to a constant) as
\begin{equation*}
H_1^{\out} =
H_1 \circ \phi_{\equi} \circ \phi_{\out}
- \frac{w^2}{6\Lambda_h^2(u)}
+
\delta( x \Ltresy + y\Ltresx)
+ 3\delta^2 \LtresLa \paren{\Lambda_h(u)-\frac{w}{3\Lambda_h(u)}},
\end{equation*}
(see \eqref{eq:expressionHamiltonianInfty} and \eqref{proof:M1expressionLtres}).
Then, by the definition of $H_1$ in \eqref{def:hamiltonianScalingH1}, we obtain
\begin{align*}
H_1^{\out} =& \,
(H^{\Poi}_1-V)
\circ \phi_{\sca} \circ \phi_{\equi} \circ \phi_{\out}
+ \frac{1}{\delta^4}
F_{\pend}\paren{\delta^2\Lambda_h(u) -
\frac{\delta^2 w}{3 \Lambda_h(u)} +
\delta^4\LtresLa} \\
&- \frac{w^2}{6\Lambda_h^2(u)}
+
\delta( x \Ltresy + y\Ltresx)
+ 3\delta^2 \LtresLa \paren{\Lambda_h(u)-\frac{w}{3\Lambda_h(u)}},
\end{align*}
where $H_1^{\Poi}$ is given in \eqref{def:hamiltonianPoincare1},
the potential $V$ in \eqref{def:potentialV} and $F_{\pend}$ in \eqref{def:Fpend}.
The changes of coordinates $\phi_{\sca}$, $\phi_{\equi}$ and $\phi_{\out}$ are given in \eqref{def:changeScaling}, \eqref{def:changeEqui} and \eqref{def:changeOuter}, respectively.
Considering $z=(w,x,y)$, we denote the composition of changes of coordinates as
\begin{equation}\label{def:changeTotalOuter}
(\lambda, L, \eta, \xi) =
\Theta(u,z) = (\phi_{\sca} \circ \phi_{\equi} \circ \phi_{\out})(u,z).
\end{equation}
Then, since $\mu=\delta^4$, the Hamiltonian $H_1^{\out}$ can be split (up to a constant) as
\begin{equation}\label{proof:H1outExpressionOuter}
H_1^{\out} = M_P + M_{S} + M_R,
\end{equation}
where
\begin{align}
M_P(u,z;\delta) =&
-\paren{\mathcal{P}[\delta^4-1]
- \frac{1}{\sqrt{2+2\cos \lambda}}
}\circ \Theta(u,z),
\label{def:expressionMJOuter}
\\
M_{S}(u,z;\delta) =& \,
\paren{
\frac{1}{\delta^4} \mathcal{P}[0] -
\frac{1-\delta^4}{\delta^4} \mathcal{P}[\delta^4]
- 1 + \cos \lambda} \circ\Theta(u,z),
\label{def:expressionMSOuter} \\
\begin{split}\label{def:expressionMROuter}
M_R(u,z;\delta) =&
- \frac{w^2}{6\Lambda_h^2(u)}
%
+ \delta^2 \LtresLa \paren{3 \Lambda_h(u)
- \frac{w}{\Lambda_h(u)} }
%
+ \delta (x \Ltresy + y \Ltresx ) \\
%
&+ \frac{1}{\delta^4}
F_{\pend}\paren{\delta^2\Lambda_h(u) -
\frac{\delta^2 w}{3 \Lambda_h(u)} +
\delta^4\LtresLa},
\end{split}
\end{align}
and $\mathcal{P}$ is the function given in~\eqref{def:functionD}.
To obtain estimates for the derivatives of $M_P$, $M_S$ and $M_R$, we first analyze the change of coordinates $\Theta$ in~\eqref{def:changeTotalOuter}. It can be expressed as
\begin{equation}\label{proof:ThtExpression}
\Theta(u,z)=
\Big(\pi + \Theta_{\lambda}(u), 1 + \Theta_{L}(u,w),
\Theta_{\eta}(x),\Theta_{\xi}(y) \Big),
\end{equation}
where
\begin{equation*}
\begin{aligned}
\Theta_{\lambda}(u) &= \lambda_h(u)-\pi, &
\Theta_{\eta}(x) &= \delta x + \delta^4 \Ltresx(\delta), \\
\Theta_L(u,w) &= \delta^2 \Lambda_h(u)
- \frac{\delta^2 w}{3 \Lambda_h(u)} + \delta^4 \LtresLa(\delta), \quad&
\Theta_{\xi}(y) &= \delta y + \delta^4 \Ltresy(\delta).
\end{aligned}
\end{equation*}
The next lemma, which is a direct consequence of Theorem \ref{theorem:singularities}, gives estimates for this change of coordinates.
\begin{lemma}\label{lemma:changeTotalOuter}
Fix $\varrho>0$ and $\delta>0$ small enough. Then, for
$\phiA \in \ballOuter \subset \XcalOutTotal$,
\begin{align*}
\normOut{\Theta_{\lambda}}_{0,-\frac{2}{3}}&\leq C, &
\normOut{\Theta_{L}(\cdot,\phiA)}_{0,\frac{1}{3}}
&\leq C \delta^2, &
\normOut{\Theta_{\eta}(\cdot,\phiA)}_{0,\frac{4}{3}}
&\leq C \delta^4, \\
\normOut{\Theta_{\lambda}^{-1}}_{0,\frac{2}{3}}&\leq C, &
\normOut{1+\Theta_{L}(\cdot,\phiA)}_{0,0}
&\leq C, &
\normOut{\Theta_{\xi}(\cdot,\phiA)}_{0,\frac{4}{3}}
&\leq C \delta^4.
\end{align*}
Moreover, its derivatives satisfy
\begin{align*}
\normOut{\partial_u \Theta_{\lambda}}_{0,\frac{1}{3}} &\leq C,
&
\normOut{\partial_u \Theta_{L}(\cdot,\phiA)}_{0,\frac{4}{3}} &\leq C \delta^2,
&
\normOut{\partial_w \Theta_{L}(\cdot,\phiA)}_{0,-\frac{1}{3}} &\leq C \delta^2,
\\
\normOut{\partial_{uw} \Theta_{L}(\cdot,\phiA)}_{0,\frac{2}{3}} &\leq C\delta^2,
&
\partial_x \Theta_{\eta},
\partial_y \Theta_{\xi} &\equiv \delta,
&
\partial^2_{w} \Theta_L,
\partial^2_{x} \Theta_{\eta},
\partial^2_y \Theta_{\xi} &\equiv 0.
\end{align*}
\end{lemma}
In the next lemma we obtain estimates for the derivatives of $M_P$.
\begin{lemma}\label{lemma:boundsMPout}
Fix $\varrho>0$, $\delta>0$ small enough and
$\kappa>0$ big enough. Then, for
$\phiA \in \ballOuter$ and $*=x,y$,
\begin{equation*}
\begin{aligned}
\normOut{\partial_u M_P(\cdot,\phiA)
}_{1,1}
&\leq C \delta^2, &
\normOut{\partial_w M_P(\cdot,\phiA)
}_{1,-\frac{2}{3}}
&\leq C \delta^{2}, &
\normOut{\partial_* M_P(\cdot,\phiA)
}_{0,\frac{4}{3}}
&\leq C \delta, \\
\normOut{\partial_{u w} M_P(\cdot,\phiA)
}_{1,\frac{1}{3}}
&\leq C \delta^{2}, &
\normOut{\partial_{u *} M_P(\cdot,\phiA)
}_{0,\frac{7}{3}}
&\leq C \delta, &
\normOut{\partial^2_{w} M_P(\cdot,\phiA)
}_{0,\frac{4}{3}}
&\leq C \delta^4, \\
\normOut{\partial_{w *} M_P(\cdot,\phiA)
}_{0,\frac{5}{3}}
&\leq C \delta^3, &
\normOut{\partial^2_{*} M_P(\cdot,\phiA)
}_{0,2}
&\leq C \delta^2, &
\normOut{\partial_{xy} M_P(\cdot,\phiA)
}_{0,2}
&\leq C \delta^2.
\end{aligned}
\end{equation*}
\end{lemma}
\begin{proof}
We consider $\phiA \in \ballOuter \subset \XcalOutTotal$
and
we estimate the derivatives of $\mathcal{P}[\delta^4-1] \circ \Theta(u,\phiA(u))$. We
first obtain bounds for $A[\delta^4-1]$ and $B[\delta^4-1]$ (see \eqref{def:DA} and \eqref{def:DB}).
To simplify the notation, we define
\begin{align}\label{proof:defABP}
\widetilde{A}(u) = A[\delta^4-1](\pi +\Theta_{\lambda}(u)), \qquad
\widetilde{B}(u,z) = B[\delta^4-1] \circ \Theta(u,z).
\end{align}
In the following computations we use the results in Lemma~\ref{lemma:sumNormsOuter} extensively, without explicit mention.
\begin{enumerate}
\item Estimates of $\wtA(u)$:
Defining $\lah=\lambda-\pi$, by \eqref{def:DA},
\begin{align*}
A[\delta^4-1](\lah+\pi) = 2(1-\cos\lah) -2\delta^4(1-\cos\lah) + \delta^8.
\end{align*}
Then, applying Lemma~\ref{lemma:changeTotalOuter},
\begin{equation*}\label{proof:estimateTrigoOuter}
\begin{aligned}
\normOut{\sin \Theta_{\lambda}}_{0,-\frac{2}{3}}
\leq
C \normOut{\Theta_{\lambda}}_{0,-\frac{2}{3}}
\leq C, \qquad
\normOut{(1-\cos \Theta_{\lambda})^{-1}
}_{0,\frac{4}{3}}
\leq
C \normOut{\Theta_{\lambda}^{-2}
}_{0,\frac{4}{3}}
\leq C
\end{aligned}
\end{equation*}
and, as a result,
\begin{equation}\label{proof:DA}
\begin{split}
\normOutSmall{\wtA^{-1}}_{0,\frac{4}{3}}
&\leq
C \normOut{(1-\cos \Theta_{\lambda})^{-1}
}_{0,\frac{4}{3}}
\leq C,
\\
\normOutSmall{\partial_u \wtA}_{0,-\frac{1}{3}}
&\leq C
\normOut{\sin \Theta_{\lambda}}_{0,-\frac23}
\normOut{\partial_u \Theta_{\lambda}}_{0,\frac13}
\leq C.
\end{split}
\end{equation}
\item Estimates of $\wtB(u,\phiA(u))$:
Considering the auxiliary variables
$(\lah,\Lh)=(\lambda-\pi,L-1)$,
we have that
\begin{equation}\label{proof:D0expansion}
\begin{split}
B[\delta^4-1](\pi+\lah,1+\Lh,\eta,\xi)
= \,&
4 \Lh (1-\cos \lah + \delta^4 \cos \lah) \\
&+
\frac{\eta}{\sqrt{2}}
(-3 +2 e^{-i\lah}+e^{-2i\lah} +\delta^4(3+ e^{-2i\lah})) \\
&+
\frac{\xi}{\sqrt{2}}
(-3 +2 e^{i\lah}+e^{2i\lah} +\delta^4(3+ e^{2i\lah})) \\
&+ R[\delta^4-1](\pi+\lah,1+\Lh,\eta,\xi).
\end{split}
\end{equation}
Then, by the estimates in \eqref{eq:boundD2mes} and Lemma~\ref{lemma:changeTotalOuter},
\begin{equation}\label{proof:DB}
\begin{aligned}
\normOutSmall{\wtB(\cdot,\phiA)}_{1,-2}
\leq \,&
C \normOut{\Theta_{L}(\cdot,\phiA)
\Theta_{\lambda}^2}_{0,-1}
+ \frac{C}{\delta^2}
\normOut{\Theta_{\eta}(\cdot,\phiA) \Theta_{\lambda}}_{0,\frac23} \\
&+ \frac{C}{\delta^2}
\normOut{\Theta_{\xi}(\cdot,\phiA) \Theta_{\lambda}}_{0,\frac23}
+
\frac{C}{\delta^2}
\normOut{(\Theta_{L},\Theta_{\eta},\Theta_{\xi})^2}_{0,\frac23} \leq C \delta^2.
\end{aligned}
\end{equation}
Now, we look for estimates of the first derivatives of $\wtB(u,\phiA(u))$.
By its definition in \eqref{proof:defABP} and the expression of $\Theta$ in \eqref{proof:ThtExpression}, we have that
\begin{equation}\label{proof:boundsBtildePoincare}
\begin{aligned}
\partial_u \wtB =& \,
\boxClaus{ \partial_{\lambda} B[\delta^4-1] \circ \Theta}
\partial_u \Theta_{\lambda}
+
\boxClaus{\partial_{L} B[\delta^4-1] \circ \Theta}
\partial_u \Theta_{L}, \\
\partial_w \wtB =& \,
\boxClaus{
\partial_{L} B[\delta^4-1] \circ \Theta
}
\partial_w \Theta_{L}, \\
\partial_x \wtB =& \,
\boxClaus{
\partial_{\eta} B[\delta^4-1] \circ \Theta
}
\partial_{x} \Theta_{\eta}, \qquad
\partial_y \wtB = \,
\boxClaus{
\partial_{\xi} B[\delta^4-1] \circ \Theta
}
\partial_y \Theta_{\xi}.
\end{aligned}
\end{equation}
Differentiating \eqref{proof:D0expansion} and applying Lemma~\ref{lemma:changeTotalOuter},
\begin{align*}
\normOut{\partial_{\lambda} B[\delta^4-1] \circ \Theta(\cdot,\phiA)}_{1,-\frac43}
\leq& \,
C \normOut{\Theta_{L}(\cdot,\phiA) \Theta_{\lambda}}_{0,-\frac13} +
\frac{C}{\delta^2}
\normOut{\Theta_{\eta}(\cdot,\phiA)}_{0,\frac43}
\\
&+ \frac{C}{\delta^2}
\normOut{\Theta_{\xi}(\cdot,\phiA)}_{0,\frac43}
+ C\delta^2
\leq C \delta^2,
\\
\normOut{\partial_{L} B[\delta^4-1] \circ \Theta(\cdot,\phiA)}_{1,-\frac73}
\leq&
C \normOut{\Theta_{\lambda}^2}_{0,-\frac43} +
\frac{C}{\delta^2} \normOut{\Theta_{L}(\cdot,\phiA)}_{0,\frac13}
+ \frac{C}{\kappa}
\leq C,
\\
\normOut{\partial_{*} B[\delta^4-1] \circ \Theta(\cdot,\phiA)}_{0,-\frac23}
\leq&
C \normOut{\Theta_{\lambda}}_{0,-\frac23}
+
\frac{C}{\kappa}
\leq C, \quad \text{for } *=\eta,\xi.
\end{align*}
Then, using also \eqref{proof:boundsBtildePoincare} and taking $*=x,y$,
\begin{align}\label{proof:DBfirst}
\normOutSmall{\partial_u \wtB(\cdot,\phiA)
}_{1,-1}
&\leq C \delta^2, &
\normOutSmall{\partial_w \wtB(\cdot,\phiA)
}_{1,-\frac83}
&\leq C \delta^2, &
\normOutSmall{\partial_* \wtB(\cdot,\phiA)
}_{0,-\frac23}
&\leq C \delta.
\end{align}
Analogously, for the second derivatives, one can obtain the estimates
\begin{equation}\label{proof:DBsecond}
\begin{aligned}
\normOutSmall{\partial_{u w} \wtB(\cdot,\phiA) }_{1,-\frac53}
&\leq C \delta^2, & \hspace{-1mm}
\normOutSmall{\partial^2_{w} \wtB(\cdot,\phiA)
}_{0,\frac23}
&\leq C \delta^4, & \hspace{-1mm}
\normOutSmall{\partial_{u *} \wtB(\cdot,\phiA)
}_{0,\frac13}
&\leq C \delta, \\
\normOutSmall{\partial_{w *} \wtB(\cdot,\phiA)
}_{0,-\frac13}
&\leq C \delta^3,& \hspace{-1mm}
\normOutSmall{\partial^2_{*} \wtB(\cdot,\phiA)
}_{0,0}
&\leq C \delta^2, & \hspace{-1mm}
\normOutSmall{\partial_{xy} \wtB(\cdot,\phiA)
}_{0,0}
&\leq C \delta^2.
\end{aligned}
\end{equation}
\end{enumerate}
Now, we are ready to obtain estimates for $M_P(u,\phiA(u))$ by using the series expansion \eqref{def:SerieP}.
First, we check that it is convergent.
Indeed, by \eqref{proof:DA} and \eqref{proof:DB}, for $u \in \DuOut$ and taking $\kappa$ big enough we have that
\begin{align*}
\vabs{\frac{\wtB(u,\phiA(u))}{\wtA(u)}}
&\leq
\normOutSmall{\wtB(\cdot,\phiA)}_{0,-\frac{4}{3}}
\normOutSmall{\wtA^{-1}}_{0,\frac{4}{3}}
\leq
\frac{C}{\kappa^2\delta^2}
\normOutSmall{\wtB(\cdot,\phiA)}_{1,-2}
\leq \frac{C}{\kappa^2} \ll 1.
\end{align*}
Therefore, by \eqref{def:functionD2} and \eqref{def:expressionMJOuter},
\begin{align}\label{proof:expressionMP}
\vabs{M_P(u,\phiA(u))} \leq
\vabs{\frac1{\textstyle\sqrt{A[\delta^4-1](\lambda_h(u))}} - \frac1{\sqrt{2+2\cos \lambda_h(u)}} }
+
C \frac{|\wtB(u,\phiA(u))|}{|\wtA(u)|^{\frac32}}.
\end{align}
Then, to estimate $M_P$ and its derivatives, it only remains to analyze the $u$-derivative of its first term.
Indeed, by the definition of $A[\delta^4-1]$ in \eqref{def:DA},
\begin{align}\label{proof:DApotential}
\normOut{\partial_u \paren{\frac1{\textstyle\sqrt{A[\delta^4-1](\lambda_h(u))}} - \frac1{\sqrt{2+2\cos \lambda_h(u)}} } }_{0,\frac43} \leq C \delta^4.
\end{align}
Therefore, applying estimates \eqref{proof:DA}, \eqref{proof:DB}, \eqref{proof:DBfirst},
\eqref{proof:DBsecond} and \eqref{proof:DApotential}, to the derivatives of $M_P$ and using \eqref{proof:expressionMP}, we obtain the statement of the lemma.
\end{proof}
Analogously to Lemma \ref{lemma:boundsMPout}, we obtain estimates for the first and second derivatives of $M_S$ and $M_R$ (see \eqref{def:expressionMSOuter} and \eqref{def:expressionMROuter}).
\begin{lemma}\label{lemma:boundsMSout}
Fix $\varrho>0$,
$\delta>0$ small enough
and $\kappa>0$ big enough. Then, for
$\phiA \in \ballOuter$ and $*=x,y$, we have
\begin{equation*}
\begin{aligned}
\normOut{\partial_u M_S(\cdot,\phiA)
}_{0,\frac{4}{3}}
&\leq C \delta^2, &
\normOut{\partial_w M_S(\cdot,\phiA)
}_{0,-\frac{1}{3}}
&\leq C \delta^{2}, &
\normOut{\partial_* M_S(\cdot,\phiA)
}_{0,0}
&\leq C \delta, \\
\normOut{\partial_{u w} M_S(\cdot,\phiA)
}_{0,\frac{2}{3}}
&\leq C \delta^{2}, &
\normOut{\partial_{u *} M_S(\cdot,\phiA)
}_{0,\frac{1}{3}}
&\leq C \delta, &
\normOut{\partial^2_{w} M_S(\cdot,\phiA)
}_{0,-\frac{2}{3}}
&\leq C \delta^4, \\
\normOut{\partial_{w *} M_S(\cdot,\phiA)
}_{0,-\frac{1}{3}}
&\leq C \delta^3, &
\normOut{\partial^2_{*} M_S(\cdot,\phiA)
}_{0,0}
&\leq C \delta^2, &
\normOut{\partial_{xy} M_S(\cdot,\phiA)
}_{0,0}
&\leq C \delta^2.
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\normOut{\partial_u M_R(\cdot,\phiA)
}_{1,1}
&\leq C \delta^2, &
\normOut{\partial_w M_R(\cdot,\phiA)
}_{1,-\frac{2}{3}}
&\leq C \delta^{2}, &
\normOut{\partial_* M_R(\cdot,\phiA)
}_{0,0}
&\leq C \delta, \\
\normOut{\partial_{u w} M_R(\cdot,\phiA)
}_{1,\frac{1}{3}}
&\leq C \delta^{2}, &
\partial_{u *} M_R(\cdot,\phiA)
&\equiv 0, &
\normOut{\partial^2_{w} M_R(\cdot,\phiA)
}_{0,-\frac{2}{3}}
&\leq C, \\
\partial_{w *} M_R(\cdot,\phiA)
&\equiv 0, &
\partial^2_{*} M_R(\cdot,\phiA)
&\equiv 0, &
\partial_{xy} M_R(\cdot,\phiA)
&\equiv 0.
\end{aligned}
\end{equation*}
\end{lemma}
\begin{proof}[End of the proof of Lemma~\ref{lemma:computationsRRROut}]
We start by estimating the first and second derivatives of $H_1^{\out}(u,\phiA(u);\delta)$ in suitable norms.
Recall that by \eqref{proof:H1outExpressionOuter}, $H_1^{\out}=M_P+M_S+M_R$.
Therefore, taking $\phiA \in \ballOuter \subset \XcalOutTotal$ and applying Lemmas \ref{lemma:boundsMPout} and \ref{lemma:boundsMSout}:
\begin{enumerate}
\item For $g^{\out}=\partial_w H_1^{\out}$ one has
\[
\begin{split}
\normOut{g^{\out}(\cdot,\phiA)}_{1,-\frac23} \leq&
\normOut{\partial_w M_P(\cdot,\phiA)}_{1,-\frac23}
+ C
\normOut{\partial_w M_S(\cdot,\phiA)}_{0,-\frac13} +
\normOut{\partial_w M_R(\cdot,\phiA)}_{1,-\frac23} \\ \leq& C \delta^2
\end{split}
\]
and, in particular, for $\kappa$ big enough
\begin{equation}\label{def:Fitag}
\normOut{g^{\out}(\cdot,\phiA)}_{0,0} \leq C \kappa^{-2} \ll 1.
\end{equation}
Analogously,
$\normOut{\partial_w g^{\out}(\cdot,\phiA)}_{0,-\frac23}
\leq C$ and
$\normOut{\partial_* g^{\out}(\cdot,\phiA)}_{0,\frac53}
\leq C \delta^3$,
for $*=x,y$.
\item For $f_1^{\out}=-\partial_u H_1^{\out}$, one has that
\[
\normOut{f^{\out}_1(\cdot,\phiA)}_{1,1}
\leq
\normOut{\partial_u M_P(\cdot,\phiA)}_{1,1}
+
C \normOut{\partial_u M_S(\cdot,\phiA)}_{0,\frac43}+
\normOut{\partial_u M_R(\cdot,\phiA)}_{1,1}
\leq
C \delta^2,
\]
$\normOut{\partial_w f_1^{\out}(\cdot,\phiA)}_{1,\frac13}
\leq
C \delta^2$ and
$\normOut{\partial_* f_1^{\out}(\cdot,\phiA)}_{0,\frac73}
\leq
C \delta,
\quad $
for $*=x,y$.
\item For $f_2^{\out} = i \partial_y H_1^{\out}$ and $f_3^{\out} = -i \partial_x H_1^{\out}$,
we can obtain the estimates
\begin{equation}\label{proof:estimatef2f3Out}
\begin{split}
\normOut{f_2^{\out}(\cdot,\phiA)}_{0,\frac43}
\leq&
\normOut{\partial_y M_P(\cdot,\phiA)}_{0,\frac43}
+ C
\normOut{\partial_y M_S(\cdot,\phiA)
+
\partial_y M_R(\cdot,\phiA)}_{0,0}
\leq
C \delta, \\
\normOut{f_3^{\out}(\cdot,\phiA)}_{0,\frac43}
\leq&
\normOut{\partial_x M_P(\cdot,\phiA)}_{0,\frac43}
+ C
\normOut{\partial_x M_S(\cdot,\phiA)
+
\partial_x M_R(\cdot,\phiA)}_{0,0}
\leq
C \delta.
\end{split}
\end{equation}
Analogously, we have that
$\normOutSmall{\partial_w f_j^{\out}(\cdot,\phiA)}_{0,\frac53}
\leq
C \delta^3$ and
$\normOutSmall{\partial_* f_j^{\out}(\cdot,\phiA)}_{0,2}
\leq
C \delta^2$,
for $j=2,3$ and $*=x,y$.
\end{enumerate}
Joining these estimates and taking $\kappa$ big enough, we complete the proof of the lemma.
\end{proof}
\begin{remark}\label{rmk:R}
Note that $\DOuterTilde \subset \DuOut$ and $\YcalOuter \subset
\XcalOut_{0,0}$ (see \eqref{def:Yout} and \eqref{def:XcalOut}). Then, the proof
of Lemma~\ref{lemma:computationsRRRtransitionOuter} is a direct consequence of
the estimates for $g^{\out}$ and its derivatives in Item 1 above and the
fact that, by \eqref{eq:systemEDOsOuter} and \eqref{eq:InvU},
\[
R[\uOut](v)= \partial_w H_1^{\out}
\paren{v + \uOut(v), \zuOut(v+\uOut(v))}= g^{\out}\paren{v + \uOut(v),
\zuOut(v+\uOut(v))}.
\]
\end{remark}
\subsection{The distance between the invariant manifolds of \texorpdfstring{$L_3$}{L3}}
The one-dimensional unstable and stable invariant manifolds of $L_3$ have two branches each (see Figure~\ref{fig:L3Outer}).
One pair, which we denote by $W^{\unstable,+}(\mu)$ and $W^{\stable,+}(\mu)$, circumvents $L_5$; the other, $W^{\unstable,-}(\mu)$ and $W^{\stable,-}(\mu)$, circumvents $L_4$.
Since the Hamiltonian system associated to the Hamiltonian $\HInicial$ is reversible with respect to the involution
\begin{equation*}
\Phi(q,p;t)=(q_1,-q_2,-p_1,p_2;-t),
\end{equation*}
the $+$ branches of the invariant manifolds are symmetric with respect to the $-$ branches. Thus, we restrict our analysis to the positive branches.
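The invariance of the Hamiltonian under the spatial part of this involution can be checked directly. The snippet below is a minimal sketch: since the explicit expression of $\HInicial$ is given elsewhere in the paper, we use the standard rotating-frame form of the RPC$3$BP Hamiltonian with primaries on the $q_1$-axis at $(\mu,0)$ and $(\mu-1,0)$ (an assumption of this snippet, not taken from the text), and verify numerically that $H$ is invariant under $(q_1,q_2,p_1,p_2)\mapsto(q_1,-q_2,-p_1,p_2)$.

```python
import math

# Reversibility check for the RPC3BP in rotating coordinates.
# Assumption of this snippet: the standard rotating-frame Hamiltonian
# with primaries on the q1-axis at (mu, 0) and (mu - 1, 0); the paper's
# Hamiltonian \HInicial is defined elsewhere but has the same symmetry.
mu = 1e-3  # illustrative mass ratio

def H(q1, q2, p1, p2):
    r1 = math.hypot(q1 - mu, q2)        # distance to the big primary
    r2 = math.hypot(q1 - mu + 1, q2)    # distance to the small primary
    return 0.5 * (p1**2 + p2**2) + q2 * p1 - q1 * p2 - (1 - mu) / r1 - mu / r2

# Spatial part of the involution Phi(q, p; t) = (q1, -q2, -p1, p2; -t).
def S(q1, q2, p1, p2):
    return (q1, -q2, -p1, p2)

z = (0.4, 0.8, -0.3, 0.5)  # arbitrary sample point
print(abs(H(*z) - H(*S(*z))))  # 0.0: H is invariant, so Phi reverses the flow
```

Since $H\circ S = H$ and $S$ is anti-symplectic, composing with $t\mapsto -t$ maps orbits to orbits, which is the reversibility used to relate the $+$ and $-$ branches.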
To measure the distance between $W^{\unstable/\stable,+}(\mu)$, we consider the symplectic polar change of coordinates
\begin{align}\label{def:changePolars}
q=
r \begin{pmatrix}
\cos \theta \\
\sin \theta
\end{pmatrix},
\qquad
p =
R
\begin{pmatrix}
\cos \theta \\
\sin \theta
\end{pmatrix}
- \frac{G}{r} \begin{pmatrix}
\sin \theta \\
-\cos \theta
\end{pmatrix},
\end{align}
where
$R$ is the radial linear momentum and $G$ is the angular momentum.
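That this change of coordinates is indeed symplectic, i.e. $dq_1\wedge dp_1 + dq_2\wedge dp_2 = dr\wedge dR + d\theta\wedge dG$, can be verified numerically. The following sketch builds the Jacobian of \eqref{def:changePolars} by centered finite differences at a sample point and checks $J^{T}\Omega J=\Omega$; the sample point, step size, and tolerance are illustrative choices of this snippet.

```python
import math

# Check that the polar change (r, theta, R, G) -> (q, p) of
# (def:changePolars) is symplectic: J^T * Omega * J = Omega, where J is
# its Jacobian (computed here by centered finite differences).
def change(z):
    r, th, R, G = z
    q1, q2 = r * math.cos(th), r * math.sin(th)
    p1 = R * math.cos(th) - (G / r) * math.sin(th)
    p2 = R * math.sin(th) + (G / r) * math.cos(th)
    return [q1, q2, p1, p2]

def jacobian(f, z, h=1e-6):
    n = len(z)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        zp, zm = list(z), list(z)
        zp[j] += h
        zm[j] -= h
        fp, fm = f(zp), f(zm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Standard symplectic matrix for the ordering (q1, q2, p1, p2).
Omega = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]

z0 = [1.3, 0.7, 0.2, 0.9]  # illustrative point with r > 1
J = jacobian(change, z0)
Jt = [[J[j][i] for j in range(4)] for i in range(4)]
JtOJ = matmul(Jt, matmul(Omega, J))
err = max(abs(JtOJ[i][j] - Omega[i][j]) for i in range(4) for j in range(4))
print(err < 1e-8)  # True: dq^dp = dr^dR + dtheta^dG
```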
We consider the $3$-dimensional section
\[
\Sigma = \claus{(r,\theta,R,G) \in \mathbb{R} \times \mathbb{T} \times \mathbb{R}^2
\;:\; r>1, \, \theta=\frac{\pi}2 \,}
\]
and denote by $(r^{\unstable}_*,\frac{\pi}2, R^{\unstable}_*,G^{\unstable}_*)$ and $(r^{\stable}_*,\frac{\pi}2,R^{\stable}_*,G^{\stable}_*)$ the first crossings of the invariant manifolds with this section.
The next theorem measures the distance between these points for $0< \mu\ll 1$.
\begin{theorem}\label{theorem:mainTheorem}
There exists $\mu_0>0$ such that, for $\mu \in (0,\mu_0)$,
\[
\norm{(r^{\unstable}_*,R^{\unstable}_*,G^{\unstable}_*)-(r^{\stable}_*,R^{\stable}_*,G^{\stable}_*)}
=
\sqrt[3]{4} \,
\mu^{\frac13} e^{-\frac{A}{\sqrt{\mu}}}
\boxClaus{\vabs{\CInn}+\mathcal{O}\paren{\frac1{\vabs{\log \mu}}}},
\]
where:
\begin{itemize}
\item The constant $A>0$ is the real-valued integral
\begin{equation}\label{def:integralA}
A= \int_0^{\frac{\sqrt{2}-1}{2}} \frac{2}{1-x}\sqrt{\frac{x}{3(x+1)(1-4x-4x^2)}} \, dx\approx 0.177744.
\end{equation}
\item The constant $\CInn \in \mathbb{C}$ is the Stokes constant associated to the inner equation analyzed in \cite{articleInner} and in Theorem \ref{theorem:innerComputations} below.
\end{itemize}
\end{theorem}
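The numerical value quoted in \eqref{def:integralA} can be reproduced with elementary quadrature. Since the upper limit $b=\frac{\sqrt2-1}{2}$ is a root of $1-4x-4x^2$, the integrand has an inverse-square-root singularity there; the substitution $x=b\sin^2\theta$ removes the endpoint singularities, after which composite Simpson quadrature converges quickly (the substitution and step count are choices of this snippet, not of the paper).

```python
import math

# Numerical check of the constant A in (def:integralA). The upper limit
# b = (sqrt(2)-1)/2 is a root of 1 - 4x - 4x^2 = 4(b - x)(x - bp), so the
# substitution x = b*sin(th)^2 removes both endpoint square-root
# singularities; the transformed integrand is analytic on [0, pi/2].
b = (math.sqrt(2) - 1) / 2
bp = -(math.sqrt(2) + 1) / 2  # the other root of 1 - 4x - 4x^2

def integrand(th):
    x = b * math.sin(th) ** 2
    # f(x) dx = 2 b sin(th)^2 / ((1 - x) sqrt(3 (x+1) (x - bp))) dth
    return 2 * b * math.sin(th) ** 2 / ((1 - x) * math.sqrt(3 * (x + 1) * (x - bp)))

def simpson(f, a, c, n):  # composite Simpson rule, n even
    h = (c - a) / n
    s = f(a) + f(c)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

A = simpson(integrand, 0.0, math.pi / 2, 200)
print(abs(A - 0.177744) < 1e-5)  # True
```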
\begin{remark}\label{remark:sectiontheta}
We can prove the same result for any section
\[
\Sigma(\theta_*) = \claus{(r,\theta,R,G)
\in \mathbb{R} \times \mathbb{T} \times \mathbb{R}^2
\;:\; r>1, \, \theta=\theta_* \,},
\]
with $\theta_* \in
(0,\theta_0)$ and $\theta_0=\arccos\paren{\frac12-\sqrt2}$ (the value of $\mu_0$ depends on how close $\theta_*$ is to the endpoints of the interval).
The section $\theta=\theta_0$ is close to the ``turning point'' of the invariant manifolds (see Figure \ref{fig:L3Outer}).
\end{remark}
The constant $A$ in \eqref{def:integralA} is derived from the values of the complex singularities of the separatrix of a certain integrable averaged system, which is studied in the prequel paper \cite{articleInner}.
The results obtained in \cite{articleInner} about this separatrix are summarized in Theorem \ref{theorem:singularities} below.
The origin of the constant $\CInn$ appearing in Theorem \ref{theorem:mainTheorem} is explained in Theorem \ref{theorem:innerComputations}, which analyzes the so-called inner equation. This theorem is also proven in \cite{articleInner}.
Moreover, in that paper it is seen, by means of a numerical computation, that $\vabs{\CInn}\approx 1.63$. We expect that one should be able to prove that $\vabs{\CInn}\neq 0$ by means of rigorous computer computations (see \cite{BCGS21}). Note that $\vabs{\CInn}\neq 0$ implies that there are no primary (i.e. one-round) homoclinic orbits to $L_3$.
A fundamental problem in dynamical systems is to prove whether a given model has chaotic dynamics (for instance a Smale horseshoe).
For many physically relevant models this is usually remarkably difficult. This is the case for many Celestial Mechanics models, where most of the known chaotic motions have been found in nearly integrable regimes in which an unperturbed problem already presents some form of ``hyperbolicity''. This happens in the vicinity of collision orbits (see for example \cite{Moe89, BolMac06, Bol06, Moe07}) or close to parabolic orbits (which allow one to construct chaotic/oscillatory motions), see~\cite{Sitnikov1960, Alekseev1976, LlibSim80, Moser2001, GMS16, GMSS17, GPSV21}.
There are also several results in regimes far from integrable which rely on computer assisted proofs \cite{Ari02, WilcZgli03, Cap12, GierZgli19}. The problem tackled in this paper and \cite{articleInner} is radically different. Indeed, if one takes the limit $\mu\to 0$ in \eqref{def:hamiltonianInitialNotSplit}, one obtains the classical integrable Kepler problem in the elliptic regime, where no hyperbolicity is present. Instead, the (weak) hyperbolicity is created by the $\mathcal{O}(\mu)$ perturbation, which can be captured by considering an integrable averaged Hamiltonian along the $1:1$ mean motion resonance\footnote{The $1:1$ averaged Hamiltonian has also been studied to obtain ``good'' approximations for the global dynamics in the $1:1$ resonant zone, see for example \cite{RNP16, AlePou21} and the references therein.}.
One of the classical methods to construct chaotic dynamics is the Smale-Birkhoff homoclinic theorem by proving the existence of transverse homoclinic orbits to invariant objects, most commonly, periodic orbits.
Certainly, the breakdown of homoclinic orbits to the critical point $L_3$ given by Theorem~\ref{theorem:mainTheorem} does not by itself lead to the existence of chaotic orbits. However, one should expect that Theorem~\ref{theorem:mainTheorem} implies that there exist Lyapunov periodic orbits exponentially close to $L_3$ whose stable and unstable invariant manifolds intersect transversally. This would create chaotic motions ``exponentially close'' to $L_3$ and its invariant manifolds (see \cite{articleChaos}).
As already mentioned, Theorem \ref{theorem:mainTheorem} rules out the existence of primary homoclinic connections to $L_3$ in the RPC$3$BP for $0< \mu\ll 1$. However, it does not prevent the existence of multi-round homoclinic orbits, that is, homoclinic orbits which pass close to $L_3$ multiple times.
It has been conjectured (see for instance~\cite{BMO09}, where the authors analyze this problem numerically) that multi-round homoclinic connections to $L_3$ should exist for a sequence of values $\claus{\mu_k}_{k \in \mathbb{N}}$ satisfying $\mu_k\to 0$ as $k \to \infty$.
\paragraph{A first step towards proving Arnold diffusion along the $1:1$ mean motion resonance in the $3$-Body Problem?}
Consider the $3$-Body Problem in the planetary regime, that is, one massive body (the Sun) and two small bodies (the planets) performing approximate ellipses (including the ``Restricted limit'', when one of the planets has mass zero). A fundamental problem is to ascertain whether such a configuration is stable (i.e. is the Solar System stable?). Thanks to the Arnold-Herman-F\'ejoz KAM Theorem, many of such configurations are stable, see \cite{Arnold63,Fejoz04}. However, it is widely expected that there should be strong instabilities created by Arnold diffusion mechanisms (as conjectured by Arnold in \cite{Arnold64}). In particular, it is widely believed that one of the main sources of such instabilities are the mean motion resonances, where the periods of the two planets are resonant (i.e. rationally dependent) \cite{FGKR16}.
The RPC$3$BP has too low a dimension (two degrees of freedom) to possess Arnold diffusion. However, since it can be seen as a first order approximation of higher dimensional models, the analysis performed in this paper can be seen as a humble first step towards constructing Arnold diffusion in the $1:1$ mean motion resonance. In this resonance, the RPC$3$BP has a normally hyperbolic invariant manifold given by the center manifold of the Lagrange point $L_3$. This normally hyperbolic invariant manifold is foliated by the classical Lyapunov periodic orbits. One should expect that the techniques developed in the present paper would allow one to prove that the invariant manifolds of these periodic orbits intersect transversally within the corresponding energy level of \eqref{def:hamiltonianInitialNotSplit}. Still, this is a much harder problem than the one considered in this paper and the technicalities involved would be considerable.
This transversality would not lead to Arnold diffusion due to the low dimension of the RPC$3$BP. However, if one considers either the Restricted Spatial Circular $3$-Body Problem with small $\mu>0$, which has three degrees of freedom; the Restricted Planar Elliptic $3$-Body Problem with small $\mu>0$ and eccentricity of the primaries $e_0>0$, which has two and a half degrees of freedom; or the ``full'' planar $3$-Body Problem (i.e. all three masses positive, two of them small), which has three degrees of freedom (after the symplectic reduction by the classical first integrals), one should be able to construct orbits with a drastic change in angular momentum (or inclination in the spatial setting).
In the Restricted Planar Elliptic $3$-Body Problem the change of angular momentum would imply the transition of the zero mass body orbit from a close to circular ellipse to a more eccentric one. In the full 3BP, due to total angular momentum conservation, the angular momentum would be transferred from one body to the other changing both osculating ellipses.
This behavior would be analogous to that of \cite{FGKR16} for the $3:1$ and $1:7$ resonances. In that paper, the transversality between the invariant manifolds of the normally hyperbolic invariant manifold was checked numerically for the realistic Sun-Jupiter mass ratio $\mu=10^{-3}$.
Arnold diffusion instabilities have been analyzed numerically for the Restricted Spatial Circular $3$-Body Problem in \cite{SSST14}.
\subsection{The strategy to prove Theorem \ref{theorem:mainTheorem}}
The main difficulty in proving Theorem \ref{theorem:mainTheorem} is that the
distance between the stable and unstable manifolds of $L_3$ is exponentially small with respect to $\sqrt\mu$ (a phenomenon usually referred to as \emph{beyond all orders}). This implies that the classical Melnikov method \cite{GuckenheimerHolmes} to detect the breakdown of homoclinics cannot be applied.
To prove Theorem \ref{theorem:mainTheorem}, we follow the strategy of exponentially small splitting of separatrices (already outlined in \cite{articleInner}), which goes back to the seminal work by Lazutkin \cite{Laz84, Laz05}. See \cite{articleInner} for a list of references on the recent developments in the field of exponentially small splitting of separatrices. In particular, we follow strategies similar to those in \cite{BFGS12,BCS13}.
In the present work, the first order of the difference between the invariant manifolds is not given by the Melnikov function.
Instead, we must derive and analyze an inner equation which provides the dominant term of this distance. As a consequence, we need to ``match'' (i.e. compare) certain solutions of the inner equation with the parameterizations of the perturbed invariant manifolds.
The first part of the proof, that was completed in the prequel \cite{articleInner}, dealt with the following steps:
\begin{enumerate}[label*=\Alph*.]
\item
We perform a change of coordinates to capture the slow-fast dynamics of the system.
%
The first order of the new Hamiltonian has a saddle point with a homoclinic connection (also known as a separatrix) and a fast harmonic oscillator.
%
%
\item We study the analytical continuation of the time-parametrization of the separatrix of this first order.
%
In particular, we obtain its maximal strip of analyticity and the singularities at the boundary of this strip.
%
%
\item We derive the inner equation.
\item We study two special solutions which will be ``good approximations'' of the perturbed invariant manifolds near the singularities of the unperturbed separatrix (see Step F below).
%
\end{enumerate}
The remaining steps necessary to complete the proof of Theorem~\ref{theorem:mainTheorem} are the following:
\begin{enumerate}[label*=\Alph*.]
%
\item[E.] We prove the existence of the analytic continuation of the parametrizations of the invariant manifolds of $L_3$, $W^{\unstable,+}(\delta)$ and $W^{\stable,+}(\delta)$, in an appropriate complex domain called the boomerang domain.
%
This domain contains a segment of the real line and intersects a sufficiently small neighborhood of the singularities of the unperturbed separatrix.
%
%
\item[F.] By using complex matching techniques, we show that, close to the singularities of the unperturbed separatrix, the solutions of the inner equation obtained in Step D are ``good approximations'' of the parameterizations of the perturbed invariant manifolds obtained in Step E.
%
%
\item[G.] We obtain an asymptotic formula for the difference between the perturbed invariant manifolds by proving that the dominant term comes from the difference between the solutions of the inner equation.
%
%
\end{enumerate}
The structure of this paper goes as follows.
In Section \ref{section:introductionPoincare} we perform the change of coordinates introduced in Step A and state Theorem \ref{theorem:mainTheoremPoincare}, which is a reformulation of Theorem \ref{theorem:mainTheorem} in this new set of variables.
Then, in Section \ref{section:resultsOuter}, we state the results concerning Steps B, C and D above (which are proven in \cite{articleInner}) and we carry out Steps E, F and G. These steps lead to the proof of Theorem \ref{theorem:mainTheoremPoincare}.
Sections \ref{section:proofH-existence} and \ref{section:proofG-matching} are devoted to proving the results in Section \ref{section:resultsOuter} which concern Steps E and F.
\section{Introduction}
\label{section:introduction}
\input{introductionOuter}
\section{A singular formulation of the problem}
\label{section:introductionPoincare}
\input{introductionPoincare}
\subsection{Proof of Theorem~\ref{theorem:mainTheorem}}
\label{subection:undoChanges}
\input{undoChanges}
\section{Proof of Theorem~\ref{theorem:mainTheoremPoincare}}
\label{section:resultsOuter}
\input{resultsIntro}
\subsection{Analytical continuation of the unperturbed separatrix} \label{section:singularitiesOuter}
\input{singularitiesOuter}
\subsection{The perturbed invariant manifolds}
\label{section:outer}
\input{outer}
\subsubsection{Analytic extension of the stable and unstable manifolds}
\label{subsection:outerBasic}
\input{outerGraph}
\subsubsection{Further analytic extension of the unstable manifold}
\label{subsection:outerExtension}
\input{outerGlobal}
\subsection{A first order of the invariant manifolds near the singularities}
\label{section:differenceInner}
\input{differenceInner}
\subsubsection{The inner equation}
\label{subsection:innerHeuristics}
\input{innerHeuristics}
\subsubsection{Complex matching estimates} \label{subsection:matching}
\input{matching}
\subsection{The asymptotic formula for the difference}
\label{section:difference}
\input{difference}
\input{proofF-Difference}
\section{The perturbed invariant manifolds}
\label{section:proofH-existence}
\input{proofH-existence}
\subsection{The invariant manifolds in the infinity domain}
\label{subsection:proofH-existenceInfinite}
\input{proofH-existenceInfinite}
\subsection{The invariant manifolds in the outer domain}
\label{subsection:proofH-existenceBounded}
\input{proofH-existenceOuter}
\subsection{Switching to the time-parametrization}
\label{subsection:proofH-changeuOut}
\input{proofH-existenceChangeuOut}
\subsection{Extending the time-parametrization}
\label{subsection:proofH-existenceFlow}
\input{proofH-existenceFlow}
\subsection{Back to a graph parametrization}
\label{subsection:proofH-changevOut}
\input{proofH-existenceChangevOut}
\section{Complex matching estimates}
\label{section:proofG-matching}
\input{proofG-matching}
\subsubsection{Perturbed invariant manifolds as a graph}
Since we measure the distance between the invariant manifolds in the
section $\lambda=\lambda_*$ (see Theorem \ref{theorem:mainTheoremPoincare}),
we parameterize them as graphs with respect to $\lambda$ (whenever possible) or, more conveniently, with respect to the independent variable $u$ defined by $\lambda=\lambda_h(u)$.
To define these suitable parameterizations we first translate the
equilibrium point $\Ltres(\delta)$ to $\mathbf{0}$ by the change of coordinates
\begin{equation}\label{def:changeEqui}
\phi_{\equi}:(\lambda,\Lambda,x,y) \mapsto (\lambda,\Lambda,x,y) + \Ltres(\delta).
\end{equation}
Second, we consider the symplectic change of coordinates
\begin{equation}\label{def:changeOuter}
\phi_{\out}:(u,w,x,y) \to (\lambda,\Lambda,x,y),
\quad
{\lambda}= \lambda_h(u), \hspace{5mm} {\Lambda}= \Lambda_h(u) - \frac{w}{3\Lambda_h(u)}.
\end{equation}
We refer to $(u,w,x,y)$ as the \emph{separatrix coordinates}.
Let us remark that $\phi_{\out}$ is not defined for $u=0$ since $\Lambda_h(0)=0$ (see Theorem \ref{theorem:singularities}).
We deal with this fact later when considering the domain of definition for $u$.
After these changes of variables, we look for the perturbed invariant manifolds
as a graph with respect to $u$.
In other words, we look for functions
\[
\zdOut(u) = \left(\wdOut(u),\xdOut(u),\ydOut(u)\right)^T,
\quad \text{ for } \diamond=\unstable,\stable,
\]
such that the
invariant manifolds given in Proposition~\ref{proposition:HamiltonianScaling}
can be expressed as
\begin{equation}\label{eq:invariantManifoldsExpression}
\mathcal{W}^{\diamond}(\delta)= \left\{ \paren{\lambda_h(u), \Lambda_h(u)-\frac{\wdOut(u)}{3\Lambda_h(u)}, \xdOut(u), \ydOut(u)} + \Ltres(\delta)\right\}, \quad \text{for } \diamond=\unstable,\stable,
\end{equation}
with $u$ belonging to an appropriate domain contained in $\Pi^{\mathrm{ext}}_{A, \betaBow}$ (see \eqref{def:dominiBow}).
The graphs $\zuOut$ and $\zsOut$
must satisfy the asymptotic conditions
\begin{equation}\label{eq:asymptoticConditionsOuter}
\begin{split}
\lim_{\Re u \to -\infty} \left(\frac{\wuOut(u)}{\Lambda_h(u)},\xuOut(u), \yuOut(u) \right) =
\lim_{\Re u \to +\infty} \left(\frac{\wsOut(u)}{\Lambda_h(u)},\xsOut(u), \ysOut(u) \right) = 0.
\end{split}
\end{equation}
\begin{remark}\label{remark:realAnalytic}
Since the Hamiltonian $H$ is real-analytic in the sense of $\conj{H(\lambda,\Lambda,x,y;\delta)}=H(\conj{\lambda},\conj{\Lambda},y,x;\conj{\delta})$ (see Proposition \ref{proposition:HamiltonianScaling}),
then we say that $\zOut(u)=(\wOut(u),\xOut(u),\yOut(u))^T$ is real-analytic if it satisfies
\begin{align*}
\wOut(\conj{u}) = \conj{\wOut(u)}, \qquad
\xOut(\conj{u}) = {\yOut(u)}, \qquad
\yOut(\conj{u}) = {\xOut(u)}.
\end{align*}
\end{remark}
The classical way to study exponentially small splitting of separatrices, in this setting, is to look for solutions $\zuOut$ and $\zsOut$ in a certain complex common domain containing a segment of the real line and intersecting a $\mathcal{O}(\delta^2)$ neighborhood of the singularities $u=\pm iA$ of the separatrix.
Recall that the invariant manifolds cannot be expressed as a graph in a neighborhood of $u=0$.
To overcome this technical problem, we find solutions $\zuOut$ and $\zsOut$ defined in a complex domain, which we call \emph{boomerang domain} due to its shape (see Figure~\ref{fig:dominiBoomerang}).
\begin{figure}[t]
\centering
\begin{overpic}[scale=1]{DominiBoomerang.png}
\put(60,31){$\DBoomerang$}
\put(19,29.5){\footnotesize $\betaOutA$}
\put(48.5,29.5){\footnotesize $\betaOutB$}
\put(42,49.5){\footnotesize $iA$}
\put(52,44.5){\footnotesize $i(A-\kappa \delta^2)$}
\put(39.5,4){\footnotesize $-iA$}
\put(41,35){\footnotesize $\dBoomerang A$}
\put(102,27){\footnotesize$\Re u$}
\put(45,58){\footnotesize$\Im u$}
\end{overpic}
\bigskip
\caption{The boomerang domain $\DBoomerang$ defined in~\eqref{def:dominiBoomerang}.}
\label{fig:dominiBoomerang}
\end{figure}
Namely,
\begin{equation}\label{def:dominiBoomerang}
\begin{split}
\DBoomerang = \left\{ u \in \mathbb{C} \right. \;:\;
&\vabs{\Im u} < A - \kappa \delta^2 + \tan \betaOutA \Re u,
\vabs{\Im u} < A - \kappa \delta^2 - \tan \betaOutA \Re u,
\\
&\vabs{\Im u} > \left. \dBoomerang A - \tan \betaOutB \Re u
\right\},
\end{split}
\end{equation}
where $\kappa>0$ is such that $A-\kappa \delta^2>0$,
$\betaOutA$ is the constant given in Theorem~\ref{theorem:singularities}
and $\betaOutB \in [\betaOutA,\frac{\pi}{2})$ and $\dBoomerang \in
(\frac{1}{4},\frac{1}{2})$
are independent of $\delta$.
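Let us remark that, setting $\Im u = 0$ in \eqref{def:dominiBoomerang}, the real trace of the boomerang domain is the interval
\begin{equation*}
\DBoomerang \cap \mathbb{R} =
\paren{\frac{\dBoomerang A}{\tan \betaOutB},\,
\frac{A-\kappa\delta^2}{\tan \betaOutA}},
\end{equation*}
which is nonempty provided $\kappa\delta^2$ is small enough, since $\dBoomerang<\frac12$ and $\betaOutB\geq\betaOutA$.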
\begin{theorem}\label{theorem:existence}
Fix a constant $\dBoomerang \in (\frac{1}{4},\frac{1}{2})$.
Then, there exist $\delta_0, \kappaBoomerang>0$ such that,
for $\delta \in (0,\delta_0)$,
$\kappa\geq\kappaBoomerang$,
the graph parameterizations $\zuOut$ and $\zsOut$ introduced
in~\eqref{eq:invariantManifoldsExpression} can be extended real-analytically to
the domain $\DBoomerang$.
Moreover, there exists a real constant $\cttOuterA>0$ independent of $\delta$ and $\kappa$ such that, for $u \in \DBoomerang$ we have that
\begin{align*}
|\wdOut(u)| \leq \frac{\cttOuterA\delta^2}
{\vabs{u^2 + A^2}}
+ \frac{\cttOuterA\delta^4}{\vabs{u^2 + A^2}^{\frac{8}{3}}}, \quad
|\xdOut(u)| \leq \frac{\cttOuterA\delta^3}
{\vabs{u^2 + A^2}^{\frac{4}{3}}}, \quad
|\ydOut(u)| \leq \frac{\cttOuterA\delta^3}
{\vabs{u^2 + A^2}^{\frac{4}{3}}}.
\end{align*}
\end{theorem}
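In particular, since $\vabs{u^2+A^2}\geq A^2$ for $u \in \DBoomerang \cap \mathbb{R}$, the estimates of Theorem~\ref{theorem:existence} imply that, on the real line and for $\delta$ small enough,
\begin{equation*}
|\wdOut(u)| \leq C\delta^2, \qquad
|\xdOut(u)|, |\ydOut(u)| \leq C\delta^3,
\qquad \text{for } \diamond=\unstable,\stable,
\end{equation*}
for some constant $C>0$. That is, for real values of $u$, the perturbed invariant manifolds are $\mathcal{O}(\delta^2)$-close to the unperturbed separatrix.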
Notice that the asymptotic conditions \eqref{eq:asymptoticConditionsOuter} do
not have any meaning in the domain $\DBoomerang$ since it is bounded.
Therefore, to prove the existence of $\zuOut$ and $\zsOut$ in $\DBoomerang$ one has to start with different domains where these asymptotic conditions make sense and then find a way to extend them real-analytically to $\DBoomerang$.
We describe the details of this process in Sections \ref{subsection:outerBasic} and \ref{subsection:outerExtension}.
\subsubsection{An integral equation for $\dzHatU$}
\subsubsection{End of the proof of Theorem \ref{theorem:mainTheoremPoincare}}
We look for $\dzHatU$ as the unique solution of an integral equation.
Since $\dzHat$ satisfies~\eqref{eq:invariantEquationDifference3}, by the variation of constants formula
\begin{align}\label{eq:dzOutDiff}
\dzHat(u) =
\begin{pmatrix}
c_x m_x(u)\\
c_y m_y(u)
\end{pmatrix}
+
\begin{pmatrix}
\displaystyle m_x(u)
\int_{u_-}^{u} m_x^{-1}(s) \,
\pi_1 \paren{{\mathcal{B}}^{\spl}(s) \dzHat(s)} ds \\
\displaystyle m_y(u)
\int_{u_+}^{u} m_y^{-1}(s) \,
\pi_2 \paren{{\mathcal{B}}^{\spl}(s) \dzHat(s)} ds
\end{pmatrix},
\end{align}
where $\mathcal{M}(u)$ is the fundamental matrix \eqref{def:fundamentalMatrixDifference}, $s$ belongs to some integration path in $\DBoomerang$ and $c_x$ and $c_y$ are defined as
\begin{align}\label{def:cxcyDifference}
c_x = \dxOut(u_-) m_x^{-1}(u_-), \qquad
c_y = \dyOut(u_+) m_y^{-1}(u_+).
\end{align}
For $k_1, k_2 \in \mathbb{C}$, we define
\begin{equation}\label{def:operatorIIdiff}
\mathcal{I}[k_1,k_2](u) =
\big(k_1 \, m_x(u), k_2 \, m_y(u)\big)^T,
\end{equation}
and the operator
\begin{align}\label{def:operatorEEdiff}
\mathcal{E}[\phiA](u) =
\begin{pmatrix}
\displaystyle
m_x(u) \int_{u_-}^{u} m_x^{-1}(s) \,
\pi_1 \paren{{\mathcal{B}}^{\spl}(s) \phiA(s)} ds \\
\displaystyle
m_y(u) \int_{u_+}^{u} m_y^{-1}(s) \,
\pi_2 \paren{{\mathcal{B}}^{\spl}(s) \phiA(s)} ds
\end{pmatrix}.
\end{align}
Then, with this notation, $\dzHatO = \mathcal{I}[c_x^0,c_y^0]$ (see \eqref{def:dzHatO}) and equation~\eqref{eq:dzOutDiff} is equivalent to $\dzHat = \mathcal{I}[c_x,c_y]+\mathcal{E}[\dzHat]$.
Since $\mathcal{E}$ is a linear operator, $\dzHatU =
\dzHat-\dzHatO$ satisfies
\begin{equation}\label{eq:dzOutU}
\dzHatU(u) =
\mathcal{I}[c_x-c_x^0, c_y-c_y^0](u) +
\mathcal{E}[\dzHatO](u) + \mathcal{E}[\dzHatU](u).
\end{equation}
To obtain estimates for $\dzHatU$, we first prove that $\mathrm{Id}-\mathcal{E}$ is invertible in the Banach space $\XSplTotal= \XSpl \times \XSpl$, with
\begin{align*}
\XSpl = \left\{ \phiA: \DBoomerang \to \mathbb{C} \;:\; \normSpl{\phiA}
= \sup_{u\in \DBoomerang} \vabs{e^{\frac{A-\vabs{\Im u}}{\delta^2}}
\phiA(u)}
<+\infty \right\},
\end{align*}
endowed with the norm
\begin{align}\label{def:normexp}
\normSplTotal{\phiA} =
\normSpl{\phiA_1} + \normSpl{\phiA_2},
\end{align}
for $\phiA=(\phiA_1,\phiA_2)$.
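Let us remark that the exponential weight in the norm encodes the exponentially small character of the splitting: if $\phiA \in \XSpl$ then, for $u \in \DBoomerang$,
\begin{equation*}
\vabs{\phiA(u)} \leq \normSpl{\phiA}\, e^{-\frac{A-\vabs{\Im u}}{\delta^2}}
\qquad \text{and, in particular,} \qquad
\vabs{\phiA(u)} \leq \normSpl{\phiA}\, e^{-\frac{A}{\delta^2}}
\quad \text{for } u \in \DBoomerang \cap \mathbb{R}.
\end{equation*}
Hence, any bound on $\normSpl{\phiA}$ which is polynomial in $\delta$ translates into an exponentially small bound for $\phiA$ on the real line.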
Therefore, to prove Theorem \ref{theorem:mainTheoremPoincare}, it is enough to see that $\dzHatU$ satisfies $\normSplTotal{\dzHatU} \leq C\delta^{\frac13}\vabs{\log \delta}^{-1}$.
First, we state a lemma whose proof is postponed to Appendix \ref{subappendix:proofH-technicalFirst}.
\begin{lemma}\label{lemma:boundsBspl}
Let $\kappaBoomerang, \delta_0$ be the constants given in Theorem \ref{theorem:existence}.
Then, there exists a constant $C>0$ such that, for $\kappa\geq\kappaBoomerang$, $\delta \in(0,\delta_0)$ and $u \in \DBoomerang$,
the function $\Upsilon$ in \eqref{def:operatorPPdifference},
the matrix $\mathcal{B}^{\spl}$ in \eqref{def:operatorsDifferenceAABB} and the functions $B_x$, $B_y$ in \eqref{def:mxmyalxaly} satisfy
\begin{align}
&\vabs{\Upsilon_1(u)-1}\leq \frac{C}{\kappa^2}, \qquad
\vabs{\Upsilon_2(u)}\leq \frac{C\delta}{\vabs{u^2+A^2}^{\frac{4}{3}}}, \qquad
\vabs{\Upsilon_3(u)}\leq \frac{C\delta}{\vabs{u^2+A^2}^{\frac{4}{3}}}, \label{proof:boundsUpsilon}
\\[0.4em]
{C}^{-1} &\leq \vabs{B_*(u)} \leq C,
\quad *=x,y,
\quad \text{and} \quad
|{\mathcal{B}}^{\spl}_{i,j}(u)| \leq
\frac{C \, \delta^2}{\vabs{u^2 + A^2}^{2}},
\quad i,j \in \claus{1,2}. \nonumber
\end{align}
\end{lemma}
In the next lemma we obtain estimates for the linear operator $\mathcal{E}$ (see \eqref{def:operatorEEdiff}).
\begin{lemma}\label{lemma:operatorEEdiff}
Let $\kappaBoomerang, \delta_0$ be the constants as given in Theorem \ref{theorem:existence}.
There exists $\cttDiffA>0$ such that for $\delta\in(0,\delta_0)$ and
$\kappa\geq \kappaBoomerang$,
the operator $\mathcal{E}: \XSplTotal \to \XSplTotal$ in \eqref{def:operatorEEdiff} is well defined and satisfies that, for $\phiA \in \XSplTotal$,
\begin{align*}
\normSplTotal{\mathcal{E}[\phiA]} \leq \frac{\cttDiffA}{\kappa}
\normSplTotal{\phiA}.
\end{align*}
In particular, $\mathrm{Id} - \mathcal{E}$ is invertible and
\begin{align*}
\normSplTotal{(\mathrm{Id} - \mathcal{E})^{-1}[\phiA]} \leq 2\normSplTotal{\phiA}.
\end{align*}
\end{lemma}
\begin{proof}
Let us consider $\mathcal{E}=(\mathcal{E}_1,\mathcal{E}_2)^T$, $\phiA \in \XSplTotal$ and $u \in \DBoomerang$.
We only prove the estimate for $\mathcal{E}_2[\phiA](u)$.
The corresponding one for $\mathcal{E}_1[\phiA](u)$ follows analogously.
By the definition of $m_y$ in~\eqref{def:mxmyalxaly} and Lemma \ref{lemma:boundsBspl}, we have that
\begin{align*}
\vabs{\mathcal{E}_2[\phiA](u)}
&\leq
C \delta^2 e^{\frac{\Im u}{\delta^2}}
\vabs{\int_{u_+}^{u}
e^{-\frac{\Im s}{\delta^2}}
\frac{
\vabs{\phiA_1(s)}+\vabs{\phiA_2(s)}
}{\vabs{s^2+A^2}^2} d s }
\\
&\leq
C \delta^2 e^{\frac{\Im u - A}{\delta^2}}
\normSplTotal{\phiA}
\vabs{\int_{u_+}^{u}
e^{\frac{\vabs{\Im s}-\Im s}{\delta^2}}
\frac{d s}{\vabs{s^2+A^2}^2}}.
\end{align*}
Let us consider the case $\Im u < 0$. Then, for a fixed $u_0 \in \mathbb{R} \cap \DBoomerang$, we define the integration path $\rho_t \subset \DBoomerang$ as
\begin{align*}
\rho_t =
\begin{cases}
u_+ + 2t(u_0-u_+)
&\quad \text{for } t \in (0,\frac12), \\
u_0 + (2t-1)(u-u_0)
&\quad \text{for } t \in [\frac12,1).
\end{cases}
\end{align*}
Then,
\begin{align*}
\vabs{\mathcal{E}_2[\phiA](u)}
&\leq
C \delta^2 e^{-\frac{\vabs{\Im u}+A}{\delta^2}}
\normSplTotal{\phiA}
\vabs{\int_{0}^{\frac12} \frac{dt}{\vabs{\rho_t-iA}^2}
+ \int_{\frac12}^{1}
\frac{e^{\frac{2\vabs{\Im \rho_t}}{\delta^2}}}{\vabs{\rho_t+iA}^2} dt}
\leq \frac{C}{\kappa}
e^{\frac{\vabs{\Im u}-A}{\delta^2}}
\normSplTotal{\phiA}.
\end{align*}
If $\Im u \geq 0$, we consider the integration path $\rho_t = u_+ + t(u-u_+)$ for $t\in[0,1]$ and we obtain
\begin{align*}
\vabs{\mathcal{E}_2[\phiA](u)} &\leq
C \delta^2 e^{\frac{\vabs{\Im u}-A}{\delta^2}}
\normSplTotal{\phiA}
\vabs{\int_{0}^{1} \frac{\vabs{u-u_+}}{\vabs{\rho_t-iA}^2} dt}
\leq \frac{C}{\kappa}
e^{\frac{\vabs{\Im u}-A}{\delta^2}}
\normSplTotal{\phiA}.
\end{align*}
Therefore,
$
\normSpl{\mathcal{E}_2[\phiA]} \leq \frac{C}{\kappa}\normSplTotal{\phiA}.
$
\end{proof}
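Let us comment heuristically on the origin of the factor $\frac{1}{\kappa}$ in Lemma~\ref{lemma:operatorEEdiff}. The integration paths above stay at a distance at least of order $\kappa\delta^2$ from the singularities $u=\pm iA$ and, close to $u=iA$, one has $\vabs{s^2+A^2}\sim 2A\vabs{s-iA}$. Therefore, the worst contribution to the integrals is of the form
\begin{equation*}
\delta^2 \int \frac{\vabs{ds}}{\vabs{s^2+A^2}^{2}}
\lesssim
\frac{\delta^2}{A^2}\cdot\frac{C}{\kappa\delta^2}
= \frac{C}{A^2 \kappa},
\end{equation*}
where the factor $\delta^2$ in front comes from the bound on $\mathcal{B}^{\spl}$ in Lemma~\ref{lemma:boundsBspl}.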
Notice that, by \eqref{eq:dzOutU}, $\dzHatU$ satisfies
\begin{equation}\label{eq:dzOutUInverse}
(\mathrm{Id} - \mathcal{E} )\dzHatU(u) = \mathcal{I}[c_x-c_x^0, c_y-c_y^0](u) + \mathcal{E}[\dzHatO](u).
\end{equation}
Since, by Lemma \ref{lemma:operatorEEdiff}, $\mathrm{Id}-\mathcal{E}$ is invertible in $\XSplTotal$, we have an explicit formula for $\dzHatU$.
Nevertheless, we still need good estimates for the right hand side with respect to the norm \eqref{def:normexp}.
\begin{lemma}\label{lemma:IconstantsDiff}
There exist $\kappa_*, \delta_0, \cttDiffB>0$ such that, for
$\kappa=\kappa_*\vabs{\log \delta}$ and $\delta\in(0,\delta_0)$,
\begin{align*}
\normSplTotal{\mathcal{I}[c_x-c_x^0,c_y-c_y^0]}
\leq \frac{\cttDiffB \, \delta^{\frac{1}{3}}}{\vabs{\log \delta}}\qquad \text{and}\qquad
\normSplTotal{\mathcal{E}[\dzHatO](u)}\leq \frac{\cttDiffB \, \delta^{\frac{1}{3}}}{\vabs{\log \delta}},
\end{align*}
with $\mathcal{I}$, $(c_x^0,c_y^0)$, $(c_x,c_y)$, $\mathcal{E}$ and $\dzHatO$ defined in \eqref{def:operatorIIdiff}, \eqref{def:dzHatO}, \eqref{def:cxcyDifference}, \eqref{def:operatorEEdiff}
and
\eqref{def:dzHatO:0}, respectively.
\end{lemma}
\begin{proof}
By the definition of the function $\mathcal{I}$,
\[
\normSplTotal{\mathcal{I}[c_x-c_x^0,c_y-c_y^0]} = \vabs{c_x-c_x^0}\normSpl{m_x} + \vabs{c_y-c_y^0}\normSpl{m_y},
\]
where $m_x$ and $m_y$ are given in \eqref{def:mxmyalxaly}.
Then, by Lemma \ref{lemma:boundsBspl},
\begin{align*}
\normSpl{m_x} =
e^{\frac{A}{\delta^2}}
\sup_{u \in \DBoomerang}
\boxClaus{e^{-\frac{\Im u+\vabs{\Im u}}{\delta^2}} \vabs{B_x(u)}}
%
\leq C e^{\frac{A}{\delta^2}},
\qquad
\normSpl{m_y} \leq
%
%
%
C e^{\frac{A}{\delta^2}},
\end{align*}
and, as a result,
\begin{equation}\label{proof:operatorIIdiff}
\normSplTotal{\mathcal{I}[c_x-c_x^0,c_y-c_y^0]} \leq
C e^{\frac{A}{\delta^2}}
\paren{|c_x-c_x^0| + |c_y-c_y^0|}.
\end{equation}
We now obtain an estimate for $|c_y-c_y^0|$.
The estimate for $|c_x-c_x^0|$ follows analogously.
By the definition of $m_y$ (see \eqref{def:mxmyalxaly}), one has
\begin{equation}\label{proof:cyMenyscyO}
\begin{split}
\vabs{c_y - c_y^0}
&= e^{-\frac{A}{\delta^2}+\kappa}
\vabs{B_y^{-1}(u_+)}
\vabs{\dyOut(u_+) - \dyOutO(u_+)}.
\end{split}
\end{equation}
Let us denote $\DYInnC = \YuMchO -\YsMchO$ where $\YusMchO$ are given in~\eqref{def:ZdMchO}.
Recall that $\YusMchO=\YusInn + \YusMchU$ where $\YusInn$ is the third component of $\ZusInn$, the solutions of the inner equation (see Theorems \ref{theorem:innerComputations} and \ref{theorem:matching}).
We write,
\begin{align*}
\dyOut(u_+) &= \sqrt{2} \alpha_+ \delta^{\frac{1}{3}}
\DYInnC\paren{\frac{u_+ - iA}{\delta^2}}
=\sqrt{2} \alpha_+ \delta^{\frac{1}{3}} \left[
\DYInn \paren{-i\kappa} +
\YuMchU \paren{-i\kappa} - \YsMchU \paren{-i\kappa}
\right].
\end{align*}
By the definition of $\dyOutO$ in \eqref{def:dzHatO:0} (see also \eqref{def:dzHatO}), we have
$
\dyOutO(u_+) = \sqrt{2}\alpha_+ \delta^{\frac{1}{3}}
\CInn e^{-\kappa} .
$
Then, by \eqref{proof:cyMenyscyO} and Lemma~\ref{lemma:boundsBspl},
\begin{align*}
\vabs{c_y-c_y^0} \leq
C \delta^{\frac{1}{3}} e^{-\frac{A}{\delta^2}+\kappa}
\Big[
\vabs{\DYInn \paren{-i\kappa} - \CInn e^{-\kappa}}
+
\vabs{\YuMchU \paren{-i\kappa}} +
\vabs{\YsMchU \paren{-i\kappa}}
\Big],
\end{align*}
and, applying Theorems~\ref{theorem:innerComputations} and
\ref{theorem:matching}, we obtain
\begin{equation*}
\begin{split}
\vabs{c_y-c_y^0}
&\leq
C \delta^{\frac{1}{3}}
e^{-\frac{A}{\delta^2}+\kappa}
\boxClaus{
\vabs{\chi_3(-i\kappa) e^{-\kappa} }
+
\frac{C}{\kappa} \delta^{\frac23(1-\gamma)} }
\leq
\frac{C}{\kappa} \delta^{\frac13} e^{-\frac{A}{\delta^2}}
\paren{1 +
\delta^{\frac23(1-\gamma)} e^{\kappa}},
\end{split}
\end{equation*}
where $\gamma \in (\gamma^*,1)$ with $\gamma^*\in [\frac{3}{5},1)$ given in Theorem \ref{theorem:matching}.
Taking $\kappa=\kappa_* \vabs{\log \delta}$ with $0<\kappa_*<\frac{2}{3}(1-\gamma)$, we obtain
\begin{equation*}
\begin{split}
\vabs{c_y-c_y^0}
&\leq
\frac{C\delta^{\frac13} }{\vabs{\log \delta}}
e^{-\frac{A}{\delta^2}}
\paren{1 + \delta^{\frac23(1-\gamma)-\kappa_*}}
\leq
\frac{C \delta^{\frac13} }{\vabs{\log \delta}}
e^{-\frac{A}{\delta^2}}.
\end{split}
\end{equation*}
This bound and \eqref{proof:operatorIIdiff} prove the first estimate of the lemma.
For the second estimate, it only remains to bound $\dzHatO$ and apply Lemma~\ref{lemma:operatorEEdiff}.
Indeed, by the definition of $\dzHatO$ in \eqref{def:dzHatO}, Lemma \ref{lemma:boundsBspl} and \eqref{proof:operatorIIdiff}, we have that
\begin{align*}
\normSplTotal{\dzHatO}
=
\normSplTotal{\mathcal{I}[c_x^0,c_y^0]}
\leq
C e^{\frac{A}{\delta^2}} \paren{\vabs{c_x^0}+\vabs{c_y^0}}
\leq
C \delta^{\frac13}.
\end{align*}
Since $\kappa=\kappa_* \vabs{\log \delta}$ with $0<\kappa_*<\frac{2}{3}(1-\gamma)$,
Lemma \ref{lemma:operatorEEdiff} implies $\normSplTotal{\mathcal{E}[\dzHatO]} \leq \frac{C \delta^{\frac13}}{\vabs{\log \delta}}$.
\end{proof}
With this lemma, we can give sharp estimates for $\dzHatU$ by using equation
\eqref{eq:dzOutUInverse}.
Indeed, since the right hand side of this equation belongs to $\XSplTotal$, by Lemma \ref{lemma:operatorEEdiff},
\[
\dzHatU(u) = (\mathrm{Id} - \mathcal{E} )^{-1}\left( \mathcal{I}[c_x-c_x^0, c_y-c_y^0](u) + \mathcal{E}[\dzHatO](u)\right).
\]
Then, Lemmas \ref{lemma:operatorEEdiff} and \ref{lemma:IconstantsDiff} imply
\begin{align}\label{def:fitaDeltaphi1}
\normSplTotal{\dzHatU}
\leq \frac{C \delta^{\frac{1}{3}}}{\vabs{\log \delta}}.
\end{align}
To prove Theorem \ref{theorem:mainTheoremPoincare}, it only remains to analyze $B_x(u_-)$ and $B_y(u_+)$.
\begin{lemma}\label{lemma:boundsBonsBeta}
Let $\kappa_*$ be as given in Lemma \ref{lemma:IconstantsDiff}.
Then, there exists $\delta_0>0$ such that, for $\delta \in (0,\delta_0)$ and $\kappa=\kappa_*\vabs{\log \delta}$, the functions $B_{x}, B_y$ defined in \eqref{def:mxmyalxaly} satisfy
\begin{align*}
B_x^{-1}(u_-) &= e^{-\frac{4i}9(\pi-\lambda_h(u_*))}
\paren{1+\mathcal{O}\paren{\frac1{\vabs{\log \delta}}}},
\\
B_y^{-1}(u_+) &= e^{\frac{4i}9(\pi-\lambda_h(u_*))}
\paren{1+\mathcal{O}\paren{\frac1{\vabs{\log \delta}}}},
\end{align*}
where $u_{\pm}=\pm i(A-\kappa\delta^2)$.
\end{lemma}
This lemma is proven in Appendix \ref{subappendix:proofH-technicalSecond}.
Let $u_* \in \DBoomerang \cap \mathbb{R}$.
We compute the first order of $\dzHatO(u_*)=(\dxOutO(u_*),\dyOutO(u_*))^T$.
By Theorem \ref{theorem:singularities}, $(\alpha_+)^3=(\alpha_-)^3=\frac12$. Then, applying Lemma \ref{lemma:boundsBonsBeta} and \eqref{def:dzHatO}, we obtain
\begin{align*}
\vabs{\Delta x_0(u_*)} =
\vabs{\Delta y_0(u_*)} =
\sqrt[6]{2}
\vabs{\CInn}
\delta^{\frac13} e^{-\frac{A}{\delta^2}}
\paren{1+\mathcal{O}\paren{\frac1{\vabs{\log \delta}}}}.
\end{align*}
Moreover, by \eqref{def:fitaDeltaphi1},
\begin{align*}
\vabs{\Delta x(u_*) - \Delta x_0(u_*)},
\vabs{\Delta y(u_*) - \Delta y_0(u_*)}
\leq
\frac{C \delta^{\frac13} e^{-\frac{A}{\delta^2}}}{\vabs{\log \delta}}.
\end{align*}
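Combining the two previous estimates and assuming $\CInn \neq 0$, we obtain, for $\delta$ small enough,
\begin{align*}
\vabs{\Delta x(u_*)} &=
\sqrt[6]{2}\,
\vabs{\CInn}\,
\delta^{\frac13} e^{-\frac{A}{\delta^2}}
\paren{1+\mathcal{O}\paren{\frac1{\vabs{\log \delta}}}},
\\
\vabs{\Delta y(u_*)} &=
\sqrt[6]{2}\,
\vabs{\CInn}\,
\delta^{\frac13} e^{-\frac{A}{\delta^2}}
\paren{1+\mathcal{O}\paren{\frac1{\vabs{\log \delta}}}}.
\end{align*}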
Finally, notice that the section $u=u_* \in \DBoomerang \cap \mathbb{R}$ translates to $\lambda= \lambda^* :=\lambda_h(u_*)$ (see \eqref{def:changeOuter}).
Moreover, since $\dot{\lambda}_h=-3\Lambda_h$ (see \eqref{eq:separatrixParametrization}), one deduces that $\Lambda_h(u)>0$ for $u>0$.
Therefore, by the change of coordinates \eqref{def:changeOuter}, Theorem \ref{theorem:existence} and taking $\delta$ small enough,
\begin{align*}
\Lambda_*^{\diamond} = \Lambda_h(u_*) - \frac{\wdOut(u_*)}{3\Lambda_h(u_*)}
=
\Lambda_h(u_*) + \mathcal{O}(\delta^2) > 0,
\qquad
\text{with }\diamond=\unstable,\stable,
\end{align*}
and therefore, using formula \eqref{eq:defDwOut} for $\dwOut$ and Lemma \ref{lemma:boundsBspl}, we obtain that
\[
\vabs{\Lambda_*^{\unstable}-\Lambda_*^{\stable}} \leq
C \vabs{\dwOut(u_*)}
\leq
C \delta \vabs{\dxOut(u_*)} +
C \delta \vabs{\dyOut(u_*)}
\leq
C \delta^{\frac43} e^{-\frac{A}{\delta^2}}.
\]
\subsection{Proof of Lemma \ref{lemma:boundsBspl}}
\label{subappendix:proofH-technicalFirst}
First, we prove the estimates for the operator $\Upsilon$ given in \eqref{def:operatorPPdifference}.
For $\sigma \in [0,1]$, we define
$z_{\sigma}=\sigma\zuOut + (1-\sigma) \zsOut$
with $z_{\sigma}=(w_{\sigma},x_{\sigma},y_{\sigma})^T$.
Then, by Theorem \ref{theorem:existence}, for $u \in \DBoomerang$, we have that
\begin{equation}\label{proof:ztauBounds}
\vabs{w_{\sigma}(u)}\leq \frac{C\delta^2}{\vabs{u^2+A^2}} +
\frac{C\delta^4}{\vabs{u^2+A^2}^{\frac83}}, \qquad
\vabs{x_{\sigma}(u)},\vabs{y_{\sigma}(u)}\leq \frac{C\delta^3}{\vabs{u^2+A^2}^{\frac43}}.
\end{equation}
Recalling that $H^{\out} = w + \frac{xy}{\delta^2} + H_1^{\out}$ (see \eqref{def:hamiltonianOuter}), one has
\begin{equation*}
\begin{split}
\vabs{\Upsilon_1(u)-1}& \leq
\sup_{\sigma \in[0,1]} \vabs{\partial_w H_1^{\out}(u,z_{\sigma}(u))},\\
\vabs{\Upsilon_2(u)}&\leq
\frac{\vabs{y_{\sigma}(u)}}{\delta^2}
+
\sup_{\sigma \in[0,1]} \vabs{
\partial_x H_1^{\out}(u,z_{\sigma}(u))},
\\
\vabs{\Upsilon_3(u)}
&\leq
\frac{\vabs{x_{\sigma}(u)}}{\delta^2}
+
\sup_{\sigma \in[0,1]} \vabs{
\partial_y H_1^{\out}(u,z_{\sigma}(u))}.
\end{split}
\end{equation*}
Then, by \eqref{proof:ztauBounds} and applying
\eqref{def:Fitag} and \eqref{proof:estimatef2f3Out} in the proof of
Lemma \ref{lemma:computationsRRROut} we obtain the estimates for $\Upsilon_1, \Upsilon_2$ and $\Upsilon_3$.
We also need estimates for the matrix $\widetilde{\mathcal{B}}^{\spl}$ given in \eqref{def:Bspl1}, which satisfies
\begin{align*}
|\widetilde{\mathcal{B}}^{\spl}_{i,j}(u)|
\leq
\sup_{\sigma \in[0,1]}
\vabs{\paren{D_z \mathcal{R}^{\out}
[\zOut_{\sigma}](u)}_{i,j}},
\end{align*}
for $z_{\sigma}= \sigma\zuOut + (1-\sigma) \zsOut$.
Then, by \eqref{proof:ztauBounds} and applying Lemma \ref{lemma:computationsRRROut}, for $u \in \DBoomerang$,
\begin{equation} \label{proof:boundsBsplTilde}
\begin{aligned}
\vabs{\widetilde{\mathcal{B}}^{\spl}_{2,1}(u)} &\leq
\frac{C\delta}{\vabs{u^2 + A^2}^{\frac{2}{3}}},
&
\vabs{\widetilde{\mathcal{B}}^{\spl}_{3,1}(u)} &\leq
\frac{C\delta}{\vabs{u^2 + A^2}^{\frac{2}{3}}},
\\
\vabs{\widetilde{\mathcal{B}}^{\spl}_{2,2}(u)} &\leq
\frac{C}{\vabs{u^2 + A^2}^{\frac13}} +
\frac{C \delta^2}{\vabs{u^2 + A^2}^{2}},
&
\vabs{\widetilde{\mathcal{B}}^{\spl}_{3,2}(u)} &\leq
\frac{C\delta^2}{\vabs{u^2 + A^2}^{2}},
\\
\vabs{\widetilde{\mathcal{B}}^{\spl}_{2,3}(u)} &\leq
\frac{C\delta^2}{\vabs{u^2 + A^2}^{2}},
&
\vabs{\widetilde{\mathcal{B}}^{\spl}_{3,3}(u)} &\leq
\frac{C}{\vabs{u^2 + A^2}^{\frac13}} +
\frac{C \delta^2}{\vabs{u^2 + A^2}^{2}}.
\end{aligned}
\end{equation}
Then, by \eqref{proof:boundsUpsilon} and taking $\kappa$ big enough,
\begin{align*}
\vabs{{\mathcal{B}}^{\spl}_{1,1}(u)} &\leq
\frac{\vabs{\Upsilon_2(u)}}{\vabs{\Upsilon_1(u)}} \vabs{\widetilde{\mathcal{B}}^{\spl}_{2,1}(u)}
\leq
\frac{C \delta^2}{\vabs{u^2 + A^2}^{2}}, \\
\vabs{{\mathcal{B}}^{\spl}_{1,2}(u)} &\leq
\vabs{\widetilde{\mathcal{B}}^{\spl}_{2,3}(u)} +
\frac{\vabs{\Upsilon_3(u)}}{\vabs{\Upsilon_1(u)}} \vabs{\widetilde{\mathcal{B}}^{\spl}_{2,1}(u)}
\leq
\frac{C \delta^2}{\vabs{u^2 + A^2}^{2}},
\end{align*}
and analogous estimates hold for ${\mathcal{B}}^{\spl}_{2,1}$ and ${\mathcal{B}}^{\spl}_{2,2}$.
Finally, we compute estimates for $B_y(u)$ (see \eqref{def:mxmyalxaly}) for $u \in \DBoomerang$. The estimates for $B_x(u)$ can be computed analogously.
Let us consider the integration path $\rho_t = u_* + (u-u_*)t $, for $t\in[0,1]$.
Then
\begin{align*}
B_y(u) = \exp \paren{
\int_0^{1} \widetilde{\mathcal{B}}^{\spl}_{2,2}\paren{\rho_t}
(u-u_*) dt}.
\end{align*}
Using the bounds in \eqref{proof:boundsBsplTilde}, we have that
\begin{align*}
\vabs{\log B_y(u)} &\leq
C \vabs{u-u_*} \vabs{\int^{1}_0
\frac{1}{\vabs{\rho^2_t + A^2}^{\frac{1}{3}}} +
\frac{\delta^2}{\vabs{\rho^2_t + A^2}^{2}} dt}
\leq C,
\end{align*}
which implies $C^{-1} \leq \vabs{B_y(u)} \leq C$.
\subsection{Proof of Lemma \ref{lemma:boundsBonsBeta}}
\label{subappendix:proofH-technicalSecond}
We only give an expression for $B_y(u_+)$. The result for $B_x(u_-)$ is analogous.
First, we analyze $\widetilde{\mathcal{B}}^{\spl}_{3,3}$.
\begin{lemma}\label{lemma:proofConstantA}
For $\delta>0$ small enough, $\kappa>0$ large enough and $u \in \DBoomerang$, the function $\widetilde{\mathcal{B}}^{\spl}_{3,3}$ defined in \eqref{def:Bspl1} is of the form
\begin{align*}
\widetilde{\mathcal{B}}_{3,3}^{\spl}(u) =
-\frac{4i}3 \Lambda_h(u)
+ \delta^2 m(u;\delta),
\end{align*}
for some function $m$ satisfying
\begin{align*}
\vabs{m(u;\delta)}
\leq \frac{C}{\vabs{u^2+A^2}^{2}}.
\end{align*}
\end{lemma}
\begin{proof}
Let us define $z_{\tau}= \tau\zuOut + (1-\tau) \zsOut$ and recall that, for $u \in \DBoomerang$,
\begin{align}\label{proof:B33tildeDef}
\widetilde{\mathcal{B}}_{3,3}(u) =
\int_0^1 \partial_y \mathcal{R}_3^{\out}[z_{\tau}](u) d\tau.
\end{align}
Then, by the expression of $\mathcal{R}_3^{\out}$ in \eqref{eq:expressionRRRout}, the estimates
in the proof of Lemma \ref{lemma:computationsRRROut} (see Appendix \ref{subsubsection:proofComputationsRRROut}) and Theorem \ref{theorem:existence}, we have that
\[
\partial_y \mathcal{R}_3^{\out}[z_{\tau}](u)
=
\frac{i}{\delta^2} g^{\out}(u,z_{\tau}(u)) +
\delta^2 \widetilde{m}(u;\delta),
\]
where
$
\vabs{\widetilde{m}(u;\delta)}
\leq \frac{C}{\vabs{u^2+A^2}^{2}}.
$
In the following, to simplify notation, we denote by $\widetilde{m}(u;\delta)$ any function satisfying the previous estimate.
Since $g^{\out}=\partial_w H_1^{\out}$, by \eqref{proof:H1outExpressionOuter} one has
\[
g^{\out}(u,z_{\tau}(u)) = \partial_w M_P(u,z_{\tau}(u);\delta)+
\partial_w M_S(u,z_{\tau}(u);\delta)+
\partial_w M_R(u,z_{\tau}(u);\delta),
\]
with $M_P$, $M_S$ and $M_R$ as given in \eqref{def:expressionMJOuter},
\eqref{def:expressionMSOuter} and
\eqref{def:expressionMROuter}, respectively.
Then, taking into account that $F_{\pend}(z)=2z^3+\mathcal{O}(z^4)$ (see \eqref{def:Fpend}) and following the proofs of Lemmas \ref{lemma:boundsMPout} and \ref{lemma:boundsMSout}, it is a tedious but easy computation to see that
\begin{equation*}
\begin{split}
g^{\out}(u,z_{\tau}(u)) =& \,
\partial_w M_P(u,0,0,0;\delta) +
\partial_w M_S(u,0,0,0;\delta) \\
&- \frac{w_{\tau}(u)}{3 \Lambda_h^2(u)}
-\frac{\delta^2 \LtresLa(\delta)}{\Lambda_h(u)}
- 2 \delta^2\Lambda_h(u)
+
\delta^4\widetilde{m}(u;\delta),
\end{split}
\end{equation*}
and, by \eqref{proof:B33tildeDef},
\begin{equation}\label{proof:B33tildeDefB}
\begin{split}
\widetilde{\mathcal{B}}_{3,3}(u) =& \, \frac{i}{\delta^2} \boxClaus{\partial_w M_P(u,0,0,0;\delta) +
\partial_w M_S(u,0,0,0;\delta) } \\
&- i\frac{\wuOut(u)+\wsOut(u)}{6 \delta^2\Lambda_h^2(u)}
- i\frac{\LtresLa(\delta)}{\Lambda_h(u)}
- 2i \Lambda_h(u)
+
\delta^2\widetilde{m}(u;\delta).
\end{split}
\end{equation}
Next, we study the terms $\wusOut(u)$.
Since $H^{\out}=w + \frac{xy}{\delta^2} + M_P+M_S+M_R$ (see
\eqref{def:hamiltonianOuter} and \eqref{proof:H1outExpressionOuter}), one can see that
\[
H^{\out}(u,\zuOut(u);\delta)
=
H^{\out}(u,\zsOut(u);\delta)
=
\lim_{\Re u \to \pm \infty} H^{\out}(u,0,0,0;\delta)
=
\delta^4 K(\delta),
\]
with $\vabs{K(\delta)}\leq C$, for $\delta$ small enough.
Then, by Theorem \ref{theorem:existence}, for $\diamond=\unstable,\stable$,
\begin{align*}
\vabs{\wdOut(u) + M_P(u,\zdOut(u);\delta)+
M_S(u,\zdOut(u);\delta)+ M_R(u,\zdOut(u);\delta)}
\leq
\frac{C\delta^4}{\vabs{u^2+A^2}^{\frac83}}.
\end{align*}
Again, following the proofs of Lemmas \ref{lemma:boundsMPout} and \ref{lemma:boundsMSout}, one obtains
\begin{align*}
\vabs{\wdOut(u) + M_P(u,0,0,0;\delta)
+ M_S(u,0,0,0;\delta)
+\delta^2\Lambda_h(u)(3\LtresLa+2\Lambda_h^2(u))}
\leq \frac{C\delta^4}{\vabs{u^2+A^2}^{\frac83}},
\end{align*}
and, by \eqref{proof:B33tildeDefB},
\begin{equation*}
\begin{split}
\widetilde{\mathcal{B}}_{3,3}(u)
=& \,
- \frac{4i}3 \Lambda_h(u)
+
\frac{i}{\delta^2} \boxClaus{\partial_w M_P(u,0,0,0;\delta)
+ \frac{M_P(u,0,0,0;\delta)}{3\Lambda_h^2(u)}
}
\\
&+ \frac{i}{\delta^2} \boxClaus{\partial_w M_S(u,0,0,0;\delta)
+ \frac{M_S(u,0,0,0;\delta)}{3\Lambda_h^2(u)} }
+\delta^2\widetilde{m}(u;\delta).
\end{split}
\end{equation*}
Therefore, it only remains to check that
\begin{align*}
\vabs{\partial_w M_{P,S}(u,0,0,0;\delta)
+ \frac{M_{P,S}(u,0,0,0;\delta)}{3\Lambda_h^2(u)}}
\leq \frac{C\delta^4}{\vabs{u^2+A^2}^2}.
\end{align*}
Indeed, by \eqref{def:SerieP} and the definition \eqref{def:expressionMJOuter} of $M_P$, one has
\begin{align*}
M_P(u,w,0,0;\delta) = \mathcal{M}_P\paren{u,\delta^2\Lambda_h(u)-\frac{\delta^2 w}{3\Lambda_h(u)}+\delta^4\LtresLa(\delta)},
\end{align*}
where $\mathcal{M}_P(u,\Lambda)$ is an analytic function for $u \in \DBoomerang$ and $\vabs{\Lambda} \ll 1$.
Moreover, following the proof of Lemma \ref{lemma:boundsMPout}, there exist $a_0$ and $a_1$ such that
\begin{align*}
\vabs{\mathcal{M}_P(u,\Lambda)-a_0(u;\delta)- a_1(u;\delta) \Lambda} \leq
\frac{C\Lambda^2}{\vabs{u^2+A^2}^2},
\end{align*}
with
\[
\vabs{a_0(u;\delta)} \leq \frac{C\delta^4}{\vabs{u^2+A^2}^{\frac23}},
\qquad
\vabs{a_1(u;\delta)} \leq \frac{C}{\vabs{u^2+A^2}^{\frac23}}.
\]
Therefore,
\begin{align*}
\vabs{\partial_w M_{P}(u,0,0,0;\delta)
+ \frac{M_{P}(u,0,0,0;\delta)}{3\Lambda_h^2(u)}}
\leq& \,
\frac{\vabs{a_0(u)}}{3\Lambda_h^2(u)}
+ \frac{\delta^4\LtresLa(\delta)\vabs{a_1(u)}}{3\Lambda_h^2(u)}
+ \frac{C\delta^4}{\vabs{u^2+A^2}^{2}}
\\
\leq& \,
\frac{C\delta^4}{\vabs{u^2+A^2}^{2}} .
\end{align*}
An analogous estimate holds for $M_S$.
\end{proof}
\begin{proof}[End of the proof of Lemma \ref{lemma:boundsBonsBeta}]
By Lemma \ref{lemma:proofConstantA} and recalling that $u_+=i(A-\kappa \delta^2)$,
\begin{equation}
\label{proof:constantDifferenceZ}
\begin{split}
\log B_y(u_+)
=&
\int_{u_*}^{u_+} \widetilde{\mathcal{B}}_{3,3}^{\spl}(u) du
=
-\frac{4i}3 \int_{u_*}^{i A} \Lambda_h(u) du
\\ &+
\frac{4i}3 \int^{i A}_{u_+} \Lambda_h(u) du
+
\delta^2
\int_{u_*}^{u_+} {m}(u;\delta) \, du.
\end{split}
\end{equation}
Then, by Theorem \ref{theorem:singularities} and taking into account that $\kappa=\kappa_* \vabs{\log \delta}$ (see Lemma \ref{lemma:IconstantsDiff}), we obtain
\begin{align*}
\vabs{\log B_y(u_+) + \frac{4 i}3
\int_{u_*}^{i A} \Lambda_h(u) du}
\leq
\frac{C}{\kappa}
+
C \kappa^{\frac23}\delta^{\frac43}
+
\frac{C\delta^2}{\vabs{u_* - iA}}
\leq
\frac{C}{\vabs{\log \delta}}.
\end{align*}
Finally, recalling that $\dot{\lambda}_h=-3\Lambda_h$, applying the change of coordinates $\lambda=\lambda_h(u)$
and using that $\lambda_h(iA) = \pi$, we have that
\begin{align*}
\frac{4i}3 \int_{u_*}^{i A} \Lambda_h(u) du
=
-\frac{4i}9 \int_{\lambda_h(u_*)}^{\pi} d\lambda
=
-\frac{4i}9 \paren{\pi-\lambda_h(u_*)}.
\end{align*}
Joining the last statements with \eqref{proof:constantDifferenceZ}, we obtain the statement of the lemma.
\end{proof}
\subsection{Preliminaries and set-up}
Proposition \ref{proposition:innerDerivation} shows that the Hamiltonian $H^{\out}$ expressed in inner coordinates, that is $H^{\inn}$ as given in \eqref{def:hamiltonianInnerComplete},
is of the form $H^{\inn}=W+XY+ \mathcal{K}+H_1^{\inn}$.
Then, the equation associated to $H^{\inn}$ can be written as
\begin{equation}\label{eq:systemEDOsInnerComplete}
\left\{ \begin{array}{l}
\dot{U} = 1 + g^{\Inner}(U,Z) + g^{\mch}(U,Z),\\
\dot{Z} = \mathcal{A}^{\Inn} Z + f^{\Inner}(U,Z) + f^{\mch}(U,Z),
\end{array} \right.
\end{equation}
where $\mathcal{A}^{\Inner}$ is given in \eqref{def:matrixAAA} and
\begin{equation}\label{def:fgInnfgMch}
\begin{aligned}
f^{\Inn} &= \paren{-\partial_U \mathcal{K},
i \partial_Y \mathcal{K}, -i\partial_X \mathcal{K}}^T,
& \quad
g^{\inn} &= \partial_{W} \mathcal{K},
\\
f^{\mch} &= \paren{-\partial_U H_1^{\inn},
i \partial_Y H_1^{\inn}, -i\partial_X H_1^{\inn} }^T,
& \quad
g^{\mch} &= \partial_{W} H_1^{\inn}.
\end{aligned}
\end{equation}
Notice that, since $(u,\zuOut(u))=\phi_{\Inner}(U,\ZuMchO(U))$ (see \eqref{def:ZdMchO}),
$(U,\ZuMchO(U))$ is an invariant graph of equation \eqref{eq:systemEDOsInnerComplete}.
Therefore, $\ZuMchO$ satisfies the invariance equation
\begin{equation*
\begin{split}
\partial_U {\ZuMchO} &=
\mathcal{A}^{\Inner} \ZuMchO
+ \mathcal{R}^{\Inner}[\ZuMchO]
+ \mathcal{R}^{\mch}[\ZuMchO],
\end{split}
\end{equation*}
with $\mathcal{R}^{\Inner}$ as defined in \eqref{def:operatorRRRInner} and
\begin{equation}\label{def:operatorRRRmch}
\mathcal{R}^{\mch}[\phiA] =
\frac{\mathcal{A}^{\Inner} \phiA
+ f^{\Inn}(U, \phiA)
+ f^{\mch}(U, \phiA) }
{1+ g^{\Inn}(U, \phiA) + g^{\mch}(U, \phiA) }
- \mathcal{A}^{\Inner} \phiA
- \mathcal{R}^{\Inner}[\phiA].
\end{equation}
Similarly $\ZuInn$ satisfies the invariance equation
$
\partial_U \ZuInn =
\mathcal{A}^{\Inner} \ZuInn
+ \mathcal{R}^{\Inner}[\ZuInn]
$ (see Theorem \ref{theorem:innerComputations}) and,
therefore, the difference $\ZuMchU = \ZuMchO - \ZuInn$ must be a solution of
\begin{equation}\label{eq:invariantEquationMchU1}
\partial_U {\ZuMchU} =
\mathcal{A}^{\inn} \ZuMchU +
\mathcal{B}(U) \ZuMchU + \mathcal{R}^{\mch}[\ZuMchO],
\end{equation}
with
\begin{equation}\label{def:operatorBBmch}
\mathcal{B}(U) = \int_{0}^{1} D_{Z}\mathcal{R}^{\Inn}
[(1-s)\ZuInn + s \ZuMchO](U) ds.
\end{equation}
The key point is that, since the existence of both $\ZuInn$ and $\ZuMchO$ has already been proven,
we can treat $\mathcal{B}(U)$ and
$\mathcal{R}^{\mch}[\ZuMchO](U)$ as known functions. Therefore, equation~\eqref{eq:invariantEquationMchU1} can be understood as a non-homogeneous linear equation with independent term $\mathcal{R}^{\mch}[\ZuMchO](U)$.
Moreover, defining the linear operator
$\mathcal{L}^{\inn} \phiA = (\partial_U-\mathcal{A}^{\inn})\phiA$,
equation~\eqref{eq:invariantEquationMchU1} is equivalent to
\begin{equation}\label{eq:invariantEquationMchU2}
\mathcal{L}^{\inn}{\ZuMchU}(U) =
\mathcal{B}(U)\ZuMchU(U)
+\mathcal{R}^{\mch}[\ZuMchO](U).
\end{equation}
We prove Theorem \ref{theorem:matching} by solving this equation (with suitable initial conditions).
To this end, we define the Banach space
$
\XcalMchTotal = \XcalMch_{\frac{4}{3}} \times \XcalMch_1 \times \XcalMch_1
$
with
\begin{equation*}
\XcalMch_{\alpha}=
\Bigg\{ \phiA: \DuMchInn \to \mathbb{C} \, : \,
\phiA \text{ real-analytic, }
\normMch{\phiA}_{\alpha}=\sup_{U \in \DuMchInn} \vabs{U^{\alpha}\phiA(U)} < \infty \Bigg\},
\end{equation*}
endowed with the product norm
$
\normMchTotal{\phiA} = \normMch{\phiA_1}_{\frac{4}{3}} + \normMch{\phiA_2}_{1} + \normMch{\phiA_3}_{1}.
$
The next lemma gives some properties of these Banach spaces.
\begin{lemma}\label{lemma:sumNormsMch}
Let $\gamma \in [\frac{3}{5},1)$ and $\alpha, \beta \in \mathbb{R}$. The following statements hold:
\begin{enumerate}
\item If $\varphi \in \XcalMch_{\alpha}$, then $\varphi \in \XcalMch_{\beta}$.
Moreover,
\begin{align*}
\begin{cases}
\normMch{\varphi}_{\beta} \leq
C \kappa^{\beta-\alpha}
\normMch{\varphi}_{\alpha},
&\quad \text{for } \alpha > \beta, \\
\normMch{\varphi}_{\beta} \leq
C \delta^{2(\alpha-\beta)(1-\gamma)}
\normMch{\varphi}_{\alpha},
&\quad \text{for } \alpha < \beta.
\end{cases}
\end{align*}
\item If $\varphi \in \XcalMch_{\alpha}$ and
$\zeta \in \XcalMch_{\beta}$, then
$\varphi \zeta \in \XcalMch_{\alpha+\beta}$ and
$
\normMch{\varphi \zeta}_{\alpha+\beta} \leq \normMch{\varphi}_{\alpha} \normMch{\zeta}_{\beta}.
$
\end{enumerate}
\end{lemma}
This lemma is a direct consequence of the fact that, as explained in Section \ref{subsection:matching}, $U$ satisfies
\begin{equation}\label{eq:boundsDomainMchInner}
\kappa \cos \betaMchA \leq
\vabs{U} \leq
\frac{C}{\delta^{2(1-\gamma)}}.
\end{equation}
Now, we present the main result of this section, which implies Theorem~\ref{theorem:matching}.
\begin{proposition}\label{theorem:matchingProof}
There exist $\gamma^*\in[\frac35,1)$,
$\kappaMch\geq\max\claus{\kappaOuter,\kappaInner}$,
$\delta_0>0$ and $\cttMchA>0$ such that,
for $\gamma \in (\gamma^*,1)$,
$\kappa\geq \kappaMch$
and $\delta \in (0,\delta_0)$, $\ZuMchU$ satisfies
$\normMchTotal{\ZuMchU} \leq \cttMchA \, \delta^{\frac{2}{3}(1-\gamma)}$.
\end{proposition}
\subsection{An integral equation formulation}
\label{subsection:linearOperatorsMch}
To prove Proposition \ref{theorem:matchingProof}, we first introduce a right-inverse of $\mathcal{L}^{\inn}=\partial_U-\mathcal{A}^{\inn}$.
\begin{lemma}\label{lemma:operadorEEmch}
The operator $\mathcal{G}^{\inn}[\phiA]=\paren{\mathcal{G}^{\inn}_1[\phiA_1],\mathcal{G}^{\inn}_2[\phiA_2],\mathcal{G}^{\inn}_3[\phiA_3]}^T$
defined as
\begin{equation}\label{def:operatorEEmch}
\mathcal{G}^{\inn}[\phiA](U) = \paren{
\int_{U_3}^U \phiA_1(S) dS, \,
\int^U_{U_3} e^{-i(S-U)} \phiA_2(S) dS, \,
\int^U_{U_2} e^{i(S-U)}\phiA_3(S) dS}^T,
\end{equation}
where $U_2$ and $U_3$ are introduced in~\eqref{def:U12}, is a right inverse of $\mathcal{L}^{\inn}$.
Moreover, there exists a constant $C>0$ such that:
\begin{enumerate}
\item Let $\alpha>1$. If $\phiA \in \XcalMch_{\alpha}$, then $\mathcal{G}^{\inn}_1[\phiA] \in \XcalMch_{\alpha-1}$ and
$
\normMch{\mathcal{G}^{\inn}_1[\phiA]}_{\alpha-1} \leq C \normMch{\phiA}_{\alpha}.
$
\item Let $\alpha>0$, $j=2,3$.
If $\phiA \in \XcalMch_{\alpha}$,
then $\mathcal{G}^{\inn}_j[\phiA] \in \XcalMch_{\alpha}$ and
$
\normMchSmall{\mathcal{G}^{\inn}_j[\phiA]}_{\alpha} \leq C \normMch{\phiA}_{\alpha}.
$
\end{enumerate}
\end{lemma}
The proof of this lemma follows the same lines as the proof of Lemma 20 in \cite{BCS13}.
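In particular, one can check directly that $\mathcal{G}^{\inn}$ is a right inverse of $\mathcal{L}^{\inn}$: for instance, for the second component, differentiating under the integral sign in \eqref{def:operatorEEmch} gives
\begin{align*}
\partial_U \mathcal{G}^{\inn}_2[\phiA_2](U)
= \phiA_2(U) + i \int_{U_3}^{U} e^{-i(S-U)} \phiA_2(S) \, dS
= \phiA_2(U) + i \, \mathcal{G}^{\inn}_2[\phiA_2](U),
\end{align*}
that is, $(\partial_U - i)\,\mathcal{G}^{\inn}_2[\phiA_2]=\phiA_2$. The first and third components are analogous.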
Using the operator $\mathcal{G}^{\inn}$, equation~\eqref{eq:invariantEquationMchU2} is equivalent to
\begin{equation*}
\ZuMchU(U) = C^{\mch} e^{\mathcal{A}^{\inn} U} +
\mathcal{G}^{\inn} \left[ \mathcal{B} \cdot \ZuMchU \right] (U) +
\paren{\mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]} (U),
\end{equation*}
where $C^{\mch}=(C^{\mch}_W,C^{\mch}_X, C^{\mch}_Y)^T$ is defined as
\begin{equation*}
C_W^{\mch} =\WuMchU(U_3), \qquad
C_X^{\mch} =e^{-i U_3}\XuMchU(U_3), \qquad
C_Y^{\mch} =e^{i U_2}\YuMchU(U_2).
\end{equation*}
Then, defining the operator
$
\mathcal{T}[\phiA](U) =
\mathcal{G}^{\inn} \left[ \mathcal{B} \cdot \phiA \right](U),
$
this equation is equivalent to
\begin{equation}\label{eq:invariantEquationMchU4}
(\mathrm{Id} - \mathcal{T})\ZuMchU = C^{\mch} e^{\mathcal{A}^{\inn} U} +
\paren{\mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]}
\end{equation}
and therefore, to estimate $\ZuMchU$, we need to prove that $\mathrm{Id}-\mathcal{T}$ is invertible in $\XcalMchTotal$.
\begin{lemma}\label{lemma:operatorTTmch}
Let us consider operators $\mathcal{B}$ and $\mathcal{G}^{\inn}$ as given in \eqref{def:operatorBBmch} and \eqref{def:operatorEEmch}.
Then, for $\gamma \in [\frac35,1)$, $\kappa>0$ big enough, $\delta>0$ small enough and $\phiA \in \XcalMchTotal$,
\begin{align*}
\normMchTotal{\mathcal{T}[\phiA]}
= \normMchTotal{\mathcal{G}^{\inn}[\mathcal{B}\cdot \phiA]}
\leq \frac{1}{2} \normMchTotal{\phiA}
\end{align*}
and therefore
\begin{align*}
\normMchTotal{(\mathrm{Id}-\mathcal{T})^{-1}[\phiA]} \leq 2 \normMchTotal{\phiA}.
\end{align*}
\end{lemma}
To prove this lemma, we use the following estimates, which follow directly from Lemma 5.5 in \cite{articleInner}.
\begin{lemma}\label{lemma:technicalMatching}
Fix $\varrho>0$ and take $\kappa>0$ big enough. Then, there exists a constant $C$ (depending on $\varrho$ but independent of $\kappa$) such that, for $\phiA \in \XcalMchTotal$ with $\normMchTotal{\phiA}\leq\varrho$, the functions $g^{\inn}$ and $f^{\inn}$ in \eqref{def:fgInnfgMch} and the operator $\mathcal{R}^{\inn}$ in \eqref{def:operatorRRRInner} satisfy
\begin{align*}
\normMch{g^{\inn}(\cdot,\phiA)}_2 \leq C,
\qquad
\normMch{f_1^{\inn}(\cdot,\phiA)}_{\frac{11}3} \leq C,
\qquad
\normMch{f_j^{\inn}(\cdot,\phiA)}_{\frac43}
\leq C,
\quad j=2,3
\end{align*}
and
\begin{align*}
\normMch{\partial_W \mathcal{R}^{\inn}_1[\phiA]}_3 &\leq C, &
\normMch{\partial_X \mathcal{R}^{\inn}_1[\phiA] }_{\frac73} &\leq C, &
\normMch{\partial_Y \mathcal{R}^{\inn}_1[\phiA] }_{\frac73} &\leq C,
\\
\normMch{\partial_W \mathcal{R}^{\inn}_j[\phiA]}_{\frac23} &\leq C, &
\normMch{\partial_X \mathcal{R}^{\inn}_j[\phiA] }_{2} &\leq C, &
\normMch{\partial_Y \mathcal{R}^{\inn}_j[\phiA] }_{2} &\leq C,
\quad j=2,3.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:operatorTTmch}]
Let $\ZuMchO$ be as given in \eqref{def:ZdMchO}. Then, by Proposition \ref{proposition:existenceComecocos},
estimates \eqref{eq:boundsDomainMchInner} and taking $\gamma \in [\frac35,1)$, we have that, for $U \in \DuMchInn$,
\begin{align}\label{eq:boundsZMchO}
\vabs{\WuMchO(U)} \leq \frac{C}{\vabs{U}^{\frac{8}{3}}}
+ \frac{C \delta^{\frac{4}{3}}}{\vabs{U}}
\leq
\frac{C}{\vabs{U}^{\frac{8}{3}}}, \qquad
%
\normMch{\XuMchO}_{\frac43} \leq C, \qquad
%
\normMch{\YuMchO}_{\frac43} \leq C.
\end{align}
Then, using also Theorem \ref{theorem:innerComputations}, we obtain that
$(1-s)\ZuInn+s\ZuMchO \in \XcalMchTotal$ for $s \in [0,1]$ and $\gamma \in [\frac35,1)$ and
$
\normMchTotal{(1-s)\ZuInn+s\ZuMchO}\leq C.
$
As a result, using the definition of $\mathcal{B}$ in \eqref{def:operatorBBmch} and Lemma \ref{lemma:technicalMatching},
\begin{equation}\label{eq:boundsOperadorBBmch}
\begin{aligned}
\normMch{\mathcal{B}_{1,1}}_3 &\leq C, &
\normMch{\mathcal{B}_{1,2}}_{\frac73} &\leq C, &
\normMch{\mathcal{B}_{1,3}}_{\frac73} &\leq C, \\
\normMch{\mathcal{B}_{j,1}}_{\frac23} &\leq C, &
\normMch{\mathcal{B}_{j,2}}_2 &\leq C, &
\normMch{\mathcal{B}_{j,3}}_2 &\leq C,
\quad \text{for } j=2,3.
\end{aligned}
\end{equation}
Therefore, by Lemmas~\ref{lemma:operadorEEmch} and \ref{lemma:sumNormsMch} and \eqref{eq:boundsOperadorBBmch}, we obtain
\begin{align*}
\normMch{\mathcal{T}_1[\phiA]}_{\frac{4}{3}} &\leq
C \normMch{\pi_1 \paren{\mathcal{B} \phiA} }_{\frac{7}{3}}\\
&\leq C \boxClaus{\normMch{\mathcal{B}_{1,1}}_{1} \normMch{\phiA_1}_{\frac{4}{3}}+
\normMch{\mathcal{B}_{1,2}}_{\frac{4}{3}} \normMch{\phiA_2}_{1}+
\normMch{\mathcal{B}_{1,3}}_{\frac{4}{3}} \normMch{\phiA_3}_{1}} \\
&\leq
\frac{C}{\kappa^2} \normMch{\phiA_1}_{\frac{4}{3}} +
\frac{C}{\kappa} \normMch{\phiA_2}_{1} +
\frac{C}{\kappa} \normMch{\phiA_3}_{1}
\leq \frac{C}{\kappa} \normMchTotal{\phiA}.
\end{align*}
Proceeding analogously, for $j=2,3$, we have
\begin{align*}
\normMch{\mathcal{T}_j[\phiA]}_{1}
&\leq C \boxClaus{\normMch{\mathcal{B}_{j,1}}_{-\frac{1}{3}}
\normMch{\phiA_1}_{\frac{4}{3}}+
\sum_{l=2}^3
\normMch{\mathcal{B}_{j,l}}_{0} \normMch{\phiA_l}_{1}}
%
\leq \frac{C}{\kappa} \normMchTotal{\phiA}.
\end{align*}
Taking $\kappa>0$ big enough, we obtain the statement of the lemma.
\end{proof}
\subsection{End of the proof of Proposition \ref{theorem:matchingProof}}
To complete the proof of Proposition \ref{theorem:matchingProof}, we study the right-hand side of equation \eqref{eq:invariantEquationMchU4}.
First, we deal with the term $C^{\mch} e^{\mathcal{A}^{\inn} U}$.
Recall that $U_2$ and $U_3$ in \eqref{def:U12} satisfy
\begin{equation*
\frac{C^{-1}}{\delta^{2(1-\gamma)}} \leq
\vabs{U_j} \leq
\frac{C}{\delta^{2(1-\gamma)}},
\hspace{5mm}
\text{ for } j=2,3.
\end{equation*}
Then, taking into account that $\WuMchU= \WuMchO-\WuInn$, \eqref{eq:boundsZMchO} and Theorem \ref{theorem:innerComputations} imply
\begin{align*}
|C^{\mch}_W| = \vabs{\WuMchU(U_3)} \leq \vabs{\WuMchO(U_3)}+\vabs{\WuInn(U_3)}
\leq \frac{C}{\vabs{U_3}^{\frac{8}{3}}}
\leq
C \delta^{\frac{16}{3}(1-\gamma)}
\end{align*}
and, as a result, by Lemma \ref{lemma:sumNormsMch},
$
\normMchSmall{C^{\mch}_W}_{\frac{4}{3}}
\leq
C \delta^{\frac{8}{3}(1-\gamma)}.
$
Analogously, for $U \in \DuMchInn$,
\begin{align*}
|C^{\mch}_X e^{iU}|
&= |e^{i(U-U_3)} \XuMchU(U_3)|
\leq \frac{C e^{-\Im(U-U_3)}}{\vabs{U_3}^{\frac{4}{3}}}
\leq C \delta^{\frac{8}{3}(1-\gamma)}
\end{align*}
and then
$
\normMchSmall{C^{\mch}_X e^{iU}}_1
\leq C \delta^{\frac{2}{3}(1-\gamma)}.
$
An analogous result holds for $C^{\mch}_Y e^{-iU}$. Therefore,
\begin{equation}\label{eq:independentTermsMchA}
\normMchTotalSmall{C^{\mch} e^{\mathcal{A}^{\inn} U}}\leq
C \delta^{\frac{2}{3}(1-\gamma)}.
\end{equation}
Now, we estimate the norm of $\mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]$.
The operator $\mathcal{R}^{\mch}$ in~\eqref{def:operatorRRRmch} can be rewritten as
\begin{align*}
\mathcal{R}^{\mch}[\ZuMchO] = \frac{f^{\mch}(1+g^{\Inn})-
g^{\mch}(\mathcal{A}^{\Inn}\ZuMchO+f^{\Inn})}{(1+g^{\Inn})(1+g^{\Inn}+g^{\mch})}.
\end{align*}
Then by \eqref{eq:boundsZMchO}, Lemmas \ref{lemma:sumNormsMch} and \ref{lemma:technicalMatching} and taking $\kappa$ big enough, we obtain
\begin{equation}\label{proof:boundsfgInn}
\begin{aligned}
\normMch{g^{\inn}(\cdot,\ZuMchO)}_0 &\leq \frac{C}{\kappa^2} \leq \frac12, &\quad
%
\normMch{i\XuMchO + f_2^{\inn}(\cdot,\ZuMchO)}_0
&\leq C, \\
%
\normMch{f_1^{\inn}(\cdot,\ZuMchO)}_0 &\leq C,
&\quad
%
\normMch{-i\YuMchO + f_3^{\inn}(\cdot,\ZuMchO)}_0 &\leq C.
\end{aligned}
\end{equation}
To analyze
$f^{\mch}$ and
$g^{\mch}$ (see \eqref{def:fgInnfgMch}) we rely on the estimates for $H_1^{\inn}$ in \eqref{eq:boundsH1Inn} and its derivatives, which can be easily obtained by Cauchy estimates.
Indeed, they can be applied since $U \in \DuMchInn$ and, by \eqref{eq:boundsZMchO},
\[
\vabs{\WuMchO(U)},\vabs{\XuMchO(U)},\vabs{\YuMchO(U)}\leq C.
\]
Then, there exists $m>0$ such that
\begin{align}\label{proof:boundsMchA}
|g^{\mch}(U,\ZuMchO)| \leq C
\delta^{\frac43-2 m(1-\gamma)},
\quad
|f_j^{\mch}(U,\ZuMchO)|\leq C \delta^{\frac43-2 m(1-\gamma)},
\text{ for } j=1,2,3.
\end{align}
We note that, for $\gamma \in (\gamma^*_0,1)$ with $\gamma^*_0=\max\{\frac{3}{5},\frac{3m-2}{3m}\}$, we have that ${\frac43}-2m(1-\gamma)>0$.
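Indeed,
\begin{align*}
\tfrac{4}{3}-2m(1-\gamma)>0
\quad \Longleftrightarrow \quad
1-\gamma < \tfrac{2}{3m}
\quad \Longleftrightarrow \quad
\gamma > \tfrac{3m-2}{3m}.
\end{align*}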
Then, for $\gamma \in (\gamma^*_0,1)$, $\delta$ small enough and $\kappa$ big enough, using \eqref{proof:boundsfgInn} and
\eqref{proof:boundsMchA} we obtain
\begin{align*
|\mathcal{R}^{\mch}_j[\ZuMchO](U)| \leq C \delta^{\frac43-2 m(1-\gamma)}, \qquad
\text{ for } j=1,2,3.
\end{align*}
Then, by Lemmas \ref{lemma:sumNormsMch} and \ref{lemma:operadorEEmch},
\begin{align*}
\normMchTotalSmall{\mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]} &=
\normMchSmall{\mathcal{G}^{\inn}_1 \circ \mathcal{R}_1^{\mch}[\ZuMchO]}_{\frac{4}{3}}
+
\textstyle\sum_{j=2}^3
\normMchSmall{\mathcal{G}^{\inn}_j \circ \mathcal{R}_j^{\mch}[\ZuMchO]}_{1} \\
&\leq
C \normMchSmall{\mathcal{R}_1^{\mch}[\ZuMchO]}_{\frac{7}{3}} +
\textstyle
\sum_{j=2}^3
C \normMchSmall{\mathcal{R}_j^{\mch}[\ZuMchO]}_{1}
\leq
C \delta^{\frac43-2\paren{m+\frac73}(1-\gamma)}.
\end{align*}
If we take $\gamma^*=\max\claus{\frac35, \gamma^*_0, \gamma^*_1}$
with $\gamma^*_1=\frac{3m+6}{3m+8}$,
and $\gamma \in (\gamma^*,1)$,
\begin{align}\label{eq:independentTermsMchB}
\normMchTotalSmall{\mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]}
\leq
C \delta^{\frac23(1-\gamma)}.
\end{align}
To complete the proof of Proposition~\ref{theorem:matchingProof}, we consider
equation~\eqref{eq:invariantEquationMchU4}. By Lemma \ref{lemma:operatorTTmch}, $(\mathrm{Id} - \mathcal{T})$ is invertible in $\XcalMchTotal$ and moreover
\begin{equation*}
\begin{split}
\normMchTotal{\ZuMchU}& =
\normMchTotal{ (\mathrm{Id} - \mathcal{T})^{-1}
\paren{C^{\mch} e^{\mathcal{A}^{\inn} U} + \mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]}}\\
&\leq 2 \normMchTotal{ C^{\mch} e^{\mathcal{A}^{\inn} U} + \mathcal{G}^{\inn} \circ \mathcal{R}^{\mch}[\ZuMchO]}.
\end{split}
\end{equation*}
Then, it is enough to apply \eqref{eq:independentTermsMchA} and \eqref{eq:independentTermsMchB}.
\qed
\subsection{Proof of Lemma~\ref{lemma:computationsRRRtransitionOuter}}
\label{subsubsection:ProofcomputationsRRRtransitionOuter}
First, let us notice that, since $\DOuterTilde \subset \DuOut$, we have that $\XcalOut_{0,0} \subset \YcalOuter$ (see \eqref{def:XcalOut}).
Therefore, for $\alpha,\beta \in \mathbb{R}$, if $\phiA \in \XcalOut_{\alpha,\beta}$ then $\phiA \in \YcalOuter$ and
\begin{equation*
\normSup{\phiA} \leq C \normOut{\phiA}_{\alpha,\beta}.
\end{equation*}
Let $\phiA \in \YcalOuter$ be such that $\normSup{\phiA}\leq \varrho \delta^2$ and let $v \in \DOuterTilde$.
Then, for $\delta$ small enough, $v+\phiA(v) \in \DuOut$.
As a result, we can apply the estimates of the derivatives of $H_1^{\out}$, obtained in the proof of Lemma \ref{lemma:computationsRRROut}, to the operator $R[\phiA]$ (see \eqref{def:operatorRRRtransition}).
Indeed, by \eqref{proof:estimatef1OutDCerivatives},
\begin{align*}
\normSup{R[\phiA]}
\leq
\normOut{\partial_w H_1^{\out}
\paren{\cdot, \zuOut}}_{1,-\frac{2}{3}}
\leq
C\delta^2.
\end{align*}
Analogously, we can compute estimates for the derivative $DR[\phiA]$. Indeed, applying the results in Proposition \ref{proposition:existenceComecocos},
\eqref{proof:estimategOutDerivatives} and \eqref{proof:estimatef1OutDCerivatives}, we obtain
\begin{align*}
\normSup{DR[\phiA]} \leq&
\normOut{\partial_{uw} H_1^{\out}(\cdot,\zuOut)}_{1,\frac{1}{3}}
+
\normOut{\partial^2_{w} H_1^{\out}(\cdot,\zuOut)}_{0,-\frac{2}{3}}
\normOut{\partial_u \wuOut}_{1,1} \\
&+
\normOut{\partial_{wx} H_1^{\out}(\cdot,\zuOut)}_{0,\frac53}
\normOut{\partial_u \xuOut}_{0,\frac53}
+
\normOut{\partial_{wy} H_1^{\out}(\cdot,\zuOut)}_{1,-2}
\normOut{\partial_u \yuOut}_{0,\frac{7}{3}}
\leq C \delta^2.
\end{align*}
\qed
\subsection{Proof of Lemma~\ref{lemma:computationsRFlow}}
\label{subsubsection:ProofComputationsRFlow}
By the definition of $\mathcal{R}^{\flow}[\phiA]$ in \eqref{def:operatorRFlow}, we need to estimate the derivatives of $H_1(\Gamma_h(v)+\phiA(v);\delta)$ and
\begin{align}\label{proof:functionTopertatorRflow}
T[\phiA_{\lambda}] = -V'(\lambda_h+\phiA_{\lambda}) +V'(\lambda_h)
+ V''(\lambda_h)\phiA_{\lambda},
\end{align}
where the potential $V$ is given in \eqref{def:potentialV}.
Let us consider $\phiA=(\phiA_{\lambda},\phiA_{\Lambda},\phiA_x,\phiA_y) \in \XcalFlowTotal$ such that $\normFlowTotal{\phiA} \leq \varrho \delta^3$.
By Theorem \ref{theorem:singularities}, for $v \in \DFlow$ and $\delta$ small enough,
\begin{align}\label{proof:estimatesFlow}
\vabs{\lambda_h(v)+\phiA_{\lambda}(v)} < \pi, \quad
\vabs{\Lambda_h(v)+\phiA_{\Lambda}(v)} < C, \quad
\vabs{\phiA_{x}(v)} < C\delta^3, \quad
\vabs{\phiA_{y}(v)} < C\delta^3.
\end{align}
Then, by definition of $H_1$ in \eqref{def:hamiltonianScalingH1}, we have that
\begin{align*}
H_1(\Gamma_h(v)+\phiA(v);\delta) =& \,
H_1^{\Poi}\big(\lambda_h(v)+\phiA_{\lambda}(v),
1+\delta^2(\Lambda_h(v)+\phiA_{\Lambda}(v)),
\delta \phiA_x(v),\delta \phiA_y(v);\delta \big) \\
&- V(\lambda_h(v)+\phiA_{\lambda}(v)) +
\frac{1}{\delta^4} F_{\pend}(\delta^2\Lambda_h(v)+\delta^2\phiA_{\Lambda}(v)),
\end{align*}
where $H_1^{\Poi}$ is given in \eqref{def:hamiltonianPoincare1} and $F_{\pend}$ in \eqref{def:Fpend}.
(Recall that $F_{\pend}(s)=\mathcal{O}(s^3)$.)
By \eqref{proof:estimatesFlow} we have that, for $\delta$ small enough, $\vabs{(\delta^2(\Lambda_h(v)+\phiA_{\Lambda}(v)),
\delta \phiA_x(v), \delta \phiA_y(v))} \ll 1$ and
$\vabs{\lambda_h(v)+\phiA_{\lambda}(v)}<\pi$. As a result, we can apply the results in Corollary \ref{corollary:seriesH1Poi}.
Indeed, for $v \in \DFlow$,
\begin{equation}\label{proof:flowA}
\begin{aligned}
\vabs{\partial_{\lambda} H_1(\Gamma_h(v)+\phiA(v);\delta)}
&\leq C \delta^2,
& \quad
\vabs{\partial_{x} H_1(\Gamma_h(v)+\phiA(v);\delta)}
&\leq C \delta,
\\
\vabs{\partial_{\Lambda} H_1(\Gamma_h(v)+\phiA(v);\delta)}
&\leq C \delta^2,
& \quad
\vabs{\partial_{y} H_1(\Gamma_h(v)+\phiA(v);\delta)}
&\leq C \delta,
\end{aligned}
\end{equation}
and
\begin{equation}\label{proof:flowB}
\begin{split}
\vabs{\partial_{i j} H_1(\Gamma_h(v)+\phiA(v);\delta)} &\leq C \delta,
\qquad \text{for } i,j \in \claus{\lambda,\Lambda,x,y}.
\end{split}
\end{equation}
Then, it only remains to compute estimates for $T[\phiA_{\lambda}]$.
By \eqref{proof:functionTopertatorRflow} and
\eqref{proof:estimatesFlow} we have that, for $v \in \DFlow$,
\begin{equation}\label{proof:flowC}
\begin{split}
\vabs{T[\phiA_{\lambda}](v)} \leq C \vabs{V''(\lambda_h(v))}
\vabs{\phiA_{\lambda}(v)} \leq C \delta^2, \\
\vabs{D T[\phiA_{\lambda}](v)} \leq C \vabs{V'''(\lambda_h(v))}
\vabs{\phiA_{\lambda}(v)} \leq C \delta^2.
\end{split}
\end{equation}
Lastly, joining the estimates from \eqref{proof:flowA},
\eqref{proof:flowB} and \eqref{proof:flowC} we obtain the statement of the lemma.
\qed
\subsection{Estimates in the infinity domain}
\label{subsubsection:ProofComputationsRRRInf}
To prove Lemma~\ref{lemma:computationsRRRInf}, we need to obtain estimates for $\mathcal{R}^{\out}$ and its derivatives.
Let us recall that, by its definition in~\eqref{def:operatorRRROuter}, for $z=(w,x,y)$ we have
\begin{align}\label{eq:expressionRRRout}
\mathcal{R}^{\out}[z] = \paren{
\frac{f_1^{\out}(\cdot,z)}{1+g^{\out}(\cdot,z)},
\frac{{f_2}^{\out}(\cdot,z) - \frac{i x}{\delta^2} \, g^{\out}(\cdot,z)}{1+g^{\out}(\cdot,z)},
\frac{{f_3}^{\out}(\cdot,z) + \frac{i y}{\delta^2} \, g^{\out}(\cdot,z)}{1+g^{\out}(\cdot,z)}
}^T,
\end{align}
where $g^{\out} = \partial_w H_1^{\out}$ and $f^{\out}=\paren{-\partial_u H_1^{\out}, i\partial_y H_1^{\out}, -i\partial_x H_1^{\out} }^T$.
Therefore, we first need to obtain estimates for the first and second derivatives of $H_1^{\out}$, introduced in~\eqref{def:hamiltonianOuterSplit}, that is
\begin{equation}\label{proof:H1outForM1}
H_1^{\out} = H \circ (\phi_{\equi} \circ \phi_{\out}) - \paren{w + \frac{xy}{\delta^2}},
\end{equation}
where $H=H_0+H_1$ with $H_0= H_{\pend}+ H_{\osc}$ (see \eqref{def:hamiltonianScaling}, \eqref{def:hamiltonianScalingH0}).
Since $(\lambda_h,\Lambda_h)$ is a solution of the Hamiltonian $H_{\pend}$ and belongs to the energy level $H_{\pend}=-\frac12$,
\begin{align*}
H_0 \circ \phi_{\out}
=
H_{\pend}\paren{\lambda_h(u),\Lambda_h(u)-\frac{w}{3 \Lambda_h(u)}}
+
H_{\osc}(x,y;\delta)
=
-\frac12
+ w
- \frac{w^2}{6\Lambda_h^2(u)}
+ \frac{x y}{\delta^2}.
\end{align*}
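Here, the pendulum part has been expanded using that the kinetic term of $H_{\pend}$ is $-\frac32 \Lambda^2$ (consistently with $\dot{\lambda}_h = \partial_{\Lambda} H_{\pend}(\lambda_h,\Lambda_h) = -3\Lambda_h$):
\begin{align*}
-\frac{3}{2}\paren{\Lambda_h(u)-\frac{w}{3\Lambda_h(u)}}^2
=
-\frac{3}{2}\Lambda_h^2(u) + w - \frac{w^2}{6\Lambda_h^2(u)},
\end{align*}
so that, on the energy level $H_{\pend}(\lambda_h,\Lambda_h)=-\frac12$, one recovers the terms above.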
Therefore, by \eqref{proof:H1outForM1}, the Hamiltonian $H_1^{\out}$ can be expressed (up to a constant) as
\begin{align}\label{eq:expressionHamiltonianInfty}
H_1^{\out} =
M \circ \phi_{\out}
- \frac{w^2}{6\Lambda_h^2(u)} ,
\end{align}
where
\begin{align*}
M(\lambda,\Lambda,x,y;\delta) =
(H \circ \phi_{\equi})(\lambda,\Lambda,x,y;\delta) -
H_0(\lambda,\Lambda,x,y).
\end{align*}
In the following lemma we give properties of $M$.
\begin{lemma}\label{lemma:expressionHamiltonianInfty}
Fix constants $\varrho>0$ and $\lambda_0 \in (0,\pi)$.
Then, there exists $\delta_0>0$ such that,
for $\delta \in (0,\delta_0)$,
$\vabs{\lambda}<\lambda_0$,
$\vabs{\Lambda}<\varrho$
and
$\vabs{(x,y)}<\varrho\delta$,
the function $M$ satisfies
\begin{align*}
\vabs{ \partial_{\lambda} M} &\leq
C\delta^2 \vabs{(\lambda,\Lambda)} + C\delta \vabs{(x,y)}, &
\vabs{ \partial_{x} M} &\leq
C\delta \vabs{(\lambda,\Lambda,x,y)}, \\
\vabs{ \partial_{\Lambda} M} &\leq
C\delta^2 \vabs{(\lambda,\Lambda)} + C\delta \vabs{(x,y)}, &
\vabs{ \partial_{y} M} &\leq
C\delta \vabs{(\lambda,\Lambda,x,y)},
\end{align*}
and
\begin{align*}
\vabs{ \partial^2_{\lambda} M},
\vabs{ \partial_{\lambda\Lambda} M},
\vabs{ \partial^2_{\Lambda} M}
&\leq C \delta^2,
\qquad
\vabs{\partial_{i j} M} \leq C \delta,
\quad \text{for } i,j \in \claus{\lambda,\Lambda,x,y}.
\end{align*}
\end{lemma}
\begin{proof}
Applying $\phi_{\equi}$ (see \eqref{def:changeEqui}) to the Hamiltonian $H=H_0 + H_1$, we have that
\begin{equation}\label{proof:M1expressionLtres}
\begin{split}
M &=
\paren{H_0 \circ \phi_{\equi} - H_0}
+ H_1 \circ \phi_{\equi}
\\
&= \delta( x \Ltresy + y\Ltresx)
+ 3\delta^2 \Lambda \LtresLa
+ \delta^4 \paren{-\frac{3}{2}\LtresLa^2 +\Ltresx \Ltresy}
+ H_1 \circ \phi_{\equi}.
\end{split}
\end{equation}
Then,
\begin{align}\label{proof:estimatesDerivativesM1B}
\vabs{\partial_{i j} M} \leq
\vabs{\partial_{i j} H_1(\lambda,\Lambda + \delta^2\LtresLa, x+ \delta^3 \Ltresx, y + \delta^3 \Ltresy;\delta)}
,
\quad \text{for } i,j \in \claus{\lambda,\Lambda,x,y}.
\end{align}
Since $\vabs{\Lambda}<\varrho$
and $\vabs{(x,y)}<\varrho\delta$,
then
$\vabs{\Lambda+\delta^2\LtresLa}<2\varrho$
and ${\vabs{(x+\delta^3 \Ltresx,y+\delta^3\Ltresy)}<2\varrho\delta}$,
for $\delta$ small.
By the definition of $H_1$
in \eqref{def:hamiltonianScalingH1} we have that,
\begin{align*}
H_1(\lambda,\Lambda,x,y;\delta) &=
H_1^{\Poi}
\paren{\lambda,1+\delta^2\Lambda,\delta x, \delta y;\delta^4}
- V\paren{\lambda}
+ \frac{1}{\delta^4} F_{\pend}(\delta^2\Lambda),
\end{align*}
where $H_1^{\Poi}$ is given in \eqref{def:hamiltonianPoincare01} (see also \eqref{def:changeScaling}),
$V$ is given in \eqref{def:potentialV}
and $F_{\pend}$ is given in \eqref{def:Fpend} and satisfies $F_{\pend}(s)=\mathcal{O}(s^3)$.
Since $\vabs{(\delta^2\Lambda,\delta x, \delta y)} < 2\varrho \delta^2 \ll 1$, we apply Corollary \ref{corollary:seriesH1Poi} (recall that $\delta=\mu^{\frac14}$) and Cauchy estimates to
obtain
\begin{align}\label{proof:estimatesDerivativesM1C}
\vabs{ \partial^2_{\lambda} H_1},
\vabs{ \partial_{\lambda\Lambda} H_1},
\vabs{ \partial^2_{\Lambda} H_1}
&\leq C \delta^2,
\qquad
\vabs{\partial_{i j} H_1} \leq C \delta,
\quad \text{for } i,j \in \claus{\lambda,\Lambda,x,y}.
\end{align}
Then, \eqref{proof:estimatesDerivativesM1B} and
\eqref{proof:estimatesDerivativesM1C} give the estimates for the second derivatives of $M$.
For the first derivatives of $M$, let us take into account that, by Theorem~\ref{theorem:singularities}, $0$ is a critical point of both Hamiltonians $(H \circ \phi_{\equi})$ and $H_0$ and,
therefore, also of $M= (H \circ \phi_{\equi})- H_0$.
This fact and the estimates of the second derivatives, together with the mean value theorem, give the estimates for the first derivatives of $M$.
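More concretely, since the first derivatives of $M$ vanish at $0$, writing $z=(\lambda,\Lambda,x,y)$ and applying the mean value theorem to $\partial_{\lambda} M$ along the segment from $0$ to $z$,
\begin{align*}
\vabs{\partial_{\lambda} M(z)}
\leq
\sup_{s\in[0,1]} \paren{ \vabs{\partial^2_{\lambda} M(sz)} + \vabs{\partial_{\lambda\Lambda} M(sz)} } \vabs{(\lambda,\Lambda)}
+
\sup_{s\in[0,1]} \paren{ \vabs{\partial_{\lambda x} M(sz)} + \vabs{\partial_{\lambda y} M(sz)} } \vabs{(x,y)},
\end{align*}
which, by the estimates for the second derivatives, is bounded by $C\delta^2\vabs{(\lambda,\Lambda)} + C\delta\vabs{(x,y)}$. The other first derivatives are handled analogously.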
\end{proof}
\begin{proof}[End of the proof of Lemma~\ref{lemma:computationsRRRInf}]
Let us consider $\varphi=(\varphi_w,\varphi_x,\varphi_y)^T \in \XcalInftyTotal$ such that
$\normInftyTotal{\phiA} \leq \varrho \delta^3$.
We estimate the first and second derivatives of $H_1^{\out}$ evaluated at $(u,\phiA(u))$ (recall \eqref{eq:expressionRRRout}), given by
\begin{align}\label{proof:expressionH1sepInfty}
H_1^{\out}(u,\phiA(u);\delta) = M\paren{\lambda_h(u),\Lambda_h(u)-\frac{\phiA_w(u)}{3\Lambda_h(u)},
\phiA_x(u),\phiA_y(u);\delta}
- \frac{\phiA^2_w(u)}{6\Lambda_h^2(u)}.
\end{align}
First, let us define
\begin{align*}
{\varphi}_{\lambda}(u) = \lambda_h(u), \qquad
{\varphi}_{\Lambda}(u) = \Lambda_h(u)-\frac{\varphi_w(u)}{3\Lambda_h(u)}
\quad \text{and} \quad
\Phi = (\phiA_{\lambda},\phiA_{\Lambda},\phiA_x,\phiA_y).
\end{align*}
Since $\normInftyTotal{\phiA} \leq \varrho \delta^3$ and $\lambda_h, \Lambda_h \in \XcalInfty_{\vap}$ (see \eqref{eq:separatrixBanachSpace}),
\begin{equation}\label{proof:estimatesCoordinatesEqui}
\normInfty{\varphi_{w}}_{2\vap} \leq C \delta^2, \qquad
\normInfty{\varphi_{x}}_{\vap},
\normInfty{\varphi_{y}}_{\vap} \leq C \delta^3, \qquad
\normInfty{\varphi_{\lambda}}_{\vap},
\normInfty{\varphi_{\Lambda}}_{\vap} \leq C.
\end{equation}
Moreover since, by Theorem \ref{theorem:singularities}, $\lambda_h(u)\neq\pi$ for $u \in \DuInfty$, we have that
\[
\vabs{\phiA_{\lambda}(u)} = \vabs{\lambda_h(u)} < \pi,
\quad
\vabs{\phiA_{\Lambda}(u)}
\leq C e^{-\vap \rhoInfty}
\leq C,
\quad
\vabs{(\phiA_{x}(u), \phiA_y(u))}
\leq C\delta^3 e^{-\vap \rhoInfty}
\leq C\delta^3
\]
and, therefore, we can apply Lemma~\ref{lemma:expressionHamiltonianInfty} to \eqref{proof:expressionH1sepInfty}.
In the following computations, we repeatedly use Lemma \ref{lemma:sumNormsOutInf} without explicit mention.
\begin{enumerate}
\item First, we consider $g^{\out}=\partial_w H_1^{\out}$.
By \eqref{proof:expressionH1sepInfty}, we have that
\begin{align*}
g^{\out}(u,\varphi(u))
&=-\frac{\partial_{\Lambda} M \circ \Phi(u)}{3\Lambda_h(u)}
-\frac{\varphi_{w}(u)}{3\Lambda_h^2(u)}.
\end{align*}
Notice that, by Theorem \ref{theorem:singularities}, $\vabs{\Lambda_h(u)}\geq C$ for $u \in \DuInfty$. Then, $\normInftySmall{\Lambda_h^{-1}}_{-\vap} \leq C$.
Therefore, by Lemma \ref{lemma:expressionHamiltonianInfty} and estimates~\eqref{proof:estimatesCoordinatesEqui}, we have that
\begin{equation}\label{proof:boundsInftyg}
\begin{split}
\normInfty{g^{\out}(\cdot,\varphi)}_0 &\leq
C \delta \boxClaus{\delta\normInfty{\varphi_{\lambda}}_{\vap} +
\delta\normInfty{\varphi_{\Lambda}}_{\vap} +
\normInfty{\varphi_{x}}_{\vap} +
\normInfty{\varphi_{y}}_{\vap}}
+ C \normInfty{\varphi_{w}}_{2\vap} \\
&\leq C \delta^2.
\end{split}
\end{equation}
To compute its derivative with respect to $w$, by \eqref{proof:expressionH1sepInfty}, we have that
\begin{align*}
\partial_w g^{\out} (u,\varphi(u))
&=
\frac{\partial^2_{\Lambda} M \circ \Phi(u)}{9\Lambda^2_h(u)}
-\frac{1}{3\Lambda^2_h(u)},
\end{align*}
and, by Lemma \ref{lemma:expressionHamiltonianInfty} and estimates~\eqref{proof:estimatesCoordinatesEqui},
$\normInfty{\partial_w g^{\out}(\cdot,\varphi)}_{-2\vap} \leq C$.
Following a similar procedure, we obtain
$\normInfty{\partial_x g^{\out}(\cdot,\varphi)}_{-\vap} \leq C \delta$
and $\normInfty{\partial_y g^{\out}(\cdot,\varphi)}_{-\vap} \leq C \delta$.
\item Now, we obtain estimates for $f_1^{\out}=-\partial_u H_1^{\out}$.
By \eqref{proof:expressionH1sepInfty}, we have that
\begin{align*}
f_1^{\out}(u,\varphi(u)) =&
- \dot{\lambda}_h(u) \partial_{\lambda}{M} \circ \Phi(u)
-\frac{\Lad_h(u)}{3\Lambda^3_h(u)} \varphi^2_{w}(u) \\
&- \paren{\Lad_h(u) + \frac{\Lad_h(u)}{3 \Lambda_h^2(u)} \varphi_{w}(u)}
\partial_{\Lambda}{M} \circ \Phi(u).
\end{align*}
Then, since $\dot{\lambda}_h, \Lad_h \in \XcalInfty_{\vap}$, by Lemma \ref{lemma:expressionHamiltonianInfty} and estimates~\eqref{proof:estimatesCoordinatesEqui}, we have that
$\normInfty{f_1^{\out}(\cdot,\varphi)}_{2\vap} \leq C\delta^2$.
To compute its derivative with respect to $x$, by \eqref{proof:expressionH1sepInfty},
\begin{align*}
\partial_x f_1^{\out}(u,\varphi(u)) =&
- \dot{\lambda}_h(u) \partial_{x \lambda} M \circ \Phi(u)
- \paren{\Lad_h(u) + \frac{\Lad_h(u)}{3 \Lambda_h^2(u)} \varphi_{w}(u)}
\partial_{x \Lambda} M \circ \Phi(u)
\end{align*}
and, therefore,
$\normInfty{\partial_x f_1^{\out}(\cdot,\varphi)}_{\vap} \leq C\delta$.
Similarly one can obtain
$\normInfty{\partial_w f_1^{\out}(\cdot,\varphi)}_{0} \leq C\delta^2$ and
$\normInfty{\partial_y f_1^{\out}(\cdot,\varphi)}_{\vap} \leq C\delta$.
\item Analogously to the previous estimates,
we can obtain bounds for $f^{\out}_2= i \partial_y H_1^{\out}$ and $f^{\out}_3=-i \partial_x H_1^{\out}$.
Then, for $j=2,3$, it can be seen that
$\normInftySmall{f_j^{\out}(\cdot,\varphi)}_{\vap} \leq C\delta$,
and differentiating we obtain
$\normInftySmall{\partial_w f_j^{\out}(\cdot,\varphi)}_{-\vap} \leq C\delta$,
$\normInftySmall{\partial_x f_j^{\out}(\cdot,\varphi)}_{0} \leq C\delta$ and
$\normInftySmall{\partial_y f_j^{\out}(\cdot,\varphi)}_{0} \leq C\delta$.
\end{enumerate}
Then, by the definition of $\mathcal{R}^{\out}$ in~\eqref{eq:expressionRRRout} and the estimates just obtained, we complete the proof of the lemma.
\end{proof}
\subsection{Estimates in the outer domain}
\label{subsubsection:proofComputationsRRROut}
To obtain estimates of $\mathcal{R}^{\out}$, we write
$H_1^{\out}$ in~\eqref{def:hamiltonianOuterSplit} (up to a constant) as
\begin{equation*}
H_1^{\out} =
H_1 \circ \phi_{\equi} \circ \phi_{\out}
- \frac{w^2}{6\Lambda_h^2(u)}
+
\delta( x \Ltresy + y\Ltresx)
+ 3\delta^2 \LtresLa \paren{\Lambda_h(u)-\frac{w}{3\Lambda_h(u)}},
\end{equation*}
(see \eqref{eq:expressionHamiltonianInfty} and \eqref{proof:M1expressionLtres}).
Then, by the definition of $H_1$ in \eqref{def:hamiltonianScalingH1}, we obtain
\begin{align*}
H_1^{\out} =& \,
(H^{\Poi}_1-V)
\circ \phi_{\sca} \circ \phi_{\equi} \circ \phi_{\out}
+ \frac{1}{\delta^4}
F_{\pend}\paren{\delta^2\Lambda_h(u) -
\frac{\delta^2 w}{3 \Lambda_h(u)} +
\delta^4\LtresLa} \\
&- \frac{w^2}{6\Lambda_h^2(u)}
+
\delta( x \Ltresy + y\Ltresx)
+ 3\delta^2 \LtresLa \paren{\Lambda_h(u)-\frac{w}{3\Lambda_h(u)}},
\end{align*}
where $H_1^{\Poi}$ is given in \eqref{def:hamiltonianPoincare1},
the potential $V$ in \eqref{def:potentialV} and $F_{\pend}$ in \eqref{def:Fpend}.
The changes of coordinates $\phi_{\sca}$, $\phi_{\equi}$ and $\phi_{\out}$ are given in \eqref{def:changeScaling}, \eqref{def:changeEqui} and \eqref{def:changeOuter}, respectively.
Considering $z=(w,x,y)$, we denote the composition of the changes of coordinates by
\begin{equation}\label{def:changeTotalOuter}
(\lambda, L, \eta, \xi) =
\Theta(u,z) = (\phi_{\sca} \circ \phi_{\equi} \circ \phi_{\out})(u,z).
\end{equation}
Then, since $\mu=\delta^4$, the Hamiltonian $H_1^{\out}$ can be split (up to a constant) as
\begin{equation}\label{proof:H1outExpressionOuter}
H_1^{\out} = M_P + M_{S} + M_R,
\end{equation}
where
\begin{align}
M_P(u,z;\delta) =&
-\paren{\mathcal{P}[\delta^4-1]
- \frac{1}{\sqrt{2+2\cos \lambda}}
}\circ \Theta(u,z),
\label{def:expressionMJOuter}
\\
M_{S}(u,z;\delta) =& \,
\paren{
\frac{1}{\delta^4} \mathcal{P}[0] -
\frac{1-\delta^4}{\delta^4} \mathcal{P}[\delta^4]
- 1 + \cos \lambda} \circ\Theta(u,z),
\label{def:expressionMSOuter} \\
\begin{split}\label{def:expressionMROuter}
M_R(u,z;\delta) =&
- \frac{w^2}{6\Lambda_h^2(u)}
%
+ \delta^2 \LtresLa \paren{3 \Lambda_h(u)
- \frac{w}{\Lambda_h(u)} }
%
+ \delta (x \Ltresy + y \Ltresx ) \\
%
&+ \frac{1}{\delta^4}
F_{\pend}\paren{\delta^2\Lambda_h(u) -
\frac{\delta^2 w}{3 \Lambda_h(u)} +
\delta^4\LtresLa},
\end{split}
\end{align}
and $\mathcal{P}$ is the function given in~\eqref{def:functionD}.
To obtain estimates for the derivatives of $M_P$, $M_S$ and $M_R$, we first analyze the change of coordinates $\Theta$ in~\eqref{def:changeTotalOuter}. It can be expressed as
\begin{equation}\label{proof:ThtExpression}
\Theta(u,z)=
\Big(\pi + \Theta_{\lambda}(u), 1 + \Theta_{L}(u,w),
\Theta_{\eta}(x),\Theta_{\xi}(y) \Big),
\end{equation}
where
\begin{equation*}
\begin{aligned}
\Theta_{\lambda}(u) &= \lambda_h(u)-\pi, &
\Theta_{\eta}(x) &= \delta x + \delta^4 \Ltresx(\delta), \\
\Theta_L(u,w) &= \delta^2 \Lambda_h(u)
- \frac{\delta^2 w}{3 \Lambda_h(u)} + \delta^4 \LtresLa(\delta), \quad&
\Theta_{\xi}(x) &= \delta y + \delta^4 \Ltresy(\delta).
\end{aligned}
\end{equation*}
The next lemma, which is a direct consequence of Theorem \ref{theorem:singularities}, gives estimates for this change of coordinates.
\begin{lemma}\label{lemma:changeTotalOuter}
Fix $\varrho>0$ and $\delta>0$ small enough. Then, for
$\phiA \in \ballOuter \subset \XcalOutTotal$,
\begin{align*}
\normOut{\Theta_{\lambda}}_{0,-\frac{2}{3}}&\leq C, &
\normOut{\Theta_{L}(\cdot,\phiA)}_{0,\frac{1}{3}}
&\leq C \delta^2, &
\normOut{\Theta_{\eta}(\cdot,\phiA)}_{0,\frac{4}{3}}
&\leq C \delta^4, \\
\normOut{\Theta_{\lambda}^{-1}}_{0,\frac{2}{3}}&\leq C, &
\normOut{1+\Theta_{L}(\cdot,\phiA)}_{0,0}
&\leq C, &
\normOut{\Theta_{\xi}(\cdot,\phiA)}_{0,\frac{4}{3}}
&\leq C \delta^4.
\end{align*}
Moreover, its derivatives satisfy
\begin{align*}
\normOut{\partial_u \Theta_{\lambda}}_{0,\frac{1}{3}} &\leq C,
&
\normOut{\partial_u \Theta_{L}(\cdot,\phiA)}_{0,\frac{4}{3}} &\leq C \delta^2,
&
\normOut{\partial_w \Theta_{L}(\cdot,\phiA)}_{0,-\frac{1}{3}} &\leq C \delta^2,
\\
\normOut{\partial_{uw} \Theta_{L}(\cdot,\phiA)}_{0,\frac{2}{3}} &\leq C\delta^2,
&
\partial_x \Theta_{\eta},
\partial_y \Theta_{\xi} &\equiv \delta,
&
\partial^2_{w} \Theta_L,
\partial^2_{x} \Theta_{\eta},
\partial^2_y \Theta_{\xi} &\equiv 0.
\end{align*}
\end{lemma}
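The exact identities in the last row of Lemma~\ref{lemma:changeTotalOuter} can be checked directly from the explicit expressions for $\Theta_L$, $\Theta_{\eta}$ and $\Theta_{\xi}$ above. Indeed,
\begin{equation*}
\partial_w \Theta_L(u,w) = -\frac{\delta^2}{3\Lambda_h(u)}, \qquad
\partial_x \Theta_{\eta} = \partial_y \Theta_{\xi} = \delta,
\end{equation*}
so that $\partial_w \Theta_L$ is independent of $w$ and $\Theta_{\eta}$, $\Theta_{\xi}$ are affine in $x$ and $y$. Hence $\partial^2_w \Theta_L$, $\partial^2_x \Theta_{\eta}$ and $\partial^2_y \Theta_{\xi}$ vanish identically.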
In the next lemma we obtain estimates for the derivatives of $M_P$.
\begin{lemma}\label{lemma:boundsMPout}
Fix $\varrho>0$, $\delta>0$ small enough and
$\kappa>0$ big enough. Then, for
$\phiA \in \ballOuter$ and $*=x,y$,
\begin{equation*}
\begin{aligned}
\normOut{\partial_u M_P(\cdot,\phiA)
}_{1,1}
&\leq C \delta^2, &
\normOut{\partial_w M_P(\cdot,\phiA)
}_{1,-\frac{2}{3}}
&\leq C \delta^{2}, &
\normOut{\partial_* M_P(\cdot,\phiA)
}_{0,\frac{4}{3}}
&\leq C \delta, \\
\normOut{\partial_{u w} M_P(\cdot,\phiA)
}_{1,\frac{1}{3}}
&\leq C \delta^{2}, &
\normOut{\partial_{u *} M_P(\cdot,\phiA)
}_{0,\frac{7}{3}}
&\leq C \delta, &
\normOut{\partial^2_{w} M_P(\cdot,\phiA)
}_{0,\frac{4}{3}}
&\leq C \delta^4, \\
\normOut{\partial_{w *} M_P(\cdot,\phiA)
}_{0,\frac{5}{3}}
&\leq C \delta^3, &
\normOut{\partial^2_{*} M_P(\cdot,\phiA)
}_{0,2}
&\leq C \delta^2, &
\normOut{\partial_{xy} M_P(\cdot,\phiA)
}_{0,2}
&\leq C \delta^2.
\end{aligned}
\end{equation*}
\end{lemma}
\begin{proof}
We consider $\phiA \in \ballOuter \subset \XcalOutTotal$
and
we estimate the derivatives of $\mathcal{P}[\delta^4-1] \circ \Theta(u,\phiA(u))$. We
first obtain bounds for $A[\delta^4-1]$ and $B[\delta^4-1]$ (see \eqref{def:DA} and \eqref{def:DB}).
To simplify the notation, we define
\begin{align}\label{proof:defABP}
\widetilde{A}(u) = A[\delta^4-1](\pi +\Theta_{\lambda}(u)), \qquad
\widetilde{B}(u,z) = B[\delta^4-1] \circ \Theta(u,z).
\end{align}
In the following computations we use extensively the results in Lemma~\ref{lemma:sumNormsOuter} without mentioning them.
\begin{enumerate}
\item Estimates of $\wtA(u)$:
Defining $\lah=\lambda-\pi$, by \eqref{def:DA},
\begin{align*}
A[\delta^4-1](\lah+\pi) = 2(1-\cos\lah) -2\delta^4(1-\cos\lah) + \delta^8.
\end{align*}
Then, applying Lemma~\ref{lemma:changeTotalOuter},
\begin{equation*}\label{proof:estimateTrigoOuter}
\begin{aligned}
\normOut{\sin \Theta_{\lambda}}_{0,-\frac{2}{3}}
\leq
C \normOut{\Theta_{\lambda}}_{0,-\frac{2}{3}}
\leq C, \qquad
\normOut{(1-\cos \Theta_{\lambda})^{-1}
}_{0,\frac{4}{3}}
\leq
C \normOut{\Theta_{\lambda}^{-2}
}_{0,\frac{4}{3}}
\leq C
\end{aligned}
\end{equation*}
and, as a result,
\begin{equation}\label{proof:DA}
\begin{split}
\normOutSmall{\wtA^{-1}}_{0,\frac{4}{3}}
&\leq
C \normOut{(1-\cos \Theta_{\lambda})^{-1}
}_{0,\frac{4}{3}}
\leq C,
\\
\normOutSmall{\partial_u \wtA}_{0,-\frac{1}{3}}
&\leq C
\normOut{\sin \Theta_{\lambda}}_{0,-\frac23}
\normOut{\partial_u \Theta_{\lambda}}_{0,\frac13}
\leq C.
\end{split}
\end{equation}
\item Estimates of $\wtB(u,\phiA(u))$:
Considering the auxiliary variables
$(\lah,\Lh)=(\lambda-\pi,L-1)$,
we have that
\begin{equation}\label{proof:D0expansion}
\begin{split}
B[\delta^4-1](\pi+\lah,1+\Lh,\eta,\xi)
= \,&
4 \Lh (1-\cos \lah + \delta^4 \cos \lah) \\
&+
\frac{\eta}{\sqrt{2}}
(-3 +2 e^{-i\lah}+e^{-2i\lah} +\delta^4(3+ e^{-2i\lah})) \\
&+
\frac{\xi}{\sqrt{2}}
(-3 +2 e^{i\lah}+e^{2i\lah} +\delta^4(3+ e^{2i\lah})) \\
&+ R[\delta^4-1](\pi+\lah,1+\Lh,\eta,\xi).
\end{split}
\end{equation}
Then, by the estimates in \eqref{eq:boundD2mes} and Lemma~\ref{lemma:changeTotalOuter},
\begin{equation}\label{proof:DB}
\begin{aligned}
\normOutSmall{\wtB(\cdot,\phiA)}_{1,-2}
\leq \,&
C \normOut{\Theta_{L}(\cdot,\phiA)
\Theta_{\lambda}^2}_{0,-1}
+ \frac{C}{\delta^2}
\normOut{\Theta_{\eta}(\cdot,\phiA) \Theta_{\lambda}}_{0,\frac23} \\
&+ \frac{C}{\delta^2}
\normOut{\Theta_{\xi}(\cdot,\phiA) \Theta_{\lambda}}_{0,\frac23}
+
\frac{C}{\delta^2}
\normOut{(\Theta_{L},\Theta_{\eta},\Theta_{\xi})^2}_{0,\frac23} \leq C \delta^2.
\end{aligned}
\end{equation}
Now, we look for estimates of the first derivatives of $\wtB(u,\phiA(u))$.
By its definition in \eqref{proof:defABP} and the expression of $\Theta$ in \eqref{proof:ThtExpression}, we have that
\begin{equation}\label{proof:boundsBtildePoincare}
\begin{aligned}
\partial_u \wtB =& \,
\boxClaus{ \partial_{\lambda} B[\delta^4-1] \circ \Theta}
\partial_u \Theta_{\lambda}
+
\boxClaus{\partial_{L} B[\delta^4-1] \circ \Theta}
\partial_u \Theta_{L}, \\
\partial_w \wtB =& \,
\boxClaus{
\partial_{L} B[\delta^4-1] \circ \Theta
}
\partial_w \Theta_{L}, \\
\partial_x \wtB =& \,
\boxClaus{
\partial_{\eta} B[\delta^4-1] \circ \Theta
}
\partial_{x} \Theta_{\eta}, \qquad
\partial_y \wtB = \,
\boxClaus{
\partial_{\xi} B[\delta^4-1] \circ \Theta
}
\partial_y \Theta_{\xi}.
\end{aligned}
\end{equation}
Differentiating \eqref{proof:D0expansion} and applying Lemma~\ref{lemma:changeTotalOuter},
\begin{align*}
\normOut{\partial_{\lambda} B[\delta^4-1] \circ \Theta(\cdot,\phiA)}_{1,-\frac43}
\leq& \,
C \normOut{\Theta_{L}(\cdot,\phiA) \Theta_{\lambda}}_{0,-\frac13} +
\frac{C}{\delta^2}
\normOut{\Theta_{\eta}(\cdot,\phiA)}_{0,\frac43}
\\
&+ \frac{C}{\delta^2}
\normOut{\Theta_{\xi}(\cdot,\phiA)}_{0,\frac43}
+ C\delta^2
\leq C \delta^2,
\\
\normOut{\partial_{L} B[\delta^4-1] \circ \Theta(\cdot,\phiA)}_{1,-\frac73}
\leq&
C \normOut{\Theta_{\lambda}^2}_{0,-\frac43} +
\frac{C}{\delta^2} \normOut{\Theta_{L}(\cdot,\phiA)}_{0,\frac13}
+ \frac{C}{\kappa}
\leq C,
\\
\normOut{\partial_{*} B[\delta^4-1] \circ \Theta(\cdot,\phiA)}_{0,-\frac23}
\leq&
C \normOut{\Theta_{\lambda}}_{0,-\frac23}
+
\frac{C}{\kappa}
\leq C, \quad \text{for } *=\eta,\xi.
\end{align*}
Then, using also \eqref{proof:boundsBtildePoincare} and taking $*=x,y$,
\begin{align}\label{proof:DBfirst}
\normOutSmall{\partial_u \wtB(\cdot,\phiA)
}_{1,-1}
&\leq C \delta^2, &
\normOutSmall{\partial_w \wtB(\cdot,\phiA)
}_{1,-\frac83}
&\leq C \delta^2, &
\normOutSmall{\partial_* \wtB(\cdot,\phiA)
}_{0,-\frac23}
&\leq C \delta.
\end{align}
Analogously, for the second derivatives, one can obtain the estimates
\begin{equation}\label{proof:DBsecond}
\begin{aligned}
\normOutSmall{\partial_{u w} \wtB(\cdot,\phiA) }_{1,-\frac53}
&\leq C \delta^2, & \hspace{-1mm}
\normOutSmall{\partial^2_{w} \wtB(\cdot,\phiA)
}_{0,\frac23}
&\leq C \delta^4, & \hspace{-1mm}
\normOutSmall{\partial_{u *} \wtB(\cdot,\phiA)
}_{0,\frac13}
&\leq C \delta, \\
\normOutSmall{\partial_{w *} \wtB(\cdot,\phiA)
}_{0,-\frac13}
&\leq C \delta^3,& \hspace{-1mm}
\normOutSmall{\partial^2_{*} \wtB(\cdot,\phiA)
}_{0,0}
&\leq C \delta^2, & \hspace{-1mm}
\normOutSmall{\partial_{xy} \wtB(\cdot,\phiA)
}_{0,0}
&\leq C \delta^2.
\end{aligned}
\end{equation}
\end{enumerate}
Now, we are ready to obtain estimates for $M_P(u,\phiA(u))$ by using the series expansion \eqref{def:SerieP}.
First, we check that it is convergent.
Indeed, by \eqref{proof:DA} and \eqref{proof:DB}, for $u \in \DuOut$ and taking $\kappa$ big enough we have that
\begin{align*}
\vabs{\frac{\wtB(u,\phiA(u))}{\wtA(u)}}
&\leq
\normOutSmall{\wtB(\cdot,\phiA)}_{0,-\frac{4}{3}}
\normOutSmall{\wtA^{-1}}_{0,\frac{4}{3}}
\leq
\frac{C}{\kappa^2\delta^2}
\normOutSmall{\wtB(\cdot,\phiA)}_{1,-2}
\leq \frac{C}{\kappa^2} \ll 1.
\end{align*}
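This smallness is what guarantees that the series \eqref{def:SerieP} can be applied. Assuming \eqref{def:SerieP} is the standard binomial expansion of the inverse square root (we sketch only the mechanism; the precise form is the one given in \eqref{def:SerieP}), one has
\begin{equation*}
\frac{1}{\sqrt{\wtA+\wtB}}
= \frac{1}{\sqrt{\wtA}} \sum_{n\geq 0} \binom{-1/2}{n} \paren{\frac{\wtB}{\wtA}}^{n}
= \frac{1}{\sqrt{\wtA}}
+ \mathcal{O}\paren{\vabs{\wtB}\,\vabs{\wtA}^{-\frac32}},
\end{equation*}
which converges absolutely for $\vabs{\wtB/\wtA}<1$ and gives rise to the two terms in \eqref{proof:expressionMP}.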
Therefore, by \eqref{def:functionD2} and \eqref{def:expressionMJOuter},
\begin{align}\label{proof:expressionMP}
\vabs{M_P(u,\phiA(u))} \leq
\vabs{\frac1{\textstyle\sqrt{A[\delta^4-1](\lambda_h(u))}} - \frac1{\sqrt{2+2\cos \lambda_h(u)}} }
+
C \frac{|\wtB(u,\phiA(u))|}{|\wtA(u)|^{\frac32}}.
\end{align}
Then, to estimate $M_P$ and its derivatives, it only remains to analyze the $u$-derivative of its first term.
Indeed, by the definition of $A[\delta^4-1]$ in \eqref{def:DA},
\begin{align}\label{proof:DApotential}
\normOut{\partial_u \paren{\frac1{\textstyle\sqrt{A[\delta^4-1](\lambda_h(u))}} - \frac1{\sqrt{2+2\cos \lambda_h(u)}} } }_{0,\frac43} \leq C \delta^4.
\end{align}
Therefore, applying estimates \eqref{proof:DA}, \eqref{proof:DB}, \eqref{proof:DBfirst},
\eqref{proof:DBsecond} and \eqref{proof:DApotential} to the derivatives of $M_P$, and using \eqref{proof:expressionMP}, we obtain the statement of the lemma.
\end{proof}
Analogously to Lemma \ref{lemma:boundsMPout}, we obtain estimates for the first and second derivatives of $M_S$ and $M_R$ (see \eqref{def:expressionMSOuter} and \eqref{def:expressionMROuter}).
\begin{lemma}\label{lemma:boundsMSout}
Fix $\varrho>0$,
$\delta>0$ small enough
and $\kappa>0$ big enough. Then, for
$\phiA \in \ballOuter$ and $*=x,y$, we have
\begin{equation*}
\begin{aligned}
\normOut{\partial_u M_S(\cdot,\phiA)
}_{0,\frac{4}{3}}
&\leq C \delta^2, &
\normOut{\partial_w M_S(\cdot,\phiA)
}_{0,-\frac{1}{3}}
&\leq C \delta^{2}, &
\normOut{\partial_* M_S(\cdot,\phiA)
}_{0,0}
&\leq C \delta, \\
\normOut{\partial_{u w} M_S(\cdot,\phiA)
}_{0,\frac{2}{3}}
&\leq C \delta^{2}, &
\normOut{\partial_{u *} M_S(\cdot,\phiA)
}_{0,\frac{1}{3}}
&\leq C \delta, &
\normOut{\partial^2_{w} M_S(\cdot,\phiA)
}_{0,-\frac{2}{3}}
&\leq C \delta^4, \\
\normOut{\partial_{w *} M_S(\cdot,\phiA)
}_{0,-\frac{1}{3}}
&\leq C \delta^3, &
\normOut{\partial^2_{*} M_S(\cdot,\phiA)
}_{0,0}
&\leq C \delta^2, &
\normOut{\partial_{xy} M_S(\cdot,\phiA)
}_{0,0}
&\leq C \delta^2.
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\normOut{\partial_u M_R(\cdot,\phiA)
}_{1,1}
&\leq C \delta^2, &
\normOut{\partial_w M_R(\cdot,\phiA)
}_{1,-\frac{2}{3}}
&\leq C \delta^{2}, &
\normOut{\partial_* M_R(\cdot,\phiA)
}_{0,0}
&\leq C \delta, \\
\normOut{\partial_{u w} M_R(\cdot,\phiA)
}_{1,\frac{1}{3}}
&\leq C \delta^{2}, &
\partial_{u *} M_R(\cdot,\phiA)
&\equiv 0, &
\normOut{\partial^2_{w} M_R(\cdot,\phiA)
}_{0,-\frac{2}{3}}
&\leq C, \\
\partial_{w *} M_R(\cdot,\phiA)
&\equiv 0, &
\partial^2_{*} M_R(\cdot,\phiA)
&\equiv 0, &
\partial_{xy} M_R(\cdot,\phiA)
&\equiv 0.
\end{aligned}
\end{equation*}
\end{lemma}
\begin{proof}[End of the proof of Lemma~\ref{lemma:computationsRRROut}]
We start by estimating the first and second derivatives of $H_1^{\out}(u,\phiA(u);\delta)$ in suitable norms.
Recall that by \eqref{proof:H1outExpressionOuter}, $H_1^{\out}=M_P+M_S+M_R$.
Therefore, taking $\phiA \in \ballOuter \subset \XcalOutTotal$ and applying Lemmas \ref{lemma:boundsMPout} and \ref{lemma:boundsMSout}:
\begin{enumerate}
\item For $g^{\out}=\partial_w H_1^{\out}$ one has
\[
\begin{split}
\normOut{g^{\out}(\cdot,\phiA)}_{1,-\frac23} \leq&
\normOut{\partial_w M_P(\cdot,\phiA)}_{1,-\frac23}
+ C
\normOut{\partial_w M_S(\cdot,\phiA)}_{0,-\frac13} +
\normOut{\partial_w M_R(\cdot,\phiA)}_{1,-\frac23} \\ \leq& C \delta^2
\end{split}
\]
and, in particular, for $\kappa$ big enough
\begin{equation}\label{def:Fitag}
\normOut{g^{\out}(\cdot,\phiA)}_{0,0} \leq C \kappa^{-2} \ll 1.
\end{equation}
Analogously,
$\normOut{\partial_w g^{\out}(\cdot,\phiA)}_{0,-\frac23}
\leq C$ and
$\normOut{\partial_* g^{\out}(\cdot,\phiA)}_{0,\frac53}
\leq C \delta^3$,
for $*=x,y$.
\item For $f_1^{\out}=-\partial_u H_1^{\out}$, one has that
\[
\normOut{f^{\out}_1(\cdot,\phiA)}_{1,1}
\leq
\normOut{\partial_u M_P(\cdot,\phiA)}_{1,1}
+
C \normOut{\partial_u M_S(\cdot,\phiA)}_{0,\frac43}+
\normOut{\partial_u M_R(\cdot,\phiA)}_{1,1}
\leq
C \delta^2,
\]
$\normOut{\partial_w f_1^{\out}(\cdot,\phiA)}_{1,\frac13}
\leq
C \delta^2$ and
$\normOut{\partial_* f_1^{\out}(\cdot,\phiA)}_{0,\frac73}
\leq
C \delta,
\quad $
for $*=x,y$.
\item For $f_2^{\out} = i \partial_y H_1^{\out}$ and $f_3^{\out} = -i \partial_x H_1^{\out}$,
we can obtain the estimates
\begin{equation}\label{proof:estimatef2f3Out}
\begin{split}
\normOut{f_2^{\out}(\cdot,\phiA)}_{0,\frac43}
\leq&
\normOut{\partial_y M_P(\cdot,\phiA)}_{0,\frac43}
+ C
\normOut{\partial_y M_S(\cdot,\phiA)
+
\partial_y M_R(\cdot,\phiA)}_{0,0}
\leq
C \delta, \\
\normOut{f_3^{\out}(\cdot,\phiA)}_{0,\frac43}
\leq&
\normOut{\partial_x M_P(\cdot,\phiA)}_{0,\frac43}
+ C
\normOut{\partial_x M_S(\cdot,\phiA)
+
\partial_x M_R(\cdot,\phiA)}_{0,0}
\leq
C \delta.
\end{split}
\end{equation}
Analogously, we have that
$\normOutSmall{\partial_w f_j^{\out}(\cdot,\phiA)}_{0,\frac53}
\leq
C \delta^3$ and
$\normOutSmall{\partial_* f_j^{\out}(\cdot,\phiA)}_{0,2}
\leq
C \delta^2$,
for $j=2,3$ and $*=x,y$.
\end{enumerate}
Combining these estimates and taking $\kappa$ big enough, we complete the proof of the lemma.
\end{proof}
\begin{remark}\label{rmk:R}
Note that $\DOuterTilde \subset \DuOut$ and $\YcalOuter \subset
\XcalOut_{0,0}$ (see \eqref{def:Yout} and \eqref{def:XcalOut}). Then, the proof
of Lemma~\ref{lemma:computationsRRRtransitionOuter} is a direct consequence of
the estimates for $g^{\out}$ and its derivatives in Item 1 above and the
fact that, by \eqref{eq:systemEDOsOuter} and \eqref{eq:InvU},
\[
R[\uOut](v)= \partial_w H_1^{\out}
\paren{v + \uOut(v), \zuOut(v+\uOut(v))}= g^{\out}\paren{v + \uOut(v),
\zuOut(v+\uOut(v))}.
\]
\end{remark}
\section{Introduction}
The study of ferrous meteorites informs our understanding of the solar system as well as of terrestrial metallurgy. These bodies, consisting primarily of iron and nickel, are remnants of protoplanetary cores that formed during the early solar system \cite{krot2008,goldstein2009,weiss2013,Scott2020} and are thought to have produced magnetic fields in a manner similar to Earth's geodynamo \cite{weiss2013}. Although the original location of iron meteorites is thought to be the asteroid belt, i.e., between the orbits of Mars and Jupiter, isotopic measurements suggest that some meteorites originated beyond Jupiter \cite{kruijer2017}, while others came from the Earth-forming region in the interior of the solar system. Therefore, the study of metallic meteorites, which provide the oldest thermal and magnetic record of the early solar system, can provide a deep understanding of what may have been the precursor of Earth itself.
From a materials science perspective, meteorites provide almost ideal environments for atomic arrangements to approach thermodynamic equilibrium during cooling over billions of years. Such conditions can permit the formation of tetrataenite (designation L$1_0$, AuCu-I prototype structure), which is extremely difficult to synthesize in the laboratory \cite{poirier2015,lewis2014}. Tetrataenite's alternating layers of Fe and Ni atoms are stacked parallel to the tetragonal \textit{c} axis to form a superlattice that endows it with impressive technical magnetic properties \cite{Lewis2020}. Tetrataenite is not documented in the conventional Fe-Ni binary phase diagram \cite{Swartzendruber1991} but may be found in meteoritical phase diagrams \cite{Scott2020,yang1996,howald2003,scorzelli2015} containing a complex set of ferromagnetic phases \cite{Scott2020,scott1997,yang2007} that are, by convention, designated by their Ni content. The L$1_0$ phase of FeNi forms during cooling from disordered face-centered cubic (fcc, designation A1) Ni-rich taenite. Other meteoritic phases include kamacite, the Ni-poor body-centered cubic (bcc, designation A2) alloy that contains a maximum of 5 at.\% Ni \cite{owen1969,clarke1978,yang2005}, and awaruite, an intermetallic Ni$_3$Fe-type compound with the L$1_2$-type structure \cite{howald2003,yang1997}. These Fe-Ni phases and their crystallographic information are summarized in the Supporting Information. The kinetics of phase transformations in the Fe-Ni system are acknowledged to be extremely slow \cite{scorzelli2015} as a result of the sluggish interdiffusion of Fe and Ni, which is likely influenced by magnetic long-range order \cite{goldstein2009}. Details of the phase assemblage in a meteorite determine its internal magnetic field \cite{Skomski2013,harrison2018}, which impacts the interpretation of its thermal history \cite{goldstein2009,yang1997}.
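The layered L$1_0$ arrangement described above can be made concrete with a short sketch. The fractional coordinates are those of the AuCu-I prototype; the lattice parameters are representative literature values for tetrataenite, assumed here for illustration rather than taken from this work.

```python
# Minimal sketch of the L1_0 (AuCu-I prototype) FeNi unit cell.
# Lattice parameters are representative values for tetrataenite,
# assumed for illustration (a ~ 3.58 A, c ~ 3.60 A, so c/a ~ 1.007).
a, c = 3.582, 3.607  # Angstrom (illustrative)

# Fractional coordinates: Fe and Ni occupy alternating (002) layers,
# which is what produces the superlattice reflections of the ordered phase.
sites = {
    "Fe": [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0)],
    "Ni": [(0.5, 0.0, 0.5), (0.0, 0.5, 0.5)],
}

# Each species sits on a single z-level: pure Fe at z = 0, pure Ni at z = 1/2.
layers = {el: {z for (_, _, z) in pos} for el, pos in sites.items()}
print(layers)  # {'Fe': {0.0}, 'Ni': {0.5}}
```

The single-$z$ layer recovered for each species is exactly the chemical modulation along \textit{c} that distinguishes L$1_0$ from the disordered A1 structure.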
\subsection{A clandestine meteoritic microstructure}
In this work, the investigation was focused on the NWA 6259 meteorite, which consists of a very large multi-variant region of tetrataenite \cite{poirier2015} and possesses the second highest Ni content ($\sim$43 at.\%) of all reported meteorites. The structure of the NWA 6259 specimen is shown on different length scales in Figure \ref{fig:tem}. Details of sample preparation and characterization techniques are provided in the Supporting Information (Fig. S3). A sample for study was removed from the central region of the meteorite specimen (Figure \ref{fig:tem}a) and was determined to possess an approximate mesoscopic composition of Fe 57 at.\% and Ni 43 at.\%, with Co ($\sim$3 at.\%) and a minor enrichment in Cu (Fig. S4); the dark inclusions observed in Figure \ref{fig:tem}a contain sulphur and phosphorus. A crystallographic orientation map derived from electron backscatter diffraction (EBSD) data (Figure \ref{fig:tem}a) reveals that, within the resolution limit of the technique, this region can be considered a single crystal. This orientation map guided the preparation of crystallographically defined electron-transparent specimens (Figure \ref{fig:tem}b, Fig. S3) for higher resolution studies. The specimen matrix contains a network of precipitates and lamellar inclusions (Figures \ref{fig:tem}b,c) and is verified to possess tetragonal symmetry, with superlattice diffraction reflections (Figure \ref{fig:tem}d) that signal the long-range chemical order of L$1_0$ FeNi, tetrataenite. The high degree of chemical order of the tetrataenite matrix is confirmed by the small but finite intensity difference, attributed to alternate scattering of Fe and Ni atom columns, detected by high-angle annular dark field (HAADF) scanning transmission electron microscopy (TEM) (Figure \ref{fig:tem}e).
The structure and composition of the small crystalline precipitates (Figure \ref{fig:tem}c) within the L$1_0$ matrix were examined at \AA ngstrom-level resolution using correlative electron microscopy and 3-dimensional (3D) atom probe tomography (APT) performed on the needle-shaped specimen (Figure \ref{fig:apt}a) prepared using focused ion beam (FIB) milling. These results confirm the composition of the tetrataenite matrix itself as 45 at.\% Ni : 55 at.\% Fe; within the matrix, iso-composition surfaces superimposed onto the reconstructed tomographic 3D point cloud reveal that regions richer than 26 at.\% Ni (Figure \ref{fig:apt}b) contain a dense distribution of Ni-poor ($\sim$90 at.\% Fe) precipitates with a bimodal distribution of coarse (28$\pm$6 nm) and ultrafine (2.0$\pm$0.5 nm) average diameters at approximately 15000 precipitates per cubic micrometer (Figure \ref{fig:stem}). The 50 at.\% iso-composition level reveals Ni-rich lamellae with a composition of $\sim$66 at.\% Ni, close to the ideal composition of awaruite \cite{howald2003,Wakelin_1953} (Figure \ref{fig:apt}c). A combined tomographic reconstruction in Figure \ref{fig:apt}c shows the overall nanostructure, together with corresponding quantitative elemental composition scans.
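As a rough consistency check on these counting statistics, the sketch below converts the quoted number density into volume fractions for each size class. It assumes, purely as a bracketing exercise, that the full $\sim$15000 $\mu$m$^{-3}$ density applies to one size class at a time; the APT data report a single combined density.

```python
import math

def volume_fraction(number_density_per_um3, diameter_nm):
    """Volume fraction occupied by spherical precipitates of one size class."""
    r_um = diameter_nm * 1e-3 / 2.0           # radius in micrometres
    v_um3 = (4.0 / 3.0) * math.pi * r_um**3   # volume of a single precipitate
    return number_density_per_um3 * v_um3

n = 15000  # precipitates per cubic micrometre (from the APT reconstruction)

f_ultrafine = volume_fraction(n, 2.0)   # ~2 nm class
f_coarse = volume_fraction(n, 28.0)     # ~28 nm class
# The ultrafine class alone would occupy well under 0.01 % of the volume,
# while the same density of 28 nm precipitates would approach ~17 %.
```

The three-orders-of-magnitude spread between the two bounds illustrates why the coarse class dominates the precipitate volume even though both classes share the reconstruction.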
\subsection{Nanostructure, strain and a new Fe - Ni phase}
A fascinating aspect of the meteorite nanostructure is the role that strain plays in the crystallographic features of two types of Ni-poor precipitates embedded within the L$1_0$-type matrix. A representative coarse Ni-poor precipitate, embedded incoherently in the matrix, is delineated by regularly spaced (1 - 2 nm) misfit dislocations at the precipitate-matrix interface ([100]$_\mathrm{A2}$(010)$_\mathrm{A2}$~||~[110]$_{\mathrm{L1_0}}$(001)$_{\mathrm{L1_0}}$ orientational relationship) and is confirmed to adopt the bcc (A2-type) structure (Figure \ref{fig:stem}a,b). In contrast, the ultrafine (1 - 2 nm) Ni-poor precipitates, Fig. \ref{fig:stem}c,d, coherently embedded in the matrix, have the same crystal symmetry as the surrounding L$1_0$ matrix but possess a \textit{chemically disordered} face-centered tetragonal (A6-type) crystal structure with unit cell parameters similar to tetrataenite. These ultrafine precipitates are lattice-matched to the tetrataenite matrix but possess a composition of 90 at.\% Fe. To the best of our knowledge, this is the first report of a tetragonal Ni-poor phase in the Fe-Ni system, although recently the synthesis of tetragonal, nominally equiatomic FeNi has been confirmed \cite{Lewis2020,lewis2016}.
A relatively large, 4-nm-diameter Ni-poor precipitate adjacent to a Ni-rich lamella (Fig. \ref{fig:stem}d) is characterized by a strain field, as revealed by geometric phase analysis based on Fourier transformation of a high-resolution STEM image (Figure \ref{fig:stem}e,f). This region, which contains two dislocations and a corresponding strain at the phase boundary (Figure \ref{fig:stem}e), is consistent with the interpretation of nanoscale decomposition of the metastable tetrataenite phase through precipitation of Ni-poor phases with either cubic A2 (coarse kamacite precipitates) or tetragonal A6 (ultrafine precipitates) crystal structures and a lamellar Ni-rich L$1_2$-type phase. While the two types of Ni-poor precipitates are nearly isotropic in shape and are distributed evenly in the matrix, the Ni-rich awaruite precipitates follow distinct crystallographic directions in the matrix, suggesting that Ni migrated along diffusion-favorable directions to form the lamellae, leaving behind Ni-poor pockets.
The tetragonal A6-structured FeNi phase in this meteorite divulges a fascinating new aspect of the Fe - Ni system. The close relationship between various cubic symmetries was formalized long ago as the Bain distortion \cite{bain1924}, in which a bcc lattice can be obtained from an fcc lattice by a compression parallel to the c axis and an expansion along an a axis to form a body-centered tetragonal lattice \cite{bowles1972}. The A6 structure adopted by the ultrafine Ni-poor precipitates is likely stabilized through largely coherent bonding to the parent tetragonal L$1_0$ phase. These ultrafine A6 precipitates can be considered as Guinier-Preston (G.P.) zones, which are manifestations of an initial stage of precipitation during solid-state phase decomposition \cite{hill1973}. G.P. zones typically possess an intermediate crystal structure and composition that are different from those of both the thermodynamically stable phase and the host phase.
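The Bain relationship invoked above can be illustrated numerically: an fcc lattice is equivalent to a body-centered tetragonal cell with axial ratio $c/a=\sqrt{2}$, and compressing $c$ while expanding $a$ until $c/a=1$ produces bcc. The linear path used below is a hypothetical parametrization chosen only to show that every intermediate state is tetragonal.

```python
import math

def bain_axial_ratio(t):
    """Axial ratio c/a along a linear Bain path.

    t = 0 -> fcc described as a body-centred tetragonal cell (c/a = sqrt(2));
    t = 1 -> bcc (c/a = 1). The linear parametrisation is illustrative only.
    """
    return math.sqrt(2.0) + t * (1.0 - math.sqrt(2.0))

assert math.isclose(bain_axial_ratio(0.0), math.sqrt(2.0))  # fcc end point
assert math.isclose(bain_axial_ratio(1.0), 1.0)             # bcc end point

# Any intermediate t gives a tetragonal (A6-like) cell with 1 < c/a < sqrt(2):
ratio = bain_axial_ratio(0.5)
assert 1.0 < ratio < math.sqrt(2.0)
```

In this picture, an A6-type cell such as that of the ultrafine precipitates sits at an intermediate point of the path, consistent with its stabilization by coherency to the tetragonal L$1_0$ host.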
\section*{Effects of phase diversities on magnetic properties}
The presence and diversity of these Fe-Ni phases impacts both the micromagnetic and bulk magnetic states of the material, and consequently influences how magnetometry is used to interpret meteoritic history as well as to evaluate tetrataenite's potential as a technological material. The results reported here confirm that the NWA 6259 meteorite, and therefore likely other stony, stony-iron and iron meteorites, can be regarded as magnetic nanocomposites with strong interphase magnetic coupling. Magnetic configurations in nanocomposites have been studied extensively as novel exchange-coupled permanent magnets \cite{Skomski2013,skomski2003}, and it is known that extrinsic, or technical, magnetic properties such as coercivity and remanence depend on the volume fractions of the phases, the diameters of precipitates and the degree of exchange coupling at interfaces. In order to investigate these aspects, the magnetic configuration of the NWA 6259 meteorite was studied using Lorentz TEM and off-axis electron holography \cite{Kovacs2018} (Figures \ref{fig:magn}a-d) applied to the same samples that were investigated using microstructural characterization (Supporting Information, Fig. S5). Magnetic imaging in the remanent state was conducted in specimens prepared with the magnetic easy axis of the L$1_0$ FeNi phase oriented both in-plane (Figures \ref{fig:magn}a,b) and out-of-plane (Figures \ref{fig:magn}c,d). These images reveal a high density of \ang{180} or \ang{90} magnetic domains, with sizes ranging from 100 to 500 nm. Quantitative magnetic induction maps (Figures \ref{fig:magn}b,d) indicate that the \ang{180} magnetic domain walls are almost parallel to the magnetic easy axis, as expected for a uniaxial system \cite{kittel1949}. The magnetic domain walls are distorted in the vicinity of the Ni-rich lamellae, marked by dashed lines. This distortion is attributed to the difference in magnetocrystalline anisotropy energy of the L$1_0$ and L$1_2$ Fe-Ni phases.
The Ni-poor A2 nano- and A6 ultrafine precipitates are not observed to affect the overall magnetic domain configuration in the studied samples. Nonetheless, all phases impact the magnetic state, and the nature of the A6-type (tetragonal) precipitates is of particular interest. Computational \cite{pinski1986,moruzzi1989} and experimental \cite{tsunoda1988a,tsunoda1988b} investigations of fcc-type iron indicate an anti-ferromagnetic \cite{lavrentiev2014,abrikosov2007,xiong2011,sjostedt2002} (AFM) ground state that is typically not accessible because the A2 (bcc) to A1 (fcc) phase transition in iron occurs above its Curie temperature (\textit{i.e.}, the temperature below which it is ferromagnetic). An atomistic simulation reveals a non-collinear configuration of atomic moments that leads to zero net magnetization (Fig. S6). As AFM ordering breaks cubic symmetry, antiferromagnetism is intimately linked to the presence of a structural distortion \cite{marsman2002,massalski2009,Dunin-Borkowski1999}. Thus, the A6-type tetragonal Fe(Ni) phase stabilized in the NWA 6259 meteorite is anticipated to exhibit antiferromagnetism. This hypothesis was investigated with bulk thermomagnetic measurements conducted in low magnetic field on a single sample of the as-received NWA 6259 meteorite. Two consecutive heating and cooling cycles in the temperature range 300 K $\leq$ T $\leq$ 900 K (Figure \ref{fig:magn}e) were performed, with a full magnetic hysteresis loop measured at room temperature before and after each thermal excursion (Figure \ref{fig:magn}f). The first heating branch confirmed the reported tetrataenite kinetic Curie temperature $T_{C1}$ $\sim$ 830 K \cite{poirier2015,scorzelli2015}. Upon the first cooling, the magnetization remained close to zero until an apparent second Curie temperature of $T_{C2}$ $\sim$ 740 K, where it rose to a value of 65 kA/m that was maintained down to room temperature (Figure \ref{fig:magn}f).
The corresponding hysteresis loop returned a room-temperature saturation magnetization of 1150 kA/m, the same as that of the as-received state, but with a vanishingly small coercivity, much decreased from the as-received value of 0.1 T, as expected for the chemically disordered FeNi phase. The low-field magnetization of the second heating cycle dipped slightly at $\sim$ 660 K and then fell abruptly at $T_{C2}$ $\sim$ 740 K. Upon the final cooling from 900 K, the magnetization again remained at an extremely low value down to a new magnetic transition at $T_{C3}$ $\sim$ 660 K, below which it rose again to 65 kA/m. Most strikingly, the final hysteresis loop indicated a 14 \% increase of the room-temperature saturation magnetization to $\sim$ 1260 kA/m (Figure \ref{fig:magn}f), with coercivity still at nearly zero. These results motivated an \textit{in situ} annealing study in the TEM (Figure \ref{fig:magn}g), which indicates that dissolution of the noted precipitates and concurrent disordering of the L$1_0$ structure begin at T $\sim$ 600 K after approximately 1 hour. No clear sign of precipitates or of L$1_0$ superlattice reflections was detectable at 923 K or after cooling the specimen to room temperature (Figure S7). Overall, these results are consistent with the existence of a transitional phase, with a magnetic transition temperature of 740 K, that bridges the chemically ordered L$1_0$ FeNi phase and the disordered A1-type FeNi phase \cite{lewis2016} of Curie temperature 660 K. This study also demonstrates that, upon heating, the large population of Ni-rich and Ni-poor precipitates dissolves into the tetrataenite matrix, which itself is hypothesized to undergo chemical disordering. These changes combine to collapse the magnetocrystalline anisotropy and yield magnetically soft behavior.
Finally, the large increase in saturation magnetization noted in the third and final room temperature hysteresis loop is consistent with dissolution of the A6-type AFM ultrafine phase, which was previously providing magnetic voids that reduced the matrix saturation magnetization of the meteorite.
This conceptualized micromagnetic state was simulated with a model (Figure \ref{fig:sim}) based on the microstructure derived from the imaging data of the NWA 6259 specimen (Figure \ref{fig:tem}), including the number density and type of precipitates determined from the APT experiments (Figure \ref{fig:apt}). The ferromagnetic A2-type cubic precipitates were distributed evenly and randomly throughout the sample, whereas the A6-type AFM nanoprecipitates were simulated as non-magnetic 2-nm-diameter voids with a vanishing magnetization. The resultant contour plot of the magnetization component parallel to the L$1_0$ \textit{c} axis (easy axis), Figure \ref{fig:sim}c, shows a simulated micromagnetic domain state with \ang{180} magnetic domains parallel to the \textit{c} axis with widths of approximately 200 nm that is in excellent agreement with the experiment (Figure \ref{fig:magn}b). Closer inspection of the simulated domain walls reveals a straight wall structure (Figure \ref{fig:sim}d) and distortions, or kinks, at intersections with L$1_2$ lamellae (Figure \ref{fig:sim}e), exactly as found in the electron microscopy experiment (Figure \ref{fig:magn}b). A detailed view of the local magnetization rotation and Bloch domain wall broadening at a phase intersection is shown in Figure \ref{fig:sim}e. These kinks are attributed to the difference in magnetic anisotropy, and consequently in the magnetic domain wall energy and width, between the L$1_0$ and L$1_2$ phases. The magnetic domain wall width is calculated as 5.6 nm in the L$1_0$ phase and 18 nm in the L$1_2$ phase (Fig. S5). Further, Figure \ref{fig:sim}e shows how the handedness of the domain wall differs on either side of the L$1_2$ lamella, a feature associated with minimization of the dipole-dipole energy state of the domain wall that has implications for the stability of the ensemble magnetic state. 
These domain walls signal magnetically weak spots where magnetization curling instabilities can form and facilitate magnetization reversal.
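As a rough consistency check on the phase-dependent domain wall widths quoted above, the Bloch wall width follows $\delta = \pi\sqrt{A/K}$ from the exchange stiffness $A$ and the uniaxial anisotropy constant $K$. The sketch below uses illustrative values of $A$ and $K$, not parameters fitted to NWA 6259:

```python
import math

# Bloch domain-wall width: delta = pi * sqrt(A / K). The values below are
# illustrative assumptions, not parameters fitted to the NWA 6259 meteorite.
A = 1.0e-11            # exchange stiffness, J/m
K_hard = 1.0e6         # anisotropy of a hard (L1_0-like) phase, J/m^3
K_soft = 1.0e5         # anisotropy of a softer (L1_2-like) phase, J/m^3

delta_hard = math.pi * math.sqrt(A / K_hard)   # ~10 nm
delta_soft = math.pi * math.sqrt(A / K_soft)   # ~31 nm: walls widen in the soft phase
```

The order-of-magnitude widening of the wall in the lower-anisotropy phase is the effect behind the broadened, kinked walls observed at the L$1_2$ intersections.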
\section*{Scientific and technological implications of the meteoritic hidden microstructure}
New knowledge of the previously undescribed ``hidden'' structure and properties of the NWA 6259 meteorite reported here impacts not only how iron meteoritic data might be used to interpret the origins of our solar system but also invites renewed consideration of tetrataenite as a sustainable permanent magnet. Utilizing the highest-resolution probes combined with magnetometry and simulations, the microstructure is revealed to comprise a magnetic phase assemblage of ferromagnetic cubic ($\sim$30 nm diameter) and antiferromagnetic tetragonal ($\sim$2 nm diameter) precipitates and ferromagnetic L$1_2$-type lamellae embedded in a tetrataenite matrix. At the present time, these antiferromagnetic precipitates are not considered to be the hypothesized \textit{antitaenite} phase \cite{Rancourt1995,Rancourt1997,Rancourt1999}, on the basis of different postulated formation modes, crystal structures and magnetic transition temperatures.
These antiferromagnetic precipitates decrease the saturation magnetization, while the soft magnetic inclusions act as weak regions that nucleate easy magnetization reversal. Both effects degrade the technical magnetic properties of the meteoritic sample, advancing the deduction that the maximum energy product of tetrataenite, as inferred from measurements of natural materials such as the NWA 6259 meteorite, may be underestimated by as much as 15-20\%; the corrected value approaches 70\% of that of the best rare-earth magnets. This conclusion invites renewed consideration of tetrataenite as a sustainable advanced permanent magnet.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figure1_TEM.png}
\caption{Microstructure of meteorite NWA 6259.
(a) Electron backscatter diffraction image quality map recorded from the fragment used for electron microscopy. The dark grey inclusions contain sulfur and phosphorus. The inverse pole figure map with respect to the sample normal direction on the right shows that the crystal is one grain.
(b) Bright-field TEM image showing lamellar microstructure.
(c) Magnified bright-field TEM image revealing precipitates adjacent to lamellae in the Fe-Ni matrix.
(d) Electron diffraction pattern recorded from the area shown in (b), consistent with an ordered tetragonal L$1_0$ structure. The viewing direction is [100]. Red triangles mark superlattice reflections. (e) HAADF STEM image of the tetrataenite L$1_0$ FeNi matrix. The enlarged region and schematic diagram on the right shows a primitive unit cell of the tetragonal phase. Intensity variations in the line profile, which was obtained from the marked region, are associated with differences in atomic number Z between Fe (Z = 26) and Ni (Z = 28). The detector semi-angle used was 69 mrad.}
\label{fig:tem}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=0.6\linewidth]{figure2_APT.png}
\caption{Fe-Ni phase decomposition in tetrataenite matrix. (a) Bright-field TEM image of a needle-shaped specimen prepared for atom probe tomography reconstruction. The marked region (red cone) is reconstructed and analyzed in (b). (b) Reconstruction showing Ni-poor (Fe-rich) regions (red) delineated by 26 at.\% Ni iso-concentration surfaces. The corresponding elemental concentration profiles (I and II) across the particles marked in (b) (Fe: red/ Ni: green), showing an Fe composition close to 90 at.\%.
(c) Reconstruction of the precipitates and lamella delineated by a 50 at.\% Ni iso-concentration surface. Based on the Fe-Ni phase diagram, the lamella is inferred to be awaruite, fcc FeNi$_2$. Corresponding composition profile along the line marked in (c) showing Fe and Ni enrichment in the tetrataenite matrix. }
\label{fig:apt}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figure3_STEM.png}
\caption{Structure and strain analyses. (a) Overview HAADF STEM image of a bcc A2 Fe-Ni precipitate (25 x 35 nm), shown alongside (b) an atomic-resolution HAADF STEM image of the A2/L$1_0$ interface, which is decorated by misfit dislocations every 1-2 nm. The inset Fourier transforms confirm the bcc structure of the precipitate. (c) Overview bright-field STEM image of ultrafine (<5 nm) Ni-poor A6 Fe-Ni precipitates (dark). (d) Atomic-resolution HAADF STEM image of an A6 precipitate next to a 3-monolayers-thick Ni-rich L$1_2$ lamella. (e) Strain rotation map of the A6 and L$1_2$ lamella shown in (d). Arrows mark misfit dislocations at the precipitate boundary. (f) Strain and shear in the region marked in (e) across the A6 precipitate and L$1_2$ lamella. }
\label{fig:stem}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{figure4_Magn.png}
\caption{Magnetic properties of NWA 6259. (a) Fresnel defocus image of a specimen in which the magnetic easy axis ([001] of L$1_0$, \textit{c} axis) is in-plane. Bands of black and white contrast arise from the presence of magnetic domain walls. Variations in contrast marked with yellow triangles suggest local changes in wall inclination. (b) Magnetic induction map measured using off-axis electron holography from the dashed region in (a), showing \ang{180} magnetic domain walls. Dashed lines mark the locations of Ni-rich lamella precipitates. Colors and arrows indicate the magnetic field direction. Contour spacing: 2$\pi$ radians. (c) Fresnel defocus image of a specimen in which the magnetic easy axis is out-of-plane. Defocus: 0.5 mm. (d) Magnetic induction map showing a complex arrangement of magnetic domains. Contour spacing: 2$\pi$ radians. (e) Thermo-magnetic curves M(T) 1 and M(T) 2. The ferromagnetic transition temperature is 830 K. (f) Magnetization hysteresis curves measured in the ``as received'' condition and after annealing cycles (M(T) 1, 2). (g) Precipitate dissolution and chemical disordering observed in bright-field TEM images and SAED patterns recorded at 373, 773 and 923 K. Red triangles mark 001 reflections of the ordered L$1_0$ structure. }
\label{fig:magn}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figure5_sim.png}
\caption{Micromagnetic simulations of A2, A6, and L$1_2$ Fe-Ni precipitates in an L$1_0$ FeNi matrix. (a) Simulation system based on the TEM image of the NWA 6259 specimen shown in Fig. 1e. (b) 3D visualization of the simulated structure, showing randomly-distributed spheroidal A2 and A6 Fe-Ni precipitates. (c) Simulation of an equilibrium magnetic state with domain walls parallel to the c axis of the L$1_0$ structure. (d,e) Simulations showing that a domain wall that propagates through an L$1_2$ lamella is bent. An example is marked by a white rectangle. Magnified view showing twisting of the local magnetization at the intersection with the L$1_2$ lamella.}
\label{fig:sim}
\end{figure}
\begin{acknowledgement}
The authors are grateful to D. Meertens, M. Kruth and W. Pieper for TEM specimen preparation and to M. Keil for EBSD preparation. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant No. 856538, project ``3D MAGiC'') and from the Horizon 2020 Research and Innovation Programme (Grant No. 823717, project ``ESTEEM3''). Funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) via CRC/TRR 270 ``HoMMage'' (Project-ID 405553726) is gratefully acknowledged. Portions of this research were conducted with high performance computational resources provided by the Louisiana Optical Network Infrastructure (http://www.loni.org). This research was supported in part by a cooperative agreement with U.S. Department of Energy's Advanced Research Projects Agency-Energy (ARPA-E) and by Northeastern University.
\end{acknowledgement}
\begin{suppinfo}
\begin{itemize}
\item Filename: Methods, Materials, Figures
\end{itemize}
\end{suppinfo}
\section{Introduction}
\label{sec:1}
The physicist Ernest Rutherford is believed to have once distinguished physics from the other sciences, referring to the latter as
merely ``stamp collecting''~\cite{Bernal1939}.
While Rutherford may have been exceptional in explicitly voicing the traditional arrogance of physicists to other branches
of knowledge, it is true that the spectacular success of physics in explaining
the natural world has led many physicists to believe that progress has not
happened in other sciences because
those working in these fields are not trained to examine observed phenomena
from the perspective of physics.
Intriguingly, practitioners in several branches of knowledge have also occasionally looked at physics as a model to aspire to,
a phenomenon sometimes referred to as ``Physics-envy''. For instance, the science of economics has undergone such a phase,
particularly in the late nineteenth century, and concepts from classical physics, such as equilibria and their stability, were central to the
development of the field during this period~\cite{Mirowski1989}. However, this situation gradually changed starting at the beginning of the twentieth century,
curiously just around the time when physics was about to be transformed by the ``quantum revolution'', and economics
took a more formal, mathematical turn. The development of game theory starting from the 1920s and 1930s eventually provided
a new {\em de facto} language for theorizing about economic and social phenomena. However, despite this apparent ``parting of ways''
between economics and physics, there have been several attempts, if somewhat isolated, throughout the previous century to build
bridges between these two fields. In the 1990s, these efforts achieved sufficient traction and a sub-discipline sometimes referred to as
``econophysics'' emerged with the stated aim of explaining economic phenomena using tools from different branches of
physics~\cite{Sinha2010}.
In earlier times, the branch of physics now known as dynamical systems theory had been a rich source of ideas for economists developing their field.
More recently, however, it has been the field of statistical mechanics,
which tries to explain the emergence of systems-level properties at the macro-scale as a result of interactions between its components
at the micro-scale, that has become a key source of concepts and techniques used to quantitatively model
various social and economic phenomena. The central idea underlying this enterprise of developing statistical mechanics-inspired models is that, while the behavior of individuals may be essentially
unpredictable, the collective behavior of a large population comprising many such individuals interacting with each other may exhibit
characteristic patterns that are amenable to quantitative analysis and explanation, and could possibly even be predicted.
This may bring to one's mind the fictional discipline of ``psychohistory'', said to have been devised by Hari Seldon of Isaac Asimov's {\em Foundation} series fame~\cite{Asimov1951}, that aimed to predict the large-scale features of future developments
by discerning statistical patterns inherent in large populations. Asimov, who was trained in chemistry (and was a Professor of Biochemistry at Boston
University), in fact used the analogy of a gas, where the
trajectory of any individual molecule is almost impossible to predict, although the behavior of a macroscopic volume is strictly constrained
by well-understood laws.
A large number of the statistical mechanics-inspired models for explaining economic phenomena appear to use the framework
of interacting spins. This is not surprising, given that spin models provide perhaps the simplest descriptions of
how order can emerge spontaneously out of disorder. An everyday instance of such
a self-organized order-disorder transition is exemplified by the so-called ``effect of a staring crowd''~\cite{Kikoin1978}.
Consider a usual city street where pedestrians walking along the sidewalk are each looking in different arbitrarily chosen directions.
This corresponds to a ``disordered'' situation, where each component is essentially acting independently and
no coordination is observed globally. If however a pedestrian at some point persistently keeps looking at a particular object
in her field of view (which corresponds to a fluctuation event arising through chance),
this action may induce other pedestrians to also do likewise - even though there may actually be nothing remarkable to look
at. Eventually, it may be that the gaze of almost all pedestrians will be aligned with each other and each of them will be staring
into the same point in space that is devoid of any intrinsic interest.
This situation will correspond to the spontaneous emergence of ``order'' through interactions between the components, i.e.,
as a result of the pedestrians responding almost
unconsciously to each other's actions. It is of course also possible to have everyone stare towards the same point by having
an out-of-the-ordinary event (a ``stimulus'') happen there. In this case, it will be the stimulus extrinsic to the pedestrians - rather than
interactions between
the individuals - that causes the transition from the disordered to ordered state.
The simplest of the spin models, the {\em Ising model}, was originally proposed to understand spontaneous magnetization
in ferromagnetic materials below a critical temperature. It assumed the existence of a large number of elementary spins,
each of which could orient in any one of two possible directions (``up'' or ``down'', say). Each spin is coupled to
neighboring spins through exchange interactions, which makes it energetically favorable for neighboring spin pairs to be both
oriented in the same direction. However, when the system is immersed in a finite temperature environment, thermal
fluctuations can provide spins with sufficient energy to override the cost associated with neighboring spins being oppositely aligned.
The spins could also be subject to the influence of an external field that will break the symmetry between the two
orientations and will make one of the directions preferable to the spins. By associating temperature to the degree of
noise or uncertainty among agents, field to any external influence on the agents and exchange coupling between spins
to interaction between individuals in their social milieu, it is easy to see that the Ising model can be employed to
quantitatively model a variety of social and economic situations involving a large number of interacting individuals.
Such modeling is particularly relevant when the question of interest involves qualitative changes that occur in the collective
behavior as different system parameters are varied. The nature of the transition may also be of much interest as external field-driven
ordering typically manifests as a first-order or discontinuous transition, while transitions orchestrated entirely through interactions between
the components have the characteristics of a second-order or continuous transition. As the latter is often associated with so-called
power laws, it is not unusual that these are often much sought after by physicists modeling social or economic phenomena
(sometimes to the puzzlement of economists).
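The interaction-driven ordering transition discussed above can be illustrated with a minimal Metropolis simulation of the two-dimensional Ising model; the lattice size, temperature and seed below are illustrative choices, not parameters of any specific social model:

```python
import numpy as np

rng = np.random.default_rng(1)
L, T, h = 32, 1.5, 0.0          # lattice size, temperature (J = k_B = 1), field
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins):
    """One Metropolis sweep: accept a single-spin flip with prob min(1, e^(-dE/T))."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nbr = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
               + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * (nbr + h)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for _ in range(200):
    sweep(spins)
M = spins.mean()                 # order parameter; |M| grows well below T_c ~ 2.27
```

Raising $T$ above $T_c \approx 2.27$ destroys the spontaneous ``consensus'', while switching on a nonzero $h$ drives ordering externally, the field-driven route rather than the spontaneous, interaction-driven one.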
The popularity of spin models in the econophysics community has however not percolated to mainstream social scientists, who,
probably justifiably, find such models to be overly simplified descriptions
of reality. Many economic and social phenomena are
therefore quantitatively described in terms of game theoretic models, where the strategic considerations of individuals, who
rationally choose between alternatives in order to maximize their utilities or payoffs, come to the fore.
However, such approaches have also been criticized as being based upon an idealized view of the capabilities of individual agents and
of the information that they have access to for making decisions. A complete description of aspects of economic life is possibly neither
provided by spin models nor by game-theoretic ones - but being two very different types of caricatures of reality, an attempt to integrate these
two approaches may provide us with a more nuanced understanding of the underlying phenomena.
With this aim in view, in the two following sections we describe in brief the essential framework of these two approaches that are
used to understand collective behavior in a population of agents. We show that despite differences, there are in fact many parallels
and analogies between spin model-based and game theoretic approaches to describing social phenomena.
We conclude with the suggestion that the statistical mechanics approach used at present may not be completely adequate for describing
strategic interactions between rational agents that are the domain of game theory. This calls for the development of a new formalism that will allow seamless
integration of statistical mechanics with game theory - which will possibly be the most enduring contribution of econophysics to the
scientific enterprise.
\section{Collective decision making by agents: Spins \ldots}
\label{sec:2}
We can motivate a series of models of the dynamics of collective decision making by agents that differ in terms of the
level of details or information resolution that one is willing to consider.
We begin by considering a group of $N$ agents, each of whom are faced with the problem of having to choose between a finite number of
possible options at each time step $t$, where the temporal evolution of the system is assumed to occur over discrete intervals.
To simplify matters we consider the special case of binary decisions in which the agents, for instance, simply choose between
``yes'' or ``no''. Thus, in the framework of statistical physics, the state of each agent (representing the choice made by it) can be mapped to
an Ising spin variable $S_i = \pm 1$. Just as spin orientations are influenced by the exchange interaction coupling with their neighbors in
the Ising model, agents take decisions that can, in principle, be based on the information regarding the choices made by other agents (with whom they
are directly connected over a social network) in the past - as well as the memory of its own previous choices.
If an agent needs to explicitly identify the specific choice made by each neighbor in order to take a decision, then this
constitutes the most detailed input information scenario. Here, each agent $i$ considers the choices made by its $k_i$ neighbors in
the social network of which it is a part (if its
own choices also need to be taken into account we may assume that it includes
itself in its set of neighbors). Furthermore, each agent $i$ has a memory of the choices made by its neighbors in the preceding $m_i$ time steps.
Thus, the agent, upon being presented with a history represented as a $m_i \times k_i$ binary matrix, has to choose between
$-1$ and $+1$. As there are $2^{m_i k_i}$ possible histories that the agent may need to confront, this calls for formulating
an input-output function $f_i$ for the agent that, given a string of $m_i k_i$ bits, can generate the probability that the agent
will make a particular choice, viz., Pr($S_i = +1$) = $f_i (\{\pm 1, \pm 1, \ldots \pm 1\}_{m_i k_i})$ and with
Pr($S_i = -1$)= $1 - $Pr($S_i = +1$).
In other words, the choice of each agent $i$ will be determined by a function whose domain is a $m_i k_i$-dimensional
hypercube and range is the unit interval $[0,1]$.
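A minimal sketch of this detailed-information scenario represents the function $f_i$ as a lookup table over all $2^{m_i k_i}$ binary histories; the table is drawn at random here, and the memory and neighborhood sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
m, k = 2, 3                      # memory depth and number of neighbours (illustrative)

# A strategy f_i is a lookup table over all 2^(m*k) binary histories, giving
# Pr(S_i = +1) for each one; here a table is drawn at random for illustration.
table = rng.random(2 ** (m * k))

def history_index(hist):
    """Encode an m-by-k matrix of +/-1 past choices as an integer in [0, 2^(m*k))."""
    bits = (np.asarray(hist).ravel() + 1) // 2
    return int(bits @ (2 ** np.arange(bits.size)))

def choose(hist):
    """Sample the agent's choice S_i given the observed history."""
    return 1 if rng.random() < table[history_index(hist)] else -1

hist = rng.choice([-1, 1], size=(m, k))
s = choose(hist)
```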
The above situation is simplified by assuming that agents do not know the exact identity of the choices made by each of
its neighbors but only have access to the aggregate information
as to how many chose a particular option, e.g., $+1$.
A natural extension of this is the scenario where,
instead of an explicit network, agents are considered to essentially interact with
the entire group. Such an effectively ``mean-field'' like situation (where pairwise interactions between spins are replaced by
a self-consistent field representing the averaged effect of interactions of a spin with the collective) will arise when, in particular,
an agent's choice is made on the basis of a global observable that is the record of the outcome of choices made by all agents.
For instance, one can model financial markets in this manner, with agents deciding whether to trade or not in an asset based
entirely on its price, a variable that is accessible to all agents and which changes depending on the aggregate choice behavior
of agents - with price rising if there is a net demand (more agents choose to buy than sell) and falling if the opposite is true
(more agents choose to sell than to buy). Thus, if $N_+$ and $N_-$ are the number of agents choosing $+1$ and $-1$, respectively,
then agents base their decision on their knowledge of the net number of agents who choose one option rather than the other,
i.e., $N_+ - N_- = \sum_i S_i = N M$, with $M$ being the magnetization or average value of spin state in the Ising model.
In this setting, the choice of the $i$th agent having memory (as stated above) is made using information about the value of $M$
in the preceding $m_i$ time steps. Therefore, the input-output function specifying the choice behavior of the agents maps
a string of $m$ continuous variables\footnote{We however note that as there are only $N$ agents
whose choices need to be summed, the relevant information can be expressed in $\log_2 (N+1)$ bits.} lying in the interval $[-1,1]$ to a probability for choosing a particular
option, viz., Pr($S_i = +1$) = $f_i (M_1, M_2, \ldots, M_m$) where $M_j$ is the value of magnetization $j$ time steps earlier.
One can view several agent-based models that seek to reproduce the stylized features of price movements in financial markets
as special cases of this framework, including the model proposed by Vikram {\em et al}~\cite{Vikram2011} that exhibits
heavy-tailed distributions for price fluctuations and trading volume which are quantitatively similar to that observed empirically,
as well as volatility clustering and multifractality.
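A toy version of such a mean-field market can be sketched as follows; the contrarian logistic rule standing in for $f_i(M_1, \ldots, M_m)$ and all parameter values are assumptions for illustration, not the specification of the model of Vikram {\em et al}:

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, steps = 500, 2, 1000           # agents, memory, time steps (illustrative)
noise = rng.normal(0, 0.1, size=N)   # fixed idiosyncratic biases of the agents

M_hist = [0.0] * m                   # past magnetizations M_1, ..., M_m
log_price = [0.0]
for t in range(steps):
    M_avg = np.mean(M_hist[-m:])
    # Contrarian logistic rule: a large recent net demand lowers Pr(buy).
    p_buy = 1.0 / (1.0 + np.exp(2.0 * (M_avg + noise)))
    S = np.where(rng.random(N) < p_buy, 1, -1)
    M = S.mean()
    M_hist.append(M)
    log_price.append(log_price[-1] + M)  # price rises with net demand, falls otherwise
returns = np.diff(log_price)
```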
A further simplification can be achieved upon constraining the function $f_i$ to
output binary values, so that Pr($S_i = +1$)
can only be either $0$ or $1$. The set of functional values realized for all possible values of the argument (i.e., all possible histories
that an agent can confront) which defines the {\em strategy} of the agent can, in this case, be written as a binary string of length
$2^{m \log_2 (N+1)} = (N+1)^m$. It is easy to see that the total number of possible distinct strategies is $2^{(N+1)^m}$.
In reality, of course, many of these possible strategies may not make much sense and one would be focusing on the
subset for which $f_i$ has some well-behaved properties such as monotonicity. To simplify the situation even more,
the granularity of the information on choices made in the past can be reduced~\cite{Sasidevan2018}. In the most extreme case, the information
about the aggregate or net choice of agents at a particular instant can be reduced to a single bit, viz., sign($M_j$) instead
of $M_j$. This will be the case, for instance, when one only knows whether a particular option was chosen by the majority or not,
and not how many opted for that choice. The number of possible different histories that an agent may confront is only $2^m$
in this situation and thus, the total number of possible strategies is $2^{2^m}$. The well-known {\em Minority Game}~\cite{Moro2004} can be seen
as a special case of this simplified formalism. It is the very antithesis of a coordination game, with each agent trying to
be contrary to the majority. In other words, each agent is aiming to use those $f_i$ which would ensure $S_i \times$ sign($M$) $= -1$.
In the detailed input information scenario described above, a Minority Game (MG) like setting will translate into an Ising model
defined over a network, where connected spin pairs have anti-ferromagnetic interactions with each other. Such a situation
will correspond to a highly frustrated system, where the large number of energy minima would correspond to the
various possible efficient solutions of the game. However, if the system remains at any particular equilibrium for all time,
this will not be a fair solution as certain individuals will always form the minority and thus get benefits at the expense
of others. A possible resolution that may make it both efficient and fair is to allow for fluctuations that will force
the collective state to move continuously from one minima to another, without settling down into any single one for a very long time.
An important feature of the MG is the ability of agents to adapt their strategies, i.e., by evaluating at each time step the performance
or payoff obtained by using each of the strategies, the agent can switch between strategies in order to maximize payoff. One can ask how the
introduction of ``learning'' into the detailed input information scenario will affect the collective dynamics of the system.
In the classical MG setting, each agent begins by randomly sampling a small number of $f$s (typically $2$) from the set of all possible
input-output functions and then scores each of them based on their performance against the input at each time step, thereafter choosing the
one with the highest score for the next round. In the detailed information setting, we need to take into account that an agent will need
to consider the interaction strength it has with each of its neighbors in the social network it is part of. Thus, agents could adapt based
on their performance not just by altering strategy but also by varying the importance
that they associate with information arriving from their
different neighbors (quantified in terms of weighted links). Hence, link weight update dynamics could supplement (or even replace) the
standard strategy scoring mechanism
used by agents to improve their payoffs in this case. For example, an agent may strengthen links with those neighbors whose past choices
have been successful (i.e., they were part of the minority) while weakening links with those who were unsuccessful.
Alternatively, if agent $i$ happened to choose $S_i$ correctly, i.e., so as to have a sign opposite to
that of sign($M$), while its neighbor agent $j$ chose wrongly, learning may lead to the link from $j$ to $i$
becoming positive (inducing $j$ to copy the choice made by $i$ in the future) while the link from $i$ to $j$ becomes negative (suggesting
that $i$ will choose the opposite of what $j$ has chosen).
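The classical Minority Game with virtual strategy scoring, as described above, can be sketched in a few lines (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, S_n = 101, 3, 2            # odd N so a strict minority always exists
P = 2 ** m                       # number of distinct binary histories
# Each agent holds S_n fixed random strategies: history index -> choice in {-1, +1}.
strategies = rng.choice([-1, 1], size=(N, S_n, P))
scores = np.zeros((N, S_n))
history = 0                      # the last m minority outcomes, encoded as bits

attendance = []
for t in range(2000):
    best = scores.argmax(axis=1)                    # play the best-scoring strategy
    actions = strategies[np.arange(N), best, history]
    A = int(actions.sum())
    minority = -1 if A > 0 else 1                   # the minority side wins
    # Virtual scoring: every strategy gains a point if it chose the minority side.
    scores += (strategies[:, :, history] == minority)
    history = ((history << 1) | (1 if minority == 1 else 0)) % P
    attendance.append(A)

sigma2 = np.var(attendance[500:])    # volatility of the attendance A(t)
```

Scanning $\sigma^2/N$ as a function of $2^m/N$ traces out the well-known efficient and inefficient phases of the game.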
It may be worth noting in this context that the role of a link weight update rule on collective dynamics has been investigated
in the context of spin models earlier, although in the different context of coordination where agents prefer to make similar choices
as their neighbors~\cite{Singh2014}. Using a learning rule that is motivated by the Hebbian weight update dynamics that is often used
to train artificial recurrent neural network models, it has been seen that depending on the rate at which link weights adapt (relative
to the spin state update timescale) and the degree of noise in the system, one could have an extremely high diversity in the time required to
converge to {\em structural balance}
(corresponding to spins spontaneously segregating into two clusters, such that within each cluster all interactions are ferromagnetic and
all interactions between spins belonging to different clusters are anti-ferromagnetic) from an initially frustrated system.
It is intriguing to speculate as to what will be observed if instead the learning dynamics tries to make the spins mis-align with
their neighbors, which would be closer to the situation of MG.
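A minimal sketch of such coevolving link weights, here with the spins held fixed (the slow-spin limit) and a Hebbian-like update, shows how the weight signs settle into a structurally balanced configuration; the update rule and parameters are illustrative assumptions, not the model of the cited work:

```python
import numpy as np

rng = np.random.default_rng(5)
N, eps = 12, 0.1
S = rng.choice([-1, 1], size=N)        # spin states, held fixed (slow-spin limit)
W = rng.uniform(-1, 1, size=(N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

# Hebbian-like adaptation: each weight drifts toward the product S_i S_j, so
# aligned pairs become ferromagnetic and anti-aligned pairs antiferromagnetic.
for _ in range(100):
    W += eps * np.outer(S, S)
    np.fill_diagonal(W, 0)

# Structural balance: the product of link signs around every triangle is positive.
sgn = np.sign(W)
balanced = all(sgn[i, j] * sgn[j, k] * sgn[k, i] > 0
               for i in range(N) for j in range(i + 1, N) for k in range(j + 1, N))
```

Once the weight signs lock onto $S_i S_j$, the network splits into two internally ferromagnetic clusters (the up and the down spins) joined by antiferromagnetic links, i.e., a balanced state.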
\section{Collective decision making by agents: \ldots and Games}
\label{sec:3}
We now shift our focus from the relatively simpler spin-model inspired descriptions of collective behavior of agents to
those that explicitly incorporate strategic considerations in the decision-making of agents. Not surprisingly, this often
involves using ideas from game theory. Developed by John von Neumann in the early part of the $20$th century, the
mathematical theory of games provides a rigorous framework to describe decision-making by ``rational'' agents.
It appears intuitive that the states of binary Ising-like spins can be mapped to the different choices of agents when they
are only allowed to opt between two possible actions. We will refer to the two options available to each agent as action A
and action B, respectively (e.g., in the case of the game Prisoners' Dilemma, these will correspond to ``cooperation''
and ``defection'', respectively). However, unlike in spin models, in the case of games it is difficult to see in general that
the choices of actions by agents are somehow minimizing an energy function describing the global state of the system.
This is because instead of trying to maximize the total payoff for the entire population of agents, each agent (corresponding
to a ``spin'') is only trying to maximize its own expected payoff - sometimes at the cost of others. Possibly the only exception
is the class of Potential Games, wherein one can, in principle, express the desire of every agent to alter their action
using a global function, viz., the ``potential'' function for the entire system.
Let us take a somewhat more detailed look into the analogy. For a spin-model, one can write down the effective time-evolution
behavior for each spin from the energy function as the laws of physics dictate that at each time step the spins will try to
adopt the orientation that will allow the system as a whole to travel ``downhill'' along the landscape defined by the
energy function $$E = - \sum_{ij} J_{ij} S_i S_j - h \sum_i S_i.$$ Here, $J_{ij}$ refers to the strength of interaction between
spins $i$ and $j$, the summation $\sum_{ij}$ is performed over neighboring spin pairs and $h$ refers to an external field.
In the absence of any thermal fluctuations (i.e., at zero temperature), it is easy to see that the state of each spin
will be updated according to $$S_i (t+1) = {\rm sign} (\sum_j J_{ij} S_j + h).$$
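This zero-temperature rule can be checked directly: with the field term taken as $-h\sum_i S_i$, asynchronous sign updates never increase the energy. The random couplings and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
N, h = 20, 0.1                   # illustrative system size and field
J = rng.normal(size=(N, N))
J = (J + J.T) / 2                # symmetric couplings
np.fill_diagonal(J, 0)
S = rng.choice([-1, 1], size=N)

def energy(S):
    return -0.5 * S @ J @ S - h * S.sum()

# Asynchronous zero-temperature dynamics: each update sets a randomly chosen
# spin to the sign of its local field, which can never raise the energy.
energies = [energy(S)]
for _ in range(200):
    i = rng.integers(N)
    S[i] = 1 if J[i] @ S + h > 0 else -1
    energies.append(energy(S))
```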
For the case of a symmetric 2-person game, the total utility resulting from the choice of actions made by a group of agents
whose collective behavior can be decomposed into independent dyadic interactions, will be given by $$U = R f_{AA} + P f_{BB} + (S+T) f_{AB}.$$
Here $R$ and $P$ refer to the payoffs obtained by two agents when both choose A or both choose B, respectively, while if one chooses
A and the other chooses B, the former will receive $S$ while the latter will receive $T$. The variables $f_{AA}$, $f_{BB}$ and $f_{AB}$
refer to the fraction of agent pairs who both choose A, or both choose B, or where one chooses A while the other chooses B, respectively.
On the other hand, for an individual agent the payoff is expressed as
$$U_i = \sum_j \left[\, p_i p_j R + p_i (1-p_j) S + (1-p_i) p_j T + (1-p_i)(1-p_j) P \,\right],$$ where $p_i, p_j$ refer to the probabilities of agents $i$ and
$j$, respectively, to choose action A. As an agent $i$ can only alter its own strategy by varying $p_i$, it will evaluate $\partial U_i/\partial p_i$
and increment or decrement $p_i$ so as to maximize $U_i$, eventually reaching an equilibrium.
Different solution concepts will be manifested according to the different ways an agent can model the possible strategy $p_j$ used
by its opponent $j$
(which is of course unknown to agent $i$). Thus, in order to solve the above equation, agent $i$ actually replaces the variable
$p_j$ by its assumption $\hat{p_j}$ about that strategy.
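As an illustration, one can iterate gradient dynamics in which each agent adopts the opponent's current mixed strategy as its estimate $\hat{p}_j$; the Prisoners' Dilemma payoffs below are a standard illustrative choice:

```python
import numpy as np

# Symmetric 2x2 payoffs; Prisoners' Dilemma values chosen purely for illustration.
R, S_pay, T, P = 3.0, 0.0, 5.0, 1.0

def dU_dp(p_other):
    """dU_i/dp_i per opponent, from U_i = p_i p_j R + p_i(1-p_j)S + (1-p_i)p_j T + (1-p_i)(1-p_j)P."""
    return p_other * (R - S_pay - T + P) + (S_pay - P)

p = np.array([0.9, 0.8])         # initial Pr(action A) for the two agents
eta = 0.05
for _ in range(500):
    # Each agent treats the opponent's current mixed strategy as its estimate p_hat_j.
    grads = np.array([dU_dp(p[1]), dU_dp(p[0])])
    p = np.clip(p + eta * grads, 0.0, 1.0)
```

For these payoffs the gradient is negative for every value of $\hat{p}_j$, so both probabilities flow to zero: mutual defection, the Nash equilibrium of the one-shot game.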
In the conventional Nash solution framework, the agent is agnostic about its opponent's strategy so that even $\hat{p_j}$ is an unknown.
To physicists, this approach may sound similar to that of a maximum entropy formalism where the solution is obtained with the least
amount of prior knowledge about the situation at hand.
However, advances in cognitive science and attempts to develop artificial intelligence capable of semi-human performance in various
tasks have made us aware that human subjects rarely approach a situation where they have to anticipate their opponent's move
with a complete `blank slate' (so to say). Even if the opponent is an individual who the subject is encountering for the first time,
she is likely to employ a {\em theory of mind} to try to guess the strategy of the opponent. Thus, for example, a goalie facing a
penalty kick will make a decision as to whether to jump to the left or the right as soon as the kick is taken (human response time
is too slow for it to make sense for the goalie to wait until she actually sees which direction the ball is kicked) by trying to simulate
within her mind the thought process of the player taking the kick. In turn, the player
taking the penalty kick is also attempting to guess whether the goalie
is more likely to jump towards the left or the right, and will, so to say,
try to ``get inside the mind'' of the goalie. Each player is, of course,
aware that the other player is trying to figure out what she is thinking and will
take this into account in their theory of mind of the opponent.
A little
reflection will make it apparent that this process will ultimately lead to an
infinite regress where each individual is modeling the thought process of
the opponent simulating her own thought process, to figure out what the
opponent might be thinking, and so on and so forth (Fig.~\ref{fig:2}).
\begin{figure}[tbp]
\includegraphics[width=.99\linewidth]{Coaction_schema.png}
\caption{A schematic diagram illustrating the infinite regress of theories of mind
(viz., ``she thinks that I think that she thinks that I think that \ldots'')
that two opponents use to guess the action that the other will choose.
Figure adapted from a drawing of the cover of the {\em Division Bell} music
album of Pink Floyd designed by
Storm Thorgerson based on illustrations by Keith Breeden.}
\label{fig:2}
\end{figure}
The co-action solution framework~\cite{Sasidevan2015, Sasidevan2016} solves
the problem of how agents decide their strategy while taking into account
the strategic considerations of their opponent
by assuming that if both agents
are rational, then regardless of what exact steps are used by each to arrive
at the solution, they
will eventually converge to the same strategy. Thus, in
this framework, $\hat{p_j} = p_i$. This results in solutions that often
differ drastically from those obtained in the Nash framework. For example,
let us consider the case of the 2-person {\em Prisoners' Dilemma} (PD), a well-known
instance of a {\em social dilemma}. Here, the action chosen by each of the agents
in order to maximize their individual payoffs paradoxically results in
both of them ending up with a much inferior outcome than would have been
obtained with an alternative set of choices. In PD, each agent has the choice
of either cooperation (C: action A) or defection (D: action B) and the payoffs
for each possible pair of actions chosen by the two (viz., CC, DD, CD or DC) have the hierarchical
relation $T > R > P > S$. The value of the payoff $T$ is said to
quantify the temptation of an agent for unilateral defection, while $R$ is
the reward for mutual cooperation, $P$ is the penalty paid when both agents
choose defection and $S$ is the so-called ``sucker's payoff'' obtained
by the agent whose decision to cooperate has been met with defection by its opponent.
Other symmetric 2-person games can be defined upon altering the hierarchy among the values of the different
payoffs. Thus, $T > R > S > P$ characterizes a game referred to as {\em Chicken} (alternatively referred to as
{\em Hawk-Dove} or {\em Snowdrift}) that has been used extensively to model phenomena ranging from nuclear sabre-rattling
between nations (with the prospect of mutually assured destruction) to evolutionary biology.
Another frequently studied game called {\em Stag Hunt}, which is used to
analyze social situations that require agents to coordinate their actions in order
to achieve maximum payoff, is obtained when $R > T \geq P > S$.
In the Nash framework, the only solution to a one-shot PD (i.e., when the game is played only once) is for both agents to choose defection.
As is easily seen, they therefore end up with $P$, whereas if they had both cooperated they would have received $R$ which is a higher
payoff. This represents the dilemma illustrated by the game, namely that choosing to act in a way which appears to be optimal for the individual
may actually yield a sub-optimal result for both players. Indeed,
when human subjects are asked to play this game with each other, they are often seen to
instinctively choose cooperation over defection. While this may be explained by assuming irrationality on the part of
the human players, it is worth noting that the apparently naive behavior on the part of the players actually allows them to
obtain a higher payoff than they would have received had they been strictly ``rational'' in the Nash sense.
In fact, the rather myopic interpretation of rationality in the Nash perspective may be indicative of more fundamental
issues. As has been pointed out in Ref.~\cite{Sasidevan2015}, there is
a contradiction between the two assumptions underlying the Nash solution, viz., (i) the players are aware that they are both
equally rational and (ii) that each agent is capable of {\em unilateral deviation}, i.e., to choose an action that
is independent of what its opponent does. The co-action framework resolves this by noting that
if a player knows that the other is just as rational as her, she will take this into
account and thus realize that both will eventually use the same strategy (if not the same action, as in the case of a mixed strategy).
Therefore, cooperation is much more likely in the solution of PD in the co-action framework, which is in line with empirical observations.
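The contrast between the two frameworks can be made concrete with a small numerical sketch (our own construction; the payoff values and the gradient-ascent scheme are illustrative assumptions, not taken from the cited works). Under the Nash assumption, the agent treats $\hat{p_j}$ as fixed when differentiating $U_i$, whereas under the co-action assumption $\hat{p_j} = p_i$, so the agent takes the total derivative of $U_i(p_i, p_i)$ with respect to $p_i$:

```python
# Illustrative sketch: gradient ascent on the expected payoff of a symmetric
# 2-person game under two different modeling assumptions about the opponent.
# The payoff values below are conventional PD choices (T > R > P > S), assumed here.

def dU_nash(p_hat, R, S, T, P):
    # Partial derivative of U_i w.r.t. p_i, with the opponent's p_hat held fixed.
    return p_hat * (R - T) + (1.0 - p_hat) * (S - P)

def dU_coaction(p, R, S, T, P):
    # Total derivative of U(p, p): the co-action assumption p_hat = p.
    return 2 * p * R + (1 - 2 * p) * (S + T) - 2 * (1 - p) * P

def ascend(grad, p0=0.5, lr=0.01, steps=5000):
    p = p0
    for _ in range(steps):
        p = min(1.0, max(0.0, p + lr * grad(p)))  # clip to [0, 1]
    return p

R, S, T, P = 3, 0, 5, 1  # Prisoner's Dilemma
p_nash = ascend(lambda p: dU_nash(p, R, S, T, P))    # converges to 0: defection
p_co = ascend(lambda p: dU_coaction(p, R, S, T, P))  # converges to 1: cooperation
```

Since $R - T < 0$ and $S - P < 0$ in PD, the Nash gradient is negative for every assumed $\hat{p_j}$, driving $p_i$ to $0$; for these payoff values the co-action gradient instead drives $p_i$ to $1$.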
A much richer set of possibilities emerges when one allows the game to be played repeatedly between the same set of agents.
In this iterative version of PD (IPD), mutual defection is no longer the only solution even in the Nash framework, because
agents need to now take into account the history of prior interactions with their opponents. Thus, direct reciprocity between
agents where, for example, an act of cooperation by an agent in a particular round is matched by a reciprocating act of cooperation by its
opponent in the next round, can help in maintaining cooperation in the face of the ever-present temptation towards unilateral defection.
Indeed, folk theorems indicate that mutual cooperation is a possible equilibrium solution of the infinitely repeated IPD.
Multiple reciprocal strategies, such as ``tit-for-tat'' and ``win-stay, lose-shift'' have been devised and their performance
tested in computer tournaments for PD. Intriguingly, it has been shown that when
repeated interactions are allowed between rational agents, the co-action solution is for agents to adopt a Pavlov strategy.
In this, an agent sticks to its previous choice if it has been able to achieve a sufficiently high payoff but alters the choice if it
receives a low payoff, which allows robust cooperation to emerge and maintain itself~\cite{Sasidevan2016}.
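A minimal sketch of the Pavlov rule is given below (our own illustrative construction; the aspiration level and payoff values are assumptions on our part):

```python
# Illustrative sketch of the Pavlov ("win-stay, lose-shift") rule in iterated PD:
# an agent keeps its previous action if the last payoff met an aspiration level,
# and switches otherwise.

def pavlov(prev_action, last_payoff, aspiration):
    if last_payoff >= aspiration:
        return prev_action                          # win-stay
    return 'D' if prev_action == 'C' else 'C'       # lose-shift

def play_ipd(a, b, rounds, R=3, S=0, T=5, P=1, aspiration=3):
    payoffs = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
               ('D', 'C'): (T, S), ('D', 'D'): (P, P)}
    history = []
    for _ in range(rounds):
        pa, pb = payoffs[(a, b)]
        history.append((a, b))
        a = pavlov(a, pa, aspiration)
        b = pavlov(b, pb, aspiration)
    return history

# Two Pavlov players starting from mutual defection recover cooperation:
h = play_ipd('D', 'D', rounds=6)
```

Starting from mutual defection, both agents receive the low payoff $P$, shift simultaneously to cooperation, and then stay there, illustrating how the rule lets robust cooperation emerge and maintain itself.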
Moving beyond dyadic interactions to general $N$-person games, the analysis of
situations where an agent simultaneously interacts with multiple neighbors
can become a formidable task, especially with increasing number of agents.
Thus, one may need to simplify the problem considerably in order to investigate
collective dynamics of a group of rational agents having strategic interactions
with each other. One possible approach - which deviates somewhat from the
strictly rational nature of the agents - invokes the concept of
{\em bounded rationality}. Here, the ability of an agent to find the optimal
strategy that will maximize its payoff is
constrained by its cognitive capabilities and/or the nature of the
information it has access to.
A notable example of such an approach is the model proposed by Martin Nowak and
Robert May~\cite{Nowak1992},
where a large number of agents, arranged on a lattice,
simultaneously engage in PD
with all their neighbors in an iterative fashion. As in the conventional
2-player iterated PD, each agent may choose to either cooperate or defect at each
round, but with the difference that each agent nominates a single
action that it uses in its interactions with all of its neighbors. At the end
of each round, each agent accumulates the total payoff received from its
interactions and compares it with those of its neighbors. It then copies the action
of the neighbor having the highest payoff to use in the next round.
Note that each
agent only has access to information regarding the decisions of agents
in a local region, viz. its topological neighborhood, and hence the
nature of the collective dynamics is
intrinsically dependent on the structure of the underlying connection
network. Nowak and May demonstrated that the model can
sustain a non-zero fraction of cooperating agents, even after a very
large number of rounds. In other words, limiting interactions to an agent's
network neighborhood may
allow cooperation to remain a viable outcome - a concept that has been referred
to as {\em network reciprocity}.
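The model lends itself to a compact simulation. The sketch below is our own illustrative implementation (the lattice size, the common parametrization $R=1$, $S=P=0$, $T=b$, and deterministic tie-breaking are assumptions on our part, not specified in the text):

```python
import numpy as np

# Illustrative sketch of the Nowak-May spatial PD: agents on an L x L lattice
# with periodic boundaries and 8 neighbors each. actions: 1 = cooperate, 0 = defect.

def step(actions, T, R=1.0, P=0.0, S=0.0):
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
              if (di, dj) != (0, 0)]
    payoff = np.zeros(actions.shape, dtype=float)
    for di, dj in shifts:  # accumulate payoff against every neighbor
        nb = np.roll(np.roll(actions, di, axis=0), dj, axis=1)
        payoff += np.where(actions == 1,
                           np.where(nb == 1, R, S),
                           np.where(nb == 1, T, P))
    # 'Imitate the best': copy the action of the neighborhood's top scorer
    # (oneself included; ties deterministically keep the current action).
    best_payoff, best_action = payoff.copy(), actions.copy()
    for di, dj in shifts:
        nb_pay = np.roll(np.roll(payoff, di, axis=0), dj, axis=1)
        nb_act = np.roll(np.roll(actions, di, axis=0), dj, axis=1)
        better = nb_pay > best_payoff
        best_payoff = np.where(better, nb_pay, best_payoff)
        best_action = np.where(better, nb_act, best_action)
    return best_action

rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.9).astype(int)  # start with ~90% cooperators
for _ in range(50):
    A = step(A, T=1.3)
frac_C = A.mean()  # fraction of cooperators after 50 rounds
```

Note that the homogeneous configurations are absorbing for this update: once all agents cooperate (or all defect), no other action remains in the population to copy.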
The model described above has been extremely influential, particularly
in the physics community, where it has motivated
a large number of studies that have built upon the basic framework provided
by Nowak and May.
Beyond the implications for how cooperation can be sustained in a population
of selfish individuals,
these studies have revealed tantalizing links between game theory and
statistical physics. For instance, by considering the distinct
collective dynamical regimes as phases, one may describe the switching
between these regimes in terms of non-equilibrium phase transitions.
The non-equilibrium nature is manifest from the breakdown of detailed balance
(where the transition rate from one state to another is exactly matched
by that of the reverse process) because of the existence of absorbing
states. These states are defined by cessation of further evolution once
they are reached by the system and correspond to either all agents being
cooperators or all being defectors. The system cannot escape these states
as agents can only copy actions that are still extant in the population.
While Nowak and May had considered a deterministic updating procedure
(viz., the `imitate the best' rule described above), there have been
several variants that have incorporated the effect of
uncertainty in an agent's decision-making process. One of the most
commonly used approaches is to allow each agent $i$ to choose a
neighbor $j$ at random and copy its action with a probability given
by the Fermi distribution function:
\[
\Pi_{i\rightarrow j}=\frac{1}{1+\exp(-(\pi_{j}-\pi_{i})/K)}\,,
\]
where $\pi_{i}$ and $\pi_{j}$ are, respectively, the total payoffs
received by agents $i$ and $j$ in the previous round, and $K$ is the
effective temperature or noise in the decision-making process~\cite{Szabo1998}. The
utility of this function is that it allows one to smoothly interpolate
between a deterministic situation in the limit $K\rightarrow 0$ (viz.,
agent $i$ will copy agent $j$ if $\pi_{j}>\pi_{i}$) and a completely
random situation in the limit $K\rightarrow\infty$ (viz., agent $i$
will effectively toss a coin to decide whether to copy agent $j$).
Implementing this scheme in a population of agents whose interactions are
governed by different connection topologies allows us to investigate
the spectrum of collective dynamical states that arise, and the transitions
between them that take place upon varying system parameters~\cite{Menon2018}.
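As a sketch of this update rule (the numerically stable logistic form below is our own choice), the copy probability and its two limits can be checked directly:

```python
import math

# Illustrative implementation of the Fermi copy probability
# Pi_{i->j} = 1 / (1 + exp(-(pi_j - pi_i)/K)), written in a numerically
# stable form to avoid overflow at very small K.

def fermi_copy_prob(pi_i, pi_j, K):
    x = (pi_j - pi_i) / K
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

# K -> 0: deterministic step; K -> infinity: a fair coin toss.
p_det = fermi_copy_prob(1.0, 2.0, K=1e-9)   # ~1: copy the better neighbor
p_coin = fermi_copy_prob(1.0, 2.0, K=1e9)   # ~0.5: random copying
```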
\begin{figure}[tbp]
\includegraphics[width=.99\linewidth]{games_K_T_space_schematic.png}
\caption{Schematic parameter space diagrams illustrating the dependence on the contact network
structure of the collective dynamics of a system of agents that
synchronously evolve their states (representing actions) through strategic interactions with
their neighbors. Each agent in the system adopts one of two possible actions
at each round, viz. cooperate or defect, and receives an accumulated
payoff based on each of their neighbor’s choice of action. The agents
update their action at every round by choosing a neighbor at random
and copying their action with a probability that is given by a Fermi
function, where the level of temperature (noise) is controlled by the
parameter $K$. The broken horizontal line in both panels corresponds to the case
where the temptation $T$ (payoff for choosing defection when the other agent has chosen cooperation) is equal to the reward $R$ for
mutual cooperation. Hence the region above the line corresponds to the case
where agents play the Prisoner’s Dilemma game, while that below
corresponds to the case where they play the Stag Hunt game.
Note that the temptation $T$ can be viewed as a field, in analogy to
spin systems, as its value biases an agent's preference for which
action to choose.
The three
regimes displayed in each case correspond to situations where the
system converges to a state where all the agents cooperate (``all
C''), all agents choose defection (``all D'') or the states of the agents
fluctuate over time (``F''). We note that the region corresponding to fluctuations appears to
comprise two large segments connected by a narrow strip. However, the nature of the
collective behavior is qualitatively different in the two segments,
as the dynamics observed for large $K$ can be understood as arising
due to extremely long transience as a result of noise. The left panel
displays the regimes obtained when agents are placed on a two-dimensional lattice,
where each agent has $8$ neighbors, while the right panel displays
the situation where agents are placed on a homogeneous random network
where all nodes have $8$ neighbors. The difference in the collective
dynamics between the two scenarios is most noticeable at intermediate
values of $K$, where the system can converge to an all C state even in
the Prisoner’s Dilemma regime.}
\label{fig:1}
\end{figure}
Fig.~\ref{fig:1} shows the different collective states of the system that occur at various regions of the ($K,T$) parameter space.
It is tempting to compare this with the phase diagrams obtained by varying the temperature and external field in spin systems.
First, the state of an agent, i.e., the action chosen by it at a particular time instant, can be mapped to a spin orientation - e.g.,
if the $i$th agent chooses cooperation, then the corresponding spin state can be designated $S_i = +1$, whereas $S_i = -1$
implies that the agent has chosen defection.
Typically, there is symmetry between the two orientations $\{-1,+1\}$ that a spin can adopt. However, in games such as PD
one of the actions may be preferable to another under all circumstances (e.g., unconditional defection or $p=0$ is the dominant
strategy in PD). This implies the existence of an effective external field, whose magnitude is linearly related to the ratio of the
temptation for defection and reward for cooperation payoffs, viz., $1-(T/R)$, that results in one of the action choices being more
likely to be adopted by an agent than another. We also have noise in the state update dynamics of the agents as, for a finite
value of $K$, an agent stochastically decides whether to adopt the action of a randomly selected neighbor who has a higher
total payoff than it. This is not unlike the situation where spins can
sometimes switch to energetically unfavorable orientations because of thermal
fluctuations,
when the system is in a finite temperature environment.
Analogous to ordered states in spin systems (corresponding to the spins being aligned), we
have the collective states all C (all agents choose to cooperate) or all D (all agents have chosen defection),
and similar to a disordered state we observe that the collective dynamics of agents can converge to a fluctuating state F
where agents keep switching between cooperation and defection. Just as the phases in spin systems are
distinguished using an order parameter, namely the magnetization per spin $m= \sum_i S_i /N \in [-1,1]$, we can define
an analogous quantity $2 f_C - 1$, which is a function of the key observable for the system of agents, viz., the fraction
of agents who are cooperating at any given time $f_C$. As for $m$, the value of this quantity is bounded between $-1$
(all D) and $+1$ (all C), with the F state yielding values close to $0$ provided sufficient averaging is done
over time.
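The correspondence between the two order parameters is exact under the mapping C $\rightarrow +1$, D $\rightarrow -1$, as a short check confirms (the function names are our own):

```python
# Illustrative check: the magnetization per spin equals 2 f_C - 1 under the
# mapping cooperate -> +1, defect -> -1.

def magnetization(spins):
    return sum(spins) / len(spins)

def coop_order_parameter(actions):  # actions: 1 = C, 0 = D
    f_C = sum(actions) / len(actions)
    return 2 * f_C - 1

actions = [1, 0, 1, 1, 0, 1, 1, 1]    # a mixed population
spins = [2 * a - 1 for a in actions]  # C -> +1, D -> -1
```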
Note that despite this analogy between the parameters (viz., temperature/noise and field/payoff bias) governing the
collective dynamics of spin systems and that of a population of agents that exhibit
strategic interactions with each other,
there are in fact significant differences between the two. As is manifest from Fig.~\ref{fig:1}, an increase in the noise $K$
does not quite have the same meaning as raising the temperature in spin systems. Unlike the latter situation,
agents do not flip from cooperation to defection with equal probability as the temperature/noise increases.
Instead, with equal probability agents either adopt the action chosen by a randomly selected neighbor or stick
to their current action state. Not surprisingly, this implies that all C and all D states will be stable (for different
values of the field $T$, the payoff value corresponding to temptation for unilateral defection, relative to the reward for
mutual cooperation) even when $K$ diverges.
In addition, even in the absence of noise (i.e., at $K=0$) we observe that agents can keep switching between different
actions. In other words, unlike the situation in spin systems at zero temperature, the system will keep evolving dynamically.
When an agent determines that a randomly selected neighbor has higher total payoff than it, the agent will
switch to the action chosen by its neighbor deterministically. Therefore, if there is a coexistence of cooperation and
defection states there will be switching between these two actions - thereby ensuring the existence
of the fluctuating state at $K=0$.
Spin systems are also characterized by coarsening dynamics, wherein spins of
similar orientation coalesce over time to
form domains. The existence of such domains in a spin system, whereby spins of
opposite orientations can coexist even in the ordered
phase, means that even at low temperatures the global magnetization of a sufficiently large system can yield quite small values.
This happens not because order is absent but because of the
coexistence of ordered regions that happen
to be oppositely aligned. At the boundary of two such domains,
the existence of spin pairs that are oppositely aligned means
that there is an energy cost which increases with the perimeter of the boundary. Thus, energy minimization will result in the
boundaries becoming smoother over time, with the shapes of the domains eventually stabilizing.
Agents on lattices or networks
will also exhibit the spontaneous formation of domains or clusters of interacting agents who have chosen the same action.
Indeed, in order to maintain cooperation in the system for any length of time (in the presence of defectors), the cooperators
will have to form clusters. Within these clusters agents receive a sufficiently
high payoff from cooperating neighbors to prevent them from switching
to defection, despite the potential for being exploited by any neighbor that
chooses to defect.
However, the collective dynamics leads to a form of ``anti-coarsening". This is because agents choosing defection would like to
be surrounded by as many cooperating agents as possible in order to maximize their payoff, so that the boundary between
groups of cooperators and defectors will tend to develop kinks and corners over time, instead of becoming smoother as in
the case of spins. Furthermore, as the cooperators would tend to prefer as few defectors as possible at neighboring positions,
we would observe ceaseless flux in the shape of the domain boundaries unless the system eventually converges to
any one of the two absorbing states, all C or all D.
As already mentioned, the mechanism of agents copying the action of neighbors who are more successful than them - although
helping to simplify the dynamics - is somewhat unsatisfactory, as the agents are no longer strictly rational. For instance, if the
collective dynamics results in the system converging to the all C absorbing state, all agents will always cooperate with each other
from that time onwards, as there is no agent left to copy the defection action from. Yet, in a one-shot PD game, defection is always the dominant
strategy as will be realized by any agent who is being ``rational'' and works out the implications of its action in light of the payoff
matrix (instead of blindly copying its neighbor). Of course, in the iterated PD, it is no longer true that unconditional defection is the best
strategy~\cite{Axelrod1984}. Nevertheless, an all C state is highly unstable
as it provides a lucrative target for
agents who choose to defect, knowing that they will reap an extremely high payoff
at the expense of the cooperators. One possible way to prevent global cooperation
from being an absorbing state
in the modeling framework described above is to introduce a mutation probability.
This will allow agents to spontaneously switch
to a particular action with a low probability, independent of whether any of their more successful neighbors is using it or not.
This will ensure that even if a population has reached an all C state, it need not remain there always.
A more innovative
approach that re-introduces the essential rationality of agents in the context of studying the collective dynamics of a large number of agents
interacting over a social network has been introduced by Sharma {\em et al.}~\cite{Sharma2019}. Although formulated in the specific
context of agents making rational decisions as to whether to get vaccinated (based on information about the incidence of a disease and
knowledge of how many neighbors have already gotten vaccinated), the framework can be generally applied to understand many
possible situations in which a large number of agents make strategic decisions through
interactions with other agents. In this approach, each agent plays a symmetric
2-person game with its ``virtual self'', rather than with any of its neighbors,
in order to decide its action. The interaction with neighbors is introduced by making specific entries in the payoff
matrix that an agent uses for its decision process into functions of the number of its neighbors who have chosen a particular action.
Thus, in the context of vaccination, if all its neighbors have already chosen to vaccinate themselves, an agent is already protected
from disease and is most likely to choose not to get vaccinated (thereby avoiding any real or imagined cost associated with
vaccination, e.g., perceived side-effects). As the neighborhood of each agent is different (in general) when considering either
a lattice or a network, this means that each agent is playing a distinct game. Not only will the games played by different agents
differ quantitatively (i.e., in terms of the payoffs of the game) but also qualitatively. Thus, for instance, one agent may be playing what
is in effect PD while another may be playing Chicken. Initial explorations suggest that such spatio-temporal variation of strategies
may give rise to a rich variety of collective dynamical phenomena, which have implications for problems as diverse as designing
voluntary vaccination programs so as to have maximum penetration in a population and predicting voter turnout in elections.
\section{In lieu of a Conclusion}
\label{sec:4}
The brief presentation in this chapter of several approaches towards understanding the collective dynamics of a population of
interacting agents, by using both physics-inspired
spin models and game theoretic models of rational individuals making
strategic decisions, has hopefully made it clear that there are clear parallels and analogies between the two
frameworks. Although both are at best caricatures of reality, albeit of different types, comparing and contrasting
between the results generated by both of these approaches should help us understand better how and why large groups
or crowds behave in certain ways. While physicists may harbor the hope of revolutionizing the understanding of
society through the use of simple models of self-organizing phenomena, it may also be that the contribution may be
the other way around. In general, for a group of rational agents, unlike the case in spin models, there appears to
be no single global function (such as energy) whose minimization leads to the collective states. Thus, it appears that the traditional
tools of statistical mechanics may be inadequate for describing
situations where the same collective state may have different utilities for each agent. For instance, in PD, agent $1$ choosing C while
agent $2$ chooses D may be the best of all possible outcomes for agent $2$ - but it is the worst of all possible outcomes for agent $1$.
Therefore, while agent $2$ may be desirous of nudging the system to such an outcome, agent $1$ may be just as vehemently trying
to push the system away from such a state. How then would one proceed to model the collective activity of such systems using
the present tools of statistical mechanics? It does appear that we may need a new formulation of statistical mechanics
that applies to the situation outlined above. Thus, it may well turn out that the lasting significance of econophysics will be
in not what it does for economics, but rather in the new, innovative types of physical theories, particularly in statistical physics, that it may spawn.
\begin{acknowledgement}
We thank our collaborator Anupama Sharma, whose joint work with us forms the basis of several ideas discussed above, and
Deepak Dhar, whose
insightful comments had first gotten us intrigued about the relation between strategic games and statistical physics. The research reported
here has been funded in part by the Department of Atomic Energy, Government of India through the grant for Center of Excellence in Complex Systems
and Data Science at IMSc Chennai.
\end{acknowledgement}
\section{Introduction}
\paragraph{Background: previous notions of classicality.}
Studying the non-classicality of quantum mechanics is a field that originated from the collective effort of the scientific community to obtain meaningful interpretations of the ontologically opaque yet undoubtedly successful theory of quantum mechanics.
One of the early influential works highlighting how quantum mechanics departs significantly from classical mechanics was that of Einstein, Podolsky and Rosen \cite{EPR}: there, it was brought to light that local realism, a natural notion of classicality, is in conflict with the quantum description of nature.
Realism means that one posits the existence of a hidden state that should describe the actual physics behind the scenes, the \emph{ontic} (actual) state of the system.
Local realism means that the ontic state cannot be updated from a spacelike-separated spacetime region.
This notion was further studied and turned into an experimentally verifiable \emph{no-go theorem} by Bell \cite{Bell}: the no-go theorem states that quantum mechanics cannot be described by a local hidden variable model.
For the perspective of this \doc{}, it is important to notice that this notion of classicality only applies to spacelike-separated systems, whereas a single quantum system is not eligible to be tested via the prism of local realism.
A natural generalization of local realism is that of non-contextual realism, where the associated classical model is called non-contextual hidden variable model.
This notion of classicality assumes that, at the ontic state level, the outcome statistics of one measurement are 1) statistically independent of the outcome statistics of any other commuting
measurement and 2) non-varying with respect to changing the jointly-measured commuting measurement. This notion was formalized and shown to be inconsistent with quantum mechanics by Kochen and Specker \cite{KS}.
Only commuting measurements may be tested through the prism of non-contextual realism, but possibly on a single quantum system, which was not the case with local realism.
The work of Spekkens \cite{Spekkens05} led to a new notion of non-contextuality that subsumes Bell's local hidden variables and Kochen-Specker non-contextuality.
The assumption of realism is similar to that of the previously mentioned notions of classicality, but the scope of non-contextuality is more universal.
The first step towards formulating an assumption of non-contextuality is to formulate a notion of operational equivalence, such as e.g.\ the operational equivalence of an electron spin-$\frac{1}{2}$ degree of freedom and a photon polarization degree of freedom as two implementations of a qubit.
The corresponding assumption of non-contextuality is to posit that operationally equivalent procedures have an identical representation at the level of the ontic model.
In \cite{Spekkens05}, several no-go theorems are presented to show the incompatibility of quantum mechanics with respect to Spekkens' non-contextuality.
Quantum procedures may be eligible for testing their classicality with respect to Spekkens' non-contextuality irrespective of the existence of commuting measurements.
Furthermore, the incompatibility of quantum mechanics and Spekkens' non-contextuality has known links with computational efficiency of quantum protocols, see e.g.\ \cite{SHVStateDiscrimination,SHVPostSelectedMetrology}, which supplement the existing links between computational efficiency and violations of Kochen-Specker non-contextuality (a special case of Spekkens' non-contextuality), see e.g.\ \cite{NCHVAdv1,NCHVAdv2}.
\paragraph{The objective notion of classicality.}
The present work aims at obtaining a notion of classicality that is applicable to an arbitrary prepare-and-measure scenario and that provides an answer to the question of whether the scenario is classical or not with respect to that notion of classicality.
The prepare-and-measure scenario may consist of all states and measurements allowed by quantum mechanics within a given Hilbert space, but it can also consist of strict subsets of these:
this would be interesting if for instance one has an apparatus that only allows to produce certain types of states or perform certain types of measurements.
Then, one could answer the question of whether this specific apparatus has a classical description or not.
Alternatively, one can associate to a given quantum protocol a corresponding prepare-and-measure scenario that only features the states and measurements relevant for the protocol.
For instance, the set of states of the scenario could be special types of multi-qubit states of a quantum computer that are relevant for a given algorithm.
Then, assessing the classicality of the prepare-and-measure scenario associated to the protocol is an indirect way of assessing the classicality of the protocol itself.
This assessment may help identify resources that are most useful for efficient protocols.
Local realism and non-contextual realism are well-motivated and widely useful notions of classicality, but they do not quite fulfill the requirement that any set of states and set of measurements are eligible for a test of classicality.
Indeed, local realism specializes to local measurements on spacelike separated systems, and non-contextual realism specializes to commuting measurements.
On the other hand, the universality of Spekkens' notion of non-contextuality makes it a promising basis for the formulation of our objective notion of classicality.
\paragraph{Content overview.}
Section \ref{sec:PresentationClassicalModel} will formalize the quantum prepare-and-measure scenario under consideration,
then motivate and define the adjustments to Spekkens' non-contextuality that are to be made; the notion of the reduced space will be introduced for that purpose.
We turn the classicality of a scenario as in \cref{def:classicalmodel} into a basic existence criterion as in \cref{prop:basiccriterion} on page~\pageref{prop:basiccriterion},
and use this result to demonstrate that the reduced space formulation of the classical model implies the equivalence of different
implementations of the scenario
as in \cref{prop:ancilla}.
Then, in \cref{sec:ExistenceClassicalModel}, the classicality of a given prepare-and-measure scenario is turned into the unit separability{} criterion as in \cref{th:maincriterion} on page~\pageref{th:maincriterion}.
This criterion allows one to extract theoretical properties of the classical model, such as the ontic space cardinality bounds of \cref{th:CardinalityBounds} on page \pageref{th:CardinalityBounds}.
Furthermore, an algorithmic formulation that evaluates the criterion for a given scenario is presented in section \ref{sec:algo}. In section \ref{sec:Connection}, parallel independent work treating generalized probabilistic theories is discussed and connected to the content of this \doc.
\section{Presentation of the classical model}
\label{sec:PresentationClassicalModel}
\subsection{The prepare-and-measure scenario}
Let $\mathcal H$ be a finite dimensional Hilbert space corresponding to the quantum system. The set of Hermitian matrices acting on $\mathcal H$ is denoted $\linherm\hil$. $\linherm\hil$ has the structure of a real inner product space of dimension $\dim(\linherm\hil) = \dim(\mathcal H)^2$: its inner product, often referred to as the Hilbert-Schmidt inner product, is defined by $\scal{a}{b}{\linherm\hil} := \tr{\mathcal H}{ab}$ for all $a,b\in\linherm\hil$.
The set of density matrices, i.e.\ positive semi-definite, trace-one Hermitian matrices acting on $\mathcal H$, is denoted $\mathcal S(\mathcal H)$.
The set of quantum effects, i.e.\ positive semi-definite matrices $E$ acting on $\mathcal H$ such that $\id\mathcal H-E$ is also positive semi-definite, is denoted $\mathcal E(\mathcal H)$.
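As a concrete illustration of these objects (a sketch of our own, not part of the formal development), the Hilbert-Schmidt inner product and the defining conditions on states and effects can be checked numerically; all function names below are ours:

```python
import numpy as np

def hs_inner(a, b):
    """Hilbert-Schmidt inner product <a, b> = Tr(a b) on Hermitian matrices."""
    return np.trace(a @ b).real

def is_state(rho, tol=1e-9):
    """Density matrix: Hermitian, positive semi-definite, unit trace."""
    herm = np.allclose(rho, rho.conj().T, atol=tol)
    psd = np.all(np.linalg.eigvalsh(rho) >= -tol)
    return herm and psd and abs(np.trace(rho).real - 1) < tol

def is_effect(E, tol=1e-9):
    """Effect: Hermitian with 0 <= E <= id, i.e. all eigenvalues in [0, 1]."""
    herm = np.allclose(E, E.conj().T, atol=tol)
    ev = np.linalg.eigvalsh(E)
    return herm and np.all(ev >= -tol) and np.all(ev <= 1 + tol)

# Example: the qubit state |+><+| and the effect |0><0|
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
E0 = np.array([[1.0, 0.0], [0.0, 0.0]])
print(hs_inner(plus, E0))  # Born-rule probability 0.5
```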
\begin{definition}
Let $\mathtt s\subseteq\mathcal S(\mathcal H)$ be a non-empty subset of states that is convex.\label{def:available states}
\end{definition}
Physically, any non-convex set $S_1$ of density matrices together with the possibility of taking classical probabilistic mixtures leads to a set of states $S_2$ that is the convex hull of $S_1$, and hence $S_2$ is convex.
\begin{definition}
\label{def:available effects}
Let $\mathtt e\subseteq\mathcal E(\mathcal H)$ be a subset of effects such that
\begin{myitem}
\item $\mathtt e$ is convex;
\item $0_\mathcal H,\id{\mathcal H}\in\mathtt e$;
\item if $E\in\mathtt e$, then there exists a completion $\{E_k\in\mathtt e\}_k$ such that $E + \sum_k E_k = \id\mathcal H$.
\end{myitem}
\end{definition}
The convexity requirement (i) for $\mathtt e$ is motivated by allowing classical probabilistic mixtures of different measurements, see \text{appendix}~\ref{app:qexp} for an explicit example.
The requirement (ii) comes from the fact that the trivial effects should always be allowed and faithfully be represented in the ontological model.
The requirement (iii) reflects the fact that in any practical application, the effects in $\mathtt e$ will come from complete POVM sets.
\Cref{prop:coarsegrainings} will formalize the fact that including or not incoherent coarse-grainings of measurements in $\mathtt e$ does not make a difference for our purposes.
Furthermore, \cref{prop:ancilla} will formalize the fact that distinct but operationally equivalent quantum descriptions of a prepare-and-measure scenario are equivalent as far as the classical model is concerned.
The pair $(\sets,\sete)${} is referred to as an instance of a quantum prepare-and-measure scenario, or just a scenario for brevity.
\subsection{The reduced space}
Since we are primarily concerned with quantum protocols that involve preparing a given state $\rho\in\mathtt s$ and measuring it once with a complete set of effects where each effect $E$ belongs to $\mathtt e$,
the experimental predictions of quantum mechanics for such protocols are entirely encoded in the probabilities $\scal \rho E \linherm\hil$ for all $\rho\in\mathtt s$ and $E\in\mathtt e$.\footnote{One is of course allowed to go beyond this setting by including post-measurement states in the set $\mathtt s$ which is a way to account for multiple consecutive measurements.}
For any set $X\subseteq\linherm\hil$, we denote the linear span of its elements as $\myspan X\subseteq\linherm\hil$, which is the minimal vector subspace that contains $X$.
For any $a\in\linherm\hil$, the projection of $a$ over any vector subspace $\mathcal V \subseteq \linherm\hil$ equipped with an orthonormal basis $\{v_i\in\mathcal V\}_i$ is denoted $\proj{\mathcal V}{a} := \sum_i \scal{v_i}{a}{\linherm\hil}v_i$.
The projection of a set $X\subseteq\linherm\hil$ over $\mathcal V$ is denoted $\proj{\mathcal V}{X} := \{\proj{\mathcal V}{x}:\ x\in X\}.$
\begin{definition}[Reduced space]
\label{def:mainr}
Let
\begin{equation}
\mathcal R := \proj{\myspan{\sete}}{\myspan{\sets}}
\end{equation}
be the \emph{reduced space} associated to the scenario $(\sets,\sete)$. $\mathcal R\subseteq\linherm\hil$ is a vector space that we equip with the inner product inherited from $\linherm\hil$.
\end{definition}
Note that $\dim(\mathcal R)\leq\dim(\linherm\hil) = \dim(\mathcal H)^2$. The main property of the reduced space is the following. See \text{appendix}~\ref{app:mainr} for a proof.
\begin{prop}
\label{prop:reducedscalar}
For all $\rho\in\mathtt s$, for all $E\in\mathtt e$,
\begin{equation}
\scal{\rho}{E}{\linherm\hil} = \scal{\proj{\mathcal R}{\rho}}{\proj{\mathcal R}{E}}{\mathcal R}.
\end{equation}
\end{prop}
\Cref{prop:reducedscalar} shows that we can in fact restrict the analysis of the probabilities associated to $(\sets,\sete)${} to the analysis of all probabilities $\scal{\red\rho}{\red E}{\mathcal R}$ for all $\red\rho\in\proj{\mainr}{\sets}$ and for all $\red E\in\proj{\mainr}{\sete}$.
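The construction of the reduced space and \cref{prop:reducedscalar} can be illustrated numerically. The following sketch uses a toy qubit scenario of our own choosing (diagonal effects, one state with an off-diagonal component); `realvec` maps Hermitian matrices isometrically to real Euclidean vectors so that projections can be computed with ordinary linear algebra:

```python
import numpy as np

def realvec(a):
    """Isometry from Hermitian matrices with the Hilbert-Schmidt inner
    product to real vectors: dot(realvec(a), realvec(b)) = Tr(a b)."""
    v = np.asarray(a, dtype=complex).reshape(-1)
    return np.concatenate([v.real, v.imag])

def orthobasis(vectors, tol=1e-10):
    """Rows form an orthonormal basis of span(vectors), obtained via SVD."""
    _, s, Vt = np.linalg.svd(np.array(vectors))
    return Vt[:np.sum(s > tol)]

def project(v, basis):
    return basis.T @ (basis @ v)

# Toy scenario: a qubit with diagonal effects and three states.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
states = [np.diag([1.0, 0.0]), np.diag([0.25, 0.75]),
          0.5 * np.eye(2) + 0.3 * sx]
E, I = np.diag([0.7, 0.2]), np.eye(2)

B_e = orthobasis([realvec(E), realvec(I)])                    # span(e)
B_R = orthobasis([project(realvec(r), B_e) for r in states])  # R

def pR(a):
    return project(realvec(a), B_R)

for rho in states:  # verify <rho, E> = <P_R rho, P_R E>
    assert abs(np.trace(rho @ E).real - pR(rho) @ pR(E)) < 1e-9
```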
\subsection{Definition of the classical model}
We now motivate the construction of the classical model that we are considering for the scenario $(\sets,\sete)$. This model is largely based on the mathematical description and notation introduced in \cite{Spekkens05}.
While this work was in development, a closely related but slightly more restricted notion of non-contextuality was considered in \cite{Spekkens19} --- see section \ref{sec:SimplexEmbeddability} for differences and similarities in the results.
\subsubsection{Ontic space}
We introduce the notion of an ontic state space, or ontic space for short, denoted $\Lambda$. An ontic state $\lambda\in\Lambda$ is meant to describe a classical state of the system, so that $\Lambda$ can be thought of as a classical phase space that will be assigned to the quantum setup.
Let $\bm{P}$ denote a preparation procedure, i.e.\ a set of operational instructions that fully specify the steps one needs to take to obtain the same preparation. The first idea of the classical model is to associate to each preparation procedure $\bm{P}$ a classical probability distribution, i.e.\ normalizable and non-negative, over $\Lambda$. We refer to these probability distributions as the ontic state distributions.
The ontic state distribution gives the probability $\probcond{\lambda}{\bm{P}}$ that the system is in the ontic state $\lambda$ after having been prepared by the preparation procedure $\bm{P}$.
Let $\bm{M}$ be a measurement procedure with outcomes labeled by $k$. We denote by $\bm{M}_k$ the event that the outcome $k$ occurred when the measurement procedure $\bm{M}$ was carried out. Any operational detail should be included in the specification of $\bm{M}$. In the classical model, the measurements will be represented as classical probability distributions over the outcomes $k$; but these probability distributions, referred to as the response functions, will not depend on the quantum states directly.
Instead, the response functions will ``read off" the value of a given ontic state $\lambda$ to produce the outcome statistics.
The response function is thus represented by the conditional probabilities $\probcond{\bm{M}_k}{\lambda}$.
The actual outcome statistics, given a preparation $\bm{P}$ and an event $\bm{M}_k$, will be the outcome statistics $\probcond{\bm{M}_k}{\lambda}$ averaged over the probability that the system was in the ontic state $\lambda$, which is specified by the ontic state distribution $\probcond{\lambda}{\bm{P}}$:
\begin{equation}
\label{eq:OnticProbRule}
\probcond{\bm{M}_k}{\bm{P}} = \inthv{\probcond{\lambda}{\bm{P}}\probcond{\bm{M}_k}{\lambda}}.
\end{equation}
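For a finite ontic space, equation \eqref{eq:OnticProbRule} reduces to a sum over $\lambda$. A minimal sketch with a two-point ontic space (all distributions below are invented for illustration, not derived from the text):

```python
import numpy as np

# Toy classical model with discrete ontic space Λ = {0, 1}.
mu = {"P": np.array([0.3, 0.7])}          # p(λ | P): ontic state distribution
xi = {"M_0": np.array([0.9, 0.2]),        # p(M_k | λ): response functions
      "M_1": np.array([0.1, 0.8])}

def outcome_prob(P, Mk):
    """Discrete version of the ontic probability rule: sum over λ."""
    return float(np.dot(mu[P], xi[Mk]))

p0 = outcome_prob("P", "M_0")   # 0.3*0.9 + 0.7*0.2 = 0.41
p1 = outcome_prob("P", "M_1")   # 0.3*0.1 + 0.7*0.8 = 0.59
```

Note that the response functions sum to one for every fixed $\lambda$, so the outcome probabilities are automatically normalized.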
\subsubsection{Non-contextual state representation}
In complete generality, the probability $\probcond{\lambda}{\bm{P}}$ could depend on any detail of the preparation procedure $\bm{P}$.
This is not very satisfactory: we know from quantum mechanics that all possible measurement statistics are uniquely determined from the density matrix $\rho(\bm{P})$ associated to the preparation procedure $\bm{P}$.
The standard assumption of non-contextuality that would prevail here was introduced by Spekkens in \cite{Spekkens05}.
There, it is justified that any detail of the preparation procedure $\bm{P}$ which is not reflected in the density matrix $\rho(\bm{P})$ is part of the context.
The corresponding assumption of non-contextuality is that the non-contextual ontic state distribution only depends on the density matrix $\rho(\bm{P})$: thus, we make the replacement
\begin{equation}
\probcond{\lambda}{\bm{P}} \rightarrow \probcond{\lambda}{\rho(\bm{P})}.
\end{equation}
For example, in the case of a mixed quantum state, the ontic state distribution associated with that quantum state does not depend on which ensemble decomposition the mixed state may have originated from.
Another example is the case where a mixed state originated from the partial trace of a pure entangled state on a larger Hilbert space: the ontic state distribution does not distinguish among the different purifications.
In the setup considered here, the only states available are in the set $\mathtt s$, so that it would be reasonable to require that there exists a valid ontic state distribution $\probcond{\lambda}{\rho}$ for any $\rho\in\mathtt s$, without requiring anything else for the other quantum states in $\mathcal S(\mathcal H) \setminus \mathtt s$.
However, we argue that this is still too permissive given that the only measurements available are those taken out of the set $\mathtt e$, and we would like to posit a generalized notion of non-contextuality. Indeed, it is clear from \cref{prop:reducedscalar} that any detail of the preparation procedure that is reflected in $\rho\in\mathtt s$ but that is not reflected in the reduced density matrix $\proj{\mathcal R}{\rho}$ will not be resolved by the available measurement resource $\mathtt e$ and is thus part of a context.
Our generalized notion of non-contextuality, following the guiding principles of \cite{Spekkens05}, is that the ontic state distribution only depends on $\proj{\mathcal R}{\rho}$, i.e.\ we make the further replacement
\begin{equation}
\probcond{\lambda}{\rho} \rightarrow \probcond{\lambda}{\proj\mathcal R\rho}.
\end{equation}
The conventional label for the ontic state distribution is $\mu$ \cite{Spekkens05}: for all $\lambda\in\Lambda$, for all $\rho\in\mathtt s$,
\begin{equation}
\ostate{\proj{\mathcal R}{\rho}}{\lambda} := \probcond{\lambda}{\proj{\mathcal R}{\rho}}.
\end{equation}
This means that $\mu$ has the following domain:
\begin{subequations}
\begin{equation}
\label{eq:mumapping}
\mu\funcrange{\proj{\mainr}{\sets}\times \Lambda}{\mathbb R}.
\end{equation}
The normalization and non-negativity of the probability distributions read
\begin{align}
\forall \red\rho\in\proj{\mainr}{\sets}:\ && \inthv{\ostate{\red\rho}{\lambda}} &= 1, \label{eq:NormMu}\\
\forall\lambda\in\Lambda,\forall \red\rho\in\proj{\mainr}{\sets}:\ && \ostate{\red\rho}{\lambda} &\geq 0. \label{eq:basicmupos}
\end{align}
It is also reasonable to require that the ontic state distribution mapping represents classical probabilistic mixtures of quantum states by classical probabilistic mixtures of ontic states. This is formulated as a convex-linearity requirement of the form:
\begin{multline*}
\forall\lambda\in\Lambda,\forall p\in[0,1],\forall \red\rho_1,\red\rho_2\in\proj{\mainr}{\sets}:\ \\
\end{multline*}
\vspace{-1.4cm}
\begin{multline}
\ostate{p\red\rho_1 + (1-p)\red\rho_2}{\lambda} \\
= p \ostate{\red\rho_1}{\lambda}
+ (1-p)\ostate{\red\rho_2}{\lambda}. \label{eq:convexlinearityrho}
\end{multline}
\end{subequations}
\subsubsection{Non-contextual measurement representation}
As previously stated, the response function distribution $\probcond{\bm{M}_k}{\lambda}$ could in principle depend on all operational details of $\bm{M}$.
The notion of non-contextuality that would prevail here \cite{Spekkens05} would be that the response function distribution does not depend on more than the POVM $\{E_k(\bm{M}_k)\}_k$ associated to the measurement procedure $\bm{M}$.
Thus, we make the replacement
\begin{equation}
\label{eq:ResponseFunctionPOVM}
\probcond{\bm{M}_k}{\lambda} \rightarrow \probcond{E_k(\bm{M}_k)}{\lambda}.
\end{equation}
This is motivated by the fact that in quantum mechanics, two distinct measurement procedures which lead to the same POVM are equivalent with respect to the statistics that are produced upon measuring any state. Equation \eqref{eq:ResponseFunctionPOVM} implies that the response function does not resolve whether a POVM originated from a coarse-graining of a finer POVM; nor does it resolve whether the POVM originated from tracing out the result of a projective measurement on a larger Hilbert space.
In our setup where all available POVM elements belong to $\mathtt e$, it is reasonable to require that there exists a valid response function $\probcond{E}{\lambda}$ for all $E\in\mathtt e$,
irrespective of what is predicted for other quantum effects in $\mathcal E(\mathcal H)\setminus\mathtt e$.
This is however too permissive: consider distinct quantum effects $E_1,E_2\in\mathtt e$. Given the set $\mathtt s$ and \cref{prop:reducedscalar}, it could be that $\proj{\mathcal R}{E_1} = \proj{\mathcal R}{E_2}$ so that the effects are indistinguishable in this setup.
Thus, the part of a quantum effect $E\in\mathtt e$ which is not reflected in $\proj{\mathcal R}{E}$ is part of a new kind of context, and we make the further replacement
\begin{equation}
\probcond{E}{\lambda} \rightarrow \probcond{\proj{\mathcal R}{E}}{\lambda}.
\end{equation}
The mapping that associates a response function to each quantum effect is denoted $\xi$, following the notation of \cite{Spekkens05}: for all $\lambda\in\Lambda$, for all $E\in\mathtt e$,
\begin{equation}
\orep{\proj{\mathcal R}{E}}{\lambda} :=
\probcond{\proj{\mathcal R}{E}}{\lambda}.
\end{equation}
The domain of $\xi$ is then:
\begin{subequations}
\begin{equation}
\label{eq:ximapping}
\xi\funcrange{\proj{\mainr}{\sete}\times\Lambda}{\mathbb R}.
\end{equation}
The explicit normalization and non-negativity are imposed as follows:
\begin{multline}
\forall \lambda\in\Lambda,\forall K\in\mathbb N\cup\{+\infty\}, \\
\forall \left\{ E_k\in\mathtt e:\ \textstyle{\sum_{k=1}^K E_k} = \id\mathcal H\right\}_{k=1}^K: \\
\textstyle \sum_{k=1}^K \orep{\proj{\mathcal R}{E_k}}{\lambda} = 1, \label{eq:explicitnormalizationxi}
\end{multline}
\vspace{-0.7cm}
\begin{align}
\label{eq:basicxipos}
&\forall\lambda\in\Lambda,\forall \red E\in\proj{\mainr}{\sete}:\ &&&&&&&& \orep{\red E}{\lambda} \geq 0.
\end{align}
In addition to the properties already specified, the response function mapping should represent classical probabilistic mixtures of quantum effects as classical probabilistic mixtures of response functions:
\begin{multline*}
\forall\lambda\in\Lambda,\forall p\in[0,1],\forall \red E_1,\red E_2\in\proj{\mainr}{\sete}:\ \\
\end{multline*}
\vspace{-1.3cm}
\begin{multline}
\label{eq:convexlinearityxi}
\orep{p\red E_1 + (1-p)\red E_2}{\lambda} \\
= p \orep{\red E_1}{\lambda}
+ (1-p)\orep{\red E_2}{\lambda}.
\end{multline}
\end{subequations}
We are now able to formulate our definition of the generalized non-contextual ontological model.
For brevity, we will simply use the term ``classical model" in this \doc, although this is of course one specific definition of classicality that is by no means the only choice.
\begin{definition}[Classical model]
\label{def:classicalmodel}
The classical model for $(\sets,\sete)${} is specified as follows.
Let $\mu$ be the ontic state mapping that has domain \eqref{eq:mumapping} and that satisfies \eqref{eq:NormMu}, \eqref{eq:basicmupos}, and \eqref{eq:convexlinearityrho}. Let $\xi$ be the response function mapping that has domain \eqref{eq:ximapping} and that satisfies \eqref{eq:explicitnormalizationxi}, \eqref{eq:basicxipos}, and \eqref{eq:convexlinearityxi}.
The classical model is required to reproduce the statistics that quantum mechanics predicts for the available states and measurements. Using \cref{prop:reducedscalar} to write down the probability in quantum mechanics and equation \eqref{eq:OnticProbRule} to write down the probability in the classical model, this requirement may be formulated in the reduced space $\mathcal R$ as follows:
\begin{multline}
\label{eq:consistmuxi}
\forall\red \rho\in\proj{\mainr}{\sets},\forall \red E\in\proj{\mainr}{\sete}:\
\\
\scal{\red\rho}{\red E}{\mathcal R}
= \inthv{\ostate{\red \rho}{\lambda}\orep{\red E}{\lambda}}.
\end{multline}
\end{definition}
The next proposition shows that the added requirement that the ontological primitives only depend on the projection over the reduced space of the states and effects is a non-trivial strengthening of the constraint of non-contextuality. A proof is given in \text{appendix}~\ref{app:mainr}.
\begin{restatable}{prop}{PropBadClassicalModel}
\label{prop:badclassicalmodel}
We let \cref{def:classicalmodel} under the replacement of
$$
\hspace{1.15cm}P_\mathcal R\funcrange{\linherm\hil}{\mathcal R\subseteq\linherm\hil}
$$
with
$$
\id{\linherm\hil}\funcrange{\linherm\hil}{\linherm\hil}
$$
be the definition of a ``partially contextual ontological model". Then, it holds that
\begin{myitem}
\item any $(\sets,\sete)${} that admits a
classical model (\cref{def:classicalmodel})
also admits a partially contextual ontological model;
\item there exists $(\sets,\sete)${} that admits a partially contextual ontological model but no
classical model.
\end{myitem}
\end{restatable}
The proof uses a generic construction for $(\sets,\sete)${} as in (ii) starting from any scenario that does not admit a classical model: intuitively, one can think of a scenario where the states are labeled classically but the operational effects ignore these labels. However, the existence of these labels allows the construction of a partially contextual ontological model.
\subsection{Structure of the classical model}
Let us now use the properties of the classical model to derive basic results related to its structure which will be useful for our later endeavors. The following proposition is proven in \text{appendix}~\ref{app:LinExt}, and is motivated by the analysis of the no-go theorem developed in \cite{Spekkens08}.
\begin{restatable}{prop}{PropExtensions}
\label{prop:extensions}
Let $\lambda\in\Lambda$ be arbitrary. Starting from the convex-linear mappings
\begin{subequations}
\begin{align}
\ostate{\cdot}{\lambda}\funcrange{\proj{\mainr}{\sets}}{\mathbb R_{\geq 0}}, \\
\orep{\cdot}{\lambda}\funcrange{\proj{\mainr}{\sete}}{\mathbb R_{\geq 0}},
\end{align}
\end{subequations}
there exist unique linear extensions
\begin{subequations}
\begin{align}
\ostate{\cdot}{\lambda}\funcrange{\mathcal R}{\mathbb R},\\
\orep{\cdot}{\lambda}\funcrange{\mathcal R}{\mathbb R}.
\end{align}
\end{subequations}
\end{restatable}
Following \cite{Spekkens08}, we may apply Riesz' representation theorem, stated in \text{appendix}~\ref{app:LinExt}, for any fixed $\lambda\in\Lambda$ to obtain that there exist unique $F(\lambda)\in\mathcal R$, $\sigma(\lambda)\in\mathcal R$ such that for all $\lambda\in\Lambda$, $r\in\mathcal R$:
\begin{subequations}
\label{eq:OnticScalarReprs}
\begin{align}
\ostate{r}{\lambda} &= \scal{r}{F(\lambda)}{\mathcal R} \\
\orep{r}{\lambda} &= \scal{\sigma(\lambda)}{r}{\mathcal R}.
\end{align}
\end{subequations}
We will express the non-negativity requirements of \eqref{eq:basicmupos} and \eqref{eq:basicxipos} using the notion of the polar convex cone.
\begin{definition}[Polar convex cone]
\label{def:Polar}
For any real inner product space $\mathcal V$ of finite dimension, for any $X\subseteq \mathcal V$, the \emph{polar} convex cone\footnote{See \text{appendix}~\ref{app:convex} for definitions of convex and conic sets.} $\polar{X}{\mathcal V}$ is defined as
\begin{equation}
\polar{X}{\mathcal V} := \big\{y\in\mathcal V:\ \forall x\in X,\ \scal{x}{y}{\mathcal V}\geq 0\big\}.
\end{equation}
\end{definition}
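For a cone generated by finitely many vectors, membership in the polar cone can be tested on the generators alone, since the defining inequality is linear in $x$. A small numerical sketch (the example vectors are ours):

```python
import numpy as np

def in_polar_cone(y, generators, tol=1e-9):
    """Check y ∈ X° for X = cone(generators): <x, y> >= 0 for all x in X.
    It suffices to test the generators because the inequality is linear
    and sign-preserving under conic combinations."""
    return all(np.dot(x, y) >= -tol for x in generators)

# Example in R^2: X generated by (1, 0) and (1, 1).
gens = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
print(in_polar_cone(np.array([0.0, 1.0]), gens))    # True
print(in_polar_cone(np.array([-1.0, 0.0]), gens))   # False
```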
We may now formulate the following theorem which links the existence of the classical model to the existence of specific mathematical primitives. The proof is presented in \text{appendix}~\ref{app:BasicCriterion}. Such a representation is a generalization of the frame representation of quantum mechanics introduced in \cite{Ferrie08}.
\begin{restatable}[Basic classicality criterion]{theorem}{ThBasicCriterion}
\label{prop:basiccriterion}
Given $(\sets,\sete)${} that lead to the reduced space $\mathcal R$ (\cref{def:mainr}), there exists a classical model with ontic state space $\Lambda$ (\cref{def:classicalmodel}) if and only if there exist mappings $F$, $\sigma$ with ranges
\begin{subequations}
\label{eq:simpleranges}
\begin{align}
F\funcrange{\Lambda}{\polar{\proj{\mainr}{\sets}}{\mathcal R}}, \label{eq:RangeF}\\
\sigma\funcrange{\Lambda}{\polar{\proj{\mainr}{\sete}}{\mathcal R}}, \label{eq:RangeSigma}
\end{align}
\end{subequations}
satisfying the normalization condition
\begin{equation}
\forall \lambda\in\Lambda:\
\otrace{\sigma(\lambda)} = 1 \label{eq:normsigma}
\end{equation}
as well as the consistency requirement: for all
$ r,s\in\mathcal R,$
\begin{equation}
\label{eq:consistency}
\scal{r}{s}{\mathcal R} = \inthv{\scal{r}{F(\lambda)}{\mathcal R}
\scal{\sigma(\lambda)}{s}{\mathcal R}}.
\end{equation}
\end{restatable}
This theorem will in particular prove useful to determine the unit separability{} criterion in the next section. For completeness, as proven in \text{appendix}~\ref{app:BasicCriterion}, we have the alternative expressions $\polar{\reds}{\mainr} = \mathcal R\cap(\bigpolar{\mathtt s}{\linherm\hil})$, and $\polar{\rede}{\mainr} = \mathcal R\cap(\bigpolar{\mathtt e}{\linherm\hil})$.
Furthermore, equation \eqref{eq:normsigma} is equivalent to a trace condition since $\otrace{\sigma(\lambda)} = \tr{\mathcal H}{\sigma(\lambda)}$.
We are now in a position to easily give a prescription for including or not coarse-grained effects in the set $\mathtt e$: namely, it does not make any difference. A proof is given in \text{appendix}~\ref{app:BasicCriterion}.
\begin{restatable}[Incoherent coarse-grainings]{prop}{PropCoarseGrainings}
\label{prop:coarsegrainings}
Consider any prepare-and-measure scenario $(\sets,\sete)${}. Suppose that there exist $\{E_k\in\mathtt e\}_{k=1}^N$ where $N \in \mathbb N \cup \{+\infty\}$ such that $\sum_{k=1}^N E_k \leq \id\mathcal H$; the sum $\sum_{k=1}^N E_k$ need not be in $\mathtt e$. Then, the scenario $(\sets,\sete)${} and the extended scenario
$$
\Big(\mathtt s,\mathtt e_{\textup{ext}} :=
\textup{conv}\big(\mathtt e\cup\big\{\textstyle\sum_{k=1}^N E_k\big\}\big)\Big)
$$
define the same reduced space $\mathcal R = \proj{\myspan{\sete}}{\myspan{\sets}} = \proj{\myspan{\mathtt e_{\textup{ext}}}}{\myspan{\sets}}$ and same polar cone $\polar{\rede}{\mainr} = \polar{\proj{\mathcal R}{\mathtt e_\textup{ext}}}{\mathcal R}$.
\end{restatable}
This proves that the scenarios $(\sets,\sete)${} and $(\mathtt s,\mathtt e_\textup{ext})$ are completely equivalent as inputs to \cref{prop:basiccriterion}. In particular, any classical model for any one of these two scenarios is valid for the other too.
Furthermore, one can easily apply \cref{prop:coarsegrainings} recursively to account for the inclusion of multiple coarse-grained effects.
It is also possible to prove the following proposition, which establishes the equivalence of the classical models for a given scenario $(\sets,\sete)${} and another one, $(\tilde\mathtt s,\tilde\mathtt e)$, that produces the same operational statistics as $(\sets,\sete)${}.
This proves that it does not matter whether e.g.\ one chooses to use a larger Hilbert space including an ancilla so that the states are purified and/or the measurements are implemented as projective measurements in accordance with Naimark's theorem. The proof is given in \text{appendix}~\ref{app:BasicCriterion}.
\begin{restatable}{prop}{PropAncilla}
\label{prop:ancilla}
Let $\mathcal H$ and $\tilde\mathcal H$ denote two potentially different finite dimensional Hilbert spaces. Let $I$ be a discrete or continuous range of indices. Consider two scenarios, both assumed to verify \cref{def:available states,def:available effects}:
\begin{align*}
\Big(\mathtt s &= \myconv{\{\rho_k\in\linherm\hil\}_{k\in I}}, \\
\mathtt e &= \myconv{\{E_k\in\linherm\hil\}_{k\in I}}\Big)
\end{align*}
and
\begin{align*}
\Big(
\tilde\mathtt s &= \textup{conv}\big(\{\tilde\rho_k\in\linherm{\tilde\mathcal H}\}_{k\in I}\big), \\
\tilde\mathtt e &= \textup{conv}\big(\{\tilde E_k\in\linherm{\tilde\mathcal H}\}_{k\in I}\big)
\Big)
\end{align*}
that satisfy, for all $k,l\in I$,
\begin{equation}
\tr{\mathcal H}{\rho_k E_l} = \textup{Tr}_{\tilde\mathcal H}\big[\tilde \rho_k \tilde E_l\big].
\label{eq:ancillastats}
\end{equation}
Then, these two scenarios define reduced spaces $\mathcal R = \proj{\myspan{\sete}}{\myspan{\sets}}$ and $\tilde\mathcal R = P_{\textup{span}(\tilde\mathtt e)}\big(\textup{span}(\tilde\mathtt s)\big)$ that have the same dimension, and the scenario $(\sets,\sete)${} admits a classical model if and only if the scenario $(\tilde\mathtt s,\tilde\mathtt e)$ does.
In fact, if one of the two scenarios admits a classical model with a certain ontic space $\Lambda$, so does the other with the same ontic space $\Lambda$.
\end{restatable}
We will return to the computational equivalence of working with either version of the prepare-and-measure scenario in \cref{prop:ancillacomp}.
\section{Unit separability{} and cardinality bounds}
\label{sec:ExistenceClassicalModel}
In this section, we will derive a more powerful criterion, referred to as unit separability, for the existence of a classical model.
This criterion was inspired by the no-go theorems of \cite{Ferrie08,Spekkens08}.
The two main notions that will be introduced are a notion of generalized separability
as well as a notion of a generalized Choi-Jamio\l kowsky{} isomorphism, providing the means to reformulate the consistency of the classical model with respect to the predictions of quantum mechanics.
\subsection{Mathematical preliminaries}
\subsubsection{Generalized separability}
Consider the tensor product space $\mathcal R\otimes\mathcal R$ with $\mathcal R$ as in \cref{def:mainr}. It is a real inner product vector space; its inner product is defined for product operators as follows: for all $a,b,x,y\in\mathcal R$,
\begin{equation}
\label{eq:ScalarTensor}
\scal{a\otimes b}{x\otimes y}{\mathcal R\otimes\mathcal R}
:= \scal{a}{x}{\mathcal R}\scal{b}{y}{\mathcal R}.
\end{equation}
To obtain the complete inner product, extend this expression by linearity.
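In coordinates with respect to an orthonormal basis of $\mathcal R$, relation \eqref{eq:ScalarTensor} becomes the Euclidean inner product of Kronecker products, which is easy to check numerically (a sketch with random coordinate vectors):

```python
import numpy as np

# For coordinate vectors a, b, x, y, the tensor-product inner product
# <a⊗b, x⊗y> factorizes as <a, x> <b, y>.
rng = np.random.default_rng(0)
a, b, x, y = (rng.standard_normal(3) for _ in range(4))

lhs = np.dot(np.kron(a, b), np.kron(x, y))
rhs = np.dot(a, x) * np.dot(b, y)
assert abs(lhs - rhs) < 1e-9
```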
Then, we define the following two sets, which are of central importance:
\begin{definition}[Generalized product operators]
\label{def:prodes}
The set of generalized product operators is defined to be
\begin{multline}
\mathtt{Prod}(\sets,\sete) :=
\{F\otimes \sigma\in \mathcal R\otimes\mathcal R:\
F\in\polar{\proj{\mainr}{\sets}}{\mathcal R}, \\
\sigma\in\polar{\proj{\mainr}{\sete}}{\mathcal R}
\}.
\end{multline}
\end{definition}
Recall that the convex hull $\myconv{X}$ of a set $X$ is the set of all convex combinations of finitely many elements of $X$, as defined in \text{appendix}~\ref{app:convex}.
\begin{definition}[Generalized separable operators]
\label{def:sepes}
The set of generalized separable operators is defined to be
\begin{equation}
\mathtt{Sep}(\sets,\sete) := \myconv{\mathtt{Prod}(\sets,\sete)}.
\end{equation}
\end{definition}
Referring to the definitions introduced in \text{appendix}~\ref{app:convex}, $\mathtt{Prod}(\sets,\sete)$ is a cone and $\mathtt{Sep}(\sets,\sete)$ is a convex cone. More details on the structure of $\mathtt{Prod}(\sets,\sete)$ and $\mathtt{Sep}(\sets,\sete)$ are presented in \text{appendix}~\ref{app:sepes}.
\subsubsection{Choi-Jamio\l kowsky{} isomorphism}
We will make use of a simple generalization of the Choi-Jamio\l kowsky{} isomorphism \cite{Jamiolkowsky72}.
Let $\lin\mathcal R$ be the space of linear maps from $\mathcal R$ to $\mathcal R$. The Choi-Jamio\l kowsky{} isomorphism maps each linear map in $\lin\mathcal R$ to an element of $\mathcal R\otimes\mathcal R$.
\begin{definition}
\label{def:Choi}
For any $\Phi\in\lin\mathcal R$, let $\choi\Phi\in\mathcal R\otimes\mathcal R$ be the Choi-Jamio\l kowsky{} operator defined uniquely by the relations
\begin{equation}
\label{eq:ChoiRelation}
\forall r,s\in\mathcal R:\
\scal{r}{\Phi(s)}{\mathcal R} = \scal{\choi\Phi}{r\otimes s}{\mathcal R\otimes\mathcal R}.
\end{equation}
\end{definition}
The proofs of uniqueness and bijectivity, as well as explicit coordinate solutions, are derived in \text{appendix}~\ref{app:choi}. The following lemma is also proven in \text{appendix}~\ref{app:choi}:
\begin{lemma}
\label{prop:choiid}
For any orthonormal basis $\{R_i\in\mathcal R\}_{i=1}^{\dim(\mathcal R)}$ of $\mathcal R$\emph{:}
\begin{equation}
\choi{\id\mathcal R} = \sum_{i=1}^{\dim(\mathcal R)} R_i\otimes R_i.
\end{equation}
\end{lemma}
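In coordinates with respect to an orthonormal basis $\{R_i\}$, the lemma says that $\choi{\id\mathcal R}$ has coordinate vector $\sum_i e_i\otimes e_i$, where $e_i$ are the standard basis vectors. The defining relation \eqref{eq:ChoiRelation} for $\Phi = \id\mathcal R$ can then be verified numerically (a sketch in coordinates, with the dimension chosen arbitrarily):

```python
import numpy as np

d = 3  # dim(R); work in coordinates w.r.t. an orthonormal basis {R_i}
J = sum(np.kron(e, e) for e in np.eye(d))   # J(id_R) = Σ_i R_i ⊗ R_i

# Defining relation for Φ = id_R: <r, s> = <J(id_R), r ⊗ s>
rng = np.random.default_rng(1)
r, s = rng.standard_normal(d), rng.standard_normal(d)
assert abs(np.dot(r, s) - np.dot(J, np.kron(r, s))) < 1e-9
```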
\subsection{The unit separability{} criterion}
Starting from \cref{prop:basiccriterion}, we may now derive an alternative criterion for the existence of a classical model. First, we make an assumption for the types of ontic spaces that we are considering.
\begin{definition}[Riemann integrable classical model]
\label{def:RiemannIntegrableModel}
A classical model with ontic space $\Lambda$ as introduced in \cref{def:classicalmodel} is \emph{Riemann integrable} if and only if there exist
\begin{subequations}
\begin{align}
\Delta&\funcrange{\mathbb N\times\mathbb N}{\mathbb R_{\geq 0}}, \label{eq:RangeDelta} \\
\hv^{\textup{(dis)}}&\funcrange{\mathbb N\times\mathbb N}{\Lambda},
\end{align}
\end{subequations}
such that
\begin{multline}
\label{eq:RiemannSum}
\forall \red\rho\in\proj{\mainr}{\sets},\forall \red E\in\proj{\mainr}{\sete}:\
\inthv{\ostate{\red\rho}{\lambda}\orep{\red E}{\lambda}} \\
=
\lim_{N\rightarrow\infty}
\sum_{k=1}^N \Delta_{N,k}\,
\mu\big(\red\rho,\hv^{\textup{(dis)}}_{N,k}\big)
\xi\big(\red E,\hv^{\textup{(dis)}}_{N,k}\big).
\end{multline}
\end{definition}
$N$ can be thought of as the number of subsets that form a discrete partition of $\Lambda$, while $k$ is a discrete index running over all such subsets and $\lambda^{\text{(dis)}}\in\Lambda$ is a value in that subset of $\Lambda$.
Riemann integrable classical models include in particular:
\begin{myitem}
\item classical models equipped with discrete, finite ontic spaces $\Lambda$, which means that $\Lambda$ is isomorphic to $\{1,\dots,N\}$ for some $N\in\mathbb N$;
\item classical models equipped with discrete, countable infinite ontic spaces $\Lambda$, which means that $\Lambda$ is isomorphic to $\mathbb N$;
\item classical models equipped with a continuous ontic space $\Lambda$ isomorphic to $\mathbb R^d$ for some $d\in\mathbb N$ such that for all $\red\rho\in\proj{\mainr}{\sets}$, $\red E\in\proj{\mainr}{\sete}$, the real function $\ostate{\red\rho}{\cdot}\orep{\red E}{\cdot}\funcrange{\mathbb R^d}{\mathbb R_{\geq 0}}$ is Riemann integrable. Such classical models are reasonable physically because they may be seen as describing a system with finitely many continuous degrees of freedom such as position and momentum of finitely many particles.
\end{myitem}
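As a numerical illustration of \cref{def:RiemannIntegrableModel}, the following sketch approximates the ontic integral by the discrete sums of \eqref{eq:RiemannSum} on the toy ontic space $\Lambda = [0,1]$; the function \texttt{mu\_xi} is a hypothetical stand-in for the product $\mu(\red\rho,\lambda)\,\xi(\red E,\lambda)$.

```python
# Toy Riemann-sum approximation on the ontic space Lambda = [0, 1].
# mu_xi is a hypothetical stand-in for mu(rho, lambda) * xi(E, lambda).
def mu_xi(lam):
    return 3.0 * lam ** 2  # non-negative density; integrates to 1 on [0, 1]

def riemann_sum(n):
    """Discrete sum with N = n equal cells: Delta_{N,k} = 1/N and
    lambda^(dis)_{N,k} = midpoint of the k-th cell."""
    delta = 1.0 / n
    return sum(delta * mu_xi((k + 0.5) * delta) for k in range(n))

# The sums converge to the exact ontic integral (here equal to 1) as N grows:
print(riemann_sum(10), riemann_sum(1000))
```

The convergence of such discrete sums to the ontic integral is exactly what \eqref{eq:RiemannSum} demands of this toy model.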
\begin{restatable}[Main theorem: unit separability]{theorem}{ThMainCriterion}
\label{th:maincriterion}
The prepare-and-measure scenario $(\sets,\sete)${} admits a Riemann integrable classical model (\cref{def:RiemannIntegrableModel}) if and only if:
\begin{equation}
\label{eq:ChoiInSepes}
\choi{\id\mathcal R} \in\mathtt{Sep}(\sets,\sete).
\end{equation}
\end{restatable}
This formulation is useful because it allows one to derive properties of the classical model when it exists: the main theoretical application is described in section \ref{sec:cardinality} where the cardinality of the ontic space $\card\Lambda$ is shown to be constrained by the dimension of the reduced space $\mathcal R$.
Note that $\choi{\id\mathcal R}$ is easy to compute, whereas $\mathtt{Sep}(\sets,\sete)$ is harder to characterize. Still, well-known algorithmic results from convex analysis make the separability criterion decidable as described in section \ref{sec:algo}.
\begin{proof}[Proof overview]
The complete proof is given in \text{appendix}~\ref{app:criterion}. In one direction, the goal is to show that if there exists a classical model for $(\sets,\sete)$, then the ontic mappings $F$ and $\sigma$ from \cref{prop:basiccriterion} satisfy:
\begin{equation}
\label{eq:integralhv}
\choi{\id\mathcal R} = \inthv{F(\lambda)\otimes\sigma(\lambda)}.
\end{equation}
The assumption of Riemann integrability then makes it possible to prove that \eqref{eq:integralhv} implies $\choi{\id\mathcal R}\in\mathtt{Sep}(\sets,\sete)$.
For the other direction, the idea is to show that if $\choi{\id\mathcal R}\in\mathtt{Sep}(\sets,\sete)$ holds, then there exists a decomposition of the form
\begin{equation}
\choi{\id\mathcal R} = \sum_{i=1}^n F_i\otimes\sigma_i
\end{equation}
for $F_i\in\polar\proj{\mainr}{\sets}\mathcal R$ and $\sigma_i\in\polar\proj{\mainr}{\sete}\mathcal R$
which yields a valid Riemann integrable model that computes the quantum statistics as follows: for all $\red\rho\in\proj{\mainr}{\sets}$, $\red E\in\proj{\mainr}{\sete}$,
\begin{gather}
\scal{\red\rho}{\red E}{\mathcal R} =
\sum_{i=1}^n \scal{\red\rho}{F_i}{\mathcal R}\scal{\sigma_i}{\red E}{\mathcal R}. \qedhere
\end{gather}
\end{proof}
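The last display can be checked on a toy example: in an orthonormal basis of $\mathcal R$, $\choi{\id\mathcal R}$ has coordinates $\sum_j b_j\otimes b_j$, so any such decomposition reproduces the pairing $\scal{\red\rho}{\red E}{\mathcal R}$. The two-dimensional vectors below are ad hoc and purely illustrative (in particular, no polar-cone membership is enforced).

```python
import math

# In an orthonormal basis of a 2-dimensional reduced space,
# Choi(id) = sum_j b_j (x) b_j.  Any decomposition
# Choi(id) = sum_i F_i (x) sigma_i then reproduces
# <rho, E> = sum_i <rho, F_i> <sigma_i, E>.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

rho, E = (0.3, 0.7), (0.5, 0.2)          # arbitrary reduced-space vectors

# decomposition 1: the standard basis
F1 = sigma1 = [(1.0, 0.0), (0.0, 1.0)]
# decomposition 2: a rotated orthonormal basis (same Choi vector)
c, s = math.cos(0.4), math.sin(0.4)
F2 = sigma2 = [(c, s), (-s, c)]

direct = dot(rho, E)
model1 = sum(dot(rho, F1[i]) * dot(sigma1[i], E) for i in range(2))
model2 = sum(dot(rho, F2[i]) * dot(sigma2[i], E) for i in range(2))
assert abs(model1 - direct) < 1e-12 and abs(model2 - direct) < 1e-12
```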
\subsection{Ontic space cardinality}
\label{sec:cardinality}
In this section, we will show two simple bounds for the cardinality $\card\Lambda$ of the ontic state space. For our purposes, it suffices to distinguish two cases: either $\card\Lambda < \infty$, which means that $\Lambda$ is a finite set consisting of $\card\Lambda$ elements, or $\card\Lambda = \infty$, which means that $\Lambda$ is countably or uncountably infinite.
Then, one can show lower and upper bounds for the size of the ontic space, as in the following theorem. The proof is given in \text{appendix}~\ref{app:card}.
\begin{restatable}[Ontic space cardinality bounds]{theorem}{ThCardinalityBounds}
\label{th:CardinalityBounds}
For any $(\sets,\sete)${} that admit a classical model with ontic state space $\Lambda$, it holds that either $\Lambda$ is an infinite set, or it is discrete and satisfies
\begin{equation}
\dim(\mathcal R) \leq \card\Lambda.
\end{equation}
Furthermore, if $(\sets,\sete)${} admit a Riemann integrable classical model (\cref{def:RiemannIntegrableModel}), there exists a classical model for $(\sets,\sete)${} with discrete ontic space $\Lambda_{\textup{min}}$ which satisfies
\begin{equation}
\dim(\mathcal R) \leq \card{\Lambda_{\textup{min}}} \leq \dim(\mathcal R)^2 \leq\dim(\linherm\hil)^2.\label{eq:bounds on ontic space}
\end{equation}
\end{restatable}
Recall that $\dim(\linherm\hil) = \dim(\mathcal H)^2$: the dimension of the quantum Hilbert space thus plays an important role in determining the maximal cardinality of the ontic space.
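For orientation, here is the arithmetic of \cref{th:CardinalityBounds} instantiated for a hypothetical qubit scenario whose reduced space has full dimension:

```python
# Hypothetical instantiation of the cardinality bounds for a qubit.
dim_H = 2                # Hilbert space dimension
dim_herm = dim_H ** 2    # dim Herm(H) = dim(H)^2 = 4
dim_R = dim_herm         # assume the reduced space has full dimension
lower = dim_R            # dim(R) <= |Lambda_min|
upper = dim_R ** 2       # |Lambda_min| <= dim(R)^2 <= dim Herm(H)^2 = 16
print(lower, upper)      # -> 4 16
```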
Furthermore, \cref{prop:coarsegrainings,prop:ancilla} together with \cref{th:CardinalityBounds} prove that the bounds on the ontic space cardinality and the minimal ontic space cardinality are invariant under the addition of incoherent coarse-grainings to the scenario, as well as under a change of representation of the operational (quantum) primitives.
\paragraph*{Addendum:} after the first version of this work was released, related bounds on the number of ontic states in ontological models of operational theories were produced under different assumptions. The work of~\cite{shahandeh_contextuality_2020} is also concerned with prepare-and-measure scenarios, and our $(\sets,\sete)${} would be termed a subGPT there. A stronger upper bound was given in their theorem 3 that can be written as $\card{\Lambda} = \dim(\mathcal R)$: the discrepancy between this bound and that of equation \eqref{eq:bounds on ontic space} comes from additional constraints related to the uniqueness of possible classical models, effectively building up a more constrained alternative to Spekkens' non-contextuality.
In~\cite{schmid_structure_2020}, the original notion of Spekkens' non-contextuality is probed in a framework that includes preparations and measurements but also features a proper treatment of transformations. One key result of their work is that including the requirement of a non-contextual ontological representation of transformations strengthens the bound of \eqref{eq:bounds on ontic space} down to, loosely speaking,\footnote{There would be more to say about tomographic completeness assumptions: we use $\mathcal R$ as the representative vector space by analogy with our work, but for the proper description see~\cite{schmid_structure_2020}.} $\card{\Lambda} = \dim(\mathcal R)$.
\subsection{Alternative reduced spaces}
\label{sec:altr}
We have defined the reduced space in \cref{def:mainr} as
\begin{equation}
\mathcal R = \proj{\myspan{\mathtt e}}{\myspan{\mathtt s}}.
\end{equation}
However, one could ask whether an alternative definition of the reduced space would preserve the same physical motivation for the classical model while implying a possibly distinct notion of classicality for the prepare-and-measure scenario $(\sets,\sete)$.
Such an alternative definition could for instance be obtained from swapping the roles of $\mathtt s$ and $\mathtt e$ in the definition of $\mathcal R$, thus leading to a potential alternative reduced space $\proj{\myspan{\sets}}{\myspan{\sete}}$.
In this section, we will define and motivate a generalized class of reduced spaces from which one can construct generalized Spekkens' non-contextual classical models, and prove that the corresponding notions of classicality are all equivalent. To start with, consider the following class of reduced spaces:
\begin{definition}
\label{def:AltReducedSpace}
An alternative reduced space is any real, finite dimensional inner product space $\altr\mathcal R$ together with two linear maps $f,g\funcrange{\linherm\hil}{\altr\mathcal R}$ that satisfy
\begin{subequations}
\begin{align}
\label{eq:AltScal}
\nonumber \forall \rho\in\mathtt s,\forall E\in\mathtt e:\ & \\
\scal{\rho}{E}{\linherm\hil} &= \scal{f(\rho)}{g(E)}{\altr\mathcal R}\!, \\
\myspan{f(\mathtt s)} &= \altr\mathcal R, \label{eq:spanfs}\\
\myspan{g(\mathtt e)} &= \altr\mathcal R. \label{eq:spange}
\end{align}
\end{subequations}
\end{definition}
The fact that both maps $f,g$ have their image in the same vector space allows one to preserve the symmetry between the treatment of states and effects. The real inner product structure of any $\altr\mathcal R$ is a simple mathematical choice. We will return to the validity of the choice of finite dimensionality of $\altr\mathcal R$ later.
Equation \eqref{eq:spange} is motivated by the fact that for any $\rho\in\mathtt s$, the probabilities $\{\scal{\rho}{E}{\linherm\hil} : E\in\mathtt e\}$ do not necessarily fully determine $\rho$.
On the other hand, with equation \eqref{eq:spange} and the non-degeneracy of the inner product at hand,
the probabilities $\{\scal{f(\rho)}{g(E)}{\altr\mathcal R}: E\in\mathtt e\}$ completely determine $f(\rho)$. Thus, $f(\rho)$ is a good primitive to devise a non-contextual model that only resolves the degrees of freedom that are resolved by $g(E)$. This argument can be repeated, swapping $\rho$, $\mathtt s$ and $f$ with $E$, $\mathtt e$ and $g$ respectively, to analogously motivate equation \eqref{eq:spanfs}.
The inner product bilinearity and equations \eqref{eq:spanfs}, \eqref{eq:spange} imply that if \eqref{eq:AltScal} is to hold then $f,g$ have to be linear maps.
Without attempting to fully characterize the set of solutions to \cref{def:AltReducedSpace},
we prove that while $\mathcal R$ as defined in \cref{def:mainr} is indeed a valid solution to \cref{def:AltReducedSpace}, it is not the only such solution.
The proof is given in \text{appendix}~\ref{app:altr}.
\begin{restatable}{prop}{PropScalarAlt}
\label{prop:scalaralt}
The choice
\begin{subequations}
\begin{align}
\altr\mathcal R &:= \mathcal R = \proj{\myspan{\sete}}{\myspan{\sets}}, \\
f(\cdot) &:=\proj{\mathcal R}{\cdot}, \\ g(\cdot) &:=\proj{\mathcal R}{\cdot}
\end{align}
\end{subequations}
yields a valid alternative reduced space in \cref{def:AltReducedSpace}; and so does the swapped version
\begin{subequations}
\begin{align}
\altr\mathcal R &:= \proj{\myspan{\sets}}{\myspan{\sete}} =: \mathcal R', \\
f(\cdot) &:= \proj{\mathcal R'}{\cdot}, \\
g(\cdot) &:= \proj{\mathcal R'}{\cdot}.
\end{align}
\end{subequations}
\end{restatable}
We required the dimension of $\altr\mathcal R$ to be finite: in fact, \cref{def:AltReducedSpace} makes it possible to prove the following proposition; see \text{appendix}~\ref{app:altr} for a proof.
\begin{restatable}{prop}{PropAltDimensions}
\label{prop:AltDimensions}
It holds that for any alternative reduced space $\altr\mathcal R$ (\cref{def:AltReducedSpace}),
$\dim(\altr\mathcal R) = \dim(\mathcal R)$, where $\mathcal R$ is defined in \cref{def:mainr}.
\end{restatable}
Recall that the vector space inclusion $\mathcal R\subseteq\linherm\hil$ bounds the dimension of $\mathcal R$ and thus also bounds that of any $\altr\mathcal R$: $\dim(\altr\mathcal R) \leq \dim(\linherm\hil) = \dim(\mathcal H)^2$.
The classical model is defined for a given alternative reduced space in \text{appendix}~\cref{def:altclassicalmodel} by analogy to the classical model formulated for $\mathcal R$.
It turns out that using any alternative reduced space leads to results equivalent to those already obtained: a result formulated in $\mathcal R$ is typically equivalent to its counterpart formulated in any $\altr\mathcal R$.
Most importantly we have the following equivalence, proven in \text{appendix}~\ref{app:equivalence}:
\begin{restatable}[Equivalence of reduced spaces]{theorem}{ThEquivalenceReducedSpaces}
\label{th:EquivalenceReducedSpaces}
Given any $(\sets,\sete)$, consider $\mathcal R$ and any $\altr\mathcal R$ constructed from $(\sets,\sete)$.
There exists a classical model with ontic space $\Lambda$ constructed on $\mathcal R$ (\cref{def:classicalmodel}) if and only if
there exists a classical model constructed on $\altr\mathcal R$ (\text{appendix}~\cref{def:altclassicalmodel}) with the same ontic space $\Lambda$.
\end{restatable}
Note that the ontic primitives of the models in $\mathcal R$ and a given $\altr\mathcal R$ may be different, in particular they may belong to distinct vector spaces; but the ontic space that underlies the classical model is the same in either case. The implications of \cref{th:EquivalenceReducedSpaces} are the following:
\begin{myitem}
\item saying that the scenario $(\sets,\sete)${} admits a classical model is a statement which can be made regardless of which reduced space one chooses to use;
\item in the case of Riemann integrable classical models (\cref{def:RiemannIntegrableModel}), the generic case is that the ontic space is discrete as stated in \cref{th:CardinalityBounds}. Then, according to \cref{th:EquivalenceReducedSpaces}, any choice of alternative reduced space $\altr\mathcal R$ will yield ontic spaces of the same cardinality as those of $\mathcal R$.
\end{myitem}
Our choice to use $\mathcal R$ rather than another alternative reduced space $\altr\mathcal R$ is therefore of no particular significance.
Some additional equivalences between the alternative reduced spaces that are relevant for the algorithmic evaluation of the unit separability{} criterion will be provided in section \ref{sec:Computational equivalence of reduced spaces}.
\section{Algorithmic formulation, witnesses and certifiers}
\label{sec:algo}
The content of this section is organized as follows. First, we describe general results from convex analysis and introduce the vertex enumeration problem in section \ref{sec:venum}.
Then, we describe general theoretical results that hold for an arbitrary scenario $(\sets,\sete)${} in section \ref{sec:GeneralAspects}: these results help characterize the set of separable operators $\mathtt{Sep}(\sets,\sete)$ appearing in the unit separability{} criterion of \cref{th:maincriterion}.
In section~\ref{sec:PolyhedralCase}, we specialize to polyhedral scenarios, defined there, for which the unit separability{} criterion can be verified exactly.
In section \ref{sec:PolyhedralApprox}, we show how to certify the classicality or witness the non-classicality of an arbitrary scenario $(\sets,\sete)${} using the results of the polyhedral case, and discuss the convergence of the resulting hierarchies of algorithmic tests.
In section \ref{sec:Computational equivalence of reduced spaces}, we show the equivalence between the computational complexities of the algorithm formulated in $\mathcal R$ and that formulated in an alternative reduced space.
\subsection{Vertex enumeration}
\label{sec:venum}
Let us introduce some notation. A review of the main convex analysis definitions is presented in \text{appendix}~\ref{app:convex}, and the proofs of the propositions of this section are presented in \text{appendix}~\ref{app:ConvexConeResolution}.
For any finite dimensional real inner product space $\mathcal V$, let $X\subseteq \mathcal V$ be an arbitrary set.
The conic hull $\mycone{X}$ is the set of elements of the form $\lambda x \in\mathcal V$ for any $\lambda\in\mathbb R_{\geq 0}$ and $x\in X$.
A convex cone $\mathcal C\subseteq\mathcal V$ is one that equals its convex hull and also its conic hull: $\mathcal C = \myconv{\mathcal C} = \mycone{\mathcal C}$.
A half-line is the conic hull of a single element of the vector space. An extremal half-line of $\mathcal C$ is a half-line whose elements cannot be expressed as the average of linearly independent elements of $\mathcal C$.
The set of extremal half-lines of a convex cone $\mathcal C$ is denoted $\coneextr{\mathcal C}$.
\begin{restatable}[Pointed cone]{definition}{DefPointedCone}
\label{def:PointedCone}
Let $\mathcal C\subseteq\mathcal V$ be a convex cone. $\mathcal C$ is said to be a pointed cone if
\begin{myitem}
\item $\mathcal C$ is closed; \label{item:ClosedPointedCone}
\item $\mathcal C \neq \emptyset$ and $\mathcal C\neq \{0\}$; \label{item:NonTrivalPointedCone}
\item there exists a linear function $L\funcrange{\mathcal V}{\mathbb R}$ such that for all $c\in\mathcal C\setminus\{0\}$, $L(c) > 0$. \label{item:LinPointedCone}
\end{myitem}
\end{restatable}
The following proposition guarantees the representation of pointed cones as the convex hull of their extremal half-lines.
\begin{restatable}{prop}{PropResolutionPointedCone}
\label{prop:ResolutionPointedCone}
If $\mathcal C\subseteq \mathcal V$ is a pointed cone, then it holds that
\begin{equation}
\textstyle \mathcal C = \myconv{\bigcup_{\mathfrak{l}\in\coneextr{\mathcal C}} \mathfrak{l}}.
\end{equation}
\end{restatable}
We will also need the representation of the polar cone (\cref{def:Polar}) as the convex hull of its extremal half-lines: the following definition and proposition will be useful for that purpose.
\begin{restatable}[Spanning cone]{definition}{DefSpanningCone}
\label{def:SpanningCone}
A convex cone $\mathcal C\subseteq\mathcal V$ is a spanning cone in $\mathcal V$ if
\begin{myitem}
\item $\mathcal C$ is closed; \label{item:SpanningCone(i)}
\item $\mathcal C \neq \mathcal V$;
\label{item:ClosedSpanningCone}
\item $\myspan{\mathcal C} = \mathcal V$.
\label{item:SpanningSpanningCone}
\end{myitem}
\end{restatable}
Notice that the spanning cone property depends on the vector space $\mathcal V$ in which one embeds $\mathcal C$.
\begin{restatable}{prop}{PropResolutionPolarCone}
\label{prop:ResolutionPolarCone}
If $\mathcal C\subseteq\mathcal V$ is a spanning cone, then the polar cone $\polar{\mathcal C}{\mathcal V}\subseteq\mathcal V$ (\cref{def:Polar}) is a pointed cone, which implies by \cref{prop:ResolutionPointedCone} that
\begin{equation}
\textstyle \polar{\mathcal C}{\mathcal V} = \myconv{\bigcup_{\mathfrak{l}\in\coneextr{\polar{\mathcal C}{\mathcal V}}} \mathfrak{l}}.
\end{equation}
\end{restatable}
We see that if $\mathcal C\subseteq\mathcal V$ is a spanning pointed cone in $\mathcal V$, both $\mathcal C$ and the polar $\polar{\mathcal C}{\mathcal V}$ may be represented as the convex hull of their respective extremal half-lines. This defines the so-called vertex enumeration problem\footnote{We follow the terminology used in the literature, e.g.\ in \cite{Avis2000}, but we define the vertex enumeration problem even if the cone has infinitely many extremal half-lines.}:
\begin{definition}[Vertex enumeration problem]
\label{def:venum}
For $\mathcal C\subseteq \mathcal V$ a spanning pointed cone, the vertex enumeration problem consists in obtaining the extremal half-lines of $\polar{\mathcal C}{\mathcal V}$ from the extremal half-lines of $\mathcal C$. We denote the vertex enumeration map $\venum{\mathcal V}{\cdot}$\textup{:}
\begin{equation}
\venum{\mathcal V}{\coneextr{\mathcal C}} := \coneextr{\polar{\mathcal C}{\mathcal V}}.
\end{equation}
\end{definition}
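For intuition, the vertex enumeration map can be brute-forced in two dimensions. The sketch below assumes the polar-cone convention $\polar{\mathcal C}{\mathcal V} = \{v :\ \scal{v}{c}{\mathcal V}\geq 0\ \forall c\in\mathcal C\}$ and integer generators; production solvers such as the reverse search of \cite{Avis2000} handle the general case.

```python
# Brute-force vertex enumeration in R^2: given generators of the extremal
# half-lines of a spanning pointed cone C, return generators of the
# extremal half-lines of the polar cone C* = {v : <v, c> >= 0 for all c in C}.
# In 2D, every facet normal of C* is orthogonal to some generator of C.
def vertex_enumeration_2d(generators):
    duals = []
    for (a, b) in generators:
        for v in ((-b, a), (b, -a)):          # the two normals to (a, b)
            if all(v[0] * g[0] + v[1] * g[1] >= 0 for g in generators):
                duals.append(v)
    return duals  # note: duplicates should be pruned in general

# cone generated by (1, 0) and (1, 1): its polar cone is generated
# by (0, 1) and (1, -1)
print(vertex_enumeration_2d([(1, 0), (1, 1)]))  # -> [(0, 1), (1, -1)]
```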
The last proposition that will prove useful is the following half-space representation of a convex cone, starting from its extremal half-lines\footnote{A half-space $H\subseteq\mathcal V$ is the geometric interpretation of a homogeneous linear inequality solution set $H := \{v\in\mathcal V:\ L_H(v)\geq0\}$ where $L_H$ is a linear functional. By Riesz' representation \cref{th:Riesz},
there exists a unique $v_H\in\mathcal V$ such that $L_H(\cdot) = \scal{v_H}{\cdot}{\mathcal V}$. It is then clear that $H = \polar{\{v_H\}}{\mathcal V}=\polar{[\mycone{v_H}]}{\mathcal V}$.
Thus, the polar cone of a half-line is a half-space and conversely.}.
\begin{restatable}{prop}{PropHyperspaceRepresentation}
\label{prop:HyperspaceRepresentation}
A solution to the vertex enumeration problem allows one to represent a spanning pointed cone $\mathcal C\subseteq\mathcal V$ as the intersection of half-spaces:
\begin{equation}
\textstyle \mathcal C = \bigcap_{\mathfrak{l}\in\venum{\mathcal V}{\coneextr{\mathcal C}}} \polar{ \mathfrak{l}}{\mathcal V}.\label{eq:vertex intersection hyper}
\end{equation}
\end{restatable}
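In code, \cref{prop:HyperspaceRepresentation} translates into a simple membership test: a vector belongs to the cone if and only if it pairs non-negatively with every generator of the polar cone. A minimal sketch with an ad hoc two-dimensional example (the dual generators below are assumptions made for illustration):

```python
# Half-space membership test from the intersection representation:
# v is in C iff <v, l> >= 0 for every generator l of the polar cone.
def in_cone(v, dual_generators):
    return all(sum(x * y for x, y in zip(v, d)) >= 0 for d in dual_generators)

# ad hoc example: the cone generated by (1, 0) and (1, 1) in R^2 has
# polar cone generated by (0, 1) and (1, -1)
duals = [(0, 1), (1, -1)]
assert in_cone((2, 1), duals)        # (2, 1) = (1, 0) + (1, 1) is inside
assert not in_cone((-1, 0), duals)   # outside: <(-1, 0), (1, -1)> = -1 < 0
```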
\subsection{General aspects of the algorithm}
\label{sec:GeneralAspects}
The proofs of the propositions of this section are presented in \text{appendix}~\ref{app:algo}.
For the purpose of determining the structure of $\mathtt{Sep}(\sets,\sete)$, it turns out that rather than considering the convex sets $\proj{\mainr}{\sete}$ and $\proj{\mainr}{\sets}$, the main objects of interest are the convex cones $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$ and $\mycone{\proj{\mathcal R}{\closure\mathtt e}}$, where $\closure X$ denotes the closure of $X$. Indeed:
\begin{restatable}{prop}{PropPointedConeExpression}
\label{prop:PointedConeExpression}
\begin{subequations}
\begin{align}
\polar{\proj{\mainr}{\sets}}{\mathcal R} &= \polar{[\mycone{\proj{\mathcal R}{\closure\mathtt s}}]}{\mathcal R}, \label{eq:PolarPRS} \\
\polar{\proj{\mainr}{\sete}}{\mathcal R} &= \polar{[\mycone{\proj{\mathcal R}{\closure\mathtt e}}]}{\mathcal R}. \label{eq:PolarPRE}
\end{align}
\end{subequations}
\end{restatable}
These expressions are useful due to the fact that the vertex enumeration problem is well-defined for the sets $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$ and $\mycone{\proj{\mathcal R}{\closure\mathtt e}}$:
\begin{restatable}{prop}{PropPointedSpanningPRSE}
\label{prop:PointedSpanningPRSE}
$\mycone{\proj{\mathcal R}{\closure\mathtt s}}$ and $\mycone{\proj{\mathcal R}{\closure\mathtt e}}$ are spanning pointed cones in $\mathcal R$.
\end{restatable}
Together with \cref{prop:PointedConeExpression}, this shows that applying the vertex enumeration map to $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$ and $\mycone{\proj{\mathcal R}{\closure\mathtt e}}$ will yield the extremal half-lines of $\polar{\proj{\mainr}{\sets}}{\mathcal R}$ and $\polar{\proj{\mainr}{\sete}}{\mathcal R}$:
\begin{subequations}
\label{eq:VertexEnumSE}
\begin{align}
\coneextr{\polar\proj{\mainr}{\sets}\mathcal R} &= \venum{\mathcal R}{\coneextr{\mycone{\proj{\mathcal R}{\closure\mathtt s}}}}, \\
\coneextr{\polar\proj{\mainr}{\sete}\mathcal R} &= \venum{\mathcal R}{\coneextr{\mycone{\proj{\mathcal R}{\closure\mathtt e}}}}.
\end{align}
\end{subequations}
Knowing the extremal half-lines of $\polar\proj{\mainr}{\sets}\mathcal R$ and $\polar\proj{\mainr}{\sete}\mathcal R$, the characterization of $\mathtt{Sep}(\sets,\sete)$ as the convex hull of its extremal half-lines is readily obtained. First consider the following proposition:
\begin{restatable}{prop}{PropSepesSpanningPointed}
\label{prop:SepesSpanningPointed}
$\mathtt{Sep}(\sets,\sete)$ is a spanning pointed cone in $\mathcal R\otimes\mathcal R$.
\end{restatable}
This proposition together with \cref{prop:ResolutionPointedCone} guarantees that we may represent $\mathtt{Sep}(\sets,\sete)$ as the convex hull of its extremal half-lines.
The following definition will prove useful in this section:
\begin{definition}
Given any two sets $X,Y\subseteq \mathcal R$, the minimal tensor product set $X\otimes_{\textup{set}} Y\subseteq \mathcal R\otimes\mathcal R$ is defined as
\begin{equation}
X\otimes_{\textup{set}} Y := \{x\otimes y:\ x\in X, y\in Y\}.
\end{equation}
\end{definition}
If $\mathfrak{l}_1$ and $\mathfrak{l}_2$ are half-lines in $\mathcal R$, then $\mathfrak{l}_1\otimes_{\textup{set}}\mathfrak{l}_2$ is a half-line in $\mathcal R\otimes\mathcal R$. The following proposition makes explicit the extremal half-lines of $\mathtt{Sep}(\sets,\sete)$:
\begin{restatable}{prop}{PropResolutionSepes}
\label{prop:ResolutionSepes}
It holds that
\begin{multline}
\coneextr{\mathtt{Sep}(\sets,\sete)} = \left\{
\mathfrak{l}_1 \otimes_{\textup{set}} \mathfrak{l}_2:\
\mathfrak{l}_1 \in \coneextr{\polar\proj{\mainr}{\sets}\mathcal R},\right. \\
\left. \mathfrak{l}_2 \in \coneextr{\polar\proj{\mainr}{\sete}\mathcal R}
\right\}.
\label{eq:sepesextremallines}
\end{multline}
\end{restatable}
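\cref{prop:ResolutionSepes} is directly implementable: generators of the extremal half-lines of $\mathtt{Sep}(\sets,\sete)$ are obtained as pairwise tensor (Kronecker) products of generators from each polar cone. The two-dimensional generator lists below are ad hoc placeholders.

```python
# Pairwise tensor products of half-line generators, as in the resolution
# of Sep: the product of a generator on the state side with one on the
# effect side spans an extremal half-line of Sep.
def kron(u, v):
    """Tensor product of two vectors, flattened into R (x) R coordinates."""
    return tuple(x * y for x in u for y in v)

gens_s = [(1, 0), (1, 1)]    # ad hoc generators for the state-side polar cone
gens_e = [(0, 1), (1, -1)]   # ad hoc generators for the effect-side polar cone

sep_generators = [kron(f, s) for f in gens_s for s in gens_e]
assert len(sep_generators) == len(gens_s) * len(gens_e)   # M_s * M_e lines
assert sep_generators[3] == (1, -1, 1, -1)                # kron((1,1), (1,-1))
```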
Thus, knowing the extremal half-lines of $\polar{\reds}{\mainr}$ and $\polar{\rede}{\mainr}$ is equivalent to knowing the extremal half-lines of $\mathtt{Sep}(\sets,\sete)$. Now, apply the vertex enumeration map to these extremal half-lines:
\begin{equation}
\label{eq:VertexEnumSepes}
\coneextr{\polar{\mathtt{Sep}(\sets,\sete)}{\mathcal R\otimes\mathcal R}} = \venum{\mathcal R\otimes\mathcal R}{\coneextr{\mathtt{Sep}(\sets,\sete)}}.
\end{equation}
The extremal half-lines of $\polar{\mathtt{Sep}(\sets,\sete)}{\mathcal R\otimes\mathcal R}$ are of particular interest.
We introduce the set $\mathtt{Wit}(\sets,\sete)\subset \mathcal R\otimes\mathcal R$ that picks out the norm-one elements of each extremal half-line:
\begin{multline}
\mathtt{Wit}(\sets,\sete) := \Big\{\Gamma\in \mathfrak{l} :\ \norm{\Gamma}{\mathcal R\otimes\mathcal R} = 1,\\
\mathfrak{l}\in\coneextr{\polar{\mathtt{Sep}(\sets,\sete)}{\mathcal R\otimes\mathcal R}}\Big\}.
\end{multline}
The half-space representation of \cref{prop:HyperspaceRepresentation} applied to $\mathtt{Sep}(\sets,\sete)$, together with the fact that $\polar{[\mycone{X}]}{\mathcal R\otimes\mathcal R} = \polar{X}{\mathcal R\otimes\mathcal R}$ for any $X\subseteq\mathcal R\otimes\mathcal R$, allows one to write:
\begin{multline}
\mathtt{Sep}(\sets,\sete) = \left\{\Omega\in\mathcal R\otimes\mathcal R:\
\scal{\Omega}{\Gamma}{\mathcal R\otimes\mathcal R} \geq 0 \right.\\
\left. \forall\, \Gamma\in\mathtt{Wit}(\sets,\sete)\right\}.
\end{multline}
Starting from the unit separability{} criterion of \cref{th:maincriterion}, we see that $(\sets,\sete)${} admit a Riemann integrable classical model if and only if
\begin{equation}
\label{eq:VerifGamma}
\forall\, \Gamma\in\mathtt{Wit}(\sets,\sete):\ \scal{\choi{\id\mathcal R}}{\Gamma}{\mathcal R\otimes\mathcal R} \geq 0.
\end{equation}
Thus, for any non-classical $(\sets,\sete)$, there must exist a ``non-classicality witness'' $\Gamma_0\in\mathtt{Wit}(\sets,\sete)$ such that
\begin{equation}
\label{eq:NonClassWitn}
\scal{\choi{\id\mathcal R}}{\Gamma_0}{\mathcal R\otimes\mathcal R} < 0.
\end{equation}
This notion of non-classicality witness will be further explored in the next sections.
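Operationally, once a finite witness set is at hand, the classicality verdict of \eqref{eq:VerifGamma} and the witness condition \eqref{eq:NonClassWitn} reduce to inner products. A sketch with hypothetical coordinates (all vectors are expressed in an orthonormal basis of $\mathcal R\otimes\mathcal R$):

```python
# Evaluate the witness inequalities <Choi(id), Gamma> >= 0 over a finite
# witness set; a single violation certifies non-classicality.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

choi = (1.0, 0.0, 0.0, 1.0)   # Choi(id) for a 2-dimensional reduced space

witnesses = [                  # hypothetical witness coordinates
    (0.5, 0.5, 0.5, 0.5),      # <choi, .> = 1.0 >= 0
    (0.0, 1.0, -1.0, 0.0),     # <choi, .> = 0.0 >= 0
]

violated = [g for g in witnesses if dot(choi, g) < 0]
classical = not violated       # classical iff no witness is violated
assert classical

# a hypothetical violated witness Gamma_0 would look like this:
gamma0 = (-1.0, 0.0, 0.0, 0.0)
assert dot(choi, gamma0) < 0   # certifies non-classicality
```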
The authors were made aware of the fact that a related triple vertex enumeration routine was already proposed in \cite{elie_algo_2017}, producing similar sorts of non-classicality witnesses but in a different, somewhat more specialized, framework.
One important technical difference is that the authors of \cite{elie_algo_2017} were looking at a reduced set of inequalities, i.e., in the light of the current results, they were looking at a lower dimensional projection of the convex cone $\mathtt{Sep}(\sets,\sete)$. For this reason, \cref{prop:ResolutionSepes} did not apply to their setup, which is why in this case it was crucial to remove the redundant elements out of the set on the right-hand side of equation \eqref{eq:sepesextremallines} to optimize the runtime of the algorithm, as was emphasized in their work.
\subsection{Solvable cases: polyhedral scenarios}
\label{sec:PolyhedralCase}
The main bottleneck for an efficient implementation of the algorithm described so far is the actual resolution of the vertex enumeration problem in equations \eqref{eq:VertexEnumSE} and \eqref{eq:VertexEnumSepes}. In this section, we describe the case of polyhedral scenarios, for which the unit separability{} criterion may be evaluated in finite time.
\begin{definition}[Polyhedral scenarios]
\label{def:PolyhedralScenario}
The prepare-and-measure scenario $(\sets,\sete)${} is said to be a \emph{polyhedral scenario} if the convex cones $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$ and $\mycone{\proj{\mathcal R}{\closure\mathtt e}}$ have finitely many extremal half-lines:
\begin{subequations}
\begin{align}
N_\mathtt s := \card{\coneextr{\mycone{\proj{\mathcal R}{\closure\mathtt s}}}} < \infty, \\
N_\mathtt e := \card{\coneextr{\mycone{\proj{\mathcal R}{\closure\mathtt e}}}} < \infty.
\end{align}
\end{subequations}
\end{definition}
A sufficient condition for $(\sets,\sete)${} to form a polyhedral scenario is that $\mathtt s$ is the convex hull of finitely many quantum states, and $\mathtt e$ is the convex hull of finitely many quantum effects.
The motivation for the name is that convex cones that are generated by finitely many extremal half-lines are special cases of the well-known polyhedral convex cones \cite{ConvexAnalysis}.
In the vertex enumeration problem, if $\mathcal C\subset\mathcal V$ is a spanning pointed cone that has finitely many extremal half-lines, i.e.\ if $\mathcal C$ is polyhedral, then $\polar{\mathcal C}{\mathcal V}$ will have finitely many extremal half-lines, as described in e.g.\ section 4.6 of \cite{ConvexAnalysis}.
Efficient algorithms to solve the vertex enumeration in this case exist in the literature such as the reverse search approach of \cite{Avis2000}.
Thus, for a polyhedral scenario $(\sets,\sete)$, the first vertex enumeration problems, i.e.\ those of equations \eqref{eq:VertexEnumSE}, will each produce a finite number of extremal half-lines. Let there be $M_\mathtt s\in\mathbb N$ extremal half-lines of $\polar\proj{\mainr}{\sets}\mathcal R$, and $M_\mathtt e\in\mathbb N$ for $\polar\proj{\mainr}{\sete}\mathcal R$. These will form, via \cref{prop:ResolutionSepes}, the $M_\mathtt s\cdot M_\mathtt e$ extremal half-lines of $\mathtt{Sep}(\sets,\sete)$.
Then, the vertex enumeration of \eqref{eq:VertexEnumSepes} will yield the finite set $\mathtt{Wit}(\sets,\sete)$.
It then suffices to verify $\card{\mathtt{Wit}(\sets,\sete)}$ homogeneous linear inequalities in $\mathcal R\otimes\mathcal R$ as in \eqref{eq:VerifGamma}\footnote{Verifying $m$ linear inequalities in an inner product space of dimension $d$ has complexity $\mathcal O(md)$, which is negligible compared to the vertex enumeration problems involved here.} to obtain a definite answer for the classicality or non-classicality of $(\sets,\sete)$.
If the runtime of the vertex enumeration problem as in \cref{def:venum} is denoted $\venumtime{\card{\coneextr{\mathcal C}}}{\card{\coneextr{\polar{\mathcal C}{\mathcal V}}}}{\dim(\mathcal V)}$, and assuming that determining the extremal half-lines of $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$ and $\mycone{\proj{\mathcal R}{\closure\mathtt e}}$ is comparatively simple, then the total runtime\footnote{Here we focus on time complexity of the algorithm, but \eqref{eq:venumtime} is of course also valid for space complexity.} of the algorithm will be of order
\begin{multline}
\label{eq:venumtime}
\venumtime{N_\mathtt s}{M_\mathtt s}{\dim(\mathcal R)}
+\venumtime{N_\mathtt e}{M_\mathtt e}{\dim(\mathcal R)} \\
+\venumtime{M_\mathtt s\cdot M_\mathtt e}{\card{\mathtt{Wit}(\sets,\sete)}}{\dim(\mathcal R)^2}.
\end{multline}
\label{par:Complexity}
It is now natural to ask what form the time complexity $\venumtime{n}{m}{d}$ takes. To compare with the existing literature, note that vertex enumeration of the spanning pointed cones described in this article is equivalent to vertex enumeration of a compact polyhedral convex set: this is made explicit in \cref{lem:SlicedCone}.
As far as the authors are aware, the computational complexity of such a vertex enumeration problem is an open question \cite{GeneratingVerticesHard,VertexReview}.
However, it still holds that when certain structural assumptions on the input convex set are made, the vertex enumeration problem admits efficient solutions, i.e.\ solutions for which $\venumtime{n}{m}{d}$ is polynomial in $n,m,d$ \cite{Avis2000,VertexReview}.
It is an open question whether the algorithm described in this \doc{} considers vertex enumeration problems for which such structural assumptions are generically met.
\subsection{Polyhedral approximations}
\label{sec:PolyhedralApprox}
In the general case where $(\sets,\sete)${} is not a polyhedral scenario (\cref{def:PolyhedralScenario}), or where $(\sets,\sete)${} is a polyhedral scenario but the runtime of the previous algorithm is prohibitively long due to e.g.\ a large number of extremal half-lines, one may still choose any polyhedral inner or outer approximation of the relevant cones, yielding either classicality certifiers or non-classicality witnesses as described in the following sections.
\subsubsection{Classicality certifiers}
First, consider an outer approximation of the input cones: let $\mathcal C_\sets^{\textup{(out)}},\mathcal C_\sete^{\textup{(out)}}\subseteq\mathcal R$ be spanning pointed cones (\cref{def:PointedCone,def:SpanningCone}) in $\mathcal R$ such that
\begin{subequations}
\label{eq:OuterApprox}
\begin{align}
\mycone{\proj{\mathcal R}{\closure\mathtt s}} &\subseteq \mathcal C_\sets^{\textup{(out)}}, \\
\mycone{\proj{\mathcal R}{\closure\mathtt e}} &\subseteq \mathcal C_\sete^{\textup{(out)}},
\end{align}
\end{subequations}
and such that $\card{\coneextr\mathcal C_\sets^{\textup{(out)}}},\card{\coneextr\mathcal C_\sete^{\textup{(out)}}} < \infty$. Such cones always exist: let us give a constructive example. Consider the hyperspace description of $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$, $\mycone{\proj{\mathcal R}{\closure\mathtt e}}$ as in \cref{prop:HyperspaceRepresentation}.
If one keeps a finite set of at least $\dim(\mathcal R)$ hyperspaces, the resulting cones will be spanning pointed cone
outer approximations of $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$, $\mycone{\proj{\mathcal R}{\closure\mathtt e}}$ with finitely many extremal half-lines.
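For intuition, the following sketch outer-approximates a hypothetical, illustrative cone (the second-order cone in $\mathbb R^3$, not one produced by the algorithm) by keeping finitely many of its supporting hyperspaces, in the spirit of the construction above.

```python
import numpy as np

# Outer-approximate the second-order cone {(x, y, z) : z >= sqrt(x^2 + y^2)}
# by keeping k of its supporting halfspaces  x cos(t) + y sin(t) - z <= 0.
k = 8
angles = 2 * np.pi * np.arange(k) / k
H = np.stack([np.cos(angles), np.sin(angles), -np.ones(k)], axis=1)

def in_outer_approx(p, tol=1e-9):
    return bool(np.all(H @ p <= tol))

# Every point of the exact cone satisfies the kept inequalities ...
rng = np.random.default_rng(0)
xy = rng.normal(size=(100, 2))
cone_pts = np.column_stack([xy, np.linalg.norm(xy, axis=1)])
all_inside = all(in_outer_approx(p) for p in cone_pts)

# ... while the polyhedral cone is strictly larger: this point violates
# z >= sqrt(x^2 + y^2) yet satisfies all eight kept halfspaces.
slack_pt = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8), 0.93])
```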
The algorithm described in the previous section may be run in exactly the same way as in the polyhedral case, with $\mathcal C_\sets^{\textup{(out)}}$ and $\mathcal C_\sete^{\textup{(out)}}$ replacing $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$ and $\mycone{\proj{\mathcal R}{\closure\mathtt e}}$ as inputs to the algorithm.
Let $\mathtt{Sep}^{\textup{(in)}}$ be the cone that the algorithm will characterize:
\begin{equation}
\label{eq:InnerSepes}
\mathtt{Sep}^{\textup{(in)}} := \myconv{\polar{[\mathcal C_\sets^{\textup{(out)}}]}{\mathcal R}
\otimes_{\textup{set}}
\polar{[\mathcal C_\sete^{\textup{(out)}}]}{\mathcal R}
}.
\end{equation}
Using \cref{lem:SubPolar} and equations \eqref{eq:OuterApprox}, it can be shown that
\begin{equation}
\label{eq:SepoutSubsetSepes}
\mathtt{Sep}^{\textup{(in)}} \subseteq \mathtt{Sep}(\sets,\sete),
\end{equation}
which justifies the reversed superscript of $\mathtt{Sep}^{\textup{(in)}}$: outer conic approximations in \eqref{eq:OuterApprox} yield an inner approximation of $\mathtt{Sep}(\sets,\sete)$ in \eqref{eq:SepoutSubsetSepes}.
Then, let $\{\Gamma_i^{\text{(in)}}\in\mathcal R\otimes\mathcal R\}_i$ be the finite set of witnesses produced by the algorithm, i.e.\ there is one $\Gamma_i^{\text{(in)}}$ for each extremal half-line of the convex cone $\polar{[\mathtt{Sep}^{\textup{(in)}}]}{\mathcal R\otimes\mathcal R}$.
By the hyperspace description of \cref{prop:HyperspaceRepresentation}, if it holds that for all $i$,
\begin{equation}
\label{eq:InnerApprox}
\scal{\choi{\id\mathcal R}}{\Gamma_i^{\text{(in)}}}{\mathcal R\otimes\mathcal R} \geq 0,
\end{equation}
then $\choi{\id\mathcal R}\in\mathtt{Sep}^{\textup{(in)}}$ and thus also $\choi{\id\mathcal R}\in\mathtt{Sep}(\sets,\sete)$ thanks to equation \eqref{eq:SepoutSubsetSepes}. This guarantees the classicality of $(\sets,\sete)$: in that case, the finite set $\{\Gamma_i^{\textup{(in)}}\}_i$ is referred to as a set of \emph{classicality certifiers}.
If instead \eqref{eq:InnerApprox} does not hold for all $i$, then the approximation is inconclusive. One may then, for example, use refined polyhedral outer approximations ${\mathcal C_\sets^{\textup{(out)}}}{'}\subset \mathcal C_\sets^{\textup{(out)}}$, $\mathcal C_\sete^{\textup{(out)}}{'}\subset\mathcal C_\sete^{\textup{(out)}}$ that are subsets of the previous ones but still verify \eqref{eq:OuterApprox} to obtain a finer inner approximation of $\mathtt{Sep}(\sets,\sete)$, and repeat the procedure. The convergence of this hierarchy of finer approximations will be discussed shortly but let us first describe the outer approximations of $\mathtt{Sep}(\sets,\sete)$.
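Operationally, the certification step \eqref{eq:InnerApprox} amounts to finitely many inner-product checks. A minimal sketch, with purely hypothetical low-dimensional data in place of the witnesses $\Gamma_i^{\textup{(in)}}$:

```python
import numpy as np

def certify_classicality(target, certifiers, tol=1e-12):
    """Return True iff <target, Gamma_i> >= 0 for every certifier Gamma_i.

    `target` plays the role of the (vectorized) Choi state of the identity;
    a True verdict certifies membership in the inner approximation of Sep,
    hence classicality.  A False verdict is inconclusive.
    """
    return all(float(np.vdot(g, target).real) >= -tol for g in certifiers)

# Toy 2-dimensional illustration (hypothetical data): the certifiers cut
# out the positive quadrant, and one target lies inside it.
certifiers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
inside = certify_classicality(np.array([0.3, 0.7]), certifiers)
outside = certify_classicality(np.array([-0.3, 0.7]), certifiers)
```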
\subsubsection{Non-classicality witnesses}
In parallel to attempting to certify the classicality of $(\sets,\sete)${} by using outer approximations, one may also consider inner approximations to the input cones: choose spanning pointed cones $\mathcal C_\sets^{\textup{(in)}}\!,\, \mathcal C_\sete^{\textup{(in)}}\subseteq\mathcal R$ such that
\begin{subequations}
\label{eq:InnerApproxCones}
\begin{align}
\mathcal C_\sets^{\textup{(in)}}&\subseteq \mycone{\proj{\mathcal R}{\closure\mathtt s}}, \\
\mathcal C_\sete^{\textup{(in)}}&\subseteq \mycone{\proj{\mathcal R}{\closure\mathtt e}},
\end{align}
\end{subequations}
and such that $\card{\coneextr\mathcal C_\sets^{\textup{(in)}}},\card{\coneextr\mathcal C_\sete^{\textup{(in)}}} < \infty$. Such cones always exist. For example, consider the extremal half-line description of $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$, $\mycone{\proj{\mathcal R}{\closure\mathtt e}}$ as in \cref{prop:ResolutionPointedCone}.
By keeping a finite set of at least $\dim(\mathcal R)$ extremal half-lines, the resulting cones will be spanning pointed cone inner approximations of $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$, $\mycone{\proj{\mathcal R}{\closure\mathtt e}}$ with finitely many extremal half-lines.
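Dually, the following sketch (again with an illustrative second-order cone, not one from the algorithm) keeps finitely many extremal half-lines and checks that their conic hull stays inside the original cone, as an inner approximation must.

```python
import numpy as np

# Inner-approximate the second-order cone {(x, y, z) : z >= sqrt(x^2 + y^2)}
# by the conic hull of k of its extremal rays (cos t, sin t, 1).
k = 8
angles = 2 * np.pi * np.arange(k) / k
rays = np.stack([np.cos(angles), np.sin(angles), np.ones(k)], axis=1)

def in_exact_cone(p, tol=1e-9):
    return p[2] >= np.hypot(p[0], p[1]) - tol

# Random nonnegative combinations of the kept rays all lie in the exact
# cone (triangle inequality), so the polyhedral cone is an inner approximation.
rng = np.random.default_rng(1)
coeffs = rng.uniform(size=(200, k))
samples = coeffs @ rays
all_inside = all(in_exact_cone(p) for p in samples)
```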
The algorithm may be run in that case as well, with $\mathcal C_\sets^{\textup{(in)}}$, $\mathcal C_\sete^{\textup{(in)}}$ as inputs rather than $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$, $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$. Let $\mathtt{Sep}^{\textup{(out)}}$ be the cone that the algorithm will characterize:
\begin{equation}
\mathtt{Sep}^{\textup{(out)}} := \myconv{\polar{[\mathcal C_\sets^{\textup{(in)}}]}{\mathcal R}
\otimes_{\textup{set}}
\polar{[\mathcal C_\sete^{\textup{(in)}}]}{\mathcal R}}.
\end{equation}
Using \cref{lem:SubPolar} and equations \eqref{eq:InnerApproxCones}, it can be shown that
\begin{equation}
\label{eq:OuterSepes}
\mathtt{Sep}(\sets,\sete)\subseteq \mathtt{Sep}^{\textup{(out)}},
\end{equation}
which again justifies the reversed superscripts.
The resulting set of witnesses is denoted $\{\Gamma_i^{\text{(out)}}\in\mathcal R\otimes\mathcal R\}_i$, i.e.\ there is one $\Gamma_i^{\text{(out)}}$ for each of the extremal half-lines of $\polar{[\mathtt{Sep}^{\textup{(out)}}]}{\mathcal R\otimes\mathcal R}$.
Then, if there exists $j$ such that
\begin{equation}
\label{eq:NonClassWitness}
\scal{\choi{\id\mathcal R}}{\Gamma_j^{\text{(out)}}}{\mathcal R\otimes\mathcal R} < 0,
\end{equation}
then looking back at the hyperspace representation of \cref{prop:HyperspaceRepresentation}, $\choi{\id\mathcal R}\notin \mathtt{Sep}^{\textup{(out)}}$, and by the subset inclusion \eqref{eq:OuterSepes} also $\choi{\id\mathcal R}\notin\mathtt{Sep}(\sets,\sete)$.
$\Gamma_j^{\text{(out)}}$ is referred to as a \emph{non-classicality witness} of the scenario $(\sets,\sete)$.
If there does not exist such a $j$, then the approximation is inconclusive and one should use a refined inner approximation in \eqref{eq:InnerApproxCones}.
\subsubsection{Comments on convergence}
It is important to realize that finer and finer approximations will have more and more extremal half-lines, and will yield a computationally harder vertex enumeration problem --- see section~\ref{par:Complexity} for a partial description of the complexity of the problem in each instance of a polyhedral input to the algorithm.
The procedure of repeatedly refining the inner or outer approximations will in principle converge to a definite answer provided that $\choi{\id\mathcal R}$ is an interior or exterior point of the closed (\cref{prop:sepclosed}) convex cone $\mathtt{Sep}(\sets,\sete)$. Whether there exist instances of $(\sets,\sete)${} such that $\choi{\id\mathcal R}$ is a boundary point\footnote{From \cref{th:maincriterion}, such boundary points describe classical scenarios $(\sets,\sete)$.} of $\mathtt{Sep}(\sets,\sete)$ is an open question.
An alternative approach to refining the polyhedral approximations would be to change the inner or outer approximations randomly while keeping the number of extremal half-lines fixed. This procedure would have the merit of probing more of the structure of $(\sets,\sete)${} while keeping the computational complexity fixed, but there is no guarantee that this approach converges.
\subsubsection{Connections with quantum entanglement}
The present algorithm may be recast as a basic algorithm to treat the usual problem of verifying the entanglement of a given bipartite state. Let us give the key ideas to relate the two procedures. Let the convex cone of positive semi-definite matrices be $\mathcal P(\hil)\subset \linherm\hil$:
\begin{equation}
\mathcal P(\hil) := \big\{h\in\linherm\hil:\ \scal{h}{\ketbra{\psi}}{\linherm\hil} \geq 0 \ \forall \ket\psi\in\mathcal H\big\}.
\end{equation}
The cone of unnormalized bipartite product states on $\mathcal H\otimes\mathcal H$ is $\mathcal P(\hil)\otimes_{\textup{set}}\mathcal P(\hil)$.
The convex cone of unnormalized separable quantum states, $\mathtt{Q.Sep}$, is:
\begin{equation}
\mathtt{Q.Sep} := \myconv{\mathcal P(\hil)\otimes_{\textup{set}}\mathcal P(\hil)}.
\end{equation}
If a state $\Omega\in\linherm{\mathcal H\otimes\mathcal H}$ belongs to $\mathtt{Q.Sep}$, it is said to be separable, else it is said to be entangled.
To recast the problem of determining whether $\Omega$ is entangled or not to an application of the algorithm described in the previous sections, consider the following main identifications. First, in the previous algorithm, replace $\mathcal R$ with $\linherm\hil$. Then, the input cones $\mycone{\proj{\mathcal R}{\closure\mathtt s}}$ and
$
\mycone{\proj{\mathcal R}{\closure\mathtt e}}$ are both replaced with $\mathcal P(\hil)$. The tensor product cone $\mathtt{Q.Sep}$ is related to $(\mathcal P(\hil),\mathcal P(\hil))$ in the same way that $\mathtt{Sep}(\sets,\sete)$ is related to $(\mycone{\proj{\mathcal R}{\closure\mathtt s}},
\mycone{\proj{\mathcal R}{\closure\mathtt e}})$.
For this identification to work, one needs to recall the basic result that $\bigpolar{\mathcal P(\hil)}{\linherm\hil} = \mathcal P(\hil)$.
Then, the state $\Omega\in\linherm{\mathcal H\otimes\mathcal H}$ replaces $\choi{\id\mathcal R}$.
Characterizing whether $\Omega\in\mathtt{Q.Sep}$ can thus be reduced to a non-polyhedral instance of the previous algorithm, due to the infinite number of extremal half-lines of $\mathcal P(\hil)$: the set of extremal half-lines of $\mathcal P(\hil)$ is equal to the set of all half-lines $\mycone{\ketbra\psi}$ with $\ket\psi\in\mathcal H$.
Thus, it makes sense to use inner and outer approximations
as described in section~\ref{sec:PolyhedralApprox}. Here, ``classicality certifiers'' become ``separability certifiers'' and a ``non-classicality witness'' becomes an entanglement witness in the usual sense of the literature, see e.g.\ \cite{CharacterizingEntanglement} for a review.
Due to the complexity of vertex enumeration in the general case, there exist more efficient algorithms in the literature to produce entanglement witnesses such as the SDP hierarchy of \cite{ImprovedSDPHierarchy}.
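As a concrete example, the standard two-qubit witness $W=\frac{1}{2}\mathbb{1}-|\Phi^+\rangle\langle\Phi^+|$ evaluates non-negatively on every separable state (no product state overlaps $|\Phi^+\rangle$ by more than $1/2$) and strictly negatively on the Bell state itself; the sketch below checks both facts numerically.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) on C^2 (x) C^2.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho_bell = np.outer(phi_plus, phi_plus)

# Standard entanglement witness W = (1/2) I - |Phi+><Phi+|:
# Tr(W sigma) >= 0 for every separable sigma, since no product state
# overlaps |Phi+> with probability larger than 1/2.
W = 0.5 * np.eye(4) - rho_bell

def witness_value(rho):
    return float(np.trace(W @ rho).real)

bell_value = witness_value(rho_bell)                   # 1/2 - 1 = -1/2 < 0
product_value = witness_value(np.diag([0.0, 1.0, 0.0, 0.0]))  # |01><01|
```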
\subsection{Computational equivalence: changing reduced spaces and quantum primitives}
\label{sec:Computational equivalence of reduced spaces}
\subsubsection{Changing reduced spaces}
In section~\ref{sec:altr}, it was shown that the classicality of $(\sets,\sete)${} is a concept that is independent of whether one chooses to work with the initial reduced space $\mathcal R$ (\cref{def:mainr}) or with any alternative reduced space $\altr\mathcal R$ (\cref{def:AltReducedSpace}).
The previous sections suggested an algorithmic procedure to verify the classicality of $(\sets,\sete)${} through an evaluation of the unit separability criterion, \cref{th:maincriterion}.
One may ask whether it is simpler to execute this algorithmic procedure when working in $\mathcal R$ or any other $\altr\mathcal R$. The most computationally intensive part of the algorithm is to solve the vertex enumeration problem, and, as stated in section~\ref{par:Complexity}, the complexity of the vertex enumeration problem depends on
1) the dimension of the ambient vector space, which by \cref{prop:AltDimensions} is the same in $\mathcal R$ and any $\altr\mathcal R$, and 2) the number of extremal half-lines of the cone and its dual. The following propositions prove the equivalence of the numbers of extremal half-lines of the relevant convex cones built in $\mathcal R$ or any other $\altr\mathcal R$.
\begin{definition}
\label{def:IsoCone}
Given any two finite dimensional real inner product spaces $\mathcal U$, $\mathcal V$ such that $\dim(\mathcal U) = \dim(\mathcal V)$, two convex cones $\mathcal C\subseteq\mathcal U$ and $\mathcal D \subseteq \mathcal V$ are said to be isomorphic, denoted $\mathcal C \sim \mathcal D$, if and only if there exists an invertible linear map $\Phi\funcrange{\mathcal U}{\mathcal V}$ such that
\begin{equation}
\Phi(\mathcal C) = \mathcal D.
\end{equation}
\end{definition}
Applying this definition to the relevant cones in our setup, we obtain:
\begin{restatable}{prop}{PropAltrAlgoEquiv}
\label{prop:AltrAlgoEquiv}
Choosing any alternative reduced space $\altr\mathcal R$ with associated mappings $f,g$ (\cref{def:AltReducedSpace}), it holds that:
\begin{subequations}
\begin{align}
\mycone{\proj{\mathcal R}{\closure\mathtt s}} &\sim \mycone{f(\closure\mathtt s)},
\label{eq:IsoPrs}\\
\mycone{\proj{\mathcal R}{\closure\mathtt e}} &\sim \mycone{g(\closure\mathtt e)},\label{eq:IsoPre} \\
\mathtt{Sep}(\sets,\sete) &\sim \altr\mathtt{Sep}(\sets,\sete), \label{eq:IsoSepes}
\end{align}
\end{subequations}
where
\begin{equation}
\altr\mathtt{Sep}(\sets,\sete) := \myconv{
\bigpolar{f(\mathtt s)}{\altr\mathcal R}
\otimes_{\textup{set}}
\bigpolar{g(\mathtt e)}{\altr\mathcal R}
}.
\end{equation}
\end{restatable}
The following proposition will allow one to assert the computational equivalence of $\mathcal R$ and $\altr\mathcal R$:
\begin{restatable}{prop}{PropIsoCones}
\label{prop:IsoCones}
Given any two finite dimensional real inner product spaces $\mathcal U$, $\mathcal V$ such that $\dim(\mathcal U) = \dim(\mathcal V)$, any two convex cones $\mathcal C\subseteq\mathcal U$ and $\mathcal D \subseteq \mathcal V$ such that $\mathcal C\sim\mathcal D$ have the following properties:
\begin{myitem}
\item there is a one-to-one correspondence between the extremal half-lines of $\mathcal C$ and those of $\mathcal D$;
\item the same holds for the extremal half-lines of the polar cones due to $\polar{\mathcal C}{\mathcal U} \sim \polar{\mathcal D}{\mathcal V}$.
\end{myitem}
\end{restatable}
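As a toy numerical illustration of \cref{prop:IsoCones}, with a hypothetical cone and map unrelated to the cones of the algorithm, an invertible linear map preserves the count of extremal half-lines:

```python
import numpy as np

def extremal_ray_count(gens, tol=1e-9):
    """Count extremal rays of cone(gens), in the simple situation (true for
    the toy cone below) where every all-but-one subset of the generators is
    linearly independent: generator i is extremal iff its unique expansion
    in the remaining generators requires a negative coefficient."""
    count = 0
    for i in range(len(gens)):
        others = np.array([g for j, g in enumerate(gens) if j != i]).T
        x = np.linalg.solve(others, gens[i])
        if np.any(x < -tol):
            count += 1
    return count

# Cone over a square: four extremal rays in R^3.
gens = [np.array([sx, sy, 1.0]) for sx in (-1.0, 1.0) for sy in (-1.0, 1.0)]
Phi = np.array([[2.0, 1.0, 0.0],
                [0.0, 1.0, 1.0],
                [1.0, 0.0, 3.0]])  # invertible (det = 7)
mapped = [Phi @ g for g in gens]
n_before, n_after = extremal_ray_count(gens), extremal_ray_count(mapped)
```

Note that the expansion coefficients are unchanged by an invertible map, which is exactly why the extremal-ray count is invariant.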
\Cref{prop:AltrAlgoEquiv,prop:IsoCones}, proven in \text{appendix}~\ref{app:AlgorithmicEquivalence}, show that all the cones involved in the algorithm that verifies the unit separability{} criterion yield vertex enumeration problems of the same complexity, because this complexity depends on the number of extremal half-lines as described in \eqref{eq:venumtime}.
\subsubsection{Changing quantum primitives}
We now expand on \cref{prop:ancilla} to show that not only are statistically equivalent scenarios equivalent for the existence of a classical model of a certain type,
but also that the computational complexity of verifying the unit separability criterion is the same for either one of the two descriptions. The proof is given in \text{appendix}~\ref{app:AlgorithmicEquivalence}.
\begin{restatable}{prop}{PropAncillaComp}
\label{prop:ancillacomp}
For the scenarios $(\sets,\sete)${} and $(\tilde \mathtt s,\tilde\mathtt e)$ and the corresponding reduced spaces $\mathcal R$ and $\tilde\mathcal R$ defined in \cref{prop:ancilla}, it holds that
\begin{align}
\mycone{\proj{\mathcal R}{\mathtt s}} &\sim
\mycone{P_{\tilde\mathcal R}(\tilde\mathtt s)}, \label{eq:ancillacones}\\
\mycone{\proj{\mathcal R}{\mathtt e}} &\sim
\mycone{P_{\tilde\mathcal R}(\tilde\mathtt e)}, \label{eq:ancillaconee}\\
\mathtt{Sep}(\sets,\sete) &\sim
\textup{Sep}(\tilde\mathtt s,\tilde\mathtt e), \label{eq:ancillaconesepes}
\end{align}
which proves by \cref{prop:IsoCones} the computational equivalence of starting with either $(\sets,\sete)${} or $(\tilde \mathtt s,\tilde\mathtt e)$.
\end{restatable}
\section{Connections with generalized probabilistic theories}
\label{sec:Connection}
\subsection{Generalized probabilistic reformulation}
\label{sec:GPT}
Although we formulated the classical model of \cref{def:classicalmodel} for quantum primitives, the fact that the sets $\mathtt s$, $\mathtt e$ originate from the Hilbert space of the quantum system is not crucial for the present classical model construction.
Rather than considering the vector space $\linherm\hil$ equipped with the Hilbert-Schmidt inner product, consider any real inner product space $\mathcal V$ of finite dimension:
\begin{subequations}%
\label{eq:GPTsubst}
\vspace{-0.4cm}
\begin{equation}
\linherm\hil \rightarrow \mathcal V.
\end{equation}
Then, replace $\mathtt s$ by $\Omega\subseteq\mathcal V$ and $\mathtt e$ by $\mathcal E\subseteq \mathcal V$, following standard notation \cite{Janotta_2014,Spekkens19}:
\begin{align}
\mathtt s &\rightarrow \Omega, \\
\mathtt e &\rightarrow \mathcal E.
\end{align}
The probability that an effect $E\in\mathcal E$ occurs upon measuring a state $\rho\in\Omega$ is given by the inner product $\scal{\rho}{E}{\mathcal V}$ by analogy with the usual Hilbert-Schmidt inner product probability rule of quantum mechanics.
The properties of $\Omega$ and $\mathcal E$ that are necessary for the results of this \doc{} are the following. $\Omega$ and $\mathcal E$ must be non-empty, bounded convex sets such that for all $s\in\Omega$ and $e\in\mathcal E$: $\scal{s}{e}{\mathcal V} \geq 0$. The zero vector $0\in\mathcal V$ must belong to $\mathcal E$. There must exist $u\in\mathcal E$ such that for all $s\in\Omega$: $\scal{s}{u}{\mathcal V} = 1$ --- this $u$ replaces $\id\mathcal H$:
\begin{equation}
\id\mathcal H \rightarrow u.
\end{equation}
We also require that for all $e\in\mathcal E$, there exists a completion $\{e_k\in\mathcal E\}_k$ such that $e + \sum_k e_k = u$.
\end{subequations}
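These axioms can be checked numerically for the simplest example, a classical-bit theory. This is an illustrative construction, not taken from the text: $\mathcal V=\mathbb R^2$ with the standard inner product, states $(p,1-p)$, effects with entries in $[0,1]$, and unit effect $u=(1,1)$.

```python
import numpy as np

# Classical-bit GPT: V = R^2, states are probability vectors (p, 1 - p),
# effects are vectors e with 0 <= e_i <= 1, unit effect u = (1, 1).
u = np.array([1.0, 1.0])

rng = np.random.default_rng(2)
states = [np.array([p, 1.0 - p]) for p in rng.uniform(size=50)]
effects = [rng.uniform(size=2) for _ in range(50)] + [np.zeros(2), u]

# Axioms: probabilities <s, e> are nonnegative, u is normalizing, and every
# effect has a completion e + (u - e) = u with u - e again a valid effect.
probs_ok = all(float(s @ e) >= 0.0 for s in states for e in effects)
unit_ok = all(abs(float(s @ u) - 1.0) < 1e-12 for s in states)
completion_ok = all(np.all(u - e >= 0.0) and np.all(u - e <= 1.0)
                    for e in effects)
```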
All the results of this \doc{} can then easily be rederived in this generalized setting: once the prepare-and-measure scenario is defined, the derivations only rely on the axiomatic properties of the state and effect sets together with the basic real, finite-dimensional inner product space structure which are assumed both in the quantum setting and this generalized setting.
As an illustration, the reduced space of \cref{def:mainr} obtained under the substitution \eqref{eq:GPTsubst} is
\begin{equation}
\mathcal R_{G} := \proj{\myspan{\mathcal E}}{\myspan{\Omega}} \subseteq \mathcal V.
\end{equation}
\subsection{Connections with simplex-embeddability}
\label{sec:SimplexEmbeddability}
Recently, a similar approach to the contextuality of arbitrary prepare-and-measure scenarios was presented in \cite{Spekkens19}. In this section, we will relate the present results to their work. First, we give an explicit name to the category of generalized probabilistic theories considered in their work:
\begin{definition}[Tomographically complete generalized probabilistic theories, after \cite{Spekkens19}]
\label{def:TomographicallCompleteGPT}
A generalized probabilistic theory $(\mathcal V,\Omega,\mathcal E)$ is \emph{tomographically complete} if and only if the sets $\Omega$ and $\mathcal E$ are closed and
\begin{subequations}
\begin{align}
\myspan{\Omega} = \mathcal V, \\
\myspan{\mathcal E} = \mathcal V.
\end{align}
\end{subequations}
To match the previous notation of this \doc, this pair $(\Omega,\mathcal E)$ is said to be a tomographically complete prepare-and-measure scenario.
\end{definition}
For tomographically complete generalized probabilistic theories, the reduced space under the substitution \eqref{eq:GPTsubst} becomes simply the vector space $\mathcal V$, since
\begin{equation}
\label{eq:TomographicallyCompleteReducedSpace}
\proj{\myspan{\mathcal E}}{\myspan{\Omega}} = \proj{\mathcal V}{\mathcal V} = \mathcal V.
\end{equation}
Definition 1 in \cite{Spekkens19} (reproduced in \text{appendix}~\cref{def:Simplex}) of simplex-embeddability applies only to tomographically complete generalized probabilistic theories:
indeed, if the generalized probabilistic theory was not tomographically complete, the Spekkens' non-contextual model considered in \cite{Spekkens19} for such a theory would have to be formulated in a distinct fashion.
In this \doc, such a generalized Spekkens' non-contextual model has been formulated in \cref{def:classicalmodel}. It turns out, however, that for tomographically complete generalized probabilistic theories,
the classical model of this \doc{} and the classical model of \cite{Spekkens19} are actually equivalent, as proven in \text{appendix}~\ref{app:connection}.
\begin{restatable}{prop}{PropSimplexImpliesClassical}
\label{prop:SimplexImpliesClassical}
Any tomographically complete generalized probabilistic theory $(\mathcal V,\Omega,\mathcal E)$ is simplex-embeddable in $d$ dimensions in the sense of definition 1 of \cite{Spekkens19}, if and only if the tomographically complete prepare-and-measure scenario $(\Omega,\mathcal E)$ admits a classical model in the sense of \cref{def:classicalmodel} (under the substitution (\ref{eq:GPTsubst})) with a discrete ontic space of finite cardinality $d$.
\end{restatable}
Now consider an arbitrary tomographically complete generalized probabilistic theory denoted $G:=(\mathcal V,\Omega,\mathcal E)$.
Then, let $b(G)\in\mathbb N$ be such that if $G$ is simplex-embeddable, then it is also simplex-embeddable in at most $b(G)$ dimensions. It was asked in \cite{Spekkens19} whether there existed such a bound.
\Cref{prop:SimplexImpliesClassical} proves as a corollary the existence of this bound:
\begin{restatable}{corollary}{CorollaryDimSimplex}
\label{corollary:DimSimplex}
For any tomographically complete generalized probabilistic theory $G= (\mathcal V, \Omega, \mathcal E)$ for which there exists $d\in\mathbb N$
such that $G$ is simplex-embeddable in $d$ dimensions, it holds that $G$ is also simplex-embeddable in $d_{\textup{min}}\in\mathbb N$ dimensions with
\begin{equation}
\dim(\mathcal V) \leq d_\textup{min} \leq \dim(\mathcal V)^2,
\end{equation}
i.e.\ $b(G) = b(\mathcal V,\Omega,\mathcal E) = \dim(\mathcal V)^2$.
\end{restatable}
\begin{proof}
If $G=(\mathcal V,\Omega,\mathcal E)$ is simplex-embeddable, then by \cref{prop:SimplexImpliesClassical}, the prepare-and-measure scenario $(\Omega,\mathcal E)$ admits a finite, discrete classical model which is a special case of Riemann integrable classical models (\cref{def:RiemannIntegrableModel} under the substitution \eqref{eq:GPTsubst}).
By \cref{th:CardinalityBounds} under the substitution \eqref{eq:GPTsubst}, there also exists a classical model with a minimal ontic space cardinality $\card\Lambda =: d_{\text{min}}$ such that $\dim(\mathcal V)\leq d_{\text{min}} \leq \dim(\mathcal V)^2$ where we used \eqref{eq:TomographicallyCompleteReducedSpace} to substitute $\mathcal R$ in \cref{th:CardinalityBounds} with $\mathcal V$ rather than with $\proj{\myspan{\mathcal E}}{\myspan\Omega}$.
Again by \cref{prop:SimplexImpliesClassical}, this means that the generalized probabilistic theory $G$ is simplex-embeddable in $d_{\text{min}}$ dimensions.
\end{proof}
In \cite{Spekkens19}, by leveraging arguments of \cite{Spekkens18}, it was shown that such a bound $b(G)$ exists whenever the tomographically complete generalized probabilistic theory is such that $\mathcal E$ admits finitely many extremal points,
and the analysis of \cite{Spekkens18} also suggests that a similar bound holds if the set of states $\Omega$ has finitely many extremal points.
However, this bound, which is the number of extremal points of the polytope defined in the ``Characterization P1 of the noncontextual
measurement-assignment polytope'' of \cite{Spekkens18}, depends on the set $\mathcal E$ and does not have a clear behavior as the number of extremal points of $\mathcal E$ grows --- it could in principle diverge.
For fixed $\mathcal V$, however, the upper bound $\dim(\mathcal V)^2$ of \cref{corollary:DimSimplex} remains constant for an arbitrary choice of $(\Omega,\mathcal E)$, even one with infinitely many extremal points.
We now turn to applying the results of \cite{Spekkens19} to the original framework of this \doc{}.
In \cite{Spekkens19}, an argument is given about the need for so-called ``dimension mismatches".
This useful argument can be rephrased in our setup as a proof that the lower bound $\dim(\mathcal R)$ in \cref{th:CardinalityBounds} is not always tight, i.e.\ there exist $(\sets,\sete)${} that admit a Riemann integrable classical model with minimal ontic state space cardinality
\begin{equation}
d_{\min} = \dim(\mathcal R)+1.
\end{equation}
While not always tight, it is easy to see from the simplex-embeddability criterion of \cite{Spekkens19} that there exist $(\sets,\sete)${} such that the lower bound in \cref{th:CardinalityBounds} is saturated.
These considerations raise the open question of whether the upper bound $\dim(\mathcal R)^2$ in \cref{th:CardinalityBounds} is tight, i.e.\ whether there exist $(\sets,\sete)${} such that the minimal ontic space has cardinality $\dim(\mathcal R)^2$.
\section{Conclusion}
After introducing the prepare-and-measure scenario $(\sets,\sete)${} and the reduced space $\mathcal R$, a generalized Spekkens' non-contextual model was formulated as in \cref{prop:basiccriterion} on page~\pageref{prop:basiccriterion}. A new classicality criterion, unit separability, was extracted in \cref{th:maincriterion} on page~\pageref{th:maincriterion}. This theorem made it possible to extract properties of the size of the ontic space $\Lambda$, most importantly the new bound $\card{\Lambda}\leq\dim(\mathcal R)^2$ in \cref{th:CardinalityBounds} on page~\pageref{th:CardinalityBounds}.
The algorithmic formulation of the criterion was discussed in section~\ref{sec:algo}, allowing one to evaluate numerically the (non-)classicality of a given scenario. Connections with generalized probabilistic theories were given in section~\ref{sec:Connection}, most importantly the ontic space cardinality bounds, which translate into dimension bounds for simplex-embeddability as in \cref{corollary:DimSimplex} on page~\pageref{corollary:DimSimplex}.
Future directions of research include most importantly the application of the classicality criterion to modern protocols in quantum information theory.
Such applications will hopefully uncover links between this notion of non-classicality and the efficiency of quantum protocols.
\tocless\acknowledgments
We would like to thank Martin Pl\'avala for
fruitful discussions. M.W. acknowledges support from the Swiss National Science Foundation (SNSF) via an AMBIZIONE Fellowship (PZ00P2\_179914) in addition to the National Centre of Competence in Research QSIT.
\section{introduction}
Spintronics \cite{zutic2004,wolf2001}, a new sub-disciplinary field
of condensed matter physics, has been regarded as a promising route
toward a new generation of electronic devices. The advantages of
spintronic devices include reduced power consumption and the ability to
overcome the velocity limit of the electric charge \cite{zutic2004}. The two degrees
of freedom of the spin make it possible to transmit more information in quantum
computation and quantum information. In the past decade, many interesting
phenomena have emerged, moving the study of spintronics forward. The spin
Hall effect allows efficient spin injection without the need for metallic ferromagnets \cite{sinova2004}, and can generate a substantial
amount of dissipationless quantum spin current in a semiconductor
\cite{murakami2003}. All these provide the foundation for designing
spintronic devices, such as the spin transistors \cite{datta1990} predicted several years ago. Experimental progress
has also been made in recent years \cite{kato2004,matsuzaka2009}.
Since Rashba pointed out problems inherent in the theory of transport spin
currents driven by external fields and gave his definition of the
spin current tensor $J_{ij}$ \cite{rashba2003}, several
works have addressed how to define the spin current in different cases. Sun et
al. suggested that there was no need to modify the traditional definition
of the spin current, but that an additional term describing the spin
rotation should be included in the previously accepted definition
\cite{sun2005,sun2008}. A modified definition given by Shi
\cite{shi2006} solved the conservation problem of the traditional
spin current in the system Hamiltonian. His definition ensures that an
equilibrium thermodynamic theory can be built on spintronics, in accordance
with other traditional transport theories, for instance the Onsager
relation.
The spin Hall effect, a vital phenomenon induced by spin-orbit coupling,
has been extensively studied for years, although the microscopic origins
of the effect are still under debate. Hirsch et al. \cite{hirsch1999}
argued that anisotropic scattering by impurities leads to the
spin Hall effect, while an intrinsic cause of the spin Hall effect was
proposed by Sinova et al. \cite{sinova2004}. Both theoretical and
experimental work reported recently demonstrated the achievement of
spin polarization in semiconductors \cite{ohno1999,ohno_electrical_1999,valenzuela_direct_2006}.
In this letter, the spin current $J_{s}$, the orbital angular momentum
(OAM) current $J_{L}$ and the total angular momentum (TAM) current $J_{J}$,
as well as the corresponding continuity equations, are derived.
In our dyad-form expressions, the velocity operator
$\alpha$ and the spin operator $\Sigma$ clearly display the physical
meaning of the spin current. In addition, the non-relativistic approximation (NRA) expressions are derived, and they predict quantum effects that cannot be deduced from previous works. Their vital
role in the finite size effect of the spin current will be shown
and calculated in the Hg/CdTe system. We recommend using the effective TAM $J$ and its current $J_{J}$ to replace the traditional spin $S$ and spin current in spintronics.
\section{the angular momentum in dyad form}
According to quantum electrodynamics, the Lagrangian
\begin{eqnarray}
\mathscr{L}_{QED} & = & \mathscr{L}_{Dirac}+\mathscr{L}_{Maxwell}+\mathscr{L}_{int}\\
& = & \bar{\psi}\left(ic\gamma^{\mu}\partial_{\mu}-mc^{2}\right)\psi-\frac{1}{4}F_{\mu\nu}^{2}-\bar{\psi}c\gamma^{\mu}\psi A_{\mu}\\
& = & \bar{\psi}\left(ic\gamma^{\mu}D_{\mu}-mc^{2}\right)\psi-\frac{1}{4}F_{\mu\nu}^{2}
\end{eqnarray}
can be separated into two terms:
\begin{equation}
\mathscr{L}_{e}=\bar{\psi}\left(ic\gamma^{\mu}D_{\mu}-mc^{2}\right)\psi,
\end{equation}
\[
\mathscr{L}_{\gamma}=-\frac{1}{4}F_{\mu\nu}^{2},
\]
and the corresponding Hamiltonian of $\mathscr{L}_{e}$ is the well-known
\begin{equation}
\hat{H}=c\left(\vec{\alpha}\cdot\vec{\Pi}\right)+\beta mc^{2}+V\label{eq:H}
\end{equation}
According to Noether's theorem, one can derive the following
equation
\begin{equation}
\partial_{\mu}\left(J_{J}\right)_{\mu}=0\label{eq:JJ}.
\end{equation}
where the corresponding Noether current is
\[
J_{J}=J_{s}+J_{L}.
\]
Here, the spin current density $J_{s}$ is expressed as
\begin{equation}
\left(J_{S}\right)_{\alpha\beta}^{\mu}=\frac{1}{4}\bar{\psi}\left(\gamma_{\mu}\sigma_{\alpha\beta}+\sigma_{\alpha\beta}\gamma_{\mu}\right)\psi\label{eq:Js},
\end{equation}
and the OAM current $J_{L}$
\[
\left(J_{L}\right)_{\alpha\beta}^{\mu}=x_{\alpha}T_{\beta\mu}-x_{\beta}T_{\alpha\mu}
\]
with $T_{\mu\nu}=\frac{1}{2}\bar{\psi}\gamma_{\mu}D_{\nu}\psi$.
Here $\gamma_{\mu}$ denotes the Dirac matrices, and $\sigma_{\alpha\beta}=\frac{i}{2}\left[\gamma_{\alpha},\gamma_{\beta}\right]$.
The expressions may differ for other representations of
the Dirac matrices; the derivation details of Eq.\eqref{eq:Js} are shown in the Appendix.
The Lorentz invariance of the Lagrangian ensures the conservation
of TAM current $J_{J}$ of electrons. Eq.\eqref{eq:JJ} shows that
the spin current alone is not conserved, unless the orbital angular momentum is fixed.
\subsection{The dyad form in 3D space}
It is necessary to bridge the definition of the spin current $J_{S}$
with the traditional descriptions in spintronics.
Using the operator $\hat{\Sigma}_{i}=\left[\begin{array}{cc}
\sigma_{i} & 0\\
0 & \sigma_{i}
\end{array}\right]$ and $\hat{\alpha}_{i}=\left[\begin{array}{cc}
0 & \sigma_{i}\\
\sigma_{i} & 0
\end{array}\right]$, Eq.~\eqref{eq:Js} becomes (see the Appendix)
\begin{equation}
\left(\hat{J}_{S}\right)_{\mu\nu}=\frac{i}{2}\psi^{\dagger}\left(\hat{\alpha}_{\mu}\hat{\Sigma}_{\nu}\right)\psi,
\end{equation}
thus the spin current operator is
\begin{equation}
\tensor {J_{S}}=\frac{i}{2}\left(\hat{\alpha}\hat{\Sigma}\right)\label{eq:JsD}
\end{equation}
where $\hat{\alpha}$ and $\hat{\Sigma}$ are the velocity operator
and the spin operator of the Dirac equation, respectively.
In the traditional definition \cite{sun2005}, the spin current density
operator is $\frac{1}{2}\left(\hat{v}\hat{s}+\hat{s}\hat{v}\right)$
(here $\hat{v}=\frac{\hat{p}}{m}$ or $\frac{\hat{\Pi}}{m}$), describing
carriers with spin $\hat{s}$ flowing at velocity $\hat{v}$.
However, spin is an intrinsic quantum property, and this traditional
definition, based on an analogy with the classical current,
cannot accurately describe the spin current.
First, in relativistic quantum mechanics the physical meaning of the
velocity operator $\hat{\alpha}$ is well established. Moreover,
the electric current and the spin current are related at order
$\frac{1}{c}$ (as shown in the next section). The spin-orbit coupling
therefore requires replacing the momentum operator $\hat{p}$ (or $\hat{\Pi}$)
with the operator $\hat{\alpha}$.
Second, because of the commutation relation between $\hat{\alpha}$ and $\hat{\Sigma}$,
\[
\left[\hat{\alpha},\hat{\Sigma}\right]=i\hat{\alpha},
\]
the quantum effects of the definition are lost in the classical
analogy, especially when dealing with finite-size effects of the
spin current and when describing experimental results in spintronics.
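The commutator quoted above is schematic. Written out componentwise in the Dirac representation it reads $[\hat{\alpha}_{i},\hat{\Sigma}_{j}]=2i\epsilon_{ijk}\hat{\alpha}_{k}$ (the compact notation absorbs the numerical factor and the Levi-Civita structure), which can be checked numerically:

```python
import numpy as np
from itertools import product

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
Z2 = np.zeros((2, 2))

alpha = [np.block([[Z2, si], [si, Z2]]) for si in s]   # velocity operator
Sigma = [np.block([[si, Z2], [Z2, si]]) for si in s]   # spin operator

def eps(a, b, k):
    """Levi-Civita symbol for indices 0, 1, 2."""
    return (a - b) * (b - k) * (k - a) / 2

# componentwise commutator: [alpha_a, Sigma_b] = 2i eps_{abk} alpha_k
for a, b in product(range(3), repeat=2):
    comm = alpha[a] @ Sigma[b] - Sigma[b] @ alpha[a]
    expect = sum(2j * eps(a, b, k) * alpha[k] for k in range(3))
    assert np.allclose(comm, expect)
```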
The expressions for the OAM current $J_{L}$
and the TAM current $J_{J}$ are derived in the same way as the spin current $J_{S}$:
\begin{equation}
\left(J_{L}\right)_{\mu\nu}=\alpha_{\mu}L_{\nu}\label{eq:LsD}
\end{equation}
\begin{equation}
\tensor {J_{J}}=\tensor{J_{s}}+\tensor{J_{L}}\label{eq:JjD}
\end{equation}
where the OAM operator is
$L_{\gamma}=\epsilon_{\gamma}^{\alpha\beta}x_{\alpha}\Pi_{\beta}$.
\subsection{Angular momentum current of photons}
Generating and manipulating the polarization of electrons (or other carriers)
is vital for spintronics. The main method is to let the electron
absorb or emit photons in order to change its spin state.
The Lagrangian for a Maxwell field is
\[
\mathscr{L}_{\gamma}=-\frac{1}{4}F_{\mu\nu}^{2}.
\]
The corresponding terms to describe the photon's spin current, the
OAM current and the TAM current are
\begin{equation}
\tensor {J_{s}^{p}}=\left[\vec{\nabla}\vec{A}\right]\times\vec{A},
\end{equation}
\begin{equation}
\tensor {J_{L}^{p}}=\vec{r}\times\tensor T,
\end{equation}
\begin{equation}
\tensor {J_{J}^{p}}=\tensor {J_{s}^{p}}+\tensor {J_{L}^{p}}
\end{equation}
respectively. Here $T_{ij}=\frac{1}{2}\delta_{ij}\left(E_{k}E_{k}+H_{k}H_{k}\right)-E_{i}E_{j}-H_{i}H_{j}$.
Obviously, only the TAM current $J_{J}+J_{J}^{p}$ satisfies the continuity
equation
\[
\frac{\partial}{\partial t}\left(\vec{J}+\vec{J^{p}}\right)+\nabla\cdot\left(\tensor {J_{J}}+\tensor {J_{J}^{p}}\right)=0
\]
By choosing the TAM current $J_{J}$ (without the photon field) or
$J_{J}+J_{J}^{p}$ (in the general case), one can keep the traditional
theory unchanged, including the Onsager relations and the conservation laws, which are built on equilibrium-state theory.
\section{the non-relativistic approximation (NRA) expression}
In order to discuss the physical meaning of the current expressions more
easily, it is necessary to have a non-relativistic form of
the spin current. After some tedious simplifications (shown in the Appendix),
we obtain the non-relativistic expressions of the spin current, the OAM current
and the TAM current:
\begin{equation}
\tensor {J_{s}}=\left(\vec{\Pi}\vec{\sigma}+\vec{\sigma}\vec{\Pi}+i\left(\vec{\sigma}\times\vec{\Pi}\right)\vec{\sigma}+i\vec{\sigma}\left(\vec{\Pi}\times\vec{\sigma}\right)\right)\label{eq:JDNRA}
\end{equation}
\begin{equation}
\tensor {J_{L}}=\left(\vec{\Pi}\vec{L}+\vec{L}\vec{\Pi}+i\left(\vec{\sigma}\times\vec{\Pi}\right)\vec{L}+i\vec{L}\left(\vec{\Pi}\times\vec{\sigma}\right)\right)
\end{equation}
\begin{equation}
\tensor {J_{J}}=\left(\vec{\Pi}\vec{J}+\vec{J}\vec{\Pi}+i\left(\vec{\sigma}\times\vec{\Pi}\right)\vec{J}+i\vec{J}\left(\vec{\Pi}\times\vec{\sigma}\right)\right)
\end{equation}
where the two important relations
\[
\chi=\frac{\left(\vec{\sigma}\cdot\vec{\Pi}\right)}{2mc}\phi=\left(1-\frac{\Pi^{2}}{8m^{2}c^{2}}\right)\frac{\left(\vec{\sigma}\cdot\vec{\Pi}\right)}{2mc}\psi
\]
\[
\left(\vec{\sigma}\cdot\vec{A}\right)\left(\vec{\sigma}\cdot\vec{B}\right)=\vec{A}\cdot\vec{B}+i\vec{\sigma}\cdot\left(\vec{A}\times\vec{B}\right)
\]
are used. The result is completely equivalent to Eqs.~\eqref{eq:JsD}, \eqref{eq:LsD}
and \eqref{eq:JjD} up to order $\frac{1}{c}$. Notably,
not only the traditional term of the spin current but also the extra term
\begin{equation}
i\left(\vec{\sigma}\times\vec{\Pi}\right)\vec{\sigma}+i\vec{\sigma}\left(\vec{\Pi}\times\vec{\sigma}\right)\label{eq:ET}
\end{equation}
contributes to the spin current at the same order.
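The second relation used above is the standard Pauli identity; it holds in this form when $\vec{A}$ and $\vec{B}$ commute with $\vec{\sigma}$ (for operator-valued vectors such as $\vec{\Pi}$ the ordering must be kept). A quick numerical check with numerical vectors (our addition):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def sdot(v):
    """sigma . v for a numeric 3-vector v (returns a 2x2 matrix)."""
    return sum(vi * si for vi, si in zip(v, s))

rng = np.random.default_rng(0)
A, B = rng.normal(size=3), rng.normal(size=3)

# (sigma.A)(sigma.B) = A.B * Identity + i sigma.(A x B)
lhs = sdot(A) @ sdot(B)
rhs = np.dot(A, B) * np.eye(2) + 1j * sdot(np.cross(A, B))
assert np.allclose(lhs, rhs)
```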
In quantum physics there are quantum effects that cannot be captured
by analogy with classical theory. The term \eqref{eq:ET} can only be
described loosely as a kind of quantum
rotation. In Sun's work \cite{sun2005}, an extra term $\omega_{s}$
is used to describe the spin rotation, because a complete description
of a vector current should include both translational and rotational motion,
as classical theory suggests. The term \eqref{eq:ET}, which is
derived exactly here, establishes two important conclusions:
first, the traditional definition of the spin current does not
conserve the spin, as is widely accepted; second,
the term \eqref{eq:ET} is the origin of the so-called quantum rotation,
and its contribution is the exact source of the analogous term in Sun's
paper \cite{sun2005}.
More importantly, because the term \eqref{eq:ET} carries an imaginary
coefficient, it represents a quantum effect with no classical analogue:
it not only contributes to the magnitude
of the spin current at the same order as the traditional
definition, but also predicts important effects such as the spin
Hall effect.
The diagonal matrix element of $J_{s}$ is
\[
J_{ii}^{s}=\frac{1}{4mc}\left(\hbar\left(\phi\sigma\cdot\nabla\phi^{\dagger}-\phi^{\dagger}\sigma\cdot\nabla\phi\right)-\frac{2e}{c}\phi^{\dagger}\sigma\cdot A\phi\right),
\]
which is similar to the traditional definition of the spin current
$\frac{1}{2}\left(vs+sv\right)$. The off-diagonal matrix elements
of the spin current are determined by
\[
J_{ij}=\frac{1}{2mc}\epsilon_{ijk}j_{k}
\]
where
\[
j=\left(\frac{\hbar}{2m}\left(\phi\left(\nabla\right)\phi^{\dagger}-\phi^{\dagger}\left(\nabla\right)\phi\right)-\frac{e}{2m}\phi^{\dagger}\vec{A}\phi+\frac{\hbar}{2m}\nabla\times\left(\phi^{\dagger}\sigma\phi\right)\right)
\]
is the exact matrix element of the current density operator in quantum
electrodynamics.
Let us consider the example of a 2D HgTe/CdTe quantum well. We adopt the Kane model for carriers confined in a HgTe/CdTe semiconductor heterojunction, with parameters taken from Ref.~\cite{zhou2008}.
\begin{figure}
\includegraphics[width=0.8\columnwidth]{Fig1.eps}
\caption{\label{Fig1}The spin current of $\Psi_{\uparrow+}\left(k_{x},y\right)$ at $k=0.01\,\mathrm{nm}^{-1}$ and of $\Psi_{\downarrow-}\left(-k_{x},y\right)$
at $k=-0.01\,\mathrm{nm}^{-1}$: the spin-up component (solid/green), the spin-down component, and their sum.}
\end{figure}
Fig.~\ref{Fig1} shows the spin current according to our definition. The
wave functions $\Psi(k_{x},y)$ are the edge states for $L=200\,\mathrm{nm}$.
The current exists not only in the bulk but also on both edges (depending on the spatial distribution parameters of the wave functions, $\lambda_{1}$ and $\lambda_{2}$, and the kinetic momentum $k$ of Ref.~\cite{zhou2008}), whereas no spin current exists according to the traditional definition
\[
\hat{J}{}_{yz}=\frac{1}{2}\left(\hat{v}_{y}\hat{s}_{z}+\hat{s}_{z}\hat{v}_{y}\right)
\]
with $\hat{v}_{y}=-\frac{i\hbar}{m_{e}}\partial_{y}$.
\begin{figure}
\includegraphics[width=0.8\columnwidth]{Fig2}
\caption{\label{Fig2}The spin current at $k=0\,\mathrm{nm}^{-1}$: the spin-up component, the spin-down component, and their sum.}
\end{figure}
When $k=0$, the spin current still exists at the surface, as shown
in Fig.~\ref{Fig2}. This distinctive feature, absent in the conventional
electric current, has been predicted in previous papers \cite{zutic2004,murakami2003,sinova2004}.
It should be pointed out that, because of the term
\eqref{eq:ET}, the surface effect of the spin current is strongly
enhanced: the quantum rotation is much stronger at the edge, and thus it contributes much more than the traditional definition of the spin current.
\section{the conservation and the continuity equations}
As pointed out above, the conservation of the spin current is a controversial
issue; different conclusions have been drawn under different
circumstances.
In non-relativistic quantum mechanics, the spin is a conserved quantity
when the OAM is frozen. The continuity equation is
\begin{equation}
\frac{\partial}{\partial t}\vec{S}+\nabla\cdot\tensor{J_{S}}=0\label{eq:JsCE}
\end{equation}
In his work \cite{sun2005}, Sun treated the spin as a classical vector
with two kinds of motion: the translational motion is described
by the traditional definition of the spin current, while the rotational motion is described by the angular-velocity term $\vec{j}_{\omega}\left(r,t\right)$.
Note that our extra term in Eq.\eqref{eq:ET} is the quantum origin
of the rotation operator $\vec{j}_{\omega}\left(r,t\right)$.
When the OAM is not frozen (the case in most spintronic systems), the continuity equation \eqref{eq:JsCE} becomes
\[
\frac{\partial}{\partial t}\vec{J}+\nabla\cdot\tensor{J_{J}}=0.
\]
Because of the spin-orbit coupling, the spin is no longer a good
quantum number. Since the TAM $J$ is a good quantum
number, one should instead choose the TAM $\hat{J}$ and its corresponding
current $J_{J}$ to describe the transport phenomena.
Quantum electrodynamics shows that the electron's
TAM is not conserved in an external field. The Lorentz invariance
of the system's Lagrangian yields the continuity equation
\begin{equation}
\frac{\partial}{\partial t}\left(\vec{J}+\vec{J^{p}}\right)+\nabla\cdot\left(\tensor {J_{J}}+\tensor {J_{J}^{p}}\right)=0\label{eq:TC}
\end{equation}
This equation shows that the TAM of the whole system (electrons and
photons) is conserved; this is the physical meaning of Eq.~\eqref{eq:TC}.
Eq.\eqref{eq:TC} can be written in another form
\[
\frac{\partial}{\partial t}\vec{J}+\nabla\cdot\tensor {J_{J}}=-\left(\frac{\partial}{\partial t}\vec{J^{p}}+\nabla\cdot\tensor {J_{J}^{p}}\right)
\]
The existence of $\mathscr{L}_{int}$ enables the electrons and the
photons to exchange angular momentum according to specific selection rules. This
is exactly the theoretical basis of the experiments: by absorbing
or emitting photons, the electron's TAM can be changed, and the change
of the photons' TAM state explains the corresponding polarization phenomena
in spintronics.
Since the spin current by itself is not conserved, its rate equation can be derived using the Heisenberg equation of motion.
\begin{eqnarray*}
\frac{\partial}{\partial t}\tensor {J_{S}} & = & -\frac{i}{\hbar}\left[\tensor {J_{S}},H\right]\\
& = & -\frac{i}{\hbar}\left(c\left(\vec{\alpha}\vec{\alpha}\times\vec{\Pi}+\vec{\Sigma}\times\vec{\Pi}\vec{\Sigma}\right)-i\beta mc^{2}\vec{\alpha}\vec{\Sigma}\right)
\end{eqnarray*}
This shows that not only the traditional term $\vec{\alpha}\cdot\vec{\Pi}$
but also the mass term $\beta mc^{2}$ has a nonvanishing commutator with $\tensor{J_{S}}$. A simple analogy with classical theory cannot describe the change of the spin current (or the TAM current).
\section{the TAM in semiconductors}
In the previous sections, the differences between the spin current
and the TAM current have been discussed. Besides its accuracy and quantum
character, the TAM current $J_{J}$ has other advantages for describing the polarization distribution in spintronics.
For III-V semiconductors, the quantum numbers of the wave functions
are $\left(j,j_{z}\right)$, not $\left(s,m\right)$, because
the spin-orbit coupling plays an important role in determining the energy-band
structure. Compared with the spin current, the TAM current is therefore more accurate and meaningful.
The physical quantities describing the transport phenomena are only indirectly
spin-dependent; they are functions of the TAM quantum numbers $J$
and $J_{z}$.
As usual, the media of spintronic devices are mainly magnetic or
dilute magnetic semiconductors, which have strong dielectric and
magnetic polarization. According to the electrodynamics
of media, and especially considering the energy-band structure, the
different Land\'e $g$ factors of the angular momentum and of the spin affect the
spin-orbit coupling to some extent, so simply calculating the spin current with the traditional definition cannot adequately describe and explain the experimental
results.
\section{the TAM in media}
\subsection{The general spin-orbit coupling}
The NRA of the Dirac Hamiltonian Eq.~\eqref{eq:H} can be written as
\begin{equation}
H=H_{1}+H_{2}\label{eq:Hlong}
\end{equation}
where
\begin{equation}
H_{1}=\frac{\left(\vec{P}-\frac{e}{c}\vec{A}\right)^{2}}{2m}-\frac{\vec{P}^{4}}{8m^{3}c^{2}}+eA_{0}-\frac{e}{2mc}\vec{\sigma}\cdot\left(\nabla\times\vec{A}\right)+\frac{e}{8m^{2}c^{2}}\Delta A_{0}
\end{equation}
and
\begin{equation}
H_{2}=-\frac{e}{4m^{2}c^{2}}\vec{\sigma}\left(\vec{E}\times\left(\vec{p}-\frac{e}{c}\vec{A}\right)\right)
\end{equation}
The term $H_{2}$ is called the spin-orbit coupling, which is
one of the foundations of spintronics.
To study the carriers' transport properties, the electromagnetic response of the medium should be taken into account.
When the medium moves with a relative velocity with respect to the
carriers, the electromagnetic field in the polarized medium interacting
with the carriers satisfies
\begin{equation}
\vec{D}=\epsilon\vec{E}+\frac{\epsilon\mu-1}{c}\vec{v}\times\vec{H}
\end{equation}
\begin{equation}
\vec{B}=\mu\vec{H}+\frac{\epsilon\mu-1}{c}\vec{E}\times\vec{v}
\end{equation}
where $\vec{v}$ is the relative velocity of the medium in the field.
Substituting these relations into Eq.~\eqref{eq:Hlong} and using
the relation
\[
\vec{A}=\frac{1}{2}\vec{B}\times\vec{r},
\]
the Hamiltonian (up to $o\left(\frac{1}{m^{2}}\right)$) becomes
\begin{equation}
H=H'_{1}+H'_{2}\label{eq:HM}
\end{equation}
\begin{equation}
H'_{1}=\frac{\hat{P}^{2}}{2m}-\frac{e\mu}{2mc}\left(\hat{\vec{L}}+\vec{\sigma}\right)\cdot\vec{H}+\frac{e^{2}}{2mc^{2}}\vec{A}^{2}-\frac{\hat{\vec{P}}^{4}}{8m^{3}c^{2}}+eA_{0}+\frac{e}{8m^{2}c^{2}}\Delta A_{0}
\end{equation}
\begin{equation}
H'_{2}=-\frac{e}{4m^{2}c^{2}}\left(\left(2\epsilon\mu-1\right)\vec{\sigma}+2\left(\epsilon\mu-1\right)\vec{L}\right)\cdot\left(\vec{E}\times\vec{\Pi}\right)\label{eq:JEC}
\end{equation}
The spin-orbit coupling $H_{2}$ is replaced by the larger term $H'_{2}$.
According to quantum electrodynamics, the spin-orbit coupling
is induced by the electric field, seen by the electron moving with
kinetic momentum $\vec{\Pi}$, acting on the electron's spin:
\begin{equation}
\frac{e}{4m^{2}c^{2}}\vec{\sigma}\cdot\left(\vec{E}\times\vec{\Pi}\right)=\frac{e}{4m^{2}c^{2}}\frac{1}{r}\frac{\partial V}{\partial r}\left(\vec{\sigma}\cdot\vec{L}\right)\label{eq:SO}
\end{equation}
When one considers the external fields in a solid-state medium,
including the electromagnetic polarization experienced by the moving carriers, one should
include the OAM in the calculation. This means that not only the spin, but
also the OAM, is coupled to the electric field. When $\epsilon\mu=1$,
the coupling term \eqref{eq:JEC} reduces to Eq.~\eqref{eq:SO},
the traditional spin-orbit coupling. When $\epsilon\mu\gg1$, however, the orbital angular momentum affects the coupling term almost as strongly
as the spin does. Thus the OAM becomes crucial for describing the polarization
of the system.
According to the theory of the spin Hall effect, carriers carrying
different spins flow in opposite directions. In our case, the carriers with different angular
momenta $\left(j,j_{z}\right)$ flow in different directions;
the only difference is that the OAM is included in our model.
It should be noted that the condition $\epsilon\mu\gg1$ usually
holds in most semiconductors, such as the III-V compound semiconductors GaAs, GaN, etc. Thus,
\begin{eqnarray}
\frac{e}{4m^{2}c^{2}}\left(\left(2\epsilon\mu-1\right)\vec{\sigma}+2\left(\epsilon\mu-1\right)\vec{L}\right)\cdot\left(\vec{E}\times\vec{\Pi}\right) & \doteq & 2\epsilon\mu\left(g_{s}\vec{s}+g_{l}\vec{L}\right)\cdot\left(\vec{E}\times\vec{\Pi}\right)\mu_{B}\nonumber \\
& = & 2\epsilon\mu\left(g_{j}\vec{J}\right)\cdot\left(\vec{E}\times\vec{\Pi}\right)\mu_{B}.\label{eq:GJEC}
\end{eqnarray}
According to the relation between the effective Land\'e $g$ factor and the effective
mass, $g$ in Eq.~\eqref{eq:GJEC} should be replaced by $g^{*}$
in semiconductors \cite{shen_l-valley_2008}. This implies that
$\vec{j}$ should replace the spin as the relevant physical quantity in more
general cases.
\section{the discussion on some experiments}
\subsection{Spin Hall effect}
Zhang proposed a semi-classical Boltzmann-like equation to describe
the distribution of the spins \cite{zhang2000}. A similar behaviour can also be deduced from our definition once the finite-size effects are taken into account.
In the system, the spin up current is
\[
\left\langle J_{s}\right\rangle _{xz}=\left\langle \Psi_{\uparrow\pm}\left|J_{s}\right|\Psi_{\uparrow\pm}\right\rangle =J_{t}+J_{e}
\]
Here $J_{t}$ and $J_{e}$ are the traditional spin-current term
and the extra term Eq.~\eqref{eq:ET}, respectively.
As shown in the Appendix, $J_{t}$ is proportional to $k_{x}$,
namely
\[
J_{t}^{\pm}=\mp C_{h}^{\pm}E_{x}.
\]
But $J_{e}$ is independent of $k_{x}$, depending only on the density
distribution of the electrons in the $y$ direction, namely
\[
J_{e}^{\pm}=C_{y}^{\pm}E_{y}^{\pm}\left(y\right).
\]
This result is similar to Eqs.~(12) and (13) in Zhang's
paper \cite{zhang2000}. The spin accumulates in the $y$ direction, exactly as concluded from his anomalous Hall field. However, whereas in his treatment the spin diffusion is determined by the parameters $\omega$ and $D$, in our expression it depends on the spatial distribution parameters $\lambda_{1},\lambda_{2}$ \cite{zhou2008} deduced from our definition of the spin current.
\subsection{The TAM Hall effect}
Now we discuss the spin Hall effect in a bulk GaAs system in which the spin-orbit
coupling affects the energy-band structure.
According to Eq.~\eqref{eq:JEC}, the Rashba term can be written in the form $c_{i}k_{i}j_{i}$; the states $\Psi_{\frac{3}{2},\frac{3}{2}}$, $\Psi_{\frac{3}{2},\frac{1}{2}}$ and $\Psi_{\frac{1}{2},\frac{1}{2}}$ (in the notation $\Psi_{j,j_{z}}$) accumulate at one edge, while $\Psi_{\frac{3}{2},-\frac{3}{2}}$,
$\Psi_{\frac{3}{2},-\frac{1}{2}}$ and $\Psi_{\frac{1}{2},-\frac{1}{2}}$ accumulate at the other edge; that is, the TAM $j$ accumulates at both edges. It is easy to find that at both edges,
\begin{equation}
\left\langle \Psi\left(r\right)\left|\hat{\vec{J}}\right|\Psi\left(r\right)\right\rangle =\sum_{j_{z}}\left\langle \frac{3}{2},j_{z}\left|\hat{\vec{j}}\right|\frac{3}{2},j_{z}\right\rangle \neq0.\label{eq:DIS}
\end{equation}
According to the theory of the Kerr rotation \cite{condon1977}, $\theta_{K}=\theta'_{K}+i\theta''_{K}$,
where
\[
\theta'_{K}=-\frac{\omega_{p}^{2}}{2n\left(\epsilon_{\perp}-1\right)}\Sigma_{ab}\frac{\beta_{a}}{\omega_{ab}}\frac{\Gamma_{ab}\left(\omega_{ab}^{2}+\omega^{2}+\Gamma\right)\left(f_{ab}^{+}-f_{ab}^{-}\right)}{\left(\omega_{ab}^{2}-\omega^{2}+\Gamma_{ab}^{2}\right)+4\omega^{2}\Gamma_{ab}^{2}}
\]
\[
\theta''_{K}=\frac{\omega_{p}^{2}\omega}{2n\left(\epsilon_{\perp}-1\right)}{\displaystyle \Sigma}_{ab}\frac{\beta_{a}}{\omega_{ab}}\frac{\left(\omega_{ab}^{2}-\omega^{2}+\Gamma_{ab}^{2}\right)\left(f_{ab}^{+}-f_{ab}^{-}\right)}{\left(\omega_{ab}^{2}-\omega^{2}+\Gamma_{ab}^{2}\right)+4\omega^{2}\Gamma_{ab}^{2}}
\]
where $\beta_{a}$ is the occupation probability of the carriers in
energy level $a$, $\epsilon_{\perp}$ is the dielectric constant,
$n$ is the refractive index, $\Gamma_{ab}$ is the line width, $\omega$
is the circular frequency of the incident light and $\hbar\omega_{ab}$ is the energy gap. Obviously, the Kerr rotation angle is proportional to $\left(f_{ab}^{+}-f_{ab}^{-}\right)$, whose expression
is
\begin{equation}
f_{ab}^{\pm}=\frac{m\omega_{ab}}{\hbar e^{2}}\left|P_{ab}^{\pm}\right|^{2}
\end{equation}
\begin{equation}
P_{ab}^{\pm}=e\left\langle \Psi_{a}\left|x\pm iy\right|\Psi_{b}\right\rangle,
\end{equation}
where $\Psi_{a}$ is the ground state and $\Psi_{b}$ is the excited state. When $\left(f_{ab}^{+}-f_{ab}^{-}\right)\neq0$, the Kerr rotation occurs.
As mentioned above, $\Psi_{\frac{3}{2},\frac{3}{2}}$, $\Psi_{\frac{3}{2},\frac{1}{2}}$
and $\Psi_{\frac{1}{2},\frac{1}{2}}$ accumulate on one side, while $\Psi_{\frac{3}{2},-\frac{3}{2}}$, $\Psi_{\frac{3}{2},-\frac{1}{2}}$
and $\Psi_{\frac{1}{2},-\frac{1}{2}}$ accumulate on the other side. Thus $P_{hh}/P_{lh}=16.4634$ at the $\Gamma$ point. As $k$ varies, the ratio is influenced by the
Fermi surface through the Rashba term. The accumulation of electrons in the heavy-hole bands gives rise to the Kerr rotation.
The TAM $j$ accumulation gives the same picture as the traditional spin Hall effect. Note that it is not actually the spin that accumulates, so the OAM plays an important role in the accumulation. Moreover, because the TAM $J$ offers more degrees of freedom, it can be used to transmit more information under the same conditions.
In summary, the spin-orbit coupling has been generalized so that the TAM $j$
couples to the electric field. The OAM can be treated on the same footing as the spin, especially in systems with a large $\epsilon\mu$. We recommend that the TAM $j$ current replace the spin current in describing the motion of carriers with
different angular momenta. The physical nature of the polarization accumulation and of the Kerr rotation has been explained using our theory.
\begin{acknowledgments}
This work was supported by the NSFC (Grants Nos. 11175135, 11074192 and J0830310), and the National 973 program (Grant No. 2007CB935304).
\end{acknowledgments}
{\bf Appendix}
\section{Main Theorems}
The common theme of the subject of irregularities of distribution is to
show that, no matter how $N$ points are selected, their distribution must be far from uniform.
In the present article, we are primarily interested in the precise
behavior of such estimates near the $ L ^{\infty }$ endpoint,
phrased in terms of exponential Orlicz classes. We restrict our
attention to the two-dimensional case.
Let $ \mathcal A_N \subset [0,1] ^{2}$ be a set of $N$ points in
the unit square. For $\vec x = (x_1, x_2)\in [0,1]^2$, we define
the \emph{Discrepancy function} associated to $ \mathcal A_N$ as
follows:
\begin{equation*}
D_N (\vec x) \coloneqq \sharp \bigl(\mathcal A_N \cap [0, \vec x)\bigr) - N \abs{ [0, \vec x)}\,,
\end{equation*}
where $ [0,\vec x)$ is the axis-parallel rectangle in the unit
square with one vertex at the origin and the other at $ \vec x =(x_1
,x_2)$, and $ \abs{ [0, \vec x)}=x_1 \cdot x_2$ denotes the Lebesgue
measure of the rectangle. This is the difference between the actual
number of points in the rectangle $ [0,\vec x)$ and the expected
number of points in this rectangle. The relative size of this
function, in various senses, must necessarily increase with $ N$.
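As a concrete illustration (our addition), the definition of $D_N$ can be evaluated directly; the following sketch uses a toy $4$-point set chosen purely for illustration.

```python
import numpy as np

def discrepancy(points, x):
    """D_N(x) = #(A_N  intersected with  [0, x)) - N * |[0, x)|,
    for points in the unit square [0, 1]^2."""
    pts = np.asarray(points, dtype=float)
    N = len(pts)
    count = np.sum((pts[:, 0] < x[0]) & (pts[:, 1] < x[1]))
    return count - N * x[0] * x[1]

# a tiny example: the 4-point lattice {0, 1/2} x {0, 1/2}
A = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]

assert discrepancy(A, (1.0, 1.0)) == 0        # 4 points - 4 * 1
assert discrepancy(A, (0.25, 0.25)) == 0.75   # 1 point  - 4 * (1/16)
```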
The principal result in this direction is due to Roth
\cite{MR0066435}:
\begin{roth} \label{t.roth} In all dimensions $ d\ge 2$, we have the
following estimate
\begin{equation}\label{e.roth}
\norm D_N. 2. \gtrsim (\log N) ^{(d-1)/2}
\end{equation}
where the implied constant is only a function of dimension $ d$.
\end{roth}
The same bound holds for the $ L ^{p}$ norm, for $ 1<p<\infty $,
\cite{MR0491574}, and is known to be sharp as to the order of
magnitude, see \cite{MR610701} and \cite{MR903025} for a history of
this subject (for the case $d=2$, see Corollary \ref{c.p} below).
The endpoint cases of $ p=1$ and $ p=\infty $ are much harder.
We concentrate on the case of $ p=\infty $ in this note, just in
dimension $ d=2$, and refer the reader to
\cites{MR1032337,0705.4619,math.CA/0609815,MR637361} for more
information about the case of $ d\ge 3$. For information about the
case of $ p=1 $, see \cites{MR637361,math.NT/0609817}. As it has
been shown in the fundamental theorem of W. Schmidt
\cite{MR0319933}, in dimension $d=2$, the lower bound on the
$L^\infty$ norm of the Discrepancy function is substantially greater
than the $L^p$ estimate \eqref{e.roth}:
\begin{schmidt}\label{t.schmidt}
For any set $ \mathcal A_N\subset [0,1]^2$ we have
\begin{equation}\label{e.schmidt}
\norm D_N .\infty . \gtrsim \log N \,.
\end{equation}
\end{schmidt}
This theorem is also sharp: one particular example is the famous van
der Corput set \cite{Cor35} -- a detailed discussion is contained
in \S3. In this paper, we give an interpolant between the results of
Roth and Schmidt, which is measured in the scale of exponential
Orlicz classes.
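For readers who wish to experiment, the classical (unscrambled) van der Corput set pairs $k/2^{n}$ with the binary digit reversal of $k$; the digit-scrambled variant of Definition~\ref{d.vdc} modifies the digits and is not reproduced in this sketch (our addition):

```python
import numpy as np

def van_der_corput(n):
    """2D van der Corput set with N = 2^n points:
    the point (k / 2^n, reverse-binary(k) / 2^n) for k = 0, ..., 2^n - 1."""
    N = 1 << n
    pts = []
    for k in range(N):
        rev = int(format(k, f'0{n}b')[::-1], 2)   # reverse the n binary digits
        pts.append((k / N, rev / N))
    return np.array(pts)

P = van_der_corput(4)                             # N = 16 points
assert len(set(map(tuple, P))) == 16              # all points are distinct
# digit reversal is a bijection, so the y-coordinates form the same grid
assert np.allclose(np.sort(P[:, 1]), np.arange(16) / 16)
```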
\begin{theorem}\label{t.lower}
For any $N$-point set $ \mathcal A_N \subset [0,1]^2$ we have
\begin{equation*}
\norm D_N .\operatorname {exp} (L ^{\alpha }) . \gtrsim (\log N ) ^{1-1/\alpha }\,,
\qquad 2\le \alpha < \infty \,.
\end{equation*}
\end{theorem}
Of course the lower bound of $ (\log N) ^{1/2}$, the case of $
\alpha =2$ above, is a consequence of Roth's bound. The other
estimates require proof, which is a variant of Hal{\'a}sz's
argument \cite{MR637361}. We give details below and also remark
that this estimate in the context of the Small Ball Inequality
\cites{MR95k:60049,MR96c:41052} is known \cite {2000b:60195}. In
addition, we demonstrate that the previous theorem is sharp.
\begin{theorem}\label{t.vdc} For all $ N$, there is a choice
of $ \mathcal A_N$, specifically the digit-scrambled van der Corput
set (see Definition~\ref{d.vdc}), for which we have
\begin{equation}\label{e.vdc}
\norm D_N. \operatorname {exp} (L ^{\alpha }). \lesssim (\log N) ^{
1-1/ \alpha }\,, \qquad 2\le \alpha <\infty \,.
\end{equation}
\end{theorem}
In view of Proposition \ref{p.comparable}, taking $\alpha=2$, the
theorem above immediately yields the sharpness of the $L^p$ lower
bounds in $d=2$ with explicit dependence of constants on $p$.
\begin{corollary}\label{c.p} For every $1\le p<\infty$, the set $\mathcal A_N$ from Theorem \ref{t.vdc}
satisfies
\begin{equation}
\norm D_N. p. \lesssim p^{1/2} (\log N )^{1/2},
\end{equation}
where the implied constant is independent of $p$.
\end{corollary}
There is another variant of the Roth lower bound, which we state here.
\begin{theorem}\label{t.bmolower} We have the estimate
\begin{equation*}
\norm D_N . \operatorname {BMO} _{1,2}. \gtrsim ( {\log
N})^{1/2}\,,
\end{equation*}
where the norm is the dyadic Chang-Fefferman product $\operatorname {BMO}$
norm (see Definition~\ref{d.cf}), introduced in \cite{MR584078}.
\end{theorem}
Indeed, this Theorem is just a corollary to a standard proof of Roth's Theorem,
and its main
interest lies in the fact that the estimate above is sharp. It is useful to
recall the simple observation that the $ \operatorname {BMO}$ norm is insensitive to functions that
are constant in either the vertical or horizontal direction. That is, we have
$ \norm D_N . \operatorname {BMO} _{1,2}.= \norm \widetilde D_N . \operatorname {BMO} _{1,2}. $,
where
\begin{equation*}
\begin{split}
\widetilde D_N (x_1,x_2)
&= D_N (x_1,x_2)- \int _0 ^{1} D_N (x_1,x_2) \; dx_1
\\& \qquad -\int _0 ^{1} D_N (x_1,x_2) \; dx_2 +
\int _0 ^{1 }\!\!\int _0 ^{1} D_N (x_1,x_2) \; dx_1 \, dx_2\,.
\end{split}
\end{equation*}
\begin{theorem}\label{t.bmo}
For $ N= 2 ^{n}$, there is a choice of $ \mathcal A_N$, specifically
the digit-scrambled van der Corput set, for which we have
\begin{equation}\label{e.bmoupper}
\norm D_N . \operatorname {BMO} _{1,2} . \lesssim (\log
N) ^{1/2} \,.
\end{equation}
\end{theorem}
The main point of these results is that they unify the theorems of
Roth and Schmidt in a sharp fashion. This line of research is also
of interest in higher dimensions, but the relevant conjectures do
not seem to be as readily apparent. As such, we think that this is
an interesting theme for further investigation.
In the next section we collect a variety of results needed to prove
the main Theorems. These results are drawn from the theory of
Irregularities of Distribution, Harmonic Analysis, Probability
Theory and other subjects. In \S3 we discuss the structure of the
digit-scrambled van der Corput set. Section 4 is dedicated to the
analysis of the Haar decomposition of the Discrepancy function for
the van der Corput set. The proofs of the main theorems above are
then taken up in \S5 and \S6.
The results of this paper concern refinements of the $ L ^{\infty }$-endpoint
estimates for the Discrepancy Function. In three dimensions, even the
correct form of Schmidt's Theorem is not yet known, making the discussion of these
results in three dimensions entirely premature, though speculation about
such results could inform the analysis of the more difficult three dimensional case.
See \cites{math.CA/0609815,0705.4619} for recent information about the higher dimensional
versions of Schmidt's Theorem.
The authors thank the referee for an expert reading, and suggestions to improve the paper.
\section{Preliminary Facts}
We suppress many constants which do not affect the arguments in
essential ways. $ A \lesssim B$ means that there is an absolute
constant $ K>0$ such that $ A \le K B$. Thus $ A \lesssim 1$ means
that $ A$ is bounded by an absolute constant. And if $ A \lesssim B
\lesssim A$, we write $ A \simeq B$.
\bigskip
\subsection*{Inequalities}
We recall the square function inequalities for martingales, in a
form convenient for us.
In one dimension, the class of dyadic intervals in the unit interval are
$\mathcal D {} \coloneqq {}\{ [j2^{-k},(j+1)2^{-k})
\mid j,k\in \mathbb N\,, 0\le j < 2 ^{k}\} $. Let $ \mathcal D_n$ denote the
dyadic intervals of length $ 2 ^{-n}$, and by abuse of notation, also the
sigma field generated by these intervals. For an integrable function $ f$ on
$ [0,1]$, the conditional expectation is
\begin{equation*}
f_n=\mathbb E (f \mid \mathcal D_n) \coloneqq \sum _{I\in \mathcal D_n} \mathbf 1_{I} \cdot
\lvert I\rvert ^{-1} \int _{I} f (y)\;dy\,.
\end{equation*}
The sequence of functions $ \{ f_n \mid n\ge 0\}$ is a \emph{martingale}. The
\emph{martingale difference sequence } is $ d_0=f_0$, and $ d_n= f_n-f _{n-1}$ for
$ n\ge 1$. The sequence of functions $ \{d_n\mid n\ge 0\}$ are pairwise orthogonal.
The \emph{square function} is
\begin{equation*}
\operatorname S (f) \coloneqq \Biggl[\sum _{n=0} ^{\infty } \lvert d_n\rvert ^2 \Biggr]
^{1/2} \,.
\end{equation*}
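These definitions can be checked numerically on dyadic step functions (our addition). The sketch below computes the conditional expectations $f_n$, forms the martingale differences $d_n$, verifies their pairwise orthogonality, and confirms the resulting Parseval identity $\|f_K\|_2^2=\|\operatorname{S}(f)\|_2^2$ for a finite martingale.

```python
import numpy as np

K = 6                                   # resolution: 2^K dyadic cells of [0,1]
x = (np.arange(2**K) + 0.5) / 2**K
f = np.sin(2 * np.pi * x) + x**2        # an arbitrary test function (sampled)

def cond_exp(f, n):
    """E(f | D_n): average of f over each dyadic interval of length 2^-n."""
    blocks = f.reshape(2**n, -1)
    return np.repeat(blocks.mean(axis=1), f.size // 2**n)

fn = [cond_exp(f, n) for n in range(K + 1)]                  # the martingale
d = [fn[0]] + [fn[n] - fn[n - 1] for n in range(1, K + 1)]   # differences

# martingale differences are pairwise orthogonal in L^2[0,1]
for m in range(K + 1):
    for n in range(m + 1, K + 1):
        assert abs(np.mean(d[m] * d[n])) < 1e-10

# hence Parseval: ||f_K||_2^2 = sum_n ||d_n||_2^2 = ||S(f)||_2^2
S2 = sum(dn**2 for dn in d)
assert np.isclose(np.mean(fn[K]**2), np.mean(S2))
```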
We have the following extension of the Khintchine inequalities.
\begin{theorem}\label{t.martKhintchine} The inequalities below hold, for
some absolute choice of constant $ C>0$.
\begin{equation}\label{e.martKhintchine}
\norm f. p. \le C \sqrt p \norm \operatorname S (f).p.\,, \qquad 2\le p < \infty \,.
\end{equation}
In addition, this inequality holds for Hilbert space valued
functions $ f$.
\end{theorem}
For real-valued martingales, this was observed by \cite{MR800004}.
The extension to Hilbert space valued martingales is useful for us
and is proved in \cite{MR1439553}. The best constants in these
inequalities are known for $ p\ge 3$ \cite{MR1018577}.
\subsection*{Orlicz Spaces}
For background on Orlicz Spaces, we refer the reader to \cite{MR0500056}.
Consider a symmetric convex function $ \psi $,
which is zero at the origin, and is otherwise non-zero. Let $ (\Omega , P)$
be a probability space, on which our functions are defined, and let $ \mathbb E $
denote expectation over the probability space.
We can define
\begin{equation}\label{e.psi}
\norm f. L ^\psi . = \inf \{ K>0\mid \mathbb E \psi (f \cdot K ^{-1} )\le 1\}\,,
\end{equation}
where we define the infimum over the empty set to be $ \infty $. The
set of functions $ L ^{\psi } = \{f \mid \norm f. L ^{\psi } . <
\infty \}$ is a normed linear space, called the Orlicz space
associated with $ \psi $.
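Since $ K\mapsto \mathbb E\,\psi (f\cdot K ^{-1})$ is monotone, the infimum in \eqref{e.psi} is easy to compute numerically. The Python sketch below (ours, purely illustrative) evaluates the norm on a finite uniform probability space by bisection, for the case $\psi(x)=\operatorname e^{x^2}-1$.

```python
# Illustrative sketch (ours, not from the paper): the Luxemburg norm (e.psi)
# on a finite uniform probability space, computed by bisection on K, for
# psi(x) = e^{x^2} - 1, i.e. the exp(L^2) case.
import math

def luxemburg_norm(values, psi=lambda x: math.exp(x * x) - 1, iters=80):
    """inf{ K > 0 : E psi(f/K) <= 1 } for f taking the given values,
    each with equal probability."""
    mean_psi = lambda K: sum(psi(v / K) for v in values) / len(values)
    lo, hi = 1e-12, 1.0
    while mean_psi(hi) > 1:            # grow hi until E psi(f/hi) <= 1
        hi *= 2
    for _ in range(iters):             # bisect on the monotone condition
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mean_psi(mid) > 1 else (lo, mid)
    return hi
```

For a constant function $ f\equiv c$ one recovers the closed form $\lVert f\rVert_{L^\psi}=c/\sqrt{\log 2}$, since $ \operatorname e^{(c/K)^2}-1\le 1$ exactly when $ K\ge c/\sqrt{\log 2}$.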
We are interested in, for instance, $ \psi (x)= \operatorname e ^{x
^2 }-1$, in which case we denote the Orlicz space by $ \operatorname
{exp} (L ^{2})$. More generally, for $\alpha >0$, we let $ \psi
_{\alpha } (x)$ be a symmetric convex function which equals $
\operatorname e ^{\lvert x\rvert ^{\alpha } }-1$ for $ \lvert
x\rvert $ sufficiently large, depending upon $ \alpha $.\footnote{We
are only interested in measuring the behavior of functions for large
values of $ f$, so this requirement is sufficient. For $ \alpha >1$,
we can insist upon this equality for all $ x$.} And we write $ L
^{\psi _{\alpha }} = \operatorname {exp}(L ^{\alpha })$. These are
the spaces used in the statements of our main Theorems \ref{t.lower}
and \ref{t.vdc}. It is obvious that, for all $1\le p<\infty$ and
$\alpha
>0$, we have $L^p \supset \operatorname {exp}(L ^{\alpha }) \supset
L^\infty$, hence Theorem \ref{t.lower} can indeed be viewed as
interpolation between the estimates of Roth \eqref{e.roth} and
Schmidt \eqref{e.schmidt}. The following useful proposition is
well-known and follows from elementary methods.
\begin{proposition}\label{p.comparable} We have the following equivalence of norms
valid for all $ \alpha >0$:
\begin{equation*}
\norm f. \operatorname {exp}(L ^{\alpha }). \simeq
\sup _{p>1} p ^{-1/\alpha } \norm f. p. \,.
\end{equation*}
\end{proposition}
We shall also make use of the duality relations for the exponential
Orlicz classes. For $ \alpha >0$, let $ \varphi _{\alpha } (x)$ be a
symmetric convex function which equals $ \lvert x\rvert (\log
(3+\lvert x\rvert) ) ^{\alpha } $ for $ \lvert x\rvert $
sufficiently large, depending upon $ \alpha $.\footnote{For $ \alpha
\ge 1$, we can take this as the definition for all $ \lvert
x\rvert\ge 0 $.} The Orlicz space $ L ^{\varphi _{\alpha }}$ is
denoted as $ L ^{\varphi _{\alpha }} = L (\log L) ^{\alpha }$. The
propositions below are standard.
\begin{proposition}\label{p.dual} For $ 0<\alpha <\infty $, the two
Orlicz spaces $ \operatorname {exp}(L ^{\alpha })$ and $ L (\log L)
^{1/\alpha }$ are Banach spaces which are dual to one another.
\end{proposition}
\begin{proposition}\label{p.indicator} Let $ E$ be a measurable subset of
a probability set. We have
\begin{equation*}
\norm \mathbf 1_{E} . L (\log L) ^{1/\alpha }. \simeq \mathbb P (E)
\cdot (1 - \log \mathbb P (E) ) ^{1/\alpha }\,.
\end{equation*}
\end{proposition}
\subsection*{Chang-Wilson-Wolff Inequality}
Each dyadic interval has a left and right half, $ {I_{\textup{left}}}, {I_{\textup{right}}}$
respectively, which are also dyadic. Define the
Haar function associated with $ I$ by
\begin{equation*}
h_I \coloneqq -\mathbf 1 _{I_{\textup{left}}}+ \mathbf 1 _{I_{\textup{right}}}
\end{equation*}
Note that here the Haar functions are normalized in $ L ^{\infty }$.
In particular, the square function with this normalization has the
form
\begin{equation*}
\operatorname S (f) ^2 = \sum _{I\in \mathcal D} \frac { \ip f,
h_I, ^2 } {\lvert I\rvert ^2 } \mathbf 1_{I} \,, \qquad\textup{for
}\quad f (x)= \sum _{I} \frac {\ip f, h_I, } {\lvert I\rvert } h_I
(x).
\end{equation*}
We can now deduce the Chang-Wilson-Wolff inequality.
\begin{cww} For all Hilbert space valued martingales, we have
\begin{equation*}
\norm f . \operatorname {exp}(L ^2 ). \lesssim \norm \operatorname S (f). \infty . \,.
\end{equation*}
\end{cww}
Indeed, we have
\begin{equation*}
\norm f. p. \lesssim \sqrt p \cdot \norm \operatorname S (f).p.
\lesssim \sqrt p \cdot \norm \operatorname S (f). \infty . \,.
\end{equation*}
Taking $ p\to \infty $, and using Proposition~\ref{p.comparable}, we deduce the
inequality above.
\medskip
In dimension $2 $, a \emph{dyadic rectangle} is a product of dyadic intervals, thus an
element of
$\mathcal D^2 $. A Haar function associated to
$R $ is the product of the Haar functions associated
with each side of $R $, namely for $ R_1\times R_2$,
\begin{equation*}
h_{R_1\times R_2 }(x_1 ,x_2) {} \coloneqq \prod _{t=1} ^{2}h _{R_t}(x_t)\,.
\end{equation*}
See Figure~\ref{f.haar}.
Below, we will expand the definition of Haar functions, so that we can describe
a basis for $ L ^{2} ([0,1] ^2 )$.
We will concentrate on rectangles of a fixed volume, contained in $[0,1]^2 $. The notion of the square function is also useful in the two dimensional
context. It has the form
\begin{equation} \label{e.haar2}
\operatorname S (f) ^2 = \sum _{R\in \mathcal D ^2} \frac { \ip f
,h_R , ^2 } {\lvert R\rvert ^2 } \mathbf 1_{R} \,, \qquad \textup
{ for } \quad f (x)= \sum _{R\in \mathcal D ^2 } \frac { \ip f ,h_R
,} { {\lvert R\rvert }} h_R (x)\,.
\end{equation}
Jill Pipher \cite{MR850744} observed the following extension of the Chang-Wilson-Wolff inequality.
\begin{2cww} For functions $ f$ in the plane as in \eqref{e.haar2} we have
\begin{equation*}
\norm f . \operatorname {exp} (L) . \lesssim \norm \operatorname S (f) . \infty . \,.
\end{equation*}
\end{2cww}
Namely, in the case of two-parameters, the exponential integrability
has been reduced by a factor of two. This follows from a two-fold
application of the Littlewood-Paley inequalities, with best
constants, for Hilbert space valued functions. Details can be found
in \cites{MR850744,MR1439553,math.CA/0609815}. In fact, we will need
the following variant.
\begin{theorem}\label{t.one} Let $ n\ge 1$ be an integer.
Suppose that $ f$ on the plane has the expansion
\begin{equation*}
f = \sum _{\substack{R\in \mathcal D ^2\\ \lvert R\rvert = 2 ^{-n} } }
\frac { \ip f ,h_R ,} {{\lvert R\rvert }} h_R \,.
\end{equation*}
That is, $ f$ is in the linear span of Haar functions with a fixed volume. Then, we
have the estimate
\begin{equation*}
\norm f . \operatorname {exp} (L ^2 ) . \lesssim \norm S (f) . \infty . \,.
\end{equation*}
\end{theorem}
Thus, if $ f$ is in the linear span of a `one-parameter' family of
rectangles, we regain the exponential-squared integrability. The
proof is straightforward. As the volumes of the rectangles are
fixed, one need only apply the one-parameter Chang-Wilson-Wolff
inequality in, say, the $ x_1$ variable, holding the $ x_2$ variable
fixed.
The following simple proposition reduces the proof of Theorem
\ref{t.vdc} to the case $\alpha=2$.
\begin{proposition}\label{p.AA} Suppose that for $ A\ge 1$, we have
\begin{equation*}
\norm f. \operatorname {exp}( L ^2) . \le \sqrt A\,, \qquad \norm f. \infty . \le A \,.
\end{equation*}
It follows that
\begin{equation*}
\norm f. \operatorname {exp} (L ^{\alpha }). \le A ^{ 1- 1/ \alpha }
\,, \qquad 2\le \alpha < \infty \,.
\end{equation*}
\end{proposition}
\subsection*{Bounded Mean Oscillation}
We recall facts about dyadic $BMO$ spaces, see \cites{cf1,MR584078}.
We need to subtract some terms from $D_N$, as it
is not necessarily in the span of the Haar functions as we have
defined them. The deficiency is that standard Haar functions on the
unit square have zero means in both directions. Hence, for a dyadic
interval $ I\in \mathcal D$, we also need to consider
\begin{equation*}
h ^{1}_I = \mathbf 1_{I} = \lvert h_I\rvert\,.
\end{equation*}
And set $ h^0_I=h_I$, where `$ 0$' stands for `zero integral' and `$
1$' for `non-zero integral.' In the plane, for $ \epsilon _1\,, \,
\epsilon _2\in \{0,1\}$ set
\begin{equation} \label{e.01}
h_{R_1 \times R_2} ^{\epsilon _1, \epsilon _2}(x_1, x_2) = \prod _{j=1} ^{2}
h ^{\epsilon _j} _{R_j} (x_j)\,.
\end{equation}
We will sometimes write $h_R=h_R ^{0,0}$ in order to simplify our
notation. With these definitions we have the following
{\emph{orthogonal}} basis for $ L ^2 ([0,1] ^2 )$.
\begin{equation*}
\{ h ^{1,1} _{[0,1] ^2 } \}
\cup
\{ h ^{1,0} _{[0,1] \times I}\,,\,
h ^{0,1} _{I\times [0,1] }\mid I\in \mathcal D\}
\cup
\{h_R ^{0,0} \mid R\in \mathcal D ^2 \}\,.
\end{equation*}
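Orthogonality of this system can be verified by brute force on a discretization. The Python sketch below (ours, illustrative) does so for all intervals of length at least $1/4$, sampling each function on an $8\times 8$ grid of cells, which resolves these Haar functions exactly; it produces $64$ pairwise orthogonal functions, a complete orthogonal system for the level-three step functions on the square.

```python
# Illustrative sketch (ours, not from the paper): the extended Haar system,
# truncated to intervals of length >= 1/4, sampled exactly on an 8x8 grid.
from fractions import Fraction as F
from itertools import product

GRID = 8                               # cells per axis, each of length 1/8

def indicator(j, k):
    """h^1_I for I = [j 2^{-k}, (j+1) 2^{-k}), as 8 cell values."""
    cells = GRID >> k
    return [F(1) if j * cells <= i < (j + 1) * cells else F(0)
            for i in range(GRID)]

def haar(j, k):
    """h^0_I = -1 on the left half of I, +1 on the right half."""
    cells = GRID >> k
    out = [F(0)] * GRID
    for i in range(cells):
        out[j * cells + i] = F(-1) if i < cells // 2 else F(1)
    return out

intervals = [(j, k) for k in range(3) for j in range(2 ** k)]  # 7 intervals

def tensor(u, v):
    """u(x) v(y) as a flat vector over the 64 grid cells."""
    return [u[a] * v[b] for a, b in product(range(GRID), repeat=2)]

basis = [tensor(indicator(0, 0), indicator(0, 0))]             # h^{1,1}
basis += [tensor(indicator(0, 0), haar(j, k)) for j, k in intervals]
basis += [tensor(haar(j, k), indicator(0, 0)) for j, k in intervals]
basis += [tensor(haar(j1, k1), haar(j2, k2))
          for j1, k1 in intervals for j2, k2 in intervals]     # h^{0,0}_R
```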
There are a couple of different $ \operatorname {BMO}$ spaces that
are relevant here. Let us begin with the variants of the more
familiar C.~Fefferman, one-parameter, dyadic $ \operatorname {BMO}$
spaces.
\begin{definition}\label{d.f} Define the space $ \operatorname {BMO}_1$
to be those square integrable functions $ f$ in the span of $\{ h
^{0,1} _{I \times [0,1]} \mid I \in \mathcal D \}$ which satisfy
\begin{equation}\label{e.1}
\norm f. \operatorname {BMO}_1 . \coloneqq
\sup _{J\in \mathcal D} \Bigl[ \lvert J\rvert ^{-1} \sum _{\substack{I\in \mathcal D\\ I\subset J}}
\frac {\ip f ,h ^{0,1} _{I \times [0,1]} , ^2 } {\lvert I\rvert }
\Bigr] ^{1/2} < \infty \,.
\end{equation}
Define $ \operatorname {BMO}_2$ similarly, with the roles of the first and second coordinate
reversed.
\end{definition}
\begin{definition}\label{d.cf} Dyadic Chang-Fefferman $ \operatorname {BMO}_{1,2}$ is defined to be
those square integrable functions $ f$ in the linear span of $ \{h_R \mid R\in \mathcal D ^2
\}$, for which we have
\begin{equation}\label{e.12}
\norm f. \operatorname {BMO}_{1,2} . \coloneqq \sup _{ U\subset
[0,1] ^2 } \Bigl[ \lvert U\rvert ^{-1} \sum _{\substack{R \in
\mathcal D ^2
\\ R\subset U}}
\frac {\ip f ,h _R , ^2 } {\lvert R\rvert }
\Bigr] ^{1/2} < \infty \,.
\end{equation}
We stress that the supremum is over {\em all} measurable subsets $
U\subset [0,1] ^2 $, not just rectangles.
\end{definition}
It is well-known that these `uniform square integrability'
conditions imply that the corresponding functions enjoy higher
moments. This is usually phrased as the John-Nirenberg
inequalities, which we state here in their sharp exponential form.
\begin{jn} We have the following estimate for $ f\in \operatorname {BMO} _{1}$,
and $ \varphi \in \operatorname {BMO} _{1,2}$.
\begin{align}\label{e.jn1}
\norm f. \operatorname {exp} (L). &\lesssim \norm f . \operatorname {BMO} _{1}.
\\ \label{e.jn12}
\norm \varphi . \operatorname {exp} (\sqrt L). &\lesssim \norm \varphi . \operatorname {BMO} _{1,2}.
\end{align}
\end{jn}
Note that in the second inequality, \eqref{e.jn12}, the number of
parameters has doubled, hence the exponential integrability has
decreased by a factor of two. Of course, if the square function of $ f$ is bounded, one sees
immediately that the functions are necessarily in $ BMO$. And in
this circumstance the Chang-Wilson-Wolff inequalities give an
essential strengthening of the
John-Nirenberg estimates.
\bigskip
\subsection*{Discrepancy}
Below, we will refer to the two parts of the Discrepancy function as
the `linear' and the `counting' part. Specifically, they are
\begin{align}\label{e.linear}
L_N (\vec x) &= N x_1 \cdot x_2 \,,
\\ \label{e.counting}
C_ {\mathcal P} (\vec x) &= \sum _{\vec p \in \mathcal P} \mathbf 1_{[\vec p, \vec 1) } (\vec x) \,.
\end{align}
Here, $ \mathcal P$ is a subset of the unit square of cardinality
$ N$.
In proving upper bounds on the Discrepancy function, one of course
needs to capture a cancellation between these two parts that is large
enough to nearly eliminate the nominal normalization by $N$.
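For concreteness, $ D_N = C_{\mathcal P} - L_N$ is easy to evaluate pointwise; the short Python sketch below (ours, purely illustrative) does so exactly for a toy point set.

```python
# Illustrative sketch (ours, not from the paper): the Discrepancy function
# D_N = C_P - L_N evaluated pointwise, exactly, for a point set P in the
# unit square.
from fractions import Fraction as F

def discrepancy(points, x1, x2):
    """D_N(x) = #{p in P : p <= x coordinatewise} - N x1 x2, for x in [0,1)^2."""
    counting = sum(1 for p in points if p[0] <= x1 and p[1] <= x2)
    return counting - len(points) * x1 * x2
```

For the $2\times 2$ grid of points with coordinates in $\{1/4,3/4\}$, the value at $(1/2,1/2)$ is $1-4\cdot\tfrac14=0$, while at $(3/4,3/4)$ it is $4-4\cdot\tfrac9{16}=\tfrac74$.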
We recall some definitions and facts about Discrepancy
which are well represented in the literature, and apply to general selection of
point sets, see
\cites{MR0066435,MR554923,MR903025}.
We call a function $f$ an \emph{$\mathsf r$ function with parameter
$ \vec r= (r_1 , r_2)$} if $ \vec r\in \mathbb N ^{2}$, and
\begin{equation}
\label{e.rfunction} f=\sum_{ R \in \mathcal R _{\vec r}}
\varepsilon_R\, h_R\,,\qquad \varepsilon_R\in \{\pm1\}\, ,
\end{equation}
where we set $ \mathcal R _{\vec r} \coloneqq \{ R= R_1 \times R_2
\mid R\in \mathcal D^2\,, R\subset [0,1]^2\, ,\ \lvert R_t\rvert= 2
^{- r_t}\,, \ t=1,2\}\,.$
We will use $f _{\vec r} $ to denote a
generic $\mathsf r$ function. A fact used without further comment is
that $ f _{\vec r} ^2 \equiv 1$.
Let $ \abs{ \vec r}= \sum _{t=1} ^{2} r_t=n$, which we refer to as
the index of the $ \mathsf r$ function. And let $\mathbb H _n ^{2}
\coloneqq \{\vec r \in \{0,1 ,\dotsc, n\} ^2
\mid \abs{ \vec r}=n\}$, i.e., the set of all $\vec r \,$'s such that
rectangles in $\mathcal R_{\vec r}$ have area $2^{-n}$.
It is fundamental to the subject that $\sharp \mathbb H _n ^{2} =
n+1 $. We refer to $ \{f _{\vec r} \mid \vec r\in \mathbb H _n ^{2}\}$
as hyperbolic $ \mathsf r$ functions. The next four Propositions are
standard.
\begin{proposition}\label{p.rvec}
For any selection $ \mathcal A_N$ of $ N$ points in the unit cube
the following holds. Fix $ n$ with $ 2N< 2 ^{n}\le 4N$. For each
$\vec r\in \mathbb H_{n} ^{2}$, there is an $\mathsf r$ function $f
_{\vec r} $ with
\begin{equation*}
\ip D_N, f _{\vec r}, \gtrsim 1\,.
\end{equation*}
\end{proposition}
\begin{proof}
There is a very elementary one-dimensional fact: for all dyadic intervals $I$,
\begin{equation} \label{e.veryelementary}
\int _{0} ^{1 } x \cdot h _{I}(x) \; dx =\tfrac 14 \abs{ I} ^2 \,.
\end{equation}
This immediately implies that
\begin{equation} \label{e.vv}
\langle x_1\cdot x_2\, , \,h_R ^{0,0}(x_1, x_2) \rangle = 4 ^ {-2}\abs{ R} ^2 \,.
\end{equation}
Thus, the inner product with the linear part of the Discrepancy function is completely
straightforward. We have $ \ip L_N , h_R ^{0,0}, \ge 4 ^{-2} N \lvert R\rvert ^2
\ge 2 ^{-6} \lvert R\rvert $ for $R \in \mathcal R_{\vec r}$ with $\vec r \in \mathbb H_n^2$, since $ 2 ^{n}\le 4N$.
Call a rectangle $R\in \mathcal R _{\vec r}$
\emph{good} if $R$ does {\bf not} intersect $\mathcal A_N$, otherwise call it \emph{bad}.
Set
\begin{equation} \label{e.f_r}
f _{\vec r} {} \coloneqq \sum _{R\in \mathcal R _{\vec r}}
\operatorname {sgn} (\ip D_N, h_R,) h_R \,.
\end{equation}
Each bad rectangle contains at least one point in $\mathcal A_N$, and $2^ n\ge2N$, so
there are at least $N$ good rectangles. Moreover, one should observe that the counting function
$\sharp (\mathcal A_N\cap [0,\vec x))$ is orthogonal to $ h_R$ for each good rectangle $
R$. That is,
\begin{equation*}
\ip C_ {\mathcal A_N}, h ^{0,0}_R, =0\,, \qquad
\textup{whenever}\quad R\cap \mathcal A_N=\emptyset\,.
\end{equation*}
Critical to this property is the fact that Haar functions have mean
zero on each line parallel to the coordinate axes.
Thus, by \eqref{e.vv}, for a {\em good} rectangle $R\in \mathcal R _{\vec
r}$ we have
\begin{equation*}
\ip D_N, h_R,=-\ip L_N, h_R,=-N\ip \abs{ [0,\vec x)} , h_R(\vec x),=-N2 ^ {-2n-4}
\le - 2 ^{-n-6}\,.
\end{equation*}
Hence, to complete the proof, we can estimate
\begin{align*}
\ip D_N, f _{\vec r}, \ge
\sum _{\substack{R\in \mathcal R _{\vec r}\\ \text {$R$ is good} } }
\lvert \ip D_N, h_R, \rvert \gtrsim 2^ {-n} \sharp \{
R\in \mathcal R _{\vec r}\mid \text {$R$ is good} \}
\gtrsim 1\,.
\end{align*}
\end{proof}
\begin{proposition}\label{p.>n} Let $ f _{\vec s}$ be any $ \mathsf r$ function
with $ \abs{ \vec s}>n$. We have
\begin{equation*}
\abs{ \ip D_N, f _{\vec s}, } \lesssim N {2 ^{-\abs{ \vec s}}}\,.
\end{equation*}
\end{proposition}
\begin{proof}
This is a brute force proof. Consider the linear part of the Discrepancy function.
By (\ref{e.veryelementary}), we have
\begin{equation*}
\abs{ \ip L_N, f _{\vec s}, }\lesssim N 2 ^{-\abs{ \vec s}} \,,
\end{equation*}
as claimed.
Consider the part of the Discrepancy function that arises from the point set.
Observe that for any point $\vec x_0$ in the point set, we have
\begin{equation*}
\abs{ \ip \mathbf 1 _{[\vec 0, \vec x_0)} , f _{\vec s}, } \lesssim 2 ^{- \abs{ \vec
s}}\,.
\end{equation*}
Indeed, of the different Haar functions that contribute to $ f _{\vec s}$, there
is at most one with non-zero inner product with the function
$\mathbf 1 _{[\vec 0, \vec x)} (\vec x_0) $, viewed as a function of $ \vec x$. It is the
one whose rectangle contains $ \vec x_0$ in its interior. Thus the inequality above follows.
Summing it over the $ N$ points in the point set completes the proof of the Proposition.
\end{proof}
\begin{proposition}\label{p.Not2Many}
In dimension $ d=2$ the following holds. Fix a collection of $
\mathsf r$ functions $\{ f _{\vec r} \mid \vec r \in \mathbb H_{n}
^{2} \}$. Fix an integer $ 2\le v \le n$ and $ \vec s $ with $0\leq s_1,s_2 \leq n$ and $\abs{
\vec s}\ge n+ v -1$. Let $ \operatorname {Count} (\vec s ; v) $ be
{the number of ways to choose distinct $ \vec r_1 ,\dotsc, \vec r_v\in
\mathbb H _n ^2 $ so that $ \prod _{w=1} ^{v} f _{\vec r_w}$ is an
$ \vec s$ function.} We have
\begin{equation} \label{e.Not2Many}
\operatorname {Count} (\vec s ; v) = {\abs {\vec s}-n-1 \choose v
-2 }\,.
\end{equation}
\end{proposition}
\begin{proof}
Fix a vector $ \vec s$ with $ \abs{ \vec s}>n$, and suppose that
\begin{equation*}
\prod _{w=1} ^{v} f _{\vec r_w}
\end{equation*}
is an $ \vec s$ function. Then, the maximum of the first coordinates of the $ \vec r_w$
must be $ s_1$, and similarly for the second coordinate. Thus, the vector $ \vec s$
completely specifies two of the $ \vec r_w$.
The remaining $ v-2$ vectors must be distinct, and take first
coordinates that are greater than $n-s_2$ and less than $ s_1$.
Hence there are $ \abs{ \vec s}-n-1$ possible values for these first
coordinates, and choosing $ v-2$ of them gives the binomial
coefficient in \eqref{e.Not2Many}. This completes
the proof.
\end{proof}
In two dimensions, the decisive product rule holds. If $R, R' \in
\mathcal D^2$ are distinct, have the same area and non-empty intersection, then we have
\begin{equation}\label{e.productRule}
h_R \cdot h_{R'} = \pm h_{R\cap R'}.
\end{equation}
This rule is illustrated in Figure~\ref{f.haar} and can be generalized as follows.
\begin{proposition}\label{p.productRule}
In dimension $ d=2$ the following holds. Let $ \vec r_1 ,\dotsc,
\vec r_k$ be elements of $ \mathbb H _n ^{2}$ where one of the
vectors occurs an odd number of times. Then, the product $ \prod
_{j=1} ^{k} f _{\vec r_j}$ is also an $ \mathsf r$ function. If the $
\vec r_j$ are distinct and $ k\ge 2$, the product has index larger
than $ n$.
\end{proposition}
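The product rule \eqref{e.productRule} can be confirmed by brute force. The Python sketch below (ours, illustrative) tabulates Haar functions on a grid fine enough to resolve every rectangle of area $2^{-n}$ and its halves, and runs over all pairs of distinct intersecting rectangles of that area for $n=2$.

```python
# Illustrative sketch (ours, not from the paper): brute-force check of the
# product rule h_R h_{R'} = +/- h_{R cap R'} for distinct intersecting
# dyadic rectangles of a fixed area 2^{-n}.
n = 2
GRID = 2 ** (n + 1)          # resolves every such rectangle and its halves

def haar1d(j, k):
    """L^infty-normalized Haar function of I = [j 2^{-k}, (j+1) 2^{-k})."""
    cells = GRID >> k
    out = [0] * GRID
    for i in range(cells):
        out[j * cells + i] = -1 if i < cells // 2 else 1
    return out

def haar2d(r1, r2):
    """h_R for R = I_{r1} x I_{r2}, as a GRID x GRID table of cell values."""
    return [[a * b for b in haar1d(*r2)] for a in haar1d(*r1)]

def intersect(i1, i2):
    """Intersection of two dyadic intervals; None if they are disjoint."""
    small, big = (i1, i2) if i1[1] >= i2[1] else (i2, i1)
    (js, ks), (jb, kb) = small, big
    return small if js >> (ks - kb) == jb else None

def product_rule_holds():
    rects = [((j1, k), (j2, n - k)) for k in range(n + 1)
             for j1 in range(2 ** k) for j2 in range(2 ** (n - k))]
    for r in rects:
        for s in rects:
            if r == s:
                continue
            i1, i2 = intersect(r[0], s[0]), intersect(r[1], s[1])
            if i1 is None or i2 is None:
                continue                  # empty intersection: nothing to check
            prod = [[x * y for x, y in zip(u, v)]
                    for u, v in zip(haar2d(*r), haar2d(*s))]
            target = haar2d(i1, i2)
            if prod != target and prod != [[-x for x in row] for row in target]:
                return False
    return True
```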
\begin{figure}
\begin{tikzpicture}
\draw (2.5,2.5) node {$ h_R$}; \draw (0,0) rectangle (8,2);
\draw[fill,fill opacity=0.5,lightgray] (0,0) rectangle (4,1);
\draw[fill,fill opacity=0.5,lightgray] (4,1) rectangle (8,2);
\draw (6,4.5) node {$ h_{R'}$}; \draw (4,0) rectangle (8,4);
\draw[fill,fill opacity=0.5,lightgray] (4,0) rectangle (6,2);
\draw[fill,fill opacity=0.5,lightgray] (6,2) rectangle (8,4);
\end{tikzpicture}
\caption{Two Haar functions. } \label{f.haar}
\end{figure}
\section{The Digit-Scrambled van der Corput Set} \label{s.vandercorput}
In this section we introduce the digit-scrambled van der Corput set,
that is, a variation of the classical van der Corput set described,
e.g., in \cite{MR1697825}*{Section 2.1}, and prove some auxiliary
lemmas that will help us exploit its properties. This set will be
our main construction for the upper bounds in Theorems \ref{t.vdc}
and \ref{t.bmo}, although strictly speaking, Theorem \ref{t.bmo} is
satisfied by the standard van der Corput point distribution. The
reasons we need this modified version of the van der Corput set will
become clear by the end of this section.
First, we introduce some additional definitions and notations.
\begin{definition}
For $x\in [0,1)$ define $\textnormal {d}_i(x)$ to be the $i$'th digit in the
binary expansion of $x$, that is
$$\textnormal {d}_i(x)=\lfloor 2^i x\rfloor \bmod 2. $$
\end{definition}
\begin{definition}
For $x\in[0,1)$ we define the \emph{digit reversal} function by means of the expression
\begin{align*}
\label{revn}
\textnormal {d}_i\left(\textnormal {rev}_n (x)\right)=
\begin{cases}
\textnormal {d}_{n+1-i}(x), \ &i=1,2,\ldots, n,
\\
0, &\mbox{otherwise},
\end{cases}
\end{align*}
in other words, setting $\textnormal {d}_i(x) = x_i$, we have $\textnormal {rev}_n (0.x_1 x_2
\ldots x_n) = 0.x_n \ldots x_2 x_1$.
\end{definition}
\begin{definition}
Let $x,\sigma\in [0,1)$ where $\sigma$ has $n$ binary digits. We define the number $x\oplus \sigma$ as
\label{d.oplus}
$$\textnormal {d}_i(x\oplus \sigma)=\textnormal {d}_i(x)+\textnormal {d}_i(\sigma) \bmod 2,$$
i.e. the $i^{th}$ digit of $x$ changes if $\textnormal {d}_i(\sigma)=1$ and stays
the same if $\textnormal {d}_i(\sigma)=0$. In the literature this operation is
called \emph{digit scrambling} or \emph{digital shift}.
\end{definition}
\begin{remark}
We stress at this point that when we define a digit scrambling we only use the first $n$ binary digits of the number $\sigma\in[0,1)$. As a result, for each given positive integer $n$ there are exactly $2^n$ such digital shifts, that is, the number of digital shifts is finite. The choice of a real number $\sigma\in[0,1)$ to represent this operation is just a matter of notational convenience.
\end{remark}
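The digit, digit-reversal and digit-scrambling operations are exact over dyadic rationals, and can be sketched in a few lines of Python (the code and names are ours, for illustration only).

```python
# Illustrative sketch (ours, not from the paper): the digit, digit-reversal,
# and digit-scrambling operations, exact over dyadic rationals.
import math
from fractions import Fraction as F

def digit(x, i):
    """d_i(x): the i-th binary digit of x in [0,1)."""
    return math.floor(2 ** i * x) % 2

def rev(x, n):
    """rev_n(x): reverse the first n binary digits of x, drop the rest."""
    return sum(F(digit(x, n + 1 - i), 2 ** i) for i in range(1, n + 1))

def scramble(x, sigma, n):
    """x (+) sigma: flip the i-th digit of x wherever d_i(sigma) = 1,
    leaving digits beyond the n-th unchanged."""
    head = sum(F((digit(x, i) + digit(sigma, i)) % 2, 2 ** i)
               for i in range(1, n + 1))
    tail = x - sum(F(digit(x, i), 2 ** i) for i in range(1, n + 1))
    return head + tail
```

For instance $5/8 = 0.101_2$, $\textnormal{rev}_3(0.011_2)=0.110_2=3/4$, and scrambling $0.011_2$ by $\sigma=0.100_2$ flips the first digit, giving $0.111_2=7/8$.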
We are now ready to define the digit-scrambled van der Corput set.
\begin{definition}
For an integer $ n\ge 1$ and a number $\sigma\in[0,1)$ we define the $\sigma$-digit scrambled
\emph{van der Corput} set $\mathcal{V}_{n,\sigma}$ as
\label{d.vdc}
\begin{equation*}
\mathcal{V}_{n,\sigma} = \{ v_{n,\sigma}(\tau):\, \ \tau=0,1,\ldots,2^n-1\},
\end{equation*}
where
\begin{equation*}
v_{n,\sigma}(\tau)=\bigg(\frac{\tau} {2^n}, \textnormal {rev}_n
\bigg(\frac{\tau}{2^n}\oplus \sigma\bigg)\bigg)+ (2 ^{-n-1}, 2
^{-n-1}).
\end{equation*}
It is clear that the digit-scrambled van der Corput set has
cardinality $|\mathcal{V}_{n,\sigma}|=2^n.$ We should notice that
the roles of $x$ and $y$ coordinates are symmetric, since we can
write $\mathcal{V}_{n,\sigma} = \{(\textnormal {rev}_n(\tau/2^n \oplus \sigma')
,{\tau}/ {2^n})+ (2 ^{-n-1}, 2 ^{-n-1}):\, \
\tau=0,1,\ldots,2^n-1\}$ with $\sigma'=\textnormal {rev}_n (\sigma)$.
\end{definition}
With the notation introduced above, the standard van der Corput set
$$\mathcal{V}_n = \{ (0.x_1 x_2 \ldots x_n 1 , 0.x_n \ldots x_2 x_1 1):\,
x_i =0,1 \} $$ is just $\mathcal{V}_n=\mathcal{V}_{n,0}.$ Note that
our definition differs from the classical by
the shift $ (2 ^{-n-1}, 2 ^{-n-1})$. This shift `pads' the binary expansion of the elements by a final $1$ in the $(n + 1)^\textup{st}$
place, and ensures that the average value of each coordinate is $ \tfrac 12 $:
\begin{equation}
\label{e.centralized}
2 ^{-n}\sum _{ (x,y)\in \mathcal V_{n,\sigma}} x = 2 ^{-n}\sum _{ (x,y)\in \mathcal V_{n,\sigma}} y = \frac{1}{2}.
\end{equation}
This is just a technical modification that will simplify our formulas and calculations.
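The construction can be carried out exactly in a few lines of Python (an illustrative sketch of ours; the names `van_der_corput` and `sigma_bits` are not from the literature), together with a check of \eqref{e.centralized}.

```python
# Illustrative sketch (ours, not from the paper): the digit-scrambled van der
# Corput set V_{n,sigma}, as exact rationals; sigma is given by its first n
# binary digits d_1(sigma), ..., d_n(sigma).
from fractions import Fraction as F

def van_der_corput(n, sigma_bits):
    pts = []
    for tau in range(2 ** n):
        x = [(tau >> (n - 1 - i)) & 1 for i in range(n)]   # digits of tau/2^n
        y = [(b + s) % 2 for b, s in zip(x, sigma_bits)]   # scramble ...
        y.reverse()                                        # ... then reverse
        px = F(tau, 2 ** n) + F(1, 2 ** (n + 1))           # shift by 2^{-n-1}
        py = (sum(F(d, 2 ** (i + 1)) for i, d in enumerate(y))
              + F(1, 2 ** (n + 1)))
        pts.append((px, py))
    return pts
```

Since $\tau\mapsto \textnormal{rev}_n(\tau/2^n\oplus\sigma)$ permutes the $n$-digit dyadic rationals, both coordinate lists are permutations of $\{(2m+1)2^{-n-1}\}$, which gives \eqref{e.centralized}.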
The following proposition describes which points of the van der Corput set $\mathcal{V}_{n,\sigma}$ fall into any given dyadic rectangle.
\begin{proposition}
\label{p.net} Let $k,l\in\mathbb{N}$ and $i\in\{0,1,\ldots,2^k-1\}$,
$j\in\{0,1,\ldots,2^l-1\}. $ Consider a dyadic rectangle
\begin{equation*}
R=\left[\frac{i}{2^k},\frac{i+1}{2^k}\right)\times\left[\frac{j}{2^l},\frac{j+1}{2^l}\right).
\end{equation*}
Then the set $\mathcal{V}_{n,\sigma}\cap R$ consists of the points $v_{n,\sigma}(\tau)$ where
\begin{align*}
\textnormal {d}_m\big(\frac{\tau}{2^n}\big)=
\begin{cases}
\textnormal {d}_m(\frac{i}{2^k}),\ &m=1,2,\ldots, k,
\\
\textnormal {d}_{n+1-m}(\frac{j}{2^l}) + \textnormal {d}_m(\sigma) \mod 2, \ &m=n+1-l,
\ldots, n.
\end{cases}
\end{align*}
\end{proposition}
\begin{proof}
Let $(x,y)$ be any point in $[0,1)^2$. It is easy to see that $(x,y)\in R$ if and only if
\begin{align*}
\textnormal {d}_q(x)&=\textnormal {d}_q\big(\frac{i}{2^k}\big)\ \ \ \textup{for all} \ \ \ q=1,2,\ldots, k, \ \ \textup{ and} \\
\textnormal {d}_r(y)&=\textnormal {d}_r\big(\frac{j}{2^l}\big)\ \ \ \ \textup{for all} \ \ \ r=1,2,\ldots, l.
\end{align*}
The proposition is now a simple consequence of the structure of the
van der Corput set.
\end{proof}
Some remarks are in order:
\begin{remarks}\label{r.npoints}
\item{ When $k+l<n$ there are exactly $2^{n-(k+l)}$ points of the van der Corput set inside the canonical rectangle $R$.
Indeed, the conditions of Proposition \ref{p.net} only specify the first $k$ and last $l$ binary digits of the $x$-coordinates of the points $v_{n,\sigma}(\tau)$. }
\item {When $k+l>n$ it might happen that the set of conditions in Proposition \ref{p.net} is void (observe that the system is overdetermined in this case). }
\item {Finally, when $k+l=n$, that is when the rectangle $R$ has volume $|R|=2^{-n}$,
the system of equations in Proposition \ref{p.net} gives a unique point of the
van der Corput set inside $R$. So, for fixed $n$, the van der Corput
set $\mathcal{V}_{n,\sigma}$ is a \emph{net}: every dyadic
rectangle of volume $N^{-1}=2^{-n}$ contains exactly one point. This has the well-known consequence, see \cite{MR1697825}, that
\begin{equation}\label{e.schmidtsharp}
\norm D_N(\mathcal{V}_{n,\sigma}). \infty. \lesssim \log N.
\end{equation}}
This fact is independent of the digit scrambling $\sigma$ and holds
in particular for the standard van der Corput set $\mathcal{V}_n$
(\cite{Cor35}, \cite{MR0066435}). In view of Schmidt's Theorem
\eqref{e.schmidt} this means that the van der Corput set is
\emph{extremal} in terms of measuring the Discrepancy function in
$L^\infty$. However, the same is not true if one is interested in
meeting the lower bound in Roth's Theorem, that is, the standard van
der Corput set $\mathcal{V}_n$ is not extremal in terms of measuring
the Discrepancy function in $L^2$. The lemma below explains this
fact. In particular it shows that the $L^2$ discrepancy of
$\mathcal{V}_n$ is big because of a single `zero-order' Haar
coefficient, i.\thinspace e.\thinspace the mean $\int D_N$. The lemma also shows that digit
scrambling provides a remedy for this shortcoming. This fact has been observed by Chen in \cite{MR711520}, where the author uses digit scrambling in order to obtain the best possible $L^p$ upper bounds for a general class of `one point in a box' sets in general dimension (see the case $k+l=n$ in the remarks above). We also note that
similar calculations, albeit slightly less general, have been carried out in \cite{HAZ}. We include a proof of this Lemma for the
sake of completeness.
\end{remarks}
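The net property is also easy to confirm by brute force. The Python sketch below (ours, illustrative; the constructor is repeated so the sketch is self-contained) checks that every dyadic rectangle of volume $2^{-n}$ contains exactly one point of $\mathcal V_{n,\sigma}$.

```python
# Illustrative sketch (ours, not from the paper): checking the net property
# of the digit-scrambled van der Corput set.
from fractions import Fraction as F

def van_der_corput(n, sigma_bits):
    """V_{n,sigma}, with sigma given by its first n binary digits."""
    pts = []
    for tau in range(2 ** n):
        x = [(tau >> (n - 1 - i)) & 1 for i in range(n)]
        y = [(b + s) % 2 for b, s in zip(x, sigma_bits)][::-1]
        pts.append((F(tau, 2 ** n) + F(1, 2 ** (n + 1)),
                    sum(F(d, 2 ** (i + 1)) for i, d in enumerate(y))
                    + F(1, 2 ** (n + 1))))
    return pts

def points_in(pts, i, k, j, l):
    """Count points in [i/2^k, (i+1)/2^k) x [j/2^l, (j+1)/2^l)."""
    return sum(1 for (x, y) in pts
               if F(i, 2 ** k) <= x < F(i + 1, 2 ** k)
               and F(j, 2 ** l) <= y < F(j + 1, 2 ** l))
```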
\begin{lemma}\label{l.integral}We have
$$\int_0^1\!\!\int_0^1D_N(\mathcal{V}_{n,\sigma})\ dxdy=\frac{1}{4}\left(\frac{n}{2}-\sum_{k=1}^n \textnormal {d}_k(\sigma)\right) .$$
In particular
$$\int_0^1\!\!\int_0^1D_N(\mathcal{V}_{n})\ dxdy=\frac{n}{8}.$$
On the other hand, if $\sum_{k=1}^n \textnormal {d}_k(\sigma)=n/2$, i.e. half of
the digits are scrambled, then
$$\int_0^1\!\!\int_0^1D_N(\mathcal{V}_{n,\sigma})\ dxdy=0.$$
\end{lemma}
\begin{proof}
As usual, we write $N=2^n$. We have
\begin{align*}
I&\coloneqq\int_0^1\!\!\int_0^1 D_N(\mathcal{V}_{n,\sigma})(x,y)\ dxdy=-N/4+\sum_{\tau=0}^{N-1}\int_0 ^1\int_0 ^1 \mathbf 1_{[0,x]\times[0,y]}\left(v_{n,\sigma}\left(\tau/N\right)\right)dxdy\\&= -N/4+\sum_{\tau=0}^{N-1}\left(1-\frac{\tau}{N}-\frac{1}{2N}\right)\left(1-\textnormal {rev}_n\left(\frac{\tau}{N}\oplus\sigma\right)-\frac{1}{2N}\right).
\end{align*}
Using \eqref{e.centralized} we get
\begin{equation}\label{e.i}
I=-\frac{N}{4}+\frac{1}{2}-\frac{1}{4N}+\sum_{\tau=0}^{N-1}\frac{\tau}{N}\cdot
\textnormal {rev}_n\left(\frac{\tau}{N}\oplus\sigma\right).
\end{equation}
Now expand the sum above using the binary representation of the
summands as follows:
\begin{align}\label{e.diag} \sum_{\tau=0}^{N-1}\frac{\tau}{N}\cdot \textnormal {rev}_n\left(\frac{\tau}{N}\oplus\sigma\right)&=\sum_{\tau=0}^{N-1}\sum_{k=1}^n\sum_{l=1}^n\frac{\textnormal {d}_k\left(\frac{\tau}{N}\right) \textnormal {d}_{l}\left(\textnormal {rev}_n\left(\frac{\tau}{N}\oplus\sigma\right)\right)}{2^{k+l}} \notag
\\&=\sum_{\tau=0}^{N-1}\sum_{k=1}^n\sum_{l=1}^n\frac{\textnormal {d}_k\left(\frac{\tau}{N}\right) \textnormal {d}_{n+1-l}\left(\frac{\tau}{N}\oplus\sigma\right)}{2^{k+l}}
\\&=\sum_{k=1}^n\sum_{l=1}^n \frac{1}{2^{k+l}} \sum_{\tau=0}^{N-1} {\textnormal {d}_k\left(\frac{\tau}{N}\right) \textnormal {d}_{n+1-l}\left(\frac{\tau}{N}\oplus\sigma\right)}.\notag
\end{align}
Finally observe that if $s,t\in\{1,2,\ldots,n\}$ then
\begin{align}\label{e.orthodigit}
\sum_{\tau=0}^{N-1}\textnormal {d}_s\left(\frac{\tau}{N}\right)\textnormal {d}_t\left(\frac{\tau}{N}\oplus\sigma\right)=
\begin{cases}
\frac{N}{2}
\left(1-\textnormal {d}_s(\sigma)\right), &s=t
\\
\frac{N}{4}
, &s\neq t.
\end{cases}
\end{align}
Indeed, when $s=t$, the terms in the sum above are non-zero exactly
when $\textnormal {d}_s(\frac{\tau}{N})=1$ and $\textnormal {d}_s(\sigma)=0$, and hence the
first equality. The case $s\neq t $ is similar.
Using \eqref{e.orthodigit} and \eqref{e.diag} we get
\begin{align*}
\sum_{\tau=0}^{N-1} \frac{\tau}{N} \, \textnormal {rev}_n \left(
\frac{\tau}{N}\oplus\sigma \right) &=
\frac{n}{8}-\frac{1}{4}\sum_{k=1}^n
\textnormal {d}_k(\sigma)+\frac{N}{4}-\frac{1}{2}+\frac{1}{4N},
\end{align*}
which, combined with \eqref{e.i}, completes the proof.
\end{proof}
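Lemma \ref{l.integral} can also be confirmed exactly in rational arithmetic, using $\int_0^1\!\!\int_0^1 D_N = \sum_{p\in\mathcal V_{n,\sigma}}(1-p_x)(1-p_y)-N/4$ from the first display of the proof. The Python sketch below (ours, illustrative; the constructor is repeated for self-containment) does so for $n=4$ and several digit scramblings $\sigma$.

```python
# Illustrative sketch (ours, not from the paper): exact confirmation of
# Lemma l.integral for the digit-scrambled van der Corput set.
from fractions import Fraction as F

def van_der_corput(n, sigma_bits):
    """V_{n,sigma}, with sigma given by its first n binary digits."""
    pts = []
    for tau in range(2 ** n):
        x = [(tau >> (n - 1 - i)) & 1 for i in range(n)]
        y = [(b + s) % 2 for b, s in zip(x, sigma_bits)][::-1]
        pts.append((F(tau, 2 ** n) + F(1, 2 ** (n + 1)),
                    sum(F(d, 2 ** (i + 1)) for i, d in enumerate(y))
                    + F(1, 2 ** (n + 1))))
    return pts

def integral_DN(pts):
    """int_0^1 int_0^1 D_N dx dy, computed exactly from the point set."""
    N = len(pts)
    return sum((1 - x) * (1 - y) for x, y in pts) - F(N, 4)
```

In particular $\sigma=0$ gives the value $n/8$, while scrambling exactly half of the digits gives $0$, in agreement with the Lemma.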
{\emph {Remark.}} We should point out that in \cite{KrPil} it has
been shown that the $L^2$ norm of the Discrepancy of the
digit-scrambled van der Corput set depends only on the number of
$1$'s in $\sigma$, and not their distribution.
\section{Haar Coefficients for the Digit-Scrambled van der Corput Set}
In this section we will work with the digit-scrambled van der Corput set $\mathcal{V}_{n,\sigma}$ as defined in Section \ref{s.vandercorput}, where $\sigma\in[0,1)$ is arbitrary and $N=2^n$. We will just write $D_N$ for the discrepancy function of $\mathcal{V}_{n,\sigma}$. The following Lemma records the main estimate for the Haar
coefficients of $D_N$ and is the core of the proof for the upper bounds in Theorems \ref{t.vdc} and \ref{t.bmo}.
\begin{lemma}\label{l.haarcoeffs} For any dyadic rectangle
$ R \in \mathcal {D}^2$ we have
\begin{equation*}
\lvert \ip D_N, h_R, \rvert \lesssim \frac{1}{ N}.
\end{equation*}
\end{lemma}
We need to consider dyadic rectangles of the form
$R=\left[\frac{i}{2^k},\frac{i+1}{2^k}\right)\times\left[\frac{j}{2^l},\frac{j+1}{2^l}\right)$,
where $k,l\in\mathbb{N}$ and $i\in\{0,1,\ldots,2^k-1\}$,
$j\in\{0,1,\ldots,2^l-1\}. $ The proof will be divided into two cases,
depending on whether the volume of $R$ is `big' or `small'.
We will use an auxiliary function to help us write down formulas for the inner product of the counting part with the Haar function corresponding to the rectangle $R$. In particular, $\phi:\mathbb R\rightarrow \mathbb R$ is the periodic function
\begin{align*}
\phi (x) =
\begin{cases}
\{x\}, \ &0<\{x\}<\tfrac 12
\\
1-\{x\}, \ &\tfrac 12 <\{x\} < 1 ,
\end{cases}
\end{align*}
where $\{x\}$ is the fractional part of $x$.
Observe that the function $\phi$ is the periodic extension of the anti-derivative of the Haar function on $[0,1]$. See Figure \ref{f.phi}.
Let $p=(p_x,p_y)\in[0,1)^2$. A moment's reflection allows us to
write
\begin{align}\label{e.phi}
\ip \mathbf 1_{[\vec p,\vec 1)} ,h_R,=
\begin{cases}
|R| \phi (2^kp_x)\phi (2^l p_y), \ &p\in R,
\\
0, &\textup{otherwise}.
\end{cases}
\end{align}
We also record two simple properties of the function $\phi$ that will be useful in what follows. First, for $x\in\mathbb{R}$,
\begin{equation}\label{e.plushalf}
\phi (x)+\phi \bigg(x\oplus \frac{1}{2}\bigg) = \frac{1}{2}.
\end{equation}
Second, $\phi$ is a `Lipschitz' function with constant $1$. For
$x,y\in \mathbb{R}$,
\begin{equation}\label{e.lips}
\left|\phi(y)-\phi(x)\right|\leq |\{y\}-\{x\}|.
\end{equation}
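Both \eqref{e.phi} and \eqref{e.plushalf} can be verified exactly; note that $\phi(x)=\min(\{x\},1-\{x\})$, and that $x\oplus\tfrac12$ amounts to adding $\tfrac12$ modulo $1$. The Python sketch below (ours, illustrative) compares \eqref{e.phi} against a direct computation of the one-dimensional integrals $\int_p^1 h_I$.

```python
# Illustrative sketch (ours, not from the paper): exact check of (e.phi)
# and (e.plushalf), using phi(x) = min({x}, 1 - {x}).
import math
from fractions import Fraction as F

def frac(x):
    """The fractional part {x}."""
    return x - math.floor(x)

def phi(x):
    return min(frac(x), 1 - frac(x))

def haar_tail(p, j, k):
    """int_p^1 h_I(t) dt, exactly, for I = [j 2^{-k}, (j+1) 2^{-k})."""
    left = F(j, 2 ** k)
    mid = F(2 * j + 1, 2 ** (k + 1))
    right = F(j + 1, 2 ** k)
    if p >= right or p <= left:      # h_I vanishes beyond I / has mean zero
        return F(0)
    # p lies inside I: h_I is -1 up to the midpoint of I, +1 after it
    return -max(mid - p, F(0)) + (right - max(p, mid))
```

For a point $p$ strictly inside $R$ (and off the midpoints of its sides), the product $\int_{p_x}^1 h_{R_1}\cdot\int_{p_y}^1 h_{R_2}$ agrees with $|R|\,\phi(2^k p_x)\phi(2^l p_y)$.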
\begin{proof}[Proof of Lemma \ref{l.haarcoeffs} when $|R|<\frac{4}{N}$]
We fix a dyadic rectangle $R$ with $|R|<\frac{4}{N}$. We treat the linear part and the counting part separately.
For the linear part we have that
\begin{equation*}
\ip L_N,h_R , = \frac{N|R|^2}{4^2}\lesssim \frac{1}{N}.
\end{equation*}
Now notice that since $k+l>n-2$, there are at most $2$ points in
$\mathcal{V}_{n,\sigma}\cap R$. Since $\phi$ is obviously bounded by
$1$, formula \eqref{e.phi} implies
\begin{equation*}
|\ip C_{\mathcal{V}_{n,\sigma}},h_R,|\leq |R| \sum_{p\in\mathcal{V}_{n,\sigma}\cap R} \phi (2^kp_x)\phi (2^l p_y) \leq 4|R|\lesssim\frac{1}{N}.
\end{equation*}
Summing up the estimates for the linear and the counting part completes the proof.
\end{proof}
\begin{figure}
\begin{tikzpicture}
\draw[->](0,-.5) -- (0,1.5); \draw (.125,1) -- (-.125,1) node[left]
{$ 1/2$}; \draw[->] (-.5,0) -- (3.5,0); \draw (1.5,.125) --
(1.5,-.125) node[below] {$ 1/2$}; \draw (3,.125) -- (3,-.125) node[below] {$ 1$}; \draw (0,0) -- (1.5,1) -- (3,0);
\end{tikzpicture}
\caption{The graph of the function $ \phi $.} \label{f.phi}
\end{figure}
\begin{proof}[Proof of Lemma \ref{l.haarcoeffs} when $|R|\geq \frac{4}{N}$]
The proof of the case $|R|\geq \frac{4}{N}$ is much more involved as
this is the typical case where the rectangle contains `many' points
of the point set $\mathcal{V}_{n,\sigma}$. Before going into the
details of the proof we will discuss the structure of the set $R\cap
\mathcal{V}_{n,\sigma}$ in order to organize and simplify the
calculations that follow.
First, notice that the condition $|R|\geq \frac{4}{N}$ implies that
$n-(k+l)\geq 2$. In other words, there are at least $4$ points in
the set $R\cap \mathcal{V}_{n,\sigma}$ according to Proposition
\ref{p.net} and Remark \ref{r.npoints}. To be more precise, let us
look at a point $p=(x,y)\in \mathcal{V}_{n,\sigma}$. The $x$-coordinate
can be written in the form $x=0.x_1x_2\ldots x_n1,$ where
$x_i=\textnormal {d}_i(x)$, for $i=1,2,\ldots,n$.
The first $k$ and the last $l$ binary digits of $x$ are determined
by the fact that $p\in R$ (Proposition \ref{p.net}). That leaves us
with at least $2$ `free' digits for $x$
$$x=0.x_1\ldots x_k,*,\ldots,*,x_{n-l+1} \ldots x_n 1.$$
We group all points in $\mathcal{V}_{n,\sigma}\cap R$ in quadruples
according to the choices for the first and last `free' digits
$x_{k+1}$ and $x_{n-l}$. In particular, we consider
\emph{quadruples} \eqref{e.quad} of points in
$\mathcal{V}_{n,\sigma}\cap R$ with $x$-coordinates of the form:
\[ \begin{array}{ccc}\tag{Q}\label{e.quad}
&0.x_1\ldots x_k \ 0 \ x_{k+2} \ldots x_{n-l-1} \ 0\ x_{n-l+1}\ldots x_n 1,\\
&0.x_1\ldots x_k \ 0 \ x_{k+2} \ldots x_{n-l-1} \ 1 \ x_{n-l+1}\ldots x_n 1,\\
&0.x_1\ldots x_k \ 1 \ x_{k+2} \ldots x_{n-l-1} \ 0 \ x_{n-l+1}\ldots x_n 1,\\
&0.x_1\ldots x_k \ 1\ x_{k+2} \ldots x_{n-l-1} \ 1\ x_{n-l+1}\ldots x_n 1.
\end{array}\]
\begin{figure}
\begin{tikzpicture}
\draw (8.5,1.5) node {$R$}; \draw (0,0) rectangle (8,2);
\draw[fill,fill opacity=0.5,lightgray] (0,0) rectangle (4,1);
\draw[fill,fill opacity=0.5,lightgray] (4,1) rectangle (8,2);
\foreach \position in { (.675,.125), (4.675,.325), (4.775,1.325),
(.775,1.125)} { \draw[fill=black] \position circle (2pt);}%
\draw (.65,.5) node {$ (u,v)$};
\end{tikzpicture}
\caption{The quadruple $Q$. } \label{f.four}
\end{figure}
There are exactly $2^{n-(k+l)-2}=\frac{N|R|}{4}$ such quadruples. Let's index the quadruples \eqref{e.quad} arbitrarily as $Q_r$, $r=1,2,\ldots,\frac{N|R|}{4}$. Observe that we can write
\begin{align}\label{e.qsplit}
\ip D_N, h_R, = \sum_{p\in\mathcal{V}_{n,\sigma}\cap R} \ip \textbf{1}_ {[\vec p, \vec 1)} ,h_R, -\frac{N|R|^2}{16}=
\sum_{r=1} ^{\frac{N|R|}{4} } \bigg(\sum _{p\in Q_r} \ip \textbf{1}_ {[\vec p, \vec 1)} ,h_R, -\frac{|R|}{4}\bigg).
\end{align}
The following Proposition exploits large cancellation within these
quadruples.
\begin{proposition}\label{p.cancel}
For every quadruple $Q_r$ we have
\begin{equation*}
\ABs { \sum _{p\in Q_r} \ip \mathbf 1_ {[\vec p, \vec 1)} ,h_R, -\frac{|R|}{4} } \lesssim \frac{1}{N^2|R|}.
\end{equation*}
\end{proposition}
Let us assume Proposition \ref{p.cancel} for a moment in order to
complete the proof of Lemma \ref{l.haarcoeffs}. Indeed, Proposition
\ref{p.cancel} together with equation \eqref{e.qsplit} immediately
yield
\begin{align*}
\ABs{\ip D_N, h_R,} \lesssim \sum_{r=1} ^{\frac{N|R|}{4} }\frac{1}{N^2|R|}
\lesssim \frac{1}{N}.
\end{align*}
This completes the proof modulo Proposition \ref{p.cancel}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{p.cancel}] For the proof of the proposition we will fix a
$Q=Q_r$ and suppress the index $r$ since it does not play any role.
Suppose $p=(u,v)$ is any of the points with $x$-coordinate as in
\eqref{e.quad} and $y$-coordinate $v$ such that
$p\in\mathcal{V}_{n,\sigma}$. Then it is easy to see that the
quadruple \eqref{e.quad} consists of the four points which can be
written in the form:
\begin{align*}\tag{Q}
\begin{cases}
&(u,v),\\
&(u\oplus 2^{-k-1},v\oplus2^{-n+k}), \\
&(u\oplus 2 ^{-n+l},v\oplus2^{-l-1}),\\
&(u\oplus 2^{-n+l}\oplus 2^{-k-1},v\oplus2^{-n+k}\oplus2^{-l-1}).
\end{cases}
\end{align*}
See also Figure \ref{f.four}.
We invoke equation \eqref{e.phi} to write
\begin{align}\label{e.phiprop}
\sum _{p\in Q} \ip \textbf{1}_ {[\vec p, \vec 1)} ,h_R, -\frac{|R|}{4} = |R|\Big( \sum _{p\in Q} \phi(2^kp_x)\phi(2^lp_y) -\frac{1}{4} \Big)\eqqcolon |R|\big(\Sigma-\frac{1}{4}\big).
\end{align}
We have
\begin{align*}
\Sigma &= \phi(2^ku)\phi(2^lv)\\
&+\phi(2^k u\oplus \frac{1}{2})\phi(2^l ( v\oplus2^{-n+k}))\\
&+\phi(2^k ( u\oplus 2 ^{-n+l}))\phi(2^l v\oplus \frac{1}{2})\\
&+\phi(2^k u\oplus 2^k\cdot 2^{-n+l}\oplus \frac{1}{2})\phi(2^l v\oplus2^l\cdot 2^{-n+k}\oplus\frac{1}{2}).
\end{align*}
Using equation \eqref{e.plushalf} we get
\begin{equation*}
\Sigma=\frac{1}{4}+\big[\phi(2^ku)- \phi(2^k ( u\oplus 2 ^{-n+l}))\big]\big[\phi(2^lv)-\phi(2^l ( v\oplus2^{-n+k}))\big].
\end{equation*}
Finally, using the fact that the function $\phi$ is Lipschitz
\eqref{e.lips}, we have
\begin{align*}
\ABs{\Sigma -\frac{1}{4}} \leq (2^{-n+l+k})^2=\frac{1}{N^2 |R|^2}.
\end{align*}
This estimate together with equation \eqref{e.phiprop} completes the proof.
\end{proof}
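The cancellation in Proposition \ref{p.cancel} can also be observed numerically. The sketch below is a sanity check under our reading of the digit conventions (coordinates of the form $0.b_1\ldots b_n1$, with $\oplus$ implemented as bitwise XOR on the $n$ leading digits); it builds random quadruples \eqref{e.quad} and checks the bound $\ABs{\Sigma-\tfrac14}\le (2^{-n+k+l})^2$ obtained in the proof.

```python
import random

def phi(x):
    f = x - int(x)                 # fractional part, valid for x >= 0
    return f if f < 0.5 else 1.0 - f

def coord(bits, n):
    # The dyadic rational 0.b_1 ... b_n 1 in binary.
    return bits / 2.0**n + 2.0**-(n + 1)

random.seed(1)
n = 12
for _ in range(500):
    k = random.randint(0, n - 2)
    l = random.randint(0, n - 2 - k)           # so that k + l <= n - 2
    U, V = random.getrandbits(n), random.getrandbits(n)
    # The quadruple (Q): flipping digit k+1 of x goes together with
    # flipping digit n-k of y, and digit n-l of x with digit l+1 of y.
    # Digit j of an n-bit integer corresponds to bit position n - j.
    Sigma = 0.0
    for a in (0, 1):
        for b in (0, 1):
            Ux = U ^ (a << (n - k - 1)) ^ (b << l)
            Vy = V ^ (a << k) ^ (b << (n - l - 1))
            Sigma += phi(2**k * coord(Ux, n)) * phi(2**l * coord(Vy, n))
    # The bound from the proof of Proposition (p.cancel).
    assert abs(Sigma - 0.25) <= 4.0**(k + l - n) + 1e-12
```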
Lemma \ref{l.haarcoeffs} has an analogue in the case of Haar
functions $h^{1,0}_{[0,1]\times I}$ and $h^{0,1} _{I\times [0,1]}$,
where $I\in \mathcal{D}$. Observe also that the inner product that
corresponds to $h^{1,1} _{[0,1]^2}$ is the content of Lemma
\ref{l.integral} of the previous section.
\begin{lemma}\label{l.semihaarcoeffs}
For $I\in\mathcal{D}$ we have the estimates
\begin{align*}
&\abs{\ip D_N, h^{0,1}_{I\times [0,1]}, }\lesssim |I|,\\
&\abs{\ip D_N, h^{1,0}_{[0,1]\times I}, }\lesssim |I|.
\end{align*}
\end{lemma}
\begin{proof}
It suffices to prove just the first estimate in the
statement of the Lemma.
The proof proceeds in a more or less analogous fashion as the proof
of Lemma \ref{l.haarcoeffs}. We fix a dyadic interval
$I=\left[\frac{i}{2^k},\frac{i+1}{2^k} \right)$ and write
$h_I=h^{0,1} _{I\times[0,1]}$. We need an analogue of formula
\eqref{e.phi} which in this case becomes
\begin{align}\label{e.phix}
\ip \mathbf 1_{[\vec p,\vec 1)} ,h_I,=
\begin{cases}
|I| \phi (2^kp_x)(1-p_y), \ &p_x\in I,
\\
0, &\textup{otherwise}.
\end{cases}
\end{align}
As in the proof of Lemma \ref{l.haarcoeffs}, we need to consider
separately the case of small volume and large volume rectangles. The
small volume case here is $|I|\leq \frac{2}{N}$. Note that in this
case there are at most $2^{n-k}\leq 2$ points of the van der Corput
set whose $x$ coordinate lies in $I$. Using equation \eqref{e.phix}
we trivially get the desired estimate as in the proof of the
corresponding case of Lemma \ref{l.haarcoeffs}.
We now turn to the main part of the proof, namely the estimate
\begin{equation*}
\abs{\ip D_N, h_I, }\lesssim |I|,
\end{equation*}
when $|I|>\frac{2}{N}$. Instead of the quadruples \eqref{e.quad}, we
now group the points of the van der Corput set with $x$-coordinate
in $I$, into \emph{pairs} \eqref{e.pair} of the form:
\[ \begin{array}{ccc}\tag{P}\label{e.pair}
&0.x_1\ldots x_k \ 0\ x_{k+2} \ldots x_n 1,\\
&0.x_1\ldots x_k \ 1 \ x_{k+2} \ldots x_n 1.
\end{array}\]
If $(u,v)$ is one of the two points in \eqref{e.pair}, we also have the description:
\begin{align*}\tag{P}
\begin{cases}
&(u,v),\\
&(u\oplus 2^{-k-1},v\oplus2^{-n+k}).
\end{cases}
\end{align*}
There are $2^{n-k-1}$ such pairs; let's index them arbitrarily as
$P_r$, $r=1,2,\ldots ,2^{n-k-1}$. We write
\begin{align}
\ip D_N, h_I , = \sum_{p\in\mathcal {V}_{n,\sigma}\cap (I \times[0,1]) } \ip \textbf{1}_ {[\vec p, \vec 1)} ,h_I, -\frac{N|I|^2}{8}&=
\sum_{r=1} ^{2^{n-k-1}} \sum _{p\in P_r} \ip \textbf{1}_ {[\vec p, \vec 1)} ,h_I, -\frac{N|I|^2}{8}.
\end{align}
Now for any pair \eqref{e.pair} we use \eqref{e.phix} to write
\begin{align*}
\sum _{p\in P} \ip \textbf{1}_ {[\vec p, \vec 1)} ,h_I, &= |I|\ \phi(2^ku)(1-v)+|I|\ \phi(2^k(u\oplus2^{-k-1}))(1-v\oplus 2^{-n+k})\\
&=|I|\ \big[ \phi(2^ku)+\phi(2^ku\oplus 2^{-1})\big] \ (1-v)
\\&+|I|\ \phi(2^ku\oplus2^{-1})\ (v-v\oplus2^{-n+k})
\\&=\frac{1}{2}|I|(1-v) + |I|\ \phi(2^ku\oplus2^{-1})\ (v-v\oplus2^{-n+k}),
\end{align*}
where in the last equality we have used \eqref{e.plushalf}. Using
the fact that $\abs{v-v\oplus2^{-n+k}} = 2^{-n+k}$ and assuming
$\textnormal {d}_{n-k} (v)=0$, it is routine to check that
\begin{equation}\label{e.almosthere}
\ip D_N, h_I ,=|I|\ \bigg\{\frac{1}{2} \sum_{r=1} ^{2^{n-k-1}} (1-v_r)-2^{n-k-3}+\mathcal{O}(1)\bigg\},
\end{equation}
where $v_r$ are $y$-coordinates of the form
\begin{align*}
v_r&=0.Y_1\ldots Y_{n-k-1} 0 y_{n-k+1}\ldots y_n.
\end{align*}
The digits $y_{n-k+1}$ up to $y_n$ are fixed
because of the digit reversal structure of the van der Corput set.
We can then estimate the sum in the previous expression as follows:
\begin{align*}
\sum_{r=1} ^{2^{n-k-1}} (1-v_r)&= 2^{n-k-1}-\frac{1}{2}2^{n-k-1}\bigg(1 -2^{-n+k+1}\bigg) + \mathcal{O}(1)=2^{n-k-2}+\mathcal{O}(1).
\end{align*}
Substituting in \eqref{e.almosthere} we get
\begin{equation*}
\ip D_N, h_I ,=|I|\ \bigg\{\frac{1}{2}\big(2^{n-k-2}+\mathcal{O}(1)\big) -2^{n-k-3} +\mathcal{O}(1)\bigg\}\lesssim |I|,
\end{equation*}
which completes the proof.
\end{proof}
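As a numerical sanity check of Lemma \ref{l.semihaarcoeffs} (not part of the proof), one can evaluate the inner products $\ip D_N, h_I,$ directly from formula \eqref{e.phix} for the classical, unscrambled van der Corput set, i.e. $\sigma=0$, assuming the usual digit-reversal construction with a trailing $1$ appended to both coordinates, and verify that the ratios $\abs{\ip D_N, h_I,}/|I|$ stay bounded over all dyadic $I$.

```python
# Classical (sigma = 0) van der Corput set with N = 2^n points:
# x = 0.t_1...t_n 1, y = 0.t_n...t_1 1 in binary, t over n-bit integers.
n = 8
N = 2**n
pts = []
for t in range(N):
    r = int(format(t, '0%db' % n)[::-1], 2)     # digit reversal
    pts.append((t / N + 1 / (2 * N), r / N + 1 / (2 * N)))

def phi(x):
    f = x - int(x)
    return f if f < 0.5 else 1.0 - f

# <D_N, h_I> for h_I = h^{0,1}_{I x [0,1]}, via (e.phix):
# sum of |I| phi(2^k p_x)(1 - p_y) over points with p_x in I, minus N|I|^2/8.
max_ratio = 0.0
for k in range(n + 1):
    for i in range(2**k):
        lo, hi = i / 2**k, (i + 1) / 2**k
        ip = sum(2.0**-k * phi(2**k * px) * (1 - py)
                 for (px, py) in pts if lo <= px < hi)
        ip -= N * (2.0**-k)**2 / 8
        max_ratio = max(max_ratio, abs(ip) * 2**k)   # |<D_N, h_I>| / |I|
assert max_ratio < 4.0   # bounded uniformly, as the lemma predicts
```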
\section{BMO Estimates for the Discrepancy Function}
This section is devoted to the proofs of Theorems \ref{t.bmolower}
and \ref{t.bmo}. We recall that the dyadic Chang--Fefferman space $
\operatorname {BMO}_{1,2}$ is defined to consist of those square
integrable functions $ f$ in the linear span of $ \{h_R \mid R\in
\mathcal D ^2 \}$ for which we have
\begin{equation*}
\norm f. \operatorname {BMO}_{1,2} . \coloneqq \sup _{ U\subset
[0,1] ^2 } \Biggl[ \lvert U\rvert ^{-1} \sum _{\substack{R \in
\mathcal D ^2
\\ R\subset U}}
\frac {\ip f ,h _R , ^2 } {\lvert R\rvert }
\Biggr] ^{1/2} < \infty \,.
\end{equation*}
We begin with the proof of Theorem \ref{t.bmolower} which is
essentially just a repetition of the argument used in Proposition
\ref{p.rvec}.
\begin{proof}[Proof of Theorem \ref{t.bmolower}] We fix a distribution $\mathcal{A}_N$ of $N$ points in the unit square
and take $n$ such that $2N<2^n\leq 4N$. For the special choice of $U=[0,1]^2$ we have
\begin{equation*}
\norm D_N. \operatorname {BMO}_{1,2} .^2 \geq \sum_{\vec r \in\mathbb{H}_n} \sum_{\substack{ R\in\mathcal{R}_{\vec r} \\ R\cap \mathcal{A}_N= \emptyset }} \frac{\ip D_N,h_R,^2}{|R|}.
\end{equation*}
Consider a rectangle $R\in \mathcal{R}_{\vec r}$ which does not
contain any points of $\mathcal{A}_N$. Then
\begin{equation*}
\ip D_N,h_R , = - \ip L_N, h_R, = - \frac{N|R|^2}{4^2}.
\end{equation*}
As a result,
\begin{equation*}
\norm D_N. \operatorname {BMO}_{1,2} .^2 \gtrsim \sum_{\vec r \in\mathbb{H}_n}\sum_{\substack{ R\in\mathcal{R}_{\vec r} \\ R\cap \mathcal{A}_N= \emptyset }} N^2 |R|^3\gtrsim \frac{1}{N} \sum_{\vec r \in\mathbb{H}_n}\sharp \{R\in\mathcal R_{\vec r}, \ R\cap \mathcal{A}_N=\emptyset\}.
\end{equation*}
For fixed $\vec r \in \mathbb{H}_n$ we have $\sharp \{R\in\mathcal R_{\vec r}, R\cap \mathcal{A}_N=\emptyset\}\geq N$, arguing as in the proof of Proposition \ref{p.rvec}. Thus we get
\begin{equation*}
\norm D_N. \operatorname {BMO}_{1,2} .^2 \gtrsim \sum_{\vec r \in\mathbb{H}_n} 1\gtrsim n.
\end{equation*}
This completes the proof since $n\simeq \log N$.
\end{proof}
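The pigeonhole count in the last step is easy to illustrate: for any $N$-point set and any shape $\vec r$ with $r_1+r_2=n$, the $2^n$ congruent dyadic rectangles of that shape partition the square, so at least $2^n-N\geq N$ of them contain no point. A short check, with a random point set standing in for an arbitrary $\mathcal A_N$:

```python
import random

random.seed(2)
N = 50
n = 7                      # 2N < 2^n <= 4N: here 100 < 128 <= 200
pts = [(random.random(), random.random()) for _ in range(N)]

# For every shape (r1, r2) with r1 + r2 = n, at most N of the 2^n cells
# are occupied, so at least 2^n - N >= N of them miss the point set.
for r1 in range(n + 1):
    r2 = n - r1
    occupied = {(int(x * 2**r1), int(y * 2**r2)) for (x, y) in pts}
    assert 2**n - len(occupied) >= N
```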
We proceed with the proof of the upper bound in Theorem \ref{t.bmo}.
Our extremal set of cardinality $N=2^n$ will be
$\mathcal{V}_{n,\sigma}$ for arbitrary $\sigma\in[0,1)$, as defined
in Definition~\ref{d.vdc}. We will just write $D_N$ for the
Discrepancy function of the digit-scrambled van der Corput set.
\begin{proof}[Proof of Theorem \ref{t.bmo}]
We fix a measurable set $ U\subset [0,1] ^2 $ and consider only
rectangles $R$ in the family $ \{R \in \mathcal D ^2,R\subset U\}$.
We will sometimes suppress the fact that our rectangles are
contained in $U$ to simplify the notation.
There are two estimates that are relevant here, one for large
volume rectangles and one for small volume rectangles. For the large volume
case, $|R|\geq 2^{-n}$, we have
\begin{align*}
\lvert U\rvert ^{-1} \sum_{|R|\geq 2^{-n}}
\frac {\ip D_N ,h _R , ^2 } {\lvert R\rvert } &= \lvert U\rvert ^{-1}\sum_{k=0} ^n \sum_{\vec r \in \mathbb{H}_k} \sum_{R\in \mathcal{R}_{\vec r}} \frac{ \ip D_N ,h _R , ^2 }{|R|}\\
&\lesssim N^{-2}\lvert U\rvert ^{-1}\sum_{k=0} ^n 2^k \sum_{\vec r
\in \mathbb{H}_k} \sum_{R\in \mathcal{R}_{\vec r}}1,
\end{align*}
where we have used the estimate $\ip D_N ,h _R , \lesssim
\frac{1}{N}$ of Lemma \ref{l.haarcoeffs}. Now observe that for
fixed $k$ and $\vec r \in\mathbb{H}_k$ there are at most $2^k|U|$
rectangles $R\in\mathcal{R}_{\vec r}$ contained in $U$.
Furthermore, there are $k+1$ choices for the `geometry' $\vec r \in
\mathbb{H}_k$. We thus get
\begin{align*}
\lvert U\rvert ^{-1} \sum_{|R|\geq 2^{-n}} \frac {\ip D_N ,h _R ,
^2 } {\lvert R\rvert } &\lesssim
N^{-2}\sum_{k=0} ^n (k+1)(2^{k})^2 \lesssim \frac{n(2^n)^2}{N^2}= n.
\end{align*}
In the small volume term we treat the linear and the counting parts
separately.
For the linear part we use \eqref{e.vv} to get $\ip L_N ,h _R, =4^{-2}N \lvert R \rvert^2$. So we have
\begin{align*}
\lvert U\rvert ^{-1} \sum_{|R|< 2^{-n}}
\frac {\ip L_N ,h _R , ^2 } {\lvert R\rvert }& = \lvert U\rvert ^{-1}\sum_{k=n+1} ^\infty \sum_{\vec r \in \mathbb{H}_k} \sum_{R\in \mathcal{R}_{\vec r}} \frac{ \ip L_N ,h _R , ^2 }{|R|}\\
&\simeq N^2\lvert U\rvert ^{-1}\sum_{k=n+1} ^\infty \sum_{\vec r \in \mathbb{H}_k} (2^{-k})^3\sum_{R\in \mathcal{R}_{\vec r}} 1.
\end{align*}
Now arguing as in the large volume case we have $\sum_{R\in
\mathcal{R}_{\vec r}} 1\lesssim 2^k |U|$, and thus
\begin{align*}
\lvert U\rvert ^{-1} \sum_{|R|< 2^{-n}}
\frac {\ip L_N ,h _R , ^2 } {\lvert R\rvert }& \lesssim N^2 \sum_{k=n+1} ^\infty k(2^{-k})^2 \lesssim n.
\end{align*}
It remains to bound the counting part that corresponds to small
volume rectangles, i.e.
\begin{equation*}
\lvert U\rvert ^{-1} \sum_{|R|< 2^{-n}}
\frac {\ip C_{\mathcal{V}_{n,\sigma}} ,h _R , ^2 } {\lvert R\rvert }.
\end{equation*}
Let $ \mathcal R $ be the collection of maximal dyadic rectangles $R$ of area at
most $ 2 ^{-n}$, contained inside $ U$, and such that $h_R$ has
non-zero inner product with the counting part. It is essential to
note that
\begin{equation} \label{e.b1}
\sum _{R\in \mathcal R} \lvert R\rvert \lesssim n \lvert U\rvert\,.
\end{equation}
Indeed, for each rectangle $ R\in \mathcal R$, the function $ h _R
$ is, as we have observed, orthogonal to each $\mathbf 1_{[\vec p,
\vec 1) } $ with $\vec p$ not in the interior of $ R$. Thus, $ R$
must contain at least one element of the van der Corput set in its interior.
On the other hand, $\mathcal{V}_{n,\sigma}$ is a net, so $R$ contains
exactly one point. Now look at all the rectangles
$R\in\mathcal{R}$, $R=R_x\times R_y$, with a fixed side length
$|R_x|$. The length of this side must be at least $2^{-n}$ in order
for the rectangle to contain a point of the van der Corput set in
its interior, so there are at most $n$ choices for $|R_x|$. On the
other hand, the rectangles in $\mathcal{R}$ with the same side
length must be disjoint since they are maximal and dyadic. Since
they are all contained in $U$, their union has volume at most $|U|$.
Summing over all possible side lengths $|R_x|$ proves \eqref{e.b1}.
Now, we can write
\begin{align*}
\lvert U\rvert ^{-1} \sum _{|R|<2^{-n}}
\frac {\ip C_{\mathcal{V}_{n,\sigma}} ,h _R , ^2 } {\lvert R \rvert } \leq \lvert U\rvert ^{-1} \sum_{R\in \mathcal {R}} \sum_{R^\prime \subseteq R} \frac {\ip C_{\mathcal{V}_{n,\sigma}} ,h _{R^\prime }, ^2 } {\lvert R^\prime \rvert }.
\end{align*}
Note that we have inequality instead of equality, since a rectangle
$R$ can be contained in several maximal rectangles. However, this
does not create any problem.
Let $R\in \mathcal{R}$ be fixed and let $\vec{p}_R$ be the unique
point of $\mathcal{V}_{n,\sigma}$ contained in $R$. We can use
Bessel's inequality to bound the inner sum:
\begin{equation}
\sum_{R^\prime \subseteq R} \frac {\ip C_{\mathcal{V}_{n,\sigma}} ,h _{R^\prime }, ^2 }
{\lvert R^\prime \rvert }\leq \norm \mathbf{1}_{[\vec{p}_R, \vec{1})} . L^2(R).^2 \leq |R| \,
.
\end{equation}
Thus, by \eqref{e.b1}
\begin{align*}
\lvert U\rvert ^{-1} \sum _{|R|<2^{-n}} \frac {\ip C_{\mathcal{V}_{n,\sigma}} ,h _{R }, ^2 } {\lvert R \rvert } \lesssim
\lvert U\rvert ^{-1} \sum_{R\in \mathcal{R}} \lvert R \vert \lesssim
n.
\end{align*}
The proof is finished, since we have shown that for any measurable
set $U\subset [0,1]^2$
\begin{align*}
\Bigg( \lvert U\rvert ^{-1} \sum _{\substack{R \in \mathcal D ^2
\\ R\subset U}}
\frac {\ip D_N ,h _R , ^2 } {\lvert R\rvert } \Bigg)^\frac12
\lesssim n^\frac12 \simeq \sqrt{\log N}.
\end{align*}
\end{proof}
\section{The $\textnormal{exp}(L^\alpha)$ Estimates for the Discrepancy Function.}
\subsection{Lower bound: The Proof of Theorem~\ref{t.lower}}
The proof is by way of duality and is very similar to Hal{\'a}sz's proof
\cite{MR637361} of Schmidt's Theorem, see \eqref{e.schmidt}. Fix the
point distribution $ \mathcal A_N \subset [0,1] ^2 $. Set $ 2N<2
^{n} \le 4N$, so that $ n \simeq \log N$. Proposition~\ref{p.rvec}
provides us with functions $ f _{\vec r} $ for $\vec
r\in \mathbb H _n$. Let $ \mathbb G _n \subset \mathbb H
_n$ be the set of those elements of $ \mathbb H _n$ whose first
coordinate is a multiple of a sufficiently large integer $ a$. We
construct the following functions:
\begin{equation} \label{e.Psi}
\Psi \coloneqq \prod _{\vec r \in \mathbb G _n } (1+ f _{\vec
r}), \qquad \quad \widetilde \Psi \coloneqq \Psi -1 .
\end{equation}
\end{equation}
The `product rule' \ref{p.productRule} easily implies that $ \Psi $
is a positive function with $L^1$ norm one. In fact, letting $
g=\sharp \, \mathbb G _n $, it is clear that
\begin{equation*}
\Psi = 2 ^{g} \mathbf 1_{E}\,, \qquad \mathbb P (E)= 2 ^{-g}\,.
\end{equation*}
Therefore, by Proposition~\ref{p.indicator},
\begin{equation*}
\norm \widetilde \Psi . L (\log L) ^{1/\alpha }. \simeq g
^{1/\alpha } \simeq n ^{1/\alpha }\,.
\end{equation*}
The fact that $\ip D_N, \widetilde \Psi, \gtrsim n$ is well-known
\cite{MR637361}, \cite{MR1697825}. In fact, if we expand
\begin{align*}
\widetilde \Psi &= \sum _{k=1} ^{g} \Psi _{k} \,,
\\
\Psi _k & = \sum _{ \{\vec r_1 ,\dotsc, \vec r_k\} \subset \mathbb G
_n } \prod _{\ell =1} ^{k} f _{\vec r_\ell }\, ,
\end{align*}
then, using the `product rule' \ref{p.productRule}, it is not hard
to see that we have
\begin{equation}\label{e.a}
\ip D_N , \Psi _1 , \gtrsim g \gtrsim \frac n a \, ,
\end{equation}
and the other, higher order terms can be summed up, using
Propositions \ref{p.>n} and \ref{p.Not2Many}, to give a much smaller
estimate for $ a$ sufficiently large.
Thus, we can estimate
\begin{equation*}
n \lesssim \ip D_N, \widetilde \Psi , \lesssim \norm D_N .
\operatorname {exp} (L ^{\alpha }). \cdot n ^{1/\alpha }\, ,
\end{equation*}
and so Theorem~\ref{t.lower} holds.
\subsection{Upper bound: The Proof of Theorem \ref{t.vdc} in the case that $ N=2 ^{n}$.}
In this section we shall obtain the upper bound of the
$\operatorname{exp} (L^2)$ norm of the discrepancy of the
digit-scrambled van der Corput set. We shall consider the case of $ N= 2 ^{n}$, leaving the
general case to later. Lemma \ref{l.integral} tells us
that we should choose $\mathcal V_{n,\sigma}$ with half the digits
`scrambled', i.e. $\sum_{i=1}^n \textnormal {d}_i (\sigma) = \lfloor n/2 \rfloor$
-- this will be the only restriction on $\sigma$ and for simplicity
we shall assume that $n$ is even.
We expand $D_N$ in the Haar series and break the expansion into
several parts (in view of our choice of $\sigma$, $h^{1,1}$ does not
play a role in the expansion):
\begin{align}
\nonumber D_N & = \sum_{R\in \mathcal D^2} \frac{\ip D_N, h_R,}{|R|}h_R
+ \sum_{R=I \times [0,1]} \frac{\ip D_N, h^{0,1}_R,}{|R|}h^{0,1}_R
+ \sum_{R= [0,1]\times I} \frac{\ip D_N, h^{1,0}_R,}{|R|}h^{1,0}_R\\
\label{e.hexp}& = \sum_{R:|R|> 2^{-n}} \frac{\ip D_N, h_R,}{|R|}h_R
+ \sum_{R:|R|\le 2^{-n}} \frac{\ip C_N, h_R,}{|R|}h_R -
\sum_{R:|R|\le 2^{-n}} \frac{\ip L_N, h_R ,}{|R|}h_R\\
\label{e.hexp1}& \,\,\, + \sum_{R=I \times [0,1]} \frac{\ip D_N,
h^{0,1}_R,}{|R|}h^{0,1}_R
+ \sum_{R= [0,1]\times I} \frac{\ip D_N, h^{1,0}_R,}{|R|}h^{1,0}_R
\end{align}
For the first sum in the expansion \eqref{e.hexp} above we have:
\begin{align*}
\nonumber \NOrm \sum_{R:|R|> 2^{-n}} \frac{\ip D_N, h_R,}{|R|}h_R .
\operatorname{exp}(L^2). & \le \sum_{k=0}^{n-1} \NORm \sum_{R:|R|=
2^{-k}} \frac{\ip D_N, h_R,}{|R|}h_R . \operatorname{exp}(L^2).\\
\nonumber &\lesssim \sum_{k=0}^{n-1} \NORM \Bigg( \sum_{R:|R|=
2^{-k}} \frac{\ip D_N,
h_R, ^2}{|R|^2} {\mathbf 1}_R \Bigg)^{\frac12}.\infty.\\
& \lesssim \sum_{k=0}^{n-1} \frac1{N} \cdot
\sqrt{k+1} \cdot 2^k \approx \sqrt{n},
\end{align*}
where we have used the hyperbolic version of the Chang--Wilson--Wolff
inequality (Theorem \ref{t.one}), the estimate of the Haar
coefficients of $D_N$ (Lemma \ref{l.haarcoeffs}), and the fact that
each point in $[0,1]^2$ lives in $k+1$ dyadic rectangles of volume
$2^{-k}$.
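The last counting fact deserves a one-line justification: for each of the $k+1$ shapes $(r_1,r_2)$ with $r_1+r_2=k$ there is exactly one dyadic rectangle of that shape containing a given point. A brute-force numerical confirmation (a sketch, not part of the argument):

```python
import random

def count_rects(x, y, k):
    """Brute-force count of dyadic rectangles of area 2^{-k} containing (x, y)."""
    total = 0
    for r1 in range(k + 1):
        r2 = k - r1
        for i in range(2**r1):
            for j in range(2**r2):
                if i / 2**r1 <= x < (i + 1) / 2**r1 and j / 2**r2 <= y < (j + 1) / 2**r2:
                    total += 1
    return total

random.seed(3)
for k in range(8):
    x, y = random.random(), random.random()
    assert count_rects(x, y, k) == k + 1   # one rectangle per shape (r1, r2)
```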
The last sum in \eqref{e.hexp} is easy to estimate. Since $\ip L_N,
h_R, = 4^{-2} N |R|^2$, we have:
\begin{align}
\nonumber \NOrm \sum_{R:|R|\le 2^{-n}} \frac{\ip L_N, h_R,}{|R|}h_R
. \operatorname{exp}(L^2). & \le 4^{-2} \sum_{k=n}^{\infty} \NOrm
\sum_{R:|R|=
2^{-k}} N 2^{-k} h_R . \operatorname{exp}(L^2).\\
\nonumber &\lesssim N \sum_{k=n}^\infty 2^{-k} \NORm \bigg(
\sum_{R:|R|= 2^{-k}} {\mathbf 1}_R \bigg)^{\frac12}.\infty.\\
\label{e.exp3} & \lesssim N \sum_{k=n}^{\infty} \sqrt{k+1} \cdot
2^{-k} \approx \sqrt{n},
\end{align}
where we have once again applied Theorem \ref{t.one}.
The second sum in \eqref{e.hexp} is the hardest. We consider
rectangles $R$ of volume $|R|\le 2^{-n}$. Recall that, in order for
$\ip C_N, h_R,$ to be non-zero, $R$ must contain points of $\mathcal
V_{n,\sigma}$ in the interior. The structure of the van der Corput
set then implies that we must have $|R_1|,|R_2| \ge
2^{-n}$. For each such rectangle $R$, one can find a unique
`parent': a dyadic rectangle $\widetilde{R} \subset [0,1]^2$ with
$|\widetilde{R}|= 2^{-n}$, $\widetilde{R}_1=R_1$, and $R\subset
\widetilde{R}$. We can now write
\begin{align}\label{e.e1}
\NOrm \sum_{R: |R|\le 2^{-n}} \frac{\ip C_N, h_R,}{|R|} h_R . p. =
\NORM \sum_{k=0}^{n} \, \sum_{\substack{\widetilde{R}:\,
|\widetilde{R}|=2^{-n}\\ |\widetilde{R}_1|=2^{-k}}} \,\sum_{\substack{R\subset
\widetilde{R}\\ R_1=\widetilde{R}_1}} \frac{\ip C_N, h_R,}{|R|} h_R.
p.
\end{align}
A given rectangle $\widetilde{R}$ as above contains precisely one
point $(p_1,p_2)$ from the set $\mathcal V_{n,\sigma}$. Thus,
\begin{align}\label{e.e2}
\sum_{\substack{R\subset \widetilde{R}\\ R_1=\widetilde{R}_1}} \frac{\ip C_N,
h_R,}{|R|} \, h_R (x_1, x_2) = C_{\widetilde{R}}(x_2) \frac{\ip
h_{\widetilde{R}_1}, \mathbf 1_{[p_1,1]},}{|\widetilde{R}_1|} \,
h_{\widetilde{R}_1} (x_1),
\end{align}
where
\begin{align}\label{e.e3}
C_{\widetilde{R}}(x_2) = \sum_{I\subset \widetilde{R}_2} \frac{\ip
h_{I}, \mathbf 1_{[p_2,1]},}{|I|} \, h_{I} (x_2)
=\begin{cases} \mathbf 1_{[p_2,1]} (x_2) - \frac{1}{|\widetilde{R}_2|}\int_{\widetilde{R}_2}
\mathbf 1_{[p_2,1]} (x)\, dx,\ & x_2\in \widetilde{R}_2,\\
0, & x_2 \not\in \widetilde{R}_2.
\end{cases}
\end{align}
In any case, we have $|C_{\widetilde{R}}(x_2)|\le 2$. Now we fix
$x_2 \in [0,1]$. For fixed $x_2$ and $\widetilde{R}_1$, there is a
unique $\widetilde{R}$ such that the sum in \eqref{e.e2} is
non-zero. Thus, using \eqref{e.e1} and \eqref{e.e2},
\begin{align*}
\sum_{R: |R|\le 2^{-n}} \frac{\ip C_N, h_R,}{|R|} \, h_R (x_1, x_2)&= \sum_{k=0}^n \,\, \sum_{\widetilde{R}_1:\,
|\widetilde{R}_1|=2^{-k}} \frac{C_{\widetilde{R}}(x_2) \ip
h_{\widetilde{R}_1}, \mathbf
1_{[p_1,1]},}{|\widetilde{R}_1|} \, h_{\widetilde{R}_1} (x_1)\\
&=\sum_{k=0}^n \,\, \sum_{\widetilde{R}_1:\,
|\widetilde{R}_1|=2^{-k}}
\frac{\alpha_{\widetilde{R}_1}(x_2)}{|\widetilde{R}_1|} \,
h_{\widetilde{R}_1} (x_1),
\end{align*}
where the Haar coefficient $\alpha_{\widetilde{R}_1}(x_2)$ satisfies
$|\alpha_{\widetilde{R}_1}(x_2)|\lesssim |\widetilde{R}_1|$. Next,
we apply the one-dimensional Littlewood-Paley inequality in the
variable $x_1$:
\begin{align*}
\NOrm \sum_{R: |R|\le 2^{-n}} \frac{\ip C_N, h_R,}{|R|} h_R .
L^p(x_1). \lesssim p^\frac12 \NORM \, \Bigg(
\sum_{\widetilde{R}_1:\, |\widetilde{R}_1|\ge 2^{-n}}
\frac{|\alpha_{\widetilde{R}_1}(x_2)|^2}{|\widetilde{R}_1|^2} \,
\mathbf 1_{\widetilde{R}_1} \Bigg)^\frac12 .L^p (x_1). \lesssim
p^\frac12 n^\frac12 .
\end{align*}
We now integrate this estimate in $x_2$ to obtain
\begin{align*}
\NOrm \sum_{R: |R|\le 2^{-n}} \frac{\ip C_N, h_R,}{|R|} h_R . p.
\lesssim p^\frac12 n^\frac12,
\end{align*}
and thus
\begin{align*}
\NOrm \sum_{R: |R|\le 2^{-n}} \frac{\ip C_N, h_R,}{|R|} h_R .
\operatorname{exp}(L^2). \lesssim n^\frac12,
\end{align*}
in view of Proposition \ref{p.comparable}. Thus, we have estimated
the $\operatorname{exp}(L^2)$ norms of all the terms in
\eqref{e.hexp} by $n^\frac12$. The estimates for $(0,1)$ and $(1,0)$
Haars in \eqref{e.hexp1} can be easily incorporated, invoking
similar one-dimensional arguments and Lemma \ref{l.semihaarcoeffs}.
We skip these computations for the sake of brevity. We thus arrive
at
\begin{equation}
\Norm D_N. \operatorname{exp}(L^2). \lesssim \sqrt{n} \approx
\sqrt{\log N}.
\end{equation}
Proposition \ref{p.AA} and inequality \eqref{e.schmidtsharp} finish
the proof of Theorem \ref{t.vdc} for all $\alpha \ge 2$.
\subsection{Upper bound: The Proof of Theorem \ref{t.vdc} in the General Case.}
We use a standard argument to generalize the previous proof to the case of arbitrary $ N$.
Fix $ 2 ^{n-1}< N < N' \coloneqq 2 ^{n}$. Set $ \tfrac 12 <t=N 2 ^{-n} + 2 ^{-n-1}<1$. Consider the following function
\begin{equation*}
\Delta _{N} (x_1, x_2) \coloneqq D_{N'} (t x _1, x_2)-\tfrac 12 x_1 \cdot x_2\,, \qquad (x_1,x_2)\in [0,1] ^2 \,.
\end{equation*}
Here, $ D _{N'}$ is the Discrepancy Function of a shifted
van der Corput set $ \mathcal V _{n, \sigma }$. (The `$ -\tfrac 12 x_1 \cdot x_2$' above arises from the
precise definition of the van der Corput set.)
The observation is that $ \Delta _N$ is in fact the Discrepancy Function of the set of points
$ \{ v _{n,\sigma } (\tau ) \;:\; \tau = 0, 1 ,\dotsc, N\}$, where this notation is given in Definition~\ref{d.vdc}.
For the linear part of the Discrepancy Function, note that
\begin{equation*}
N' (t x_1) \cdot x_2 -\tfrac 12 x_1 \cdot x_2= N x_1 \cdot x_2 \,.
\end{equation*}
And for the counting part, note that $ \mathbf 1_{ [ v _{n,\sigma } (\tau ),1) } (t x_1, x_2)$,
restricted to $ [0,1] ^2 $, will be the indicator of a rectangle with one corner anchored at the
upper right hand corner of the unit square. Moreover, it will be
identically zero on $ [0,1] ^2 $ iff $ N<\tau \le N'$. Thus,
$ \Delta _{N} $ is a Discrepancy Function.
So it suffices for us to estimate the $ \operatorname {exp}(L ^{\alpha })$ norm of $ \Delta _N$.
But this is straightforward.
\begin{align*}
\norm \Delta _N . \operatorname {exp}(L ^{\alpha }).
& \le 1+ \norm D _{N'} (t x_1, x_2) . \operatorname {exp}(L ^{\alpha }).
\\
& \le 1 + t ^{-1} \norm D _{N'} ( x_1, x_2) . \operatorname {exp}(L ^{\alpha }).
\lesssim (\log N) ^{1/\alpha }\,, \qquad 2\le \alpha < \infty \,.
\end{align*}
\begin{remark}\label{r.bmo} We make a final remark on the other upper bound of the dyadic $ BMO$ estimate
of the digit-scrambled van der Corput set in Theorem~\ref{t.bmo}. It is natural to guess that this
estimate should hold for all $ N$, and for $ BMO$. A natural way to prove this is via the
approach developed in \cites{MR2400405,809.3288}, but carrying out this argument is not completely straightforward.
\end{remark}
\begin{bibsection}
\begin{biblist}
\bib{MR1032337}{article}{
author={Beck, J{\'o}zsef},
title={A two-dimensional van Aardenne-Ehrenfest theorem in
irregularities of distribution},
journal={Compositio Math.},
volume={72},
date={1989},
number={3},
pages={269\ndash 339},
issn={0010-437X},
review={MR1032337 (91f:11054)},
}
\bib{MR903025}{book}{
author={Beck, J{\'o}zsef},
author={Chen, William W. L.},
title={Irregularities of distribution},
series={Cambridge Tracts in Mathematics},
volume={89},
publisher={Cambridge University Press},
place={Cambridge},
date={1987},
pages={xiv+294},
isbn={0-521-30792-9},
review={MR903025 (88m:11061)},
}
\bib{math.CA/0609815}{article}{
author={Bilyk, Dmitriy},
author={Lacey, Michael T.},
title={On the small ball inequality in three dimensions},
journal={Duke Math. J.},
volume={143},
date={2008},
number={1},
pages={81--115},
issn={0012-7094},
review={\MR{2414745}},
eprint={arXiv:math.CA/0609815},
}
\bib{0705.4619}{article}{
author={Bilyk, Dmitriy},
author={Lacey, Michael T.},
author={Vagharshakyan, Armen},
title={On the small ball inequality in all dimensions},
journal={J. Funct. Anal.},
volume={254},
date={2008},
number={9},
pages={2470--2502},
issn={0022-1236},
review={\MR{2409170}},
eprint={arXiv:0705.4619},
}
\bib{MR584078}{article}{
author={Chang, Sun-Yung A.},
author={Fefferman, Robert},
title={A continuous version of duality of $H\sp{1}$ with BMO on the
bidisc},
journal={Ann. of Math. (2)},
volume={112},
date={1980},
number={1},
pages={179--201},
issn={0003-486X},
review={\MR{584078 (82a:32009)}},
}
\bib{cf1}{article}{
author={Chang, Sun-Yung A.},
author={Fefferman, Robert},
title={Some recent developments in Fourier analysis and $H\sp p$-theory
on product domains},
journal={Bull. Amer. Math. Soc. (N.S.)},
volume={12},
date={1985},
number={1},
pages={1\ndash 43},
issn={0273-0979},
review={MR 86g:42038},
}
\bib{MR800004}{article}{
author={Chang, S.-Y. A.},
author={Wilson, J. M.},
author={Wolff, T. H.},
title={Some weighted norm inequalities concerning the Schr\"odinger
operators},
journal={Comment. Math. Helv.},
volume={60},
date={1985},
number={2},
pages={217\ndash 246},
issn={0010-2571},
review={MR800004 (87d:42027)},
}
\bib{MR610701}{article}{
author={Chen, W. W. L.},
title={On irregularities of distribution},
journal={Mathematika},
volume={27},
date={1980},
number={2},
pages={153--170 (1981)},
issn={0025-5793},
review={\MR{610701 (82i:10044)}},
}
\bib{MR711520}{article}{
author={Chen, W. W. L.},
title={On irregularities of distribution. II},
journal={Quart. J. Math. Oxford Ser. (2)},
volume={34},
date={1983},
number={135},
pages={257--279},
issn={0033-5606},
review={\MR{711520 (85c:11065)}},
}
\bib{Cor35}{article}{
author={van der Corput, J. G.},
title={Verteilungsfunktionen I},
journal={Akad. Wetensch. Amsterdam, Proc.},
volume={38},
date={1935},
pages={813\ndash 821},
}
\bib{2000b:60195}{article}{
author={Dunker, Thomas},
author={K{\"u}hn, Thomas},
author={Lifshits, Mikhail},
author={Linde, Werner},
title={Metric entropy of the integration operator and small ball
probabilities for the Brownian sheet},
language={English, with English and French summaries},
journal={C. R. Acad. Sci. Paris S\'er. I Math.},
volume={326},
date={1998},
number={3},
pages={347\ndash 352},
issn={0764-4442},
review={MR2000b:60195},
}
\bib{MR1439553}{article}{
author={Fefferman, R.},
author={Pipher, J.},
title={Multiparameter operators and sharp weighted inequalities},
journal={Amer. J. Math.},
volume={119},
date={1997},
number={2},
pages={337\ndash 369},
issn={0002-9327},
review={MR1439553 (98b:42027)},
}
\bib{MR637361}{article}{
author={Hal{\'a}sz, G.},
title={On Roth's method in the theory of irregularities of point
distributions},
conference={
title={Recent progress in analytic number theory, Vol. 2},
address={Durham},
date={1979},
},
book={
publisher={Academic Press},
place={London},
},
date={1981},
pages={79--94},
review={\MR{637361 (83e:10072)}},
}
\bib{HAZ}{article}{
author={Halton, J. H.},
author={Zaremba, S. K.},
title={The extreme and {$L\sp{2}$} discrepancies of some plane sets},
journal={Monatsh. Math.},
volume={73},
date={1969},
pages={316--328},
review={\MR{0252329 (40 \#5550)}},
}
\bib{KrPil}{article}{
author={Kritzer, P.},
author={Pillichshammer, F.},
title={An exact formula for the $L_2$ discrepancy of the shifted Hammersley point set},
journal={Uniform Distribution Theory},
volume={1},
date={2006},
number={1},
pages={1--13},
}
\bib{math.NT/0609817}{article}{
author={Lacey, Michael T.},
title={On the Discrepancy Function in Arbitrary Dimension, Close to $L^{1}$},
journal={to appear in Analysis Mathematica},
eprint={arXiv:math.NT/0609817},
date={2006},
}
\bib{MR0500056}{book}{
author={Lindenstrauss, Joram},
author={Tzafriri, Lior},
title={Classical Banach spaces. I},
note={Sequence spaces;
Ergebnisse der Mathematik und ihrer Grenzgebiete, Vol. 92},
publisher={Springer-Verlag},
place={Berlin},
date={1977},
pages={xiii+188},
isbn={3-540-08072-4},
review={\MR{0500056 (58 \#17766)}},
}
\bib{MR1697825}{book}{
author={Matou{\v{s}}ek, Ji{\v{r}}\'\i},
title={Geometric discrepancy},
series={Algorithms and Combinatorics},
volume={18},
note={An illustrated guide},
publisher={Springer-Verlag},
place={Berlin},
date={1999},
pages={xii+288},
isbn={3-540-65528-X},
review={\MR{1697825 (2001a:11135)}},
}
\bib{MR850744}{article}{
author={Pipher, Jill},
title={Bounded double square functions},
language={English, with French summary},
journal={Ann. Inst. Fourier (Grenoble)},
volume={36},
date={1986},
number={2},
pages={69\ndash 82},
issn={0373-0956},
review={\MR{850744 (88h:42021)}},
}
\bib{MR2400405}{article}{
author={Pipher, Jill},
author={Ward, Lesley A.},
title={BMO from dyadic BMO on the bidisc},
journal={J. Lond. Math. Soc. (2)},
volume={77},
date={2008},
number={2},
pages={524--544},
issn={0024-6107},
review={\MR{2400405}},
}
\bib{MR0066435}{article}{
author={Roth, K. F.},
title={On irregularities of distribution},
journal={Mathematika},
volume={1},
date={1954},
pages={73--79},
issn={0025-5793},
review={\MR{0066435 (16,575c)}},
}
\bib{MR0319933}{article}{
author={Schmidt, Wolfgang M.},
title={Irregularities of distribution. VII},
journal={Acta Arith.},
volume={21},
date={1972},
pages={45--50},
issn={0065-1036},
review={\MR{0319933 (47 \#8474)}},
}
\bib{MR554923}{book}{
author={Schmidt, Wolfgang M.},
title={Lectures on irregularities of distribution},
series={Tata Institute of Fundamental Research Lectures on Mathematics
and Physics},
volume={56},
publisher={Tata Institute of Fundamental Research},
place={Bombay},
date={1977},
pages={v+128},
review={\MR{554923 (81d:10047)}},
}
\bib{MR0491574}{article}{
author={Schmidt, Wolfgang M.},
title={Irregularities of distribution. X},
conference={
title={Number theory and algebra},
},
book={
publisher={Academic Press},
place={New York},
},
date={1977},
pages={311--329},
review={\MR{0491574 (58 \#10803)}},
}
\bib{MR95k:60049}{article}{
author={Talagrand, Michel},
title={The small ball problem for the Brownian sheet},
journal={Ann. Probab.},
volume={22},
date={1994},
number={3},
pages={1331\ndash 1354},
issn={0091-1798},
review={\MR{95k:60049}},
}
\bib{MR96c:41052}{article}{
author={Temlyakov, V. N.},
title={An inequality for trigonometric polynomials and its application
for estimating the entropy numbers},
journal={J. Complexity},
volume={11},
date={1995},
number={2},
pages={293\ndash 307},
issn={0885-064X},
review={\MR{96c:41052}},
}
\bib{809.3288}{article}{
author={Sergei Treil},
title={$H^1$ and dyadic $H^1$},
date={2008},
eprint={http://arxiv.org/abs/0809.3288},
}
\bib{MR1018577}{article}{
author={Wang, Gang},
title={Sharp square-function inequalities for conditionally symmetric
martingales},
journal={Trans. Amer. Math. Soc.},
volume={328},
date={1991},
number={1},
pages={393--419},
issn={0002-9947},
review={\MR{1018577 (92c:60067)}},
}
\end{biblist}
\end{bibsection}
\end{document}
\section{Introduction}
\label{sec:intro}
In recordings produced in natural settings with multiple speakers present, it often occurs that more than one person will speak at the same time.
The resulting overlapped speech can cause a severe degradation in the performance of speech processing technologies designed for only a single speech signal, such as automatic speech recognition and speaker identification. Moreover, overlapped speech can be difficult to understand for human listeners as well. Speech separation systems aim to solve this problem by producing multiple waveforms, each estimating the clean speech of a single speaker, from recordings of overlapped speech.
Great advancements have been made in recent years on solving the speech separation problem through deep learning-based techniques~\cite{Hershey2016,Isik2016,Kolbaek2017,Wang2018ICASSP04Alternative,luo2019convTasNet,shi2019furcanext}. However, the overwhelming majority of research conducted thus far has used the wsj0-2mix dataset~\cite{Hershey2016}, which consists of synthetically-mixed studio recordings of read utterances from the WSJ0 corpus~\cite{garofolo1993csr} and is not representative of many real-world scenarios in which overlapped speech may be present~\cite{bengio2005machine}. In many cases where multiple people are speaking at the same time, they are not speaking directly into the microphone, and are instead captured by a microphone placed at some distance away in the room, as in meetings or in home settings. In these far-field conditions, the distance from the source to the microphone can lead to a relative increase in noise compared to the speech and to increased reverberation \cite{gannot2017perspective}, neither of which are present in the most common deep learning-based speech separation evaluations. %
The addition of noise not only masks the speech signal but also corrupts phase information, while reverberation causes spectral smearing of the source. These phenomena could be challenging for separation systems which rely on the spectral structure of speech in the time-frequency domain~\cite{Vincent2018textbook}. The introduction of the WHAM! dataset~\cite{wichern2019wham}, consisting of two speaker mixtures from the wsj0-2mix dataset together with real ambient noise, was a first step in the direction of more realism. It did not however consider reverberation or more generally spatialization of the speech signals, despite the noise samples being recorded in stereo.
To aid in the development and evaluation of speech separation systems in even more realistic conditions, we introduce the WHAMR! dataset that adds reverberation to WHAM!'s noise augmentation of wsj0-2mix. We have generated realistic room parameters which are used to generate room impulse responses that can produce reverberant audio waveforms for each source in a manner similar to the multi-channel version of wsj0-2mix introduced in \cite{Wang2018ICASSP04MultiChannel}, but with the microphone geometry constrained by the binaural recording setup used to collect the WHAM! noise corpus. %
Although some noisy and reverberant speech separation datasets were introduced in~\cite{maciejewski2019data}, they are constructed using actual recordings of noisy and reverberant speech. As such, they lack ground truth for clean and anechoic speech. WHAMR! provides a contrasting and complementary data paradigm; similarly to other WSJ0-based speech separation datasets, WHAMR! is constructed synthetically, with artificially-mixed speech plus noise and artificial reverberation. This synthetic construction provides the ground truth of all speech signals with and without reverberation, which is necessary to effectively train and evaluate deep learning-based systems.
In this paper, we investigate the performance of various systems for clean, noisy, reverberant, and noisy plus reverberant separation as well as enhancement (denoising and dereverberation) tasks based on the WHAMR! dataset, establishing strong baselines and proposing new cascaded combination systems that can be trained end-to-end. %
\section{WHAMR! Dataset}
\label{sec:data}
The WHAMR! dataset\footnote{Available at: \url{http://wham.whisper.ai}} is an extension of the WHAM! dataset \cite{wichern2019wham}, which is a noise-augmented version of the wsj0-2mix dataset \cite{Hershey2016}. The wsj0-2mix dataset consists of mixtures of utterances from the WSJ0 corpus, combined with random gain between 0 and 5 dB to create overlapping speech. There are four configurations: a \textit{min} condition where the mixture is trimmed to the length of the shorter utterance and the corresponding non-trimmed \textit{max} condition, both available at 8~kHz and 16~kHz sampling rate. The mixtures are partitioned into training, validation, and test sets of 20,000, 5,000, and 3,000 mixtures respectively. %
In the WHAM! dataset, each speech mixture from the wsj0-2mix corpus was associated to a randomly sampled excerpt from noises recorded with binaural microphones in various urban environments throughout the San Francisco Bay Area, and mixed such that the louder speaker was at a randomly selected SNR between $-6$ and $+3$~dB relative to the noise~\cite{wichern2019wham}.
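The released WHAM! mixing scripts are not reproduced here, but the SNR logic above can be sketched as follows; \texttt{mix\_at\_snr} is a hypothetical helper that scales the noise so the (louder) speech sits at the drawn SNR.

```python
import math
import random

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that `speech` sits `snr_db` dB above it, then sum.

    Inputs are equal-length sequences of float samples; power is mean square.
    """
    p_speech = sum(x * x for x in speech) / len(speech)
    p_noise = sum(x * x for x in noise) / len(noise)
    # Target noise power p' must satisfy p_speech / p' = 10^(snr_db / 10).
    gain = math.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return [s + gain * n for s, n in zip(speech, noise)]

# WHAM!-style draw: louder speaker between -6 and +3 dB relative to the noise.
snr_db = random.uniform(-6.0, 3.0)
```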
\begin{table}[tbp]
\centering
\vspace{-.1cm}
\caption{Room impulse response parameter sampling distributions. Units for all parameters are meters, with the exception of reverberation time~($T_{60}$), which is in seconds, and angles, which are in radians.}
\vspace{0.05cm}
\label{table:rir_params}
\begin{adjustbox}{max width=\columnwidth}
\subfloat{
\begin{tabular}{@{}cc|c@{}}
\toprule
\multirow{3}{*}{\textbf{Room}} & L & $\mathcal{U}(5, 10)$ \\
& W & $\mathcal{U}(5, 10)$ \\
& H & $\mathcal{U}(3, 4)$ \\ \midrule
\multirow{3}{*}{$\mathbf{T_{60}}$} & high & $\mathcal{U}(0.4, 1.0)$ \\
& med. & $\mathcal{U}(0.2, 0.6)$ \\
& low & $\mathcal{U}(0.1, 0.3)$ \\ \bottomrule
\end{tabular}
}
\quad
\subfloat{
\begin{tabular}{@{}cc|c@{}}
\toprule
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Mic.}\\ \textbf{Center}\end{tabular}} & L & $\frac{L_\text{Room}}{2}+\mathcal{U}(-0.2, 0.2)$ \\[1pt]
& W & $\frac{W_\text{Room}}{2}+\mathcal{U}(-0.2, 0.2)$ \\[1pt]
& H & $\mathcal{U}(0.9, 1.8)$ \\ \midrule
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Mic.}\\ \textbf{Array}\end{tabular}} & sep. & noise mic. separation \\
& $\theta$ & $\mathcal{U}(0, 2\pi)$ \\ \midrule
\multirow{3}{*}{\textbf{Sources}} & H & $\mathcal{U}(0.9, 1.8)$ \\
& dist. & $\mathcal{U}(0.66, 2)$ \\
& $\theta$ & $\mathcal{U}(0, 2\pi)$ \\ \bottomrule
\end{tabular}
} \end{adjustbox}
\vspace{-0.6cm}
\end{table}
WHAMR! extends WHAM! by introducing reverberation to the speech sources in addition to the existing noise. Room impulse responses were generated and convolved using pyroomacoustics~\cite{scheibler2018pyroomacoustics} according to the random room configurations shown in Table~\ref{table:rir_params}. Reverberation times were chosen to approximate domestic and classroom environments~\cite{gannot2017perspective} (as we expect these to be similar to the restaurants and coffee shops where the WHAM! noise was collected), and further classified as high, medium, and low reverberation based on a qualitative assessment of the mixture's noise recording.
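The released dataset fixes the sampled values in its metadata; for illustration, the sampling of Table~\ref{table:rir_params} can be sketched as below (the function name and dictionary layout are our own, and the microphone separation, taken from the WHAM! noise metadata, is omitted).

```python
import math
import random

def sample_room_config(reverb_level="med"):
    """Sample one room configuration per Table 1 (meters, seconds, radians).

    `reverb_level` picks the T60 range: "high", "med", or "low".
    """
    t60_ranges = {"high": (0.4, 1.0), "med": (0.2, 0.6), "low": (0.1, 0.3)}
    L, W, H = random.uniform(5, 10), random.uniform(5, 10), random.uniform(3, 4)
    # Array center is jittered around the room center, at seated/standing height.
    mic_center = (L / 2 + random.uniform(-0.2, 0.2),
                  W / 2 + random.uniform(-0.2, 0.2),
                  random.uniform(0.9, 1.8))
    # Two speakers, each placed by height, distance from the array, and angle.
    sources = [{"height": random.uniform(0.9, 1.8),
                "dist": random.uniform(0.66, 2.0),
                "theta": random.uniform(0, 2 * math.pi)}
               for _ in range(2)]
    return {"room": (L, W, H),
            "t60": random.uniform(*t60_ranges[reverb_level]),
            "mic_center": mic_center,
            "mic_theta": random.uniform(0, 2 * math.pi),
            "sources": sources}
```

A configuration drawn this way can then be handed to a simulator such as pyroomacoustics to generate the impulse responses.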
We created spatialized versions---\textit{anechoic} and \textit{reverberant}---of all components of the original WHAM! dataset, except noise, which was recorded spatialized. The anechoic sources (i.e., direct path signals) serve as targets to reverberated sources for models involving dereverberation, allowing them to be trained without needing to account for the time delay of the spatialized sources.
In spatializing the audio, we generated a two-channel version of the dataset, using microphone spacing from the WHAM! noise metadata,
but in this study we focus on single-channel separation and use only the left channel.
The spatialized audio was rescaled to remove attenuation, such that the non-spatialized WHAM! and anechoic WHAMR! differ only by small time delays, and we found negligible performance differences when training and testing models using the two datasets. While the results for non-reverberant conditions in Section~\ref{sec:results} use anechoic WHAMR!, they are directly comparable with WHAM!~\cite{wichern2019wham}.
Since all source, noise, and reverberated components and their combinations are included in the corpus, several enhancement, separation, and joint enhancement-separation tasks are enabled for training and evaluation. %
For example, in separating noisy and reverberant speech, we may want to produce either two clean, anechoic recordings or two clean, reverberant recordings, leaving dereverberation to post-processing.
We choose to define four core separation tasks:
\begin{itemize}
\setlength{\itemsep}{-1pt}
\item \textbf{clean} -- anechoic clean mixture to anechoic sources
\item \textbf{noisy} -- anechoic noisy mixture to anechoic sources
\item \textbf{reverberant} -- reverberant clean mixture to anechoic sources
\item \textbf{noisy and reverberant} -- reverberant noisy mixture to anechoic sources
\end{itemize}
All other configurations are only considered and evaluated as sub-components to the above tasks. Since each condition has its own unprocessed signal-to-distortion ratio~(SDR), comparisons across tasks can be difficult. By restricting to the above tasks, where the targets are the same in all four conditions, raw SDR can be thought of as a directly comparable, ``objective'' quality metric of the output sources across tasks. SDR \textit{improvement} offers complementary insight by reporting how much a system has improved the signal. %
\section{Experimental Configurations}
\label{sec:conf}
\subsection{Network Configurations}
\label{ssec:net_config}
For our experiments, we use four basic network configurations, all under the same paradigm. %
First, the waveforms are projected to a spectro-temporal representation. Next, an internal network takes the spectral representation and produces a spectral mask with values from 0 to 1. %
Finally, this spectral mask is applied to the original representation, suppressing interfering signals, before the representation is projected back to produce an estimated source waveform. In enhancement, the internal masking network produces a single mask, attempting to suppress noise and/or reverberation. In separation, the masking network produces a mask for each speech signal, attempting to suppress the interfering speakers from each target speaker.
The four configurations we use are the possible combinations of two spectral feature extractors and two internal masking networks. The feature extractors we compare are a standard short-time Fourier transform~(STFT) and a TasNet-style learned basis transform~\cite{Luo2018,luo2019convTasNet}, which consists of projecting sliding-window subsegments of the waveform onto a set of learned basis functions. The resulting weights can be applied to a reconstruction set of basis functions and summed together along the same sliding window to reconstruct the signal under a similar paradigm to overlap-and-add for the STFT. For internal masking, we evaluate both bi-directional long short-term memory~(BLSTM) networks (the typical internals of earlier deep learning-based speech separation systems~\cite{Hershey2016,Isik2016,Kolbaek2017,Wang2018ICASSP04Alternative,Luo2018,wichern2019wham}) and temporal convolutional networks~(TCN)~\cite{Lea2017} with dilated convolutions (popular in recent state-of-the-art separation techniques~\cite{luo2019convTasNet,shi2019furcanext}).
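The learned-basis analysis/synthesis described above can be sketched with plain matrices (in TasNet the analysis step is a learned 1-D convolution followed by a ReLU, and both bases are trained jointly with the masking network; the names \texttt{encode}/\texttt{decode} are ours).

```python
import numpy as np

def encode(x, basis, hop):
    """Project sliding windows of waveform `x` onto analysis bases (N x win)."""
    win = basis.shape[1]
    frames = [x[i:i + win] for i in range(0, len(x) - win + 1, hop)]
    return np.stack(frames) @ basis.T            # (frames, N) mixture weights

def decode(weights, synth_basis, hop, length):
    """Overlap-and-add reconstruction from (possibly masked) weights."""
    win = synth_basis.shape[1]
    y = np.zeros(length)
    for t, w in enumerate(weights):
        y[t * hop:t * hop + win] += w @ synth_basis
    return y
```

A mask produced by the internal network multiplies the weights elementwise between these two steps, exactly as a time-frequency mask multiplies an STFT.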
For consistency with the prior WHAM! work \cite{wichern2019wham}, our BLSTM architecture has four BLSTM layers with 600 units in each direction followed by a fully-connected layer for each output mask. A dropout of 0.3 is applied on each BLSTM layer output except the last. The TCN architecture was chosen to match the best system reported in~\cite{luo2019convTasNet}. It consists of a 128-dimensional bottleneck, 128-dimensional skip-connection paths, and 512 channels in the convolutional blocks, with kernel size 3, 8 blocks per repeat, and 3 repeats.
The STFT features are also chosen to be consistent with \cite{wichern2019wham}, with a window length of 32~ms and hop size of 8~ms. The log of the magnitude spectrum is used as input to the internal masking network. The learned basis feature parameters are also chosen to be consistent with \cite{wichern2019wham}, with a 10~ms window and 5~ms hop, with 500 learned basis vectors. While the original BLSTM TasNet~\cite{Luo2018} used a gated convolutional encoder, in this work we use a single learned encoder and ReLU nonlinearity as in Conv-TasNet~\cite{luo2019convTasNet} for both the BLSTM and TCN masking networks with learned bases.
For separation, we evaluate learned basis configurations only, as they have been shown to outperform STFT-based methods on clean data, and performed best in preliminary experiments.
However, %
we perform full comparisons of the differing features for enhancement, for which TasNet-like systems have only rarely been evaluated~\cite{luo2018dereverb}.
We train all networks using permutation invariant training~\cite{Hershey2016, Kolbaek2017} with the scale-invariant signal-to-distortion ratio (SI-SDR, also referred to as SI-SNR) waveform-level training objective~\cite{Isik2016,Luo2018,LeRoux2018SISDR}. SI-SDR is also the evaluation metric and allows for end-to-end joint training of cascaded enhancement and separation models:
\begin{gather}
\text{SI-SDR} = 10 \log_{10} ({\Vert \alpha s \Vert^2}/{\Vert \alpha s - \hat{s}\Vert^2}), \, \alpha = {\langle \hat{s},s \rangle}/{\Vert s \Vert^2}. \label{eq:sisnr}
\end{gather}
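A direct implementation of Eq.~(\ref{eq:sisnr}) (the small \texttt{eps} is our addition, to avoid division by zero on silent signals):

```python
import numpy as np

def si_sdr(est, ref, eps=1e-8):
    """SI-SDR of Eq. (1): rescale the reference by alpha, compare energies."""
    alpha = np.dot(est, ref) / (np.dot(ref, ref) + eps)
    target = alpha * ref        # alpha * s
    err = target - est          # alpha * s - s_hat
    return 10.0 * np.log10(np.dot(target, target) / (np.dot(err, err) + eps))
```

The training loss is then the negative SI-SDR, averaged over sources under the best output permutation.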
Because the loss is scale-invariant and the outputs are not constrained to sum up to the mixture, the outputs may be in a different dynamic range from that of the mixture, which, as we will see, can lead to problems with the cascaded models proposed in this work.
\subsection{Cascaded Models}
\label{ssec:cascade_models}
In addition to training single models for each of the WHAMR! core tasks, we evaluate combinations of models in which enhancement (i.e., denoising and/or dereverberation) and separation systems are cascaded, with the output of one system being fed into the next. %
The main motivation is that jointly separating and enhancing may be too difficult for a single network to learn, and modularization may allow the networks to focus on specific tasks. Two-stage approaches have previously been explored for denoising plus dereverberation~\cite{han2015learning, zhao2018two}, separation plus dereverberation~\cite{delfarah2019deep}, and denoising plus separation~\cite{wichern2019wham}.
The cascaded configurations we consider consist of an optional pre-enhancement system cascaded into a separation network cascaded into an optional post-enhancement system. We evaluate all combinations where noise is removed by either pre-enhancement or the separator, and reverberation is removed by either pre-enhancement, post-enhancement, or the separator. Post-separation denoising is not considered, as separation-without-denoising is a somewhat ill-defined task: %
noise does not `belong' to either speech signal, so it is unclear how the network should distribute the noise when not removing it.
For cascaded systems, the sub-models are trained with appropriate input and targets for each sub-task. %
For example, in the system consisting of denoising followed by separation then dereverberation, the networks are trained as follows: pre-enhancement is trained with noisy reverberant mixtures as input and clean reverberant mixtures as output; the separator with reverberant mixtures as input and reverberant sources as output; and post-enhancement with single reverberant sources as input and single anechoic sources as output.
As mentioned above, due to the scale-invariant loss function, each model's outputs have no constraint to be within any particular dynamic range, and we thus observe strong degradation in performance in cascaded systems when sub-models are trained separately, due to the scaling mismatch between the output of one model and the training data of the next. To address this problem, we scale each output $\hat{s}$, obtained from an input mixture $x$ as an estimate for a target source $s$, to make it consistent with the scaling of $s$ in $x$. Because $s$ is unknown, we need to rely on $\hat{s}$ and $x$ alone. If we assume that the interfering signal $n=x-s$ is orthogonal to $s$, which is generally approximately the case, and that the direction of $\hat{s}$ is close to that of $s$, then a reasonable choice for the rescaling factor $\beta(\hat{s}|x)$ is that obtained by ensuring that $\beta(\hat{s}|x)\hat{s}$ is orthogonal to the residual $\hat{n} = x - \beta(\hat{s}|x) \hat{s}$. %
This results in a scaling factor
\begin{gather}
\beta(\hat{s}|x) = \frac{\langle x,\hat{s}\rangle}{\Vert\hat{s}\Vert^2}.
\end{gather}
As the estimate $\hat{s}$ improves (i.e., $\hat{s}$ and $s$ become more colinear), the scaling factor improves as well.
When the best-performing system of a WHAMR! task is a cascaded model, we also evaluate the system with additional end-to-end tuning.
Since all component systems are waveform-to-waveform, we can tune the entire system by performing additional training through all cascaded sub-models directly.
End-to-end joint training of sub-models has been shown to be successful in joint training of automatic speech recognition with enhancement and separation~\cite{ochiai2017,Settle2018ICASSP04,seki2018purely,chang2019}.
\subsection{Training Configurations}
\label{ssec:train_config}
All networks are trained on 4 second segments using the Adam algorithm~\cite{Kingma2015Adam}. The learning rate is decreased by a factor of 2 if validation loss does not improve for 3 consecutive epochs. Gradient clipping is applied with a maximum $\ell_2$ norm of 5. Models are trained for 100 epochs with an initial learning rate of $10^{-3}$, with the exception of cascaded model tuning, during which we train the models for 25 epochs with a learning rate of $10^{-4}$. Because the SI-SDR loss is undefined for silent sources, training models on the \textit{max} data subset is cumbersome, as the 4 s segments randomly sampled during training occasionally fall within regions where only one speaker is talking. Thus, for the 16~kHz~\textit{max} condition, we train on 16~kHz~\textit{min}. Unless otherwise noted, all results are for the 8~kHz~\textit{min} condition.
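The stagnation-based learning rate schedule amounts to the following logic (a minimal re-implementation for illustration, not the actual training code):

```python
class HalveOnPlateau:
    """Halve the learning rate when validation loss has not improved for
    `patience` consecutive epochs (factor 2, patience 3 as in the text)."""

    def __init__(self, lr=1e-3, patience=3):
        self.lr, self.patience = lr, patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr /= 2.0
                self.bad_epochs = 0
        return self.lr
```

In practice a framework scheduler such as PyTorch's \texttt{ReduceLROnPlateau} with \texttt{factor=0.5}, \texttt{patience=3} provides the same behavior.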
\vspace{-.2cm}
\section{Experimental Results}
\label{sec:results}
\vspace{-.1cm}
For all experiments, we report results using scale-invariant source-to-distortion ratio~(SI-SDR)~\cite{LeRoux2018SISDR}, which is also the training objective. Furthermore, because the input SI-SDR between tasks is highly variable, we also report the SI-SDR improvement ($\Delta$), i.e., the difference between output and input SI-SDR. %
\begin{table}[tbp]
\centering
\vspace{-.2cm}
\caption{SI-SDR [dB] results for a single separation network. Highlighted rows represent new WHAMR! conditions.}
\vspace{.1cm}
\label{table:base_separation}
\begin{adjustbox}{max width=0.9\columnwidth}
\sisetup{table-format=2.1,round-mode=places,round-precision=1,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\begingroup
\renewcommand*{\arraystretch}{0.9}
\begin{tabular}{cc|S[table-format=2.1,table-number-alignment = center]|S[table-format=2.1,table-number-alignment = center]S|SS}
\toprule
\multicolumn{2}{c}{Input} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Conv-TasNet} & \multicolumn{2}{c}{TasNet-BLSTM} \\ \cmidrule(lr){1-2} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
Noise & \multicolumn{1}{c}{Reverb} & \multicolumn{1}{c}{Input} & Output & \multicolumn{1}{c}{$\Delta$} & Output & {$\Delta$} \\ \midrule
& & 0.00 & 12.91 & 12.91 & \bfseries 14.16 & \bfseries 14.16 \\
{\checkmark} & {} & -4.49 & 7.01 & 11.50 & \bfseries 7.48 & \bfseries 11.97 \\ \rowcolor[HTML]{FDD49F}
& {\checkmark} & -3.29 & 4.27 & 7.56 & \bfseries 5.58 & \bfseries 8.87 \\ \rowcolor[HTML]{FDD49F}
{\checkmark} & {\checkmark} & -6.13 & 2.22 & 8.34 & \bfseries 3.03 & \bfseries 9.16 \\ \bottomrule
\end{tabular}
\endgroup
\end{adjustbox}
\vspace{-0.3cm}
\end{table}
\begin{table}[tbp]
\centering
\vspace{-.2cm}
\caption{\!$\Delta$SI-SDR [dB] comparison of our implementations with the best Conv-TasNet number in~\cite{luo2019convTasNet} and the corresponding learned feature configuration of 512~bases, window length~16, window shift~8.}
\vspace{.1cm}
\label{table:small_window}
\begin{adjustbox}{max width=0.75\columnwidth}
\sisetup{table-format=2.1,round-mode=places,round-precision=1,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\begingroup
\renewcommand*{\arraystretch}{0.9}
\begin{tabular}{S[table-format=2.1,table-number-alignment = center]|S|S}
\toprule
\multicolumn{1}{c}{TasNet-BLSTM} & \multicolumn{1}{c}{Conv-TasNet} & \multicolumn{1}{c}{Conv-TasNet~\cite{luo2019convTasNet}} \\ \midrule
\bfseries 16.580 & 14.40 & 15.3 \\ \bottomrule
\end{tabular}
\endgroup \end{adjustbox}
\vspace{-.3cm}
\end{table}
Table~\ref{table:base_separation} shows the results of our core systems, without cascade. Reverberation seems to be more challenging than noise, as reflected by the lower SI-SDR. While the noisy and clean conditions are comparable in terms of SI-SDR improvement, they still differ significantly in terms of raw SI-SDR. Interestingly, we observe consistently better performance from the BLSTM model than from the TCN model, which is somewhat unexpected: although the BLSTM contains many more parameters than the TCN, this result contradicts prior findings in the literature \cite{Luo2018,luo2019convTasNet}. A comparison of clean separation models with a smaller basis window is shown in Table~\ref{table:small_window}, confirming that the performance difference is not due to the window parameters.
In addition, we note that the TasNet-BLSTM numbers in the first two rows are considerably better than the corresponding numbers in the original WHAM! paper \cite{wichern2019wham}. The newer network uses the same configuration, but is trained with more aggressive gradient clipping and stagnation learning rate adjustment, which supports the findings regarding training optimizer parameters reported in~\cite{luo2018dereverb, luo2019convTasNet}.
Table~\ref{table:denoise_both} shows experimental results with enhancement networks. We use denoising and dereverberation of two-speaker mixtures as a proxy for all other enhancement conditions. Since performance trends are consistent across these two tasks, we take this as reasonable evidence that the learned feature BLSTM model (TasNet-BLSTM) is the best architecture for enhancement. While the learned basis TCN and BLSTM perform similarly, we see significant drops in performance moving from learned basis to STFT features. This suggests that the benefits shown in speech separation are also likely present in speech denoising and dereverberation.
\begin{table}[tbp]
\centering
\caption{SI-SDR [dB] for two-speaker enhancement tasks.}
\label{table:denoise_both}
\begin{adjustbox}{max width=0.9\columnwidth}
\sisetup{table-format=2.1,round-mode=places,round-precision=1,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\begingroup
\renewcommand*{\arraystretch}{0.9}
\begin{tabular}{cc|S[table-format=2.1,table-number-alignment = center]S|SS}
\toprule
\multicolumn{2}{c}{Net} & \multicolumn{2}{c}{Denoise} & \multicolumn{2}{c}{Dereverb} \\ \cmidrule(lr){1-2} \cmidrule(lr){3-4} \cmidrule(lr){5-6}
\multicolumn{1}{c}{Feature} & \multicolumn{1}{c}{Processor} & Output & \multicolumn{1}{c}{$\Delta$} & Output & \multicolumn{1}{c}{$\Delta$} \\ \midrule
\multicolumn{1}{c}{Learned} & \multicolumn{1}{c|}{TCN} & 10.80 & 9.62 & 7.23 & 3.19 \\
\multicolumn{1}{c}{Learned} & \multicolumn{1}{c|}{BLSTM} & \bfseries 11.24 & \bfseries 10.06 & \bfseries 8.46 & \bfseries 4.42 \\
\multicolumn{1}{c}{STFT} & \multicolumn{1}{c|}{TCN} & 8.40 & 7.21 & 4.04 & 0.00 \\
\multicolumn{1}{c}{STFT} & \multicolumn{1}{c|}{BLSTM} & 9.54 & 8.36 & 5.89 & 1.84 \\ \midrule
\multicolumn{2}{c}{Input SI-SDR:} & \multicolumn{2}{c}{\num{1.19}} & \multicolumn{2}{c}{\num{4.03}} \\ \bottomrule
\end{tabular}
\endgroup
\end{adjustbox}
\vspace{-0.3cm}
\end{table}
Table~\ref{table:combos} shows the results of the cascaded model experiments. In accordance with the previous results, all sub-models are TasNet-BLSTM models. We see that in general, moving the speech enhancement (i.e., denoising and/or dereverberation) tasks to a separate model from separation seems to help performance. From Tables~\ref{table:combos}(b) and (c), reverberation appears to be particularly difficult for the separation network to remove. We also see that removing reverberation post-separation is slightly better than pre-separation. As two sources will not have the same room impulse response, the dual-source (pre-enhancement) dereverberation network would have to appropriately compensate for two reverberation patterns, while the single-source (post-enhancement) dereverberation network handles only one. The separator network likely has a harder time separating the still-reverberant speech, but this effect appears to be smaller than the difference between single- and dual-source dereverberation.
\begin{table}[]
\centering
\caption{Comparison of cascaded models. A dash indicates speech separation without denoising/dereverberation, while \batsu\, indicates no enhancement sub-model was used. Results are sorted by increasing performance. The highlighted rows indicate the non-cascaded single-model baseline.}
\vspace{.1cm}
\label{table:combos}
\begin{adjustbox}{max width=0.74\columnwidth}
\subfloat[noisy condition]{%
\sisetup{table-format=2.1,round-mode=places,round-precision=1,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\begingroup
\renewcommand*{\arraystretch}{0.9}
\begin{tabular}{ccS[table-format=2.1,table-number-alignment = center]S}
\toprule
\multicolumn{2}{c}{System} & \multicolumn{2}{c}{\multirow{3}{*}{\begin{tabular}[c]{@{}ccc@{}} & SI-SDR & \\ \cmidrule{1-3}\end{tabular}}} \\ \cmidrule(lr){1-2}
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Pre-Enh.\\ Removes\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Separate Speech\\ while Removing\end{tabular}} & & \\
& & Output & \multicolumn{1}{c}{$\Delta$} \\ \midrule \rowcolor[HTML]{FDD49F}
\multicolumn{1}{c|}{\batsu} & \multicolumn{1}{c|}{noise} & 7.48 & 11.97 \\
\multicolumn{1}{c|}{noise} & \multicolumn{1}{c|}{--} & \bfseries 8.10 & \bfseries 12.59 \\
\midrule
\multicolumn{2}{c}{Input SI-SDR:} & \multicolumn{2}{c}{\num{-4.49}} \\ \bottomrule
\end{tabular}
\endgroup
\label{subtable:combos_dn}}
\end{adjustbox}
\hfill
\begin{adjustbox}{max width=0.93\columnwidth}
\subfloat[reverberant condition]{%
\sisetup{table-format=2.1,round-mode=places,round-precision=1,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\begingroup
\renewcommand*{\arraystretch}{0.9}
\begin{tabular}{cccS[table-format=2.1,table-number-alignment = center]S}
\toprule
\multicolumn{3}{c}{System} & \multicolumn{2}{c}{\multirow{3}{*}{\begin{tabular}[c]{@{}ccc@{}} & SI-SDR & \\ \cmidrule{1-3}\end{tabular}}} \\ \cmidrule(lr){1-3}
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Pre-Enh.\\ Removes\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Separate Speech\\ while Removing\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Post-Enh.\\ Removes\end{tabular}} & & \\
& & & Output & \multicolumn{1}{c}{$\Delta$} \\ \midrule \rowcolor[HTML]{FDD49F}
\multicolumn{1}{c|}{\batsu} & \multicolumn{1}{c|}{rev.} & \multicolumn{1}{c|}{\batsu} & 5.58 & 8.87 \\
\multicolumn{1}{c|}{rev.} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c|}{\batsu} & 6.39 & 9.68 \\
\multicolumn{1}{c|}{\batsu} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c|}{rev.} & \bfseries 6.59 & \bfseries 9.88 \\
\midrule
\multicolumn{1}{c}{\phantom{noise, re...}} &
\multicolumn{1}{c}{Input SI-SDR:} &
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{\num{-3.29}} \\ \bottomrule
\end{tabular}
\endgroup
\label{subtable:combos_dr}}
\end{adjustbox}
\hfill
\begin{adjustbox}{max width=0.93\columnwidth}
\subfloat[noisy and reverberant condition]{%
\sisetup{table-format=2.1,round-mode=places,round-precision=1,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\begingroup
\renewcommand*{\arraystretch}{0.9}
\begin{tabular}{cccS[table-format=2.1,table-number-alignment = center]S}
\toprule
\multicolumn{3}{c}{System} & \multicolumn{2}{c}{\multirow{3}{*}{\begin{tabular}[c]{@{}ccc@{}} & SI-SDR & \\ \cmidrule{1-3}\end{tabular}}} \\ \cmidrule(lr){1-3}
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Pre-Enh.\\ Removes\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Separate speech\\ while removing\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Post-Enh.\\ Removes\end{tabular}} & & \\
& & & Output & \multicolumn{1}{c}{$\Delta$} \\ \midrule \rowcolor[HTML]{FDD49F}
\multicolumn{1}{c|}{\batsu} & \multicolumn{1}{c|}{noise, rev.} & \multicolumn{1}{c|}{\batsu} & 3.03 & 9.16 \\
\multicolumn{1}{c|}{noise} & \multicolumn{1}{c|}{rev.} & \multicolumn{1}{c|}{\batsu} & 3.53 & 9.66 \\
\multicolumn{1}{c|}{noise, rev.} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c|}{\batsu} & 3.56 & 9.69 \\
\multicolumn{1}{c|}{rev.} & \multicolumn{1}{c|}{noise} & \multicolumn{1}{c|}{\batsu} & 3.72 & 9.84 \\
\multicolumn{1}{c|}{\batsu} & \multicolumn{1}{c|}{noise} & \multicolumn{1}{c|}{rev.} & 3.66 & 9.78 \\
\multicolumn{1}{c|}{noise} & \multicolumn{1}{c|}{--} & \multicolumn{1}{c|}{rev.} & \bfseries 3.97 & \bfseries 10.10 \\
\midrule
\multicolumn{1}{c}{\phantom{noise, rev.}} &
\multicolumn{1}{c}{Input SI-SDR:} &
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{\num{-6.13}} \\ \bottomrule
\end{tabular}
\endgroup
\label{subtable:combos_dndr}}
\end{adjustbox}
\vspace{-0.8cm}
\end{table}
While the cascaded systems do have 2 or 3 times as many parameters as the non-cascaded system, this does not seem to be the sole source of performance improvement, as single models with increased numbers of BLSTM layers provided little performance gain over the results in Table~\ref{table:base_separation}. Furthermore, training equivalent cascaded systems from scratch without individual pre-training of the pre-enhancement, separation, and post-enhancement stages %
provided noticeably less performance improvement over the single network results from Table~\ref{table:base_separation} than the reported cascaded systems in Table~\ref{table:combos}.
\begin{table}[tbp]
\centering
\caption{SI-SDR comparison of best models with and without additional training. Dashes indicate the best system was not cascaded.}
\vspace{.1cm}
\label{table:tuning}
\begin{adjustbox}{max width=0.95\columnwidth}
\sisetup{table-format=2.1,round-mode=places,round-precision=1,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\begingroup
\renewcommand*{\arraystretch}{0.9}
\begin{tabular}{cc|S[table-format=2.1,table-number-alignment = center]|S[table-format=2.1,table-number-alignment = center]S|SS}
\toprule
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Best System\\ w/o Tuning\end{tabular}}} & & \\
\multicolumn{2}{c}{Input} & \multicolumn{1}{c}{} & & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Tuned} \\ \cmidrule(lr){1-2} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
Noise & \multicolumn{1}{c}{Reverb} & \multicolumn{1}{c}{Input} & Output & \multicolumn{1}{c}{$\Delta$} & Output & \multicolumn{1}{c}{$\Delta$} \\ \midrule
& & 0.00 & 14.16 & 14.16 & {--} & {--} \\
{\checkmark} & & -4.49 & 8.10 & 12.59 & 8.34 & 12.86 \\
& {\checkmark} & -3.29 & 6.59 & 9.88 & 6.99 & 10.27 \\
{\checkmark} & {\checkmark} & -6.13 & 3.97 & 10.10 & 4.72 & 10.84 \\ \bottomrule
\end{tabular}
\endgroup
\end{adjustbox}
\end{table}
Table~\ref{table:tuning} shows the results of tuning the cascaded systems with additional end-to-end training. Tuning the systems helps, although the performance gains are minor. The noisy and reverberant system, which contains three sub-models in contrast to the others with two, shows the greatest improvement. This suggests training helps with improving the coupling of the connected models.
\begin{table}[tbp]
\centering
\caption{SI-SDR evaluation of 16 kHz conditions using the best model configuration trained on the 16 kHz \textit{min} subset.}
\vspace{.1cm}
\label{table:16k}
\begin{adjustbox}{max width=\columnwidth}
\sisetup{table-format=2.1,round-mode=places,round-precision=1,table-number-alignment = center,detect-weight=true,detect-inline-weight=math}
\begingroup
\renewcommand*{\arraystretch}{0.9}
\begin{tabular}{cc|S[table-format=2.1,table-number-alignment = center]|S[table-format=2.1,table-number-alignment = center]S|S|SS}
\toprule
\multicolumn{2}{c}{Input} & \multicolumn{3}{c}{16 kHz Min} & \multicolumn{3}{c}{16 kHz Max} \\ \cmidrule(lr){1-2}\cmidrule(lr){3-5} \cmidrule(lr){6-8}
Noise & \multicolumn{1}{c}{Reverb} & \multicolumn{1}{c}{Input} & Output & \multicolumn{1}{c}{$\Delta$} & \multicolumn{1}{c}{Input} & Output & \multicolumn{1}{c}{$\Delta$} \\ \midrule
& & 0.00 & 12.86 & 12.86 & 0.00 & 12.71 & 12.71 \\
{\checkmark} & & -4.57 & 7.79 & 12.36 & -5.84 & 7.47 & 13.32 \\
& {\checkmark} & -3.30 & 5.63 & 8.93 & -3.41 & 5.39 & 8.81 \\
{\checkmark} & {\checkmark} & -6.19 & 3.74 & 9.94 & -7.20 & 3.50 & 10.70 \\ \bottomrule
\end{tabular}
\endgroup
\end{adjustbox}
\vspace{-0.1cm}
\end{table}
Table~\ref{table:16k} shows the results of our 16 kHz systems. As mentioned earlier, we trained on 16 kHz \textit{min} and evaluated on both the \textit{min} and \textit{max} conditions. Although the performance on 16 kHz data is worse than in the 8 kHz systems, there does not appear to be any significant breakdown in performance. Similarly, performance in the \textit{max} condition is only slightly worse than the \textit{min} condition. %
Although the SI-SDR improvement in the noisy case is better in \textit{max} than in \textit{min}, this is likely due to differences in the amount of speech and does not reflect any significant difference in performance.
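For readers reproducing these numbers, the SI-SDR metric reported in all of the tables can be sketched in a few lines. This follows the standard scale-invariant definition (project the estimate onto the reference before measuring distortion); it is an illustrative implementation, not necessarily the exact evaluation code used for WHAMR!.

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant SDR in dB: rescale the reference to best match
    the estimate, then compare signal and residual energies."""
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference          # optimally scaled reference
    residual = estimate - target        # everything not explained by it
    return 10.0 * np.log10(np.sum(target ** 2) / np.sum(residual ** 2))
```

Because of the projection, the metric is invariant to rescaling the estimate, which is the property that motivates its use over plain SDR.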
\section{Conclusion}
\label{sec:conclusion}
We have introduced WHAMR!, an extension of the WHAM! noisy speech separation dataset to include reverberation, with the goal of further promoting the advancement of speech separation technologies towards more realistic conditions. Preliminary results demonstrate that, although noise and reverberation do degrade overall performance, networks with learned basis feature representations are effective not only in separation but also in speech enhancement. We have also demonstrated the value in using cascaded models combining pre-trained separation and enhancement modules, and of further jointly fine-tuning them, establishing strong baseline results for the WHAMR! dataset. Extending the proposed model cascades to stereo is an important topic of future work, and is supported in the WHAMR! scripts available at \url{http://wham.whisper.ai}.
\vfill\pagebreak
\bibliographystyle{IEEEtran}
\chapter{Introduction}
The condition for the vanishing of the conformal
anomaly for a bosonic string in a curved space is
expressed, in $\sigma$-model perturbation theory,
as $R_{\mu\nu}+\ldots=0$, where the ellipsis denotes
rank two tensors constructed from
derivatives and powers of the Riemann tensor~[1,2].
An example for which all
such correction terms vanish is provided by
`ordinary' plane-fronted gravitational waves,
$$
d{\bf x}^2-2dudv+K(u,{\bf x})du^2,
\equation
$$
where ${\bf x}$ belongs to the $(D-2)$-dimensional flat
transverse space and $u$ and $v$ are two additional
light-cone coordinates.
Then all higher-order correction terms vanish due to
the special form of the Riemann tensor, and the vanishing
anomaly condition reduces to
the vacuum Einstein equation $R_{\mu\nu}\=0$, i.e. to
$$
\Delta K=0,
\equation
$$
where $\Delta\equiv\Delta_{D-2}$ is the (flat) transverse
Laplacian~[3,4].
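This reduction can be checked by direct computation. The following computer-algebra sketch (ours, not part of the original text; it assumes SymPy is available) builds the metric (1.1) for $D=4$ and verifies that the only nonvanishing Ricci component is $R_{uu}=-{1\over2}\Delta K$, so that $R_{\mu\nu}=0$ is exactly Eq.~(1.2):

```python
import sympy as sp

u, v, x, y = sp.symbols('u v x y')
K = sp.Function('K')(u, x, y)
coords = [u, v, x, y]
n = 4

# Metric (1.1) for D = 4: ds^2 = dx^2 + dy^2 - 2 du dv + K du^2,
# in coordinate order (u, v, x, y).
g = sp.Matrix([[K, -1, 0, 0],
               [-1, 0, 0, 0],
               [0,  0, 1, 0],
               [0,  0, 0, 1]])
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, c], coords[b])
                      + sp.diff(g[d, b], coords[c])
                      - sp.diff(g[b, c], coords[d]))
        for d in range(n))

Gamma = [[[sp.simplify(christoffel(a, b, c)) for c in range(n)]
          for b in range(n)] for a in range(n)]

def ricci(b, c):
    # R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
    #          + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
    return sp.simplify(sum(
        sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][b][a], coords[c])
        + sum(Gamma[a][a][d] * Gamma[d][b][c]
              - Gamma[a][c][d] * Gamma[d][b][a] for d in range(n))
        for a in range(n)))

Ric = sp.Matrix(n, n, lambda b, c: ricci(b, c))
lap_K = sp.diff(K, x, 2) + sp.diff(K, y, 2)
# Only R_uu survives, and it equals -(1/2) Delta K:
print(sp.simplify(Ric[0, 0] + lap_K / 2))
```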
For
exact plane waves,
$K\=\sum_{ij}{K_{ij}(u)x^ix^j}$
where $K_{ij}$ is symmetric and traceless
(and for $D=26$), the constraint $\Delta K\=0$ enforces
the anomaly cancellation even non-perturbatively~[3].
This result can be extended by including other massless fields,
namely a dilaton, $\Phi(u)$, and an axion, $b_{\mu\nu}$,
with only non-zero component
$b_{iu}\=\2B_{ij}(u)x^j$
where $B_{ij}$ is antisymmetric.
Higher-order terms vanish again, and the vanishing anomaly
condition is simply~[4]
$$
\Delta K+{\smallover 1/{18}}B_{ij}B^{ij}+2\ddot\Phi=0.
\equation
$$
In the quadratic case the anomaly again vanishes
non-perturbatively~[5].
Another generalization was found by Rudd~[6] who has shown
the vanishing of the anomaly for a metric + dilaton system
with metric
$$
\sum_{i=1}^{D-2}[2\pi R_i(u)]^2(dx^i)^2-2dudv,
\equation
$$
provided
$$
\sum_{i=1}^{D-2}{{\ddot R_i\over R_i}}-2\ddot\Phi=0.
\equation
$$
Now both (1.1) and (1.4) are particular cases of Brinkmann's
generalized plane-fronted waves with parallel rays (shortly
pp-waves)~[7,8,9],
$$
{\widetilde g}_{\mu\nu}dx^{\mu}dx^{\nu}
=g_{ij}(u,{\bf x})dx^idx^j-2du\Big[dv+A_i(u,{\bf x})
dx^i\Big]+k(u,{\bf x})du^2.
\equation
$$
String propagation in the metric (1.6)
has been considered by Horowitz~[10], who found,
however, that for a non-trivial transverse metric
the higher-order terms do not simplify.
In this Letter we show that for a {\it special choice},
Eq.~(2.3) below, the Brinkmann metric
behaves exactly as (1.1) and (1.4).
\chapter{Vanishing of the anomaly in a special Brinkmann wave.}
Let us start with the general Brinkmann metric (1.6).
Observe first that it admits a covariantly constant null vector
$(\ell^\mu)$, namely $\partial_v$.
In order to study the Weyl anomaly, let us decompose the metric
(1.6) into the sum of a `background' metric $g_{\mu\nu}$
with the vector potential terms: setting $A_u\=k/2$ and $A_v\=0$,
(1.6) is re-written as
$$
\widetilde{g}_{\mu\nu}=g_{\mu\nu}+2\ell_{(\mu}A_{\nu)}
\qquad \hbox{where}\qquad
g_{\mu\nu}dx^\mu dx^\nu=g_{ij}(u,{\bf x})dx^idx^j-2dudv.
\equation
$$
The Riemann (resp. Ricci)
tensors are found to be
$$\left\{\eqalign{
&{\widetilde R}_{\mu\nu\rho\sigma}=R_{\mu\nu\rho\sigma}
-\ell_{[\mu}\nabla_{\nu]}F_{\rho\sigma}
-\ell_{[\rho}\nabla_{\sigma ]}F_{\mu\nu}
-\ell_{[\mu}F_{\nu]\,}^{\ \alpha}\ell_{[\rho}F_{\sigma]\alpha},
\cr\noalign{\medskip}
&{\widetilde R}_{\mu\rho}=R_{\mu\rho}
-\ell_{(\mu}\nabla^{\nu}F_{\rho)\nu}
-{\smallover1/4}\ell_{\mu}\ell_{\rho}F^{\nu\sigma}F_{\nu\sigma}.
\cr
}
\right.
\equation
$$
where $\nabla$ is covariant derivative with respect to the metric
$g_{\mu\nu}$, also $F_{\mu\nu}\equiv2\partial_{[\mu}A_{\nu]}$ and
$F_{\ \nu}^{\mu\ }\equiv{\widetilde g}^{\mu\rho}F_{\rho\nu}$.
A look at Eq.~(2.2) confirms that the higher order terms in the
perturbation expansion do not vanish in general~[10]. For
example,
$\widetilde{R}_{\mu\rho\sigma\lambda}
{\widetilde R}_{\nu}^{\ \rho\sigma\lambda}\!\neq\!0$,
etc.
The key idea of Horowitz and Steif for overcoming this is to express the
Riemann tensor using the covariantly constant null vector and
demand that it contain two $\ell_\mu$'s~[4,10]. We show below that
this is satisfied by the following special choice:
$$
{\widetilde g}_{\mu\nu}dx^{\mu}dx^{\nu}
=g_{ij}(u)dx^idx^j
-2du\Big[dv+\2e_{ij}(u)x^j dx^i\Big]+k(u,{\bf x})du^2,
\equation
$$
where $e_{ij}$ is a $u$-dependent matrix.
Firstly, the background Riemann tensor,
$$
R_{\mu\nu\rho\sigma}
=
-2\ell_{[\nu}\ddot{g}_{\mu][\sigma}\ell_{\rho]}
+
g^{\alpha\beta}
\ell_{[\nu}\dot{g}_{\sigma]\alpha}
\ell_{[\rho}\dot{g}_{\mu]\beta},
\equation
$$
is proportional to two $\ell_\mu$'s as required.
Secondly, the two middle terms in the Riemann tensor
in Eq.~(2.2) will contain two
$\ell$'s if no triple transverse indices arise,
$$
\nabla_iF_{jk}=0
\hbox{\ \ for all\ }\,i,j, k\=1,\ldots,D\-2,
\equation
$$
which is automatically satisfied by the choice (2.3).
Then the argument of Horowitz and Steif~[4]
shows that all higher-order terms in the perturbation
expansion vanish:
\item{(a)} In those terms with at least two
$\widetilde{R}_{\mu\nu\rho\sigma}$'s,
at least one of the null vectors $\ell$ is contracted, and
such terms therefore vanish.
\item{(b)} Terms of the form
${\widetilde\nabla}^{\mu}
{\widetilde\nabla}^\rho\widetilde{R}_{\mu\nu\rho\sigma}$
are related to the covariant derivative of the Ricci tensor by the
Bianchi identity and hence vanish also.
\item{(c)} For terms containing
$\widetilde{R}_{\mu\nu\rho\sigma}$'s
as well as their covariant derivatives, one again has to contract
at least one $\ell$ on an index of either the curvature tensor or
its covariant derivative. In both cases, one gets zero.
Explicitly, the only non-vanishing components of the
background Riemann (resp. Ricci) tensors are
$$
\left\{\eqalign{
&R_{iuuj}={\smallover 1/2}\ddot{g}_{ij}
-\smallover1/4
g^{mn}\dot{g}_{mi}\dot{g}_{nj},
\cr\noalign{\medskip}
&R_{uu}=\Big({\smallover 1/2}\ddot{g}_{ij}
-\smallover1/4
g^{mn}
\dot{g}_{mi}\dot{g}_{nj}\Big)g^{ij}.
\cr
}\right.
\equation
$$
Thus, the vacuum
Einstein equations $\widetilde{R}_{\mu\nu}\=0$ require
$$
\widetilde{R}_{uu}=\Big({\smallover 1/2}\ddot{g}_{ij}
-\smallover1/4
g^{mn}
\dot{g}_{mi}\dot{g}_{nj}\Big)g^{ij}
-\nabla^iF_{ui}
-\smallover1/4F_{ij}F^{ij}=0,
\equation
$$
and we conclude that if (2.7) holds, then we get an exact string
solution at all orders in sigma-model perturbation theory.
\chapter{Reduction to ordinary plane wave.}
Now we explain why the special Brinkmann metric (2.3) works.
We prove in fact that (2.3) can be brought into the simple form
(1.1) by a sequence of coordinate transformations.
At each step, $x,u,v$ (resp. $X,U,V$) denote the old (resp. new)
coordinates.
\parag\underbar{Step 1}. The positive-definite transverse metric
$g_{ij}\=g_{ij}(u)$ is, by assumption,
a function of $u$ only.
There exists therefore a (time-dependent) matrix
$C\=(C_i^a)\in GL(D-2,{\bf R})$ such that
$$
g_{ij}(u)=
\delta_{ab}C_i^a(u)C_j^b(u).
\equation
$$
Such a $C$ is unique up to an orthogonal matrix.
Then, introducing the inverse matrix $D\=C^{-1}$,
the coordinate transformation
$$
{\bf X}=C{\bf x},
\qquad
U=u,\qquad
V=v
\equation
$$
flattens out the transverse metric
while preserving the form of
the other terms, i.e. yields
$$
d{\bf X}^2-2dU\big[dV
+\2E_{ij}X^jdX^i\big]
+K(U,{\bf X})dU^2,
\equation
$$
where
$$
\eqalign{
E&=(E_{ij})=
-2D^{-1}\dot{D}
+D^T\,e\,D,
\qquad
e=(e_{ij}),
\cr\noalign{\medskip}
K&=k+{\bf X}^T\left[\dot{D}^T\big(D^{-1}\big)^T
D^{-1}\dot{D}
-\dot{D}^T\,e\,D\right]{\bf X},
\cr
}
\equation
$$
the superscript `$T$' denoting transposition. This generalizes
a result of Gibbons~[11].
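Step 1 is easy to realize numerically: at each fixed $u$, a Cholesky factor of the transverse metric provides one admissible $C(u)$, unique up to an orthogonal matrix. A small sketch of ours (assumes NumPy):

```python
import numpy as np

def transverse_factor(g):
    """Return C with g = C^T C, as in Eq. (3.1); the Cholesky factor
    is one convenient choice, unique up to an orthogonal matrix."""
    # np.linalg.cholesky returns lower-triangular L with g = L L^T,
    # so C = L^T satisfies g = C^T C.
    return np.linalg.cholesky(g).T

# Example: a 2x2 positive-definite transverse metric at some fixed u.
g = np.array([[4.0, 1.0],
              [1.0, 3.0]])
C = transverse_factor(g)
```

Left-multiplying $C$ by any rotation yields another valid factor, which is exactly the $O(D-2)$ freedom exploited in Step~3.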
Let us now decompose the matrix $E_{ij}(u)$ into the sum of a
symmetric and an antisymmetric matrix, $E_{ij}\=S_{ij}+A_{ij}$.
\parag\underbar{Step 2}. The symmetric part yields a gradient,
$S_{ij}x^j\=\partial_i\Lambda$ with $\Lambda(u,{\bf x})
=\2S_{ij}(u)x^ix^j$, and can therefore be gauged away: the transformation
$$
{\bf X}={\bf x}
\qquad
U=u
\qquad
V=v+{\smallover 1/2}\Lambda(u,{\bf x}),
\equation
$$
brings the metric (3.3) into the form
$$
d{\bf X}^2-2dU\big[dV
+\2A_{ij}X^jdX^i\big]
+K(U,{\bf X})dU^2,
\equation
$$
with
$$
K=k(U,{\bf X})+\partial_u\Lambda
=
k+{\smallover 1/2}\dot{S}_{ij}X^iX^j.
\equation
$$
\parag\underbar{Step 3}. The freedom in choosing the matrix $C$ can be
used to transform away the antisymmetric part $A_{ij}$:
there exists an {\it orthogonal} time-dependent matrix
$\big(O_{ij}\big)\in O(D\-2)$ such that
$$
A\equiv(A_{ij})=-2\,O^{-1}\dot{O},
\equation
$$
and the coordinate transformation
$$
{\bf X}=O{\bf x}
\qquad
U=u
\qquad
V=v
\equation
$$
gives the metric (3.6) the simple form (1.1) with
$$
K=k+\smallover1/4A_{i\ }^{\ k}A_{kj}X^iX^j.
\equation
$$
The absence of the conformal anomaly follows therefore from those
results proved in Refs~[3-5] for ordinary plane waves, provided
Eq.~(1.2) is satisfied.
At this stage, we can include axions and dilatons: first, the metric
is brought into the ordinary plane wave form (1.1), and then the
condition for the vanishing of the anomaly is simply derived from
the Horowitz-Steif condition (1.3).
Observe that, at each step, the additional term added to the
coefficient of $k$ is quadratic in the transverse variable.
Thus, if the function $k$ in Eq.~(2.3) we started with was
quadratic, we would end up with an exact plane wave and the
vanishing of the anomaly would follow non-perturbatively from
Refs.~[3] and~[5].
\goodbreak
\chapter{Examples.}
i) Consider first the special Brinkmann metric
$$
\phi(u)\,d{\bf x}^2
-2du\Big[
dv+{\smallover 1/2}\,A_{ij}(u)x^jdx^i
-
\smallover1/4\dot\phi(u)\,\delta_{ij}x^jdx^i\Big]
+k(u,{\bf x})du^2,
\equation
$$
where
$A_{ij}$ is antisymmetric and $\phi>0$~[12,9]. As explained
in Section 2, this yields
an exact string vacuum as soon as the vacuum
Einstein equation ${\widetilde R}_{uu}\=0$, i.e.
$$
\Delta k-{A_{ij}A^{ij}\over2\phi}
+{D\-2\over 2}\,(\log \phi)\ddot{\ }\phi=0,
\equation
$$
is satisfied.
Another way of obtaining this result is to carry out the
coordinate transformations indicated in Section 3,
$$
\left\{\matrix{
1.&{\bf X}=\sqrt{\phi}\,{\bf x},\hfill&U=u,&V=v\hfill
\cr\noalign{\medskip}
2.&{\bf X}={\bf x},\hfill&U=u,
&V=v+\smallover1/8{\displaystyle\dot\phi\over\displaystyle\phi\,}
\,{\bf x}^2\hfill
\cr\noalign{\medskip}
3.&{\bf X}=O\,{\bf x},\hfill
&U=u,&V=v\hfill.
\cr
}\right.
\equation
$$
where $-2O^T\,\dot{O}=(A_{ij})$. This results in expressing
(4.1) as the ordinary plane wave (1.1) with
$$
K(U,{\bf X})=k\big(u,{{\bf X}\over\sqrt{\phi}}\big)
-{1\over4\phi^2}A_{ik}A_{j\ }^{\ k}X^iX^j,
\equation
$$
and then Eq.~(4.2) follows simply from (1.2), $\Delta\,K\=0$.
Interestingly, the Ansatz (4.1) is built from the same
ingredients as the metric + axion + dilaton system studied by
Horowitz and Steif~[4,5,10], and the condition (4.2) is, {\it up
to the sign of the quadratic term} and to some re-definition of
the fields, the same as the condition (1.3) of Horowitz and
Steif~[4]. Adding to the metric (4.1) an axion and a dilaton,
$B_{ij}$ and $\Phi(u)$, the Horowitz-Steif condition (1.3) is
generalized to
$$
\Delta\,k
+\Big[{D-2\over2}\,(\log\phi)\ddot{\ }\phi+2\ddot{\Phi}\Big]
+\Big[\smallover1/{18}B_{ij}B^{ij}-{A_{ij}A^{ij}\over2\phi}\Big]
=0.
\equation
$$
Let us point out that Eq.~(4.5) hints also at a possible
cancellation between the vector potential and the
axion. Using the non-symmetric connection approach~[13], we have
shown recently~[14] that this is indeed the case.
For $\phi\=1,\Phi\=0$ for example, we recover a recent
result presented in Ref.~[14].
ii) Things work similarly in Rudd's toroidal case (1.4).
The metric is plainly of the form
(2.3) and provides therefore an exact string vacuum as soon as
Einstein's equation,
${\widetilde R}_{uu}\=\sum_i{\ddot{R}_i/R_i}\=0$,
is satisfied.
The more general condition (1.5) can be derived by
`straightening out' the transverse metric following Step~1 and
then by gauge-transforming,
$$
\left\{\matrix{
1.&X^i=2\pi R_i(u)x^i,\hfill
&U=u,&V=v\hfill
\cr\noalign{\medskip}
2.&{\bf X}={\bf x},\hfill
&U=u,&V=v+{\smallover 1/2}\,\sum_i{\displaystyle\dot{R}_i
\over\displaystyle R_i\,}\,(x^i)^2\hfill
\cr
}\right.
\equation
$$
which
takes (1.4) into the exact plane wave
$$
d{\bf x}^2-2dudv+
\left(\sum_i{\ddot{R}_i\over R_i}\,({x^i})^2\right)du^2.
\equation
$$
The associated Einstein equation $\Delta K\=0$ is plainly the same
as ${\widetilde R}_{uu}\=0$ above. Adding a dilaton, $\Phi(u)$,
condition (1.5) follows from the Horowitz-Steif condition (1.3).
The vanishing of the anomaly follows from the results in Ref.~[4]
perturbatively, and from~[5] non-perturbatively.
(Note that one could also add an axion in the same way.)
iii) The two previous Ans\"atze, (1.4) and (4.1), can be unified
by considering rather
$$
\phi(u)\,R_i^2(u) (dx^i)^2
-2du\Big[
dv+\2A_{ij}(u)x^jdx^i
-\smallover1/4\dot{\phi}(u)\,R_i^2(u)
\,\delta_{ij}x^jdx^i
\Big]
+k(u,{\bf x})du^2.
\equation
$$
Repeating the previous calculation we get that
if
$$
\Delta k-{A_{ij}A^{ij}\over2\phi}
+\sum_i{\ddot{R}_i\over R_i}\,\phi
+{D-2\over2}\big(\log\phi\big)\ddot{\ }\phi=0,
\equation
$$
then the anomaly vanishes at all orders in
sigma-model perturbation theory.
\chapter{Discussion.}
In the `one-coupling case' considered in the first example of
Section~4, i.e. for $g_{ij}\=\phi(u)\,\delta_{ij}$, Step~1 can be
replaced by a conformal rescaling.
Indeed,
if ${\widetilde g}_{\mu\nu}\=\phi(u)\,{\widehat g}_{\mu\nu}$
then (see, e.g.~[15])
$$
{\widetilde R}_{uu}
=
{\widehat R}_{uu}
+{D-2\over2}\Big[
(\log\phi)\ddot{\ }+{\smallover 1/2}\left({\dot\phi\over\phi}\right)^2
\Big].
\equation
$$
Calculating the Ricci tensor of the rescaled metric
and using (5.1) we, again, get the constraint (4.2).
Our Ansatz (2.3) is the most general pp-wave in
$D\=4$ dimensions~[7].
In higher dimensions this is no longer true, however, and (1.6) is
indeed more general than the ordinary plane wave (1.1).
In suitable coordinates all Brinkmann metrics~[7] can be written
as $g_{ij}(u,{\bf x})dx^idx^j-2dudv$,
so that all information is encoded in the transverse metric.
In Section~3 we have `flattened out' the transverse metric under
the assumption that this latter is a function of $u$ only.
This is, however, {\it not} the most general case when higher-order
correction terms vanish.
Consider, for example,
the time {\it and} space dependent metric
$$
\sum_{i=1}^{D-2}{
\cosh^2\left(\sqrt{\epsilon_i}\,w_i(x^i+u)\right) (dx^i)^2
}
-2dudv,
\equation
$$
where $\epsilon_i\=\pm1$ and $w_i={\rm const}$.
Introducing
$$
\left\{\eqalign{
&X^i={1\over\sqrt{\epsilon_i}w_i}
\sinh\left(\sqrt{\epsilon_i}w_i(x^i+u)\right),
\cr\noalign{\medskip}
&V=v+{\smallover 1/2}\sum_i{\left(
{1\over2\sqrt{\epsilon_i}w_i}
\sinh\left(2\sqrt{\epsilon_i}w_i(x^i+u)\right)+x^i+u
\right)},
\cr\noalign{\medskip}
&U=u,
\cr
}\right.
\equation
$$
then (5.2) is turned into the exact plane wave (1.1) with
$K\=\sum_i{\left(1+\epsilon_iw_i^2(X^i)^2\right)}$, which
is an exact string vacuum as soon as the vacuum Einstein
equation $\sum_i{\epsilon_iw_i^2}\=0$ is satisfied.
More generally, Step 1 can be implemented whenever the transverse
metric is conformally flat, which requires the transverse
Weyl tensor to vanish for each fixed value of~$u$.
At last, it would be interesting to know
(i) precisely when a general Brinkmann metric (1.6) can be brought
into the ordinary plane-wave form (1.1)
and
(ii) if this is indeed necessary for the vanishing of all
higher-order terms.
\parag\underbar{Note added}. After this paper was published, anomaly cancellation was proved non-perturbatively
in a Wess-Zumino context [16].
\parag\underbar{Acknowledgements}.
Parts of the results presented here are contained in an unpublished
note [12]. We are indebted to Gary Gibbons and Malcolm Perry for
their interest and collaboration at the early stages of this work.
After our paper was completed, we discovered that similar results
were found, independently, by Arkady Tseytlin, to whom we are
indebted for correspondence.
One of us (Z.~H.) would like to thank Tours University for
hospitality extended to him, and to the Hungarian National Science
and Research Foundation (Grant No. 2177) for a partial financial
support.
\vskip 0.3cm
\goodbreak
\centerline{\bf\BBig References}
\reference
C.~Lovelace, Phys. Lett. {\bf B138}, 75 (1984);
E.~Fradkin and A.~A.~Tseytlin, Phys. Lett. {\bf B158}, 316 (1985);
{\bf B160}, 69 (1985);
Nucl. Phys. {\bf B261}, 1 (1985);
C.~Callan, D.~Friedan, E.~Martinec and M.~Perry,
Nucl. Phys. {\bf 262B}, 593 (1985);
H.~J.~de Vega, in
Erice School "String quantum gravity and physics at the
Planck energy scale",
June 1992, Proc. ed. N.~Sanchez, World Scientific.
\reference
M.~B.~Green, J.~H.~Schwarz and E.~Witten, {\it Superstring Theory}, Vol. 1,
Cambridge University Press, (1987).
\reference
D.~Amati and C.~Klim\v{c}\'\i{k}, Phys. Lett. {\bf B219}, 443 (1989);
R.~G\"uven, Phys. Lett. {\bf B191}, 275 (1987);
H.~J.~de Vega and N.~Sanchez, Nucl. Phys. {\bf B317}, 706 (1989);
M.~E.~V.~Costa and H.~J.~de Vega,
Ann. Phys. {\bf 211}, 223 (1991).
\reference
G.~T.~Horowitz and A.~R.~Steif, Phys. Rev. Lett. {\bf 64} (1990), 260;
Phys. Rev. {\bf D42}, 1950 (1990).
\reference
A.~R.~Steif, Phys. Rev. {\bf D42}, 2150 (1990).
\reference
R.~Rudd, Nucl. Phys. {\bf B352}, 489 (1991).
\reference
H.~W.~Brinkmann,
Math. Ann. {\bf 94}, 119 (1925).
\reference
C.~Duval, G.~W.~Gibbons and P.~Horv\'athy,
Phys. Rev. {\bf D43}, 3907 (1991) [hep-th/0512188].
\reference
A.~A.~Tseytlin, Phys. Lett. {\bf B288}, 279 (1992);
Nucl. Phys. {\bf B390}, 153 (1993);
Phys. Rev. {\bf D47}, 3421 (1993).
\reference
G.~T.~Horowitz, in Proc. VIth Int. Superstring Workshop
{\it Strings'90}, Texas '90, Singapore, World Scientific (1991).
\reference
G.~W.~Gibbons,
Commun. Math. Phys. {\bf 45}, 191 (1975).
\reference
C.~Duval, G.~W.~Gibbons, P.~A.~Horv\'athy, and M.~J.~Perry,
unpublished (1991).
\reference
E.~Braaten, Phys. Rev. Lett. {\bf 53}, 1799 (1984);
E.~Braaten, T.~L.~Curtright and C.~K.~Zachos,
Nucl. Phys. {\bf B260}, 630 (1985);
H.~Osborn, Ann. Phys. {\bf 200}, 1 (1990).
\reference
C.~Duval, Z.~Horv\'ath and P.~A.~Horv\'athy,
Phys. Lett. {\bf 313B}, 10 (1993)
[hep-th/0306059].
\reference
D.~Kramer, H.~Stephani, E.~Herlt, M.~MacCallum,
{\it Exact solutions of Einstein's field equations},
Cambridge Univ. Press (1980).
\reference
C. Nappi and E. Witten,
Phys. Rev. Lett. {\bf 71}, 3751 (1993)
[hep-th/9310112].
\vfill\eject
\bye
\section{Introduction} \label{sec:intro}
Every calculus student knows that computing the derivative of a
function directly from the definition is an excruciating task, while
computing the derivative using the rules of differentiation is a
pleasure. A differentiation rule is a function, but not a usual
function like the square root function or the limit of a sequence
operator. Instead of mapping a function to its derivative, it maps
one syntactic representation of a function to another. For example,
the \emph{product rule} maps an expression of the
form \[\frac{d}{dx}(u \cdot v),\] where $u$ and $v$ are expressions
that may include occurrences of $x$, to the
expression \[\frac{d}{dx}(u) \cdot v + u \cdot \frac{d}{dx}(v).\]
We call a mapping, like a differentiation rule, that takes one
syntactic expression to another syntactic expression a
\emph{transformer}~\cite{FarmerMohrenschildt03}. A full formalization
of calculus requires a reasoning system in which (1) the derivative of
a function can be defined, (2) the differentiation rules can be
represented as transformers, and (3) the transformers representing the
differentiation rules can be shown to compute derivatives. Such a
reasoning system must provide the means to reason about the syntactic
manipulation of expressions as well as the connection these
manipulations have to the semantics of the expressions. In other
words, the reasoning system must allow one to reason about syntax and
its relationship to semantics. See \cite{Farmer13} for a detailed
discussion about the formalization of symbolic differentiation and
other syntax-based mathematical algorithms.
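As a concrete illustration of a transformer (our sketch, with an ad hoc tuple encoding of expressions, not a construction from the cited works), the product rule can be written as a function on syntactic representations rather than on functions:

```python
# Expressions are encoded as nested tuples: ('d', e) stands for
# d/dx(e), ('*', u, v) for a product, ('+', u, v) for a sum, and
# ('var', 'x') for the variable x.

def product_rule(expr):
    """A transformer: rewrite d/dx(u * v) syntactically as
    d/dx(u) * v + u * d/dx(v); leave anything else unchanged."""
    if expr[0] == 'd' and expr[1][0] == '*':
        _, (_, u, v) = expr
        return ('+', ('*', ('d', u), v), ('*', u, ('d', v)))
    return expr
```

Note that the function never computes a derivative; it only rearranges syntax, which is precisely why a reasoning system needs separate machinery to show that such a transformer is semantically correct.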
An \emph{interpreted language} is a language $L$ such that each
expression $e$ in $L$ is mapped to a \emph{semantic value} that serves
as the meaning of $e$. What facilities does a reasoning system need
for reasoning about the interplay of the syntax and semantics of an
interpreted language $L$? Here are four candidates:
\begin{enumerate}
\item A set of \emph{syntactic values} that represent the syntactic
structures of the expressions in $L$.
\item \begin{sloppypar} A language for expressing statements about syntactic
values and thereby indirectly about the syntactic structures of
the expressions in $L$.\end{sloppypar}
\item A mechanism called \emph{quotation} for referring to the
syntactic value that represents a given expression in $L$.
\item A mechanism called \emph{evaluation} for referring to the
semantic value of the expression whose syntactic structure is
represented by a given syntactic value.
\end{enumerate}
Quotation and evaluation together provide the means to integrate
reasoning about the syntax of the expressions with reasoning about
what the expressions mean.
This paper has three objectives. The first objective is to introduce a
mathematical structure called a \emph{syntax framework} that is
intended to be an abstract model of a system for reasoning about the
syntax of an interpreted language. A syntax framework for an
interpreted language $L$ contains four components corresponding to the
four facilities mentioned just above:
\begin{enumerate}
\item A function called a \emph{syntax representation} that maps
each expression $e$ in $L$ to a \emph{syntactic value} that
represents the syntactic structure of $e$.
\item A language called a \emph{syntax language} whose expressions
denote syntactic values.
\item A \emph{quotation} function that maps an expression $e$ in $L$
to an expression in the syntax language that denotes the syntactic
value of $e$.
\item An \emph{evaluation} function that maps an expression $e$ in
the syntax language to an expression in $L$ whose semantic value
is the same as that of the expression in $L$ whose syntactic value
is denoted by $e$.
\end{enumerate}
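These four components can be made concrete in a toy setting (our illustration; Python's \texttt{ast} module supplies the syntactic values, and we harmlessly conflate syntactic values with the syntax-language expressions denoting them): $L$ consists of arithmetic expression strings, \texttt{syntax\_rep} plays the role of quotation, and \texttt{unquote} maps a syntactic value back to an expression of $L$:

```python
import ast
import operator

def syntax_rep(e):
    """Quotation: map an expression of L (a string of arithmetic)
    to a syntactic value (a nested tuple)."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return (type(node.op).__name__, walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return ('Const', node.value)
        raise ValueError(f'not in L: {ast.dump(node)}')
    return walk(ast.parse(e, mode='eval').body)

def unquote(t):
    """Evaluation component: map a syntactic value to an expression
    of L with the represented syntactic structure."""
    if t[0] == 'Const':
        return str(t[1])
    sym = {'Add': '+', 'Sub': '-', 'Mult': '*'}[t[0]]
    return '(%s %s %s)' % (unquote(t[1]), sym, unquote(t[2]))

def V_sem(e):
    """Semantic valuation: L -> integers."""
    ops = {'Add': operator.add, 'Sub': operator.sub, 'Mult': operator.mul}
    def walk(t):
        return t[1] if t[0] == 'Const' else ops[t[0]](walk(t[1]), walk(t[2]))
    return walk(syntax_rep(e))
```

The key property to observe is that quotation followed by evaluation preserves semantic value, as required of the fourth component.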
The second objective is to demonstrate that a syntax framework has the
ingredients needed for reasoning effectively about syntax. We discuss
the benefits of a syntax framework for reasoning about syntax and
particularly for reasoning about transformers like the differentiation
rules. We explain how the liar paradox can be avoided when quotation
and evaluation are built-in operators. And we define in a syntax
framework a notion of quasiquotation which greatly facilitates
constructing expressions that denote syntactic values.
The third objective is to show that the notion of a syntax framework
embodies a common structure that is found in a variety of systems for
reasoning about the interplay of syntax and semantics. In particular,
we show that the standard systems in which syntactic structure is
represented by strings, G\"odel numbers, and members of an inductive
type are instances of a syntax framework. We also show that several
more sophisticated systems from the literature, including a simplified
version of Lisp, can be viewed as syntax frameworks.
\emph{Reflection} is a technique to embed reasoning about a reasoning
system (i.e., metareasoning) in the reasoning system itself.
Reflection has been employed in logic~\cite{Koellner09}, theorem
proving~\cite{Harrison95}, and programming~\cite{DemersMalenfant95}.
Since metareasoning very often involves the syntactic manipulation of
expressions, a syntax framework is a natural subcomponent of a
reflection mechanism.
\iffalse
The ideas underlying our notion of a syntax framework are not deep,
but they tend to be confusing since they deal with the interplay of
syntax and semantics. This confusion is well known to programmers who
are trying to sort out the meanings of \texttt{quote} and
\texttt{eval} in Lisp. Our objective is to explicate what quotation
and evaluation are and how they interact. We believe that these ideas
are useful in both logic and programming. Most of the paper is
devoted to examples and definitions, but some applications of syntax
frameworks are briefly discussed.
\fi
The rest of the paper is organized as follows.  The next section,
section~\ref{sec:syn-frame}, defines the notion of a syntax framework
and discusses its benefits.  Section~\ref{sec:examples} presents three
standard syntax reasoning systems that are instances of a syntax
framework. Section~\ref{sec:built-in} discusses built-in operators
for quotation and evaluation as found in Lisp and other languages and
explains how the liar paradox is avoided in a syntax framework.
Section~\ref{sec:quasiquotation} defines a notion of quasiquotation in
a syntax framework. Section~\ref{sec:literature} identifies some
sophisticated syntax reasoning systems in the literature that are
instances of a syntax framework. The paper ends with a conclusion in
section~\ref{sec:conclusion}.
\section{Syntax Frameworks} \label{sec:syn-frame}
In this section we will define a mathematical structure called a
\emph{syntax framework}. In the subsequent sections we will give
several examples of syntax reasoning systems that can be interpreted
as instances of this structure.
The reader should note that the notion of a syntax framework presented
here is not adequate to interpret syntax reasoning systems, such as
programming languages, that contain context-sensitive expressions
(such as mutable variables). To interpret these kinds of systems, a
syntax framework must be extended to a \emph{contextual syntax
framework} that includes mutable contexts. For further discussion,
see Remark~\ref{rem:contextual}.
\subsection{Interpreted Languages} \label{subsec:inter-lang}
Let a \emph{formal language} be a set of expressions each having a
unique mathematically precise syntactic structure. We will leave
``expression'' and ``mathematically precise syntactic structure''
unspecified. A formal language $L$ is a \emph{sublanguage} of a
formal language $L'$ if $L \subseteq L'$.
An interpreted language is a formal language with a semantics:
\begin{df}[Interpreted Language] \label{df:interp-lang} \em \begin{sloppypar}
An \emph{interpreted language} is a triple $I=(L,D_{\rm sem},V_{\rm
sem})$ where:
\begin{enumerate}
\item $L$ is a formal language.
\item $D_{\rm sem}$ is a nonempty domain (set) of \emph{semantic
values}.
\item $V_{\rm sem} : L \rightarrow D_{\rm sem}$ is a total function,
called a \emph{semantic valuation function}, that assigns each
expression $e \in L$ a semantic value $V_{\rm sem}(e) \in D_{\rm
sem}$. \hfill $\Box$
\end{enumerate}
\end{sloppypar}
\end{df}
An interpreted language is thus a formal language with an associated
assignment of a semantic meaning to each expression in the language.
Each expression of an interpreted language thus has both a syntactic
structure and a semantic meaning. There is no restriction placed on
what can be a semantic value. An interpreted language is graphically
depicted in Figure~\ref{fig:interp-lang} (we will add elements to this
figure as the discussion advances).
\begin{figure}
\center
\begin{tikzpicture}[scale=.75]
\draw[very thick] (-3,0) circle (3);
\draw (-5.5,0) node {\Large $L$};
\draw[very thick] (7,0) circle (3.5);
\draw (4.4,0) node {\Large $D_{\rm sem}$};
\draw[-triangle 45] (-3,3) .. controls (-1,5) and (5,5.5) .. (7,3.5);
\draw[right] (2.5,5.2) node {\Large $V_{\rm sem}$};
\end{tikzpicture}
\caption{An Interpreted Language} \label{fig:interp-lang}
\end{figure}
\begin{eg}[Many-Sorted First-Order Languages] \label{eg:ms-fol} \em
\begin{sloppypar} Let $L$ be the set of the terms and formulas of a many-sorted
first-order language with sorts $\alpha_1,\ldots,\alpha_n$. Define
$L_i$ to be the set of terms of sort $\alpha_i$ for each $i$ with $1
\le i \le n$ and $L_{\rm f}$ to be the set of formulas of the
many-sorted first-order language. \end{sloppypar}
Let $(D_1,\ldots,D_n,I)$ be a model for the many-sorted first-order
language $L$ where each $D_i$ is a nonempty domain and $I$ is an
interpretation function for the individual constants, function
symbols, and predicate symbols of $L$. Let $\phi_i$ be a mapping from
the variables in $L_i$ to $D_i$ for each $i$ with $1 \le i \le n$.
The model $(D_1,\ldots,D_n,I)$ and variable assignments
$\phi_1,\ldots,\phi_n$ determine a semantic valuation function $V_i :
L_i \rightarrow D_i$ on terms of sort $\alpha_i$ for each $i$ with $1 \le
i \le n$ and a semantic valuation function $V_{\rm f} : L_{\rm f}
\rightarrow \set{\mbox{{\sc t}},\mbox{{\sc f}}}$ on formulas. Then \[(L,D_1 \cup \cdots
\cup D_n \cup \set{\mbox{{\sc t}},\mbox{{\sc f}}}, V_1 \cup \cdots \cup V_n \cup V_{\rm
f})\] is an interpreted language. \hfill $\Box$
\end{eg}
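As a simpler, computational companion to Definition~\ref{df:interp-lang}, an interpreted language can be modeled directly. The toy language and all names below are our own illustration, not part of the definition:

```python
# A toy interpreted language I = (L, D_sem, V_sem); the language and all
# names are illustrative, not part of Definition 1.

L = ["1", "2", "1+2", "2+2"]           # the formal language (a set of expressions)
D_sem = set(range(10))                  # a nonempty domain of semantic values

def V_sem(e):
    """Total semantic valuation function: assigns each e in L a value."""
    assert e in L
    return sum(int(tok) for tok in e.split("+"))

# Every expression in L receives a semantic value in D_sem:
assert all(V_sem(e) in D_sem for e in L)
```

Each expression here has both a syntactic structure (the string itself) and a semantic value (the integer it denotes).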
\subsection{Syntax Representations and Syntax Languages}
A syntax representation of a formal language is an assignment of
syntactic values to the expressions of the language:
\begin{df}[Syntax Representation] \label{df:syn-rep} \em \begin{sloppypar}
Let $L$ be a formal language. A \emph{syntax representation} of $L$ is
a pair $R=(D_{\rm syn},V_{\rm syn})$ where:
\begin{enumerate}
\item $D_{\rm syn}$ is a nonempty domain (set) of \emph{syntactic
values}. Each member of $D_{\rm syn}$ represents a syntactic
structure.
\item $V_{\rm syn} : L \rightarrow D_{\rm syn}$ is an injective, total
function, called a \emph{syntactic valuation function}, that
assigns each expression $e \in L$ a syntactic value $V_{\rm
syn}(e) \in D_{\rm syn}$ such that $V_{\rm syn}(e)$ represents
the syntactic structure of $e$. \hfill $\Box$
\end{enumerate}
\end{sloppypar}
\end{df}
A syntax representation of a formal language is thus an assignment of
a syntactic meaning to each expression in the language. Notice that,
if $R=(D_{\rm syn},V_{\rm syn})$ is a syntax representation of $L$,
then $(L,D_{\rm syn},V_{\rm syn})$ is an interpreted language.
\begin{eg}[Expressions as Strings: Syntax Representation] \label{eg:strings-a} \em \begin{sloppypar}
Let $L$ be a many-sorted first-order language. The expressions of $L$
--- i.e.,~the terms and formulas of $L$ --- can be viewed as certain
strings of symbols. For example, the term $f(x)$ can be viewed as the
string \texttt{"f(x)"} composed of four symbols. Let $\mbox{$\cal A$}$ be the
alphabet of symbols occurring in the expressions of $L$ and
$\mname{strings}_{\cal A}$ be the set of strings over $\mbox{$\cal A$}$. Then the
syntactic structure of an expression can be represented by a string in
$\mname{strings}_{\cal A}$, and we can define a function $S : L
\rightarrow \mname{strings}_{\cal A}$ that maps each expression of $L$ to
the string over $\mbox{$\cal A$}$ that represents its syntactic structure. $S$ is
an injective, total function since, for each $e \in L$, there is
exactly one string in $\mname{strings}_{\cal A}$ that represents the
syntactic structure of $e$. Therefore, $(\mname{strings}_{\cal A},
S)$ is a syntax representation of $L$. \hfill $\Box$ \end{sloppypar}
\end{eg}
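The string representation of Example~\ref{eg:strings-a} can be sketched computationally. Here expressions are modeled as Python tuples and the syntactic valuation maps them to strings; the encoding and names are our own illustrative assumptions:

```python
# A sketch of a syntax representation R = (D_syn, V_syn): expressions are
# modeled as tagged tuples and syntactic values as strings (illustrative
# encoding, in the spirit of Example 2).

L = [("num", 1), ("num", 2), ("plus", ("num", 1), ("num", 2))]

def V_syn(e):
    """Injective, total syntactic valuation: expression -> string."""
    if e[0] == "num":
        return str(e[1])
    return "(" + V_syn(e[1]) + "+" + V_syn(e[2]) + ")"

D_syn = {V_syn(e) for e in L}   # each member represents a syntactic structure

# Injectivity: distinct expressions receive distinct syntactic values.
assert len(D_syn) == len(L)
```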
A syntax language for a syntax representation is a language of
expressions that denote syntactic values in the syntax representation:
\begin{df}[Syntax Language] \label{df:syn-lang} \em \begin{sloppypar}
Let $R=(D_{\rm syn},V_{\rm syn})$ be a syntax representation of a
formal language $L_{\rm obj}$. A \emph{syntax language} for $R$ is a pair
$(L_{\rm syn}, I)$ where:
\begin{enumerate}
\item $I = (L,D_{\rm sem},V_{\rm sem})$ is an interpreted language.
\item $L_{\rm obj}\subseteq L$, $L_{\rm syn} \subseteq L$, and
$D_{\rm syn} \subseteq D_{\rm sem}$.
\item $V_{\rm sem}$ restricted to $L_{\rm syn}$ is a total function
$V'_{\rm sem} : L_{\rm syn} \rightarrow D_{\rm syn}$. \hfill $\Box$
\end{enumerate}
\end{sloppypar}
\end{df}
Notice that, if $(L_{\rm syn}, I)$ is a syntax language for $R$ (as
in the definition above), then $(L_{\rm syn}, D_{\rm syn}, V'_{\rm sem})$ is an
interpreted language.
\begin{eg}[Expressions as Strings: Syntax Language] \label{eg:strings-b} \em
\begin{sloppypar} Let $I= (L, D, V)$ where \[D = D_1 \cup \cdots \cup D_n \cup
\set{\mbox{{\sc t}},\mbox{{\sc f}}}\] and \[V = V_1 \cup \cdots \cup V_n \cup V_{\rm
f}\] be the interpreted language given in Example~\ref{eg:ms-fol}.
Recall that $L$ is the set of terms and formulas of a many-sorted
first-order language with sorts $\alpha_1,\ldots,\alpha_n$. Suppose
$\alpha_1 = \mname{Symbol}$, $\alpha_2 = \mname{String}$, $D_1$ is the
alphabet of $L$, and $D_2$ is the set of strings over $D_1$. Let $S :
L \rightarrow D_2$ be the total function that maps each $e \in L$ to the
string in $D_2$ that represents the syntactic structure of $e$. Then
$R= (D_2,S)$ is a syntax representation of $L$ as in
Example~\ref{eg:strings-a} and $(L_2,I)$ is a syntax language for $R$
since $L_2 \subseteq L$, $D_2 \subseteq D$, and $V$ restricted to
$L_2$ is $V_2 : L_2 \rightarrow D_2$. \hfill $\Box$ \end{sloppypar}
\end{eg}
\subsection{Definition of a Syntax Framework} \label{subsec:frameworks}
A syntax framework is a structure that is built from an interpreted
language $I = (L,D_{\rm sem},V_{\rm sem})$ in three stages.
The first stage is to choose an object language $L_{\rm obj} \subseteq
L$ and a syntax representation $R=(D_{\rm syn},V_{\rm syn})$ for
$L_{\rm obj}$ such that $D_{\rm syn} \subseteq D_{\rm sem}$. ($L_{\rm
obj}$ could be the entire language $L$ as in
Example~\ref{eg:strings-a}.) This first stage is depicted in
Figure~\ref{fig:syn-frame-stage-1}.
\begin{figure}
\center
\begin{tikzpicture}[scale=.75]
\draw[very thick] (-3,0) circle (3);
\draw (-5.5,0) node {\Large $L$};
\draw[very thick] (-4,1) circle (1);
\draw (-4,1) node {\Large $L_{\rm obj}$};
\draw[very thick] (7,0) circle (3.5);
\draw (4.4,0) node {\Large $D_{\rm sem}$};
\draw[very thick] (8,-1) circle (1.5);
\draw (8,-1) node {\Large $D_{\rm syn}$};
\draw[-triangle 45] (-3,3) .. controls (-1,5) and (5,5.5) .. (7,3.5);
\draw[right] (2.5,5.2) node {\Large $V_{\rm sem}$};
\draw[-triangle 45] (-4,2) .. controls (-1,4) and (5,5) .. (8,.5);
\draw[right] (2.5,4) node {\Large $V_{\rm syn}$};
\end{tikzpicture}
\caption{Stage 1 of a Syntax Framework} \label{fig:syn-frame-stage-1}
\end{figure}
The second stage is to choose a language $L_{\rm syn} \subseteq L$
such that $(L_{\rm syn},I)$ is a syntax language for $R$. This second
stage, depicted in Figure~\ref{fig:syn-frame-stage-2}, establishes
$L_{\rm syn}$ as a language that can be used to make statements in $L$
about the syntax of the object language $L_{\rm obj}$ via the syntax
representation established in stage 1. ($V'_{\rm sem}$ is $V_{\rm
sem}$ restricted to $L_{\rm syn}$.)
\begin{figure}
\center
\begin{tikzpicture}[scale=.75]
\draw[very thick] (-3,0) circle (3);
\draw (-5.5,0) node {\Large $L$};
\draw[very thick] (-4,1) circle (1);
\draw (-4,1) node {\Large $L_{\rm obj}$};
\draw[very thick] (-2,-1) circle (1);
\draw (-2,-1) node {\Large $L_{\rm syn}$};
\draw[very thick] (7,0) circle (3.5);
\draw (4.4,0) node {\Large $D_{\rm sem}$};
\draw[very thick] (8,-1) circle (1.5);
\draw (8,-1) node {\Large $D_{\rm syn}$};
\draw[-triangle 45] (-3,3) .. controls (-1,5) and (5,5.5) .. (7,3.5);
\draw[right] (2.5,5.2) node {\Large $V_{\rm sem}$};
\draw[-triangle 45] (-4,2) .. controls (-1,4) and (5,5) .. (8,.5);
\draw[right] (2.5,4) node {\Large $V_{\rm syn}$};
\draw[-triangle 45] (-2,0) .. controls (-1,1.5) and (5,3) .. (8,.51);
\draw[right] (2.5,2.4) node {\Large $V'_{\rm sem}$};
\end{tikzpicture}
\caption{Stage 2 of a Syntax Framework} \label{fig:syn-frame-stage-2}
\end{figure}
The third and final stage is to link $L_{\rm obj}$ and $L_{\rm syn}$
using mappings $Q : L_{\rm obj} \rightarrow L_{\rm syn}$ and $E : L_{\rm
syn} \rightarrow L_{\rm obj}$ as depicted in Figure~\ref{fig:syn-frame}.
$Q$ is an injective, total function such that, for all $e \in L_{\rm
obj}$, \[V_{\rm sem}(Q(e)) = V_{\rm syn}(e).\] For $e \in L_{\rm
obj}$, $Q(e)$ is called the \emph{quotation} of $e$. $Q(e)$ denotes
a value in $D_{\rm syn}$ that represents the syntactic structure of
$e$. $E$ is a (possibly partial) function such that, for all $e \in
L_{\rm syn}$, \[V_{\rm sem}(E(e)) = V_{\rm sem}(V_{\rm
syn}^{-1}(V_{\rm sem}(e)))\] whenever $E(e)$ is defined. For $e \in
L_{\rm syn}$, $E(e)$ is called the \emph{evaluation} of $e$. If it is
defined, $E(e)$ denotes the same value in $D_{\rm sem}$ that the
expression represented by the value of $e$ denotes. Notice that the
equation above implies $E(e)$ is undefined if $V_{\rm sem}(e)$ is not
in the image of $L_{\rm obj}$ under $V_{\rm syn}$. Since there will
usually be different $e_1,e_2 \in L_{\rm syn}$ that denote the same
syntactic value, $E$ will usually not be injective.
\begin{figure}
\center
\begin{tikzpicture}[scale=.75]
\draw[very thick] (-3,0) circle (3);
\draw (-5.5,0) node {\Large $L$};
\draw[very thick] (-4,1) circle (1);
\draw (-4,1) node {\Large $L_{\rm obj}$};
\draw[very thick] (-2,-1) circle (1);
\draw (-2,-1) node {\Large $L_{\rm syn}$};
\draw[very thick] (7,0) circle (3.5);
\draw (4.4,0) node {\Large $D_{\rm sem}$};
\draw[very thick] (8,-1) circle (1.5);
\draw (8,-1) node {\Large $D_{\rm syn}$};
\draw[-triangle 45] (-3,3) .. controls (-1,5) and (5,5.5) .. (7,3.5);
\draw[right] (2.5,5.2) node {\Large $V_{\rm sem}$};
\draw[-triangle 45] (-4,2) .. controls (-1,4) and (5,5) .. (8,.5);
\draw[right] (2.5,4) node {\Large $V_{\rm syn}$};
\draw[-triangle 45] (-2,0) .. controls (-1,1.5) and (5,3) .. (8,.51);
\draw[right] (2.5,2.4) node {\Large $V'_{\rm sem}$};
\draw[-triangle 45] (-3,1) -- (-2,0);
\draw[right] (-2.6,.8) node {\Large $Q$};
\draw[-triangle 45] (-3,-1) -- (-4,0);
\draw[right] (-4.3,-.9) node {\Large $E$};
\end{tikzpicture}
\caption{A Syntax Framework} \label{fig:syn-frame}
\end{figure}
The full definition of a syntax framework is obtained when we put these
three stages together:
\begin{df}[Syntax Framework in an Interpreted Language]\label{df:syn-frame-lang}\em
\begin{sloppypar}
Let $I=(L,D_{\rm sem},V_{\rm sem})$ be an interpreted language
and $L_{\rm obj}$ be a sublanguage of $L$. A \emph{syntax framework}
for $(L_{\rm obj},I)$ is a tuple $F=(D_{\rm syn},V_{\rm syn}, L_{\rm
syn}, Q, E)$ where:\end{sloppypar}
\begin{enumerate}
\item $R = (D_{\rm syn},V_{\rm syn})$ is a syntax representation of
$L_{\rm obj}$.
\item $(L_{\rm syn},I)$ is a syntax language for $R$.
\item $Q : L_{\rm obj} \rightarrow L_{\rm syn}$ is an injective, total
function, called a \emph{quotation function}, such that:
\textbf{Quotation Axiom.} For all $e \in L_{\rm obj}$, \[V_{\rm
sem}(Q(e)) = V_{\rm syn}(e).\]
\item $E : L_{\rm syn} \rightarrow L_{\rm obj}$ is a (possibly partial)
function, called an \emph{evaluation function}, such that:
\textbf{Evaluation Axiom.} For all $e \in L_{\rm syn}$, \[V_{\rm
sem}(E(e)) = V_{\rm sem}(V_{\rm syn}^{-1}(V_{\rm sem}(e)))\]
whenever $E(e)$ is defined. \hfill $\Box$
\end{enumerate}
\end{df}
\begin{sloppypar} \noindent $L$ is called the \emph{full language} of $F$.
When $D_{\rm sem}$ and $V_{\rm sem}$ are understood, we will say that
$F$ is a syntax framework for $L_{\rm obj}$ in $L$. Notice that a
syntax framework contains three interpreted languages: $(L,D_{\rm
sem},V_{\rm sem})$, $(L_{\rm obj},D_{\rm syn},V_{\rm syn})$, and
$(L_{\rm syn}, D_{\rm syn}, V'_{\rm sem})$. Notice also that the
functions $Q$ and $E$ are part of the metalanguage of $L$ and the
expressions of the form $Q(e)$ and $E(e)$ are not necessarily
expressions of $L$. In section~\ref{sec:built-in} we will discuss
syntax frameworks in which quotations and evaluations are expressions
in $L$ itself. \end{sloppypar}
\subsection{Two Basic Lemmas}
\begin{sloppypar} Let $I=(L,D_{\rm sem},V_{\rm sem})$ be an interpreted language,
$L_{\rm obj}$ be a sublanguage of $L$, and $F=(D_{\rm syn}, V_{\rm
syn}, L_{\rm syn}, Q, E)$ be a syntax framework for $(L_{\rm
obj},I)$. \end{sloppypar}
\begin{lem}[Law of Disquotation] \label{lem:disquotation}
For all $e \in L_{\rm obj}$, \[V_{\rm sem}(E(Q(e))) = V_{\rm sem}(e)\]
whenever $E(Q(e))$ is defined.
\end{lem}
\begin{proof}
Let $e \in L_{\rm obj}$ such that $E(Q(e))$ is defined. Then
\setcounter{equation}{0}
\begin{eqnarray}
V_{\rm sem}(E(Q(e))) & = & V_{\rm sem}(V_{\rm syn}^{-1}(V_{\rm sem}(Q(e)))) \\
& = & V_{\rm sem}(V_{\rm syn}^{-1}(V_{\rm syn}(e))) \\
& = & V_{\rm sem}(e)
\end{eqnarray}
(1) follows from the Evaluation Axiom since $E(Q(e))$ is defined. (2)
follows from the Quotation Axiom. And (3) follows from the fact that
$V_{\rm syn}$ is injective, so $V_{\rm syn}^{-1}(V_{\rm syn}(e)) = e$.
\end{proof}
\bigskip
The Law of Disquotation does not hold universally in general because
$E$ may not be total on quotations.
\begin{df}[Direct Evaluation] \label{df:direct-eval} \em \begin{sloppypar}
Let $E^{\ast} : L_{\rm syn} \rightarrow L_{\rm obj}$ be the (possibly
partial) function such that, for all $e \in L_{\rm syn}$, $E^{\ast}(e)
= V_{\rm syn}^{-1}(V_{\rm sem}(e))$ whenever $V_{\rm syn}^{-1}(V_{\rm
sem}(e))$ is defined. $E^{\ast}$ is called the \emph{direct
evaluation function for $F$}. \hfill $\Box$ \end{sloppypar}
\end{df}
\begin{lem}[Direct Evaluation] \label{lem:direct-eval}
\begin{enumerate}
\item[]
\item $E^{\ast}$ satisfies the Evaluation Axiom.
\item For all $e \in L_{\rm syn}$, if $E^{\ast}(e)$ and $E(e)$ are
defined, then \[V_{\rm sem}(E^{\ast}(e)) = V_{\rm sem}(E(e)).\]
\item If $V_{\rm syn}$ is surjective, then $E^{\ast}$ is total.
\end{enumerate}
\end{lem}
\begin{proof}
\bigskip
\noindent \textbf{Part 1} \ Follows immediately from the definition
of $E^{\ast}$.
\bigskip
\noindent \textbf{Part 2} \ Let $e \in L_{\rm syn}$ such that
$E^{\ast}(e)$ and $E(e)$ are defined. By the Evaluation Axiom,
$V_{\rm sem}(E(e)) = V_{\rm sem}(V_{\rm syn}^{-1}(V_{\rm sem}(e))) =
V_{\rm sem}(E^{\ast}(e))$.
\bigskip
\noindent \textbf{Part 3} \ Let $V_{\rm syn}$ be surjective and
$e \in L_{\rm syn}$. Then $V_{\rm syn}^{-1}(V_{\rm sem}(e))$ is
defined and hence $E^{\ast}$ is total by its definition.
\end{proof}
\bigskip
Thus the direct evaluation function is a special evaluation function
that is defined for every syntax framework and is total if the
syntactic valuation function is surjective.
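The partiality of $E^{\ast}$ can be seen in a small sketch (the data and names below are our own illustration): when $V_{\rm syn}$ is not surjective, some syntax-language expressions denote values outside its image, and $E^{\ast}$ is undefined there:

```python
# A toy direct evaluation function E* = V_syn^{-1} o V_sem; illustrative
# data only.

L_obj = ["a", "ab", "abc"]
V_syn = {e: len(e) for e in L_obj}          # injective on this L_obj
inv = {v: e for e, v in V_syn.items()}      # V_syn^{-1}

L_syn = ["1", "2", "3", "4", "5"]           # numerals denoting integers
V_sem = int

def E_star(e):
    """E*(e) = V_syn^{-1}(V_sem(e)); undefined (KeyError) otherwise."""
    return inv[V_sem(e)]

assert E_star("2") == "ab"
# "5" denotes a value outside the image of V_syn, so E* is undefined there:
try:
    E_star("5")
    defined = True
except KeyError:
    defined = False
assert not defined
```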
\subsection{Syntax Frameworks in an Interpreted Theory}
The notion of a syntax framework can be easily lifted from an
interpreted language to an interpreted theory. Let a \emph{theory} be
a pair $T = (L,\Gamma)$ where $L$ is a language and $\Gamma$ is a set
of sentences in $L$ (that serve as the axioms of the theory). A
\emph{model} of $T$ is a pair $M = (D^{M}_{\rm sem},V^{M}_{\rm sem})$
such that $D^{M}_{\rm sem}$ is a set of values that includes the truth
values $\mbox{{\sc t}}$ (true) and $\mbox{{\sc f}}$ (false) and $V^{M}_{\rm sem} : L
\rightarrow D^{M}_{\rm sem}$ is a total function such that, for all
sentences $A \in \Gamma$, $V^{M}_{\rm sem}(A) = \mbox{{\sc t}}$. An
\emph{interpreted theory} is then a pair $I=(T,\mbox{$\cal M$})$ where $T$ is a
theory and $\mbox{$\cal M$}$ is a set of models of $T$.
A syntax framework in an interpreted theory is a syntax framework with
respect to each model of the interpreted theory:
\begin{df}[Syntax Framework in an Interpreted Theory] \label{df:syn-frame-thy} \em
\hspace{2ex}\\
Let $I=(T,\mbox{$\cal M$})$ be an interpreted theory where $T = (L,\Gamma)$ and
$L_{\rm obj}$ be a sublanguage of $L$. A \emph{syntax framework} for
$(L_{\rm obj},I)$ is a triple $F=(L_{\rm syn}, Q, E)$ where:
\begin{enumerate}
\item $L_{\rm syn} \subseteq L$.
\item $Q : L_{\rm obj} \rightarrow L_{\rm syn}$ is an injective, total
function.
\item $E : L_{\rm syn} \rightarrow L_{\rm obj}$ is a (possibly partial)
function.
\item For all $M = (D^{M}_{\rm sem},V^{M}_{\rm sem}) \in \mbox{$\cal M$}$,
$F^M=(D^{M}_{\rm syn},V^{M}_{\rm syn}, L_{\rm syn}, Q, E)$ is a
syntax framework for $(L_{\rm obj},(L,D^{M}_{\rm sem},V^{M}_{\rm
sem}))$ where $D^{M}_{\rm syn}$ is the range of $V^{M}_{\rm sem}$
restricted to $L_{\rm syn}$ and $V^{M}_{\rm syn} = V^{M}_{\rm sem}
\circ Q$.\hfill $\Box$
\end{enumerate}
\end{df}
\subsection{Benefits of a Syntax Framework}
The purpose of a syntax framework is to provide the means to reason
about the syntax of a designated object language. We will briefly
examine the specific benefits that a syntax framework offers for this
purpose.
Let $I=(L,D_{\rm sem},V_{\rm sem})$ be an interpreted language,
$L_{\rm obj}$ be a sublanguage of $L$, and $F=(D_{\rm syn}, V_{\rm
syn}, L_{\rm syn}, Q, E)$ be a syntax framework for $(L_{\rm
obj},I)$.
The first, and most important, benefit of $F$ is that it provides a
language, $L_{\rm syn}$, for expressing statements in $L$ about the
syntactical structure of expressions in $L_{\rm obj}$. These
statements refer to the syntax of $L_{\rm obj}$ via the syntax
representation of $F$. For example, if $A$ is a formula in $L_{\rm
obj}$, $e_A$ is an expression in $L_{\rm syn}$ that denotes the
representation of $A$, and $L$ is sufficiently expressive, we could
express in $L$ a statement of the form $\mname{is-implication}(e_A)$
that \emph{indirectly} says ``$A$ is an implication''.
\begin{sloppypar} Having quotation in $F$ enables statements about the syntax of
$L_{\rm obj}$ to be expressed directly in the metalanguage of $L$.
For example, $\mname{is-implication}(Q(A))$ would \emph{directly} say
``$A$ is an implication''. Quotation also allows us to construct new
expressions from deconstructed components of old expressions. For
example, if $A \Rightarrow B$ is a formula in $L_{\rm obj}$ and $L$ is
sufficiently expressive,
\[\mname{build-implication}(\mname{succedent}(Q(A \Rightarrow
B)),\mname{antecedent}(Q(A \Rightarrow B)))\] would denote the
representation of $B \Rightarrow A$. \end{sloppypar}
Having evaluation in $F$ enables statements about the semantics of the
expressions represented by members of $D_{\rm syn}$ to be expressed
directly in the metalanguage of $L$. For example, if $c$ is the
\mname{build-implication} expression displayed in the previous paragraph, then $E(c)$ would be a
formula in $L_{\rm obj}$ that asserts $B \Rightarrow A$.
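The deconstruction/reconstruction idiom of the last two paragraphs can be sketched as follows, with quotations modeled as tagged tuples $(\mname{imp}, \mbox{antecedent}, \mbox{succedent})$. This is a deliberately degenerate representation chosen for illustration; all names are our own:

```python
# Building the representation of B => A from the quotation of A => B.
# The tuple encoding and function names are illustrative assumptions.

def Q(formula):                        # quotation of a formula
    return formula                     # syntactic values are the tuples themselves

def antecedent(q):
    return q[1]

def succedent(q):
    return q[2]

def build_implication(a, s):
    return ("imp", a, s)

q = Q(("imp", "A", "B"))               # quotation of A => B
flipped = build_implication(succedent(q), antecedent(q))
assert flipped == ("imp", "B", "A")    # represents B => A
```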
By virtue of these basic benefits, a syntax framework is well equipped
to define and specify transformers. As we have mentioned in the
introduction, a \emph{transformer} maps expressions to expressions.
More precisely, an \emph{$n$-ary transformer over a language $L$} maps
expressions $e_1,\ldots,e_n$ in $L$ to an expression $e$ in $L$ (where
$n \ge 0$). A transformer can be defined by either an algorithm
(e.g., a program in a programming language) or a function (e.g., an
expression in a logic that denotes a function). Transformers include
symbolic computation rules (like the product rule mentioned in the
Introduction), rules of inference, rewrite rules, expression
simplifiers, substitution operations, decision procedures, etc.
A transformer over a language $L$ is usually defined only in the
metalanguage of $L$ and is not defined by an expression in $L$ itself.
For example, the rules of inference for first-order logic are not
expressions in first-order logic. A syntax framework with a
sufficiently expressive language can be used to transfer a transformer
over $L$ from the metalanguage of $L$ to $L$ itself. To see this, let
$T : L_{\rm obj} \times \cdots \times L_{\rm obj} \rightarrow L_{\rm obj}$
be an $n$-ary transformer over $L_{\rm obj}$ defined in the
metalanguage of $L$. If $L$ is sufficiently expressive, it would be
possible to define an operator $e_T : L_{\rm syn} \times \cdots \times
L_{\rm syn} \rightarrow L_{\rm syn}$ in $L$ that denotes a function $f_T :
D_{\rm syn} \times \cdots \times D_{\rm syn} \rightarrow D_{\rm syn}$ that
represents $T$. Using quotation, $e_T$ is specified by the following
statement in the metalanguage of $L$: \[\forall\, e_1,\ldots,e_n
\mathrel: L_{\rm obj} \mathrel. e_T(Q(e_1),\ldots,Q(e_n)) =
Q(T(e_1,\ldots,e_n)).\]
The full power of a syntax framework is exhibited in a specification
of the semantic meaning of a transformer. Suppose $L$ is a language
of natural number arithmetic, the expressions in $L_{\rm obj}$ denote
natural numbers, $L_{\rm obj}$ contains a sublanguage $L_{\rm nat}$ of
terms denoting natural numbers, and $L_{\rm syn}$ contains a
sublanguage $L_{\rm num}$ of terms denoting natural number numerals
$Q(0),Q(1),Q(2),\ldots$. Further suppose that $\mname{add}$ is a
binary transformer over $L_{\rm nat}$ that ``adds'' two natural number
terms so that, e.g., $\mname{add}(2,3) = 5$. Then, using evaluation,
the semantic meaning of $e_{\sf add}$, the representation of
\mname{add} in $L$, is specified by the following statement in the
metalanguage of $L$:
\[\forall\, e_1,e_2 \mathrel: L_{\rm num} \mathrel. E(e_{\sf add}(e_1,e_2)) =
E(e_1) + E(e_2)\] where $+ : L_{\rm nat} \times L_{\rm nat} \rightarrow
L_{\rm nat}$ is a binary operator in $L$ that denotes the sum
function.
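The \mname{add} example admits a direct computational sketch. Here quotations of numerals are modeled as strings, and all names are illustrative assumptions rather than the paper's notation:

```python
# The add transformer and its specification via quotation and evaluation;
# quotations of numerals are modeled as strings (illustrative encoding).

def Q(n):                  # quotation of the numeral for n
    return str(n)

def E(s):                  # evaluation of a numeral quotation
    return int(s)

def add(t1, t2):           # transformer over natural-number terms
    return t1 + t2         # "adds" two terms, e.g. add(2, 3) = 5

def e_add(q1, q2):         # representation of add inside the language:
    return Q(add(E(q1), E(q2)))   # e_add(Q(e1), Q(e2)) = Q(add(e1, e2))

# Specification: E(e_add(e1, e2)) == E(e1) + E(e2)
assert e_add(Q(2), Q(3)) == Q(5) == "5"
assert E(e_add(Q(2), Q(3))) == E(Q(2)) + E(Q(3)) == 5
```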
See \cite{Farmer13} for further discussion on how transformers can be
formalized using a syntax framework.
\subsection{Further Remarks}
\begin{rem}[Syntax Representation]\em
Although a syntax representation is a crucial component of a syntax
framework, very little restriction is placed on what a syntax
representation can be. Almost any representation that captures the
syntactic structure of the expressions in the object language is
acceptable. In fact, it is not necessary to capture the entire
syntactic structure of an expression, only the part of the syntactic
structure that is of interest to the developer of the syntax
framework.\hfill $\Box$
\end{rem}
\begin{rem}[Theories of Quotation]\em
The quotation function $Q$ of a syntax framework is based on the
\emph{disquotational theory of quotation}~\cite{Quotation12}.
According to this theory, a quotation of an expression $e$ is an
expression that denotes $e$ itself. In our definition of a syntax
framework, $Q(e)$ denotes a value that represents $e$ (as a syntactic
entity). Andrew Polonsky presents in~\cite{Polonsky11} a set of
axioms for quotation operators of this kind. There are several other
theories of quotation that have been
proposed~\cite{Quotation12}.\hfill $\Box$
\end{rem}
\begin{rem}[Theories of Truth]\em\begin{sloppypar}
When $e$ is a representation of a truth-valued expression $e'$, the
evaluation $E(e)$ is a formula that asserts the truth of $e'$. Thus
the evaluation function $E$ of a syntax framework is a \emph{truth
predicate}~\cite{Truth13}. A truth predicate is the face of a
\emph{theory of truth}: the properties of a truth predicate
characterize a theory of truth~\cite{Leitgeb07}. The definition of a
syntax framework imposes no restriction on $E$ as a truth predicate
other than that the Evaluation Axiom must hold. What truth is and how
it can be formalized is a fundamental research area of logic, and
avoiding inconsistencies derived from the liar paradox (which we
address below) and similar statements is one of the major research
issues in the area (see~\cite{Halbach11}).\hfill $\Box$\end{sloppypar}
\end{rem}
\begin{rem}[Contextual Syntax Frameworks]\em\label{rem:contextual}
We have mentioned already that a syntax framework cannot interpret
syntax reasoning systems that contain context-sensitive expressions.
This means that a syntax framework is not suitable for programming
languages with mutable variables. For programming languages, a syntax
framework needs to be generalized to a \emph{contextual syntax
framework} that includes a semantic valuation function that takes a
\emph{valuation context} as part of its input and returns a modified
valuation context as part of its output. \emph{Metaprogramming} is
the writing of programs that manipulate other programs. It requires a
means to manipulate the syntax of the programs in a programming
language. In other words, metaprogramming requires code to be data.
Examples of metaprogramming languages include Lisp,
Agda~\cite{Norell07,Norell09}, F\#~\cite{FSharp11},
MetaML~\cite{TahaSheard00}, MetaOCaml~\cite{MetaOCaml11},
reFLect~\cite{GrundyEtAl06}, and Template
Haskell~\cite{SheardJones02}. An appropriate contextual syntax
framework would provide a good basis for discussing the code
manipulation done in metaprogramming. We will present the notion of a
contextual syntax framework in a future paper.\hfill $\Box$
\end{rem}
\section{Three Standard Examples} \label{sec:examples}
We will now present three standard syntax reasoning systems that are
examples of a syntax framework.
\subsection{Example: Expressions as Strings} \label{subsec:strings}
We will continue the development of Example~\ref{eg:strings-b}.
Suppose $L$ contains the following operators:
\begin{itemize}
\item An individual constant $c_a$ of sort \mname{Symbol} for each
$a \in \mbox{$\cal A$}$.
\item An individual constant \mname{nil} of sort \mname{String}.
\item A function symbol \mname{cons} of sort $\mname{Symbol} \times
\mname{String} \rightarrow \mname{String}$.
\item A function symbol \mname{head} of sort $\mname{String} \rightarrow
\mname{Symbol}$.
\item A function symbol \mname{tail} of sort $\mname{String} \rightarrow
\mname{String}$.
\end{itemize}
The terms of sort \mname{String} are intended to denote strings over
$\mbox{$\cal A$}$ in the usual way. \mname{cons} is used to describe the
construction of strings, while \mname{head} and \mname{tail} are used
to describe the deconstruction of strings. The terms of sort
\mname{String} can thus be used as a language to reason
\emph{directly} about strings over $\mbox{$\cal A$}$ and \emph{indirectly} about
the syntactic structure of the expressions of $L$ (including the terms
of sort \mname{String} themselves).
This reasoning system for the syntax of $L$ can be strengthened by
interconnecting the expressions of $L$ and the terms of sort
\mname{String}. This is done by defining a quotation function $Q$ and
an evaluation function $E$.
$Q : L \rightarrow L_2$ maps each expression $e$ of $L$ to a term $Q(e)$
of sort \mname{String} such that $Q(e)$ denotes $S(e)$, the string
over $\mbox{$\cal A$}$ that represents $e$. For example, $Q$ could map $f(x)$
to \[\mname{cons}(c_{\tt f}, \mname{cons}(c_{\tt (},
\mname{cons}(c_{\tt x}, \mname{cons}(c_{\tt )}, \mname{nil})))),\]
which denotes the string \texttt{"f(x)"}. Thus $Q$ provides the means
to refer to a representation of the syntactic structure of an
expression of $L$.
$E : L_2 \rightarrow L$ maps each term $t$ of sort \mname{String} to the
expression $E(t)$ of $L$ such that the syntactic structure of $E(t)$
is represented by the string denoted by $t$ provided $t$ denotes a
string that actually represents the syntactic structure of some
expression of $L$. For example, $E$ maps the term displayed above
(i.e., $Q(f(x))$) to $f(x)$. Thus $E$ provides the means to refer to
the value of the expression whose syntactic structure is represented
by the string that a term of sort \mname{String} denotes. $E$ is a
partial function on the terms of sort \mname{String} since not every
string in $D_2$ represents the syntactic structure of some expression
in $L$ and $V_2$ is surjective. Notice that, for all expressions $e$
of $L$, $E(Q(e)) = e$ --- that is, the \emph{law of disquotation}
holds universally.
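The \mname{cons}/\mname{head}/\mname{tail} vocabulary above can be sketched with strings modeled as tuples of one-character symbols (an illustrative encoding only):

```python
# The cons/head/tail string vocabulary; strings are modeled as Python
# tuples of one-character symbols (illustrative encoding).

nil = ()

def cons(sym, s):
    return (sym,) + s

def head(s):
    return s[0]

def tail(s):
    return s[1:]

# The quotation of f(x) discussed in the text:
q_fx = cons("f", cons("(", cons("x", cons(")", nil))))
assert "".join(q_fx) == "f(x)"         # denotes the string "f(x)"
assert head(q_fx) == "f" and tail(q_fx) == ("(", "x", ")")
```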
We showed previously that $I = (L, D, V)$ is an interpreted language,
$R= (D_2,S)$ is a syntax representation of $L$, and $(L_2,I)$ is a
syntax language for $R$. $Q$ is injective since the syntactic
structure of each expression in $L$ is represented by a unique string
in $D_2$. For $e \in L$, \[V(Q(e)) = V_2(Q(e)) = S(e),\] and thus $Q$
satisfies the Quotation Axiom if $L_{\rm obj} = L$, $D_{\rm syn} =
D_2$, $V_{\rm syn} = S$, and $L_{\rm syn} = L_2$. For $t \in L_2$
such that $E(t)$ is defined,
\[V(E(t)) = V(V^{-1}_{\rm syn}(V_2(t))) = V(V^{-1}_{\rm syn}(V(t))),\]
and thus $E$ satisfies the Evaluation Axiom if $L_{\rm obj} = L$,
$D_{\rm syn} = D_2$, $V_{\rm syn} = S$, and $L_{\rm syn} = L_2$.
Therefore, \[F = (D_2,S,L_2,Q,E)\] is a syntax framework for
$(L,I)$. Notice that $E$ is actually the direct evaluation function
for $F$.
\subsection{Example: G\"odel Numbering} \label{subsec:goedel}
Let $L$ be the expressions (i.e., terms and formulas) of a first-order
language of natural number arithmetic, and let $\mbox{$\cal A$}$ be the
alphabet of symbols occurring in the expressions of $L$. Once again
the expressions of $L$ can be viewed as strings over the alphabet
$\mbox{$\cal A$}$. As Kurt G\"odel famously showed in 1931~\cite{Goedel31}, the
syntactic structure of an expression $e$ of $L$ can be represented by
a natural number called the G\"odel number of $e$. Define $G$ to be
the total function that maps each expression of $L$ to its G\"odel
number. $G$ is injective since each expression in $L$ has a unique
G\"odel number. The terms of $L$, which denote natural numbers, can
thus be used to reason \emph{directly} about G\"odel numbers and
\emph{indirectly} about the syntactic structure of the expressions of
$L$.
We will show that this reasoning system based on G\"odel numbers can
be interpreted as a syntax framework. Let $L_{\rm t}$ be the set of
terms in $L$ and $L_{\rm f}$ be the set of formulas in $L$. Then \[I=
(L, \mathbb{N} \cup \set{\mbox{{\sc t}},\mbox{{\sc f}}}, V),\] where $L = L_{\rm t}
\cup L_{\rm f}$, $\mathbb{N}$ is the set of natural numbers, and $V =
V_{\rm t} \cup V_{\rm f}$, is an interpreted language corresponding to
the language given in Example~\ref{eg:ms-fol}.
Since $G : L \rightarrow \mathbb{N}$ is an injective, total function that
maps each expression in $L$ to its G\"odel number, $R =
(\mathbb{N},G)$ is a syntax representation of $L$. Since $L_{\rm t}
\subseteq L$, $\mathbb{N} \subseteq \mathbb{N} \cup
\set{\mbox{{\sc t}},\mbox{{\sc f}}}$, and $V$ restricted to $L_{\rm t}$ is $V_{\rm t} :
L_{\rm t} \rightarrow \mathbb{N}$, $(L_{\rm t},I)$ is a syntax language
for $R$.
Let $Q : L \rightarrow L_{\rm t}$ be a total function that maps each
expression $e \in L$ to a term $t \in L_{\rm t}$ such that $V_{\rm
t}(t) = G(e)$. $Q$ is injective since each expression in $L$ has a
unique G\"odel number. For $e \in L$, \[V(Q(e)) = V_{\rm t}(Q(e)) =
G(e),\] and thus $Q$ satisfies the Quotation Axiom if $L_{\rm obj} =
L$, $D_{\rm syn} = \mathbb{N}$, $V_{\rm syn} = G$, and $L_{\rm syn} =
L_{\rm t}$.
Let $E : L_{\rm t} \rightarrow L$ be the function that, for all $t \in
L_{\rm t}$, $E(t)$ is the expression in $L$ whose G\"odel number is
$V_{\rm t}(t)$ if $V_{\rm t}(t)$ is a G\"odel number of some
expression in $L$ and $E(t)$ is undefined otherwise. For $t \in L_{\rm
t}$ such that $E(t)$ is defined, \[V(E(t)) = V(G^{-1}(V_{\rm t}(t)))
= V(G^{-1}(V(t))),\] and thus $E$ satisfies the Evaluation Axiom if
$L_{\rm obj} = L$, $D_{\rm syn} = \mathbb{N}$, $V_{\rm syn} = G$, and
$L_{\rm syn} = L_{\rm t}$. Since not every natural number is a
G\"odel number of an expression in $L$, $G : L \rightarrow \mathbb{N}$ is
not surjective and thus $E: L_{\rm t} \rightarrow L$ is partial. For an
expression $e$ of $L$, $Q(e) = t$ such that $V_{\rm t}(t) = G(e)$ by
the definition of $Q$ and then $E(t) = e$ by the definition of
$E$. Hence $E(Q(e)) = e$ and so the law of disquotation holds
universally.
Therefore,
\[F = (\mathbb{N},G,L_{\rm t},Q,E)\]
is a syntax framework for $(L,I)$. Notice that $E$ is actually the
direct evaluation function for $F$.
Define $L'_{\rm t}$ to be the sublanguage of $L_{\rm t}$ such that $t
\in L'_{\rm t}$ iff $V(t)$ is a G\"odel number of some expression in
$L$. Then \[F' = (\mathbb{N},G,L'_{\rm t},Q,E'),\] where $E'$ is $E$
restricted to $L'_{\rm t}$, is a syntax framework for $(L,I)$ in which
the evaluation function $E'$ is total.
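This reasoning can be illustrated with a Python sketch (an assumed toy encoding, with UTF-8 bytes standing in for a genuine G\"odel numbering). It exhibits the properties used in the example: $G$ is injective and total, $Q(e)$ is a numeral denoting $G(e)$, $E$ is partial, and $E(Q(e)) = e$.

```python
# Toy Goedel numbering (assumed encoding): each expression string is mapped
# injectively to a natural number via its UTF-8 bytes.

def G(e):
    """Goedel number of expression e (injective and total)."""
    return int.from_bytes(e.encode("utf-8"), "big")

def G_inverse(n):
    """Partial inverse of G: decode n back to an expression when possible."""
    length = (n.bit_length() + 7) // 8
    return n.to_bytes(length, "big").decode("utf-8")

def Q(e):
    """Quotation: a numeral term denoting G(e)."""
    return str(G(e))

def E(t):
    """Evaluation: partial; maps a numeral term to the expression it encodes."""
    return G_inverse(int(t))

e = "0 + S(0) = S(0)"
assert int(Q(e)) == G(e)                 # Quotation Axiom: V_t(Q(e)) = G(e)
assert E(Q(e)) == e                      # law of disquotation holds universally
```

Not every natural number decodes to an expression (\texttt{G\_inverse} fails on invalid byte sequences), which mirrors the partiality of $E$ noted above.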
\subsection{Example: Expressions as Members of an Inductive Type} \label{subsec:ind-type}
In the previous two subsections we saw how strings of symbols and
G\"odel numbers can be used to represent the syntactic structure of
expressions. These two syntax representations are very popular, but
they are not convenient for practical applications. In this example
we will see a much more practical syntax representation in which
expressions are represented as members of an inductive type.
Let $L_{\rm prop}$ be a language of propositional logic (with logical
connectives for negation, conjunction, and disjunction). An
interpreter for the language $L_{\rm prop}$ is a program that receives
user input (which we assume is a string), parses the input into a
usable internal representation (i.e., a parse or syntax tree),
computes the value of the internal representation in the form of a new
internal representation, and then displays the new internal
representation in a user-readable form (which we again assume is a
string). We will describe the components of such an interpreter.
Let \texttt{formula} be the type of the internal data structures
representing the propositional formulas in $L_{\rm prop}$. This type can be
implemented as an inductive type, e.g., in F\#~\cite{FSharp14}
as:
\begin{verbatim}
type formula =
| True
| False
| Var of string
| Neg of formula
| And of (formula * formula)
| Or of (formula * formula)
\end{verbatim}
Notice that the type constructors correspond precisely to the various
ways of constructing a well-formed formula in propositional logic.
The interpreter for $L_{\rm prop}$ is the composition of the following functions:
\begin{enumerate}
\item A function \texttt{parse} of type $\mathtt{string} \rightarrow
\mathtt{formula}$ which parses a user input string into an
internal representation of a well-formed propositional formula ---
or raises an error if the input does not represent one. For the
sake of simplicity, we assume that $L_{\rm prop}$ is chosen so
that \texttt{parse} is injective.
\item A function \texttt{value} of type $\mathtt{formula} \rightarrow
\mathtt{formula}$ which determines the truth value of a
propositional formula of $L_{\rm prop}$ --- or simplifies it in cases that
contain unknown variables. We will later see how this function
    also requires an additional input, a variable assignment $\phi$.
\item A function \texttt{print} of type $\mathtt{formula} \rightarrow
\mathtt{string}$ which prints an internal representation of a
formula as a string for the user. We assume that, for each string
$e$ representing a well-formed propositional formula of $L_{\rm
prop}$, $\mathtt{print}(\mathtt{parse}(e)) = e$.
\end{enumerate}
\noindent
For example, suppose $e = \texttt{"p \& true"}$ is a user input string
that denotes a propositional formula in $L_{\rm prop}$. Then $f =
\texttt{parse}(e) = \texttt{And (Var "p",True)}$ is the expression of
type \texttt{formula} that denotes its internal representation, $f' =
\texttt{value}(f) = \texttt{Var "p"}$ is the expression of type
\texttt{formula} that denotes its computed value, and $e' =
\texttt{print}(f') = \texttt{"p"}$ is the string representation of its
computed value. Hence the interpretation of $e$
is \[\mathtt{print}(\mathtt{value}(\mathtt{parse}(e))).\]
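This pipeline can be mimicked in Python (a simplified sketch under stated assumptions: formulas are tagged tuples standing in for the inductive type \texttt{formula}, the toy grammar has no parentheses, and \texttt{print\_formula} plays the role of \texttt{print}).

```python
def parse(s):
    """Toy parser for a parenthesis-free fragment: ident | true | false | ! & |."""
    tokens = s.replace("&", " & ").replace("|", " | ").replace("!", " ! ").split()
    def atom(i):
        t = tokens[i]
        if t == "!":
            f, j = atom(i + 1)
            return ("Neg", f), j
        if t == "true":
            return ("True",), i + 1
        if t == "false":
            return ("False",), i + 1
        return ("Var", t), i + 1
    f, i = atom(0)
    while i < len(tokens):               # left-associative chain of & and |
        op = {"&": "And", "|": "Or"}[tokens[i]]
        g, i = atom(i + 1)
        f = (op, f, g)
    return f

def value(f, phi=None):
    """Simplify f under a variable assignment phi; unknowns stay symbolic."""
    phi = phi or {}
    tag = f[0]
    if tag == "Var":
        v = phi.get(f[1])
        return f if v is None else (("True",) if v else ("False",))
    if tag == "Neg":
        g = value(f[1], phi)
        return {"True": ("False",), "False": ("True",)}.get(g[0], ("Neg", g))
    if tag in ("And", "Or"):
        a, b = value(f[1], phi), value(f[2], phi)
        unit = ("True",) if tag == "And" else ("False",)
        zero = ("False",) if tag == "And" else ("True",)
        if zero in (a, b):
            return zero
        if a == unit:
            return b
        return a if b == unit else (tag, a, b)
    return f                             # True / False

def print_formula(f):
    """Inverse of parse on well-formed formulas ("print" in the text)."""
    tag = f[0]
    if tag in ("True", "False"):
        return tag.lower()
    if tag == "Var":
        return f[1]
    if tag == "Neg":
        return "!" + print_formula(f[1])
    return print_formula(f[1]) + (" & " if tag == "And" else " | ") + print_formula(f[2])

assert parse("p & true") == ("And", ("Var", "p"), ("True",))
assert print_formula(value(parse("p & true"))) == "p"
```

In this sketch \texttt{print\_formula(parse(e))} returns \texttt{e} for formulas written in the canonical spacing, matching the assumption $\mathtt{print}(\mathtt{parse}(e)) = e$, and \texttt{value} with the assignment \texttt{phi} corresponds to the valuation relative to a variable assignment $\phi$.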
We will show that this system for interpreting propositional formulas
can be regarded as a syntax framework. This example demonstrates how
to add a syntax representation and a syntax language to a language
that does not inherently support reasoning about syntax. It also
demonstrates that any typical implementation of a formal language can
be interpreted as a syntax framework.
Let $L_{\rm prop}$ be the set of well-formed formulas of propositional
logic represented by strings as discussed above, $D_{\rm prop} =
\set{\mbox{{\sc t}},\mbox{{\sc f}}}$ be the domain of truth values (i.e., the values
formulas of propositional logic denote), and $V_{\rm prop}^\phi:
L_{\rm prop} \rightarrow D_{\rm prop}$ be the semantic valuation function
for propositional logic relative to a variable assignment $\phi$. Then
$I_{\rm prop} = (L_{\rm prop},D_{\rm prop},V_{\rm prop}^\phi)$ is an
interpreted language for propositional logic.
Similarly, let $L_{\rm form}$ be the set of expressions of type
\texttt{formula}, $D_{\rm form}$ be the members of the inductive type
\texttt{formula}, and $V_{\rm form}: L_{\rm form} \rightarrow D_{\rm
form}$ be the semantic valuation function for the expressions of
type \texttt{formula}. Then $I_{\rm form} = (L_{\rm form},D_{\rm
form},V_{\rm form})$ is also an interpreted language. This secondary
interpreted language is the augmentation that we are adding to the
language of propositional logic in order to represent the syntax of
$L_{\rm prop}$. Using functions similar to \texttt{parse},
\texttt{value}, and \texttt{print} shown above, we can implement the
language $I_{\rm prop}$ in a programming language.
Let $P: L_{\rm prop} \rightarrow D_{\rm form}$ be the function such that,
for $e \in L_{\rm prop}$, $P(e)$ is the value of type
$\mathtt{formula}$ denoted by $\texttt{parse}(e)$. Then $P$ is an
injective, total function since each $e \in L_{\rm prop}$ has exactly
one parse tree that represents the syntactic structure of $e$.
Therefore, $R = (D_{\rm form},P)$ is a syntax representation of
$L_{\rm prop}$. The structures $I_{\rm prop}$, $I_{\rm form}$, and
$R$ are depicted in Figure~\ref{fig:impl-lang}.
\begin{figure}
\center
\begin{tikzpicture}[scale=.75]
\draw[very thick] (0,0) circle (1);
\draw (0,0) node {\Large $L_{\rm prop}$};
\draw[very thick] (0,-3) circle (1);
\draw (0,-3) node {\Large $L_{\rm form}$};
\draw[very thick] (10,0) circle (1);
\draw (10,0) node {\Large $D_{\rm prop}$};
\draw[very thick] (10,-3) circle (1);
\draw (10,-3) node {\Large $D_{\rm form}$};
\draw[-triangle 45] (1,-3) .. controls (5,-4) and (6,-4) .. (9,-3);
\draw[right] (4.5,-4.3) node {\Large $V_{\rm form}$};
\draw[-triangle 45] (1,0) .. controls (4,1) and (6,1) .. (9,0);
\draw[right] (4.5,1.3) node {\Large $V_{\rm prop}^\phi$};
\draw[-triangle 45] (0.9,-0.5) .. controls (4,-0.8) and (6, -1) .. (9.3,-2.3);
\draw[right] (4.5,-0.5) node {\Large $P$};
\draw[-triangle 45] (0.7,-0.7) -- (0.7,-2.3);
\draw[right] (1,-1.5) node {\Large $\mathtt{parse}$};
\draw[-triangle 45] (-0.7,-2.3) -- (-0.7,-0.7);
\draw[right] (-3,-1.5) node {\Large $\mathtt{print}$};
\draw[-triangle 45] (-0.9,-3.5) .. controls (-1.9,-4) and (-1,-4.9) .. (-0.5,-3.9);
\draw[right] (-3.8,-4.2) node {\Large $\mathtt{value}^\phi$};
\end{tikzpicture}
\caption{Domains and Mappings related to $L_{\rm prop}$} \label{fig:impl-lang}
\end{figure}
Let $L=L_{\rm prop} \cup L_{\rm form}$, $D=D_{\rm prop} \cup D_{\rm
form}$, and $V^\phi = V_{\rm prop}^\phi \cup V_{\rm form}$. $V^\phi$
is a function since the two functions $V_{\rm prop}^\phi$ and $V_{\rm
form}$ have disjoint domains. Then $I=(L,D,V^\phi)$ is an
interpreted language and $(L_{\rm form},I)$ is a syntax language for
$R$ by construction.
The tuple \[F=(D_{\rm form},P,L_{\rm
form},\mathtt{parse},\mathtt{print})\] is a syntax framework for
$(L_{\rm prop},I)$ since:
\begin{enumerate}
\item $R=(D_{\rm form},P)$ is a syntax representation of $L_{\rm
prop}$ as shown above.
\item $(L_{\rm form},I)$ is a syntax language for $R$ as shown above.
\item \textbf{Quotation Axiom}: For all $e \in L_{\rm prop}$,
$P(e)=V_{\rm form}(\mathtt{parse}(e))$ by definition, and thus
\[V^\phi(\mathtt{parse}(e)) = V_{\rm form}(\mathtt{parse}(e)) = P(e).\]
\item \textbf{Evaluation Axiom}: For all $e \in L_{\rm form}$,
$P^{-1}(V_{\rm form}(e)) = \mathtt{parse}^{-1}(e) =
\mathtt{print}(e)$ since $\mathtt{print}(\mathtt{parse}(e)) = e$,
and thus
\[V^\phi(\mathtt{print}(e)) = V^\phi(P^{-1}(V_{\rm form}(e))) = V^\phi(P^{-1}(V^\phi(e))).\]
\end{enumerate}
\noindent
Since $\texttt{print}(\mathtt{parse}(e)) = e$ holds for all
expressions in $L_{\rm prop}$, the Law of Disquotation holds
universally.
The syntax framework for this example provides the structure that is
needed to understand the function $\mathtt{value}^\phi$ shown in
Figure~\ref{fig:impl-lang} as an implementation of the semantic
valuation function $V_{\rm prop}^\phi$. The formula that specifies
$\mathtt{value}^\phi$, \[V^\phi(e) =
V^\phi(\mathtt{print}(\mathtt{value}^\phi(\mathtt{parse}(e)))),\]
illustrates the interplay of syntax and semantics that is inherent in
its meaning.
The approach employed in this third example, in which the syntactic
values are members of an inductive type, is commonly used in
programming to represent syntax (see~\cite{FriedmanWand08}). It
utilizes a \emph{deep embedding}~\cite{BoultonEtAl93} of the object
language $L_{\rm obj}$ into the full underlying formal language $L$.
\subsection{Further Remarks}
\begin{rem}[Variable Binding]\em
None of the standard examples discussed above treat variable binding
constructions in any special way. There are other syntax
representation methods that identify expressions that are the same up
to a renaming of the variables that are bound by variable binders.
One method is \emph{higher-order abstract
syntax}~\cite{Miller00,PfenningElliot88} in which the syntactic
structure of an expression with variable binders is represented by a
term in typed lambda calculus. Another method is \emph{nominal
techniques}~\cite{GabbayPitts02,Pitts03} in which the swapping of
variable names can be explicitly expressed. The
paper~\cite{NanevskiPfenning05} combines quotation/evaluation
techniques with nominal techniques.\hfill $\Box$
\end{rem}
\begin{rem}[Types]\em
The languages in a syntax framework are not required to be typed.
However, it is natural that, if an expression $e$ in the object
language is of type $\alpha$, then $Q(e)$ should be of some type
$\mname{expr}(\alpha)$. The operator $\mname{expr}$ behaves like the
necessity operator $\Box$ in modal logic~\cite{DaviesPfenning01}. An
important design decision for such a type system is whether or not
every expression of the syntax language equals a quotation of an
expression. In other words, should a syntax framework with a type
system admit only expressions in the syntax language that denote the
syntactic structure of well-formed expressions, or should it admit in
addition expressions that denote the syntactic structure of ill-formed
expressions?  Recall that in the example of
subsection~\ref{subsec:goedel} the syntax language of $F$ contains the
latter kind of expressions, while the syntax language of $F'$ contains
only the former kind.\hfill$\Box$
\end{rem}
\section{Syntax Frameworks with Built-In Operators} \label{sec:built-in}
The three examples in the previous section illustrate how a syntax
framework provides the means to reason about the syntax of a
designated object language $L_{\rm obj} \subseteq L$. In all three
examples, only \emph{indirect statements} about the syntax of $L_{\rm
obj}$ can be expressed in $L$, while direct statements using $Q$ and
$E$ can be expressed in the metalanguage of $L$. In this section we
will explore syntax frameworks in which \emph{direct statements} about
the syntax of $L_{\rm obj}$, such as $E(Q(e)) = e$, can be expressed
in $L$ itself.
\subsection{Built-in Quotation and Evaluation}
\begin{sloppypar} Let $I=(L,D,V)$ be an interpreted language, $L_{\rm obj}$ be a
sublanguage of $L$, and $F=(D_{\rm syn}, V_{\rm syn}, L_{\rm syn}, Q,
E)$ be a syntax framework for $(L_{\rm obj},I)$. $F$ has
\emph{built-in quotation} if there is an operator (which we will
denote as \mname{quote}) such that, for all $e \in L_{\rm obj}$,
$Q(e)$ is the syntactic result of applying the operator to $e$ (which
we will denote as $\mname{quote}(e)$). $F$ has \emph{built-in
evaluation} if there is an operator (which we will denote as
\mname{eval}) such that, for all $e \in L_{\rm syn}$, $E(e)$ is the
syntactic result of applying the operator to $e$ (which we will denote
as $\mname{eval}(e)$) whenever $E(e)$ is defined.\footnote{If $L_{\rm
obj}$ is a typed language, it may be necessary for the
\mname{eval} operator to include a parameter that ranges over the
types of the expressions in $L_{\rm obj}$.} There are similar
definitions of built-in quotation and evaluation for syntax frameworks
in interpreted theories. \end{sloppypar}
Assume $F$ has both built-in quotation and evaluation. Then
quotations and evaluations are expressions in $L$, and $F$ thus
provides the means to reason directly in $L$ about the interplay of
the syntax and semantics of the expressions in $L_{\rm obj}$. In
particular, it is possible to specify in $L$ the semantic meanings of
transformers. The following lemma shows that, since the quotations
and evaluations in $F$ begin with the operators \mname{quote} and
\mname{eval}, respectively, $E$ cannot be the direct evaluation for
$F$.
\begin{lem}
Suppose $F$ is a syntax framework that has built-in quotation and
evaluation. Then $E \not= E^{\ast}$.
\end{lem}
\begin{proof}
Suppose $E = E^{\ast}$. Let $e \in L_{\rm obj}$. Then
\setcounter{equation}{0}
\begin{eqnarray}
e & = & V_{\rm syn}^{-1}(V_{\rm syn}(e)) \\
  & = & V_{\rm syn}^{-1}(V(\mname{quote}(e))) \\
& = & E^{\ast}(\mname{quote}(e)) \\
& = & E(\mname{quote}(e)) \\
& = & \mname{eval}(\mname{quote}(e))
\end{eqnarray}
(1) is by the fact that $V_{\rm syn}$ is total on $L_{\rm obj}$; (2)
is by built-in quotation and the Quotation Axiom; (3) is by the
definition of the direct evaluation function; (4) is by hypothesis;
and (5) is by the fact that $E$ is built in. Hence $e =
\mname{eval}(\mname{quote}(e))$, which is a contradiction since these
are syntactically distinct expressions.
\end{proof}
\bigskip
The syntax framework $F$ is \emph{replete} if the object language of
$F$ is equal to the full language of $F$ (i.e., $L_{\rm obj} = L$) and
$F$ has both built-in quotation and evaluation. A replete syntax
framework whose full language is $L$ has the facility to reason about
the syntax of all of $L$ within $L$ itself. $F$ is \emph{weakly
replete} if $L_{\rm syn} \subseteq L_{\rm obj}$ and $F$ has both
built-in quotation and evaluation. There are similar definitions of
replete and weakly replete for syntax frameworks in interpreted
theories. We will give two examples of a replete syntax framework,
one in the next subsection and one in section~\ref{sec:literature}.
We will also give another example in section~\ref{sec:literature} of a
syntax framework that is almost replete.
\begin{rem}\em
A \emph{biform
theory}~\cite{CaretteFarmer08,Farmer07b,FarmerMohrenschildt03} is a
combination of an axiomatic theory and an algorithmic theory. It is a
basic unit of mathematical knowledge that consists of a set of
\emph{concepts}, \emph{transformers}, and \emph{facts}. The concepts
are symbols that denote mathematical values and, together with the
transformers, form a language $L$ for the theory. The transformers
are programs whose input and output are expressions in $L$; they
represent syntax-based algorithms like reasoning rules. The facts are
statements expressed in $L$ about the concepts and transformers. A
logic with a replete syntax framework (such as Chiron discussed in
subsection~\ref{subsec:chiron}) is well-suited for formalizing biform
theories~\cite{Farmer07b}.\hfill $\Box$
\end{rem}
\subsection{Example: Lisp} \label{subsec:lisp}
We will show that the Lisp programming language with a simplified
semantics is an instance of a syntax framework with built-in quotation
and evaluation.
Choose some standard implementation of Lisp. Let $L$ be the set of
S-expressions that do not change the Lisp valuation context when they
are evaluated by the Lisp interpreter. Let $V : L \rightarrow L \cup
\set{\bot}$ be the total function that, for all S-expressions $e \in
L$, $V(e)$ is the S-expression the interpreter returns when $e$ is
evaluated if the interpreter returns an S-expression in $L$ and $V(e)
= \bot$ otherwise. $I = (L,L \cup \set{\bot},V)$ is thus an
interpreted language.
$R = (L,\mname{id}_L)$, where $\mname{id}_L$ is the identity function
on $L$, is a syntax representation of $L$ since each S-expression
represents its own syntactic structure. Let $L'$ be the sublanguage
of $L$ such that, for all $e \in L$, $e \in L'$ iff $V(e) \not= \bot$.
It follows immediately by the definition of $L'$ that $(L',I)$ is a
syntax language for $R$.
Let $Q : L \rightarrow L'$ be the total function that maps each $e \in L$
to the S-expression $(\texttt{quote} \; e)$. For $e \in L$, $Q(e) \in
L'$ since $V((\texttt{quote} \; e)) = e \not= \bot$. $Q$ is obviously
injective. For $e \in L$, \[V(Q(e)) = V((\texttt{quote} \; e)) = e =
\mname{id}_L(e),\] and thus $Q$ satisfies the Quotation Axiom if
$L_{\rm obj} = L$, $D_{\rm syn} = L$, $V_{\rm syn} = \mname{id}_L$,
and $L_{\rm syn} = L'$.
Let $E : L' \rightarrow L$ be the total function that, for all $e \in L'$,
$E(e)$ is the S-expression $(\texttt{eval} \; e)$. For all $e \in
L'$, \[V(E(e)) = V((\texttt{eval} \; e)) = V(V(e)) =
V(\mname{id}^{-1}_L(V(e))).\] (Notice that $V(V(e))$ is always defined
since $e \in L'$.) Thus $E$ satisfies the Evaluation Axiom if $L_{\rm
obj} = L$, $D_{\rm syn} = L$, $V_{\rm syn} = \mname{id}_L$, and
$L_{\rm syn} = L'$.
Therefore, \[F = (L,\mname{id}_L,L',Q,E)\] is a replete syntax
framework for $(L,I)$.
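The shape of this replete framework can be seen in a miniature S-expression evaluator written in Python (a sketch, not tied to any particular Lisp implementation; only the \texttt{quote}, \texttt{eval}, and \texttt{+} forms are modeled). Quotations and evaluations are themselves expressions of the language, as required for built-in quotation and evaluation.

```python
# Miniature S-expression evaluator (illustrative sketch).  Atoms evaluate to
# themselves; ("quote", e) and ("eval", e) are the built-in Q and E.

def V(e):
    if isinstance(e, (int, str)):        # atoms are self-evaluating
        return e
    head = e[0]
    if head == "quote":                  # V((quote e)) = e = id_L(e)
        return e[1]
    if head == "eval":                   # V((eval e)) = V(V(e))
        return V(V(e[1]))
    if head == "+":
        return V(e[1]) + V(e[2])
    raise ValueError("unknown form")

def Q(e):
    return ("quote", e)                  # built-in quotation

def E(e):
    return ("eval", e)                   # built-in evaluation

e = ("+", 1, 2)
assert V(Q(e)) == e                      # Quotation Axiom with V_syn = id_L
assert V(E(Q(e))) == V(e) == 3           # Evaluation Axiom: V(E(t)) = V(V(t))
```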
Suppose $L$ were the full set of S-expressions, including the
S-expressions that modify the Lisp valuation context when they are
evaluated by the interpreter. Then, in order to interpret Lisp as a
syntax framework, we would need to extend the notion of a \emph{syntax
framework} to the notion of \emph{contextual syntax framework} as
mentioned in Remark~\ref{rem:contextual}.
\subsection{Example: Liar Paradox} \label{subsec:liar}
The virtue of a syntax framework with built-in quotation and
evaluation is that it provides the means to express statements about
the interplay of the syntax and semantics of the expressions in
$L_{\rm obj}$ in $L$. On the other hand, the vice of such a syntax
framework is that, if $L$ is sufficiently expressive, the liar paradox
can be expressed in $L$ using quotation and evaluation.
\begin{sloppypar} Let $I= (L, \mathbb{N} \cup \set{\mbox{{\sc t}},\mbox{{\sc f}}}, V)$ be the
interpreted language and $F' = (\mathbb{N},G,L'_{\rm t},Q,E')$ be the
syntax framework for $(L,I)$ given in subsection~\ref{subsec:goedel}. Assume
that $V$ is defined so that the axioms of first-order Peano arithmetic
are satisfied (see~\cite{Mendelson09}). Assume also that $F'$ has
been modified so that it has both built-in quotation and built-in
evaluation. \end{sloppypar}
\begin{sloppypar} We claim $E'$ cannot be total. Assume otherwise. By the
diagonalization lemma~\cite{Carnap34}, there is an expression $A \in
L$, such that $V(A) = V(\mname{quote}(\Neg(\mname{eval}(A))))$. Then
\end{sloppypar} \setcounter{equation}{0}
\begin{eqnarray}
V(\mname{eval}(A))
& = & V(G^{-1}(V(A))) \\
  & = & V(G^{-1}(V(\mname{quote}(\Neg(\mname{eval}(A)))))) \\
& = & V(G^{-1}(G(\Neg(\mname{eval}(A))))) \\
& = & V(\Neg(\mname{eval}(A)))
\end{eqnarray}
(1) is by built-in evaluation, the totality of $E'$, and the
Evaluation Axiom; (2) is by the definition of $A$; (3) is by built-in
quotation and the Quotation Axiom; and (4) is by the fact that $G$ is total
on $L$. Hence $V(\mname{eval}(A)) = V(\Neg(\mname{eval}(A)))$, which
contradicts the fact that $V$ never assigns a formula and its negation
the same truth value. Therefore, $E'$ cannot be total and, in
particular, cannot be total on quotations.
The formula $\mname{eval}(A)$ expresses the \emph{liar paradox} and
the argument above is a proof of Alfred Tarski's 1933 theorem on the
undefinability of truth~\cite{Tarski33,Tarski35,Tarski35a}, which says
that built-in evaluation cannot serve as a truth predicate over all
formulas. This example demonstrates why evaluation is allowed to be
partial in a syntax framework: if evaluation were required to be
total, the notion of a syntax framework would not cover reasoning
systems with built-in quotation and evaluation in which the liar
paradox can be expressed.
\subsection{Example: G\"odel Numbering with Built-In Quotation}
A syntax framework without built-in quotation and evaluation can
sometimes be modified to have built-in quotation or evaluation.
\begin{sloppypar} Let $I = (L, \mathbb{N} \cup \set{\mbox{{\sc t}},\mbox{{\sc f}}}, V)$ be the
interpreted language and $F' = (\mathbb{N},G,L'_{\rm t},Q,E')$ be the
syntax framework for $(L,I)$ given in subsection~\ref{subsec:goedel}.
Extend $L$ to the language $L^{\ast}$ and $L'_{\rm t}$ to
$L^{\ast}_{\rm t}$ by adding a new operator $\mname{quote}$ so that
$\mname{quote}(e) \in L^{\ast}_{\rm t}$ for all $e \in L^{\ast}$.
Extend $G$ to $G^{\ast}: L^{\ast} \rightarrow \mathbb{N}$ so that
$G^{\ast}(e)$ is the G\"odel number of $e$ for all $e \in L^{\ast}$.
Extend $V$ to $V^{\ast} : L^{\ast} \rightarrow \mathbb{N} \cup
\set{\mbox{{\sc t}},\mbox{{\sc f}}}$ so that $V^{\ast}(\mname{quote}(e)) = G^{\ast}(e)$
for all $e \in L^{\ast}$. And, finally, define $Q^{\ast}(e)$ to be
$\mname{quote}(e)$ for all $e \in L^{\ast}$. (We do not need to
change the definition of $E'$.) Then $I^{\ast} = (L^{\ast},
\mathbb{N} \cup \set{\mbox{{\sc t}},\mbox{{\sc f}}}, V^{\ast})$ is an interpreted
language and \[F^{\ast} = (\mathbb{N},G^{\ast},L^{\ast}_{\rm
t},Q^{\ast},E')\] is a syntax framework for $(L^{\ast},I^{\ast})$
that has built-in quotation. \end{sloppypar}
See \cite{Farmer13} for further discussion on the challenges involved
in modifying a traditional logic to embody the structure of a replete
syntax framework.
\section{Quasiquotation} \label{sec:quasiquotation}
Quasiquotation is a parameterized form of quotation in which the
parameters serve as holes in a quotation that are filled with the
values of expressions. It is a very powerful syntactic device for
specifying expressions and defining macros. Quasiquotation was
introduced by Willard Quine in 1940 in the first version of his book
\emph{Mathematical Logic}~\cite{Quine03}. It has been extensively
employed in the Lisp family of programming
languages~\cite{Bawden99}.\footnote{In Lisp, the standard symbol for
quasiquotation is the backquote (\texttt{`}) symbol, and thus in
Lisp, quasiquotation is usually called \emph{backquote}.}
\begin{sloppypar} We will show in this section how quasiquotation can be defined in
a syntax framework. Let $I=(L,D,V)$ be an interpreted language, $L_{\rm
obj}$ be a sublanguage of $L$, and $F=(D_{\rm syn}, V_{\rm syn},
L_{\rm syn}, Q, E)$ be a syntax framework for $(L_{\rm obj},I)$. \end{sloppypar}
\subsection{Marked Expressions} \label{subsec:marked-expr}
Suppose $e \in L$. A \emph{subexpression} of $e$ is an occurrence in
$e$ of some $e' \in L$. We assume that there is a set of
\emph{positions} in the syntactic structure of $e$ such that each
subexpression of $e$ is indicated by a unique position in $e$. Two
subexpressions $e_1$ and $e_2$ of $e$ are \emph{disjoint} if $e_1$ and
$e_2$ do not share any part of the syntactic structure of $e$.
Let $e \in L_{\rm obj}$. A \emph{marked expression} derived from $e$
is an expression of the form $e\mlist{(p_1,e_1),\ldots,(p_n,e_n)}$
where $n \ge 0$, $p_1,\ldots,p_n$ are positions of pairwise disjoint
subexpressions of $e$, and $e_1,\ldots,e_n$ are expressions in $L$.
Define $L^{\rm m}_{\rm obj}$ to be the set of marked expressions
derived from members of $L_{\rm obj}$.
\begin{sloppypar} Let $S: L^{\rm m}_{\rm obj} \rightarrow L_{\rm obj}$ be the function
that, given a marked expression $m =
e\mlist{(p_1,e_1),\ldots,(p_n,e_n)} \in L^{\rm m}_{\rm obj}$,
simultaneously replaces each subexpression in $e$ at position $p_i$
with $E^{\ast}(e_i)$ (the application of the direct evaluation
function for $F$ to $e_i$) for all $i$ with $1 \le i \le n$. $S(e)$
will be undefined if either $E^{\ast}(e_i)$ is undefined or
$E^{\ast}(e_i)$ does not have the same type as the subexpression at
position $p_i$ for some $i$ with $1 \le i \le n$. \end{sloppypar}
\subsection{Quasiquotation}
Define $\overline{Q} : L^{\rm m}_{\rm obj} \rightarrow L_{\rm syn}$ to be
the (possibly partial) function such that, if $m =
e\mlist{(p_1,e_1),\ldots,(p_n,e_n)} \in L^{\rm m}_{\rm obj}$, then
$\overline{Q}(m) = Q(S(m))$. $\overline{Q}(m)$ is defined iff $S(m)$
is defined. For $m \in L^{\rm m}_{\rm obj}$, $\overline{Q}(m)$ is
called the \emph{quasiquotation} of $m$.\footnote{The
position-expression pairs $(p_i,e_i)$ in a quasiquotation
$\overline{Q}(e\mlist{(p_1,e_1),\ldots,(p_n,e_n)})$ are sometimes
called \emph{antiquotations}.}
$F$ has \emph{built-in quasiquotation} if there is an operator (which
we will denote as \mname{quasiquote}) such that, for all $m =
e\mlist{(p_1,e_1),\ldots,(p_n,e_n)} \in L^{\rm m}_{\rm obj}$, $\overline{Q}(m)$
is the syntactic result of applying the operator to
$e,p_1,\ldots,p_n,e_1,\ldots,e_n$ (which we will denote as
$\mname{quasiquote}(m)$).
\subsection{Backquote in Lisp} \label{subsec:backquote}
Let us continue the example in subsection~\ref{subsec:lisp} involving
Lisp with a simplified semantics. In Lisp, a \emph{backquote} of $L$
is an expression of the form $\texttt{`}e$ where $e$ is an
S-expression in $L$ in which some of the subexpressions of $e$ are
marked by a comma (\texttt{,}). For example,
\[\texttt{`(+ 2 ,(+ 3 1))}\] is a backquote in which \texttt{(+ 3 1)}
is a subexpression marked by a comma. We will restrict our attention
to unnested backquotes. The Lisp interpreter normally returns an
S-expression when it evaluates a backquote $\texttt{`}e \in L$. In
this case the S-expression returned is obtained from $e$ by replacing
each subexpression $e'$ in $e$ marked by a comma with the S-expression
$V(e')$. For example, when evaluating \texttt{`(+ 2 ,(+ 3 1))}, the
interpreter returns \mbox{\texttt{(+ 2 4)}}. Let $L$ be extended to
$L^\ast$ to include the backquotes of $L$ and $V^\ast : L^\ast \rightarrow
L^\ast \cup \set{\bot}$ be the total function such that, for all
S-expressions and backquotes $e \in L^\ast$, $V^\ast(e)$ is the
S-expression the interpreter returns when $e$ is evaluated if the
interpreter returns an S-expression and $V^\ast(e) = \bot$ otherwise.
\begin{sloppypar} A backquote $\texttt{`}e$ in $L^\ast$ corresponds to a marked
expression $m = e\mlist{(p_1,e_1),\ldots,(p_n,e_n)} \in L^{\rm m}_{\rm obj}$
where each $p_i$ is the position of a subexpression $\texttt{,}e_i$ in
$e$ marked by a comma for all $i$ with $1 \le i \le n$. Let
$\texttt{`}e \in L^\ast$ be a backquote and $m =
e\mlist{(p_1,e_1),\ldots,(p_n,e_n)} \in L^{\rm m}_{\rm obj}$ be a marked
expression that corresponds to it. We will show that the semantic
value of the backquote $\texttt{`}e$, when it is not $\bot$, is the
same as the semantic value of the quasiquotation $\overline{Q}(m)$.
Assume $V^\ast(\texttt{`}e) \not= \bot$. Then
\setcounter{equation}{0}
\begin{eqnarray}
V^\ast(\texttt{`}e)
& = & S(m) \\
& = & V(Q(S(m))) \\
& = & V(\overline{Q}(m))
\end{eqnarray}
(1) is by the semantics of backquote and the definition of $S$ since
\[V(e_i) = \mname{id}_{L}^{-1}(V(e_i)) = V_{\rm syn}^{-1}(V(e_i)) =
E^{\ast}(e_i)\] for each $i$ with $1 \le i \le n$. (2) is by the
Quotation Axiom and the fact that $V_{\rm syn}$ is the identity
function. And (3) is by the definition of $\overline{Q}(m)$. \end{sloppypar}
\section{Examples from the Literature} \label{sec:literature}
\subsection{Example: Lambda Calculus} \label{subsec:lambda}
\newcommand{\betared}{\twoheadrightarrow_{\beta}}
\newcommand{\nflambda}{{\rm NF}_{\Lambda}}
\newcommand{\nflambdabot}{\nflambda \cup \{\bot\}}
In 1994 Torben Mogensen~\cite{Mogensen94} introduced a method of self
representing and interpreting terms of lambda calculus. We will
analyze this method and demonstrate how the self-interpretation of
lambda calculus is almost an instance of a replete syntax framework.
Let $\Lambda = V ~|~ \Lambda ~ \Lambda ~|~ \lambda V \mathrel. \Lambda$ be
the set of $\lambda$-terms where $V$ is a countable set of
variables. $\Lambda$ is the language of lambda calculus consisting of
all the $\lambda$-terms. A $\lambda$-term is a \emph{normal form} if
$\beta$-reduction cannot be applied to it. Given a $\lambda$-term
$M$, let the \emph{normal form of $M$}, ${\rm NF}_M$, be the normal
form that results from repeatedly applying $\beta$-reduction to $M$
until a normal form is obtained. The normal form of $M$ is undefined
if a normal form is never obtained after repeatedly applying
$\beta$-reduction to $M$. We will introduce two different syntax
representations of this language. The first syntax representation of
$\Lambda$ uses an inductive type similar to
subsection~\ref{subsec:ind-type} such that $V_A$ is the syntactic
valuation function where: \setcounter{equation}{0}
\begin{eqnarray}
V_A(x) & = & {\tt Var}(x)\\
V_A(M~N) & = & {\tt App}(V_A(M), V_A(N))\\
V_A(\lambda x \mathrel. M) & = & {\tt Abs}(\lambda x \mathrel. V_A(M))
\end{eqnarray}
\noindent
Let $D_A$ be the domain of values of this inductive type.
Then $R_A = (D_A,V_A)$ is a syntax representation of $\Lambda$.
Mogensen~\cite{Mogensen94} suggests a different syntax representation
of lambda calculus. Let $\bsynbrack{\cdot} : \Lambda \rightarrow
{\rm NF}_{\Lambda}$ be a {\em representation schema} for lambda calculus such
that:
\setcounter{equation}{0}
\begin{eqnarray}
\bsynbrack{x} & = & \lambda a b c \mathrel. a ~ x\\
\bsynbrack{M~N} & = & \lambda a b c \mathrel. b ~ \bsynbrack{M} ~ \bsynbrack{N}\\
\bsynbrack{\lambda x \mathrel. M} & = & \lambda a b c \mathrel. c ~ (\lambda x \mathrel. \bsynbrack{M})
\end{eqnarray}
\noindent
where $a,b,c$ are variables not occurring free in the $\lambda$-terms
$M$ and $N$. This representation of $\lambda$-terms is equivalent to
the method described earlier, but it utilizes the constructs of lambda
calculus itself instead of an external data type.
Then $R_{\Lambda} = ({\rm NF}_{\Lambda}, \bsynbrack{\cdot})$ is a syntax
representation of $\Lambda$ and $({\rm NF}_{\Lambda}, I_{\Lambda})$ is a
syntax language for $R_{\Lambda}$. Notice that, since $\bsynbrack{M}$
is in normal form for any $M \in \Lambda$, trivially
$\bsynbrack{M} \twoheadrightarrow_{\beta} \bsynbrack{M}$.
Let a {\em self-interpreter} $E$ be a $\lambda$-term such that for any
$M \in {\Lambda}$, $E \bsynbrack{M}$ is $\beta$-equivalent to $M$,
i.e., $E \bsynbrack{M} =_{\beta} M$ (which means ${\rm NF}_{E
\bsynbrack{M}}$ and ${\rm NF}_M$ are $\alpha$-convertible when these
normal forms exist). Mogensen proves that the $\lambda$-term
\[E = Y ~ \lambda e \mathrel. \lambda m \mathrel. m ~ (\lambda x \mathrel. x) ~ (\lambda m n \mathrel. (e~m)~(e~n)) ~ (\lambda m \mathrel. \lambda v \mathrel. e(m~v)),\]
\noindent where $Y$ is the Y-combinator, is a self-interpreter.
Define $E_{\Lambda} : {\rm NF}_{\Lambda} \rightarrow \Lambda$ to be the partial
function such that $E_{\Lambda}(M) = E~M$ if $M = \bsynbrack{N_M}$ for
some $\lambda$-term $N_M$ and is undefined otherwise.
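Mogensen's encoding and self-interpreter can be transcribed almost directly into Python as a sketch of our own: $\lambda$-terms become Python functions, the three abstracted variables $a,b,c$ are uncurried into a three-argument call for readability, and Python's native recursion stands in for the Y-combinator.

```python
# Mogensen representation schema, uncurried:
# [x] = lambda a,b,c: a(x); [M N] = lambda a,b,c: b([M],[N]);
# [lam x. M] = lambda a,b,c: c(lambda x: [M]).
def quote_var(x):
    return lambda a, b, c: a(x)

def quote_app(qm, qn):
    return lambda a, b, c: b(qm, qn)

def quote_abs(f):                  # f maps a bound variable to [M]
    return lambda a, b, c: c(f)

# Self-interpreter E = Y lam e. lam m. m (lam x. x)
#                      (lam m n. (e m)(e n)) (lam m. lam v. e(m v)),
# with recursion in place of the Y-combinator.
def E(m):
    return m(lambda x: x,
             lambda p, q: E(p)(E(q)),
             lambda f: lambda v: E(f(v)))
```

For instance, with {\tt q\_id = quote\_abs(lambda x: quote\_var(x))}, the quotation of $\lambda x \mathrel. x$, the call {\tt E(q\_id)} behaves as the identity function, illustrating $E \bsynbrack{M} =_{\beta} M$.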
\iffalse
The immediate implementation of $E$ for the syntax representation $R_A$ is as follows:
\setcounter{equation}{0}
\begin{eqnarray}
E_A[{\tt Var}(x)] & = & x \\
E_A[{\tt App}(V_A(M), V_A(N))] & = & E_A[M] ~ E_A[N]\\
E_A[{\tt Abs}(\lambda x \mathrel. V_A(M))] & = & \lambda v \mathrel. E_A[M ~ v]
\end{eqnarray}
\fi
\begin{thm}\begin{sloppypar}
Let $\Lambda$ be the language of lambda calculus and $I_{\Lambda} =
(\Lambda, \nflambda \cup \{\bot\}, \twoheadrightarrow_{\beta})$ be the interpreted language of
lambda calculus as defined earlier. Let $\bsynbrack{\cdot}$ be the
representation schema of $\Lambda$ and $E_{\Lambda}$ be the function
defined above. Then \[F_{\Lambda} = ({\rm NF}_{\Lambda}, \bsynbrack{\cdot},
{\rm NF}_{\Lambda}, \bsynbrack{\cdot}, E_{\Lambda})\] is a syntax framework for
$(\Lambda,I_{\Lambda})$. \end{sloppypar}
\end{thm}
\begin{proof}
$F_{\Lambda}$ is a syntax framework since it satisfies the four
conditions of Definition~\ref{df:syn-frame-lang}:
\begin{enumerate}
\item $R_{\Lambda} = ({\rm NF}_{\Lambda}, \bsynbrack{\cdot})$ is a syntax
representation of $\Lambda$.
\item $({\rm NF}_{\Lambda}, I_{\Lambda})$ is a syntax language for
  $R_{\Lambda}$.
\item $\bsynbrack{\cdot} : \Lambda \rightarrow {\rm NF}_{\Lambda}$ is an
injective, total function such that, for all $M \in \Lambda$,
$\bsynbrack{M} \twoheadrightarrow_{\beta} \bsynbrack{M}$ (Quotation Axiom).
\item $E_{\Lambda} : {\rm NF}_{\Lambda} \rightarrow \Lambda$ is a partial
function such that, for all $M \in {\rm NF}_{\Lambda}$ with $M =
\bsynbrack{N_M}$ for some $\lambda$-term $N_M$, $E_{\Lambda}(M) =
E~M = E\bsynbrack{N_M} =_{\beta} N_M$ (Evaluation Axiom) since $E$
is a self-interpreter.
\end{enumerate}
\end{proof}
\bigskip
$F_{\Lambda}$ is almost replete: $\Lambda$ is both the object and full
language of $F_{\Lambda}$ and $F_{\Lambda}$ has built-in evaluation,
but $F_{\Lambda}$ does not have built-in quotation.
\iffalse
$F_{\Lambda}$ does not have built-in quotation, but this can be
defined in $F_{\Lambda}$ by introducing a constant $C$ such that $C~M
= \bsynbrack{M}$ for all $M \in \Lambda$. Hence the Mogensen
self-interpretation of lambda calculus can be formulated as a replete
syntax framework.
\fi
\iffalse
\begin{sloppypar} Mogensen also introduces a {\em self-reducer} $R$ for
$\lambda$-terms such that $R ~ \bsynbrack{M} =_{\beta} \bsynbrack{{\rm
NF}_M}$ and provides a proof of correctness for the
self-reducer. Let $M \in \Lambda$ be a $\lambda$-term and
$\overline{M} \in {\Lambda}^m$ be the marked expression
$M\mlist{(p,\bsynbrack{M})}$, where $p$ is the top position in $M$, as
in section~\ref{subsec:marked-expr}. Then $\overline{Q}(\overline{M})
= \bsynbrack{E^{\ast}(\bsynbrack{M})} = \bsynbrack{{\rm
NF}_M}$. Therefore, $R ~ \bsynbrack{M} =_{\beta}
\overline{Q}(M\mlist{(p,\bsynbrack{M})})$ and the self-reducer for
lambda calculus is a built-in special form of the quasiquotation in
syntax frameworks. \end{sloppypar}
\fi
\subsection{Example: The Ring Tactic in Coq} \label{subsec:coq}
Coq~\cite{Coq8.4} is an interactive theorem prover based on the
calculus of inductive constructions. Let $R$ be a ring with the
associative, commutative binary operators $+$ and $*$ and the
constants $0$ and $1$ that are the identities of $+$ and $*$,
respectively. A {\em polynomial} in $R$ is an expression that consists
of the constants of $R$, the operators $+$ and $*$, and variables
$v_0, v_1, \dots$ of type $R$.
The {\em ring tactic} in Coq is a polynomial simplifier that converts
any polynomial to its equivalent {\em normal form}. The normal form of
a polynomial is defined as the ordered sum of unique monomials in
lexicographic order.
Earlier we mentioned that syntax-based operations such as
(symbolically) computing derivatives require a syntax framework to
manipulate and reason about syntax using quotation and evaluation.
Polynomial simplification is a term rewriter that uses the quotation
and evaluation mechanisms. The {\tt ring} tactic in Coq automatically
quotes and simplifies every polynomial expression.
Internally, when the {\tt ring} tactic is applied, the polynomials are
represented by an inductive type {\tt polynomial}. The Coq
reference manual~\cite{Coq8.4} defines this type as:
\begin{verbatim}
Inductive polynomial : Type :=
| Pvar : index -> polynomial
| Pconst : A -> polynomial
| Pplus : polynomial -> polynomial -> polynomial
| Pmult : polynomial -> polynomial -> polynomial
| Popp : polynomial -> polynomial.
\end{verbatim}
which represents polynomials in a manner similar to the inductive type
example in subsection~\ref{subsec:ind-type}.
Let $L$ be the language of Coq, $D$ be the semantic domain of values
in the calculus of inductive constructions, and $V$ be the semantic
interpreter of Coq; then $I = (L,D,V)$ is the interpreted language for
Coq. Let $L_R \subseteq L$ be the language of polynomials of type $R$
(i.e., expressions in $L$ that are built with operators and constants
of $R$ and variables $v_0, v_1, \dots$ as defined earlier), $L_{\rm
poly} \subseteq L$ be the language of expressions belonging to the
inductive type {\tt polynomial}, $D_{\rm poly} \subseteq D$ be the
image of $L_{\rm poly}$ under $V$, and $V_{\rm poly}$ be the internal
quotation mechanism of Coq that the {\tt ring} tactic uses to lift
polynomial expressions in $L_R$ to expressions in $L_{\rm poly}$.
Then $(D_{\rm poly},V_{\rm poly})$ is a syntax representation and
$(L_{\rm poly},I)$ is a syntax language for this syntax representation
which is suitable for describing the \texttt{ring} tactic in Coq.
Coq's ring normalization library ({\tt Ring\_normalize.v}) also defines
an interpretation function that transforms a polynomial expression of
type {\tt polynomial} back to a ring value of type $R$:
\begin{verbatim}
Fixpoint interp_p (p:polynomial) : A :=
match p with
| Pconst c => c
| Pvar i => varmap_find Azero i vm
| Pplus p1 p2 => Aplus (interp_p p1) (interp_p p2)
| Pmult p1 p2 => Amult (interp_p p1) (interp_p p2)
| Popp p1 => Aopp (interp_p p1)
end.
\end{verbatim}
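As a rough Python analogue (our own sketch, not Coq's actual implementation) of the {\tt polynomial} type, the normal-form computation, and {\tt interp\_p}: polynomials become tagged tuples with integer variable indices, and the normal form becomes a map from sorted monomials to their coefficients.

```python
# Polynomials as tagged tuples: ('const', c) | ('var', i) |
# ('plus', p, q) | ('mult', p, q) | ('opp', p)
def normalize(p):
    """Normal form: dict mapping a sorted tuple of variable indices
    (a monomial) to its nonzero coefficient."""
    tag = p[0]
    if tag == 'const':
        return {(): p[1]} if p[1] != 0 else {}
    if tag == 'var':
        return {(p[1],): 1}
    if tag == 'opp':
        return {m: -c for m, c in normalize(p[1]).items()}
    left, right = normalize(p[1]), normalize(p[2])
    out = {}
    if tag == 'plus':
        for m, c in list(left.items()) + list(right.items()):
            out[m] = out.get(m, 0) + c
    else:  # 'mult'
        for m1, c1 in left.items():
            for m2, c2 in right.items():
                m = tuple(sorted(m1 + m2))
                out[m] = out.get(m, 0) + c1 * c2
    return {m: c for m, c in out.items() if c != 0}

def interp_p(p, env):
    """Analogue of Coq's interp_p: map a representation back to a value."""
    tag = p[0]
    if tag == 'const':
        return p[1]
    if tag == 'var':
        return env[p[1]]
    if tag == 'opp':
        return -interp_p(p[1], env)
    l, r = interp_p(p[1], env), interp_p(p[2], env)
    return l + r if tag == 'plus' else l * r
```

For example, normalizing the representation of $(v_0+1)(v_0+1)$ yields the monomial map for $v_0^2 + 2v_0 + 1$, and interpreting the representation agrees with evaluating the normal form.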
To finish a definition of a syntax framework for the \texttt{ring}
tactic in Coq, we need to construct two functions $Q: L_R \to L_{\rm
poly}$ and $E : L_{\rm poly} \to L_R$ in the metalanguage of Coq.
Their definitions are:
\begin{enumerate}
\item For all $e \in L_R$, $Q(e)$ is the $e' \in L_{\rm poly}$ such
that $V(e') = V_{\rm poly}(e).$
\item \begin{sloppypar} For all $e' \in L_{\rm poly}$, $E(e')$ is the $e \in L_R$ such
that $V(e) = V(\mathtt{interp\_p})(V(e'))$. \end{sloppypar}
\end{enumerate}
Then $F = (D_{\rm poly},V_{\rm poly},L_{\rm poly},Q,E)$ is a syntax
framework for $(L_R,I)$.
Notice that the two functions $Q$ and $E$ are not normally present in
Coq; they were constructed, using the machinery in Coq described
above, specifically to satisfy the requirements of a syntax framework.
Although the concepts of the syntax language and the syntax
representation arose naturally from the internal mechanism for the
\texttt{ring} tactic in Coq, a syntax framework for the \texttt{ring}
tactic does \emph{not} reside in Coq as explicitly as our previous
examples.
\subsection{Example: Chiron} \label{subsec:chiron}
Chiron~\cite{Farmer07a,Farmer12} is a derivative of
von Neumann-Bernays-G\"odel ({\mbox{\sc nbg}}) set theory~\cite{Goedel40,
Mendelson09} that is intended to be a practical, general-purpose
logic for mechanizing mathematics. Unlike traditional set theories
such as Zermelo-Fraenkel ({\mbox{\sc zf}}) and {\mbox{\sc nbg}}, Chiron is equipped with a
type system, and unlike traditional logics such as first-order logic
and simple type theory, Chiron admits undefined terms. The most
noteworthy part of Chiron is its facility for reasoning about the
syntax of expressions that includes built-in quotation and evaluation.
We will assume that the reader is familiar with the definitions
concerning Chiron in~\cite{Farmer12}. Let $L$ be a language of
Chiron, $\mbox{$\cal E$}_L$ be the set of expressions in $L$, $M$ be a standard
model for $L$, $D_M$ be the set of values in $M$, $V$ be the valuation
function in $M$, and $\phi$ be an assignment into $M$. Then
$I=(\mbox{$\cal E$}_L,D_M,V_\phi)$ is an interpreted language.
$D_M$ includes certain sets called \emph{constructions} that are
isomorphic to the syntactic structures of the expressions in $\mbox{$\cal E$}_L$.
$H$ is a function in $M$ that maps each expression in $\mbox{$\cal E$}_L$ to a
construction representing it. Let $D_{\rm syn}$ be the range of $H$
and $\mbox{$\cal T$}_{\rm syn}$ be the set of terms $a$ such that $V_\phi(a) \in
D_{\rm syn}$. For $e \in \mbox{$\cal E$}_L$, define $Q(e) = (\mname{quote},e)$.
For $a \in \mbox{$\cal T$}_{\rm syn}$, define $E(a)$ as follows:
\begin{enumerate}
\item If $V_\phi(a)$ is a construction that
represents a type and $H^{-1}(V_\phi(a))$ is eval-free, then
$E(a) = (\mname{eval},a,\mname{type}).$
\item If $V_\phi(a)$ is a construction that represents a term,
$H^{-1}(V_\phi(a))$ is eval-free, and
$V_{\phi}(H^{-1}(V_{\phi}(a))) \not= \bot$, then $E(a) =
(\mname{eval},a,\mname{C}).$
\item If $V_\phi(a)$ is a construction that
represents a formula and $H^{-1}(V_\phi(a))$ is eval-free, then
$E(a) = (\mname{eval},a,\mname{formula}).$
\item Otherwise, $E(a)$ is undefined.
\end{enumerate}
\begin{thm}
\begin{sloppypar} $F = (D_{\rm syn}, H, \mbox{$\cal T$}_{\rm syn}, Q, E)$ is a syntax framework
for $(\mbox{$\cal E$}_L,I)$.\end{sloppypar}
\end{thm}
\begin{proof}
$F$ is a syntax framework since it satisfies the four conditions of
Definition~\ref{df:syn-frame-lang}:
\begin{enumerate}
\item $H$ maps each $e \in \mbox{$\cal E$}_L$ to a construction that represents
the syntactic structure of $e$. Thus $D_{\rm syn}$ is a set of
values that represent syntactic structures and $H: \mbox{$\cal E$}_L \rightarrow
D_{\rm syn}$ is injective and total. So $R = (D_{\rm syn}, H)$ is a
syntax representation of $\mbox{$\cal E$}_L$.
\item $I$ is an interpreted language. $\mbox{$\cal E$}_L \subseteq \mbox{$\cal E$}_L$.
$\mbox{$\cal T$}_{\rm syn} \subseteq \mbox{$\cal E$}_L$. $D_{\rm syn} \subseteq D_M$
(since $D_{\rm syn}$ is the range of $H$, $H \mathrel: \mbox{$\cal E$}_L
\rightarrow D_{\rm v}$, and $D_{\rm v} \subseteq D_M$). And $V_\phi$ restricted
to $\mbox{$\cal T$}_{\rm syn}$ is a total function $V' : \mbox{$\cal T$}_{\rm syn} \rightarrow
D_{\rm syn}$. So $(\mbox{$\cal T$}_{\rm syn},I)$ is a syntax language for
$R$.
\item Let $e \in \mbox{$\cal E$}_L$. Then $V_\phi(Q(e)) =
V_\phi((\mname{quote},e)) = H(e)$ by the definition of $Q$ and the
definition of $V_\phi$ on quotations. So $Q : \mbox{$\cal E$}_L \rightarrow
\mbox{$\cal T$}_{\rm syn}$ is an injective, total function such that, for all
$e \in \mbox{$\cal E$}_L$, $V_\phi(Q(e)) = H(e)$.
\item Let $a \in \mbox{$\cal T$}_{\rm syn}$ such that $E(a)$ is defined. Hence
$V_\phi(a)$ is a construction that represents a type, term, or
formula. If $V_\phi(a)$ represents a type, term, or formula, let
$k$ be \mname{type}, \mname{C}, or \mname{formula},
respectively. Then $V_\phi(E(a)) = V_\phi((\mname{eval},a,k)) =
V_{\phi}(H^{-1}(V_{\phi}(a)))$ by the definition of $E$ and the
definition of $V_\phi$ on evaluations. So $E : \mbox{$\cal T$}_{\rm syn}
\rightarrow \mbox{$\cal E$}_L$ is a partial function such that, for all $a \in
\mbox{$\cal T$}_{\rm syn}$, $V_\phi(E(a)) = V_\phi(H^{-1}(V_\phi(a)))$
whenever $E(a)$ is defined.
\end{enumerate}
Finally, $F$ is replete since $\mbox{$\cal E$}_L$ is both the object and full
language of $F$ and $F$ has built-in quotation and evaluation.
\end{proof}
\bigskip
Quasiquotation is a notational definition in Chiron; it is not a
built-in operator in Chiron as quotation and evaluation
are~\cite{Farmer12}. The quasiquotation defined in Chiron is
semantically equivalent to the notion of quasiquotation defined in the
previous section.
\newpage
\section{Conclusion} \label{sec:conclusion}
We have introduced a mathematical structure called a \emph{syntax
framework} consisting of six major components:
\begin{enumerate}
\item A formal language $L$ with a semantics.
\item A sublanguage $L_{\rm obj}$ of $L$ that is the object language
of the syntax framework.
\item A domain $D_{\rm syn}$ of values that represent the syntactic
structures of expressions in $L_{\rm obj}$.
\item A sublanguage $L_{\rm syn}$ of $L$ whose expressions denote
values in $D_{\rm syn}$.
\item A quotation function $Q : L_{\rm obj} \rightarrow L_{\rm syn}$.
\item An evaluation function $E : L_{\rm syn} \rightarrow L_{\rm obj}$.
\end{enumerate}
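A minimal executable instance of these components, echoing the expressions-as-strings example and entirely our own sketch: take $L_{\rm obj}$ to be Python arithmetic expressions as strings with {\tt eval} as the valuation, let the syntactic values be the strings themselves, and check the Quotation and Evaluation Axioms on a sample.

```python
# Object language: arithmetic expressions as Python strings.
V = eval                    # semantic valuation

def H(e):                   # representation function: the string itself
    return e

def Q(e):                   # quotation: a string literal denoting e
    return repr(e)

def E(a):                   # evaluation: the expression a quotation denotes
    return V(a)

e = "1 + 2*3"
assert V(Q(e)) == H(e)      # Quotation Axiom: V(Q(e)) = H(e)
assert V(E(Q(e))) == V(e)   # Evaluation Axiom (here, Law of Disquotation)
```

This toy framework is replete in the terminology above, since quotation and evaluation are expressed inside the language itself.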
A syntax framework provides the means to reason about the interplay of
the syntax and semantics of the expressions in $L_{\rm obj}$ using
quotation and evaluation. In particular, it provides three basic
syntax activities:
\begin{enumerate}
\item Expressing statements in $L$ about the syntax of $L_{\rm
obj}$.
\item Constructing expressions in $L_{\rm syn}$ that denote values
in $D_{\rm syn}$.
\item Employing expressions in $L_{\rm syn}$ as expressions in
$L_{\rm obj}$.
\end{enumerate}
These activities can be used to specify, and even implement,
transformers that map expressions in $L_{\rm obj}$ to expressions in
$L_{\rm obj}$. They are needed, for example, to specify the rules of
differentiation and to prove that these rules correctly produce
representations of expressions that denote
derivatives~\cite{Farmer13}. A syntax framework also provides a basis
for defining a notion of quasiquotation which is very useful for the
second basic activity.
When a syntax framework has built-in quotation and evaluation, it
provides the means to reason \emph{directly} in $L$ about the syntax
and semantics of the expressions in $L_{\rm obj}$. However, in this
case, the evaluation function $E$ cannot be the direct evaluation
function (Lemma~\ref{lem:direct-eval}) and, if $L$ is sufficiently
expressive, $E$ cannot be total on quotations
(subsection~\ref{subsec:liar}) and thus the Law of Disquotation
(Lemma~\ref{lem:disquotation}) cannot hold universally.
We showed that the notion of a syntax framework embodies a common
structure found in a variety of systems for reasoning about the
interplay of syntax and semantics. We did this by showing how several
examples of such systems can be regarded as syntax frameworks. Three
of these examples were the standard syntax-reasoning systems based on
expressions as strings, G\"odel numbers, and members of an inductive
type. The other, more sophisticated, examples were taken from
the literature.
We have also mentioned that a syntax framework is not adequate for
modeling syntax reasoning in programming languages with mutable
variables. This requires a generalization of a \emph{syntax
framework} to a \emph{contextual framework} that will be presented
in a future paper.
\section*{Acknowledgments}
\begin{sloppypar}
The authors are grateful to Marc Bender, Jacques Carette, Michael
Kohlhase, Russell O'Connor, and Florian Rabe for their comments about
the paper.
\end{sloppypar}
\iffalse
The authors would also like to thank the referees for their detailed
examination of the paper and valuable suggestions.
\fi
\section{Introduction}
Active suspensions of micro-swimmers such as spermatozoa (\cite{Creppy2015}), bacteria (\cite{Wensink2012}) or microalgae are common both in the natural environment, such as oceans (\cite{Pedley1992,Stocker2012,Durham2013}), lakes and ponds, and within living organisms, such as the human body. Such suspensions can exhibit the formation of coherent structures or complex flow patterns (\cite{Saintillan2011,Marchetti2013,Wensink2012}), which may lead to enhanced mixing of chemicals in the surrounding fluid, the alteration of suspension rheology (\cite{Rafai2010,Lopez2015}), or increased nutrient uptake. Mixing and transport of microscopic, inert particles by motile microorganisms have been a topic of recent interest, as such suspensions are a prime example of out-of-equilibrium systems. The particles experience Brownian motion due to their small size and are further affected by hydrodynamic interactions and collisions with the micro-swimmers. Enhanced particle transport in active suspensions has been observed in the presence of collective motion (\cite{Wu2000}), but also in the absence of it (\cite{Kurtuldu2011}). Understanding the mechanism of enhanced tracer transport can provide insight into biological processes such as predator-prey interactions and the plankton food chain (\cite{Kiorboe2014}), as well as chemical signalling and quorum sensing (\cite{Kim2016}). In addition, the underlying mechanism may also provide the foundation for the design of novel biomimetic micro-fluidic devices that use similar strategies for enhanced mixing and stirring at small scales.
Generally speaking, the motion of a small particle in a suspension of micro-swimmers results from the interplay between Brownian diffusion due to thermal fluctuations and transport by the flow field induced by all the swimmers. The diffusion coefficient of the tracer, $D_0$, is given by the Stokes-Einstein relation
\begin{equation}
D_0 = \dfrac{k_BT}{\zeta},
\label{eq:Stokes_einstein}
\end{equation}
where $\zeta$ is the friction coefficient of the tracer particle and $k_BT$ the thermal energy. The tracer particles are advected by the velocity field $\mathbf{u}(\mathbf{x})$, which corresponds to the disturbances generated by the collection of micro-swimmers in the vicinity of the tracer. The velocity $\mathbf{u}(\mathbf{x})$ depends strongly on the magnitude and the spatial rate of decay of the disturbances, as well as on the local swimmer volume fraction, $\phi_v$.
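As a numerical illustration of Eq.~(\ref{eq:Stokes_einstein}), taking the standard Stokes friction $\zeta = 6\pi\eta a$ for a sphere, with parameter values of our own choosing for a micron-sized bead in water:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # temperature, K
eta = 1.0e-3            # viscosity of water, Pa s
a = 0.5e-6              # tracer radius, m (1 micron diameter bead)

zeta = 6 * math.pi * eta * a          # Stokes friction coefficient
D0 = k_B * T / zeta                   # Stokes-Einstein diffusivity, m^2/s
print(f"D0 = {D0 * 1e12:.3f} um^2/s")  # ~0.44 um^2/s
```

This sets the scale against which the activity-induced enhancement of the tracer diffusivity is measured.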
As the swimmers move, the ever-changing advection of the tracer particles results, over time, in their diffusive behaviour. In the dilute regime, where $\phi_v \ll 1$, many experiments (\cite{Leptos2009,Mino2011,Jepson2013,Kasyap2014}) have shown that the scaling between the tracer diffusivity due to swimmer activity, $D^{act}$, and the swimmer volume fraction is linear
\begin{equation}
D^{act} = D_{eff}-D_0 = \alpha \phi_v
\label{eq:linear_diff}
\end{equation}
where $D_{eff}$ is the total effective tracer diffusivity and $\alpha$ has the units of a diffusion coefficient. In terms of the swimmer number density, $n$, and swimming speed, $U$, this can also be expressed as $D^{act} = nU\Lambda$, where $nU$ is the ``active flux'' of micro-swimmers and $\Lambda$, by dimensional analysis, scales like the fourth power of the swimmer size. In the dilute limit, where swimmers move along straight paths and the interactions between tracers and swimmers are well characterised, the effective diffusion coefficient can be computed (\cite{Lin2011, Mino2011,Pushkin2013,Pushkin2013b, Kasyap2014,Thiffeault2014, Burkholder2017}) by averaging the displacements of a single particle due to repeated interactions with swimmers that move independently of one another. In many of these studies, the swimmers are modelled as spherical squirmers. The spherical squirmer model, first introduced by \cite{Lighthill1952} and extended by \cite{Blake1971}, describes the motion and the flow generated by a spherical swimmer propelled by small distortions of its surface.
Theoretical studies have shown that $D^{act}$ can be separated into two contributions: random swimmer reorientations and the entrainment by the swimmer along the straight trajectories. In the experiments of \cite{Leptos2009}, the tracer diffusion coefficient in 3D dilute suspensions of \emph{C.~reinhardtii} was obtained from data taken over a period of two seconds. Since the reorientation time of \emph{C.~reinhardtii} due to phase slips between flagella is approximately ten seconds (\cite{Goldstein2009}), swimmer reorientation does not significantly affect the tracer diffusion. Hence, by considering entrainment only, \cite{Pushkin2013} obtained an estimate ($(D_{eff}-D_0)/\phi_v \simeq 83\mu m^2s^{-1}$) surprisingly close to the experiments of \cite{Leptos2009} ($(D_{eff}-D_0)/\phi_v \simeq 81.3\mu m^2s^{-1}$). Using a similar approach, \cite{Thiffeault2014} showed that the distribution of tracer displacements from \cite{Leptos2009} can also be reproduced using the squirmer model. However, these two works only consider point tracers, while particles in experiments are micron-sized beads (\cite{Wu2000,Leptos2009,Mino2011,Kasyap2014}), macromolecules, or dead cells (\cite{Jepson2013}) with a finite size. Recent experiments (\cite{Polin2016}) in the dilute regime have shown that the front-mounted flagella of \emph{C.~reinhardtii} can trap such micron-sized particles and generate very large displacements through direct entrainment. Such dramatic events are due to near-field interactions and are strongly related to swimmer geometry, as well as to the actuation mechanism.
Beyond the dilute limit, there can be a breakdown in the linear dependence of the effective diffusion coefficient on swimmer volume fraction. The departure from linearity, however, appears to depend strongly on the details of the system used for study. For example, \cite{Wu2000} showed that the linear scaling holds up to $\phi_v = 10\%$ in two-dimensional films of \emph{E. coli}, while \cite{Kasyap2014} observed in a three-dimensional bath that the linear trend breaks down for $\phi_v\geq 2.5\%$. As stated by \cite{Kasyap2014}, the reason for the deviation from linearity is still unclear, and ``one possibility is the occurrence of multi-bacterial effects on the tracer diffusivity.'' While suspensions of bacteria are known to display large-scale collective dynamics (\cite{Wu2000,dombrowski_04}) characterised by swirls and jets, the nonlinear dependence is also observed in swimmer suspensions that do not exhibit larger-scale motion. For example, \cite{Kurtuldu2011} measured tracer displacements in a suspension of \emph{C.~reinhardtii} confined to a thin film of liquid and obtained a power-law scaling for the effective diffusivity, $D_{eff}/D_0 \sim \phi_v^{3/2}$. A numerical investigation of the semi-dilute regime using the squirmer model and Stokesian Dynamics (\cite{Ishikawa2010}) to compute the motion of non-Brownian fluid particles yielded a linear scaling of the tracer diffusivity with volume fraction for values up to $\phi_v = 15\%$.
In this paper, we present results from simulations exploring tracer transport in dilute and semi-dilute suspensions of squirmers. In our simulations, we employ recently developed numerical tools based on the force-coupling method (FCM) (\cite{Maxey2001,Lomholt2003,Keaveny2014,Delmotte_2015a,Delmotte_2015b}) that allow for multibody hydrodynamic interactions between active and passive particles, polydispersity in particle size, particle Brownian motion that satisfies the fluctuation-dissipation theorem, and steric interactions. A description of the squirmer model and fluctuating FCM is provided in Section \ref{sec:model}. In the dilute regime, we obtain quantitative agreement between our simulation results and the experimental results of \cite{Leptos2009} for the effective tracer diffusion coefficient, as well as for the tracer displacement distribution. By selectively removing the flow disturbances due to the squirming modes, we quantify the contributions of hydrodynamic and steric interactions to tracer displacement and provide insight into the physical mechanisms giving rise to particular features of the tracer displacement distribution. These results are shown in Section \ref{sec:Dilute}. We extend these results to semi-dilute concentrations in Section \ref{sec:tracer_concentrated}. Here, we examine the non-linear dependence of the effective tracer diffusion coefficient on the swimmer volume fraction, as well as characterise the distribution of tracers in the suspension. Finally, we discuss our results and future directions in Section \ref{sec:Discussion}.
\section{Mathematical model for the simulations}
\label{sec:model}
In our simulations, we consider $N_p$ squirmers dispersed in a fluid containing $N_t$ tracers, giving a total number of particles $N = N_p + N_t$. All particles are spherical; the squirmers have radius $a_{sw}$, while the smaller tracers have radius $a$. The position of particle $n$ is denoted by $\mathbf{Y}_n$, while its orientation is $\mathbf{p}_n$. The motion of both the spherical tracers and squirmers is governed by overdamped Langevin dynamics, which may be expressed as
\begin{align}
\frac{d\mathcal{Y}}{dt}& = \mathcal{V}_{sq} +\mathcal{V} + \tilde{\mathcal{V}} + k_BT\nabla_{\mathcal{Y}}\cdot \mathcal{M}^{\mathcal{V}\mathcal{F}} \\
\frac{d\mathcal{P}}{dt} &= \mathcal{Q}(\mathcal{W} + \mathcal{W}_{sq} + \tilde{\mathcal{W}}) + k_BT\left(\mathcal{Q} \nabla_{\mathcal{Y}}\cdot \mathcal{M}^{\mathcal{W}\mathcal{F}} - 2 \mathcal{D}^{\mathcal{W}\mathcal{T}} \mathcal{P}\right)
\end{align}
where $\mathcal{Y}$ is the $3N \times 1$ vector containing all particle positions, i.e. $[\mathbf{Y}^T_1, \mathbf{Y}^T_2,\dots, \mathbf{Y}^T_N]^T$, while $\mathcal{P} = [\mathbf{p}^T_1, \mathbf{p}^T_2,\dots, \mathbf{p}^T_N]^T$ is that for the particle orientations. The matrix $\mathcal{Q}$ is the block diagonal matrix whose non-zero entries are given by $\mathcal{Q}_{3n+k,3n+l} = \epsilon_{klm}\mathbf{p}^n_{m}$ for $n = 1,\dots, N$.
The vectors $\mathcal{V}$ and $\mathcal{W}$ are the translational and angular velocities, respectively, of the particles that are related to the vector of forces, $\mathcal{F}$, and torques, $\mathcal{T}$, on the particles through
\begin{align}
\left[\begin{array}{c}
\mathcal{V} \\
\mathcal{W} \\
\end{array}\right]=
\left[\begin{array}{cc}
\mathcal{M}^\mathcal{VF} & \mathcal{M}^\mathcal{VT} \\
\mathcal{M}^\mathcal{WF} & \mathcal{M}^\mathcal{WT} \\
\end{array}\right]
=\mathcal{M}
\left[\begin{array}{c}
\mathcal{F} \\
\mathcal{T} \\
\end{array}\right]
\label{eq:mobrel}
\end{align}
where $\mathcal{M}$ is the $6N \times 6N$ low Reynolds number mobility matrix for all particles. In Eq. (\ref{eq:mobrel}), we indicate explicitly the four $3N \times 3N$ submatrices relating either forces or torques with translational or angular velocities.
In addition to $\mathcal{V}$ and $\mathcal{W}$, the particles have velocities $\mathcal{V}_{sq}$ and angular velocities $\mathcal{W}_{sq}$ which encapsulate the swimming velocity, $U \mathbf{p}_n$, of each squirmer, as well as the additional velocities and angular velocities of all particles due to the flows induced by each of the first two squirming modes (\cite{Blake1971,Ishikawa2006}). For a squirmer centred at the origin and in a frame moving with the squirmer, this induced flow is given by
\begin{align}
\mathbf{u}(\mathbf{x})&= -\frac{B_1}{3}\frac{a^3}{r^3}\left(\mathbf{I} - 3\frac{\mathbf{xx}^T}{r^2}\right)\mathbf{p} + \left(\frac{a^4}{r^4} - \frac{a^2}{r^2}\right)\frac{B_{2}}{2}\left(3 \left(\frac{\mathbf{p}\cdot\mathbf{x}}{r} \right)^2 - 1\right)\frac{\mathbf{x}}{r} \nonumber\\
&- \frac{a^4}{r^4} B_{2}\left(\frac{\mathbf{p}\cdot\mathbf{x}}{r}\right)\left(\mathbf{I} - \frac{\mathbf{xx}^T}{r^2}\right)\mathbf{p}. \label{eq:squ}
\end{align}
where $B_1$ is related to the swimming speed through $B_1 = 3U/2$ (\cite{Blake1971}) and $B_2$ controls the strength and sign of the swimming stresslet, $\mathbf{G}$, through
\begin{align}
\mathbf{G} &= \frac{4}{3}\pi\eta a^2\left(3\mathbf{p}\mathbf{p} - \mathbf{I}\right)B_2 \label{eq:sqstresslet}.
\end{align}
The squirming parameter $\beta = B_2/B_1$ describes the relative stresslet strength (\cite{Ishikawa2006}). For $\beta > 0$, the squirmer is a `puller,' bringing fluid in along $\mathbf{p}$ and expelling it laterally, whereas if $\beta < 0$, the squirmer is a `pusher,' expelling fluid along $\mathbf{p}$ and bringing it in laterally.
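Eq.~(\ref{eq:squ}) can be evaluated directly; the NumPy sketch below (our own illustration, with the function name and default parameters chosen by us) reproduces, for instance, the axial far-field decay $u = U a^3/r^3$ along $\mathbf{p}$ when $\beta = 0$.

```python
import numpy as np

def squirmer_flow(x, p, a=1.0, B1=1.5, B2=0.0):
    """Flow of the squirming modes at point x, squirmer at the origin,
    in the co-moving frame; p is the (unit) swimming direction."""
    x = np.asarray(x, dtype=float)
    p = np.asarray(p, dtype=float)
    r = np.linalg.norm(x)
    xh = x / r                                   # unit vector x/r
    # source-dipole (B1) term: -(B1/3)(a/r)^3 (I - 3 xx^T/r^2) p
    u = -(B1 / 3.0) * (a / r)**3 * (np.eye(3) - 3.0 * np.outer(xh, xh)) @ p
    # stresslet (B2) terms
    u += ((a / r)**4 - (a / r)**2) * (B2 / 2.0) \
         * (3.0 * np.dot(p, xh)**2 - 1.0) * xh
    u += -(a / r)**4 * B2 * np.dot(p, xh) \
         * (np.eye(3) - np.outer(xh, xh)) @ p
    return u
```

With $B_1 = 3U/2$ and $U = 1$, the flow two radii ahead of the squirmer along $\mathbf{p}$ is $U a^3/r^3 = 1/8$ for $\beta = 0$, and the $B_2$ term shifts this axial value for a pusher or puller.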
The random velocities, $\tilde{\mathcal{V}}$ and angular velocities, $\tilde{\mathcal{W}}$, obey the fluctuation-dissipation theorem and their statistics are related to the mobility matrix $\mathcal{M}$ through
\begin{align}
\langle \tilde{\mathcal{V}}(t)\rangle &= 0\\
\langle \tilde{\mathcal{W}}(t)\rangle &= 0\\
\left\langle \left[\begin{array}{c}
\tilde{\mathcal{V}}(t) \\
\tilde{\mathcal{W}}(t) \\
\end{array}\right]
\left[\begin{array}{cc}
\tilde{\mathcal{V}}^T(t') & \tilde{\mathcal{W}}^T(t')
\end{array}\right]
\right\rangle &= 2k_BT \mathcal{M}\delta(t-t').
\label{eq:PM2}
\end{align}
Also appearing in the equations of motion are the thermal drift terms $k_BT\nabla_{\mathcal{Y}}\cdot \mathcal{M}^{\mathcal{V}\mathcal{F}}$, $k_BT \mathcal{Q}\nabla_{\mathcal{Y}}\cdot \mathcal{M}^{\mathcal{W}\mathcal{F}}$, and $-2k_BT \mathcal{D}^{\mathcal{W}\mathcal{T}} \mathcal{P}$ that arise from taking the overdamped limit and are required to obtain particle distribution dynamics that are governed by Smoluchowski's equation. The diagonal matrix $\mathcal{D}^{\mathcal{W}\mathcal{T}}$ has entries $\mathcal{D}^{\mathcal{W}\mathcal{T}}_{ij} = \mathcal{M}^{\mathcal{W}\mathcal{T}}_{ij}$ for $i = j$ and zero otherwise.
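Ignoring hydrodynamic coupling (diagonal mobility, no drift terms), the overdamped update for free tracers reduces to a standard Euler-Maruyama scheme. The following sketch is our own minimal illustration, not the force-coupling scheme described in the next subsection; it recovers $\langle |\Delta \mathbf{Y}|^2\rangle = 6 D_0 t$ in three dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
D0, dt, nsteps, ntracers = 1.0, 1e-3, 1000, 500

# Euler-Maruyama for dY = sqrt(2 D0) dW (free Brownian tracers)
Y = np.zeros((ntracers, 3))
for _ in range(nsteps):
    Y += np.sqrt(2.0 * D0 * dt) * rng.standard_normal((ntracers, 3))

msd = np.mean(np.sum(Y**2, axis=1))   # mean-squared displacement at t = 1
print(f"MSD at t = 1: {msd:.2f}")     # expected value 6*D0*t = 6
```

In the simulations of this paper, the random increments are instead correlated through the configuration-dependent mobility matrix, as required by Eq.~(\ref{eq:PM2}).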
\subsection{Force-coupling method}
To compute the particle dynamics, we rely on the force-coupling method and two recent extensions to active particles (\cite{Delmotte_2015a}) based on the steady squirmer model (\cite{Lighthill1952,Blake1971,Ishikawa2006}) and to Brownian suspensions (\cite{Keaveny2014,Delmotte_2015b}) using concepts from fluctuating hydrodynamics. The force-coupling method uses regularized force distributions and volume averaging to generate the far-field approximation of the mobility matrix, $\mathcal{M}$.
To compute the particle velocities and angular velocities arising from the forces and torques on the particles, as well as those due to squirming, we first solve for the Stokes flow
\begin{align}
\bm{\nabla} p - \eta \nabla^2 \mathbf{u} & = \sum_{n=1}^N \mathbf{F}_n \Delta_n(\mathbf{x}) + \sum_{n=1}^{N_p} \mathbf{S}_{n} \cdot \bm{\nabla}\Theta_n(\mathbf{x}) \nonumber\\
&+ \mathbf{G}_{n} \cdot \bm{\nabla} \Delta_n(\mathbf{x}) + \mathbf{H}_n\nabla^2\Theta_n(\mathbf{x}) \label{eq:FCM1}\\
\bm{\nabla}\cdot \mathbf{u}& = 0 \label{eq:FCM2}
\end{align}
where $\mathbf{F}_n$ is the force particle $n$ exerts on the fluid and $\mathbf{S}_{n}$ is the stresslet of particle $n$ due to its rigidity. We note that in our simulations all particles are torque-free, i.e., $\bm{\tau}_n = \mathbf{0}$ for all $n$, and we ignore the tracer stresslets due to their small size. The only forces we consider are short-range pairwise repulsive forces that represent contact forces between two particles and prevent overlap (\cite{Delmotte_2015a}). Additionally, for the squirmers, we have
\begin{align}
\mathbf{H}_n &= -\frac{4}{3}\pi\eta a^3 B_1 \mathbf{p}_n,\\
\mathbf{G}_n &= \frac{4}{3}\pi\eta a^2\left(3\mathbf{p}_n\mathbf{p}_n - \mathbf{I}\right)\beta B_1 \label{eq:sqstressletFCM}
\end{align}
so as to yield the flow corresponding to Eq.~(\ref{eq:squ}) for each squirmer. In Eq.~(\ref{eq:FCM1}), we also have the two Gaussian envelopes that are used to project the particle forces onto the fluid,
\begin{eqnarray}
\Delta_n(\mathbf{x})&=&(2\pi\sigma_{n; \Delta}^2)^{-3/2}\textrm{e}^{-|\mathbf{x} - \mathbf{Y}_n|^2/2\sigma_{n;\Delta}^2} \nonumber\\
\Theta_n(\mathbf{x})&=&(2\pi\sigma_{n;\Theta}^2)^{-3/2}\textrm{e}^{-|\mathbf{x} - \mathbf{Y}_n|^2/2\sigma_{n;\Theta}^2}.
\label{eq:envelopes}
\end{eqnarray}
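As a quick numerical check of these envelopes, the sketch below (with our own illustrative grid parameters, and using the length scale $\sigma_{n;\Delta} = a_n/\sqrt{\pi}$ quoted in the text) evaluates $\Delta_n$ on a periodic grid, verifies that it integrates to one, and volume-averages the test field $u_x = x$, which returns the particle position for a symmetric envelope:

```python
import numpy as np

# Sanity check of the FCM Gaussian envelope on a periodic grid
# (illustrative grid parameters of our own choosing).
L, Ng, a = 10.0, 64, 1.0
h = L / Ng
x = np.arange(Ng) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
Yn = np.array([5.0, 5.0, 5.0])                     # particle position (on-grid)
sig = a / np.sqrt(np.pi)                           # sigma_{n;Delta}
r2 = (X - Yn[0])**2 + (Y - Yn[1])**2 + (Z - Yn[2])**2
Delta = (2.0 * np.pi * sig**2)**-1.5 * np.exp(-r2 / (2.0 * sig**2))
dV = h**3
mass = Delta.sum() * dV                            # integral of Delta -> 1
Vx = (X * Delta).sum() * dV                        # average of u_x = x -> Yn[0]
```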
The length scales $\sigma_{n;\Delta}$ and $\sigma_{n;\Theta}$ are related to the radius, $a_n$, of particle $n$ through $\sigma_{n;\Delta} = a_n/\sqrt{\pi}$ and $\sigma_{n;\Theta} = a_n/\left(6\sqrt{\pi}\right)^{1/3}$. After solving Eq. (\ref{eq:FCM1}) for $\mathbf{u}$, the velocity of each tracer is determined from
\begin{align}
\mathbf{V}_n&=\int\mathbf{u}\Delta_n(\mathbf{x})d^3\mathbf{x} \label{eq:FCM3a},
\end{align}
while for the squirmers, the velocities, angular velocities, and local-rates-of-strain are given by
\begin{align}
\mathbf{V}_n &= U\mathbf{p}_n - \mathbf{W}_n + \int \mathbf{u} \Delta_n(\mathbf{x}) d^3\mathbf{x} \label{eq:part_vel_self_ind}\\
\bm{\Omega}_n &= \frac{1}{2}\int\left[\bm{\nabla}\times\mathbf{u}\right] \Theta_n(\mathbf{x})d^3\mathbf{x} \label{eq:part_rot}\\
\mathbf{E}_n&=-\mathbf{K}_n + \frac{1}{2}\int \left[\bm{\nabla}\mathbf{u} + (\bm{\nabla}\mathbf{u})^T\right]\Theta_n(\mathbf{x})d^3\mathbf{x} = 0. \label{eq:ROS_self_ind}
\end{align}
where the terms $\mathbf{W}_n$ and $\mathbf{K}_n$ are included to subtract away artificial self-induced velocities and local-rates-of-strain due to the volume integration of the squirming modes. Expressions for these terms are provided in \cite{Delmotte_2015a}. The local rate-of-strain for each squirmer is required to be zero, $\mathbf{E}_n = 0$, giving rise to the stresslets, $\mathbf{S}_n$.
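Because $\mathbf{E}_n$ depends linearly on the stresslet amplitudes, enforcing $\mathbf{E}_n = \mathbf{0}$ amounts to a linear solve that can be performed matrix-free with a Krylov method. The toy sketch below (our own stand-in for the rate-of-strain/stresslet coupling, which we take to be symmetric positive definite as for an FCM-type mobility block) illustrates the structure:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Toy stand-in: E(S) = E0 + M_ES @ S is linear in the stresslet amplitudes,
# so enforcing E(S) = 0 means solving M_ES @ S = -E0 (matrix-free in practice).
rng = np.random.default_rng(0)
n = 12                                    # e.g. stresslet components x particles
A = rng.standard_normal((n, n))
M_ES = A @ A.T + n * np.eye(n)            # SPD surrogate coupling matrix
E0 = rng.standard_normal(n)               # rate of strain with zero stresslets
op = LinearOperator((n, n), matvec=lambda s: M_ES @ s, dtype=float)
S, info = cg(op, -E0)                     # conjugate gradients, matrix-free
```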
To compute the random velocities and angular velocities, we consider the Stokes flow
\begin{align}
\bm{\nabla} p - \eta \nabla^2 \tilde{\mathbf{u}} &= \bm{\nabla}\cdot \mathbf{P} + \sum_{n=1}^{N_p} \tilde{\mathbf{S}}_{n} \cdot \bm{\nabla}\Theta_n(\mathbf{x}) \nonumber\\
\bm{\nabla}\cdot \tilde{\mathbf{u}} &= 0.
\label{eq:RanVel1}
\end{align}
where $\mathbf{P}$ is a fluctuating stress driving the random fluid flow. The statistics for $\mathbf{P}$, in index notation, are given by
\begin{align}
\left\langle P_{jl}\right\rangle&=0\\
\left\langle P_{jl}(\mathbf{x},t)P_{pq}(\mathbf{x}',t') \right\rangle&=2k_BT\eta\left(\delta_{jp}\delta_{lq} + \delta_{jq}\delta_{lp}\right)\delta(\mathbf{x}-\mathbf{x}')\delta(t-t').
\label{eq:RanVel2}
\end{align}
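A discrete realisation of $\mathbf{P}$ consistent with these statistics can be built from independent Gaussians by symmetrisation; the sketch below (our discretisation, with the delta functions absorbed into a cell volume $dV$ and time step $dt$ set to one) checks the variances empirically:

```python
import numpy as np

def sample_P(kBT, eta, dV, dt, nsamples, rng):
    # iid Gaussians, symmetrised so that
    # <P_jl P_pq> = (2 kBT eta)/(dV dt) * (d_jp d_lq + d_jq d_lp)
    W = rng.standard_normal((nsamples, 3, 3))
    Q = (W + np.swapaxes(W, 1, 2)) / np.sqrt(2.0)
    return np.sqrt(2.0 * kBT * eta / (dV * dt)) * Q

rng = np.random.default_rng(1)
P = sample_P(kBT=1.0, eta=1.0, dV=1.0, dt=1.0, nsamples=200000, rng=rng)
# empirical checks: Var P_01 -> 2, Var P_00 -> 4, <P_01 P_10> -> 2
```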
Using the FCM volume averaging operators, we first enforce
\begin{align}
\tilde{\mathbf{E}}_n&= \frac{1}{2}\int \left[\bm{\nabla}\tilde{\mathbf{u}} + (\bm{\nabla}\tilde{\mathbf{u}})^T\right]\Theta_n(\mathbf{x})d^3\mathbf{x} = 0
\label{eq:ROS_self_ind_fluct}
\end{align}
to obtain the values of $\tilde{\mathbf{S}}_n$ and subsequently compute
\begin{align}
\tilde{\mathbf{V}}_n &= \int \tilde{\mathbf{u}} \Delta_n(\mathbf{x}) d^3\mathbf{x} \label{eq:randvel}\\
\tilde{\bm{\Omega}}_n &= \frac{1}{2}\int\left[\bm{\nabla}\times\tilde{\mathbf{u}}\right] \Theta_n(\mathbf{x})d^3\mathbf{x} \label{eq:randrot}
\end{align}
As demonstrated in \cite{Keaveny2014}, the resulting velocities and angular velocities will satisfy the fluctuation-dissipation theorem with the covariance given by the FCM approximation of the mobility matrix. We note that while we have described the computation of deterministic and stochastic velocities separately, due to the linearity of Stokes equations, these can be combined into a single computation of the total particle velocities.
While this describes how we obtain the deterministic and random motions of the particles, it remains to integrate the equations of motion with the correct thermal drift. We accomplish this by employing the midpoint drifter-corrector (DC) time integration scheme (\cite{Delmotte_2015b}) that inherently accounts for the drift terms without having to compute them directly. In words, the DC recovers the drift by initially advancing the particle positions to the midstep using only the random velocities determined from the fluctuating fluid flow with no stresslets for particle rigidity. At the midstep, the full computation is performed to find the deterministic velocities, as well as new random particle velocities using the same initial realisation of the fluctuating stress. The result is correlated motion between the initial and midstep velocities that then can be exploited to capture the correct thermal drift. For details of the scheme, including error expansions showing explicitly that the drift is recovered, the reader is referred to \cite{Delmotte_2015b}.
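In schematic form, one DC step can be sketched as follows (the function names and toy closures are ours, not the implementation of \cite{Delmotte_2015b}); the key point is that the same noise realisation is reused at the midstep, and the resulting correlation between the initial and midstep random velocities recovers the thermal drift:

```python
import numpy as np

# Skeleton of a midpoint drifter-corrector step (stand-in function names).
def dc_step(Y, dt, rng, random_velocity, deterministic_velocity):
    xi = rng.standard_normal(Y.shape)       # one realisation of the random forcing
    V_rand0 = random_velocity(Y, xi)        # no rigidity stresslets at this stage
    Y_mid = Y + 0.5 * dt * V_rand0          # drift to the midstep
    V_det = deterministic_velocity(Y_mid)   # full deterministic solve at midstep
    V_rand = random_velocity(Y_mid, xi)     # SAME xi, evaluated at new positions
    return Y + dt * (V_det + V_rand)

# toy closure: constant mobility, so the random velocity is position-independent
Y1 = dc_step(np.zeros(3), 1e-2, np.random.default_rng(2),
             lambda Y, xi: xi, lambda Y: np.zeros_like(Y))
```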
\subsection{Simulation parameters and set-up}
In our simulations, we set our parameters to match the \emph{C. reinhardtii} experiments of \cite{Leptos2009}. We set the swimmer to tracer radius ratio to $a_{sw}/a = 5$ and set the swimming speed, and hence $B_1$, to match $U = 100\,\mu m\, s^{-1} = 20 a_{sw}s^{-1}$. Following \cite{Thiffeault2014}, we set $\beta = 0.5$. The simulations are carried out in a triply periodic domain with edge length $L = 23a_{sw}$. The Stokes equations are solved using a Fourier spectral method with $N_{g} = 192$ grid points in each direction. The number of tracers is set to $N_t = 1255$, resulting in a tracer volume fraction of less than 0.34\%. The number of swimmers is varied from $N_{p} = 12$ to $451$ to examine very dilute swimmer suspensions of $\phi_{v} = 0.4\%$, as well as semi-dilute cases where $\phi_{v} = 15\%$. Finally, the thermal energy $k_BT$ is set to obtain the same distribution of tracer displacements as \cite{Leptos2009} in the absence of swimmers. We note that the resulting value of the diffusion coefficient, $D_0 = 0.34\, \mu m^2 s^{-1}$, is slightly higher than the value $D_0 = 0.28\, \mu m^2 s^{-1}$ given in \cite{Leptos2009}. A snapshot taken from a representative simulation is shown in Figure \ref{fig:snapshot_concentrated_tracers}.
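The quoted swimmer volume fractions can be checked directly from these counts (a simple consistency check with our own helper; lengths in units of $a_{sw}$):

```python
import numpy as np

# Volume fractions implied by the particle counts, with a_sw = 1 and L = 23.
a_sw, L = 1.0, 23.0
phi = lambda n, a: n * (4.0 / 3.0) * np.pi * a**3 / L**3
phi_dilute = phi(12, a_sw)         # ~0.4% swimmer volume fraction
phi_dense = phi(451, a_sw)         # ~15% swimmer volume fraction
phi_tracer = phi(1255, a_sw / 5.0) # tracer fraction, well below 1%
```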
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Snapshot_fluid_particle_301swimmers_1255tracers.pdf}
\caption{\label{fig:snapshot_concentrated_tracers} Snapshot of the simulation domain containing $N_p=301$ swimmers and $1255$ tracers, with $a_{sw}/a =5$. The volume fraction is $\phi_v = 10\%$. The large grey spheres are the squirmers and the small black dots correspond to the tracers. Vectors represent the swimmers' orientations $\mathbf{p}_n, \, n=1,\ldots,N_p$. Slices show the norm of the fluid velocity field normalized by the intrinsic swimming speed $\|\mathbf{u}\|/U$. One can observe the fluid velocity fluctuations arising from the fluctuating stress.
}
\end{figure}
\section{Results}
\subsection{Dilute regime}
\label{sec:Dilute}
\subsubsection{Tracer displacements}
In their experiments, \cite{Leptos2009} measured the time-dependent probability distribution function, $P(\Delta x, \Delta t)$, for tracer displacements for suspensions with swimmer volume fractions $\phi_v = 0-2.2\%$ over $0.3$s. As the beat period for the flagella of \emph{C. reinhardtii} is $T = 0.02$s, this observation time corresponds to 15 beat cycles. They find that, unlike previous experiments using bacterial baths (\cite{Wu2000}), the tracer displacement distributions exhibit non-Gaussian tails. The non-Gaussian tails are attributed to rare entrainment events that occur when a tracer particle comes in close proximity to a swimmer's surface.
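In practice, $P(\Delta x, \Delta t)$ is estimated from trajectories by histogramming displacements over a fixed lag; a minimal sketch with synthetic Brownian tracks (our own toy data):

```python
import numpy as np

def displacement_pdf(x, lag, bins):
    # collect displacements over the lag and return a normalised PDF
    dx = (x[lag:] - x[:-lag]).ravel()
    pdf, edges = np.histogram(dx, bins=bins, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), pdf

rng = np.random.default_rng(5)
steps = rng.normal(0.0, 0.1, (2000, 500))      # toy Brownian tracks
x = np.cumsum(steps, axis=0)
centers, pdf = displacement_pdf(x, lag=10, bins=np.linspace(-2, 2, 81))
```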
\begin{figure}
\begin{centering}
\subfloat[PDF for tracer displacements at time $\Delta t = 0.12s$ for swimmer volume fractions $\phi_v = 0 - 2.2\%$. Symbols represent the data from \cite{Leptos2009}. Solid lines correspond to the simulations.]{\label{fig:PDF_disp_beta_0_5} \includegraphics[width=0.5\columnwidth]{PDF_disp_no_adim_Ntracers_1255_kbt_1_1em3_Nsaves_800_Nit_20000.pdf}}
\hspace{0.5cm}
\subfloat[Diffusive scaling of the tracer displacement PDF for $\phi_v = 2.2\%$ from the simulations]{\label{fig:PDF_disp_beta_0_5_diff_scaling} \includegraphics[width=0.5\columnwidth]{PDF_disp_adim_Nsaves_800_Nit_20000_ind_start_24_ind_end_120.pdf}}
\end{centering}
\caption{Tracer displacements in dilute suspensions of squirmers with $\beta = 0.5$. Displacements are averaged over the three spatial directions.
}
\end{figure}
We performed simulations corresponding to the same swimmer volume fractions and observation times as in the experiments of \cite{Leptos2009}. The resulting tracer displacement distributions from our simulations are shown in Fig. \ref{fig:PDF_disp_beta_0_5}. Our squirmer simulations adequately capture the Gaussian core of the distributions. As in \cite{Leptos2009}, we find that the distributions are self-similar with respect to the diffusive scaling $\Delta x/(2D_{eff}\Delta t)^{1/2}$ (Fig. \ref{fig:PDF_disp_beta_0_5_diff_scaling}). We find, however, that the larger, but rarer, displacement events related to the tails are slightly underestimated by the model. This is consistent with similar findings of tracer displacements by squirmers (\cite{Thiffeault2014}). We attribute this to the fact that in the near field the squirmer does not replicate the flow induced by swimming \emph{C. reinhardtii}, and the tails of the PDF for short-time tracer displacements depend on the details of the flow near the swimming micro-organism. The differences in entrainment due to differences in swimming behaviour have been examined previously in \cite{Pushkin2013}.
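The diffusive-scaling collapse can be illustrated with synthetic data: rescaling displacements by $(2D_{eff}\Delta t)^{1/2}$ maps purely diffusive samples at different lags onto a single unit-variance distribution (a sketch with our own parameter values):

```python
import numpy as np

# Rescale synthetic diffusive displacements at several lags; all rescaled
# samples should share one unit-variance distribution.
rng = np.random.default_rng(3)
D_eff = 0.34
stds = []
for dt in (0.12, 0.5, 2.0):
    dx = rng.normal(0.0, np.sqrt(2.0 * D_eff * dt), 100000)
    stds.append((dx / np.sqrt(2.0 * D_eff * dt)).std())
```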
Fig. \ref{fig:Tails_pdf_log_log} shows the distribution tails in more detail. Though the comparison covers less than a decade of displacements, we observe that our simulation data follow an $x^{-4}$ power law, while the data from \cite{Leptos2009} appear to behave as $x^{-3}$. We note that \cite{Leptos2009} originally fit the non-Gaussian tails with exponentials. The $x^{-4}$ decay was observed in experiments by \cite{Kurtuldu2011} and predicted by \cite{Pushkin2014} for suspensions of dipolar swimmers with random orientations.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Loglog_PDF_disp_no_adim_Ntracers_1255_kbt_1_1em3_Nsaves_800_Nit_20000.pdf}
\caption{\label{fig:Tails_pdf_log_log} The PDF of tracer displacements in log-scale. Symbols represent the data from \cite{Leptos2009}. Solid lines correspond to the simulations. The dashed lines correspond to the power laws:
\symbol{\tdash \tdash \tdash \tdash \nobreak\ }{}{10}{0}{black}{black}: $x^{-4}$.
\symbol{\tdash \tdash \tdash \tdash \nobreak\ }{}{10}{0}{magenta}{black}: $\textcolor{magenta}{x^{-3}}$.
}
\end{figure}
Previous studies (\cite{Zaid2011,Thiffeault2014}) have argued that the non-Gaussian tails are due to the shortness of the observation time coupled with the rarity of the entrainment events. To verify this assertion, we have computed the evolution of the tracer displacement PDF over a longer time, $\Delta t = 0 - 2s$, as shown in Figure \ref{fig:PDF_disp_beta_0_5_phi_2_2_long_time} for $\phi_v = 2.2\%$. As $\Delta t$ increases, we see that the Gaussian core of the distribution broadens. We also observe that small shifts in the mean value appear, which we attribute to statistical fluctuations in the data. After rescaling by the diffusive timescale, we see in Figure \ref{fig:PDF_disp_beta_0_5_diff_scaling_phi_2_2_long_time} that the tails decay more rapidly and the distribution approaches a Gaussian as $\Delta t$ increases. This convergence to a Gaussian is observed for all volume fractions $\phi_v = 0-2.2\%$ (see Figure \ref{fig:PDF_disp_beta_0_5_long_time}), with the rate of convergence increasing as $\phi_v$ increases. \cite{Thiffeault2014} derived a criterion for reaching a Gaussian distribution. He showed that for spherical squirmers with $\beta = 0.5$, the time to reach Gaussianity is $\Delta t = 3.57, 1.74, 0.8, 0.5 s$ for $\phi_v = 0.4, 0.8, 1.6, 2.2\%$, respectively. Our simulation data (Figure \ref{fig:Long_time_dilute_tracers}) are in accordance with these results. We also note that convergence to a Gaussian distribution in dilute suspensions was observed in experiments where the algal suspension was confined to a liquid film (\cite{Kurtuldu2011}).
While our results point towards convergence to a Gaussian distribution, we note that \cite{Zaid2011}, using point sources with no steric interactions, found that Gaussianity should arise only when the disturbance flow induced by the swimmers decays as $r^{-n}$ with $n=1$; if $n\geq 2$, as is the case for squirmers, the fluid velocity distribution, and thus the tracer displacements, should deviate from Gaussianity, even at long times. Their results are supported by the work of \cite{Rushkin2010} on dilute suspensions of \emph{Volvox} ($\phi_v \leq 1.5\%$). That study shows that when considering only settling forces ($n=1$), the fluid velocity fluctuations follow a normal distribution. When accounting for the degenerate quadrupole due to ciliary beating ($n=3$), the fluid velocity fluctuations exhibit strong deviations from Gaussianity, as confirmed by experimental data.
\begin{figure}
\centering
\subfloat[PDF for tracer displacements at times $\Delta t = 0.06-2s$ for $\phi_v = 2.2\%$.]{\label{fig:PDF_disp_beta_0_5_phi_2_2_long_time} \includegraphics[width=0.45\columnwidth]{Phi_2_2_PDF_disp_no_adim_Nsaves_800_Nit_20000_ind_start_24_ind_end_800.pdf}}
\hspace{0.5cm}
\subfloat[Diffusive scaling of the PDF for tracer displacements for $\phi_v = 2.2\%$.]{\label{fig:PDF_disp_beta_0_5_diff_scaling_phi_2_2_long_time} \includegraphics[width=0.45\columnwidth]{Phi_2_2_PDF_disp_adim_Nsaves_800_Nit_20000_ind_start_24_ind_end_800.pdf}}\\
\subfloat[PDF for tracer displacements at time $\Delta t = 2s$ for various swimmer volume fractions $\phi_v = 0 - 2.2\%$.]{\label{fig:PDF_disp_beta_0_5_long_time} \includegraphics[width=0.5\columnwidth]{PDF_disp_no_adim_Ntracers_1255_kbt_1_1em3_Nsaves_800_Nit_20000_ind_plot_800_Nsw_5.pdf}}
\caption{\label{fig:Long_time_dilute_tracers} PDF of tracer displacements for long times. Displacements are averaged over the three spatial directions.
}
\end{figure}
\subsubsection{Enhanced Diffusion}
In addition to the tracer displacement distribution, \cite{Leptos2009} measured the mean-squared displacement of tracers as a function of time and observed that the motion of tracers is diffusive, obeying $\langle\Delta x ^2\rangle = 2 D_{eff} t$ for all times. Figure \ref{fig:MSD_beta_0_5} shows the mean-squared tracer displacement from our squirmer simulations. As $\phi_v$ increases, we observe that the onset of the diffusive regime occurs after a period of anomalous transport at short times where $\langle\Delta x ^2\rangle \sim t^{\Theta}, \, 1<\Theta<2$. We note that similar behaviour was observed in other experiments (\cite{Wu2000,Kurtuldu2011,Kurihara_2017}) and the value ${\Theta}=3/2$ has been proposed theoretically (\cite{Kurihara_2017}). Once we have reached the diffusive regime at $\Delta t = 2s$, however, our values for the mean-squared displacement are very close to those given in \cite{Leptos2009}.
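The short-time exponent $\Theta$ can be estimated from a log-log least-squares slope of the mean-squared displacement; a minimal sketch on a synthetic ballistic signal (our own toy data, for which $\Theta = 2$):

```python
import numpy as np

# Estimate Theta in <dx^2> ~ t^Theta by a least-squares slope in log-log space.
t = np.linspace(0.02, 0.3, 15)
msd = (0.5 * t) ** 2                  # toy ballistic signal, exact Theta = 2
Theta = np.polyfit(np.log(t), np.log(msd), 1)[0]
```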
We extract the effective diffusion coefficient from our data by fitting the mean-squared displacement by the solution of the Langevin equation (\cite{Wu2000})
\begin{equation}
\langle \Delta x ^2 \rangle = 2D_{eff}\Delta t\left[1-\exp(-\Delta t/\tau) \right]
\end{equation}
where $\tau$ is the timescale over which ballistic motion ($\langle \Delta x ^2 \rangle \sim 2 \frac{D_{eff}}{\tau}\Delta t^2$) transitions to diffusive behaviour $(\langle\Delta x ^2\rangle \sim 2 D_{eff} \Delta t)$. In Figure \ref{fig:Deff_beta_0_5_kbts}, we compare the effective diffusion coefficient from our simulations with those obtained by \cite{Leptos2009}. We can see that the simulated values match the experimental ones within the statistical errors reported for the experiments.
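The fitting step can be sketched as follows (synthetic, noiseless data with parameters of our own choosing):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit <dx^2> = 2 D_eff dt (1 - exp(-dt/tau)) to extract D_eff and tau.
def langevin_msd(dt, D_eff, tau):
    return 2.0 * D_eff * dt * (1.0 - np.exp(-dt / tau))

dt = np.linspace(0.02, 2.0, 100)
data = langevin_msd(dt, 1.3, 0.25)                  # synthetic "measurements"
(D_fit, tau_fit), _ = curve_fit(langevin_msd, dt, data, p0=(1.0, 0.1))
```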
\begin{figure}
\begin{centering}
\subfloat[Mean-squared displacement over time.]{ \includegraphics[width=0.5\columnwidth]{MSD_Ntracers_1255_kbt_1_1em3_Nsaves_800_Nsw_5.pdf}}
\subfloat[Mean-squared displacement over time in log scale.]{ \includegraphics[width=0.5\columnwidth]{Loglog_MSD_Ntracers_1255_kbt_1_1em3_Nsaves_800_Nsw_5.pdf}}
\end{centering}
\caption{\label{fig:MSD_beta_0_5} Mean-squared displacement of tracers for different swimmer concentrations. Displacements are averaged over the three spatial directions.
}
\end{figure}
\begin{figure}
\begin{centering}
\subfloat{\includegraphics[width=0.5\columnwidth]{Deff_Ntracers_1255_kbt_1_1em3_Nsaves_800_Nsw_5.pdf}}
\caption{\label{fig:Deff_beta_0_5_kbts} Effective diffusion coefficient of tracers for the squirmer model with $\beta = 0.5$.
\symbol{\drawline{22}{1.0}\nobreak\ }{$\bigcirc$\nobreak\ }{22}{0}{black}{black}: simulations.
\symbol{}{\ssquareb}{0}{0}{black}{blue}: data from \cite{Leptos2009}.
}
\end{centering}
\end{figure}
\subsubsection{Steric vs. hydrodynamic interactions}
In our simulations, and in all experiments, the tracer particles have a finite size and experience contact forces with nearby swimmers. Hence, three phenomena contribute to the tracer diffusivity in the dilute regime: the flows generated by the swimmers, collisions with swimmers due to contact forces, and tracer Brownian motion. In order to quantify the effect of each, we consider three situations: (i) a bidisperse suspension of passive particles achieved by setting $U = 0, \mathbf{H}_n = \mathbf{0},$ and $\mathbf{G}_n = \mathbf{0}$ for all swimmers, (ii) a suspension where the swimmers move in the fluid without generating any swimming disturbances and interacting with the tracers through contact forces ($U \neq 0, \mathbf{H}_n = \mathbf{0},$ $\mathbf{G}_n = \mathbf{0}$), and (iii) the full simulation model.
Figure \ref{fig:steric_vs_hydro} shows the resulting PDF of tracer displacements for a dilute suspension ($\phi_v = 2.2 \%$) of squirmers with $\beta = 0.5$.
In the purely passive case (i), the PDF is Gaussian as expected. For case (ii), the motion of the larger particles and collisions with the tracers generate a non-Gaussian PDF. The Gaussian core of this PDF is similar to that observed in case (i), but we see that there are additional non-Gaussian tails with power-law decay. When moving to the full model (case (iii)), the Gaussian core widens and the power-law decay of the tails increases. This widening of the Gaussian core was also seen in the experiments. These results show that rare swimmer-tracer collisions produce the large displacements that lead to the non-Gaussian tails, while the flows generated by the swimmers act to broaden the Gaussian core, thus enhancing the diffusivity of the tracer particles at short times.
\begin{figure}
\begin{centering}
\includegraphics[width=0.5\columnwidth]{PDF_disp_no_adim_phi_2_2_comparison_beta_0_5_no_disturb.pdf}
\caption{\label{fig:steric_vs_hydro} PDF for tracer displacements for $\Delta t = 0.12s$ for swimmer volume fraction $\phi_v = 2.2 \%$ and $\beta = 0.5$. Dotted line: case (i), swimmers replaced by passive particles, Dashed line: case (ii), moving swimmers that do not generate flow, Solid line: case (iii), full model, Symbols: experiments from \cite{Leptos2009}.}
\end{centering}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\columnwidth]{PDF_disp_no_adim_Ntracers_1255_kbt_1_1em3_Nsaves_1200_Nit_60000_ind_plot_48_Beta_0_12_0_5_plus_steady.pdf}
\caption{\label{fig:Tracers_disp_time_dep} PDF of tracer displacements at $\Delta t = 0.12s$ for the time-dependent and steady models with $\phi_v = 2.2\%$.
\symbol{}{$\Diamond$\nobreak\ }{0}{0}{black}{black}: data from \cite{Leptos2009}.
\symbol{\drawline{22}{1.0}\nobreak\ }{}{0}{0}{blue}{black}: $\bar{\beta} = 0.5$, time-dependent.
\symbol{\tdash \tdash \tdash \tdash \nobreak\ }{}{0}{0}{blue}{black}: $\beta = 0.5$, steady.
\symbol{\tdash \tdot \tdash \tdot \tdash \nobreak\ }{}{0}{0}{magenta}{black}: $\bar{\beta} = 0.12$, time-dependent.
\color{magenta}{\hbox{\drawline{1}{1.0}\spacce{2}} \hbox{\drawline{1}{1.0}\spacce{2}} \hbox{\drawline{1}{1.0}\spacce{2}} \hbox{\drawline{1}{1.0}\spacce{2}} \hbox{\drawline{1}{1.0}\spacce{2}} \hbox{\drawline{1}{1.0}\spacce{2}}} \color{black}{: $\beta = 0.12$, steady.}
}
\end{figure}
\subsubsection{Role of time-dependence}
The flows generated by \emph{C. reinhardtii} are time-dependent (\cite{Guasto2010}) due to the beating of its two flagella. We can include this time dependence by allowing the parameters $B_1$ and $B_2$ appearing in the steady squirmer model to be periodic functions of time. These functions can then be tuned to match the time-dependent swimming speed of \emph{C. reinhardtii} and the location of the flow stagnation point as measured in \cite{Guasto2010}. The details of our tuning procedure can be found in \cite{Delmotte_2015a}. After tuning, we find the average value of the squirming parameter over one beat period is $\bar{\beta} = 0.12$.
Figure \ref{fig:Tracers_disp_time_dep} shows the PDF of tracer displacements for $\phi_v = 2.2$\% from the fully time-dependent model. For comparison, we have also included results from simulations of the time-dependent model with $\bar{\beta} = 0.5$, as well as results from the steady model with $\beta = 0.12$ and $\beta=0.5$. For $\beta = 0.12$, we find that the steady and time-dependent PDFs are nearly indistinguishable. As $\bar{\beta}$ increases to $\bar{\beta} = 0.5$, the PDF becomes slightly narrower than that of the corresponding steady simulation with $\beta = 0.5$. The difference, however, is quite small, and we find that time-dependence of the squirming modes does not significantly affect the PDF of the tracer displacements. These results are in accordance with \cite{Dunkel2010}, which showed that in the dilute regime time-dependent and stroke-averaged swimming disturbances produce similar tracer scattering.
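Schematically, and purely for illustration (the waveforms and coefficients below are not the tuned values of \cite{Delmotte_2015a}, and we assume here that $\bar{\beta}$ is the ratio of the beat-averaged amplitudes $\langle B_2\rangle/\langle B_1\rangle$), periodic squirming amplitudes over the beat period can be written as:

```python
import numpy as np

# Illustrative periodic squirming amplitudes over the beat period T = 0.02 s.
# The waveforms and coefficients are placeholders, not the tuned values.
T = 0.02
B1 = lambda t: 1.0 + 0.8 * np.cos(2.0 * np.pi * t / T)
B2 = lambda t: 0.12 + 0.3 * np.sin(2.0 * np.pi * t / T)
t = np.arange(2000) * T / 2000.0            # one beat period, uniform samples
beta_bar = B2(t).mean() / B1(t).mean()      # beat-averaged squirming parameter
```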
\subsection{Semi-dilute regime}
\label{sec:tracer_concentrated}
In this section, we perform simulations of more concentrated suspensions using the same value of the squirming parameter, $\beta=0.5$. Figure \ref{PDF_disp_concentrated} shows the time-dependent PDF for tracer displacements $P(\Delta x/a, \Delta t)$ at times $\Delta t = 0.3s$ and $\Delta t = 2s$, and for volume fractions $\phi_v = 0-15\%$. For all volume fractions, we observe that the PDF tends towards a Gaussian at long times; the convergence rate, however, increases with volume fraction. Using an analysis based on singular flow fields, \cite{Zaid2011} state that the tracer displacement distribution should only be Gaussian for $\phi_v>25\%$ if the flows generated by the swimmers decay like $r^{-n}$ with $n\geq2$. The appearance of a Gaussian, however, agrees with the theoretical predictions of \cite{Thiffeault2014}, as well as the experimental observations of \cite{Kurtuldu2011}, who observed an increase in Gaussianity with concentration. This, perhaps, highlights the importance of resolving the finite size of both swimmers and tracers for the observation of a Gaussian distribution of displacements. In addition, we note a shift of the mean at long times for high volume fractions. This is attributed to the onset of polar ordering (\cite{Delmotte_2015a}) that is typically observed in periodic squirmer suspensions.
\begin{figure}
\begin{centering}
\subfloat[$\Delta t = 0.3s$]{\label{fig:PDF_disp_concentrated_DT_03} \includegraphics[width=0.5\columnwidth]{PDF_disp_no_adim_Ntracers_1255_kbt_1_1em3_Nsaves_800_Nit_40000_ind_plot_48_Nsw_6.pdf}}
\subfloat[$\Delta t = 2s$]{\label{fig:PDF_disp_concentrated_DT_2} \includegraphics[width=0.5\columnwidth]{PDF_disp_no_adim_Ntracers_1255_kbt_1_1em3_Nsaves_800_Nit_40000_ind_plot_800_Nsw_6.pdf}}
\end{centering}
\caption{\label{PDF_disp_concentrated} PDF for tracer displacements at times $\Delta t = 0.3s$ (a), and $\Delta t = 2s$ (b), for $\phi_v = 0 - 15\%$.
}
\end{figure}
As in the experiments of \cite{Kurtuldu2011} and \cite{Kasyap2014}, our simulations show (Figure \ref{fig:Deff_concentrated}) that the linear scaling $(D_{eff}-D_0)/\phi_v =$ const. breaks down for $\phi_v>2.2\%$, though we do not observe a clear power law as in \cite{Kurtuldu2011}. Additionally, our simulations yield $D_{eff}/D_0 = 11 - 22$ for $\phi_v = 5-10\%$, while \cite{Kurtuldu2011} measured $D_{eff}/D_0 \simeq 900$ for $\phi_v = 7\%$. The large difference in enhanced diffusion may be due to our simulations being three-dimensional, while \cite{Kurtuldu2011} carried out experiments in a quasi-two-dimensional film with thickness on the order of the cells' diameter ($H\simeq 15 \pm 5 \mu m$), resulting in a slower decay of the flows induced by the swimming cells, as well as an increased frequency of swimmer-tracer collisions. To quantify the effect of swimmer concentration on the swimmers' motion, we examine the swimmer mean-squared displacement as a function of concentration, as shown in Figure \ref{fig:MSD_swimmers}. While we see a slight decrease in the slope as $\phi_v$ increases, we observe that the swimmers are ballistic for all concentrations. Longer simulation times would be required to reach a diffusive regime and determine the swimmer effective diffusivity $D_{eff,sw}$, as done in \cite{Ishikawa2010}. Our simulations, therefore, correspond to a different regime than that explored in \cite{Ishikawa2010}, where both swimmers and tracers were diffusive. We note, however, that the values $|\beta| > 1$ used in \cite{Ishikawa2010} are greater than the values used here to match experimental data, and higher values of $\beta$ do lead to the onset of diffusive behaviour for the swimmers at earlier times.
While for each concentration the swimmers' motion is ballistic, we do observe clear differences in the tracer distribution as the swimmer concentration increases. Figure \ref{fig:RDF_phi} shows the squirmer-tracer pair distribution function $g(r,\theta)$ for $\phi_v = 2.2$ and $15\%$. The distribution function $g(r,\theta)$ corresponds to the likelihood, relative to a uniform distribution, of finding a tracer at a distance $r$ from the swimmer centre of mass and with its relative position vector forming an angle $\theta$ with the swimmer orientation. At higher concentrations, there is a much higher probability of finding a tracer directly in front of the swimmer, while the probability of finding tracers aft of the swimmer is reduced. This suggests that accurately representing the details of swimmer-tracer interactions for tracers directly in front of the swimmer is important in reproducing the large displacements. In addition, the increased localisation of tracer particles in the vicinity of the swimmers that occurs at higher swimmer concentrations could lead to increased rates of nutrient uptake by the population.
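The pair distribution function can be estimated by binning swimmer-tracer separations in $(r,\theta)$ and normalising by the bin volume and the mean number density; a minimal sketch (with our own binning choices), validated on uniformly distributed tracers for which $g \approx 1$:

```python
import numpy as np

def pair_distribution(rel, p, r_edges, th_edges, box_volume):
    # rel: tracer positions relative to the swimmer; p: unit orientation.
    r = np.linalg.norm(rel, axis=1)
    cth = np.clip(rel @ p / np.maximum(r, 1e-12), -1.0, 1.0)
    H, _, _ = np.histogram2d(r, np.arccos(cth), bins=(r_edges, th_edges))
    # normalise each (r, theta) bin by its volume ~ 2*pi*r^2*sin(theta)*dr*dth
    dr = np.diff(r_edges)[:, None]
    dth = np.diff(th_edges)[None, :]
    rc = 0.5 * (r_edges[:-1] + r_edges[1:])[:, None]
    thc = 0.5 * (th_edges[:-1] + th_edges[1:])[None, :]
    shell = 2.0 * np.pi * rc**2 * np.sin(thc) * dr * dth
    return H / (shell * (len(rel) / box_volume))

# validation: uniform tracers around one swimmer give g ~ 1 everywhere
rng = np.random.default_rng(4)
rel = rng.uniform(-10.0, 10.0, (400000, 3))
g = pair_distribution(rel, np.array([0.0, 0.0, 1.0]),
                      np.linspace(1.0, 6.0, 6), np.linspace(0.0, np.pi, 7),
                      box_volume=20.0**3)
```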
\begin{figure}
\centering
\subfloat[Effective diffusion coefficient of tracers. \symbol{\drawline{22}{1.0}\nobreak\ }{$\bigcirc$\nobreak\ }{20}{0}{black}{black}: simulations.
\symbol{\tdash \tdash \tdash \tdash \nobreak\ }{}{0}{0}{black}{black}: linear fit for the dilute regime $\phi_v = 0-2.2\%$.
\symbol{}{\ssquareb}{0}{0}{black}{blue}: data from \cite{Leptos2009}.]{\label{fig:Deff_concentrated} \includegraphics[width=0.45\columnwidth]{Deff_Ntracers_1255_kbt_1_1em3_Nsaves_800_Nsw_8.pdf}}
\hspace{0.5cm}
\subfloat[Mean-squared displacement of swimmers. Inset: zoom at long times to show the decrease of the slope with the concentration.]{\label{fig:MSD_swimmers} \includegraphics[width=0.45\columnwidth]{Loglog_MSD_sw_Ntracers_1255_kbt_1_1em3_Nsaves_800_Nsw_7.pdf}}
\caption{Effective diffusion coefficient of tracers (a) and mean-squared displacement of swimmers (b) with $\phi_v = 0 - 15\%$.
}
\end{figure}
\begin{figure}
\begin{centering}
\subfloat[$\phi_v = 2.2\%$]{\label{fig:RDF_phi_2_2} \includegraphics[width=0.5\columnwidth]{Phi_2_2_Barrier_4_RDF_r_theta_Nit_20000.pdf}}
\subfloat[$\phi_v = 15\%$]{\label{fig:RDF_phi_15} \includegraphics[width=0.5\columnwidth]{Phi_15_Barrier_4_RDF_r_theta_Nit_40000.pdf}}
\end{centering}
\caption{\label{fig:RDF_phi} Squirmer-tracer pair distribution function $g(r,\theta)$ for $\beta = 0.5$. The white solid line represents the swimmer's radius. The white dashed line corresponds to the excluded-volume region. The white arrow indicates the swimming direction.
}
\end{figure}
\section{Discussion and conclusion}
\label{sec:Discussion}
In this study, we performed simulations of tracer motion in dilute and semi-dilute suspensions of swimming particles. Our model includes the coupled effects of the finite tracer size relative to that of the swimmers, particle Brownian motion satisfying the fluctuation-dissipation theorem, and the flow disturbances induced by the swimmers using the steady squirmer model. For dilute suspensions, our simulations reproduce quantitatively the tracer displacement distribution and tracer diffusivity measured experimentally by \cite{Leptos2009}, as well as those predicted theoretically by \cite{Thiffeault2014}. We demonstrate that the non-Gaussian tails of the tracer displacement distribution are linked to collisions between the swimmers and tracers, while the width of the Gaussian core is set by the many-body hydrodynamic interactions between the tracers and swimmers, as well as latent tracer diffusion. We also show that time-dependence of the squirming modes has little effect on the distribution. In addition, our results demonstrate that at longer observation times, the number of swimmer-tracer collisions increases, leading to the disappearance of the non-Gaussian power-law tails and a Gaussian tracer displacement distribution. This is consistent with the predictions of \cite{Thiffeault2014}. In the semi-dilute regime, the swimmer-tracer collision rate increases and we observe a faster convergence to a Gaussian distribution of displacements. Additionally, the simulations yield a nonlinear dependence of the effective diffusion coefficient on $\phi_v$, though the enhancement observed was much more modest than that measured (\cite{Kurtuldu2011}) in suspensions confined to a thin film. In the nonlinear regime, we found that at the simulation timescales the swimmers were still behaving ballistically, but there was a notable increase in the likelihood of finding tracers directly in front of the swimmers.
A particularly interesting property of this system is the importance of near-contact swimmer-tracer collisions, even in dilute systems, as they lead to rare, but very large displacements. Even though the squirmer model provides a reasonable description of the far-field flow of many microorganisms, it is the near-field details that matter when resolving the collisions (\cite{Pushkin2013b}). Thus, at longer times when there are sufficiently many collision events, the effective tracer diffusion coefficient will be determined almost exclusively by the collisions. The details of how the organisms generate flow and possibly deform their surfaces or move their flagella are then crucial in quantifying the effective diffusion (\cite{Polin2016}). This could have important implications in predator-prey interactions, making only certain locomotion strategies viable for particle capture. For example, a swimmer with a rigid front surface could possibly be more effective at carrying along small particles as it swims, allowing more time for them to be ingested. In addition, the importance of near-field interactions on particle diffusion may well impact the design of synthetic artificial swimmers or other colloidal active particles for tracer mixing in microfluidic devices. Exploring the impact of different near-field swimmer-tracer interactions for swimmers that induce the same far-field flow, as well as how heterogeneity throughout the swimmer population affects tracer transport, are areas of interest for future investigation.
In bacterial suspensions, the nonlinear dependence of the effective tracer diffusion coefficient on swimmer volume fraction has been attributed (\cite{Kasyap2014}) to the onset of large-scale collective motion in the suspension. In squirmer suspensions, especially at semi-dilute concentrations, the long-time dispersion properties might well be affected by polar ordering (\cite{Delmotte_2015a}), which may lead to net tracer transport along a particular axis, possibly aligned with the mean swimming direction.
Finally, it would be interesting to understand the impact of confinement on tracer diffusivity, especially with regard to the thin film experiments of \cite{Kurtuldu2011}. Due to the stress-free boundaries, the hydrodynamic interactions are longer ranged than in bulk and may therefore enhance tracer diffusivity more than in unconfined suspensions. Additionally, we expect the swimmer-tracer collision rate to increase in confined systems, which should further increase long-time tracer diffusivity. Our numerical framework can simulate thin film geometries with stress-free boundary conditions while retaining Brownian motion and hydrodynamic interactions (\cite{Delmotte_2015b}). These simulations may provide some insight into the role of confinement and boundary conditions on tracer transport and form the basis of future studies.
\acknowledgement{
This work is developed within the MOTIMO ANR Project. Simulations were performed on the Calmip supercomputing mesocenter. We thank INPT for funding the international collaboration between IMFT and Imperial College (grant no. SMI 2014). EEK gratefully acknowledges support from EPSRC under grant EP/P013651/1. The authors also thank the many members of the COST Action MP1305 on Flowing Matter for fruitful discussions.}
\bibliographystyle{apalike}
\section{Introduction}
Accurate prediction is important in any number of applications. In a
modern multi-core computer, for instance, an accurate forecast of
processor load could be used by the operating system to balance the
workload across the cores. The traditional models used in the
computer systems community use linear, time-invariant (and often
stochastic) methods, e.g., autoregressive moving average (ARMA),
multiple linear regression, etc. \cite{jain-artof}. While these
models are widely accepted---and for the most part easy to
construct---they cannot capture the nonlinear interactions that have
recently been shown to play critical roles in a computer's
performance~\cite{berry06,mytkowicz09}. As computers become more and
more complex, these interactions are beginning to cause
problems---e.g., hardware design ``improvements'' that do not work as
expected. Awareness about this issue is growing in the computer
systems community~\cite{tippdirk}, but the modeling strategies used in
that field have not yet caught up with those concerns.
An alternative approach that captures those complex effects is to
model a computer as a nonlinear dynamical
system~\cite{mytkowicz09,todd-phd}---or as a {\sl collection} of
nonlinear dynamical systems, i.e., an iterated function system
\cite{zach-ifs-Chaos}. In this view, the register and memory contents
are treated as state variables of these dynamical systems. The logic
hardwired into the computer, combined with the code that is executing
on that hardware, defines the system's dynamics---that is, how its
state variables change after each processor cycle. As described in
previous IDA papers \cite{zach-IDA10,josh-ida2011}, this framework
lets us bring to bear the powerful machinery of nonlinear time-series
analysis on the problem of modeling and predicting those dynamics. In
particular, the technique called {\sl delay-coordinate embedding} lets
one reconstruct the state-space dynamics of the system from a
time-series measurement of a single state
variable\footnote{Technically, the measurement need only be a smooth
function of at least one state variable}. One can then build
effective prediction models in this embedded space. One of the first
uses of this approach was to predict the future path of a ball on a
roulette wheel, as chronicled in~\cite{pie} and revisited recently
in~\cite{small12}. Nonlinear modeling and forecasting methods that
rely on delay-coordinate embedding have since been used to predict
signals ranging from currency exchange rates to Bach fugues;
see~\cite{casdagli-eubank92,weigend-book} for good reviews.
\label{page:roulette}
This paper is a comparison of how well those two modeling
approaches---linear and nonlinear---perform in a classic computer
performance application: forecasting the processor loads on a CPU. We
ran a variety of programs on an Intel i7-based machine, ranging from
simple matrix operation loops to SPEC cpu2006 benchmarks. We measured
various performance metrics during those runs: cache misses, processor
loads, branch-prediction success, and so on. The experimental setup
used to gather these data is described in Section~\ref{sec:methods}.
From each of the resulting time-series data sets, we built two models:
a garden-variety linear one (multiple linear regression) and a basic
nonlinear one: the ``Lorenz method of analogues,'' which is
essentially nearest-neighbor prediction in the embedded state space
\cite{lorenz-analogues}. Details on these modeling procedures are
covered in Section~\ref{sec:models}. We evaluated each model by
comparing its forecast to the true continuation of the associated time
series; results of these experiments are covered in
Section~\ref{sec:prediction}, along with some explanation about when
and why these different models are differently effective. In
Section~\ref{sec:conclusion}, we discuss some future directions and
conclude.
\section{Experimental Methods}\label{sec:methods}
The testbed for these experiments was an HP Pavilion Elite computer
with an Intel Core\textsuperscript{\textregistered} i7-2600 CPU
running the 2.6.38-8 Linux kernel. This so-called ``Nehalem'' chip is
representative of modern CPUs; it has eight cores running at 3.40 GHz
and an 8192 kB cache. Its kernel software allows the user to monitor
events on the chip, as well as to control which core executes each
thread of computation. This provides a variety of interesting
opportunities for model-based control. An effective prediction of the
cache-miss rate of individual threads, for instance, could be used to
preemptively migrate threads that are bogged down waiting for main
memory to a lower-speed core, where they can spin their wheels without
burning up a lot of power\footnote{Kernels and operating systems do
some of this kind of reallocation, of course, but they do so using
current observations (e.g., if a thread is halting ``a lot'') and/or
using simple heuristics that are based on computer systems knowledge
(e.g., locality of reference).}.
To build models of this system, we instrumented the kernel software to
capture performance traces of various important internal events on the
chip. These traces are recorded from the hardware performance
monitors (HPMs), specialty registers that are built into most modern
CPUs in order to log hardware event information. We used the {\tt
libpfm4} library, via PAPI \cite{papi}, to interrupt the executables
periodically and read the contents of the HPMs. At the end of the
run, this measurement infrastructure outputs the results in the form
of a time series. In any experiment, one must be attentive to the
possibility that the act of measurement perturbs the dynamics under
study. For that reason, we varied the rate at which we interrupted
the executables, compared the results, and used that comparison to
establish a sample rate that produced a smooth measurement of the
underlying system dynamics. A detailed explanation of the mechanics
of this measurement process can be found in
\cite{zach-IDA10,mytkowicz09,todd-phd}.
The dynamics of a running computer depend on both hardware and
software. We ran experiments with four different C programs: two
benchmarks from the SPEC cpu2006 benchmark suite (the {\tt 403.gcc}~ compiler
and the {\tt 482.sphinx}~ speech recognition system) and two four-line programs
({\tt col\_major}~ and {\tt row\_major}~)
\label{page:programs}
that repeatedly initialize a matrix---in column-major and row-major
order, respectively. These choices were intended to explore the range
of current applications. The two SPEC benchmarks are complex pieces
of code, while the simple loops are representative of repetitive
numerical applications. {\tt 403.gcc}~ works primarily with integers, while
{\tt 482.sphinx}~ is a floating-point benchmark. Row-major matrix
initialization works naturally with modern cache design, whereas the
memory accesses in the {\tt col\_major}~ loop are a serious challenge to that
design, so we expected some major differences in the behavior of these
two simple loops.
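The original loops are four-line C programs; the following Python sketch (our own, purely illustrative) mirrors their address streams for a matrix stored row-major, which is why the column-order sweep defeats the cache:

```python
def addresses(n, order):
    """Flat-array indices touched when zeroing an n-by-n matrix stored
    row-major (the C convention), traversed in the given order."""
    if order == "row":
        return [i * n + j for i in range(n) for j in range(n)]
    if order == "col":
        return [i * n + j for j in range(n) for i in range(n)]
    raise ValueError(order)

def unit_stride_fraction(seq):
    """Fraction of accesses that touch the very next memory cell."""
    return sum(b - a == 1 for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

# Row order walks memory sequentially (cache friendly); column order
# jumps by a whole row on every access, so a fetched cache line is
# evicted long before its neighbors are used.
row = addresses(64, "row")    # unit_stride_fraction(row) == 1.0
col = addresses(64, "col")    # unit_stride_fraction(col) == 0.0
```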
Figure~\ref{fig:ipctrace} shows traces of the instructions executed
per cycle, as a function of time, during the execution of the two SPEC
benchmarks on the computer described in the first paragraph of this
section. There are clear patterns in the processor load during the
operation of {\tt 482.sphinx}~. During the first 250 million instructions of
this program's execution, roughly two instructions are being carried
out every cycle, on the average, by the Nehalem's eight cores.
Following that period, the processor loads oscillate, then stabilize
at an average of one instruction per cycle for the period from 400-800
million instructions. Through the rest of the trace, the dynamics
move between different regimes, each with characteristics that reflect
how well the different code segments can be effectively executed
across the cores.
\label{page:sphinx-stochastic}
The processor load during the execution of {\tt 403.gcc}~, on the other hand,
appears to be largely stochastic. This benchmark takes in and compiles
a large number of small C files, which involves repeating a similar
process, so the lack of clear regimes makes sense.
\begin{figure}
\centering
\includegraphics[width=0.49\columnwidth]{ipcsphinktrace}
\includegraphics[width=0.49\columnwidth]{ipcgcctrace}
{{\tt 482.sphinx}~ \hspace*{1.5truein} {\tt 403.gcc}~}
\caption{Processor load traces of the programs studied here}
\label{fig:ipctrace}
\end{figure}
The dynamics of {\tt row\_major}~ and {\tt col\_major}~ (not shown due to space constraints)
were largely as expected. The computer cannot execute as many
instructions during {\tt col\_major}~ because of the mismatch between its
memory-access pattern and the design of the cache, so the baseline
level of the {\tt col\_major}~ trace is much lower than {\tt row\_major}~. Temporally, the {\tt row\_major}~
trace looks very much like {\tt 403.gcc}~: largely stochastic. {\tt col\_major}~, on the
other hand, has a square-wave pattern because of the periodic stalls
that occur when it requests data that are not in the cache.
The following section describes the techniques that we use to build
models of this time-series data.
\section{Modeling computer performance data}
\label{sec:models}
\subsection{Overview}
The goal of this paper is to explore the effectiveness of linear and
nonlinear models of computer performance. Many types of models, of
both varieties, have been developed by the various communities that
are interested in data analysis. We have chosen multiple linear
regression models as our {\sl linear} exemplar because that is the
gold standard in the computer systems literature~\cite{tippdirk}. In
order to keep the comparison as fair as possible, we chose the Lorenz
method of analogues, which is the simplest of the many models used in
the nonlinear time-series analysis community, as our {\sl nonlinear}
exemplar. In the remainder of this section, we give short overviews
of each of these methods; Sections~\ref{sec:multilinear}
and~\ref{sec:nonlinear} present the details of the model-building
processes.
\subsubsection{Multiple Linear Regression Models}
\label{sec:multilinear-overview}
\input{multilinear-overview}
\subsubsection{Nonlinear Models}
\label{sec:nonlinear-overview}
Delay-coordinate embedding \cite{packard80,sauer91,takens} allows one
to reconstruct a system's full state-space dynamics from time-series
data like traces in Figure~\ref{fig:ipctrace}. There are only a few
requirements for this to work. The data, $x_i(t)$, must be evenly
sampled in time ($t$) and both the underlying dynamics and the
measurement function---the mapping from the unknown $d$-dimensional
state vector $\vec{Y}$ to the scalar value $x_i$ that one is
measuring---must be smooth and generic. When these conditions hold,
the delay-coordinate map
\begin{equation}\label{eqn:takens}
F(\tau,d_{embed})(x_i) = [x_i(t), ~ x_i(t+\tau), ~ \dots , ~x_i(t+d_{embed}\tau)]
\end{equation}
from a $d$-dimensional smooth compact manifold $M$ to
$\mathbb{R}^{2d+1}$ is a diffeomorphism on $M$ \cite{sauer91,takens}:
in other words, the reconstructed dynamics and the true (hidden)
dynamics have the same topology. This method has two free parameters,
the delay $\tau$ and the embedding dimension $d_{embed}$, which must
also meet some conditions for the theorems to hold, as described in
Section~\ref{sec:nonlinear}. Informally, delay-coordinate embedding
works because of the internal coupling in the system---e.g., the fact
that the CPU cannot perform a computation until the values of its
operands have been fetched from some level of the computer's memory.
This coupling causes changes in one state variable to percolate across
other state variables in the system. Delay-coordinate embedding is
designed to bring out those indirect effects explicitly and
geometrically.
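Equation~(\ref{eqn:takens}) amounts to sliding a window of delayed samples along the series; a minimal sketch (our own naming; {\tt dim} counts the coordinates of each reconstructed state vector):

```python
def delay_embed(x, tau, dim):
    """Delay vectors [x[t], x[t+tau], ..., x[t+(dim-1)*tau]],
    cf. the delay-coordinate map of Eq. (1)."""
    span = (dim - 1) * tau
    return [tuple(x[t + i * tau] for i in range(dim))
            for t in range(len(x) - span)]

vecs = delay_embed(list(range(10)), tau=2, dim=3)
# vecs[0] == (0, 2, 4), vecs[-1] == (5, 7, 9)
```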
The mathematical similarity of the true and reconstructed dynamics is
an extremely powerful result because it guarantees that $F$ is a good
model of the system. As described on page~\pageref{page:roulette},
the nonlinear dynamics community has recognized and exploited the
predictive potential of these models for some time. Lorenz's method
of analogues, for instance, is essentially nearest-neighbor prediction
in the embedded space: given a point, one looks for its nearest
neighbor and then uses {\sl that} point's future path as the
forecast~\cite{lorenz-analogues}. Since computers are deterministic
dynamical systems~\cite{mytkowicz09}, these methods are an effective
way to predict their performance. That claim, which was first made
in~\cite{josh-ida2011}, was the catalyst for this paper---and the
motivation for comparison of linear and nonlinear models that appears
in the following sections.
\begin{quote}
{\sl Advantages:} models based on delay-coordinate embeddings capture
nonlinear dynamics and interactions, which the linear models ignore,
and they can be used to predict forward in time to arbitrary horizons.
They only require measurement of a single variable.
\smallskip
{\sl Disadvantages:} these models are more difficult to construct, as
estimating good values for their two free parameters can be quite
challenging. The prediction process involves near-neighbor
calculations, which are computationally expensive.
\end{quote}
\subsection{Building MLR forecast models for computer performance traces}\label{sec:multilinear}
In the experiments reported here, the response variable is the
instructions per cycle (IPC) executed by the CPU.
Following~\cite{tippdirk}, we chose the following candidate
explanatory variables: {\sl (i)} instructions retired {\sl (ii)} total
L2 cache\footnote{Modern CPUs have many levels of data and instruction
caches: small, fast memories that are easy for the processor to
access. A key element of computer design is anticipating what to
``fetch'' into those caches.} misses {\sl (iii)} number of branches
taken {\sl (iv)} total L2 instruction cache misses {\sl (v)} total L2
instruction cache hits and {\sl (vi)} total missed branch predictions.
The first step in building an MLR model is to ``reduce'' this list:
that is, to identify any explanatory variables---aka {\sl
factors}---that are meaningless or redundant. This is important
because unnecessary factors can add noise, obscure important effects,
and increase the runtime and memory demands of the modeling algorithm.
We employed the stepwise backward elimination method~\cite{Faraway},
with the threshold value (0.05) suggested in~\cite{jain-artof}, to
select meaningful factors. This technique starts with a ``full
model''---one that incorporates every possible factor---and then
iterates the following steps:
\begin{enumerate}
\item If the $p$-value of any factor is higher than the threshold,
remove the factor with the largest $p$-value
\item Refit the MLR model
\item If all $p$-values are less than the threshold, stop; otherwise
go back to step 1
\end{enumerate}
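The elimination loop above can be sketched as follows. This is an illustrative Python implementation on synthetic data, not the authors' code; for simplicity it uses a large-sample normal approximation for the p-values (rather than the exact t distribution), and all names are ours.

```python
from math import sqrt, sin, cos
from statistics import NormalDist

def inv(M):
    """Gauss-Jordan inverse of a small square matrix."""
    n = len(M)
    A = [list(row) + [float(i == j) for j in range(n)]
         for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        p = A[c][c]
        A[c] = [v / p for v in A[c]]
        for r in range(n):
            if r != c:
                f = A[r][c]
                A[r] = [v - f * w for v, w in zip(A[r], A[c])]
    return [row[n:] for row in A]

def ols(X, y):
    """OLS fit; returns coefficients and normal-approximation p-values."""
    n, k = len(X), len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yr for r, yr in zip(X, y)) for i in range(k)]
    C = inv(XtX)
    beta = [sum(C[i][j] * Xty[j] for j in range(k)) for i in range(k)]
    rss = sum((yr - sum(ri * bi for ri, bi in zip(r, beta))) ** 2
              for r, yr in zip(X, y))
    s2 = rss / (n - k)
    pv = [2 * (1 - NormalDist().cdf(abs(beta[i]) / sqrt(s2 * C[i][i])))
          for i in range(k)]
    return beta, pv

def backward_eliminate(factors, cols, y, threshold=0.05):
    """Drop the factor with the largest p-value until all pass (steps 1-3);
    the intercept is always retained."""
    kept = list(factors)
    while kept:
        X = [[1.0] + [cols[f][t] for f in kept] for t in range(len(y))]
        _, pv = ols(X, y)
        worst = max(range(len(kept)), key=lambda i: pv[i + 1])
        if pv[worst + 1] <= threshold:
            break
        kept.pop(worst)
    return kept

# Synthetic trace: y depends on x1 (plus a small deterministic wiggle),
# while x2 is irrelevant, so the procedure should discard x2.
t = list(range(200))
cols = {"x1": [float(v) for v in t], "x2": [cos(3 * v) for v in t]}
y = [2.0 * v + 0.1 * sin(7 * v) for v in t]
kept = backward_eliminate(["x1", "x2"], cols, y)   # -> ["x1"]
```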
For all four of the traces studied here, this reduction algorithm
converged to a model with three factors: L2 total cache misses, number
of branches taken, and L2 instruction cache misses.
MLR models are meant to {\sl explain} the value of the response
variable in terms of the values of the explanatory variables, but they
can also be used to {\sl predict} it. To do this, one takes a
measurement of each of the factors that appear in the reduced model
(say, $[e_1,\dots,e_m]$). One then makes a prediction of IPC by
simply evaluating the function $[1,e_1,\dots,e_m]\hat{\vec{\beta}}$
and assigning that response to the \emph{next} time-step, i.e.,
$r_{t+1}$. That is how the predictions in the next section were
constructed.
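The prediction step itself is just a dot product with a leading 1 for the intercept term; a minimal sketch (the coefficient and factor values below are illustrative, not measured data):

```python
def mlr_predict_next(beta_hat, factors):
    """One-step MLR forecast: r_{t+1} = [1, e_1, ..., e_m] . beta_hat."""
    return beta_hat[0] + sum(b * e for b, e in zip(beta_hat[1:], factors))

# e.g. with fitted coefficients [1.0, 2.0, 3.0] and current factor
# measurements [4.0, 5.0]:
mlr_predict_next([1.0, 2.0, 3.0], [4.0, 5.0])   # -> 24.0
```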
Like any model, MLR is technically only valid if the data meet certain
conditions. Two of those conditions are {\sl not} true for
computer-performance traces: linear relationship between explanatory
and response variables (which was disproved in~\cite{mytkowicz09}) and
normal distribution of errors, which is clearly not the case in our
data, given the nonlinear trends in residual quantile-quantile plots
of our data (not shown). Despite this, MLR models are used routinely
in the computer systems community~\cite{tippdirk}. And they actually
work surprisingly well, as the results in Section~\ref{sec:prediction} show.
\subsection{Building nonlinear forecast models for computer performance traces}
\label{sec:nonlinear}
The first step in constructing a nonlinear forecast model of a
time-series data set like the ones in Figure~\ref{fig:ipctrace} is to
perform a delay-coordinate embedding using
equation~(\ref{eqn:takens}). We followed standard procedures
\cite{kantz97} to choose appropriate values for the embedding
parameters: the first minimum of the mutual information
curve~\cite{fraser-swinney} as an estimate of the delay $\tau$ and the
false-nearest neighbors technique~\cite{KBA92}, with a threshold of
10-20\%, to estimate the embedding dimension $d_{embed}$. For both
traces in
Figure~\ref{fig:ipctrace}, $\tau=100000$ instructions and
$d_{embed}=12$.
A plot of the reconstructed dynamics of these
two traces appears in Figure~\ref{fig:embedding}.
\begin{figure}
\centering
\includegraphics[width=.49\columnwidth]{ipc3dembedsphink}
\includegraphics[width=.49\columnwidth]{ipc3dembedgcc}
{{\tt 482.sphinx}~} \hspace{.4\columnwidth}{{\tt 403.gcc}~}
\caption{3D projections of delay-coordinate embeddings of the traces
from Figure~\ref{fig:ipctrace}.}
\label{fig:embedding}
\end{figure}
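The two parameter heuristics can be sketched as follows; this is a bare-bones histogram estimator of the average mutual information (not the authors' implementation), with the first-minimum rule for choosing $\tau$. The bin count and search range are illustrative choices.

```python
from math import log, pi, sin
from collections import Counter

def ami(x, tau, bins=16):
    """Histogram estimate of the average mutual information (in bits)
    between x(t) and x(t + tau)."""
    lo, hi = min(x), max(x)
    def bucket(v):
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)
    pairs = [(bucket(x[t]), bucket(x[t + tau])) for t in range(len(x) - tau)]
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(p[0] for p in pairs)
    py = Counter(p[1] for p in pairs)
    return sum(c / n * log(c * n / (px[i] * py[j]), 2)
               for (i, j), c in pxy.items())

def first_minimum(x, max_tau=60):
    """First local minimum of the AMI curve: the standard heuristic
    for choosing the delay tau in Eq. (1)."""
    I = [ami(x, tau) for tau in range(1, max_tau + 1)]
    for t in range(1, len(I) - 1):
        if I[t] < I[t - 1] and I[t] <= I[t + 1]:
            return t + 1          # list index 0 corresponds to tau = 1
    return max_tau

x = [sin(2 * pi * t / 100) for t in range(2000)]
tau = first_minimum(x)            # estimated delay for Eq. (1)
```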
The coordinates of each point on these plots are differently delayed
elements of the IPC time series: that is, IPC at time $t$ on the first
axis, IPC at time $t+\tau$ on the second, IPC at time $t+2\tau$ on the
third, and so on. An equivalent embedding (not shown here) of the
{\tt row\_major}~ trace looks very like {\tt 403.gcc}~: a blob of points. The embedded {\tt col\_major}~
dynamics, on the other hand, looks like {\sl two} blobs of points
because of its square-wave pattern. Recall from
Section~\ref{sec:nonlinear-overview} that these trajectories are guaranteed to
have the same topology as the true underlying dynamics, provided that
$\tau$ and $d_{embed}$ are chosen properly. And structure in these
kinds of plots is an indication of determinism in that dynamics.
The nonlinear dynamics community has developed dozens of methods that
use the structure of these embeddings to create forecasts of the
dynamics; see~\cite{casdagli-eubank92,weigend-book} for overviews.
The Lorenz method of analogues (LMA) is one of the earliest and
simplest of these strategies~\cite{lorenz-analogues}. LMA creates a
prediction of the future path of a point $\vec{x}_o$ through the
embedded space by simply finding its nearest neighbor and then using
{\sl that} point's future path as the forecast\footnote{The original
version of this method requires that one have the true state-space
trajectory, but others (e.g.,~\cite{kennel92}) have validated the
theory and method for the kinds of embedded trajectories used
here.}. The nearest neighbor step obviously makes this algorithm
very sensitive to noise, especially in a nonlinear system. One way to
mitigate that sensitivity is to find the $l$ nearest neighbors of
$\vec{x}_o$ and average their future paths. These comparatively
simplistic methods work surprisingly well for computer-performance
prediction, as reported at IDA 2010~\cite{josh-ida2011}. In the
following section, we compare the prediction accuracy of LMA models
with the MLR models of Section~\ref{sec:multilinear}.
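A minimal sketch of LMA forecasting in the embedded space (our own illustrative implementation; the averaging over $l$ neighbors described above corresponds to the parameter {\tt k} below):

```python
def lma_forecast(x, tau, dim, horizon, k=1):
    """Lorenz method of analogues: embed the series, find the k nearest
    neighbors of the current embedded state among past states, and use
    the average of their one-step successors as the forecast; iterate
    by feeding each forecast back into the series."""
    x = list(x)
    span = (dim - 1) * tau
    out = []
    for _ in range(horizon):
        query = tuple(x[len(x) - 1 - span + i * tau] for i in range(dim))
        cands = []                 # past states whose successor is known
        for t in range(len(x) - 1 - span):
            v = tuple(x[t + i * tau] for i in range(dim))
            d = sum((a - b) ** 2 for a, b in zip(v, query))
            cands.append((d, x[t + span + 1]))
        cands.sort(key=lambda c: c[0])
        nxt = sum(s for _, s in cands[:k]) / k
        out.append(nxt)
        x.append(nxt)
    return out

# On a perfectly periodic series the analogue is an exact match, so the
# forecast continues the cycle:
lma_forecast([0, 1, 2, 3, 4] * 6, tau=1, dim=2, horizon=5)
```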
\vspace*{-1mm}
\section{When and why are nonlinear models better at predicting computer performance?}
\label{sec:prediction}
\subsection{Procedure}
\label{sec:procedure}
Using the methods described in Sections~\ref{sec:multilinear}
and~\ref{sec:nonlinear}, respectively, we built and evaluated linear
and nonlinear models of performance traces from the four programs
described on page~\pageref{page:programs} ({\tt 403.gcc}~, {\tt 482.sphinx}~, {\tt col\_major}~ and
{\tt row\_major}~), running on the computer described in Section~\ref{sec:methods}.
The procedure was as follows. We held back the last $k$ points of
\label{page:horizon}
each time series (referred to as ``the test signal,'' $c_i$).
We then constructed the model with the remaining portion of the time
series (``the learning signal'') and used the model to build a
prediction $\hat{p}_i$. We computed the root mean squared prediction
error between that prediction and the test signal in the usual way:
$$RMSE~ = \sqrt{\frac{\sum_{i=1}^k(c_i-\hat{p_i})^2}{k}}$$
To compare the results across signals with different units, we
normalized the RMSE~ as follows:
$$nRMSE~ = \frac{RMSE~}{max\{c_i\} - min\{c_i\}}$$
The smaller the nRMSE~, obviously, the more accurate the prediction.
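In code, the two formulas above reduce to a few lines (illustrative implementation):

```python
from math import sqrt

def nrmse(c, p):
    """Normalized root-mean-squared error between the test signal c
    and the prediction p, as defined above."""
    rmse = sqrt(sum((ci - pi) ** 2 for ci, pi in zip(c, p)) / len(c))
    return rmse / (max(c) - min(c))
```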
\vspace*{-1mm}
\subsection{Results and Discussion}
First, we compared linear and nonlinear models of the two SPEC
benchmark programs: the traces in the top row of
Figure~\ref{fig:ipctrace}. For {\tt 403.gcc}~, the nonlinear LMA model was
better than the linear MLR model (0.128 nRMSE~ versus 0.153). For
{\tt 482.sphinx}~, the situation was reversed: 0.137 nRMSE~ for LMA and 0.116 for
MLR. This was contrary to our expectations; we had anticipated that
the LMA models would work better because their ability to capture both
the gross and detailed structure of the trace would allow them to more
effectively track the regimes in the {\tt 482.sphinx}~ signal. Upon closer
examination, however, it appears that those regimes overlap in the IPC
range, which could negate that effectiveness. Moreover, this
head-to-head comparison is not really fair. Recall that MLR models
use {\sl multiple} measurements of the system---in this case, L2 total
cache misses, number of branches taken, and L2 instruction cache
misses---while LMA models are constructed from a {\sl single}
measurement (here, IPC). In view of this, the fact that LMA beats MLR
for {\tt 403.gcc}~ and is not too far behind it for {\tt 482.sphinx}~ is impressive,
particularly given the complexity of these programs. Finally, we
compared the linear and nonlinear model results to a simple ``predict
the mean'' strategy, which produces a 0.140 and 0.250 nRMSE~ for {\tt 403.gcc}~
and {\tt 482.sphinx}~, respectively---higher than either MLR or LMA.
In order to explore the relationship between code complexity and model
performance, we then built and tested linear and nonlinear models of
the {\tt row\_major}~ and {\tt col\_major}~ traces. The resulting nRMSE~ values for these
programs, shown in the third and fourth row of
Table~\ref{tbl:results}, were lower than for {\tt 403.gcc}~ and {\tt 482.sphinx}~,
supporting the intuition that simpler code has easier-to-predict
dynamics.
\begin{table}[htdp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline Program & Interrupt Rate (cycles) & LMA nRMSE~ & MLR nRMSE~ & naive nRMSE~ \\ \hline
{\tt 403.gcc}~ & 100,000 & 0.128&0.153&0.140 \\
{\tt 482.sphinx}~ & 100,000 & 0.137&0.116&0.250 \\
{\tt row\_major}~ & 100,000 & 0.063& 0.091& 0.078 \\
{\tt col\_major}~ & 100,000 &0.020 &0.032 & 0.045 \\ \hline
{\tt 403.gcc}~ & 1,000,000 & 0.196 &0.208&0.199 \\
{\tt 482.sphinx}~ & 1,000,000 & 0.137&0.187&0.462\\
{\tt row\_major}~ & 1,000,000 &0.057 & 0.129& 0.103\\
{\tt col\_major}~ & 1,000,000 &0.028 & 0.305& 0.312\\ \hline
\end{tabular}
\caption{Normalized root-mean-squared error between true and predicted
  signals for linear (MLR), nonlinear (LMA), and ``predict the mean''
  forecast strategies}
\label{tbl:results}
\end{center}
\end{table}
Note that the nonlinear modeling strategy was more accurate than MLR
for {\sl both} of these simple four-line matrix initialization loops.
The repetitive nature of these loops leaves its signature in their
dynamics: structure that is exposed by the embedding process. LMA
captures and uses that global structure---in effect, ``learning''
it---while MLR does not. Again, LMA's success here is even more
impressive in view of the fact that the linear models require more
information to construct. Finally, note that the LMA models beat the
naive strategy for both {\tt row\_major}~ and {\tt col\_major}~, but the linear MLR model did
not.
Another important issue in modeling is sample rate. We explored this
by changing the sampling rate of the traces while keeping the overall
length the same: i.e., by sampling the same runs of the same programs
at 1,000,000 instruction intervals, rather than every 100,000
instructions. This affected the accuracy of the different models in
different ways, depending on the trace involved. For {\tt 403.gcc}~, MLR was
still better than LMA, but not by as much. For {\tt 482.sphinx}~, the previous
result (MLR better than LMA) was reversed. For {\tt row\_major}~ and {\tt col\_major}~, the
previous relationship not only persisted, but strengthened. In both
of these traces, predictions made from MLR models were less accurate
than simply predicting the mean; LMA predictions were {\sl better}
than this naive strategy. See the bottom four rows of Table~1 for a
side-by-side comparison of these results to the more sparsely sampled
results described in the previous paragraphs.
To explore which model worked better as the prediction horizon was
extended, we changed that value (the $k$ in
Section~\ref{sec:procedure}) and plotted nRMSE~. In three of the four
traces---all but {\tt 482.sphinx}~---the nonlinear model held and even extended
its advantage as the prediction horizon lengthened; see
Figure~\ref{fig:horizon} for some representative plots.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{col1mfull}
\includegraphics[width=0.8\columnwidth]{sphink100Full}
\caption{nRMSE~ versus prediction horizon. Top: {\tt col\_major}~ Bottom: {\tt 482.sphinx}~}
\label{fig:horizon}
\end{figure}
The initial variability in these plots is an artifact of normalizing
over short signal lengths and should be disregarded. The vertical
discontinuities (e.g., at the 1300 and 2100 marks on the horizontal
axis of the {\tt col\_major}~ plot, as well as the 1600 and 3000 marks of {\tt 482.sphinx}~)
are also normalization artifacts\footnote{When the signal moves into a
heretofore unvisited regime, that causes the $max - min$ term in the
nRMSE~ denominator to jump.}. The sawtooth patterns in the top two
traces of the {\tt col\_major}~ plot are due to the cyclical nature of that loop's
dynamics. LMA captures this structure, while MLR and the naive
strategy do not, and thus produces a far better prediction.
\vspace*{-3mm}
\section{Conclusion}\label{sec:conclusion}
\vspace*{-2mm}
The experiments reported in this paper indicate that even a very basic
nonlinear model is generally more accurate than the state-of-the-art
linear model in the computer performance literature. This result is
even more striking because those linear models require far more
information to build than the nonlinear models do---and they cannot be
used to predict further than one timestep into the future. It is somewhat surprising that these linear models
work at all, actually, since many of the assumptions upon which they
rest are not satisfied in computer performance applications.
Nonlinear models that work in delay-coordinate embedding space may be
somewhat harder to construct, but they capture the structure of the
dynamics in a truly effective way.
It would be of obvious interest to apply some of the better linear
models that have been developed by the data analysis and modeling
communities to the problem of computer performance prediction. Even a
segmented or piecewise version of multiple linear
regression~\cite{segmented-regression,piece-regression}, for instance,
would likely do a better job at handling the nonlinearity of the
underlying system. The difficulty with that approach, of course, is
how to choose the breakpoints between the segments. And MLR is not
really designed to be a temporal predictor anyway; linear predictors
like the ones presented in~\cite{LinearPredTutorial} might be much
more effective. There are also regression-based {\sl nonlinear}
models that we could use, such as~\cite{ghkss}, as well as the many
other more-sophisticated models in the nonlinear dynamics
literature~\cite{casdagli-eubank92,weigend-book}. It might be useful
to develop nonlinear models that use sliding windows in the data in
order to adapt to regime shifts, but the choice of window size is an
obvious issue. Finally, nonlinear models that use multiple
probes---like MLR does---could be extremely useful, but the underlying
mathematics for that has not yet been developed.
\bibliographystyle{splncs03}
\section{Abstract}
\begin{center}
\begin{quote}
We present a method for calculating expectation values of operators
in terms of a corresponding c-function formalism
which is not the Wigner--Weyl position-momentum phase-space,
but another space.
Here,
the quantity representing
the quantum system
is the expectation value
of the displacement operator,
parametrized by the position and momentum displacements,
and
expectation values are evaluated
as classical integrals over these parameters.
The displacement operator is found to offer
a complete orthogonal basis for operators,
and some of its other properties are investigated.
Connection to the Wigner distribution and Weyl procedure are discussed
and examples are given.
\end{quote}
\end{center}
\maketitle{}
\begin{multicols}{2}
Wolfgang Schleich
is a master of quantum mechanics in phase-space,
and of physics in general.
He is always full of insight
and excels in finding simplicity.
We believe we have found
an interesting way to
do quantum mechanics in phase-space,
a field in which Prof.\ Schleich has made enormous contributions,
and tried investigating its source
to the bottom,
in the spirit of Prof.\ Schleich.
We are elated
to dedicate these results to him.
\section{Introduction}
In the early 1930's,
Wigner pioneered the phase-space formulation of quantum mechanics,
introducing the Wigner distribution \cite{Wigner}.
He did so in an attempt
to make quantum mechanics more classical (statistical mechanics) looking,
and indeed,
to calculate some quantum properties of gases.
Since then,
other phase-space formulations of quantum mechanics
were created.
Beyond the construction of a quantum theory of statistical mechanics,
the drive to create phase-space distributions had several motivations:
One was a fundamental aspect
--
trying to create new formulations of quantum mechanics
and studying the uncertainty principle;
another,
not too distant motivation
was the study of the classical--quantum interface;
still others sought mathematical,
as well as conceptual, simplicity.
Indeed,
Wolfgang Schleich is renowned for utilizing the Wigner distribution
to simplify physical problems and to give insight into their inner-workings
\cite{schleich}.
Independently of these developments,
there was a push to determine
which operators should be used in quantum mechanics
to describe systems.
That is,
starting with a system which we might describe classically by some quantity $A_{\subsc C}(q,p)$,
what is the quantum analog?
Due to the quantum commutation relations between the position and momentum operators,
it is far from obvious how to obtain the `quantum version' of $A_{\subsc C}(q,p)$.
Several answers were given,
including one by Weyl \cite{weyl}.
In the late 1940's,
Moyal realized the connection \cite{moyal} between Weyl's procedure and Wigner's distribution.
Namely,
that%
\footnote{We denote operators by boldface characters throughout.}%
$\trace{\boldsymbol\rho\mathbf A} = \iint \D q \D p \, \tilde P(q,p) \tilde A(q,p)$,
if $\tilde P$ is Wigner's distribution coming from $\boldsymbol\rho$
and $\mathbf A$ is the operator coming from $\tilde A$ via Weyl's procedure.
In the mid 1960's,
Cohen found the connection between all of the distributions and the operators,
and gave a way to generate arbitrary distributions \cite{Cohen66}.
We present the theory behind one such distribution,
the ambiguity distribution,
which is seldom used in quantum mechanics,
and present many of its properties
and the properties of its accompanying classical operator,
including the $\mathbf A \leftrightarrow A$ correspondence.
We also show that it has multiple advantages
over the widely-used distributions.
Royer \cite{royer} has shown that
the Wigner distribution \cite{Wigner,scully,schleich}
is the expectation value of
a seed operator%
\footnote{
The semicolon (;) in the exponent in Eq.\ \eqref{eq:05-37}
means that the entire exponential operator is ordered
in the sense that \cite{englert,schwinger}
\begin{align}
\exp[\mathbf A;\mathbf B]
=
\sum_{n=0}^\infty
\frac1{n!}
\mathbf A^n \mathbf B^n
\ne
\sum_{n=0}^\infty
\frac1{n!}
(\mathbf A \mathbf B)^n
.
\label{eq:05-47}
\end{align}
}
$\mathbf W(q,p) = 2\exp[2i(\mathbf p-p);(\mathbf q-q)/\hbar]/\hbar$,
also called the ``displaced parity operator,''
and Englert \cite{englert} has found the above elegant expression
for $\mathbf W$
in terms of operator-ordered exponentials.
We \cite{ben} have reviewed these expressions for $\mathbf W$
and found some additional ones,
and showed how operator-ordering emerges naturally from the seed operator.
It is in general interesting to see how such distributions come about
\cite{ben,chaturvedi}.
Now we find that the displacement operator,
which we denote by $\mathbf\Theta$,
is another such seed operator;
not for Wigner functions,
rather for their characteristic function,
also known as the ambiguity function.
We thus call $\mathbf\Theta$ the ambiguity seed-operator.
This operator has some attractive properties,
such as operator orthogonality [Eq.\ \eqref{eq:01-14}]
and operator completeness [Eq.\ \eqref{eq:02-15}],
and could be used for phase-space quantum mechanics [Eq.\ \eqref{eq:01-13}].
As we show in Sec.\ \ref{sec:Wigner},
it happens that $\mathbf\Theta$
is intimately related to the Wigner seed operator $\mathbf W$.
This formalism
could be used for mathematical manipulations
of operators,
and it was used in \cite{Unruh}
for studying the decoherence
of a harmonic oscillator in a heat bath,
and in \cite{Lin1,Lin2},
for studying fundamental issues
in relativistic quantum field theory.
Already in \cite{amb1}
it was suggested that the ambiguity function
could be used for quantum mechanics,
but as far as we know,
the formalism
which we present in the next section is new.
Compared to the Wigner distribution,
our new formalism has some advantages
and some disadvantages.
In our formalism,
calculation of the expectation value
of (polynomial) operators
does not involve any integration
-- just derivatives and multiplication
(as in Eq.\ \eqref{eq:05-48a}).
This is a clear advantage over the Wigner distribution.
Also,
in some cases,
e.g.\ in \cite{Unruh},
the distribution we present is easier to evolve in time.
The Wigner c-number corresponding to a Hermitian operator is real,
while in our formalism it need not be.
This may or may not be advantageous
--
for example
in some cases,
obtaining $A(\eta,\xi)$ from $\mathbf A$
might be easier than obtaining $\widetilde A$ from $\mathbf A$.
Like the Wigner distribution,
the ambiguity function transforms nicely under canonical transformations.
In contrast with the Wigner distribution,
the ambiguity distribution does not satisfy the marginals.
\section{c-numbers for use in quantum mechanics}
We present a method for obtaining the c-number distribution
(ambiguity function) $A(\eta,\xi)$
for any arbitrary operator $\mathbf A$,
and show how it can be used
for calculating
quantum mechanical
expectation values.
Like the Wigner function \cite{Wigner,scully,schleich,Cohen},
this c-number distribution
is also obtained from the expectation value of some seed operator,
$\mathbf\Theta(\eta,\xi)$,
the ambiguity seed operator.
Starting with an arbitrary operator $\mathbf A$,
we define the c-number
\begin{align}
A(\eta,\xi)
=
\trace
{
\mathbf A
\mathbf \Theta(\eta,\xi)
}
,
\label{eq:01-11}
\end{align}
from which the operator $\mathbf A$ could be recovered by%
\footnote{All integrations range from $-\infty$ to $\infty$ unless otherwise specified.}
\begin{align}
\mathbf A
&=
\iint \D\eta\D\xi \,
A(\eta,\xi)
\mathbf \Theta(-\eta,-\xi)
,
\label{eq:01-10}
\end{align}
where the operator $\mathbf\Theta(\eta,\xi)$ is
\begin{align}
\mathbf \Theta(\eta,\xi)
=
\
\frac{ e^{i(\eta\mathbf q+\xi\mathbf p)/\hbar} }{ \sqrt{2\pi\hbar} }
,
\label{eq:03-22}
\end{align}
which is known as the displacement operator
(divided by $\sqrt{2\pi\hbar}$).
As we show in Sec.\ \ref{sec:Wigner},
the c-number $A(\eta,\xi)$ in Eq.\ \eqref{eq:01-11}
is the double Fourier transform,
or characteristic function,
of the Wigner function $\widetilde A(q,p)$
corresponding to the operator $\mathbf A$,
which is also known as the ambiguity function in the field of radar \cite{radar1,radar2}.
We note that $\eta$ and $\xi$ have dimensions of momemtum and position,
respectively.
We may use our definition in Eq.\ \eqref{eq:01-11}
for the c-number functions
in order to compute quantum mechanical expectation values,
or more generally,
traces of operator products.
We find that the trace of
the product of two arbitrary operators,
$\mathbf A$ and $\mathbf B$,
could be computed by
\begin{align}
\trace{\mathbf A\mathbf B}
=
\iint \D\eta\D\xi \,
A(\eta,\xi)
B(-\eta,-\xi)
.
\label{eq:01-13}
\end{align}
The expectation value of $\mathbf A$
is obtained when $\mathbf B$ is the density matrix $\boldsymbol\rho$.
As we show in Apps.\ \ref{sec:12} and \ref{sec:3},
Eqs.\ \eqref{eq:01-11}, \eqref{eq:01-10}, and \eqref{eq:01-13}
are consequences of
the facts that the trace
\begin{align}
\trace{
\mathbf\Theta(\eta,\xi)
\mathbf\Theta(-\eta',-\xi')
}
=
\delta(\eta-\eta')
\delta(\xi -\xi' )
,
\label{eq:01-14}
\end{align}
and that
\begin{align}
\iint \D\eta\D\xi \,
&
\expect
{k'}
{ \mathbf\Theta(\eta,\xi) }
{x'}
\expect
{x}
{ \mathbf\Theta(-\eta',-\xi') }
{k}
\nonumber
\\&=
\delta(x-x')
\delta(k-k')
,
\label{eq:02-15}
\end{align}
which we derive in App.\ \ref{sec:12}.
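Although the derivations in the appendices concern the continuous case, the pair of properties \eqref{eq:01-14} and \eqref{eq:02-15} (operator orthogonality plus completeness, and hence the expansion \eqref{eq:01-10}) has an exact finite-dimensional toy analog in which the displacement operators are replaced by products of clock-and-shift matrices. The sketch below is our illustration, not part of the formalism itself:

```python
import numpy as np

n = 4
a = np.exp(2j * np.pi / n)
D = np.diag(a ** np.arange(n))                 # clock matrix
S = np.roll(np.eye(n), 1, axis=1)              # shift matrix
mp = np.linalg.matrix_power
U = {(k, j): mp(D, k) @ mp(S, j) for k in range(n) for j in range(n)}

# orthogonality: Tr{U_kj U_k'j'^dagger} = n delta_kk' delta_jj'
for k, j in U:
    for kp, jp in U:
        tr = np.trace(U[k, j] @ U[kp, jp].conj().T)
        expected = n if (k, j) == (kp, jp) else 0
        assert abs(tr - expected) < 1e-9

# completeness: any matrix M is recovered from its "ambiguity coefficients"
rng = np.random.default_rng(7)
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
c = {kj: np.trace(M @ U[kj].conj().T) / n for kj in U}
R = sum(c[kj] * U[kj] for kj in U)
assert np.allclose(R, M)
```

The reconstruction in the last lines plays the role of Eq.\ \eqref{eq:01-10}, with the trace coefficients playing the role of $A(\eta,\xi)$.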
\section{Time evolution}
\label{sec:time}
We may use this formulation to evolve the quantum state
$P(\eta,\xi)=\mbox{Tr}\{\boldsymbol\rho\mathbf\Theta\}$
in time
using the Schr\"odinger (von Neumann) equation,
or to evolve an arbitrary operator
$A(\eta,\xi)=\mbox{Tr}\{\mathbf A\mathbf\Theta\}$
in time
using the Heisenberg equation.
In particular,
we give a prescription
purely in terms of ambiguity quantities.
\begin{align}
\pderiv t{}
P(\eta,\xi,t)
&=
\frac 2\hbar
\iint \frac{\D\eta' \D\xi'}{ \sqrt{2\pi\hbar} }
\sin \frac{\eta'\xi-\eta\xi'}{2\hbar}
\nonumber
\\&\times
H \left( \Big. \frac\eta2+\eta', \frac\xi2+\xi' \right)
P \left( \Big. \frac\eta2-\eta', \frac\xi2-\xi', t \right)
\\&=
\frac2\hbar
\iint \frac{\D\eta' \D\xi'}{ \sqrt{2\pi\hbar} }
\sin \frac{\eta'\xi-\eta\xi'}{2\hbar}
\nonumber
\\&\times
H (\eta',\xi')
P (\eta-\eta',\xi-\xi',t)
,
\label{eq:08-49}
\end{align}
for the quantum state.
To evolve an arbitrary operator,
$A(\eta,\xi)$,
in time,
take $t \longrightarrow -t$
and replace $P$ by $A$
in Eq.\ \eqref{eq:08-49}.
For example,
(using Eqs.\ \eqref{eq:08-49} and \eqref{eq:05-43})
time evolution under the constant force Hamiltonian is
\begin{align}
\pderiv t{}
P(\eta,\xi,t)
&=
\left( \Big.
-\frac\eta m
\pderiv \xi{}
+
i\frac F\hbar
\xi
\right)
P(\eta,\xi,t)
,
\label{eq:14-60}
\end{align}
which could also be obtained from
\begin{align}
i\hbar \pderiv t{}
\trace{\boldsymbol\rho\mathbf\Theta}
=
\trace
{
\left( \Big.
\mathbf H \boldsymbol\rho
-
\boldsymbol\rho \mathbf H
\right)
\mathbf\Theta
}
.
\end{align}
The solution to Eq.\ \eqref{eq:14-60} is
\begin{align}
P(\eta,\xi,t)
&=
e^{ imF\xi^2/2\hbar\eta }
e^{-imF(\xi-\eta t/m)^2/2\hbar\eta }
P \left( \Big. \eta, \xi-\frac\eta m t, 0 \right)
\\&=
e^{ i Ft (2\xi-\eta t/m) /2\hbar }
P \left( \Big. \eta, \xi-\frac\eta m t, 0 \right)
.
\end{align}
That is,
the ambiguity function of the quantum state
is evolved to a time $t$
by a simple substitution
in its $t=0$ arguments
and multiplication by a phase.
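This solution can be verified symbolically against Eq.\ \eqref{eq:14-60}; a minimal sketch with sympy (taking an arbitrary Gaussian initial profile for $P(\eta,\xi,0)$, which is our choice, and writing the evolved arguments explicitly as $P(\eta,\xi-\eta t/m,0)$):

```python
import sympy as sp

eta, xi, t = sp.symbols('eta xi t', real=True)
m, F, hbar = sp.symbols('m F hbar', positive=True)

P0 = lambda e, x: sp.exp(-(e - 1)**2 - (x + 2)**2)   # arbitrary smooth initial data
phase = sp.exp(sp.I * F * t * (2 * xi - eta * t / m) / (2 * hbar))
P = phase * P0(eta, xi - eta * t / m)

lhs = sp.diff(P, t)
rhs = -eta / m * sp.diff(P, xi) + sp.I * F / hbar * xi * P
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```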
In Ref.\ \cite{Unruh},
the time evolution enters through the $\mathbf\Theta$ operator.
Particularly,
$\mathbf\Theta(\eta,\xi,t)$ is found by using the time-evolution of the position and momentum operators.
Using the formula
for the derivative of the exponential of an operator
\begin{align}
\deriv t{}
e^{\mathbf B(t)}
=
\int_0^1 \D\lambda \,
e^{\lambda\mathbf B(t)}
\Deriv{\mathbf B(t)}t{}
e^{(1-\lambda)\mathbf B(t)}
,
\end{align}
it was found that
\begin{align}
\deriv t{}
P(\eta,\xi)
&=
\frac {i/\hbar}{ \sqrt{2\pi\hbar} }
\mbox{Tr}
\Big\{
\boldsymbol\rho
\int_0^1 \D\lambda \,
e^{i\lambda[\eta\mathbf q(t)+\xi\mathbf p(t)]/\hbar}
\nonumber
\\&
\left[ \Big.
\eta \Deriv{\mathbf q(t)}t{}
+
\xi \Deriv{\mathbf p(t)}t{}
\right]
e^{i(1-\lambda)[\eta\mathbf q(t)+\xi\mathbf p(t)]/\hbar}
\Big\}
.
\end{align}
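The derivative-of-the-exponential formula used above is an exact operator identity, so it can be spot-checked numerically for a matrix family $\mathbf B(t)=B_0+tB_1$ with noncommuting $B_0,B_1$. In the sketch below, the finite-difference step, the quadrature rule, and the random matrices are our choices:

```python
import numpy as np

def expm(M, terms=60):
    # matrix exponential by its (rapidly converging) power series
    out = np.zeros_like(M, dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(terms):
        out += term
        term = term @ M / (k + 1)
    return out

rng = np.random.default_rng(0)
B0, B1 = rng.standard_normal((2, 2, 2))        # B(t) = B0 + t*B1, noncommuting
B = lambda t: B0 + t * B1

t0, h = 0.7, 1e-5
lhs = (expm(B(t0 + h)) - expm(B(t0 - h))) / (2 * h)   # numerical d/dt e^{B(t)}

lam, w = np.polynomial.legendre.leggauss(40)   # quadrature for the lambda integral
lam, w = 0.5 * (lam + 1), 0.5 * w              # map [-1, 1] -> [0, 1]
rhs = sum(wi * expm(li * B(t0)) @ B1 @ expm((1 - li) * B(t0))
          for li, wi in zip(lam, w))

assert np.allclose(lhs, rhs, atol=1e-6)
```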
\section{Connection to Wigner functions and to the Weyl procedure}
\label{sec:Wigner}
The Weyl procedure
is a procedure introduced by Weyl
for obtaining quantum operators $\mathbf A$
from c-numbers $A_{\subsc W}$
(Moyal showed that $A_{\subsc W}$ is the Wigner function $\tilde A$ corresponding to $\mathbf A$ \cite{moyal}).
Weyl was interested in determining
what quantum operators one should use
given a classical analog%
\footnote{
There is also interest in the inverse procedure
--
obtaining
the classical function $\widetilde A(q,p)$
from the operator $\mathbf A$
\cite{invWeyl2,epsj}.
}.
He proposed \cite{weyl,Cohen}
\begin{align}
\mathbf A
=
\iint \D\eta\D\xi \,
\mathbf\Theta(-\eta,-\xi)
\iint \frac{\D q \D p}{2\pi\hbar} \,
\widetilde A(q,p)
\frac
{ e^{ i(\eta q + \xi p)/\hbar} }
{ \sqrt{2\pi\hbar} }
.
\label{eq:03-23}
\end{align}
Therefore,
comparing Eqs.\ \eqref{eq:01-10} and \eqref{eq:03-23},
we find that
the c-number $A$ which we defined in Eq.\ \eqref{eq:01-11}
is the double Fourier transform of the Wigner function $\widetilde A$ of $\mathbf A$.
An interesting connection also exists between the seed operator for the ambiguity function,
$\mathbf\Theta(\eta,\xi)$
and the seed operator for the Wigner function $\mathbf W(q,p)$.
The Wigner function $\widetilde A$ is obtained from $\mathbf A$ via
\cite{englert,ben}
\begin{align}
\widetilde A(q,p)
=
\trace{\mathbf A \mathbf W(q,p)}
,
\end{align}
and $\mathbf\Theta(\eta,\xi)$ is connected to $\mathbf W(q,p)$
by Fourier transform
(see App.\ \ref{sec:Fourier})
\begin{align}
\mathbf W(q,p)
=
\frac1{ \sqrt{2\pi\hbar} }
\iint \D\eta\D\xi \,
e^{-i(\eta q + \xi p)/\hbar}
\mathbf\Theta(\eta,\xi)
.
\label{eq:03-25}
\end{align}
So we see that it is no accident that
$A(\eta,\xi)$ and $\widetilde A(q,p)$ are also related by a Fourier transform.
Another interesting connection could be found
when calculating $A$ and $\widetilde A$ from $\mathbf A$.
In the position representation,
the ambiguity function is
\begin{align}
A(\eta,\xi)
=
\int \D q \,
e^{-i\eta q/\hbar}
\expect
{q-\xi/2}
{\mathbf A}
{q+\xi/2}
,
\end{align}
while the Wigner--Weyl function is
\begin{align}
\widetilde A(q,p)
=
\int \D \xi \,
e^{-ip \xi/\hbar}
\expect
{q-\xi/2}
{\mathbf A}
{q+\xi/2}
.
\end{align}
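For a centered Gaussian, where the phase conventions drop out, the position-representation formula for the ambiguity function can be checked symbolically. A sketch (the state and its width $\Delta$ are our choice):

```python
import sympy as sp

q, eta, xi = sp.symbols('q eta xi', real=True)
hbar, Delta = sp.symbols('hbar Delta', positive=True)

# normalized Gaussian wavefunction centered at the origin
psi = lambda y: (sp.pi * Delta) ** sp.Rational(-1, 4) * sp.exp(-y**2 / (2 * Delta))

# ambiguity function via the position-representation formula
A = sp.integrate(sp.exp(-sp.I * eta * q / hbar) * psi(q - xi/2) * psi(q + xi/2),
                 (q, -sp.oo, sp.oo))
expected = sp.exp(-eta**2 * Delta / (4 * hbar**2) - xi**2 / (4 * Delta))
assert sp.simplify(A - expected) == 0
```

The result is itself a Gaussian in $\eta$ and $\xi$, in agreement with the Gaussian-state example below.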
\section{Examples}
\label{sec:exmp}
\bigskip
\noindent%
\textbf{Ex: Position states.}
Because $A(\eta,\xi)$ is complex,
it describes the position state $\mathbf A=\Outer xx$ as
a function with dependence on both $\eta$ and $\xi$,
\begin{align}
A(\eta,\xi)
=
\delta(\xi)
e^{i\eta x/\hbar}
,
\label{eq:05-34}
\end{align}
in contrast to the Wigner function of $\mathbf A$,
\begin{align}
\widetilde A(q,p)
=
\delta(q-x)
,
\end{align}
which has no momentum dependence.
The superposition of two positions
$\ket\psi=\alpha\ket{x_1}+\beta\ket{x_2}$
is
\begin{align}
A
&
(\eta,\xi)
=
\abs{\alpha}^2
e^{i\eta x_1/\hbar}
\delta(\xi)
+
\alpha\beta^*
e^{i\eta x_2/\hbar}
\delta(x_1-x_2-\xi)
\nonumber
\\&+
\beta\alpha^*
e^{i\eta x_1/\hbar}
\delta(x_2-x_1-\xi)
+
\abs{\beta}^2
e^{i\eta x_2/\hbar}
\delta(\xi)
\\&=
\abs{\alpha}^2
e^{i\eta x_1/\hbar}
\delta(\xi)
+
\abs{\beta}^2
e^{i\eta x_2/\hbar}
\delta(\xi)
+
e^{i\eta(x_1+x_2-\xi)/2\hbar}
\nonumber
\\&\times
\left( \Big.
\alpha\beta^*
\delta(x_1-x_2-\xi)
+
\beta\alpha^*
\delta(x_2-x_1-\xi)
\right)
.
\end{align}
\bigskip
\noindent%
\textbf{Ex: Gaussian state.}
The density matrix of the Gaussian state wavefunction
peaked about the position $q=x$,
and having average momentum $k$,
\begin{align}
\psi(q)
=
\left( \Big.
\frac1{\pi\Delta}
\right)^{1/4}
\exp
\left[ \Big.
-\frac{(q-x)^2}{2\Delta}
-
iq k/\hbar
\right]
,
\end{align}
is
\begin{align}
\boldsymbol\rho
&=
\sqrt2
\exp \left[ \Big. \frac{(\mathbf q-x)^2}{-2\Delta} \right]
e^{ (\mathbf q-x);(\mathbf p-k)/i\hbar }
\exp \left[ \Big. \frac{(\mathbf p-k)^2}{-2\hbar^2/\Delta} \right]
,
\label{eq:05-37}
\end{align}
corresponding to the ambiguity function
\begin{align}
P(\eta,\xi)
=
\frac
{ e^{ i\eta x/\hbar + i\xi k/\hbar } }
{ \sqrt{2\pi\hbar} }
\exp
\left[ \Big.
-\frac{\eta^2}{4\hbar^2/\Delta}
-\frac{\xi^2 }{4\Delta}
\right]
,
\end{align}
and to the Wigner function
\begin{align}
\widetilde P(q,p)
=
\frac1{\pi\hbar}
\exp
\left[ \Big.
-\frac{(q-x)^2}{\Delta}
-\frac{(p-k)^2}{\hbar^2/\Delta}
\right]
,
\end{align}
where the semicolon (;) in the exponent
is Schwinger ordered-exponential notation,
as in Eq.\ \eqref{eq:05-47}.
\bigskip
\noindent%
\textbf{Ex: Constant force Hamiltonian.}
The constant force Hamiltonian
\begin{align}
\mathbf H
=
\frac{\mathbf p^2}{2m}
-
F\mathbf q
,
\end{align}
corresponds to the
Wigner function
\begin{align}
\widetilde H(q,p)
=
\frac{ p^2}{2m}
-
F q
.
\end{align}
To find the ambiguity function
associated with this Hamiltonian,
we use that
\begin{equation}
\begin{aligned}
\trace{\mathbf q\mathbf A\mathbf\Theta(\eta,\xi)}
&=
\left( \Big.
\frac \hbar i
\pderiv\eta{}
+
\frac\xi2
\right)
A(\eta,\xi)
\\
\trace{\mathbf p\mathbf A\mathbf\Theta(\eta,\xi)}
&=
\left( \Big.
\frac \hbar i
\pderiv\xi{}
-
\frac\eta2
\right)
A(\eta,\xi)
,
\label{eq:04-30}
\end{aligned}
\end{equation}
and that
\begin{equation}
\begin{aligned}
\trace{\mathbf q\mathbf A\mathbf\Theta(-\eta,-\xi)}
&=
-\left( \Big.
\frac \hbar i
\pderiv\eta{}
+
\frac\xi2
\right)
A(-\eta,-\xi)
\\
\trace{\mathbf p\mathbf A\mathbf\Theta(-\eta,-\xi)}
&=
-\left( \Big.
\frac \hbar i
\pderiv\xi{}
-
\frac\eta2
\right)
A(-\eta,-\xi)
,
\end{aligned}
\end{equation}
which are like Bopp operators \cite{Bopp},
which could be used for
obtaining Wigner functions from operators
in a simple way \cite{ben}.
Using Eqs.\ \eqref{eq:04-30},
the ambiguity function $H(\eta,\xi)$ is
\begin{align}
&
H
(\eta,\xi)
=
\sqrt{2\pi\hbar} \,
\mathbf H
\left( \Big.
\frac \hbar i
\pderiv\eta{}
+
\frac\xi2
,
\frac \hbar i
\pderiv\xi{}
-
\frac\eta2
\right)
\delta(\eta)
\delta(\xi)
\\&= \!
\sqrt{2\pi\hbar}
\left[ \Big.
\frac1{2m}
\left( \Big.
\frac\hbar i
\pderiv \xi{}
-
\frac\eta2
\right)^2 \! \!
-
F
\left( \Big.
\frac\hbar i
\pderiv \eta{}
+
\frac\xi2
\right)
\right] \! \!
\delta(\eta)
\delta(\xi)
\\&= \!
-\sqrt{2\pi\hbar}
\left[ \Big.
\frac{\hbar^2}{2m}
\pderiv \xi2
+
i\hbar F
\pderiv \eta{}
\right] \! \!
\delta(\eta)
\delta(\xi)
,
\label{eq:05-43}
\end{align}
and the expectation value of the Hamiltonian
is calculated via Eq.\ \eqref{eq:01-13}
\begin{align}
&
\ave{\mathbf H}
=
\iint \D\eta\D\xi \,
P(\eta,\xi)
H(-\eta,-\xi)
\\&=
-\sqrt{2\pi\hbar}
\iint \D\eta\D\xi \,
P(\eta,\xi)
\left[ \Big.
\frac{\hbar^2}{2m}
\pderiv \xi2
-
i\hbar F
\pderiv \eta{}
\right] \! \!
\delta(\eta)
\delta(\xi)
\\&=
-\sqrt{2\pi\hbar}
\left[ \Big.
\frac{\hbar^2}{2m}
\pderiv \xi2
+
i\hbar F
\pderiv \eta{}
\right] \! \!
P(\eta=0,\xi=0)
,
\label{eq:05-48a}
\end{align}
where in the last equality
in Eq.\ \eqref{eq:05-48a},
the expression is evaluated at $\eta=\xi=0$,
and
$P(\eta,\xi)$
is the ambiguity function
which is obtained from the state's density matrix
via Eq.\ \eqref{eq:01-11}.
Interestingly,
it is the value of the
operated-on quantum state $P(\eta,\xi)$
at zero displacement ($\eta=\xi=0$)
that gives the expectation value of the operator.
Amazingly,
no integrals are required;
this expression involves only derivatives
and multiplication.
This is generally the case for operators which are polynomial
in $\mathbf q$ and $\mathbf p$.
\section{Properties of $\mathbf\Theta(\eta,\xi)$}
\label{sec:prop}
We now discuss some properties of the ambiguity seed operator.
The ambiguity seed operator
has the property that
its form is unchanged
under a canonical transformation
\begin{align}
\mathbf q
=
\alpha \mathbf Q
+
\beta \mathbf P
,\qquad
\mathbf p
=
\gamma \mathbf Q
+
\delta \mathbf P
,
\end{align}
that is to say,
the new parameters become
\begin{align}
\eta
\longrightarrow
\alpha\eta
+
\gamma\xi
,\quad
\xi
\longrightarrow
\beta\eta
+
\delta\xi
.
\label{eq:04-27}
\end{align}
Specifically,
\begin{align}
\mathbf\Theta_{\mathbf q,\mathbf p}(\eta,\xi)
=
\mathbf\Theta_{\mathbf Q,\mathbf P}
( \alpha\eta+\gamma\xi, \beta\eta+\delta\xi )
,
\label{eq:05-48b}
\end{align}
where $\mathbf\Theta_{\mathbf Q,\mathbf P}(\eta,\xi)$
is the operator
$\exp[i(\eta\mathbf Q+\xi\mathbf P)/\hbar]/\sqrt{2\pi\hbar}$.
If the Jacobian $J$
of the transformation in Eq.\ \eqref{eq:04-27} is unity,
that is,
$J=\alpha\delta-\beta\gamma=1$,
then
the commutators coincide,
$\commute{\mathbf q}{\mathbf p}=J[\mathbf Q,\mathbf P]=[\mathbf Q,\mathbf P]$
(which makes the transformation canonical),
and
we have the symmetry that
\begin{align}
\mathbf A(\mathbf Q,\mathbf P)
&=
\iint \D\eta\D\xi \,
A(\eta,\xi)
\mathbf\Theta_{\mathbf Q,\mathbf P}(\eta,\xi)
\\&=
\iint \frac{\D\eta\D\xi}{ \sqrt{2\pi\hbar} } \,
A( \alpha\eta+\gamma\xi, \beta\eta+\delta\xi )
\nonumber
\\&\times
\mathbf\Theta_{\mathbf Q,\mathbf P}
( \alpha\eta+\gamma\xi, \beta\eta+\delta\xi )
\\&=
\iint \D\eta\D\xi \,
A( \alpha\eta+\gamma\xi, \beta\eta+\delta\xi )
\mathbf\Theta_{\mathbf q,\mathbf p}(\eta,\xi)
,
\end{align}
where we have used Eqs.\ \eqref{eq:01-10} and \eqref{eq:05-48b},
and $\eta$ and $\xi$ instead of $-\eta$ and $-\xi$.
Using Eq.\ \eqref{eq:01-14},
we find that
\begin{align}
\trace
{
\mathbf A(\mathbf Q,\mathbf P)
\mathbf\Theta_{\mathbf q,\mathbf p}(-\eta,-\xi)
}
=
A_{\mathbf q,\mathbf p}( \alpha\eta+\gamma\xi, \beta\eta+\delta\xi )
,
\end{align}
which means that
if we calculate
$A_{\mathbf q, \mathbf p}(\eta,\xi)$
corresponding to some operator
$\mathbf A(\mathbf q,\mathbf p)$,
then we can obtain the c-number
$A_{\mathbf Q, \mathbf P}(\eta,\xi)$
corresponding to
$\mathbf A(\mathbf Q,\mathbf P)$
via the simple substitution,
Eq.\ \eqref{eq:04-27}
\begin{align}
A_{\mathbf Q, \mathbf P}(\eta,\xi)
=
A_{\mathbf q,\mathbf p}( \alpha\eta+\gamma\xi, \beta\eta+\delta\xi )
.
\end{align}
The trace of $\mathbf\Theta(\eta,\xi)$ is
\begin{align}
\trace{\mathbf\Theta(\eta,\xi)}
=
\sqrt{2\pi\hbar} \,
\delta(\eta)
\delta(\xi)
,
\end{align}
and its integral is
\begin{align}
\iint \D\eta\D\xi \,
\mathbf\Theta(\eta,\xi)
=
\sqrt{2\pi\hbar} \,
2 e^{-2i\mathbf q;\mathbf p/\hbar}
,
\end{align}
which is the parity operator \cite{englert},
where the semicolon (;) in the exponent
has the same meaning as it does in Eq.\ \eqref{eq:05-37}
[Eq.\ \eqref{eq:05-47}].
This is not surprising,
because it offers another connection to the Wigner distribution,
which is the expectation value of the \emph{displaced} parity operator \cite{royer}.
Integrating the ambiguity seed over one variable,
we get
\begin{equation}
\begin{aligned}
\int \frac{\D\eta}{ \sqrt{2\pi\hbar} }
\mathbf\Theta(\eta,\xi)
&=
\Outer{q=-\frac\xi2}{q= \frac\xi2}
\\
\int \frac{\D\xi}{ \sqrt{2\pi\hbar} }
\mathbf\Theta(\eta,\xi)
&=
\Outer{p= \frac\eta2}{p=-\frac\eta2}
,
\end{aligned}
\end{equation}
which means that
\begin{equation}
\begin{aligned}
\int \frac{\D\eta}{ \sqrt{2\pi\hbar} }
A(\eta,\xi)
&=
\expect
{q= \frac\xi2}
{\mathbf A}
{q=-\frac\xi2}
\\
\int \frac{\D\xi}{ \sqrt{2\pi\hbar} }
A(\eta,\xi)
&=
\expect
{p=-\frac\eta2}
{\mathbf A}
{p= \frac\eta2}
.
\end{aligned}
\end{equation}
\noindent%
\textbf{Composition rule.}
Since
the ambiguity seed operators
are phase-space displacement operators,
we would expect that they could be combined
into a different displacement.
This is true,
however,
up to a phase
\begin{align}
\mathbf\Theta(\eta_1,\xi_1)
\mathbf\Theta(\eta_2,\xi_2)
=
\frac{ e^{ i(\eta_1\xi_2-\eta_2\xi_1)/2\hbar } }{ \sqrt{2\pi\hbar} }
\mathbf\Theta(\eta_1+\eta_2,\xi_1+\xi_2)
,
\end{align}
which comes from the
Campbell-Baker-Hausdorff relation.
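A finite-dimensional analog of this composition rule is provided by clock-and-shift matrices ($D$ diagonal with the $n$-th roots of unity, $S$ the cyclic shift, so that $SD=aDS$): products of the "displacements" $D^kS^j$ also combine up to a phase. This analog is our illustration, not the continuous $\mathbf\Theta$ itself:

```python
import numpy as np

n = 7
a = np.exp(2j * np.pi / n)
D = np.diag(a ** np.arange(n))                 # clock matrix
S = np.roll(np.eye(n), 1, axis=1)              # shift matrix, S D = a D S
mp = np.linalg.matrix_power

for (k, j), (kp, jp) in [((2, 1), (1, 3)), ((3, 2), (2, 2))]:
    lhs = (mp(D, k) @ mp(S, j)) @ (mp(D, kp) @ mp(S, jp))
    rhs = a ** (j * kp) * mp(D, k + kp) @ mp(S, j + jp)   # combine up to a phase
    assert np.allclose(lhs, rhs)
```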
However,
when actually displacing an operator%
\footnote{
The `$f$' is boldface,
indicating that operator-ordering is important.
}
$\mathbf f(\mathbf q, \mathbf p)$,
the phase cancels
\begin{align}
\mathbf\Theta(\eta,\xi)
&
\mathbf f(\mathbf q, \mathbf p)
\mathbf\Theta^\dagger(\eta,\xi)
\nonumber
\\&=
\mathbf\Theta(\eta_1,\xi_1)
\mathbf\Theta(\eta_2,\xi_2)
\mathbf f(\mathbf q, \mathbf p)
\mathbf\Theta^\dagger(\eta_2,\xi_2)
\mathbf\Theta^\dagger(\eta_1,\xi_1)
\\&=
\mathbf f(\mathbf q+\xi, \mathbf p+\eta)
.
\end{align}
\section{Conclusions}
We have presented a formalism
for obtaining c-function distributions
corresponding to quantum states
and to quantum operators,
and showed how to use them
for calculation of expectation values.
Surprisingly,
in contrast with the usual phase-space formalisms
(Wigner, Kirkwood--Rihaczek, etc.),
expectation values of operators
do not involve any integration
--
only derivatives.
These distributions were shown to possess attractive features,
such as symmetries under canonical quantum transformations.
There are prospects for generalizing this treatment
using ideas from Refs.\ \cite{Cohen66,CohenAmb}.
\section{Acknowledgements}
This paper is warmly dedicated to Wolfgang Schleich.
The authors would like to thank the organizers of this special issue
for allowing them to participate,
and would like to thank
Profs.\ M.\ Scully and L.\ Cohen for their valuable comments and suggestions.
In addition,
JSB
would like to thank
the Robert A.\ Welch Foundation (Grant No.\ A-1261),
the Office of Naval Research (Award No.\ N00014-16-1-3054),
and
the Air Force Office of Scientific Research (FA9550-18-1-0141)
for their support,
and
WGU would like to thank
NSERC Canada (Natural Science and Engineering Research Council),
the Hagler Fellowship from HIAS (Hagler Institute for Advanced Studies),
Texas A\&M University,
CIFAR,
and the Humboldt Foundation
for support.
Dealing with a given Lie algebra $\fg$ and modules over it,
especially when $q$-quantizing, we need a convenient {\it
presentation} of $\fg$, i.e., a description in terms of generators
and defining relations. Obviously, the basis elements qualify as
generators, but there are too many of them. It is well-known
\cite{GL2} that
\begin{equation}\label{max}
\begin{array}{l}
\text{{\sl For any nilpotent Lie algebra $\fn$, the natural set
of relations is}}\\
\text{{\sl a basis of $\fn/[\fn, \fn]=H_1(\fn)$; relations
between these generators}}\\
\text{{\sl can be described in terms of the basis of $H_2(\fn)$.
}}
\end{array}
\end{equation}
A simple Lie (super)algebra $\fg$ (finite dimensional, Kac-Moody
or of polynomial vector fields) is conventionally split into the sum
$\fg=\fn_-\oplus\fh\oplus\fn_+$ of two maximal
nilpotent subalgebras $\fn_{\pm}$ (positive and negative) and the
commutative Cartan subalgebra; the corresponding generators are
called {\it Chevalley generators}; the relations between them are
also known, cf. \cite{GL3}, \cite{GLP}. They are numerous ($3n$
generators for a $\rank \,\, n$ algebra and $\sim n^2$ relations), but
these relations are simple and therefore convenient.
For comparison: for the simplest case, $\fgl(n)$, the matrix units
are obvious generators, and the relations between them are simple,
but far too numerous ($n^2$ generators and $\sim n^4$
relations).
Jacobson was, perhaps, the first to observe that every
simple finite dimensional Lie algebra can be generated by just a {\it pair} of
generators, but he did not specify his pairs, and so the relations between them could not be discussed.
Grozman and Leites \cite{GL1} introduced a
pair of generators associated with the
principal embedding of $\fsl(2)$, and the relations between them
are rather simple (at least, for computers). There are more
generators similar to those Grozman and Leites had chosen, but
experiments performed so far show that the ones Grozman and Leites
considered are most convenient, and are related to various applications
\cite{GL2}, \cite{LS}.
There are, however, certain pairs of generators indigenous only to
the $\fsl$ series, and only over an algebraically closed field, e.g. $\Cee$.
Below, we describe such a pair of generators
for $\fsl(n)$ and their analogs for $\fsl(n|n)$ and give relations between them.
Let $a=\exp{\left(\frac{2i\pi}{n}\right)}$ and define \emph{Sylvester's generators}
(also called {\it clock-and-shift or 't Hooft matrices}) to be
\begin{equation}
D=\mathrm{diag}(1,a,a^2,\ldots,a^{n-1}),\qquad
S=\begin{pmatrix}
0&1 &0&0&0\cr
0&\ddots&\ddots&\ddots&0\cr
0&\ddots&\ddots&\ddots&0\cr
0&\ddots&\ddots&\ddots&1\cr
1&0&0&0&0
\end{pmatrix}
\label{sylv}
\end{equation}
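A quick numerical sanity check of these matrices (a sketch; the relation $SD=aDS$ asserted below follows directly from the definitions):

```python
import numpy as np

n = 5
a = np.exp(2j * np.pi / n)                     # a = exp(2*pi*i/n)
D = np.diag(a ** np.arange(n))
S = np.roll(np.eye(n), 1, axis=1)              # 1s above the diagonal and in the corner

assert np.allclose(np.linalg.matrix_power(D, n), np.eye(n))   # D^n = 1
assert np.allclose(np.linalg.matrix_power(S, n), np.eye(n))   # S^n = 1
assert np.allclose(S @ D, a * D @ S)           # the clock-and-shift relation
```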
Zachos \cite{Z1} points out that \\[1em]
\lq\lq {\sl apparently, Sylvester \cite{S} was the first to study
these (\ref{sylv}) generators\footnote{More precisely, Sylvester
used them as generators of an associative algebra, where they
yield the algebra of $n\times n$ matrices $\Mat(n)$. Having
replaced the dot product by the bracket we endow the space of
$\Mat(n)$ with the structure of the Lie algebra $\fgl(n)$; having
introduced parity in $\Mat(n)$ by attributing parity to each basis
vector (and hence to each row and column) and replacing the dot
product by the superbracket we endow the superspace of $\Mat(n;
\Par)$, where $\Par$ is an ordered collection of parities, with
the structure of the Lie superalgebra $\fgl(\Par)$. As generators
of a Lie algebra or Lie superalgebra, Sylvester's generators can
only generate $\fsl(n)$ and $\fsl(n|n)$, not $\fgl(n)$.} of
$\fsl(n)$; he worked them out for $\fsl(3)$ first, and called them
\lq\lq nonions'' (after quaternions), and then generalized to
$\fsl(n)$.
They became popular in the 30s in the context of QM-around-the-circle,
i.e., on a discrete periodic lattice of $N$ points, see \cite{W}.
That effort has continued to date, with the work of Schwinger,
Santhanam, Tolar, Floratos, and others.
They also became popular among high-energy theorists, with the
work of 't~Hooft \cite{tH}, on order-disorder confinement
operators in QCD, so that many in my end of the woods intriguingly
call them \lq\lq 't Hooft matrices''.
I have been using them every few years, starting from \cite{FFZ}
to identify cases of a Sine-algebra we found at that time
with $\fsl(N)$, and also with the Moyal Bracket algebra
\cite{Moy} on a toroidal phase space; and hence take the $N\tto \infty$
limit
to get Poisson Brackets more directly than in Hoppe's first derivation
\cite{Ho} on a spherical phase space.
Our latest use of them was in our recent diversion, \cite{FZ}, on
ring-indexed Lie algebras. They are apparently the most systematic
basis for dealing with all $\fsl(N)$s on an equal footing and taking
naive $N\tto \infty$ limits.}''\\[1em]
For the passage from the notation of Zachos et al.
to ours, observe that, e.g. in \cite{FFZ}, the authors generate $\fgl(n)$ from
Sylvester's generators $D,S$ (\ref{sylv}) in the form
\[
J_{(m_1,m_2)}=a^{m_1m_2/2}D^{m_1}S^{m_2}
\]
which are $n^2$ independent matrices labelled by two integers $0\leq m_1,m_2<n$. Under the
bracket, the identity matrix $J_{(0,0)}$ spans the center. So dividing it out leaves
$\fsl(n)$ with the bracket
\[
[J_{(m_1,m_2)},J_{(k_1,k_2)}]=-2i\sin\left( \frac{\pi}{n}(m_1k_2-m_2k_1) \right) J_{(m_1+k_1,m_2+k_2)}
\]
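The normalization of the $J_{(m_1,m_2)}$ can be probed numerically: from $SD=aDS$ one obtains the product rule $J_{(m_1,m_2)}J_{(k_1,k_2)}=a^{(m_2k_1-m_1k_2)/2}J_{(m_1+k_1,m_2+k_2)}$, from which the commutator follows by antisymmetrization. A sketch (the product rule is our restatement):

```python
import numpy as np

n = 5
a = np.exp(2j * np.pi / n)
D = np.diag(a ** np.arange(n))
S = np.roll(np.eye(n), 1, axis=1)
mp = np.linalg.matrix_power

def J(m1, m2):
    return a ** (m1 * m2 / 2) * mp(D, m1) @ mp(S, m2)

# product rule implied by S D = a D S
for (m1, m2), (k1, k2) in [((1, 2), (3, 1)), ((2, 4), (1, 3))]:
    lhs = J(m1, m2) @ J(k1, k2)
    rhs = a ** ((m2 * k1 - m1 * k2) / 2) * J(m1 + k1, m2 + k2)
    assert np.allclose(lhs, rhs)
```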
Another important
application of Sylvester's generators is the classical Yang-Baxter equation for a function taking
values in a simple Lie algebra $\fg$. It turns out \cite{BD1, BD2} that for this equation to
have elliptic solutions, $\fg$ has to possess two automorphisms
of finite order which have no common nonzero eigenvector with
eigenvalue 1. Sylvester's generators are such automorphisms for $\fg=\fsl(n)$;
in fact, \cite{BD1, BD2} prove that any $\fg$ possessing such automorphisms must
be isomorphic to $\fsl(n)$, and the elliptic solutions can
be characterised by the images of Sylvester's generators (\ref{sylv}) under this isomorphism.
Also, they play a vital role in the study of orthogonal
decompositions of Lie algebras \cite{KKU, KT, FOS}.
Finally, a more applied subject on which these generators have been used is
hydrodynamics and the statistical theory of turbulent fluids and gases, in particular,
the study of lattice models of inviscid fluids (Euler fluids), see, e.g., \cite{MWC},\cite{MW},\cite{Ze}.
The aim of this paper is to give an algorithm that generates $\fsl(n)$ and $\fsl(n|n)$ from
Sylvester's generators and which also produces a presentation for them. This presentation contains
redundancies, but might be of interest for practical problems since it allows quick and easy
computations in the adjoint representation. The main statements are the following ones.
\begin{Theorem}
Fix an integer $n\geq 2$. Then the matrices (\ref{sylv}) are generators for $\fsl(n)$:
\begin{equation}
\mathfrak{sl}(n)=Span(D,S,T_m^k\mid 1\leq k,m \leq n, \,\,\mathrm{and}\,\,k\neq n\,\,%
\mathrm{for}\,\,m=1,n,\,\,\mathrm{and}\,\,k\neq 1\,\,\mathrm{for}\,\,m=n),
\label{spansl}
\end{equation}
where for $1\leq k,m\leq n$, we set
\begin{eqnarray*}
T_m^k &=& (\ad\,D)^{k-1}\left((\ad\,S)^{m-1}(\ad\,D(S))\right),\\
T_n^k &=& \ad\,S(T_{n-1}^k).
\end{eqnarray*}
A defining set of relations for generators (\ref{sylv}) can be
obtained in the following way. The relations
\begin{eqnarray}
\label{rel1}
(\ad D)^n(S) &=& (1-a)^n S,\\
\label{rel2}
(\ad D)^{n}((\ad S)^{m-1}(\ad D(S))) &=& (1-a)^m(1-a^m)^n(-1)^{m+1}(\ad S)^{m-1}(\ad D(S))\\
\label{rel3}
\ad S(T_{n-1}^1) &=& (1-a)^n(-1)^{n}D,\\
\label{rel4}
\ad S(T_{n-1}^n) &=& 0,\\
\label{rel5}
\ad S(T_n^k) &=& (-1)^n(1-a^k)^2(1-a^{n-1})^{k-1}(1-a)^{n-k}T_1^k,\\
\label{rel6}
\ad D(T_n^k) &=& 0
\end{eqnarray}
prohibit generation of elements of order higher than $n$ in both
$D$ and $S$. Besides them, for each $T_m^k$ with $2\leq m\leq
n-1$, except for $T_2^2$, $m-1$ relations have to hold, which can
be written as
\[
(\ad\,S)^{s_1}((\ad\,D)^{k-1}((\ad\,S)^{s_2}(T_1^1)))=\left(\frac{1-a^k}{1-a}\right)^{s_1}%
\left(\frac{1-a^{s_1}}{1-a^{m-1}}\right)^{k-1}T^k_m,
\]
where $s_1+s_2=m-1$ and $s_1=1,2, \ldots, m-1$. \label{main1}
\end{Theorem}
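Theorem \ref{main1} can be sanity-checked numerically for small $n$: relation \eqref{rel1} holds exactly, and the elements listed in \eqref{spansl} indeed span the traceless matrices. A sketch (the choice $n=5$ is arbitrary):

```python
import numpy as np

n = 5
a = np.exp(2j * np.pi / n)
D = np.diag(a ** np.arange(n))
S = np.roll(np.eye(n), 1, axis=1)
ad = lambda X: lambda Y: X @ Y - Y @ X

adn = S
for _ in range(n):
    adn = ad(D)(adn)
assert np.allclose(adn, (1 - a) ** n * S)      # relation (rel1)

def T(m, k):
    X = ad(D)(S)                               # T_1^1 = [D, S]
    for _ in range(m - 1):
        X = ad(S)(X)
    for _ in range(k - 1):
        X = ad(D)(X)
    return X

basis = [D, S]
for m in range(1, n):                          # m = 1, ..., n-1
    for k in range(1, n + 1):
        if m == 1 and k == n:                  # T_1^n is proportional to S
            continue
        basis.append(T(m, k))
for k in range(2, n):                          # diagonal elements T_n^k
    basis.append(ad(S)(T(n - 1, k)))

M = np.array([B.flatten() for B in basis])
assert len(basis) == n * n - 1
assert np.linalg.matrix_rank(M) == n * n - 1   # the listed elements span sl(n)
assert all(abs(np.trace(B)) < 1e-9 for B in basis)
```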
\begin{Theorem}
Considered as $2n\times 2n$ supermatrices on a superspace with an
alternating format (even, odd, even, odd, ...), (\ref{sylv}) are
generators for $\fsl(n|n)$:
\[
\mathfrak{sl}(n|n)=Span(D,S,T_m^k\mid 1\leq k,m \leq 2n, \,\,\mathrm{and}\,\,k\neq 2n\,\,%
\mathrm{for}\,\,m=1\,\,\mathrm{and}\,\,k\neq 1\,\,\mathrm{for}\,\,m=2n)
\]
with the same definition of $T_m^k$ as in Thm. (\ref{main1}).
A defining set of relations in this case are
(\ref{srel1})-(\ref{srel5}) and $m-1$ relations for each $T_m^k$ with $2\leq m\leq 2n-1$.
These can be written as
\[
(\ad\,S)^{s_1}(T^k_{s_2+1})=\begin{cases}
(-1)^{s_1}(a^{2k}-1)^{\frac{s_1-1}{2}}\frac{(a^k+1)}{1-a}\left(\frac{1-a^{s_1}}{1-a^{m-1}}\right)^{k-1}%
\left(-\frac{1-a}{1+a}\right)^{\frac{s_1-1}{2}}\tilde{T}_m^k & \text{for $s_1$ odd and}\\
& \text{$s_2$ even,}\\
(-1)^{s_1}(a^{2k}-1)^{\frac{s_1-1}{2}}\frac{(a^k-1)}{1-a}\left(\frac{1-a^{s_1}}{1-a^{m-1}}\right)^{k-1}%
\left(-\frac{1-a}{1+a}\right)^{\frac{s_1-1}{2}}\tilde{T}_m^k & \text{for $s_1,s_2$ odd, or}\\
(-1)^{s_1}\frac{(a^{2k}-1)^{s_1/2}}{1-a}\left(\frac{1-a^{s_1}}{1-a^{m-1}}\right)^{k-1}%
\left(-\frac{1-a}{1+a}\right)^{s_1/2}\tilde{T}_m^k & \text{for $s_1,s_2$ even,}\\
& \!\!\!\!\!\text{or $s_1$ even, $s_2$ odd,}
\end{cases}
\]
where again $s_1+s_2=m-1$ and $s_1=1,2,\ldots,m-1$.
\label{main2}
\end{Theorem}
In the following we show why the set of relations indicated in
Thms. (\ref{main1}), (\ref{main2}) is a defining set. This will
then automatically also deliver an upper bound on the number of
independent relations for Sylvester's generators.
\section{Relations between Sylvester's generators for $\fsl(n)$}
Setting
\[
T_1^k:=(\ad D)^k(S)\qquad\mathrm{for}\quad k=1,\ldots,n-1
\]
we obtain the matrices
\[
T_1^k=(1-a)^{k}\left(\begin{array}{ccccc}
0 & 1 & 0 & \ldots & 0\\
0 & 0 & a^k & \ldots & 0\\
\vdots\\
0 & 0 & \ldots & \ldots & a^{k(n-2)}\\
a^{k(n-1)} & 0 & \ldots & 0 & 0
\end{array}\right)
\]
which are, clearly, all linearly independent. For $k=n$, we get
the relation (\ref{rel1}). Proceeding likewise, we generate a
basis for $\mathfrak{sl}(n)$. We set
\[
T_m^k=(\ad D)^{k-1}((\ad S)^{m-1}(T_1^1))=(\ad\,D)^{k-1}((\ad\,S)^{m-1}(\ad\,D(S)))
\]
where $m=1,\ldots,n-1$ and $k=1,\ldots,n$. In matrix form,
\[
T_m^k=(1-a)^m(1-a^m)^{k-1}(-1)^{m+1}\left(\begin{array}{cccccccc}
0 & \ldots & 0 & 1 & 0 & \ldots & \ldots & 0\\
0 & \ldots & 0 & 0 & a^k & 0 & \ldots & 0\\
\vdots & \vdots &&&&& & \vdots\\
0 & \ldots &&&&& & a^{k(n-m-1)}\\
a^{k(n-m)} & 0 & \ldots &&&& \ldots & 0\\
\vdots &&&&&&& \vdots\\
0 & \ldots & a^{k(n-1)} & 0 & \ldots && \ldots & 0
\end{array}\right)
\]
These are all the non-diagonal matrices needed for a basis of
$\mathfrak{sl}(n)$. Their linear independence is easily checked.
We also immediately read off the relation (\ref{rel2}) for $k=n+1$.
It remains to generate $n-2$ diagonal matrices, which we do as
follows:
\[
T_n^k=\ad S(T_{n-1}^k)=\ad S((\ad D)^{k-1}((\ad S)^{n-2}(\ad
D(S)))),
\]
where $k=2,\ldots,n-1$, and we obtain the relations (\ref{rel3})-(\ref{rel6}).
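The construction just described can be checked numerically. The following Python sketch assumes the standard clock-and-shift realisation of Sylvester's pair (an assumption; the explicit display (\ref{sylv}) lies outside this excerpt): it runs the algorithm, verifies that $(\ad D)^n(S)$ is proportional to $S$ (with the assumed normalisation the factor comes out as $(1-a)^n$), and confirms that the $n^2-1$ matrices produced are linearly independent.

```python
import cmath

def clock_shift(n):
    # Assumed explicit form of Sylvester's pair: D = diag(1, a, ..., a^{n-1})
    # with a a primitive n-th root of unity, S the cyclic shift matrix.
    a = cmath.exp(2j * cmath.pi / n)
    D = [[a**i if i == j else 0j for j in range(n)] for i in range(n)]
    S = [[1 + 0j if (j - i) % n == 1 else 0j for j in range(n)] for i in range(n)]
    return a, D, S

def ad(X, Y):
    # ad X (Y) = [X, Y] = XY - YX
    n = len(X)
    return [[sum(X[i][l] * Y[l][j] - Y[i][l] * X[l][j] for l in range(n))
             for j in range(n)] for i in range(n)]

def rank(mats, tol=1e-9):
    # rank of the span, via Gaussian elimination on flattened matrices
    rows = [[x for row in M for x in row] for M in mats]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][c]) > tol), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][c]) > tol:
                f = rows[i][c] / rows[r][c]
                rows[i] = [u - f * v for u, v in zip(rows[i], rows[r])]
        r += 1
    return r

def check(n):
    a, D, S = clock_shift(n)
    T = {(1, 1): ad(D, S)}                    # T_1^1 = ad D (S)
    for k in range(2, n + 1):
        T[(1, k)] = ad(D, T[(1, k - 1)])      # T_1^k = (ad D)^k (S)
    for m in range(2, n):
        T[(m, 1)] = ad(S, T[(m - 1, 1)])
        for k in range(2, n + 1):
            T[(m, k)] = ad(D, T[(m, k - 1)])
    for k in range(2, n):
        T[(n, k)] = ad(S, T[(n - 1, k)])      # the n-2 diagonal elements
    # relation of type (rel1): (ad D)^n (S) = (1-a)^n S
    rel1 = all(abs(T[(1, n)][i][j] - (1 - a)**n * S[i][j]) < 1e-9
               for i in range(n) for j in range(n))
    basis = [D, S] + [T[(1, k)] for k in range(1, n)] \
        + [T[(m, k)] for m in range(2, n) for k in range(1, n + 1)] \
        + [T[(n, k)] for k in range(2, n)]
    return rel1, rank(basis), len(basis)

results = {n: check(n) for n in (3, 4, 5)}
```

The span test is just Gaussian elimination on the flattened matrices; for $n=3,4,5$ it returns rank $n^2-1$.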
Since we have obtained $n^2-1$ linearly independent matrices, we
have found a basis for $\mathfrak{sl}(n)$, see (\ref{spansl}). One
might, however, wish to describe $\mathfrak{sl}(n)$ as the
quotient of the free Lie algebra generated by the two Sylvester
generators modulo certain defining relations. Since we know that
the matrices $T_m^k$ span $\mathfrak{sl}(n)$, we know that any
commutator of them must yield a relation. The relations stated
above are merely those ones that are first encountered when we
proceed through our chosen algorithm for the generation of the
basis of $\mathfrak{sl}(n)$. To find out the number and an
explicit realization of the minimal defining relations turns out
to be quite a tough job, despite the seeming simplicity of the
problem. P.~Grozman was able to find these minimal relations for
$n=2,3,4$:
\begin{equation}\label{grrel}
\renewcommand{\arraystretch}{1.3}
\begin{array}{l}
\underline{n=2}: \;(\ad\,S)^2(D)=4D,\quad (\ad\,D)^2S=4S;\\
\underline{n=3}: \;(\ad\,S)^3(D)=-3(a-a^2)D,\quad (\ad\,D)^3(S)=3(a-a^2)S;\quad
[T^1_2, T^2_1]=0;\\
\underline{n=4}: \; (\ad\,S)^4(D)=-4D,\quad (\ad\,D)^4(S)=-4S,\quad [T_1^1, T^1_2]=[D,T^1_3],\\
\quad [T_1^1,[T^1_2, T^1_3]]=-4T^2_1,\quad [T_1^3, T_3^1]=0,\quad%
2[T^1_2, T^3_1]=[T^2_1, [D,T^1_2]],\\
\quad 2[T^2_1, T^1_3]=[T^1_2, [S, T_1^2]],\quad [T_1^1, T^2_1]=[S,T^3_1],\quad
[T^2_1, T^3_1]=4T^1_2.
\end{array}
\end{equation}
with the help of {\it Mathematica} and his {\bf SuperLie} package \cite{Gr}, but did not succeed in
deducing a general formula from (\ref{grrel}). On the other hand, neither the number nor an
explicit form of the \emph{minimal} set of relations is of great practical
importance when working with these generators. Rather, one would
like to have, e.g., formulae that describe the action of arbitrary
products of the elements of $\mathfrak{sl}(n)$
in the adjoint representation. Such formulae will be given below and,
additionally, a set of relations will be offered which
contains redundancies, but which allows the immediate reduction of an arbitrary expression of
the form (with the $X_i$ and $Y$ being arbitrary elements of $\fsl(n)$)
\[
\ad\,X_1(\ad\,X_2(\ldots(\ad\,X_q(Y))\ldots))
\]
to a linear combination of the basis elements produced by our algorithm.
By explicit calculation one first verifies that
\[
[T_m^k,T_{m'}^{k'}]=\frac{(1-a^m)^{k-1}(1-a^{m'})^{k'-1}}{(1-a^{m+m'})^{k+k'-1}}(a^{k'm'}-1)T^{k+k'}_{m+m'}
\]
for any of the $T_m^k,T_{m'}^{k'}$ defined above. Hereafter, $k+k'$ and $m+m'$
have to be understood $\mathrm{mod}\,\, n$. This directly shows the following statement.
\begin{Lemma} The result of the application of an arbitrary product of elements in the adjoint
representation to a $T_m^k$ depends \emph{up to a factor} only on the number of $S$'s and $D$'s
contained in these operators. That is,
\begin{equation}
\ad \,X_1(\ad\, X_2(\cdots\ad\, X_N (T_m^k))\cdots) = C(k,k',m,m',n)T_{m+m'}^{k+k'}
\label{lem1_stat}
\end{equation}
where $m_i$ and $k_i$ are the numbers of $S$'s and $D$'s, respectively, contained in $X_i$,
\[
m'=\sum_i m_i,\qquad k'=\sum_i k_i,
\]
and $C(k,k',m,m',n)$ is a constant depending on all the indices.
\label{lem1}
\end{Lemma}
Therefore we conclude that it suffices to check only relations between elements which are at most of
degree $n$ in both $S$'s and $D$'s. If we know all relations of this type, then any relation of a
higher degree will follow from these and (\ref{rel1})-(\ref{rel6}).
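The proportionality content of the commutator formula above, which is all that Lemma \ref{lem1} relies on, can be tested numerically. The sketch below again assumes the clock-and-shift realisation of $D$ and $S$ (the display (\ref{sylv}) is outside this excerpt) and checks, in $\mathfrak{sl}(5)$, that $[T_m^k,T_{m'}^{k'}]$ is a scalar multiple of $T_{m+m'}^{k+k'}$, including one pair for which the bracket vanishes.

```python
import cmath

def clock_shift(n):
    # assumed clock-and-shift form of Sylvester's pair
    a = cmath.exp(2j * cmath.pi / n)
    D = [[a**i if i == j else 0j for j in range(n)] for i in range(n)]
    S = [[1 + 0j if (j - i) % n == 1 else 0j for j in range(n)] for i in range(n)]
    return D, S

def ad(X, Y):
    n = len(X)
    return [[sum(X[i][l] * Y[l][j] - Y[i][l] * X[l][j] for l in range(n))
             for j in range(n)] for i in range(n)]

def T(D, S, m, k):
    # T_m^k = (ad D)^{k-1} ((ad S)^{m-1} (ad D (S)))
    M = ad(D, S)
    for _ in range(m - 1):
        M = ad(S, M)
    for _ in range(k - 1):
        M = ad(D, M)
    return M

def proportional(X, Y, tol=1e-9):
    # X = c * Y for some scalar c (c = 0 allowed)
    n = len(X)
    pairs = [(X[i][j], Y[i][j]) for i in range(n) for j in range(n)]
    ref = next(((x, y) for x, y in pairs if abs(y) > tol), None)
    if ref is None:
        return all(abs(x) < tol for x, _ in pairs)
    c = ref[0] / ref[1]
    return all(abs(x - c * y) < tol for x, y in pairs)

n = 5
D, S = clock_shift(n)
index_pairs = [(1, 1, 1, 2), (1, 2, 2, 1), (2, 2, 1, 1)]
ok = all(proportional(ad(T(D, S, m, k), T(D, S, mm, kk)),
                      T(D, S, m + mm, k + kk))
         for (m, k, mm, kk) in index_pairs)
```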
In order to find these relations, it is most convenient to visualise the generated basis as
a grid of points.
\begin{figure}
\begin{center}
\includegraphics{pathexample.eps}
\caption{The two paths that generate $T_2^2$ in $\mathfrak{sl}(3)$}
\label{fig2p}
\end{center}
\end{figure}
Fig. \ref{fig2p} shows the basis of $\mathfrak{sl}(3)$, starting from $D$ and $S$ in the upper
left corner. Below them is $T_1^1=[D,S]$. The other solid points are those that we generate with our algorithm by
going only horizontally on each level, and vertically only along the left edge. The white points
are those which are ruled out by the relations (\ref{rel1})-(\ref{rel6}), i.e., they do not represent
basis elements of $\mathfrak{sl}(3)$. Now, an arbitrary product of $r$-many $(\ad D)$'s and
$s$-many $(\ad S)$'s applied to
$T_1^1$ corresponds to a path on the grid starting at $T_1^1$ and reaching $T_{s+1}^{r+1}$,
but one which will in general
only produce a matrix proportional to $T_{s+1}^{r+1}$, with a factor $\neq 1$. A horizontal step of the
path
describes the action of $\ad\,D$, a vertical one the action of $\ad\,S$. In the picture, the
solid line shows the way our algorithm went to generate $T_2^2$, while the dotted lines show the
alternative path, i.e.,
\begin{eqnarray}
\textrm{solid line}\qquad & \Leftrightarrow & \quad\ad D(\ad S(\ad D(S))) \label{path1}\\
\textrm{dotted line}\qquad & \Leftrightarrow & \quad\ad S((\ad D)^2(S)) \label{path2}
\end{eqnarray}
It is clear that any expression we have to examine can be expressed as a path from $T_1^1$ to some
admissible $T_m^k$ which only moves right and downwards (compare to Fig. 1). In general, there are
\[
\left(\begin{array}{c}
m+k-2\\
k-1
\end{array}\right)\qquad\textrm{paths from}\,\, T_1^1\,\,\mathrm{to}\,\,T_m^k.
\]
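Since a monotone path to $T_m^k$ consists of $k-1$ horizontal and $m-1$ vertical steps, the count can be confirmed by brute-force enumeration against the binomial coefficient $\binom{m+k-2}{k-1}$; in particular, the two paths to $T_2^2$ of Fig.~\ref{fig2p} are recovered.

```python
from itertools import combinations
from math import comb

def count_paths(m, k):
    # enumerate monotone paths from T_1^1 to T_m^k by choosing the
    # positions of the k-1 horizontal (ad D) steps among all
    # (m-1) + (k-1) steps
    total = (m - 1) + (k - 1)
    return sum(1 for _ in combinations(range(total), k - 1))

counts = {(m, k): count_paths(m, k) for m in range(1, 7) for k in range(1, 7)}
```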
However, we can rule out some of these. The algorithm always uses paths which run through all
vertical steps first, then through all horizontal ones (called the algorithm path in what follows).
A relation is obtained by running through
any different path and comparing the result to what the algorithm path would have produced at this vertex.
\begin{Proposition} For a path ending at $T_s^r$ to yield an independent relation, it has to
\begin{itemize}
\item end with a vertical step if $s<n$,
\item end with a
horizontal step if $s=n$.
\end{itemize}
\label{prop1}
\end{Proposition}
\begin{proof}
Look at the $s<n$ case first.
We know that at the vertex where the last vertical step ends, we will have produced a matrix
proportional to the one that the algorithm path would have produced there (cf. Lemma \ref{lem1}).
Thus, at this vertex we obtain a relation.
But if it is followed by horizontal steps, these will then trivially also yield matrices proportional
to those that the algorithm would have produced. Thus, the relations we can read off at these vertices
are generated from the one obtained at the end of the last vertical step.
An analogous argument holds for $s=n$, except that at the last step of the algorithm there is a vertical
step, so a path producing an independent relation cannot have a vertical step at its end.
\end{proof}
\begin{Corollary}
Apart from those vertical steps which lie on the left edge of the grid,
a path that leads to $T_s^r$ and yields an independent relation for $s<n$ must contain all other
vertical steps at its end. For $s=n$, the only path yielding a nontrivial relation is the algorithm
path to $T_n^{r-1}$ followed by a horizontal step.
\label{cor1}
\end{Corollary}
\begin{proof}
As a counterexample for the $s<n$ case, consider Fig. \ref{dstep}.\\
\begin{figure}[!ht]
\begin{center}
\includegraphics{doublestep.eps}
\caption{Example of a path ruled out by Corollary \ref{cor1}}
\label{dstep}
\end{center}
\end{figure}
\newline
Up to vertex $a$, it follows the algorithm path, then going to $b$ will yield a relation. But
proceeding further horizontally after $b$ yields only dependent relations, as seen before.
In the $s=n$ case, we have seen in Proposition \ref{prop1} that the last step of a path yielding a relation
must be horizontal. Since going a horizontal step in the $n$-th row always gives zero
(cf. (\ref{rel6})), a
nontrivial path can only have exactly one horizontal piece at its end. So the second last step is
always the last step of the algorithm to $T_n^{r-1}$, and therefore any other path leading to
$T_n^{r-1}$ followed by a horizontal step would trivially yield a result proportional to what the
algorithm path followed by the horizontal step gives. The relations so obtained are precisely those of
(\ref{rel6}).
\end{proof}
This reduces the number of possibly independent relations considerably: for any vertex $T_m^k$ with $m<n$,
there can now
be at most $m-1$ independent relations, which result from the paths leading there and having
between one and $m-1$ vertical steps at their ends. For $T_n^k$, there can only be one
relation. Among the relations thus obtained,
there will still be redundancies, which are not obvious at first glance. To reveal them, one has to
apply the Jacobi identity and other relations one has already obtained. As an example, look at
$T_2^2$ in $\mathfrak{sl}(n)$ for $n\geq 3$. Two paths lead there, described in (\ref{path1})
and (\ref{path2}). Since they are both admissible in the sense of Proposition \ref{prop1}, one might
think that we obtain a relation here between $T_2^2$ generated by the algorithm and
the result of another path. However,
\[
\ad D(\ad S(\ad D(S)))=\ad S((\ad D)^2(S))+\ad(\ad D\,(S))(\ad D(S))
\]
due to the Jacobi identity and the last term is of the form $\ad z(z)\equiv 0$.
Therefore, the two paths \emph{trivially} yield the same result, and we obtain no relation here.
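That the two paths of (\ref{path1}) and (\ref{path2}) agree exactly, and not merely up to a factor, is easy to confirm numerically (assuming the clock-and-shift form of $D$ and $S$, which is not displayed in this excerpt):

```python
import cmath

def clock_shift(n):
    # assumed clock-and-shift realisation of D and S
    a = cmath.exp(2j * cmath.pi / n)
    D = [[a**i if i == j else 0j for j in range(n)] for i in range(n)]
    S = [[1 + 0j if (j - i) % n == 1 else 0j for j in range(n)] for i in range(n)]
    return D, S

def ad(X, Y):
    n = len(X)
    return [[sum(X[i][l] * Y[l][j] - Y[i][l] * X[l][j] for l in range(n))
             for j in range(n)] for i in range(n)]

def agree(X, Y, tol=1e-9):
    return all(abs(X[i][j] - Y[i][j]) < tol
               for i in range(len(X)) for j in range(len(X)))

same = []
for n in (3, 4, 5):
    D, S = clock_shift(n)
    solid = ad(D, ad(S, ad(D, S)))     # ad D (ad S (ad D (S))), solid line
    dotted = ad(S, ad(D, ad(D, S)))    # ad S ((ad D)^2 (S)), dotted line
    same.append(agree(solid, dotted))
```

As expected from the Jacobi identity, the two matrices coincide for every $n$ tested.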
We will show now that there are no other interdependencies of this sort
except the above one for $T_2^2$.\\
\begin{Lemma}
It is impossible to trivially identify the result of two paths to a given $T_s^r$
by rearranging them using the Jacobi identity, except for the case $r=s=2$, where we have
\[
\ad D(\ad S(\ad D(S)))=\ad S((\ad D)^2(S))
\]
\label{comml}
\end{Lemma}
\begin{proof}
Any admissible path in the sense of Lemma \ref{lem1} and its corollary is of the form
\begin{equation}
T_s^r=(\ad\,S)^{s_2}((\ad\,D)^{r-1}((\ad\,S)^{s_1}(T_1^1)))
\label{path}
\end{equation}
where $s_1+s_2=s-1$. For $s_2=0$, we obtain the algorithm path.
In order to show that two paths give the same result, we want to apply the Jacobi identity
\[
\ad(\ad\,x\,\,(y))z=\ad\,x\,(\ad\,y\,\,(z))-\ad\,y\,(\ad\,x\,\,(z))
\]
in such a way that the left hand side of it becomes zero, i.e. is of the form
$\ad\,z(z)$. This would rule out one of the relations these paths produce. We see
immediately that for this to happen for adjoint operators $x,y,z$, the element $z$
would have to contain as
many $D$'s and $S$'s as $x$ and $y$ together. Looking at (\ref{path}), which we would like to
identify with $\ad\,x\,(\ad\,y\,\,(z))$, this implies $r=s$. We have to split (\ref{path}) in two
equally long subpaths, the head (including $T_1^1$) being $z$ and the tail being $\ad\,x\,(\ad\,y)$, with
each containing $\frac{s}{2}$-many $D$'s and $S$'s, implying that $s$ must be even.
For the case $r=s=2$, we find that $z=T_1^1$, $x=\ad\,S$ and $y=\ad\,D$ meet these requirements.
Let now $r=s=2p$ with $p>1$, and let $x,y,z$ satisfy the above conditions.
Then $z$ represents the path of the algorithm to
$T_{s/2}^{s/2}$ and $\ad\,x(\ad\,y)$ is of the form $(\ad\,S)^{s/2}((\ad\,D)^{s/2})$. But we
see that it is impossible then to find $x,y$ such that $\ad\,y((\ad\,x)(z))$ would again be an
admissible path.
\end{proof}
It is important to note that this still does not exclude all possible dependencies between the
relations that various admissible paths yield. By clever rearrangement, it might still be possible
to bring a bracket of two elements into a form which, when expanded into paths, yields only a few admissible
paths and several others which run over already excluded pieces. We could find no way to rule out
all such possibilities. This seems only possible with the help of computers.
But, as stated above, the minimal number might not be of practical interest. The preceding discussion
still gives us an upper bound on the number of relations.\\
\begin{Theorem}
The number $R(n)$ of independent relations between Sylvester's generators is
bounded from above by:
\[
R(n)\leq\begin{cases}
2 & \mathrm{for}\,\,n=2,\\
n^2-3 & \mathrm{for}\,\,n\geq 3.
\end{cases}
\]
\end{Theorem}
\begin{proof}
\underline{$n=2$:}
See Fig. \ref{n2fig} for the diagram.
\begin{figure}[!ht]
\begin{center}
\includegraphics{n2grid.eps}
\caption{The grid of basis elements for $\mathfrak{sl}(2)$}
\label{n2fig}
\end{center}
\end{figure}
\newline
The white dots are ruled out by the relations stated in the beginning; however, the relation for the dot in the
lower right corner is not independent here. Thus, the only relations are
$$
(\ad D)^2(S)=4S,\qquad
(\ad S)^2(D)=4D,
$$
in agreement with Grozman's result (\ref{grrel}).\\
\underline{$n\geq 3$:}
As an example for the generic case, look at the $n=5$ grid (Fig. \ref{n5fig}).
\begin{figure}[!ht]
\begin{center}
\includegraphics{n5grid.eps}
\caption{The grid of basis elements for $\mathfrak{sl}(5)$}
\label{n5fig}
\end{center}
\end{figure}
\newline
We get here the following relations:
\begin{itemize}
\item 2 relations for the $n$-th powers of $\ad D$ and $\ad S$,
\item $(n-2)$ relations that limit the application of $\ad\,D$ (the rightmost white dots),
\item $(n-2)$ relations that limit the application of $\ad\,S$ (lowermost white dots),
\item 1 relation corresponding to relation (\ref{rel4}) (white dot in the lower right corner),
\item $(n-3)$ relations for the vertical paths from the first to the second row,
\item $(n-3)(n-1)$ relations for the vertical paths between the second and third row, third and
fourth row and so on down to the $(n-1)$st row,
\item $(n-2)$ relations for the horizontal paths in the $n$-th row.
\end{itemize}
This makes a total of $n^2-3$ relations. Lemma \ref{lem1} and its corollary exclude the possibility
that one of them is obtained by another by application of $\ad\,D$ or $\ad\,S$. Lemma \ref{comml}
shows that none of them is a consequence of another via a rearrangement using the Jacobi identity.
\end{proof}
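The arithmetic of the proof can be double-checked mechanically; the sketch below sums the seven itemised contributions and compares the total with $n^2-3$.

```python
def relation_bound(n):
    # the seven contributions itemised in the proof, for n >= 3
    contributions = [
        2,                   # n-th powers of ad D and ad S
        n - 2,               # rightmost white dots (limit ad D)
        n - 2,               # lowermost white dots (limit ad S)
        1,                   # white dot in the lower right corner
        n - 3,               # vertical paths, first to second row
        (n - 3) * (n - 1),   # vertical paths down to the (n-1)-st row
        n - 2,               # horizontal paths in the n-th row
    ]
    return sum(contributions)

bounds = {n: relation_bound(n) for n in range(3, 13)}
```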
We see that even for $n=3$, the bound overestimates the exact number of relations. However, the
number of relations found in the above manner is only of order $\sim n^2$, which can be expected to lie
close to the true behaviour of $R(n)$, so that the relative error will decrease for
growing $n$. But the main advantage of our method is that it
explicitly produces a \emph{presentation} (albeit a redundant one): all relations can be directly read
off from the grid representation of the basis of $\fsl(n)$.
\section{Relations between Sylvester's generators for $\fsl(n|n)$}
Sylvester's generators can also be used to generate a basis of $\fsl(n|n)$, and, among the finite-dimensional Lie
superalgebras that are simple up to a nontrivial center, only for this one, see \cite{LSe}. It is
most convenient to choose an alternating format for the superspace in which we express
the supermatrices, i.e., if $(e_1,e_2,\ldots,e_{2n})$ is a basis of this vector space, let the $e_{2k+1}$
be
odd vectors and the $e_{2k}$ be even ones for all $k$. This format has the advantage that we can use
the same matrices $D,S$ as above as Sylvester's generators, where now $D$ is an even supermatrix and
$S$ an odd one. The result obtained below remains valid in any format, but looks nicest in the chosen one.
To be able to compare the matrices obtained for the $\fsl(n)$ and $\fsl(n|n)$ cases, we put a twiddle
on the supermatrices: $\tilde{D},\tilde{S}$.
As above, set
\[
\tilde{T}_1^1=[\tilde{D},\tilde{S}]
\]
which is now an odd supermatrix, but with the same entries as in the $\fsl(n)$ case. Likewise,
\[
\tilde{T}_1^k=(\ad\,\tilde{D})^{k-1}(\tilde{T}_1^1)
\]
are all odd supermatrices, but look the same as in the $\fsl(n)$ case, and we find the analogue of relation
(\ref{rel1}) to be
\begin{equation}
(\ad\, \tilde{D})^{2n}(\tilde{S})=(1-a)^{2n}\tilde{S}.
\label{srel1}
\end{equation}
We follow the same algorithm as in the $\fsl(n)$ case, now using the superbracket: set
\[
\tilde{T}_2^1=[\tilde{S},[\tilde{D},\tilde{S}]]=[\tilde{S},\tilde{T}_1^1]
\]
which is the same matrix as in the $\fsl(n)$ case, except for the prefactor, which is now $(1-a^2)$
instead of $-(1-a)^2$. In general, for $k=1,\ldots,2n;\,\,m=1,\ldots,2n-2$, we have
\begin{eqnarray*}
(\ad\,\tilde{D})^{k-1}([\tilde{D},\tilde{S}]) &=& \tilde{T}^k_1 = T^k_1\qquad\mathrm{for}\quad %
k=1,\ldots,2n-1\\
(\ad\,\tilde{D})^{k-1}((\ad\,\tilde{S})(\tilde{T}^1_m)) &=& \tilde{T}^k_{m+1} = %
\left\{\begin{array}{ll}
-\left(-\frac{1+a}{1-a}\right)^{\frac{m+1}{2}}T^k_{m+1} & \mathrm{for}\,\,m\,\,\mathrm{odd}\\
-\left(-\frac{1+a}{1-a}\right)^{\frac{m}{2}}T^k_{m+1} & \mathrm{for}\,\,m\,\,\mathrm{even}
\end{array}\right.\\
\end{eqnarray*}
so $\tilde{T}^k_m$ is proportional to $T^k_m$. The $(1+a)$-factors stem from the
application of anticommutators. One obtains the analogue of the
relations (\ref{rel2}) for $k=2n+1$ and $2\leq m \leq 2n-1$:
\begin{equation}
(\ad \tilde{D})^{2n}((\ad \tilde{S})^{m-1}((\ad \tilde{D})(\tilde{S})))=%
\left(-\frac{1+a}{1-a}\right)^{\lfloor (m+1)/2\rfloor}(1-a^m)^{2n}(-1)^{m}%
(\ad \tilde{S})^{m-1}(\ad\,\tilde{D}(\tilde{S})).
\label{srel2}
\end{equation}
For the diagonal basis elements, we set
\[
\tilde{T}_{2n}^{k}=\ad\,\tilde{S}(\tilde{T}_{2n-1}^k)\qquad\mathrm{for}\quad 2\leq k\leq 2n
\]
and obtain the following relations:
\begin{eqnarray}
\label{srel3}
\ad\, \tilde{S}(\tilde{T}_{2n-1}^1) &=& \left(-\frac{1+a}{1-a}\right)^n(-1)^{n+1} \tilde{D},\\
\label{srel4}
\ad\,\tilde{S}(\tilde{T}_{2n}^k) &=& %
\left(-\frac{1+a}{1-a}\right)^{n-1}(1-a)(a^{2k}-1)(1-a^{2n-1})^{k-1}(-1)^{n}\tilde{T}_1^k.
\end{eqnarray}
Note that $\tilde{T}^{2n}_{2n}$ is not zero here, but is proportional to the identity matrix. On any
$(n|n)$-dimensional superspace, the identity matrix is supertraceless, and therefore an element of $\fsl(n|n)$.
Thus, no relation corresponds to (\ref{rel4}) in the super case.
We have to add one more relation, which did not exist in the non-super case: the supercommutator of
$\tilde{S}$ with itself:
\begin{equation}
[\tilde{S},\tilde{S}]=\frac{1}{(1+a)(1-a)^{2n-1}}\tilde{T}^{2n}_2\qquad\text{for $n>1$.}
\label{srel5}
\end{equation}
For $n=1$, this is not a relation, but really generates a new element, see Thm.
(\ref{superthm}).
Thinking of the set of basis elements again as a grid of points, we see that we have found relations of the
same sort as in the $\fsl(n)$ case, with one exception: there is one more element, the one proportional
to the identity matrix, represented by the rightmost dot in the last row.
One can again verify by explicit calculation that $[\tilde{T}^k_m,\tilde{T}^{k'}_{m'}]$ is proportional
to $\tilde{T}^{k+k'}_{m+m'}$. This extends the validity of Lemma \ref{lem1} to the super case.
Finding a bound for the number of relations again reduces to checking all paths from $\tilde{T}_1^1$
to the other $\tilde{T}_m^k$'s. This is done in the same way as before; it is clear that our algorithm
proceeds along the same paths as in the $\fsl(n)$ case and that Prop. \ref{prop1} and Cor. \ref{cor1} also
apply in the super case.
Also Lemma \ref{comml} generalises to the super case, now using the super Jacobi identity. But here we have
to be careful about a peculiarity of the super case: supercommutators of elements with themselves do not
necessarily vanish. Consider, for example, $\tilde{T}_2^2$:
\begin{equation}
\ad\,\tilde{D}(\ad\,\tilde{S}(\ad\,\tilde{D}\,(\tilde{S})))=%
-\ad(\ad\,\tilde{S}(\tilde{D}))(\ad\,\tilde{D}(\tilde{S}))-\ad\,\tilde{S}((\ad\,\tilde{D})^2(\tilde{S})).
\end{equation}
Here, the first term on the right hand side does not vanish. Therefore the relation between the two paths
to $\tilde{T}_2^2$ that we ruled out as being trivial in the $\fsl(n)$ case is nontrivial in the super
case. Except for this fact, Lemma \ref{comml} remains valid.
\begin{Theorem}
For $\fsl(n|n)$, the number $R(n)$ of independent relations between Sylvester's generators is bounded by
\begin{equation}
R(n)\leq\begin{cases}
4 & \text{for $n=1$,}\\
(2n)^2-1 & \text{for $n>1$.}
\end{cases}
\end{equation}
\label{superthm}
\end{Theorem}
\begin{proof}
The $n=1$ case differs from the $\fsl(2)$ case because of the relation
\[
[\tilde{S},\tilde{S}]=2\cdot\mathbbmss{1},
\]
where $\mathbbmss{1}$ is the identity matrix.
\begin{figure}[!ht]
\begin{center}
\includegraphics{n1_1grid.eps}
\caption{The grid of basis elements of $\fsl(1|1)$}
\label{fig11}
\end{center}
\end{figure}
The basis elements of $\fsl(1|1)$ can be represented by the grid in
Fig.~\ref{fig11}.
There are four relations:
\begin{eqnarray}
[\tilde{D},[\tilde{D},\tilde{S}]] &=& 4\tilde{S}\\
{}[\tilde{S},[\tilde{D},\tilde{S}]] &=& 0\\
{}[\tilde{D},[\tilde{S},\tilde{S}]] &=& 0\\
{}[\tilde{S},[\tilde{S},\tilde{S}]] &=& 0
\end{eqnarray}
\begin{figure}[!ht]
\begin{center}
\includegraphics{n2_2grid.eps}
\caption{The grid of basis elements of $\fsl(2|2)$}
\label{fig22}
\end{center}
\end{figure}
For $n>1$, the grid looks as in Fig.~\ref{fig22}.
Note that now there is one more black dot in the lower right corner which we generate from the dot above
it. This provides one more relation. Another additional relation is obtained from the two paths to
$\tilde{T}_2^2$, which are now independent. Apart from this, the situation is identical to the non-super
case.
\end{proof}
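The four $\fsl(1|1)$ relations listed in the proof can be verified directly with integer arithmetic. The explicit pair $\tilde{D}=\mathrm{diag}(1,-1)$ (even) and $\tilde{S}$ the flip matrix (odd) is an assumption here, since the display (\ref{sylv}) lies outside this excerpt.

```python
def mul(X, Y):
    n = len(X)
    return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def scomm(X, px, Y, py):
    # supercommutator [X, Y] = XY - (-1)^{p(X) p(Y)} YX
    sgn = -1 if (px * py) % 2 else 1
    XY, YX = mul(X, Y), mul(Y, X)
    return [[XY[i][j] - sgn * YX[i][j] for j in range(len(X))]
            for i in range(len(X))]

# assumed explicit pair for n = 1 (a = -1): D even, S odd
D = [[1, 0], [0, -1]]
S = [[0, 1], [1, 0]]
ZERO = [[0, 0], [0, 0]]

T11 = scomm(D, 0, S, 1)     # [D, S], odd
SS = scomm(S, 1, S, 1)      # [S, S], even
checks = [
    scomm(D, 0, T11, 1) == [[0, 4], [4, 0]],   # [D, [D, S]] = 4 S
    scomm(S, 1, T11, 1) == ZERO,               # [S, [D, S]] = 0
    scomm(D, 0, SS, 0) == ZERO,                # [D, [S, S]] = 0
    scomm(S, 1, SS, 0) == ZERO,                # [S, [S, S]] = 0
    SS == [[2, 0], [0, 2]],                    # [S, S] = 2 * identity
]
```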
\vspace{2cm}
\section{Circuit Model}
\label{app:CircuitModel}
In this appendix, we consider a flux qubit coupled to transmission lines (see Fig.~\ref{fig:model}) and derive the effective spin-boson model~\cite{Peropadre2013}.
For simplicity, we consider uniform transmission lines with constant capacitance and inductance ($C_i = C$, $L_i = L$), neglecting resistance ($R_i = 0$).
We then derive a general linear response relation between the spectral density function and the circuit impedance.
The Hamiltonian of the present circuit is given by
\begin{eqnarray}
\label{eq:H_circuit_dis}
& & H = H_{\rm S} + H_{\rm B} + H_{{\rm I}},
\\
\label{eq:H_qubit1}
& & H_{\rm S} = \sum_{k=1}^{3} \left[\frac{Q_{J,k}^2}{2C_{J,k}} - E_{J,k}\cos(\phi_{J,k}/\phi_{0})\right],\\
\label{eq:H_transmission_dis}
& & H_{{\rm B}} = \sum_{\nu} \sum_{j=1}^{N} \left[\frac{Q_{\nu,j}^2}{2C} + \frac{(\phi_{\nu,j+1}-\phi_{\nu,j})^2}{2L}\right], \\
\label{eq:H_int_dis}
& & H_{{\rm I}} =
\frac{(\phi_{a}-\phi_{L,N})^2}{2L_N}
+ \frac{(\phi_{R,N}-\phi_{b})^2}{2L_N},
\end{eqnarray}
where $H_S$, $H_{{\rm B}}(=\sum_\nu H_{{\rm B},\nu})$, and $H_{\rm I}(=\sum_\nu H_{{\rm I},\nu})$ describe the flux qubit, the transmission lines, and the system-reservoir coupling, respectively, and $\phi_0 = \hbar/2e$ is the flux quantum.
The flux qubit comprises three Josephson junctions with Josephson energies $E_{J,k}$ ($k=1,2,3$), and the charge and flux operator of the $k$-th Josephson junction are denoted by $Q_{J,k}$ and $\phi_{J,k}$, respectively.
Similarly, the charge and flux operators of the transmission line (see Fig.~\ref{fig:model}~(b)) are denoted by $Q_{\nu,j}$ and $\phi_{\nu,j}$, respectively, and these operators satisfy the commutation relations $[\phi_{J,k},Q_{J,k'}] = i\delta_{k,k'}$ and $[\phi_{\nu,j},Q_{\nu',j'}] = i\delta_{j,j'}\delta_{\nu,\nu'}$, respectively.
The flux operators at the two sides of the flux qubit are denoted by $\phi_a$ and $\phi_b$ (see Fig.~\ref{fig:model}~(a)).
To make the flux qubit, the area of one junction is reduced by a factor of $\alpha$ ($E_{J,1} = E_{J,3} = E_J$, $C_{J,1} = C_{J,3} = C_J$, $E_{J,2} = \alpha E_{J}$, and $C_{J,2} = \alpha^{-1} C_{J}$).
Then, the flux-qubit Hamiltonian~(\ref{eq:H_qubit1}) can be rewritten as~\cite{Mooij1999,Peropadre2013}
\begin{eqnarray}
\label{eq:H_qubit2}
H_{\rm qb} &=& \frac{Q_{J,+}^2}{2C_{J,+}} + \frac{Q_{J,-}^2}{2C_{J,-}} + V(\phi_{J,+},\phi_{J,-}), \\
V(\phi_{J,+},\phi_{J,-}) &=& - E_{J}[2\cos(\phi_{J,+}/2\phi_{0})\cos(\phi_{J,-}/2\phi_{0}) \nonumber \\
& & \hspace{5mm} + \alpha\cos((\Phi_{\rm ext}-\phi_{J,-})/2\phi_{0})],
\end{eqnarray}
where $\phi_{J,\pm} = (\phi_{J,1} \pm \phi_{J,3})/2$, its conjugate operator is denoted by $Q_{J,\pm}$, and $V(\phi_{J,+},\phi_{J,-})$ is the Josephson energy that plays the role of the potential energy.
When the magnetic flux through the loop is tuned to be half of the flux quantum ($\Phi_{\rm ext}=\phi_0/2$), the Josephson energy, $V(\phi_{J,+},\phi_{J,-})$, has two energy minima on the line $\phi_{J,+}=0$.
Due to quantum tunneling, there is an energy splitting $\Delta$ between the ground state and the first excited state.
Since these lowest two eigenstates are well separated from the other eigenstates, we can truncate the system into the lowest two eigenstates, thus leading to the two-state system Hamiltonian (\ref{eq:H_system}).
The wavefunctions of the lowest two states are described as $\ket{\sigma_x=+1} = (\ket{\uparrow} + \ket{\downarrow})/\sqrt{2}$ and
$\ket{\sigma_x=-1} = (\ket{\uparrow} - \ket{\downarrow})/\sqrt{2}$, where $\ket{\uparrow}$ and $\ket{\downarrow}$ are the two-dimensional wavefunctions localized at the two potential energy minima, respectively.
Introducing the new variables $\phi_{\pm} = \phi_{R,{\rm N}} \pm \phi_{L,{\rm N}}$ and $\Phi_{\pm} = \phi_{b} \pm \phi_{a}$ and using $\phi_{J,+} \propto \Phi_{+} \simeq 0$, the system-reservoir coupling~(\ref{eq:H_int_dis}) is rewritten as $H_{\rm I} = -\phi_{-}\Phi_{-}/2L_N$.
After truncation into the two-state system, we obtain:
\begin{eqnarray}
\label{eq:H_int_dis2}
H_{\rm I} = -\frac{\phi_{-}}{2L_N}\phi_0\Braket{\varphi_{-}}\sigma_z,
\end{eqnarray}
where $\bra{\uparrow}\Phi_{-}\ket{\uparrow} \equiv \phi_0\Braket{\varphi_{-}}$, $\bra{\downarrow}\Phi_{-}\ket{\downarrow} \equiv -\phi_0\Braket{\varphi_{-}}$, and $\bra{\uparrow}\Phi_{-}\ket{\downarrow} = \bra{\downarrow}\Phi_{-}\ket{\uparrow} = 0$.
For simplicity, we take the continuum limit $\Delta x \rightarrow 0$ while keeping the length of the transmission line, $L_t = N \Delta x$, constant, where $\Delta x$ is the size of each elementary island.
Then, the system-reservoir coupling can be rewritten as~\cite{Peropadre2013}:
\begin{eqnarray}
\label{eq:H_int_con}
& & H_{{\rm I}} = -\frac{1}{l}\left.\frac{\partial \phi(x)}{\partial x}\right|_{x = 0}\phi_0\Braket{\varphi_{-}}\sigma_z,
\end{eqnarray}
where $l$ is the inductance per unit length.
The flux, $\phi(x)$, can be expanded as:
\begin{eqnarray}
\phi(x) = \sum_k \frac{1}{\sqrt{2c\omega_k}}(b_k+b_k^\dagger)\frac{e^{ikx}}{\sqrt{L_t}},
\end{eqnarray}
where $c$ is the capacitance per unit length, and $b_k$ and $b_k^\dagger$ are bosonic annihilation and creation operators, respectively.
Then, the Hamiltonians for the transmission lines and the system-reservoir coupling can be rewritten as follows:
\begin{eqnarray}
\label{eq:H_transs}
& & H_{{\rm B}} = \sum_{k} \omega_{k}b^\dagger_{k}b_{k}, \\
\label{eq:H_int_con2}
& & H_{{\rm I}} = -\frac{\sigma_z}{2}\sum_{k}\lambda_{k}(b_k+b_k^\dagger), \\
& & \lambda_{k} = \frac{2\phi_0\Braket{\varphi_{-}}}{v l\sqrt{L_t}}\sqrt{\frac{\omega_k}{2c}},
\label{eq:H_int_con3}
\end{eqnarray}
where $v = 1/\sqrt{lc}$ is the speed of light in the transmission line.
This model corresponds to the spin-boson model with an ohmic reservoir.
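The ohmic character can be illustrated numerically: with $\lambda_k \propto \sqrt{\omega_k}$ as in (\ref{eq:H_int_con3}) and a linear dispersion $\omega_k = v|k|$, a histogram of $\sum_k \lambda_k^2\,\delta(\omega-\omega_k)$ should grow linearly in $\omega$. The sketch below uses hypothetical unit parameters and drops the qubit-dependent prefactor (both are assumptions made for illustration).

```python
from math import pi

# hypothetical unit parameters for the transmission line
v, c, Lt = 1.0, 1.0, 2000.0
dk = 2 * pi / Lt                  # mode spacing in k

def lam_sq(k):
    # lambda_k^2 with the qubit-dependent prefactor set to 1; omega_k = v k
    return (v * k) / (2 * c * Lt)

# histogram estimate of I(omega) ~ sum_k lambda_k^2 delta(omega - omega_k)
dw, nbins = 0.5, 16
hist = [0.0] * nbins
j = 1
while v * dk * j < nbins * dw:
    w = v * dk * j
    hist[int(w / dw)] += lam_sq(dk * j) / dw
    j += 1
slopes = [hist[i] / ((i + 0.5) * dw) for i in range(nbins)]   # I(w) / w
flatness = max(slopes) / min(slopes)
```

The ratio $I(\omega)/\omega$ comes out nearly constant ($\approx 1/4\pi$ in these units), i.e., the spectral density is linear in $\omega$, as expected for an ohmic reservoir.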
Now, we discuss the general linear response relation.
The electric current operator at the position $x$ is defined by $\mathcal{I}(x) = l^{-1} \partial \phi(x)/\partial x$ and is calculated at $x = 0$:
\begin{eqnarray}
\label{eq:electric_current}
\mathcal{I}_0 \equiv \mathcal{I}(x=0) = \sum_{k}\frac{i\lambda_{k}}{2\phi_0\Braket{\varphi_{-}}}(b_k+b_k^\dagger).
\end{eqnarray}
From Eqs.~(\ref{eq:spectral_trans}) and (\ref{eq:H_int_con2})-(\ref{eq:electric_current}), the spectral density function can be rewritten as:
\begin{eqnarray}
\label{eq:spectral_circuit}
I(\omega) = \frac{4\phi_0^2\Braket{\varphi_{-}}^2}{\pi}{\rm Im}[G_{\mathcal{I}_0}^{\rm R}(\omega)],
\end{eqnarray}
where $G_{\mathcal{I}_0}^{\rm R}(\omega)$ is the Fourier transform of the current-current correlation function defined by $G_{\mathcal{I}_0}^{\rm R}(t) = -i\theta(t)\Braket{[\mathcal{I}_0(t),\mathcal{I}_0(0)]}$.
Using linear response theory~\cite{Bruus2004}, $G_{\mathcal{I}_0}^{\rm R}(\omega)$ can be related to the total impedance of the transmission lines:
\begin{eqnarray}
\frac{1}{Z(\omega)} = \frac{i}{\omega} G_{\mathcal{I}_0}^{\rm R}(\omega).
\label{eq:linearresponse}
\end{eqnarray}
Substituting Eq.~(\ref{eq:linearresponse}) into Eq.~(\ref{eq:spectral_circuit}), we can derive Eqs.~(\ref{eq:spectral_impedance}) and (\ref{eq:spectral_impedance2}) in the main text.
Although we have derived them for a special case, i.e., the case of uniform transmission lines without damping, Eqs.~(\ref{eq:spectral_impedance}) and (\ref{eq:spectral_impedance2}) hold for arbitrary circuits of the transmission lines.
\section{Asymptotically-Exact Formula for Co-tunneling}
\label{app:Cotunneling}
When the ground state is a delocalized state ($\alpha < \alpha_{\rm c}$), heat transport is induced by the virtual excitation of the two-state system for $T \ll \Delta_{\rm eff}$, where $\Delta_{\rm eff}$ is a renormalized tunneling amplitude.
This process is called co-tunneling.
By utilizing the generalized Shiba relation~\cite{Sassetti1990}, the asymptotically-exact formula for the thermal conductance in the co-tunneling regime ($T \ll \Delta_{\rm eff}$) is derived as follows~\cite{Yamamoto2018}:
\begin{eqnarray}
\label{eq:conductance_co}
\kappa_{\rm co} = \frac{\pi\chi_0^2}{8}\int_0^\infty d\omega~I_L(\omega)I_R(\omega)\left[\frac{\beta\omega/2}{\sinh(\beta\omega/2)}\right]^2,
\end{eqnarray}
where $\chi_0$ is the static susceptibility defined by Eq.~(\ref{eq:static_susceptibility}).
This formula leads to thermal conductance proportional to $T^{2s+1}$.
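The low-temperature scaling can be checked by direct numerical integration of (\ref{eq:conductance_co}). The model form $I_L(\omega)=I_R(\omega)=\omega^s e^{-\omega/\omega_{\rm c}}$ and the cutoff $\omega_{\rm c}$ are assumptions made for illustration; doubling $T$ at $T\ll\omega_{\rm c}$ should then multiply $\kappa_{\rm co}$ by about $2^{2s+1}$.

```python
from math import sinh, exp

def kappa(T, s, wc=1.0):
    # numerically evaluate the co-tunneling integral with the model
    # spectral densities I_L = I_R = w^s exp(-w/wc) (an assumption);
    # the constant prefactor pi chi_0^2 / 8 is dropped
    N, xmax = 20000, 60.0            # substitute x = w / T
    dx = xmax / N
    total = 0.0
    for i in range(1, N + 1):
        x = i * dx
        w = x * T
        spectral = w ** (2 * s) * exp(-2 * w / wc)
        thermal = (x / 2) / sinh(x / 2)
        total += spectral * thermal * thermal * T * dx
    return total

ratios = {s: kappa(0.002, s) / kappa(0.001, s) for s in (0.5, 1.0)}
```

For the ohmic case ($s=1$) the ratio comes out close to $2^3=8$, and for $s=0.5$ close to $2^2=4$, consistent with $\kappa_{\rm co}\propto T^{2s+1}$.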
\section{Critical Exponents}
\label{app:CriticalExponent}
\begin{figure}[tbp]
\centering
\includegraphics[width=8.0cm]{FIG_QPT.pdf}
\caption{
Population, $\Braket{\sigma_z}$, as a function of the detuning energy, $\varepsilon$. For the delocalized phase (blue line; $\alpha<\alpha_{\rm c}$), $\Braket{\sigma_z}$ is a continuous function of $\varepsilon$, and the susceptibility, $\chi_0$, can be defined by the slope at $\varepsilon=0$. At the critical point (green line; $\alpha=\alpha_{\rm c}$), $\Braket{\sigma_z}$ is continuous, but the susceptibility diverges at $\varepsilon=0$. For the localized phase (red line; $\alpha>\alpha_{\rm c}$), $\Braket{\sigma_z}$ is discontinuous at $\varepsilon=0$.
}
\label{fig:QPT}
\end{figure}
In this Appendix, we briefly discuss the critical exponents of several observables at the quantum phase transition for sub-ohmic reservoirs~\cite{Bulla2003,Vojta2005,Winter2009}.
Fig.~\ref{fig:QPT} shows schematics of the population, $\average{\sigma_z}$, as a function of the detuning energy, $\varepsilon$, near the critical point, $\alpha=\alpha_{\rm c}$.
In the delocalized phase ($\alpha < \alpha_{\rm c}$), the slope at $\varepsilon = 0$ corresponds to the static susceptibility:
\begin{eqnarray}
\label{eq:static_susceptibility}
\chi_0 = \lim_{\varepsilon\rightarrow0}\frac{\average{\sigma_z}_{\rm eq}}{\varepsilon}.
\end{eqnarray}
The static susceptibility, $\chi_0$, diverges as the value of $\alpha$ approaches $\alpha_{\rm c}$ from below.
In the localized phase ($\alpha > \alpha_{\rm c}$), $\average{\sigma_z}$ jumps from $-m_z$ to $m_z$ at $\varepsilon = 0$, where $m_z= \average{\sigma_z}|_{\varepsilon\rightarrow +0}$ is the spontaneous magnetization.
\begin{table}[tb]
\caption{Summary of the critical exponents.}
\label{table:exponent}
\begin{center}
\begin{tabular}{lll} \hline \hline
Exponent & Definition & Condition \\ \hline
$\gamma $ & $\chi_0 \propto (\alpha_{\rm c}-\alpha)^{-\gamma}$ & $\alpha<\alpha_{\rm c}$, $T=0$ \\
$\beta'$ & $m_z \propto (\alpha-\alpha_{\rm c})^{\beta'}$ & $\alpha>\alpha_{\rm c}$, $T=0$ \\
$\eta$ & $m_z \propto T^{\eta/2}$ & $\alpha=\alpha_{\rm c}$, $T> 0$ \\
$x$ & $\chi_0 \propto T^{-x}$ & $\alpha=\alpha_{\rm c}$, $T> 0$ \\ \hline \hline
\end{tabular}
\end{center}
\end{table}
In Table~\ref{table:exponent}, we summarize the critical exponents.
All of the exponents can be determined experimentally by measuring the population, $\average{\sigma_z}$.
By using $y_h^*$ and $y_t^*$, the two exponents related to the QPT fixed point,
these critical exponents are expressed as follows~\cite{Winter2009}:
\begin{eqnarray}
& & \beta' = (1-y_h^*)/y_t^*, \\
& & \gamma = (2y_h^*-1)/y_t^*, \\
& & \eta = 1-x = 2-2y_h^*.
\end{eqnarray}
For $0<s \le 0.5$, the transition occurs above the upper critical dimension,
so the exponents $y_t^*$ and $y_h^*$ are given by mean-field theory as follows:
\begin{eqnarray}
y_t^* = 1/2, \quad y_h^* = 3/4.
\end{eqnarray}
Therefore, we obtain
\begin{eqnarray}
\beta' = 1/2, \quad \gamma = 1, \quad \eta = 1/2, \quad x = 1/2.
\end{eqnarray}
For $s > 0.5$, $y_t^*$ and $y_h^*$ are nontrivial functions of $s$.
By the $\varepsilon$-expansion, the exponents are calculated as~\cite{Luijten_thesis}:
\begin{eqnarray}
\label{eq:y_t}
& & y_t^* =s + \varepsilon/6 - 4\varepsilon^2A(s)/9s + \mathcal{O}(\varepsilon^3),\\
\label{eq:y_h}
& & y_h^* = (1+s)/2 + \varepsilon/4 - \varepsilon^2A(s)/6s + \mathcal{O}(\varepsilon^3),
\end{eqnarray}
where $\varepsilon = 2s-1$, $A(s) = s[\psi(1)-2\psi(s/2)+\psi(s)]$, and $\psi(x)$ is the digamma function.
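These expansions are straightforward to evaluate numerically. The following self-contained sketch implements the digamma function inline (instead of importing it from SciPy) and tabulates the exponents of Table~\ref{table:exponent} from Eqs.~(\ref{eq:y_t}) and (\ref{eq:y_h}):

```python
import math

def digamma(x):
    # psi(x) by upward recurrence plus the asymptotic series (valid for x > 0)
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def A(s):
    # A(s) = s [psi(1) - 2 psi(s/2) + psi(s)], as defined below Eq. (y_h)
    return s * (digamma(1.0) - 2.0 * digamma(s / 2.0) + digamma(s))

def exponents(s):
    eps = 2.0 * s - 1.0
    y_t = s + eps / 6.0 - 4.0 * eps**2 * A(s) / (9.0 * s)            # Eq. (y_t)
    y_h = (1.0 + s) / 2.0 + eps / 4.0 - eps**2 * A(s) / (6.0 * s)    # Eq. (y_h)
    eta = 2.0 - 2.0 * y_h
    return {"beta'": (1.0 - y_h) / y_t, "gamma": (2.0 * y_h - 1.0) / y_t,
            "eta": eta, "x": 1.0 - eta}

print(exponents(0.75))   # nontrivial exponents for s = 3/4 (eps = 1/2)
```

At $s=0.5$ ($\varepsilon=0$) the routine reproduces the mean-field values $\beta'=1/2$, $\gamma=1$, $\eta=x=1/2$ quoted above.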
These critical exponents have been confirmed in previous numerical studies~\cite{Luijten_thesis,Luijten1997,Winter2009,Vojta2005}.
\section{Analytic Expression of the Spectral Density Function}
\label{app:spect}
We analyze the frequency dependence of the spectral density function for the circuit model discussed in Sec.~\ref{sec:realization}.
Assuming $|\omega C_j Z_{j-1}(\omega)| \ll 1$, the recurrence relation (\ref{eq:recurrence}) can be approximated as follows:
\begin{eqnarray}
Z_j(\omega) \simeq R_j + i\omega L_j + Z_{j-1}(\omega)-i\omega C_j Z_{j-1}(\omega)^2.
\end{eqnarray}
In the continuous limit $N \rightarrow \infty$, this recurrence relation reduces to the differential equation:
\begin{eqnarray}
\frac{dZ(\omega,x)}{dx} = r(x) + i\omega l(x) - i\omega c(x)Z(\omega,x)^2,
\end{eqnarray}
where $r(x)$, $l(x)$, and $c(x)$ ($0 \le x =j/N \le 1$) are the resistance, inductance, and capacitance per unit length, respectively.
From Eq.~(\ref{eq:setting}), they are given as
\begin{eqnarray}
& & r(x) = r_0(1-x)^{n}, \\
& & l(x) = l_0, \\
& & c(x) = c_0(1-x)^{m},
\end{eqnarray}
where $r_0=R_0/\Delta x$, $l_0=L_0/\Delta x$, and $c_0=C_0/\Delta x$.
We note that $Z(\omega) = Z(\omega,x\rightarrow 1)$.
Since $\dot{Z}(\omega,x) = dZ(\omega,x)/dx$ and $r_0(1-x)^{n}$ are sufficiently small compared with other terms, we can neglect them and obtain:
\begin{eqnarray}
Z_A(\omega,x) = \sqrt{\frac{l_0}{c_0}}(1-x)^{-m/2},
\end{eqnarray}
for
\begin{eqnarray}
\label{eq:condition_1-x}
1-x^* \equiv\left(\frac{m}{2\omega\sqrt{l_0c_0}}\right)^{2/(m+2)} \! \! \! \! \! \ll 1-x \ll \left(\frac{\omega l_0}{r_0}\right)^{1/n} \! \!.
\end{eqnarray}
In contrast, for $x \simeq 1$, we can neglect $r(x)$ and $c(x)$, and obtain the following:
\begin{eqnarray}
Z_B(\omega,x) = i\omega l_0 x + A(\omega).
\end{eqnarray}
The constant of integration, $A(\omega)$, can be determined by the equation $Z_A(\omega,x^*) = Z_B(\omega,x^*)$.
Thus, we arrive at $Z(\omega)$ as follows:
\begin{eqnarray}
Z(\omega) &\sim& Z_B(\omega,x \rightarrow 1) \nonumber \\
&=& i\omega l_0(1-x^*) + \sqrt{\frac{l_0}{c_0}}(1-x^*)^{-m/2}.
\end{eqnarray}
From Eq. (\ref{eq:spectral_impedance}), we obtain the following spectral density function:
\begin{eqnarray}
I(\omega) \propto \omega{\rm Re}[Z(\omega)^{-1}] \propto \omega^{2/(m+2)}.
\end{eqnarray}
This frequency dependence appears for $\omega^* \ll \omega \ll \omega_{\rm c}$, where the lower bound, $\omega^*$, is obtained by considering the condition (\ref{eq:condition_1-x}):
\begin{eqnarray}
\omega^* = \left[\left(\frac{m}{2}\right)^{2n}\frac{r_0^{m+2}}{c_0^{n}l_0^{m+n+2}}\right]^{1/(m+2n+2)}.
\end{eqnarray}
This corresponds to Eq.~(\ref{eq:omegastar}) in the main text.
\section{Introduction}
Quantum critical phenomena (QCP) induced by second-order quantum phase transitions (QPTs) are a central topic in condensed matter physics~\cite{Sachdev2011}.
Although QPTs have been studied in various highly-correlated systems, it is still challenging to realize them in controlled experimental systems.
Recently, QCP have been studied for the multi-channel Kondo effect realized in artificial nanostructures~\cite{Potok2007,Mebrahtu2012,Mebrahtu2013,Keller2015,Iftikhar2015,Iftikhar2018}, and the quantum critical behavior observed experimentally via electronic transport properties is in good agreement with theoretical results~\cite{Cox1998,Vojta2006,Bulla2008}.
This great success encourages further study of QCP in transport properties using different mesoscopic systems.
Heat transport in nanostructures is another important topic in mesoscopic physics.
In particular, heat transport carried by photons (phonons) via a two-state system has been studied in several theoretical works~\cite{Segal2010,Ruokola2011,Saito2013,Ren2010,Chen2013,Segal2014,Yang2014,Wang2015,Taylor2015}, because it has considerable similarities to electronic transport in quantum dots.
The heat transport via a two-state system is described by the spin-boson model, whose properties are characterized by the spectral density function $I(\omega)\propto\omega^s$~\cite{Leggett1987,Weiss2012}.
For sub-ohmic reservoirs ($0<s<1$), this model displays a QPT at zero temperature when a system-reservoir coupling is tuned to a critical value~\cite{Kehrein1995,Kehrein1996,Winter2009,Bulla2003,Vojta2005,Vojta2009,Vojta2012,Chin2011,Weiss2012}.
In a recent paper by the present authors and two collaborators~\cite{Yamamoto2018}, the temperature dependence of the thermal conductance was studied in detail for all types of reservoirs (arbitrary $s$) via continuous-time quantum Monte Carlo (CTQMC) simulations.
For sub-ohmic reservoirs, however, QCP near the transition point have not been discussed.
Remarkably, the recent rapid development of nanostructure fabrication and heat measurement techniques has made the heat current through nanoscale objects experimentally accessible~\cite{Forn-Diaz2017,Magazzu2017,Ronzani2018}.
It has been demonstrated that transmission lines coupled to a superconducting qubit indeed realize the spin-boson model with an ohmic ($s=1$) reservoir~\cite{Yu2012,Bourassa2009,Leppakangas2018,Peropadre2013,Forn-Diaz2017,Magazzu2017}.
However, to the best of our knowledge, the design of a superconducting circuit to realize the sub-ohmic spin-boson model has only been discussed in Ref.~\cite{Tong2006}, in which experimental realization of the sub-ohmic reservoirs of $s=0.5$ is discussed.
To study QCP, it is advantageous to realize the sub-ohmic spin-boson model for an arbitrary value of $s$.
In this paper, we investigate QCP in heat transport via a two-state system carried by photons or phonons for sub-ohmic reservoirs.
The temperature dependence of the thermal conductance is calculated using the CTQMC method~\cite{Rieger1999,Winter2009,Yamamoto2018}.
In the previous work~\cite{Yamamoto2018}, it has been shown that the thermal conductance is always proportional to $T^{2s+1}$ at low temperatures when the system-reservoir coupling is below a critical value, reflecting a non-degenerate ground state of the system.
However, in the quantum critical regime near the QPT, the power law of the temperature dependence changes, reflecting the nature of the QPT.
We discuss the critical exponents related to QPT in detail.
We also consider a superconducting circuit to realize the sub-ohmic spin-boson model with an arbitrary value of $s$.
This paper is organized as follows.
The spin-boson model is described in Sec.~\ref{sec:Model}, and the heat current via a two-state system is formulated in Sec.~\ref{sec:Formulation}.
The critical temperature dependence of the heat current near the quantum phase transition is shown in Sec.~\ref{sec:result}, which is our main result.
A superconducting circuit is proposed that could be used to realize the spin-boson model with sub-ohmic reservoirs in Sec.~\ref{sec:realization}.
Finally, our results are summarized in Sec.~\ref{sec:summary}.
Throughout this paper, we employ units in which $k_{\rm B}=\hbar=1$.
\section{Model}
\label{sec:Model}
\begin{figure}[tbp]
\centering
\includegraphics[width=7.0cm]{FIG_model_transport2.pdf}
\caption{
Schematic of the model: a two-state system coupled to two bosonic reservoirs ($L$ and $R$) with temperatures $T_{L}$ and $T_{R}$, respectively. If $T_L>T_R$, a heat current flows from reservoir $L$ to reservoir $R$ via the two-state system.
}
\label{FIG:model_transport}
\end{figure}
We consider heat transport between two bosonic reservoirs via a two-state system (see Fig.~\ref{FIG:model_transport}).
The model Hamiltonian is given by $H = H_{\rm S} + \sum_{\nu} H_{{\rm B},\nu} + \sum_{\nu} H_{{\rm I},\nu}$, where $H_{\rm S}$, $H_{{\rm B},\nu}$, and $H_{{\rm I},\nu}$ describe a two-state system, a bosonic reservoir $\nu$ ($=L,R$), and the system-reservoir coupling, respectively.
Each term of the Hamiltonian is given as follows:
\begin{eqnarray}
& & H_{\rm S} = -\frac{\Delta}{2}\sigma_x - \varepsilon\sigma_z,
\label{eq:H_system} \\
\label{eq:H_trans_B}
& & H_{{\rm B},\nu} = \sum_{k}\omega_{\nu k}b^\dagger_{\nu k}b_{\nu k}, \\
\label{eq:H_trans_I}
& & H_{{\rm I},\nu} = - \frac{\sigma_z}{2}\sum_{k} \lambda_{\nu k}(b^\dagger_{\nu k} + b_{\nu k}),
\end{eqnarray}
where $\sigma_\alpha$ ($\alpha=x,y,z$) are the Pauli matrices, and $b_{\nu k}$ ($b_{\nu k}^{\dagger}$) is the annihilation (creation) operator of a bosonic excitation with wavenumber $k$ in reservoir $\nu$.
The Hamiltonian of the two-state system, $H_{\rm S}$, is obtained by truncating a double-well potential system with the lowest two eigenstates, where $\Delta$ and $\varepsilon$ are the tunneling amplitude and detuning energy, respectively.
The energy dispersion of the reservoirs and the system-reservoir coupling strength are denoted by $\omega_{\nu k}$ and $\lambda_{\nu k}$, respectively.
In this paper, we consider heat transport for the symmetric case ($\varepsilon=0$).
The detuning energy, $\varepsilon$, is used only for the detailed discussion on critical exponents in Appendix~\ref{app:CriticalExponent}.
The property of the reservoirs is determined by the spectral density function:
\begin{equation}
\label{eq:spectral_trans}
I_{\nu}(\omega) \equiv \sum_{k}\lambda_{\nu k}^2\delta(\omega-\omega_{\nu k}) .
\end{equation}
For simplicity, the spectral density function is taken in the following form:
\begin{eqnarray}
& & I_{\nu}(\omega) = \alpha_{\nu}\tilde{I}(\omega), \\
\label{eq:spectral_tilde}
& & \tilde{I}(\omega) = 2\omega_{\rm c}^{1-s}\omega^s e^{-\omega/\omega_{\rm c}},
\end{eqnarray}
where $\alpha_{\nu}$ is the dimensionless system-reservoir coupling strength, and $\omega_{\rm c}$ is the cutoff frequency, which is assumed to be much larger than the other characteristic energies.
Herein, we focus on the sub-ohmic case ($0<s<1$), for which a second-order quantum phase transition occurs.
\section{Formulation}
\label{sec:Formulation}
The heat current operator from the reservoir $\nu$ into the two-state system is defined as follows:
\begin{eqnarray}
\label{eq:current_operator}
J_\nu &\equiv& -\frac{dH_{{\rm B},\nu}}{dt} = i[H_{{\rm B},\nu},H]\nonumber \\
&=& -i\frac{\sigma_z}{2} \sum_{k} \lambda_{\nu k} \omega_{\nu k} (-b_{\nu k}+b_{\nu k}^{\dagger}).
\end{eqnarray}
Using the standard procedure of the Keldysh formalism~\cite{Rammer1986,Jauho1994,Jauho2007}, the following Meir-Wingreen-Landauer-type exact formula~\cite{Meir1992} for the heat current is derived~\cite{Ojanen2008,Saito2013,Saito2008}:
\begin{eqnarray}
\label{eq:current}
\average{J_L} =\frac{\alpha \gamma_a}{8}\! \int_0^{\infty}\!\!\! d\omega \, \omega\, \mathrm{Im}[\chi(\omega)]\tilde{I}(\omega)\left[n_L(\omega)-n_R(\omega)\right],
\end{eqnarray}
where $\alpha=\alpha_L+\alpha_R$, $\gamma_a=4\alpha_L\alpha_R/\alpha^2$ is an asymmetric factor, $n_\nu(\omega)$ is the Bose-Einstein distribution in the reservoir $\nu$, and $\chi(\omega)$ is the dynamic susceptibility of the two-state system defined by
\begin{eqnarray}
\label{eq:dynamic_susceptibility}
\chi(\omega) = -i \int_0^{\infty} dt \, e^{i\omega t} \langle [\sigma_z(t),\sigma_z(0)] \rangle.
\end{eqnarray}
The thermal conductance is obtained from Eq. (\ref{eq:current}) as
\begin{eqnarray}
\label{eq:conductance}
\kappa &=& \lim_{\Delta T \rightarrow 0} \frac{\average{J_L}}{\Delta T} \nonumber \\
&=& \frac{\alpha\gamma_a}{8}\int_{0}^{\infty}d\omega~\mathrm{Im}
[\chi(\omega)]\tilde{I}(\omega)
\left[\frac{\beta\omega/2}{\mathrm{sinh}(\beta\omega/2)}\right]^2,
\end{eqnarray}
where $\Delta T = T_L - T_R$ and $\beta = 1/T$ ($=1/T_L=1/T_R$).
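To make the structure of Eq.~(\ref{eq:conductance}) concrete, the frequency integral can be evaluated numerically. In the toy sketch below, the Lorentzian form of ${\rm Im}[\chi(\omega)]$ is an assumption standing in for the CTQMC result (it mimics a weakly damped two-level resonance at $\Delta$), and the prefactor $\alpha\gamma_a$ is set to one:

```python
import numpy as np

def kappa_toy(T, s=0.5, Delta=1.0, gamma=0.05, omega_c=10.0):
    # Eq. (conductance) with alpha*gamma_a = 1 and a toy Lorentzian Im[chi]
    w = np.linspace(1e-9, 40.0 * max(T, Delta), 400_001)
    im_chi = gamma * w / ((w**2 - Delta**2) ** 2 + (gamma * w) ** 2)
    spect = 2.0 * omega_c ** (1 - s) * w**s * np.exp(-w / omega_c)  # Eq. (spectral_tilde)
    x = np.minimum(w / (2.0 * T), 300.0)   # clip: (x/sinh x)^2 is negligible beyond
    integrand = im_chi * spect * (x / np.sinh(x)) ** 2
    return 0.125 * integrand.sum() * (w[1] - w[0])

print(kappa_toy(0.05), kappa_toy(0.5))   # conductance rises as T approaches Delta
```

The thermal factor $[\beta\omega/2/\sinh(\beta\omega/2)]^2$ acts as a frequency window of width $\sim T$, so the conductance grows sharply once $T$ reaches the resonance scale of ${\rm Im}[\chi]$.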
To evaluate the thermal conductance, the dynamic susceptibility, $\chi(\omega)$, must be calculated in thermal equilibrium.
We numerically calculate the dynamic susceptibility, $\chi(\omega)$, using CTQMC simulations (for details on the CTQMC method, refer to Refs.~\cite{Winter2009,Yamamoto2018}).
Using the CTQMC method, we calculate the spin-spin correlation function $C(\tau)=\Braket{\sigma_z(\tau)\sigma_z(0)}_{\rm eq}$, where $\sigma_z(\tau)$ is the imaginary time path ($0 < \tau < \beta$), and $\average{\cdots}_{\rm eq}$ indicates the thermal average.
The dynamic susceptibility is obtained as:
\begin{eqnarray}
& & \tilde{C}(i\omega_n) = \int_0^\beta \! d\tau \, e^{i\omega_{n}\tau}C(\tau),
\label{eq:analyticC1} \\
& & \chi(\omega) = \tilde{C}(i\omega_n\rightarrow\omega+i\delta).
\label{eq:analyticC2}
\end{eqnarray}
The analytic continuation is performed numerically by the Pad\'e approximation~\cite{Baker1975,Vidberg1977}.
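The Pad\'e continuation step can be sketched with the Thiele continued-fraction scheme of Vidberg and Serene. The snippet below is an illustration, not our production code: it builds the continued-fraction coefficients from samples on the imaginary axis and, for the check, uses a toy rational function in place of actual CTQMC data (a four-point fraction reproduces a rational function of type $(1,2)$ exactly):

```python
import numpy as np

def pade_coefficients(z, u):
    # Thiele continued-fraction coefficients (Vidberg-Serene recursion)
    n = len(z)
    g = np.array(u, dtype=complex)
    a = np.empty(n, dtype=complex)
    a[0] = g[0]
    for p in range(1, n):
        g[p:] = (a[p - 1] - g[p:]) / ((z[p:] - z[p - 1]) * g[p:])
        a[p] = g[p]
    return a

def pade_eval(z, a, w):
    # evaluate the continued fraction at a complex frequency w
    A_prev, A_cur = 0.0 + 0.0j, a[0]
    B_prev, B_cur = 1.0 + 0.0j, 1.0 + 0.0j
    for i in range(1, len(a)):
        A_cur, A_prev = A_cur + (w - z[i - 1]) * a[i] * A_prev, A_cur
        B_cur, B_prev = B_cur + (w - z[i - 1]) * a[i] * B_prev, B_cur
    return A_cur / B_cur

# toy check on a (1,2)-type rational function sampled at mock Matsubara points
f = lambda v: 1.0 / (v + 1.0) + 0.5 / (v + 2.0)
zn = 1j * np.array([1.0, 2.0, 3.0, 4.0])
a = pade_coefficients(zn, f(zn))
w = 0.5 + 0.1j
print(pade_eval(zn, a, w), f(w))
```

For real CTQMC data one would sample $\tilde{C}(i\omega_n)$ at many Matsubara frequencies and evaluate the fraction at $\omega + i\delta$ just above the real axis.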
\section{Result}
\label{sec:result}
For the sub-ohmic case ($0<s<1$), a quantum phase transition occurs at zero temperature when the reservoir-system coupling reaches a critical value $\alpha_{\rm c}$, where $\alpha_{\rm c}$ is a function of $s$ and $\Delta/\omega_{\rm c}$~\cite{Kehrein1996,Bulla2003,Weiss2012}.
For $\alpha < \alpha_{\rm c}$, the ground state is described by a coherent superposition of two wave functions localized at each well ($\sigma_z = \pm 1$) and is called a ``delocalized state''.
For $\alpha > \alpha_{\rm c}$, the ground state becomes two-fold degenerate because the coherent superposition is completely broken owing to the disappearance of quantum tunneling between the two wells.
This state is called a ``localized state''.
The phase diagram of the spin-boson model determined by the CTQMC simulations for $\Delta/\omega_{\rm c}=0.1$ is shown in Fig.~\ref{fig:phase_diagram} (for details on determining the critical value, $\alpha_{\rm c}$, refer to Refs.~\cite{Volker1998, Winter2009,Yamamoto2018}).
The transition separating the two phases is of second-order for the sub-ohmic case (the empty squares) or of the Kosterlitz-Thouless-type~\cite{Chakravarty1982,Bray1982} for the ohmic case (the filled circle).
This phase diagram is consistent with previous numerical studies~\cite{Winter2009,Bulla2003}.
\begin{figure}[tbp]
\centering
\includegraphics[width=8.0cm]{FIG_phase_diagram.pdf}
\caption{
The phase diagram of the sub-ohmic spin-boson model for $\Delta/\omega_{\rm c}=0.1$.
The solid line indicates the second-order transition line separating the delocalized and localized phases.
The empty squares indicate the critical system-reservoir coupling that is numerically determined for the sub-ohmic case ($0<s<1$), whereas the filled circle represents the known transition point $\alpha_{\rm c}=1$ for the ohmic case ($s=1$).
}
\label{fig:phase_diagram}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=8.0cm]{FIG_conductance.pdf}
\caption{
The temperature dependence of the thermal conductance for (a) $\alpha \le \alpha_{\rm c}$ and (b) $\alpha\ge \alpha_{\rm c}$.
The plots represent the CTQMC simulation results for $s=0.5$ and $\Delta/\omega_{\rm c}=0.1$, for which the critical system-reservoir coupling is $\alpha_{\rm c}=0.1074$. }
\label{fig:conductance}
\end{figure}
In Fig.~\ref{fig:conductance}, we show the temperature dependence of the thermal conductance for $s=0.5$ and $\Delta/\omega_{\rm c}=0.1$, where the critical system-reservoir coupling is $\alpha_{\rm c}=0.1074$.
Figs.~\ref{fig:conductance}~(a) and (b) show the delocalized-phase side ($\alpha \le \alpha_{\rm c}$) and the localized-phase side ($\alpha \ge \alpha_{\rm c}$), respectively.
In general, at the critical point, the thermal conductance exhibits distinctive power-law behavior determined by the nature of QPT:
\begin{equation}
\kappa \propto T^{c}, \quad (\alpha = \alpha_{\rm c}),
\label{eq:definitionc}
\end{equation}
where $c$ is the critical exponent dependent on $s$.
As shown in Fig.~\ref{fig:conductance}, the exponent $c$ is 1 for $s=0.5$.
As the system-reservoir coupling is reduced below the critical value ($\alpha < \alpha_{\rm c}$), the temperature dependence of the thermal conductance deviates from that at the critical point.
For a sufficiently small system-reservoir coupling (e.g., $\alpha=0.07$ in Fig.~\ref{fig:conductance}~(a)), the thermal conductance becomes proportional to $T^{2s+1}$ at low temperatures, as expected for heat transport due to co-tunneling (see Appendix~\ref{app:Cotunneling}).
The temperature dependence of the thermal conductance also deviates as the system-reservoir coupling is increased above the critical value ($\alpha > \alpha_{\rm c}$).
Up to $\alpha = 0.13$, its temperature dependence cannot be explained by a simple formula such as the noninteracting-blip approximation, which is expected to hold in the localized phase~\cite{Yamamoto2018}.
Let us discuss the critical exponent, $c$, defined in Eq.~(\ref{eq:definitionc}) for general values of $s$.
The static susceptibility is expressed by:
\begin{eqnarray}
\chi_0 &=& \beta\average{\bar{m}^2}_{\rm eq},
\label{eq:chifluctuation}
\\
\label{eq:magnetization2}
\bar{m} &=& \frac{1}{\beta}\int_0^\beta d\tau~\sigma_z(\tau).
\end{eqnarray}
Combining Eq.~(\ref{eq:chifluctuation}) with Eq.~(\ref{eq:magnetization2}), the static susceptibility is expressed as $\chi_0 = \int_0^{\beta} d\tau C(\tau)$ with the spin-spin correlation function $C(\tau)=\langle \sigma_z(\tau) \sigma_z(0) \rangle_{\rm eq}$.
At the critical point, the spin-spin correlation function exhibits the power-law decay:
\begin{eqnarray}
C(\tau) = C(\beta-\tau) \sim \tau^{-\eta}, \quad (\omega_c^{-1} \ll \tau \ll \beta/2),
\end{eqnarray}
where $\eta$ is the critical exponent related to the spin dynamics.
Then, the temperature dependence of the static susceptibility at the critical point is obtained:
\begin{eqnarray}
\chi_0 \sim \beta^{1-\eta}.
\end{eqnarray}
By using Eqs.~(\ref{eq:analyticC1}) and (\ref{eq:analyticC2}), the critical behavior of the imaginary part of the dynamic susceptibility is obtained:
\begin{eqnarray}
\label{eq:dynamic_sus_behavior2}
{\rm Im}[\chi(\omega)] &\sim& \omega^{\eta - 1}.
\end{eqnarray}
Substituting this into Eq.~(\ref{eq:conductance}), the thermal conductance at the critical point behaves as $\kappa \sim T^{c}$, where the exponent is given by:
\begin{eqnarray}
c = s + \eta.
\end{eqnarray}
The critical exponent $\eta$ is a function of $s$ and has been analyzed in previous theoretical studies~\cite{Luijten_thesis,Winter2009}.
The phase transition for $0<s\le 1/2$ belongs to the mean-field universality class and leads to $\eta = 1/2$.
This conclusion is consistent with the critical exponent $c=1$ obtained by the CTQMC simulation for $s=1/2$ (see Fig. \ref{fig:conductance}).
For $1/2<s<1$, $\eta$ is a nontrivial function of $s$ and is evaluated by the $\varepsilon$-expansion~\cite{Luijten_thesis} (see Appendix~\ref{app:CriticalExponent}).
In summary, the exponent of the thermal conductance is given as follows:
\begin{eqnarray}
c = \left\{ \begin{array}{ll}
s + 1/2 & (s \le 1/2), \\
1 - \varepsilon/2 + \varepsilon^2A(s)/3s + \mathcal{O}(\varepsilon^3) & (s > 1/2),
\end{array} \right.\\ \nonumber
\end{eqnarray}
where $\varepsilon = 2s-1$ and $A(s) = s[\psi(1)-2\psi(s/2)+\psi(s)]$.
Finally, we emphasize that the critical behavior near QPT can be observed for other physical quantities~\cite{Luijten_thesis,Tong2006,Winter2009}.
We summarize the critical exponents for measurable quantities in Appendix~\ref{app:CriticalExponent}.
\section{Experimental Realization}
\label{sec:realization}
\begin{figure}[tbp]
\centering
\includegraphics[width=8.0cm]{FIG_model_flux.pdf}
\caption{
(a) A superconducting circuit composed of a flux qubit and two transmission lines.
(b) The circuit of the transmission lines proposed to realize the sub-ohmic spin-boson model, consisting of resistances $R_{i}$, inductances $L_{i}$, and capacitances $C_{i}$.
}
\label{fig:model}
\end{figure}
In this section, we discuss a superconducting circuit that realizes a spin-boson Hamiltonian with sub-ohmic reservoirs.
A previous theoretical study~\cite{Tong2006} has shown that a spatially-uniform transmission line can realize a sub-ohmic reservoir with $s=0.5$.
For a controlled experiment of the QPT, however, it is favorable to realize a sub-ohmic reservoir with an arbitrary value of $s$.
We propose a superconducting circuit to realize a sub-ohmic reservoir with an arbitrary $s$ by introducing spatial dependence into the circuit elements.
We consider a flux qubit coupled to two transmission lines (or two junction arrays), as shown in Fig.~\ref{fig:model}~(a).
The flux qubit is composed of three small Josephson junctions~\cite{Mooij1999}.
By tuning the external magnetic field, the flux qubit acts like a double-well potential system, and its effective Hamiltonian is given by Eq.~(\ref{eq:H_system})
(for detailed derivation, see Appendix~\ref{app:CircuitModel}).
Then, the flux qubit coupled to the transmission lines can be described by the spin-boson model.
Using linear response theory~\cite{Schon1990,Tong2006,Bruus2004}, the spectral density function is expressed by the joint impedance of the two transmission lines ($Z(\omega) = \sum_{\nu} Z_\nu(\omega)$) as follows:
\begin{eqnarray}
\label{eq:spectral_impedance}
I(\omega) &=& \sum_{\nu} I_\nu(\omega) = \frac{4\phi_0^2\Braket{\varphi_-}^2}{\pi} I_0(\omega), \\
I_0(\omega) &=& \omega{\rm Re}[Z(\omega)^{-1}],
\label{eq:spectral_impedance2}
\end{eqnarray}
where $\phi_0 = \hbar/2e$, and $\pm \Braket{\varphi_-}$ is an expectation value of the phase at the flux qubit.
Detailed discussion is given in Appendix~\ref{app:CircuitModel}.
To realize a sub-ohmic reservoir with an arbitrary exponent, $s$, we propose a superconducting circuit, as shown in Fig.~\ref{fig:model}~(b).
The circuit comprises resistances $R_j$, inductances $L_j$, and capacitances $C_j$ ($j=1,2,\cdots,N$).
For simplicity, we assume that the two transmission lines are constructed by the same circuit.
The joint impedance of the two transmission lines is then calculated as $Z(\omega) = 2 Z_N(\omega)$, where $Z_j(\omega)$ ($j = 1,2,\cdots, N$) is given by a recurrence relation:
\begin{eqnarray}
\label{eq:recurrence}
Z_j(\omega) = R_j + i\omega L_j + \frac{1}{Z_{j-1}(\omega)^{-1} + i\omega C_j},
\end{eqnarray}
with $Z_0(\omega)^{-1} = 0$.
Now, we assume that circuit elements have spatial dependence:
\begin{eqnarray}
\label{eq:setting}
& & R_j = R_0(1-j/N)^n, \\
& & L_j = L_0, \\
& & C_j = C_0(1-j/N)^m,
\end{eqnarray}
where $n$ and $m$ are non-negative real numbers.
We show the spectral density function, $I_0(\omega)$, of this circuit in Fig.~\ref{fig:spectral} for $(n,m) = (2,2)$ and $(6,6)$.
The parameters are set to $R_0 = 1\, {\rm k}\Omega$, $L_0 = 13 \, {\rm nH}$, $C_0 = 1 \, {\rm pF}$, and $N = 10^4$, with reference to experimental studies on Josephson junction arrays~\cite{Miyazaki2002}.
In Fig.~\ref{fig:spectral}, we added $1\%$ relative randomness to each circuit element to examine the tolerance of the spectrum against fluctuations of the circuit parameters.
\begin{figure}[tbp]
\centering
\includegraphics[width=8.0cm]{FIG_spectral2.pdf}
\caption{
The spectral density function of the superconducting circuit for $s=0.5$ and $0.25$, corresponding to $(n,m) = (2,2)$ and $(6,6)$.
The circuit parameters are set as $N=10^4$, $R_0=1~{\rm k}\Omega$, $L_0=13~{\rm nH}$, and $C_0 = 1~{\rm pF}$.
}
\label{fig:spectral}
\end{figure}
We find that the spectral density function is approximately proportional to $\omega^s$ over a certain frequency range, with an exponent $0<s<1$.
This indicates that the present circuit can realize a sub-ohmic reservoir with an arbitrary value of $s$.
Indeed, an analytic calculation yields:
\begin{eqnarray}
I(\omega) \propto \omega^{2/(m+2)}, \quad
(\omega^* \ll \omega \ll \omega_{\rm c}).
\end{eqnarray}
The detailed calculation is given in Appendix~\ref{app:spect}.
This result is in good agreement with Fig.~\ref{fig:spectral}; $m=2$ and $6$ correspond to $s=0.5$ and $0.25$, respectively.
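This power law can be cross-checked by iterating the recurrence relation (\ref{eq:recurrence}) directly. The sketch below uses the parameter values quoted above (it is not the code used for Fig.~\ref{fig:spectral}, and omits the $1\%$ randomness) and fits the log-log slope of $I_0(\omega)$ inside the window $\omega^* \ll \omega \ll \omega_{\rm c}$:

```python
import numpy as np

def spectral_I0(ws, N=10_000, R0=1e3, L0=13e-9, C0=1e-12, n=2, m=2):
    # iterate Eq. (recurrence) from the far end (j = 1) to the qubit end (j = N)
    Z = None
    for j in range(1, N + 1):
        R_j = R0 * (1.0 - j / N) ** n
        C_j = C0 * (1.0 - j / N) ** m
        series = R_j + 1j * ws * L0
        if Z is None:                    # Z_0^{-1} = 0: only the capacitor branch
            Z = series + 1.0 / (1j * ws * C_j)
        else:
            Z = series + 1.0 / (1.0 / Z + 1j * ws * C_j)
    Z_total = 2.0 * Z                    # two identical lines seen by the qubit
    return ws * (1.0 / Z_total).real     # I_0(omega) = omega Re[Z(omega)^{-1}]

ws = np.logspace(9.0, 9.5, 13)           # rad/s, inside (omega*, omega_c)
I0 = spectral_I0(ws)
slope, _ = np.polyfit(np.log(ws), np.log(I0), 1)
print(slope)   # expected near s = 2/(m+2) = 0.5 for (n, m) = (2, 2)
```

The fitted slope approximates the sub-ohmic exponent $s = 2/(m+2)$; changing $m$ in the call shifts the slope accordingly.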
The lower frequency limit for the sub-ohmic spectral density function, $\omega^*$, is calculated as follows:
\begin{eqnarray}
\omega^* = \left[\left(\frac{m}{2N}\right)^{2n}\frac{R_0^{m+2}}{C_0^{n}L_0^{m+n+2}}\right]^{1/(m+2n+2)}.
\label{eq:omegastar}
\end{eqnarray}
Therefore, the exponent $n$ for the resistance~(\ref{eq:setting}) controls the lower limit of the sub-ohmic spectral density function.
In contrast, the higher frequency limit, $\omega_{\rm c}$, is a complex function of the circuit parameters.
In summary, the conditions for realizing a quantum phase transition are as follows:
First, the tunneling amplitude, $\Delta$, must be in the range of $\omega^* \ll \Delta \ll \omega_{\rm c}$.
Second, the dimensionless system-reservoir coupling, $\alpha$, should be tuned around the predicted critical point, $\alpha_{\rm c}$.
For a typical value of the tunneling amplitude of the flux qubit, $\Delta = 25~{\rm GHz}$~\cite{Magazzu2017}, we find that both conditions are satisfied for the parameters used in Fig.~\ref{fig:spectral} for $s = 0.5$ ($m = 2$).
For this parameter set, the critical behavior of the thermal conductance at the QPT described by Eq.~(\ref{eq:definitionc}) is expected in the temperature range $\omega^* < T < \Delta$ when the system-reservoir coupling is tuned to $\alpha_{\rm c}$.
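For concreteness, the lower edge of the sub-ohmic window can be evaluated directly from Eq.~(\ref{eq:omegastar}) with the parameter values quoted above (a back-of-the-envelope sketch in SI units, with $\omega^*$ in rad/s):

```python
import math

def omega_star(N, R0, L0, C0, n, m):
    # Eq. (omegastar): lower edge of the sub-ohmic window, in rad/s
    num = (m / (2.0 * N)) ** (2 * n) * R0 ** (m + 2)
    den = C0 ** n * L0 ** (m + n + 2)
    return (num / den) ** (1.0 / (m + 2 * n + 2))

w_star = omega_star(N=10_000, R0=1e3, L0=13e-9, C0=1e-12, n=2, m=2)
print(w_star, "rad/s")   # of order 1e8 rad/s for these parameters
```

Increasing $N$ or decreasing $R_0$ pushes $\omega^*$ down and widens the window in which the critical power law of Eq.~(\ref{eq:definitionc}) should be visible.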
\section{Summary}
\label{sec:summary}
We studied quantum critical phenomena in heat transport by using a spin-boson model with sub-ohmic reservoirs.
By implementing continuous-time quantum Monte Carlo simulations, we showed that the thermal conductance at the critical point has a characteristic power-law temperature dependence determined by the nature of the QPT.
We also clarified how the critical exponent of the thermal conductance is related to other critical exponents discussed in previous theoretical studies.
Finally, we proposed a superconducting circuit that realizes sub-ohmic reservoirs for an arbitrary value of the exponent $s$.
We expect that our study will provide a new platform for experiments attempting to access quantum phase transitions directly upon measuring the transport properties of mesoscopic devices.
Although we used the flux qubit to realize the spin-boson model, other types of qubits such as a charge qubit or a transmon qubit could be considered.
We will present detailed descriptions of the other types of qubits in other studies.
\section*{Acknowledgement}
The authors thank K. Saito for close discussions and a critical reading of the manuscript, and R. Sakano and T. Tamaya for helpful comments.
T.K. was supported by JSPS Grants-in-Aid for Scientific Research (No. JP24540316 and JP26220711).
\section{Introduction}
\par Online payment has become an indispensable method of payment in everyday life. To ensure security, online payment is conducted through an authoritative agent. However, the cost of establishing such an authority is enormous, which increases the cost of payment. Blockchain technology can realize P2P payment\cite{aste2017blockchain}. Blockchain is a distributed ledger that is copied synchronously across multiple users, which differs from the traditional centralized ledger, and Bitcoin uses it to solve the double-spending problem\cite{tschorsch2016bitcoin}. Blockchain has surpassed its original design and become a basic technology for realizing decentralized control. Compared with centralized control, a blockchain system has the advantages of highly redundant distributed storage, time-sequenced data, resistance to tampering and forgery, decentralized trust, automatic execution of smart contracts, and security and privacy protection.
\par The key process of blockchain is solving the hash puzzle, which requires high computing power. However, mobile devices in IoT have insufficient computing power. Therefore, edge computing has attracted increasing attention from scholars\cite{satyanarayanan2017emergence}\cite{luo2020edge}. The goal of edge computing is to enable a large number of IoT devices to run applications, which allows these devices to provide their computing power for mining\cite{yu2017survey}. Therefore, using edge computing to solve the hash puzzle in blockchain is a growing trend\cite{yeow2017decentralized}. By combining blockchain with edge computing, edge servers can organize a large number of devices to provide computing power, storage resources, etc., which can greatly improve the transmission efficiency of the system and ensure data integrity and computing effectiveness. With the participation of edge computing, the blockchain system obtains a large amount of computing power from IoT, which reduces the mining burden on devices with limited computing power. It enables off-chain storage and off-chain computing to be realized, and satisfies the requirement for scalable storage and computing on the blockchain\cite{liu2017blockchain}.
\par However, blockchain is not widely used in mobile environments. Due to the limited computing power of mobile devices, it costs too much to solve the proof-of-work (PoW) puzzle. Therefore, we need to study mining strategies in the mobile environment further to promote the application of blockchain in IoT. An incentive mechanism based on edge computing is considered to overcome the lack of computing power in mobile devices and satisfy the requirements of mining. Edge servers act as miners, but they are few in number, whereas mobile devices are numerous. Therefore, the system encourages the edge server to actively recruit mobile devices to provide computing power to complete the mining task. When the mining is successful, the edge server and the mobile devices share the reward. The main contributions of this work are as follows.
\begin{enumerate}
\item We discuss the challenges of mining in mobile blockchain. To address the high cost of mining by mobile devices alone, the edge server recruits mobile devices to provide computing power for mining and shares the profit with them. To maximize the profit of both parties, this paper formulates a two-stage Stackelberg game model to determine the profit distribution of mining.
\item We adopt backward induction to prove that a unique Nash equilibrium exists in this game under both the same expected fee and different expected fees.
\item We show the profit curve of the edge server for different ratios between the computing power of the edge server and that of the mobile devices. The results show that, under the same conditions, the computing power of the edge server contributes more to its profit than that of the recruited mobile devices.
\end{enumerate}
\par The rest of this paper is organized as follows. Related work is briefly reviewed in Section~\ref{related-work-section}. Section~\ref{model-section} details the system model and problem formulation. Section~\ref{Game-Equilibrium-Analysis} analyzes the game equilibrium. Performance evaluation and analysis are given in Section~\ref{simlation-section}. The last section concludes the paper.
\section{Related work}\label{related-work-section}
\par In recent years, integrating the blockchain into edge computing has gradually attracted the attention of scholars. In \cite{khan2019blockchain}, the authors designed an architecture that combines blockchain technology with edge computing and achieves fine-grained management of data from different departments. They synchronize data storage and processing by deploying regional blockchains and reach agreement through sharing. In \cite{damianou2019architecture}, the authors overcome the constraints of the Internet of things by combining edge computing and the blockchain, which ensures the security and privacy of IoT devices when sensitive personal data are transferred. IoT devices can thus be freed from their dependence on storage devices, improving overall performance. In \cite{zhang2019edge}, the authors design an architecture that combines edge intelligence with blockchain authorization to achieve efficient edge service management. For edge resource sharing, they design a cross-domain scheduling mechanism and use a credit approval mechanism to ensure security, which reduces service costs and improves service capability.
\par Much research has been carried out on blockchain mining schemes formulated as game models. Houy \cite{houy2014bitcoin} proposed a noncooperative game model of the mining process in which the solution events of the POW puzzle follow a Poisson distribution. When the Nash equilibrium is reached, the optimal solution of both sides is obtained, but only the equilibrium between two miners is derived. Kiayias et al. \cite{kiayias2016blockchain} considered mining as a competitive game. Assuming that the miners are rational, they compete in mining and choose whether to broadcast their solutions based on their profit; under some conditions the optimal strategy is not to propagate. In addition, the authors proved that this model admits multiple Nash equilibria. Similar to \cite{kiayias2016blockchain}, L. Wang and Y. Liu \cite{wang2015exploring} modeled mining as a stochastic game in which miners decide whether to propagate a block according to their estimated reward.
\par Traditionally, miners in the blockchain mine individually. The advantage of solo mining is that a miner who successfully computes a hash value meeting the condition obtains the entire reward. However, this scheme is inefficient and yields unstable earnings. To obtain stable rewards, researchers have introduced mining pools, another way of concentrating resources to mine blocks\cite{beccuti2017bitcoin}. Lewenberg et al. \cite{lewenberg2015bitcoin} proposed a mining scheme based on cooperative game theory, using it to study which pool a miner should join and how the rewards should be shared; the interaction between miners and pools is modeled as a coalitional game. For data processing in blockchain-empowered IoT, \cite{chen2019cooperative} proposed a multi-hop cooperative distributed algorithm for mining tasks, formulating the competition among devices as a game to reduce their computing cost; each device decides its own computing power to maximize its benefit. The researchers found that when such schemes are applied to the bitcoin network, some miners always move to other pools for higher expected rewards under any incentive allocation scheme.
\par However, the above works rely on dedicated nodes for mining and do not consider mobile environments. In the mobile Internet, the computing power of a single smart device is limited, but such devices are numerous. It is therefore a new opportunity for blockchain applications that edge servers organize large numbers of mobile devices and harness their idle computing power for mining.
\section{System Model and Problem Formulation}\label{model-section}
\subsection{System Model}
\par The consensus management process consists of the following steps: block broadcast, mining, propagation, verification, confirmation, and addition of the new block. A data block containing transaction records is periodically broadcast in the blockchain network; these records are the issued transactions. The miners solve a crypto-puzzle with their computing power according to the given parameters of the blockchain \cite{li2017securing}. A miner that solves the given puzzle propagates the result and the block to the other miners in the blockchain system for verification. The fastest miner whose solution passes verification obtains the reward $R$ from the blockchain system. To finish the computation and propagation as quickly as possible, the edge servers recruit mobile devices to compute alongside themselves, and the verification tasks are handed over entirely to the mobile devices. Each mobile device is assigned computation and verification tasks according to its computing power and is then rewarded by the edge server that recruited it. Each edge server shares the reward from the blockchain with its mobile devices according to their contributions. In addition, the calculation and verification of blocks incur communication overhead.
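The proof-of-work step described above can be illustrated with a toy hash-puzzle search. This is a generic sketch, not the parametrization used in the paper: the header string, difficulty, and nonce budget are all hypothetical.

```python
import hashlib

def mine(block_header: str, difficulty: int, max_nonce: int = 2_000_000):
    """Search for a nonce whose SHA-256 digest has `difficulty` leading hex zeros."""
    target = "0" * difficulty
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # puzzle solved: propagate block and result
    return None, None             # nonce budget exhausted, puzzle unsolved

nonce, digest = mine("prev:abc123|txs:10", difficulty=3)
```

A device's chance of finding such a nonce first grows with the number of hashes it can try per second, which is why the success probability in the model below is taken to be proportional to relative computing power.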
\subsection{Problem Formulation}
\par Let $\mathnormal E=\left\{E_1, \cdots, E_x\right\}$ denote a group of edge servers. Each edge server recruits mobile devices (denoted by $\mathcal M=\left\{M_1, \cdots, M_y\right\}$) to provide computation and verification services. The miners compete with each other in mining, and the fastest miner that solves the puzzle and passes verification obtains the reward of the blockchain system. The reward consists of a fixed reward $R$ and a variable transaction reward $TR$. For miner $E_i$, the probability of success in mining is determined by its computing power and is represented by $\Pr=\alpha e^{-\gamma zT}$ \cite{houy2014bitcoin}, where $\alpha$ is the relative computing power,
\begin{eqnarray}
\alpha_i(x_i,\mathbf x_{-i})=\frac{x_i}{\sum_{j \in \mathcal M}x_j},\quad\alpha_i>0
\end{eqnarray}
and $\sum_{j \in \mathcal M}\alpha_j=1$. The process of solving the puzzle follows a Poisson distribution with parameter $\gamma$\cite{xiong2017edge}, $z>0$ is a transmission delay factor, and $T$ is the number of transactions in one block.
The profit of the edge server $E_i$ consists of three parts: 1) the fixed reward from mining; 2) the fees paid to the recruited mobile devices; 3) electricity and other costs. The utility function of the edge server is formulated as follows:
\begin{eqnarray}
U_e=(R+TR) e^{-\gamma zT}-\sum_{i=1}^n f_i-\Phi.\label{utility of edge}
\end{eqnarray}
The profit of a mobile device consists of three parts: 1) the expected fee provided by the edge server; 2) the verification of mined blocks; 3) electricity and other costs. The profit function of a mobile device is formulated as follows:
\begin{eqnarray}
U_m=f_i\frac{x_i}{\sum_{j \in \mathcal M}x_j}e^{-\gamma zT_m}-\varphi x_i.\label{utility of miner i}
\end{eqnarray}
Here $f_i$ is the reward from the edge server, which depends on the device's computing power, and $\varphi$ is the unit cost of resources such as computing power and memory.
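As a quick sanity check, the two utility functions (2) and (3) can be transcribed directly into code. This is a plain numerical sketch with hypothetical parameter values, not code from the paper.

```python
import math

def edge_utility(R, TR, gamma, z, T, fees, Phi):
    """Eq. (2): discounted block reward minus the fees paid to devices and fixed costs."""
    return (R + TR) * math.exp(-gamma * z * T) - sum(fees) - Phi

def miner_utility(f_i, x_i, x_all, gamma, z, T_m, varphi):
    """Eq. (3): device i's expected share of its fee minus its resource cost."""
    return f_i * (x_i / sum(x_all)) * math.exp(-gamma * z * T_m) - varphi * x_i

# Illustrative values: reward 100+10, no delay discount, two devices paid 5 each
u_e = edge_utility(R=100, TR=10, gamma=0.0, z=1.0, T=10, fees=[5, 5], Phi=2)
u_m = miner_utility(f_i=5, x_i=2, x_all=[2, 2], gamma=0.0, z=1.0, T_m=10, varphi=1)
```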
The mining process between the edge server and the mobile devices is designed as a two-stage Stackelberg game. The edge server acts as the leader and sets the expected fee for the mobile devices in Stage \uppercase\expandafter{\romannumeral1}. The mobile devices act as followers and provide their computing power to the leader in Stage \uppercase\expandafter{\romannumeral2}. Obviously, if the expected fee is less than their own consumption, the followers will not accept the leader's recruitment. The minimum consumption of a mobile miner is defined as $\omega_{min}$; hence the expected fee offered by the leader to each mobile miner must exceed the follower's consumption. The objective functions of the leader and the followers can then be expressed as follows:
\begin{eqnarray}
\text{Leader:}\quad \max F(x_i), \quad 1 \ge \alpha_i > 0 \notag\\
\text{Follower:}\quad \max f(\omega), \quad \omega > \omega_{min}
\end{eqnarray}
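The backward-induction logic of the two-stage game can be sketched numerically: for each candidate fee, the followers' best response is computed first, and the leader then picks the fee that maximizes its own profit. All parameter values below are illustrative assumptions, and the grids are coarse by design.

```python
import math

a = 90.0     # (R + TR) * exp(-gamma*z*T_m): discounted block reward (assumed)
X = 10.0     # computing power of the edge server (assumed)
phi1 = 0.1   # unit cost of the miners' computing power (assumed)
disc = 0.9   # exp(-gamma*z*T_m): delay discount seen by the miners (assumed)

def follower_best_Y(P, grid):
    """Stage II: miners choose total power Y maximizing P*disc*Y/(X+Y) - phi1*Y."""
    return max(grid, key=lambda Y: P * disc * Y / (X + Y) - phi1 * Y)

def leader_profit(P, Y):
    """Stage I: extra profit a*Y/(X+Y) - P from the recruited power, net of the fee."""
    return a * Y / (X + Y) - P

Y_grid = [0.1 * k for k in range(1, 2001)]
P_grid = [0.5 * k for k in range(1, 201)]
P_star = max(P_grid, key=lambda P: leader_profit(P, follower_best_Y(P, Y_grid)))
```

Solving the inner problem for every candidate fee is exactly the backward induction used in the equilibrium analysis, where closed forms replace the grids.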
\section{Game Equilibrium Analysis}\label{Game-Equilibrium-Analysis}
In our model, we consider two situations for the mobile devices: the same computing power for each miner and different computing power for each miner. Traditional backward induction\cite{xiong2017edge} is adopted to analyze this game.
\subsection{The same computing power for each miner} \label{The same computer power for each miner}
\emph{1) Stage \uppercase\expandafter{\romannumeral2}: Miners' Game}: we first analyze the simple situation in which each miner has the same computing power. These miners can be considered as a whole, with total computing power $Y$; the computing power of the edge server is $X$. The miners then compete with the edge server to maximize their profit by providing their computing power, which forms the Edge Miner Game (EMG) $E_m=\{E,\mathcal M,U_m\}$, where $E$ is the edge server, $\mathcal M$ is the group of mobile miners recruited by the edge server, and $U_m$ is the utility function of the mobile miners. The profit function of the mobile miners in Stage \uppercase\expandafter{\romannumeral2} is represented as:
\begin{eqnarray}
U_m=P\frac{Y}{X+Y}e^{-\gamma zT_m}-\varphi_1 Y.\label{utility of total miners}
\end{eqnarray}
where $P$ is the expected fee paid to the miners by the edge server.
{\bf Theorem 1.} There is a Nash equilibrium in $E_m=\{E,\mathcal M,U_m\}$.
\par $proof.$ The computing power space of each miner is a continuous and non-empty range, and $U_m$ is obviously continuous with respect to $Y$. Let $\alpha=\frac{Y}{X+Y}$. We then prove the concavity of $U_m$: it suffices to show that the second derivative of $U_m$ with respect to $Y$ is negative. The first and second derivatives are computed as follows.
\begin{eqnarray}
\frac{\partial U_m}{\partial Y}=Pe^{-\gamma zT_m} \frac{\partial\alpha}{\partial Y}-\varphi_1 \label{the first order U_m}
\end{eqnarray}
and
\begin{eqnarray}
\frac{\partial^2 U_m}{\partial Y^2}=Pe^{-\gamma zT_m} \frac{\partial^2\alpha}{\partial Y^2}
\end{eqnarray}
where
\begin{eqnarray}
\frac{\partial \alpha}{\partial Y}= \frac{X}{(X+Y)^2}>0
\end{eqnarray}
and
\begin{eqnarray}
\frac{\partial^2 \alpha}{\partial Y^2}= -\frac{2X}{(X+Y)^3}<0
\end{eqnarray}
Therefore, $U_m$ is concave with respect to $Y$. According to \cite{han2012game}, a Nash equilibrium exists in the EMG $E_m$.
Setting (\ref{the first order U_m}) to zero, we obtain the best response function of the miners:
\begin{eqnarray}
\frac{\partial U_m}{\partial Y}=Pe^{-\gamma zT_m} \frac{\partial\alpha}{\partial Y}-\varphi_1 =0 \label{the first derivative equal 0 in I of miners}
\end{eqnarray}
\begin{eqnarray}
Y^*=F(X)=\sqrt{\frac{Pe^{-\gamma zT_m}X}{\varphi_1}}-X \label{unique 1 Nash}
\end{eqnarray}
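The best response in (\ref{unique 1 Nash}) can be checked numerically: with illustrative parameter values (assumed, not from the paper), the closed-form $Y^*$ should dominate any alternative on a fine grid.

```python
import math

P, X, phi1, disc = 20.0, 10.0, 0.1, 0.9   # assumed toy values; disc = exp(-gamma*z*T_m)

def U_m(Y):
    """Total miners' utility: expected fee share minus resource cost."""
    return P * disc * Y / (X + Y) - phi1 * Y

Y_star = math.sqrt(P * disc * X / phi1) - X   # closed-form best response
# The closed-form optimum should beat every point of a fine grid over (0, 200]
dominated = all(U_m(Y_star) >= U_m(0.01 * k) for k in range(1, 20001))
```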
{\bf Theorem 2.} There is a unique Nash equilibrium in the EMG $E_m$ if the following condition is satisfied:
\begin{eqnarray}
X<\mathop{min}\limits_{P>0}\{\frac{Pe^{-\gamma zT_m}}{4\varphi_1}, \frac{Pe^{-\gamma zT_m}}{\varphi_1}\} \label{condition 1}
\end{eqnarray}
\par Let $Y^*$ denote the Nash equilibrium of the EMG. If $Y^*=F(X)$ is a standard function, then the EMG $E_m$ has a unique Nash equilibrium\cite{han2012game}.
{\bf Definition 1.} A function $F(x)$ is a standard function if the following three conditions are satisfied\cite{han2012game}:
\par (1) Positivity: $F(x)>0$.
\par (2) Monotonicity: if $x \le x'$, then $F(x)\le F(x')$.
\par (3) Scalability: for all $\lambda>1$, $\lambda F(x)>F(\lambda x)$.
Firstly, for positivity: from (\ref{unique 1 Nash}), $F(X)>0$ whenever $X<\frac{Pe^{-\gamma zT_m}}{\varphi_1}$, and under the condition in (\ref{condition 1}) we therefore obtain
\begin{eqnarray}
\sqrt{\frac{Pe^{-\gamma zT_m}X}{\varphi_1}}-X>0
\end{eqnarray}
\begin{figure*}[b]
\hrulefill
\begin{align*}
F(x')-F(x) &=\sqrt{\frac{Pe^{-\gamma zT_m}X'}{\varphi_1}}-X'-(\sqrt{\frac{Pe^{-\gamma zT_m}X}{\varphi_1}}-X) \\
&=\sqrt{\frac{Pe^{-\gamma zT_m}}{\varphi_1}}(\sqrt{X'}-\sqrt{X})-(X'-X) \\
&= (\sqrt{\frac{Pe^{-\gamma zT_m}}{\varphi_1}}-\sqrt{X'}-\sqrt{X})(\sqrt{X'}-\sqrt{X})
\tag{13}
\end{align*}
\begin{align*}
\lambda F(x)-F(\lambda X) &=\lambda\sqrt{\frac{Pe^{-\gamma zT_m}X}{\varphi_1}}-\lambda X-(\sqrt{\frac{Pe^{-\gamma zT_m}\lambda X}{\varphi_1}}-\lambda X) \\
&=(\lambda -\sqrt{\lambda})\sqrt{\frac{Pe^{-\gamma zT_m}X}{\varphi_1}}
\tag{14}\label{for condition 3 of definition 1}
\end{align*}
\end{figure*}
Secondly, let $X' \ge X$; we compute $F(X')-F(X)$, as shown in (13). Obviously, $\sqrt{X'}-\sqrt{X} \ge 0$. In addition,
\begin{eqnarray}
\setcounter{equation}{15}
\sqrt{\frac{Pe^{-\gamma zT_m}}{\varphi_1}}-\sqrt{X'}-\sqrt{X} \ge \notag\\ \sqrt{\frac{Pe^{-\gamma zT_m}}{\varphi_1}}-2\sqrt{X'}
\end{eqnarray}
Under the condition in (\ref{condition 1}), we can prove that
\begin{eqnarray}
\sqrt{\frac{Pe^{-\gamma zT_m}}{\varphi_1}}-2\sqrt{X'} \ge 0
\end{eqnarray}
Thus, $F(X')-F(X) \ge 0$.
Finally, for condition (3) of Definition 1, we verify that $\lambda F(x)>F(\lambda x)$ for all $\lambda>1$. The calculation is shown in (\ref{for condition 3 of definition 1}), whose right-hand side is positive because $\lambda>\sqrt{\lambda}$ for $\lambda>1$.
\par Therefore, the three conditions of Definition 1 hold for the response function in (\ref{unique 1 Nash}). This completes the proof.
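The three standard-function properties can also be spot-checked numerically for $F(X)=\sqrt{Pe^{-\gamma zT_m}X/\varphi_1}-X$. The parameter values are illustrative assumptions, and $X$ is sampled inside the bound of condition (\ref{condition 1}).

```python
import math

P, phi1, disc = 20.0, 0.1, 0.9        # assumed toy values; disc = exp(-gamma*z*T_m)
bound = P * disc / (4 * phi1)         # the uniqueness condition keeps X below this

def F(X):
    """Miners' best response from Eq. (11)."""
    return math.sqrt(P * disc * X / phi1) - X

xs = [bound * k / 100 for k in range(1, 100)]             # X sampled in (0, bound)
positive = all(F(x) > 0 for x in xs)                       # positivity
monotone = all(F(b) >= F(a) for a, b in zip(xs, xs[1:]))   # monotonicity
scalable = all(1.5 * F(x) > F(1.5 * x) for x in xs)        # scalability at lambda = 1.5
```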
\par From the above, the unique Nash equilibrium of the miners in the EMG $E_m$ is $Y^*=\sqrt{\frac{Pe^{-\gamma zT_m}X}{\varphi_1}}-X$. We now analyze the optimal profit of the edge server in the first stage.
\emph{2) Stage \uppercase\expandafter{\romannumeral1}: Edge server's optimal profit}: given the Nash equilibrium of the computing power provided by the mobile miners in the EMG $E_m$, the edge server, as the leader, optimizes the expected fee to achieve the optimal profit in (\ref{utility of edge}). The computing power $X$ belongs to the edge server itself, and the revenue generated by $X$ accrues to the edge server. What we need to analyze is the additional benefit the edge server obtains from recruiting mobile miners, whose total computing power is $Y$. The additional benefit of the edge server is therefore represented as:
\begin{eqnarray}
\Delta U_e=(R+TR)\frac{Y}{X+Y}e^{-\gamma zT_m}-P \label{the first max profit of edge}
\end{eqnarray}
Thus, substituting (\ref{unique 1 Nash}) into (\ref{the first max profit of edge}) and letting $a=(R+TR)e^{-\gamma zT_m}$, the additional profit maximization of the edge server simplifies to
\begin{eqnarray}
\mathop{maximize}\limits_{P>0} \Delta U_e=a(1-\sqrt{\frac{X\varphi_1}{Pe^{-\gamma zT_m}}})-P \label{the first profit of edge}
\end{eqnarray}
{\bf Theorem 3.} Under the same computing power of the miners, the edge server can achieve the optimal profit.
\par $proof.$ From (\ref{the first profit of edge}), we compute the first and second derivatives of $\Delta U_e$ with respect to the expected fee $P$:
\begin{eqnarray}
\frac{\partial\Delta U_e }{\partial P}=\frac{a}{2}\sqrt{\frac{X\varphi_1}{e^{-\gamma zT_m}}}P^{-\frac{3}{2}}-1 \label{the first derivative of edge I}
\end{eqnarray}
and
\begin{eqnarray}
\frac{\partial^2\Delta U_e }{\partial P^2}=-\frac{3a}{4}\sqrt{\frac{X\varphi_1}{e^{-\gamma zT_m}}}P^{-\frac{5}{2}}<0 \label{the second derivative of edge I}
\end{eqnarray}
From (\ref{the second derivative of edge I}), $\Delta U_e$ is strictly concave in $P$, so the first-order condition obtained by setting (\ref{the first derivative of edge I}) to zero yields the unique optimal expected fee, and the edge server achieves its optimal profit in the EMG $E_m$ under the same computing power of the miners. This completes the proof.
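Given that the additional profit is strictly concave in the fee once the payment to the miners is accounted for, the first-order condition has a closed form, which a coarse grid search reproduces. The numbers below are illustrative assumptions; the profit expression is the leader's additional benefit net of the fee, with the miners' best response of Eq. (11) substituted in.

```python
import math

a, X, phi1, disc = 90.0, 10.0, 0.1, 0.9   # assumed; a = (R+TR)*exp(-gamma*z*T_m)

def extra_profit(P):
    """Leader's additional profit at the miners' best response, net of the fee P."""
    return a * (1 - math.sqrt(X * phi1 / (P * disc))) - P

# First-order condition: (a/2) * sqrt(X*phi1/disc) * P**(-3/2) = 1
P_star = ((a / 2) * math.sqrt(X * phi1 / disc)) ** (2 / 3)
P_grid = max((0.01 * k for k in range(1, 10001)), key=extra_profit)
```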
\subsection{Different computing power for each miner}
\emph{1) Stage \uppercase\expandafter{\romannumeral2}: Miners' Game}: in this situation, the edge server and the mobile devices are the miners. The additional benefit of the edge server is represented as:
\begin{eqnarray}
\Delta U_e=(R+TR)\frac{x_i}{\sum\limits_{j \in \mathcal M}x_j}e^{-\gamma zT_m}-\sum\limits_{i \in \mathcal M}p_i \label{the second max profit of edge}
\end{eqnarray}
and the utility function of miner $i$ is represented as:
\begin{eqnarray}
U_m=p_i\frac{x_i}{\sum\limits_{j \in \mathcal M}x_j}e^{-\gamma zT_m}-\varphi_2 x_i.\label{the second utility of miner i}
\end{eqnarray}
{\bf Theorem 4.} There is a Nash equilibrium in $E_m=\{E,\mathcal M,U_m\}$.
\par $proof.$ The computing power space of each miner $x_i$ is a continuous and non-empty range. To prove the concavity of $U_m$ in (\ref{the second utility of miner i}), we compute the first and second derivatives of $U_m$ with respect to $x_i$:
\begin{eqnarray}
\frac{\partial U_m}{\partial x_i}=p_ie^{-\gamma zT_m}\frac{\partial\alpha}{\partial x_i}-\varphi_2
\end{eqnarray}
and
\begin{eqnarray}
\frac{\partial^2 U_m}{\partial x_i^2}=p_ie^{-\gamma zT_m}\frac{\partial^2\alpha}{\partial x_i^2}
\end{eqnarray}
where
\begin{eqnarray}
\frac{\partial \alpha}{\partial x_i}= \frac{\sum\limits_{i \ne j}x_j}{(\sum\limits_{j \in \mathcal M}x_j)^2}>0
\end{eqnarray}
and
\begin{eqnarray}
\frac{\partial^2 \alpha}{\partial x_i^2}= -2\frac{\sum\limits_{i \ne j}x_j}{(\sum\limits_{j \in \mathcal M}x_j)^3}<0
\end{eqnarray}
Therefore, $U_m$ is concave with respect to $x_i$. According to \cite{han2012game}, a Nash equilibrium exists in the EMG $E_m$. Setting the first derivative of (\ref{the second utility of miner i}) to zero, we obtain the best response function of the miners:
\begin{eqnarray}
\frac{\partial U_m}{\partial x_i}=p_ie^{-\gamma zT_m} \frac{\partial\alpha}{\partial x_i}-\varphi_2 =0 \label{the first derivative of utility function miner i}
\end{eqnarray}
\begin{eqnarray}
x_i^*=F(x)=\sqrt{\frac{p_ie^{-\gamma zT_m}\sum\limits_{i \ne j}x_j}{\varphi_2}}-\sum\limits_{i \ne j}x_j \label{the second optimal x_i}
\end{eqnarray}
{\bf Theorem 5.} There is a unique Nash equilibrium in the EMG $E_m$ if the following condition is satisfied:
\begin{eqnarray}
\frac{2(M-1)}{p_i}<\sum\limits_{j \in \mathcal M}\frac{1}{p_j} \label{condition 2}
\end{eqnarray}
\par Let $x_i^*$ denote the Nash equilibrium of the EMG. If $x_i^*=F(x)$ satisfies the three conditions of a standard function in Definition 1, then the EMG has a unique Nash equilibrium \cite{han2012game}.
Firstly, for positivity, we need to prove $\sum\limits_{i \ne j}x_j<\frac{p_ie^{-\gamma zT_m}}{\varphi_2}$. Subtracting (\ref{the second Nash for miner i}) from (\ref{the sum miner i}), we have
\begin{eqnarray}
\sum\limits_{i \ne j}x_j=\frac{\varphi_2}{p_ie^{-\gamma zT_m}}(\frac{M-1}{\sum\limits_{j \in \mathcal M}\frac{\varphi_2}{p_je^{-\gamma zT_m}}})^2
\end{eqnarray}
Under the condition in (\ref{condition 2}), we have
\begin{eqnarray}
\sum\limits_{i \ne j}x_j<\frac{p_ie^{-\gamma zT_m}}{4\varphi_2} \label{the sum x_i > 0}
\end{eqnarray}
from which $\sum\limits_{i \ne j}x_j<\frac{p_ie^{-\gamma zT_m}}{\varphi_2}$ follows immediately. Thus positivity holds.
Secondly, for monotonicity, we compute $F(x')-F(x)$ under the condition $x' \ge x$, similarly to (13); the result is shown in (32).
\begin{figure*}[t]
\begin{align*}
F(x')-F(x)= (\sqrt{\frac{p_ie^{-\gamma zT_m}}{\varphi_2}}-\sqrt{\sum\limits_{i \ne j}x_j'}-\sqrt{\sum\limits_{i \ne j}x_j})(\sqrt{\sum\limits_{i \ne j}x_j'}-\sqrt{\sum\limits_{i \ne j}x_j})
\tag{32}
\end{align*}
\begin{align*}
\lambda F(x_i)-F(\lambda x_i)= (\lambda -\sqrt{\lambda})\sqrt{\frac{p_ie^{-\gamma zT_m}\sum\limits_{i \ne j}x_j}{\varphi_2}}
\tag{33}
\end{align*}
\hrulefill
\end{figure*}
Obviously, $\sqrt{\sum\limits_{i \ne j}x_j'}-\sqrt{\sum\limits_{i \ne j}x_j}\ge 0$, and we have
\begin{eqnarray}
\setcounter{equation}{34}
\sqrt{\frac{p_ie^{-\gamma zT_m}}{\varphi_2}}-\sqrt{\sum\limits_{i \ne j}x_j'}-\sqrt{\sum\limits_{i \ne j}x_j} \in \notag\\ (\sqrt{\frac{p_ie^{-\gamma zT_m}}{\varphi_2}}-2\sqrt{\sum\limits_{i \ne j}x_j'}, \notag\\ \sqrt{\frac{p_ie^{-\gamma zT_m}}{\varphi_2}}-2\sqrt{\sum\limits_{i \ne j}x_j})
\end{eqnarray}
From (\ref{the sum x_i > 0}), we obtain $\sqrt{\frac{p_ie^{-\gamma zT_m}}{\varphi_2}}-2\sqrt{\sum\limits_{i \ne j}x_j}>0$. Thus $F(x')-F(x)\ge 0$, and monotonicity holds.
Finally, for condition (3) of Definition 1, we verify $\lambda F(x_i)>F(\lambda x_i)$ for $\lambda>1$, which follows from (33) since $\lambda>\sqrt{\lambda}$. This completes the proof.
{\bf Theorem 6.} In the EMG $E_m$, the unique Nash equilibrium computing power of miner $i$ is given by
\begin{eqnarray}
x_i^*=\frac{M-1}{\sum\limits_{j \in \mathcal M}\frac{\varphi_2}{p_je^{-\gamma zT_m}}}-\frac{\varphi_2}{p_ie^{-\gamma zT_m}}(\frac{M-1}{\sum\limits_{j \in \mathcal M}\frac{\varphi_2}{p_je^{-\gamma zT_m}}})^2 \label{the second Nash for miner i}
\end{eqnarray}
$proof.$ From (\ref{the first derivative of utility function miner i}), we obtain
\begin{eqnarray}
\frac{\sum\limits_{i \ne j}x_j}{(\sum\limits_{j \in \mathcal M}x_j)^2}=\frac{\varphi_2}{p_ie^{-\gamma zT_m}}
\end{eqnarray}
Then, summing the above formula over all the miners, we obtain
\begin{eqnarray}
\frac{M-1}{\sum\limits_{j \in \mathcal M}x_j}=\sum\limits_{j \in \mathcal M}\frac{\varphi_2}{p_je^{-\gamma zT_m}}
\end{eqnarray}
Thus, we can get
\begin{eqnarray}
\sum\limits_{j \in \mathcal M}x_j=\frac{M-1}{\sum\limits_{j \in \mathcal M}\frac{\varphi_2}{p_je^{-\gamma zT_m}}} \label{the sum miner i}
\end{eqnarray}
From (\ref{the second optimal x_i}), we have
\begin{eqnarray}
\sum\limits_{j \in \mathcal M}x_j=\sqrt{\frac{p_ie^{-\gamma zT_m}\sum\limits_{i \ne j}x_j}{\varphi_2}}
\end{eqnarray}
which means
\begin{eqnarray}
\sum\limits_{j \in \mathcal M}x_j=\sqrt{\frac{p_ie^{-\gamma zT_m}(\sum\limits_{j \in M}x_j-x_i)}{\varphi_2}} \label{another expressed the sum miner i}
\end{eqnarray}
By substituting (\ref{the sum miner i}) to (\ref{another expressed the sum miner i}), we have
\begin{eqnarray}
\frac{M-1}{\sum\limits_{j \in \mathcal M}\frac{\varphi_2}{p_je^{-\gamma zT_m}}}=\sqrt{\frac{p_ie^{-\gamma zT_m}}{\varphi_2}(\frac{M-1}{\sum\limits_{j \in \mathcal M}\frac{\varphi_2}{p_je^{-\gamma zT_m}}}-x_i)}
\end{eqnarray}
Solving this equation for $x_i$ yields the Nash equilibrium computing power of miner $i$ in (\ref{the second Nash for miner i}). This completes the proof.
\par With this solution for the miners' Nash equilibrium in hand, we now analyze the optimal profit of the edge server in Stage \uppercase\expandafter{\romannumeral1}.
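The closed-form equilibrium of Theorem 6 can be verified as a fixed point of the miners' best response: substituting it back into the best-response function returns the same value. The fee vector and cost parameters below are hypothetical.

```python
import math

disc = 0.9                        # exp(-gamma*z*T_m), assumed
phi2 = 0.1                        # unit resource cost, assumed
fees = [18.0, 20.0, 22.0, 25.0]   # expected fees p_i, one per miner (hypothetical)
M = len(fees)

D = sum(phi2 / (p * disc) for p in fees)              # sum_j phi2/(p_j e^{-gamma z T_m})
A = (M - 1) / D                                       # total equilibrium power
x = [A - (phi2 / (p * disc)) * A ** 2 for p in fees]  # per-miner equilibrium powers

def best_response(i):
    """Best response x_i = sqrt(p_i*disc*S/phi2) - S, with S the others' total power."""
    S = A - x[i]
    return math.sqrt(fees[i] * disc * S / phi2) - S
```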
\emph{2) Stage \uppercase\expandafter{\romannumeral1}: Edge server's optimal profit}: similarly to Section~\ref{The same computer power for each miner}, we analyze the additional benefit the edge server obtains from recruiting mobile miner $i$, whose computing power is $x_i$. The additional benefit of the edge server is represented as:
\begin{eqnarray}
\Delta U_e=(R+TR)\frac{x_i}{\sum\limits_{j \in \mathcal M}x_j}e^{-\gamma zT_m}-p_i \label{the second additional profit of edge}
\end{eqnarray}
Thus, substituting (\ref{the second Nash for miner i}) and (\ref{the sum miner i}) into (\ref{the second additional profit of edge}) and letting $a=(R+TR)e^{-\gamma zT_m}$, the additional profit maximization of the edge server simplifies to
\begin{eqnarray}
\mathop{maximize}\limits_{p_i>0} \Delta U_e=a(1-\frac{M-1}{p_i\sum\limits_{j \in \mathcal M}\frac{1}{p_j}})-p_i \label{the second profit of edge}
\end{eqnarray}
{\bf Theorem 7.} Under different computing power of the miners, the edge server can achieve the optimal profit.
$proof.$ From (\ref{the second profit of edge}), we compute the first and second derivatives of $\Delta U_e$ with respect to the expected fee $p_i$ of miner $i$:
\begin{eqnarray}
\frac{\partial \Delta U_e}{\partial p_i}=\frac{a(M-1)\sum\limits_{i \ne j}\frac{1}{p_j}}{(p_i\sum\limits_{j \in \mathcal M}\frac{1}{p_j})^2}-1 \label{the first derivative of edge II}
\end{eqnarray}
and
\begin{eqnarray}
\frac{\partial^2 \Delta U_e}{\partial p_i^2}=\frac{-2a(M-1)(\sum\limits_{i \ne j}\frac{1}{p_j})^2}{(p_i\sum\limits_{j \in \mathcal M}\frac{1}{p_j})^3}<0 \label{the second derivative of edge II}
\end{eqnarray}
From (\ref{the second derivative of edge II}), $\Delta U_e$ is strictly concave in $p_i$, so setting (\ref{the first derivative of edge II}) to zero yields the unique optimal expected fee, and the edge server achieves its optimal profit in the EMG $E_m$ under different computing power of the miners. This completes the proof.
\section{Simulation Results}\label{simlation-section}
We now evaluate the performance of the incentive mechanism in the EMG proposed in this paper. We consider an edge server that recruits a set of mobile devices to mine in a mobile blockchain system. We adopt Algorithm 1 to find the unique Stackelberg equilibrium for the miners; the edge server then maximizes its profit by substituting the miners' best response into its profit function. For a given expected fee, the follower sub-game is solved first, and its optimal response is substituted into the leader game to obtain the optimal fee. A similar algorithm can be used for the situation of different fees provided by the edge server. We simulate 1000 blocks using Node.js\cite{js2016node}. To simplify the experiment, all blocks have the same size and each block contains 10 transactions to mine.
\begin{algorithm}[t]
\caption{The Algorithm for finding Stackelberg Equilibrium under the same expected fee}
\hspace*{0.02in} {\bf Input:}
initial expected fee $p_0$, parameter $\theta$, precision threshold $\varepsilon$\\
\hspace*{0.02in} {\bf Output:}
the optimal expected fee $p_{opt}$, the computing power of miner $c_i$, the count of successful mining $\mathcal S$
\begin{algorithmic}[1]
\State{\bf Initialization:}
\State Set initial $p_0$, parameter $\theta \in (0,1)$, precision threshold $\varepsilon > 0$\\
Let $X_1, \cdots, X_n$ be $n$ independent random 0-1 variables, where $X_i$ takes 1 with the successful mining of miner $i$.
\State {\bf Repeat:}
\State Each miner $i$ provides its computing power $c_i$ to mine
\State If the mining is successful, $X_i$=1
\State $p_{i+1}=p_i(1+\theta)$
\State $i \gets i+1$
\State {\bf Until:}
\State $p_{i+1}-p_i < 0$ and $\frac{|p_{i+1}-p_i|}{p_i} < \varepsilon$
\State $\mathcal S = \sum\limits_{i=1}^n X_i$
\State {\bf Return:}
\State $p_{opt}=p_i$, $c_i$, $\mathcal S$
\end{algorithmic}
\end{algorithm}
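A minimal sketch of the fee-search loop at the core of Algorithm 1, assuming the closed-form best response of the same-fee case and stopping when a further multiplicative increase of the fee no longer improves the leader's profit; all numbers are illustrative assumptions.

```python
import math

a, X, phi1, disc = 90.0, 10.0, 0.1, 0.9   # assumed toy parameters

def profit(P):
    """Leader's extra profit when the miners play their best response to fee P."""
    return a * (1 - math.sqrt(X * phi1 / (P * disc))) - P

def find_fee(p0=1.0, theta=0.01):
    """Multiplicatively raise the fee until the profit stops improving."""
    p = p0
    while profit(p * (1 + theta)) > profit(p):
        p *= 1 + theta
    return p

p_opt = find_fee()
```

Because the profit is strictly concave in the fee, this hill-climbing stop rule lands within roughly one multiplicative step of the true optimum.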
First, we evaluate the probability of successful mining versus the computing power of the edge server. The computing power of the edge server is the critical factor that decides the probability of successfully mining a block, as shown in Fig 1.
\begin{figure}[htp]
\centering
\includegraphics[width=3.0in,height=3.0in,clip,keepaspectratio]{F1.\format}
\caption{\small The probability of successful mining vs. the computing power of the edge server}
\end{figure}
We next investigate the optimal expected fee for the mobile devices under the same computing power versus the fixed reward $R$ of the blockchain system.
\begin{figure}[htp]
\centering
\includegraphics[width=3.0in,height=3.0in,clip,keepaspectratio]{F2.\format}
\caption{\small The optimal expected fee vs. the fixed reward for successful mining}
\end{figure}
\par Fig 2 shows that the optimal expected fee for the mobile devices increases with the fixed reward $R$: the higher the fixed reward, the higher the expected fee offered by the edge server. Miners are then more willing to respond to the recruitment of edge servers and to provide more computing power for mining. The edge server can therefore offer a higher expected fee to recruit mobile devices, obtain more computing power, improve the probability of mining success, and thus obtain a greater profit. Accordingly, as the fixed reward provided by the blockchain system increases, the computing power provided by each miner also increases.
\par We then evaluate the profit of the edge server in two settings with the same total computing power: varying the computing power of the recruited miners, and varying the computing power of the edge server. In the first setting, the computing power of the edge server is fixed at 50, and we observe how the profit curve of the edge server changes as the computing power of the recruited mobile devices increases. In the second, the total computing power of the mobile miners is fixed at 50, and we observe the profit curve as the computing power of the edge server increases. From Fig 3, we find that the profit of the edge server rises rapidly with the number of recruited mobile devices in the initial stage: the recruited devices quickly increase the total computing power and hence the probability of mining success, so the reward increases. However, as more mobile devices join, the gain in the success probability becomes marginal while the edge server still has to pay the corresponding remuneration, so the cost rises sharply and the total profit grows slowly. In addition, with different computing power for each mobile device, the profit of the edge server grows faster with the total computing power than with the same computing power for all devices. Fig 4 shows, however, that when the computing power of the edge server accounts for a large proportion of the total, there is little difference between the profits in the two settings: the edge server is then dominant, and the computing power of the mobile devices is small and contributes little to mining.
\begin{figure}[htp]
\centering
\includegraphics[width=3.0in,height=3.0in,clip,keepaspectratio]{F3.\format}
\caption{\small The profit of the edge server vs The total computing power of miners}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=3.0in,height=3.0in,clip,keepaspectratio]{F4.\format}
\caption{\small The profit of the edge server vs The computing power of the edge server}
\end{figure}
\par Finally, we compare the profit of the edge server in the EMG with that in the MDG\cite{xiong2018optimal}. For the EMG, we consider cases where the computing power of the edge server accounts for 10\%, 50\% and 90\% of the total computing power; the total computing power provided by all miners is the same in both schemes. We investigate the profit of the edge server versus the total computing power. From Fig 5 and Fig 6, we find that the profit of the edge server in both schemes increases with the total computing power. However, the profit of the scheme proposed in this paper is higher, because the edge server itself participates in mining, which reduces the information transmission delay and improves efficiency. In the MDG method, the edge server does not participate in mining, and all mining tasks are assigned to the mobile devices, which incurs a large communication cost and delay; this reduces the probability of mining success and hence the profit of the edge server.
\begin{figure}[htp]
\centering
\includegraphics[width=3.0in,height=3.0in,clip,keepaspectratio]{F5.\format}
\caption{\small The profit of the edge server vs.\ the total computing power under different computing power provided by miners}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=3.0in,height=3.0in,clip,keepaspectratio]{F6.\format}
\caption{\small The profit of the edge server vs.\ the total computing power under the same computing power provided by miners}
\end{figure}
\section{Conclusion}
In this paper, we discuss the challenges of mining in mobile blockchain. To address the high mining cost for mobile devices, we propose an incentive mechanism based on edge computing. We develop a two-stage Stackelberg game model to jointly optimize the reward of the edge server and the recruited mobile devices. The edge server recruits mobile devices to mine together with it: it sets the expected fee according to the computing power of each mobile device, and the mobile devices provide computing power to the edge server. We prove that this game admits a unique Nash equilibrium under either the same or different expected fees. In addition, we show the profit of the edge server for different ratios between the computing power contributed by the edge server and by the mobile devices. The results show that, under the same conditions, the computing power of the edge server contributes more to its profit than that of the recruited mobile devices. We also compare the proposed scheme with the MDG scheme in terms of the profit of the edge server; the results show that the proposed scheme yields a higher profit under the same conditions.
\section*{Acknowledgment}
This work is supported by the National Science Foundation of China (No. 61662039), the Science and Technology Project of Jiangxi Provincial Department of Education (No. GJJ170967), the Jiangxi Key Natural Science Foundation (No. 20192ACBL20031), and the Project of Teaching Reform in Jiujiang University (No. XJJGYB-19-47).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{unsrt}
\section{Light-front Helicity operator $J^3$ from the manifestly gauge
invariant energy momentum tensor}
It is well-known that even though the {\it energy-momentum density} (which gives rise to
Hamiltonian and three-momentum) and the {\it generalized angular momentum
density} (which gives
rise to angular momentum and boosts) can be expressed in a manifestly
covariant, gauge invariant form, the explicit form of Poincare generators
in quantum
field theory depends on the frame of reference and may also depend upon the
gauge choice. This of course does not imply that the
theory has lost Lorentz and gauge symmetry. The symmetries are no longer
manifest, but the physical observables in the theory still obey the consequences
of the symmetries.
Poincare generators can be further classified as kinematical (which do not
contain interactions and do not change the quantization surface) and
dynamical (which contain interactions and change the quantization surface).
Which operator is dynamical and which is kinematical of course depends on
the choice of quantization surface. It is well-known that in light-front
field theory, on which our formalism of deep inelastic scattering is based,
the generators of boosts and the rotation in the transverse plane
(light-front helicity) are kinematical like three momenta
whereas the generators of rotations
about the two transverse axes are dynamical like the Hamiltonian.
The operator in light-front
field theory relevant to the ``proton spin crisis'' is the light-front
helicity operator which belongs to the kinematical subgroup. In light-front
literature, it is customary to construct this operator from the canonical
symmetric energy momentum tensor and one explicitly finds that this operator
is indeed free of interaction and has the same form as in free field
theory\cite{ks}.
In non-Abelian gauge theories like QCD, one should be extra cautious since
such theories are known to exhibit non-trivial topological effects. In this
work, we restrict our attention to the topologically trivial sector of QCD.
In this sector, interactions do not affect kinematical
generators\cite{wein}.
In view of the prevailing confusion in the literature (see Ref. \cite{mhs}
for a list of recent papers on the subject),
we provide an explicit demonstration of this
fact in this section in the case of the light-front helicity operator.
We start from the manifestly gauge invariant, symmetric energy momentum
tensor in QCD.
\begin{eqnarray}
\Theta^{\mu \nu} && = { i \over 2} {\overline \psi} [ \gamma^\mu D^\nu +
\gamma^\nu D^\mu ]\psi - F^{\mu \lambda a} F^{\nu a} _{~ \lambda} \nonumber \\
&& ~~ - g^{\mu \nu} \Big \{ -{1 \over 4} (F_{\lambda \sigma a})^2 +
{\overline \psi}(i \gamma^\lambda D_\lambda -m) \psi \Big \}
\end{eqnarray}
where $ i D^\mu = i \partial^\mu + g A^\mu $,
$ F^{\mu \lambda a} = \partial^\mu A^{\lambda a} - \partial^\lambda
A^{\mu a} + g f^{abc} A^{\mu b} A^{\lambda c}$ , $
F^{\nu a}_{~ \lambda } = \partial^{\nu} A_{\lambda}^{a} - \partial_{\lambda}
A^{\nu a} + g f^{abc} A^{\nu b} A_{\lambda}^{c}$.
We define the light-front helicity operator
\begin{eqnarray}
{\cal J}^3 = { 1 \over 2} \int dx^- d^2 x^\perp [ x^1 \Theta^{+2} - x^2
\Theta^{+1}].
\end{eqnarray}
${\cal J}^3$ is a manifestly gauge invariant operator
by construction. However,
it depends explicitly on the interaction and does not appear to be a
kinematical operator at all. Furthermore,
it is not apparent that ${\cal J}^3$
generates the correct transformations as an angular momentum operator.
Thus, at this stage, we are not justified in calling it a helicity operator.
Explicitly, we have,
\begin{eqnarray}
{\cal J}^3 && = { 1 \over 2} \int dx^- d^2 x^\perp \Big \{
x^1 [ { i \over 2} {\overline \psi} (\gamma^+ D^2 + \gamma^2 D^+) \psi
- F^{+ \lambda a} F^{2a}_{~\lambda}] \nonumber \\
&& ~~ - x^2 [ { i \over 2} {\overline \psi} (\gamma^+ D^1 + \gamma^1 D^+) \psi
- F^{+ \lambda a} F^{1 a}_{~\lambda}] \Big \}
\end{eqnarray}
The fermion field can be decomposed as $ \psi^{\pm} = \Lambda^{\pm} \psi$, with $ \Lambda^{\pm} = { 1 \over
4} \gamma^{\mp} \gamma^{\pm}$.
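Using $(\gamma^\pm)^2 = 0$ and $\{\gamma^+, \gamma^-\} = 4$, one can check that $\Lambda^\pm$ are indeed orthogonal projectors (a standard step, included here for completeness):

```latex
\begin{eqnarray}
\Lambda^+ + \Lambda^- && = { 1 \over 4} \left( \gamma^- \gamma^+ + \gamma^+
\gamma^- \right) = 1, \nonumber \\
(\Lambda^\pm)^2 && = { 1 \over 16}\, \gamma^\mp \gamma^\pm \gamma^\mp \gamma^\pm
= { 1 \over 16}\, \gamma^\mp \left( 4 - \gamma^\mp \gamma^\pm \right) \gamma^\pm
= { 1 \over 4}\, \gamma^\mp \gamma^\pm = \Lambda^\pm ,
\end{eqnarray}
```

and $\Lambda^+ \Lambda^- = { 1 \over 16} \gamma^- \gamma^+ \gamma^+ \gamma^- = 0$ since $(\gamma^+)^2 = 0$.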
We shall work in the gauge $A^+=0$.
In this gauge, we still have residual gauge freedom associated with
$x^-$-independent gauge transformations. Note that only $\psi^+$ and $A^i$
are dynamical variables whereas $\psi^-$ and $A^-$ are constrained.
We have,
\begin{eqnarray}
{ i \over 2} {\overline \psi} ( \gamma^+ D^2 + \gamma^2 D^+) \psi =
{\psi^+}^\dagger i \partial^2 \psi^+
+ g {\psi^+}^\dagger T^a \psi^+ A^2_a + { i \over 2} {\overline \psi}
\gamma^2 i \partial^+ \psi.
\end{eqnarray}
Using the constraint equation
\begin{eqnarray}
i \partial^+ \psi^- = \Big [ \alpha^\perp \cdot ( i \partial^\perp + g
A^\perp) + \gamma^0 m \Big ] \psi^+,
\end{eqnarray}
to eliminate the constrained variable $\psi^-$ we arrive at,
after some algebra,
\begin{eqnarray}
{ i \over 2} {\overline \psi} \gamma^2 \partial^+ \psi &&= i {\psi^+}^\dagger
\partial^2 \psi^+
+ {1 \over 2} \partial^1({\psi^+}^\dagger \Sigma^3 \psi^+) + g {\psi^+}^\dagger T^a
\psi^+ A^{2a} \nonumber \\
&& ~~~~ + { i \over 2} \partial^+ \Big ( {\psi^-}^\dagger \alpha_2 \psi^+
\Big ) - { i \over 2} \partial^2 \Big ( {\psi^+}^\dagger \psi^+ \Big ).
\end{eqnarray}
Now we restrict ourselves to the topologically trivial sector by
requiring that the dynamical fields ($\psi^+$ and $A^i$) vanish at $x^{-,i}
\rightarrow \infty$. The residual gauge freedom and the surface terms
are no longer present
and so we drop total derivatives of
$\partial^+$ and $ \partial^2$. Note that the term involving $ \partial^1$
is not a surface term since $\Theta^{+2}$ is multiplied by $x^1$.
Collecting the results together, we have,
\begin{eqnarray}
{ i \over 2} {\overline \psi} (\gamma^+ D^2 + \gamma^2 D^+) \psi = 2 i
{\psi^+}^\dagger \partial^2 \psi^+
+ {1 \over 2} \partial^1 ({\psi^+}^\dagger \Sigma^3 \psi^+) + 2 g {\psi^+}^\dagger T^a
\psi^+ A^{2a}
\end{eqnarray}
where $ \Sigma^3 = i \gamma^1 \gamma^2$.
In the gauge $A^+=0$,
\begin{eqnarray}
- F^{+ \lambda a} F^{2 a}_{~ \lambda } && = - { 1 \over 2 } (\partial^+)^2 A^{-a}
A^{2a} + \partial^+ A^{ja} (\partial^2 A^{ja} - \partial^j A^{2a})
+ g f^{abc} (\partial^+ A^{ja}) A^{2b} A^{jc} \nonumber \\
&& ~~~~ + { 1 \over 2} \partial^+ \Big (\partial^+ A^{-a} A^{2a} \Big ).
\end{eqnarray}
We have the constraint equation for the elimination of the variable $A^-$,
\begin{eqnarray}
{ 1 \over 2} (\partial^+)^2 A^{-a} = \partial^+ \partial^i A^{ia} + g
f^{abc} A^{ib} \partial^+ A^{ic} + 2 g {\psi^+}^\dagger T^a \psi^+.
\end{eqnarray}
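Formally inverting this constraint (the operator $1/\partial^+$ requires a principal-value or equivalent prescription, which is unambiguous here since the fields vanish at $x^- \rightarrow \pm \infty$) gives

```latex
\begin{eqnarray}
A^{-a} = { 2 \over \partial^+}\, \partial^i A^{ia}
+ { 2 g \over (\partial^+)^2} \left( f^{abc} A^{ib}\, \partial^+ A^{ic}
+ 2\, {\psi^+}^\dagger T^a \psi^+ \right).
\end{eqnarray}
```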
Thus
\begin{eqnarray}
-F^{+ \lambda a} F^{2a}_{~ \lambda } &&= \partial^i A^{ia} \partial^+ A^{2a} +
\partial^+ A^{ja}(\partial^2 A^{ja} - \partial^j A^{2a}) - 2 g
{\psi^+}^\dagger T^a \psi^+ A^{2a} \nonumber \\
&&~~~~~ + { 1 \over 2} \partial^+ \Big ( \partial^+ A^{-a} A^{2a} \Big )
- \partial^+ \Big ( \partial^i A^{ia} A^{2a} \Big ) \nonumber \\
&& = \partial^+ A^{1a} \partial^2 A^{1a} + \partial^+ A^{2a} \partial^2
A^{2a} + \partial^1 (A^{1a} \partial^+ A^{2a}) \nonumber \\
&& ~~~~+ { 1 \over 2} \partial^+ \Big ( \partial^+ A^{-a} A^{2a} \Big )
- \partial^+ \Big ( \partial^i A^{ia} A^{2a} \Big )
- \partial^+ \Big ( A^{1a} \partial^1 A^{2a} \Big ).
\end{eqnarray}
Collecting the results together,
\begin{eqnarray}
\Theta^{+2} && = 2 i {\psi^+}^\dagger \partial^2 \psi^+ +
{ 1 \over 2} \partial^1({\psi^+}^\dagger \Sigma^3 \psi^+) \nonumber \\
&& ~~ + \partial^+ A^{1a} \partial^2 A^{1a} + \partial^+ A^{2a} \partial^2
A^{2a} + \partial^1 (A^{1a} \partial^+ A^{2a}).
\end{eqnarray}
We have dropped the surface terms at $ x^- = \pm \infty$.
By a similar calculation,
\begin{eqnarray}
\Theta^{+1} && = 2 i {\psi^+}^\dagger \partial^1 \psi^+ -
{ 1 \over 2} \partial^2({\psi^+}^\dagger \Sigma^3 \psi^+) \nonumber \\
&& ~~ + \partial^+ A^{1a} \partial^1 A^{1a} + \partial^+ A^{2a} \partial^1
A^{2a} + \partial^2 (A^{2a} \partial^+ A^{1a})
\end{eqnarray}
From the above two equations it is clear that $\Theta^{+1}$ and
$\Theta^{+2}$ agree with the free field theory form at the operator level.
This shows that
in light-front quantization, with $A^+=0$ gauge, ${\cal J}^3 = J^3$ (the
naive canonical form independent of interactions) at the
operator level, provided the fields vanish at the boundary. Explicitly,
\begin{eqnarray}
J^3 = J^3_{f(o)}+ J^3_{f(i)}+ J^3_{g(o)}+J^3_{g(i)}
\end{eqnarray}
with
\begin{eqnarray}
J^3_{f(o)} &&= \int dx^- d^2 x^\perp {\psi^+}^\dagger i ( x^1 \partial^2 -x^2
\partial^1) \psi^+ , \nonumber \\
J^3_{f(i)}&& = { 1 \over 2} \int dx^- d^2 x^\perp {\psi^+}^\dagger
\Sigma^3 \psi^+, \nonumber \\
J^3_{g(o)}&& = {1 \over 2} \int dx^- d^2 x^\perp \Big \{
x^1 [\partial^+A^1 \partial^2 A^1 + \partial^+A^2 \partial^2 A^2]
-x^2 [\partial^+A^1 \partial^1 A^1 + \partial^+A^2 \partial^1 A^2]
\Big \}, \nonumber \\
J^3_{g(i)} && = { 1 \over 2} \int dx^- d^2 x^\perp [ A^1 \partial^+ A^2 -
A^2 \partial^+ A^1 ].
\end{eqnarray}
The color indices are implicit in these equations.
Using canonical commutation relations, we explicitly find that,
\begin{eqnarray}
i \left [ J^{3}_{f(o)}, \psi^{+} (x) \right ] &&= (x^1 \partial^2 - x^2 \partial^1)
\psi^+(x), \label{commu1} \\
i \left [ J^{3}_{f(i)},\psi^{+}(x) \right ] && = { 1 \over 2} \gamma^1
\gamma^2 \psi^{+}(x), \label{commu2} \\
i \left [ J^{3}_{g(o)},A^{i}(x) \right ] && = (x^1 \partial^2 - x^2 \partial^1)
A^{i}(x), \label{commu3} \\
i \left [J^{3}_{g(i)}, A^{i} (x)\right ] && = - \epsilon_{ij} A^{j} (x) .
\label{commu4}
\end{eqnarray}
Thus these operators do qualify as angular momentum operators (generators of
rotations in the transverse plane) in the theory\cite{ks}.
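As an illustration of how these commutators follow, consider eq. (\ref{commu2}). With the identity $[AB,C] = A\{B,C\} - \{A,C\}B$ and the equal-$x^+$ anticommutator $\{\psi^+(x), {\psi^+}^\dagger(y)\} = \Lambda^+ \delta(x^- - y^-)\,\delta^2(x^\perp - y^\perp)$ (standard in light-front quantization; overall normalization conventions vary), one finds

```latex
\begin{eqnarray}
i \left [ J^3_{f(i)}, \psi^+(y) \right ] && = { i \over 2} \int dx^- d^2 x^\perp
\left [ {\psi^+}^\dagger \Sigma^3 \psi^+ (x),\, \psi^+(y) \right ] \nonumber \\
&& = - { i \over 2}\, \Lambda^+ \Sigma^3\, \psi^+(y)
= { 1 \over 2}\, \gamma^1 \gamma^2\, \psi^+(y),
\end{eqnarray}
```

where the last step uses $\Lambda^+ \psi^+ = \psi^+$, $[\Lambda^+, \Sigma^3] = 0$ and $\Sigma^3 = i \gamma^1 \gamma^2$.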
To summarize,
the helicity operator constructed from manifestly gauge
invariant, symmetric, energy momentum tensor in QCD, in the gauge $A^+=0$,
and after the elimination of constraint variables, is equal to the naive
canonical form of the light-front helicity operator plus surface terms. In
the topologically trivial sector, we can legitimately require the dynamical
fields to vanish at the boundary. This eliminates the residual gauge degrees
of freedom and removes the surface terms. Thus we have a gauge fixed
Poincare generator which we consider in the following sections.
\section{ Orbital helicity distribution functions}
We define the orbital helicity distribution for the fermion
\begin{eqnarray}
\Delta q_L(x, Q^2) = { 1 \over 4 \pi P^+} \int d \eta e^{ -i \eta x}
\langle PS \mid \Big [ {\overline \psi}(\xi^-) \gamma^+ i (x^1 \partial^2
- x^2 \partial^1) \psi(0) + h.c.\Big ] \mid PS \rangle \label {fo}
\end{eqnarray}
with $ \eta = {1 \over 2} P^+ \xi^-$. Here $ \mid P S \rangle $ denotes the
hadron state with momentum $P$ and helicity $S$.
We define the light-front orbital helicity distribution for the gluon as
\begin{eqnarray}
\Delta g_L(x,Q^2) &=& {-1 \over 4 \pi P^+} \int d \eta
e^{ - i \eta x} \langle PS \mid \Big [ x^1 F^{+ \alpha}(\xi^-)
\partial^2 A_\alpha(0) - x^2 F^{+ \alpha} (\xi^-) \partial^1
A_\alpha(0) \Big ] \mid PS \rangle. \label{go}
\end{eqnarray}
These distributions are defined in analogy with the more
familiar intrinsic helicity distributions for quarks and gluons
given as follows.
For the fermion, the intrinsic light-front helicity distribution function
is given by
\begin{eqnarray}
\Delta q(x,Q^2) = { 1 \over 8 \pi S^+} \int d \eta e^{ - i \eta x}
\langle PS \mid \Big [ {\overline \psi} (\xi^- ) \gamma^+ \Sigma^3 \psi(0) +
h.c \Big ] \mid PS \rangle \label{fi}
\end{eqnarray}
where $ \Sigma^3 = i \gamma^1 \gamma^2$. This is the same as the chirality
distribution function $g_1$.
For the gluon, the intrinsic light-front helicity distribution is defined
\cite{jgd} as
\begin{eqnarray}
\Delta g(x,Q^2) = -{ i \over 4 \pi (P^+)^2 x} \int d \eta e^{ - i \eta x}
\langle PS \mid F^{+ \alpha} (\xi^-) {\tilde F}^+_{~~\alpha}(0) \mid PS \rangle.
\label {gi}
\end{eqnarray}
The dual tensor is
\begin{eqnarray}
{\tilde F^{\mu \nu}} = { 1 \over 2} \epsilon^{\mu \nu \rho \sigma} F_{\rho
\sigma} ~~~~
{\rm with} ~~~~ \epsilon^{+1-2} = 2.
\end{eqnarray}
Note that the above distribution functions are defined in the
light-front gauge $A^+=0$. In the two-component representation\cite{zhang93}
we have the dynamical fermion field,
\begin{eqnarray}
\psi_+(x) = \sum_\lambda \chi_\lambda \int {dk^+d^2k_\bot
\over 2(2\pi)^3 \sqrt{k^+}}\Big(b_\lambda(k)e^{-ikx} +
d_{-\lambda}^\dagger(k)e^{ikx} \Big)\label{psi} ,
\end{eqnarray}
and the dynamical gauge field
\begin{eqnarray}
A_{}^i(x) = \sum_\lambda \int {dk^+d^2k_\bot\over
2(2\pi)^3k^+}\Big(\varepsilon^i(\lambda)
a_\lambda(k)e^{-ikx} + h.c \Big)\label{ap},
\end{eqnarray}
with
\begin{eqnarray}
\Big\{b_\lambda (k), b_{\lambda'}^\dagger(k') \Big\} &=&
\Big\{d_\lambda(k), d_{\lambda'}^\dagger(k') \Big\}
= 2(2\pi)^3 k^+\delta (k^+-{k'}^+) \delta^2(k_\bot - k_\bot')
\delta_{\lambda \lambda'}, \\
\Big[a_\lambda(k) , a_{\lambda'}^\dagger (k') \Big] &=&
2(2\pi)^3 k^+\delta (k^+-{k'}^+) \delta^2(k_\bot - k_\bot')
\delta_{\lambda \lambda'},
\end{eqnarray}
and $\chi_\lambda$ is the eigenstate of $\sigma_z$ in the two-component
spinor representation of $\psi_+$, obtained by using the following light-front
$\gamma$ matrix representation \cite{Zhang95},
\begin{equation}
\gamma^0 = \left[\begin{array}{cc} 0 & - i \\ i & 0 \end{array}
\right] ~~, ~~
\gamma^3= \left[\begin{array}{cc} 0 & i \\ i & 0 \end{array}
\right] ~~, ~~
\gamma^i = \left[\begin{array}{cc} -i\tilde{\sigma}^i & 0 \\
0 & i\tilde{\sigma}^i \end{array} \right]
\end{equation}
with $\tilde{\sigma}^1 =\sigma^2$, $\tilde{\sigma}^2=-\sigma^1$, and
$\varepsilon^i(\lambda)$ the polarization vector of transverse
gauge field.
Note that the integrals of the above distribution functions over $x$ are
directly related to the expectation values of the corresponding helicity
operators as follows.
\begin{eqnarray}
\int_0^1 dx \Delta q(x,Q^2)=&&~ { 1 \over {\cal N}} \langle PS \mid J^3_{q(i)}
\mid PS \rangle\nonumber\\
\int_0^1 dx \Delta q_L(x,Q^2)=&&~ { 1 \over {\cal N}} \langle PS \mid
J^3_{q(o)}
\mid PS \rangle\nonumber\\
\int_0^1 dx \Delta g(x,Q^2)=&&~ { 1 \over {\cal N}} \langle PS \mid J^3_{g(i)}
\mid PS \rangle\nonumber\\
\int_0^1 dx \Delta g_L(x,Q^2)=&&~ { 1 \over {\cal N}} \langle PS \mid
J^3_{g(o)}
\mid PS \rangle
\end{eqnarray}
where ${\cal N}=2(2\pi)^3P^+\delta^3(0)$.
\section{Perturbative calculation of anomalous dimensions}
In this section, we evaluate the internal helicity distribution functions for
a dressed quark in perturbative QCD by replacing the hadron target by a
dressed quark target. We have provided the necessary details of the
calculation which may serve as the stepping stone for more realistic
calculation with meson target. From this simple calculation, we have
illustrated how easily one can
extract the relevant splitting
functions and evaluate the corresponding anomalous dimensions.
Note that, since we are not interested in an exhaustive calculation of various
anomalous dimensions and the purpose of this section is illustrative,
we can safely drop the derivative of the delta function in the following
calculations and work explicitly with the forward matrix element.
The dressed quark state with fixed helicity can
be expressed as
\begin{eqnarray}
|k^+,k_\bot,\lambda \rangle &=& \Phi^{\lambda}(k) b^\dagger_\lambda
(k)|0\rangle + \sum_{\lambda_1\lambda_2} \int {dk_1^+d^2
k_{\bot 1}\over \sqrt{2(2\pi)^3 k_1^+}}
{dk_2^+d^2k_{\bot 2}\over \sqrt{2 (2\pi)^3k_2^+}}
\sqrt{2(2\pi)^3 k^+} \delta^3(k-k_1-k_2) \nonumber \\
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\times \Phi^\lambda
_{\lambda_1\lambda_2}(k;k_1,k_2)b^\dagger_{\lambda_1}
(k_1) a^\dagger_{\lambda_2} (k_2) | 0 \rangle +
\cdots, \label{dsqs}
\end{eqnarray}
where the normalization of the state is determined by
\begin{equation}
\langle {k'}^+,k'_\bot,\lambda' |k^+,k_\bot,\lambda \rangle
= 2(2\pi)^3 k^+ \delta_{\lambda,\lambda'}\delta(k^+-{k'}^+)
\delta^2(k_\bot-k'_\bot).
\end{equation}
We introduce the boost invariant amplitudes $\psi_1^\lambda$
and $ \psi^\lambda_{\sigma_1
\lambda_2}(x,\kappa^\perp)$ respectively by $\Phi^\lambda(k)=\psi_1^\lambda$ and
$\Phi^\lambda_{\lambda_1\lambda_2}(k;k_1,k_2)
= { 1 \over \sqrt{P^+}} \psi^\lambda_{\sigma_1
\lambda_2}(x,\kappa^\perp)$. From the light-front QCD Hamiltonian, to lowest
order in perturbation theory, we have,
\begin{eqnarray}
\psi^\lambda_{\sigma_1\lambda_2}(x,\kappa_\bot) &=& -
{g \over \sqrt{2 (2 \pi)^3}}
T^a {x(1-x)\over \kappa_\bot^2 + m_q^2(1-x)^2}
\chi^\dagger_{\sigma_1} \Bigg\{2{\kappa_\bot^i \over
1-x} \nonumber \\
&& ~~~~~~~~~~~~~ + {1\over x}(\tilde{\sigma_\bot}\cdot \kappa_\bot)
\tilde{\sigma}^i -im_q\tilde{\sigma}^i{1-x\over x}\Bigg\}
\chi_\lambda \varepsilon^{i*}(\lambda_2)~\psi_1^\lambda .
\label{psip}
\end{eqnarray}
Here $x$ is the longitudinal momentum fraction carried by the quark.
We shall ignore the $m_q$ dependence in the above wave function which can
lead to higher twist effects in orbital helicity.
In the following we take the helicity of the dressed
quark to be + $ {1 \over 2}$. Due to transverse boost invariance,
without loss of generality, we take the transverse momentum of the initial
quark to be zero.
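Before evaluating the distributions, it is useful to note the azimuthal structure of the amplitude (\ref{psip}). With the circular polarization vectors $\varepsilon^\perp(\pm 1) = \mp {1 \over \sqrt{2}}(1, \pm i)$ (a standard convention, assumed here) and $\kappa^1 = |\kappa^\perp| \cos\phi$, $\kappa^2 = |\kappa^\perp| \sin\phi$, the contraction carries a definite phase:

```latex
\begin{eqnarray}
\kappa^i\, \varepsilon^{i*}(\pm 1) = \mp { 1 \over \sqrt{2}} \left( \kappa^1
\mp i \kappa^2 \right) = \mp { |\kappa^\perp| \over \sqrt{2}}\, e^{\mp i \phi} .
\end{eqnarray}
```

Thus, for a massless quark, the amplitude with gluon helicity $\lambda_2 = \pm 1$ carries an overall factor $e^{\mp i \phi}$ and is an eigenstate of $-i\, \partial / \partial \phi$ with eigenvalue $\mp 1$: the relative orbital angular momentum compensates the gluon helicity.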
First we evaluate the gluon intrinsic helicity
distribution function given in eq. (\ref{gi}) in the dressed quark state.
The non vanishing contribution comes from the quark-gluon state. We get,
\begin{eqnarray}
\Delta g (1-x,Q^2) && = \sum_{\sigma_1, \lambda_2} \lambda_2 ~
\int d^2 \kappa^\perp~
{\psi^{\uparrow}_{\sigma_1 \lambda_2}}^*(x, \kappa^\perp)
{\psi^{\uparrow}_{\sigma_1 \lambda_2}}(x, \kappa^\perp)
\nonumber \\
&& = {\alpha_s \over 2 \pi} C_f ln{Q^2 \over \mu^2} ~x^2 (1-x)^2
{ 1 \over 1-x} \Big [ { 1 \over x^2 (1-x)^2} - { 1 \over (1-x)^2} \Big ].
\end{eqnarray}
The first (second) term inside the square bracket arises from the
state with gluon helicity +1 (-1).
Thus we have the gluon intrinsic helicity contribution in the dressed quark
state
\begin{eqnarray}
\Delta g(1-x,Q^2) = {\alpha_s \over 2 \pi} C_f ln{Q^2 \over \mu^2} ~(1+x).
\label{gih}
\end{eqnarray}
Note that the gluon distribution function has the argument $(1-x)$ since we
have assigned $x$ to the quark in the dressed quark state.
Next we evaluate the quark orbital helicity
distribution function given in eq.(\ref{fo}) in the dressed quark state.
The non vanishing contribution comes from the quark-gluon state. We get,
\begin{eqnarray}
\Delta q_L(x,Q^2) && = \sum_{\sigma_1, \lambda_2} ~\int d^2 \kappa^\perp
~(1-x)~
{\psi^{\uparrow}_{\sigma_1 \lambda_2}}^*(x, \kappa^\perp)
(- i {\partial \over \partial \phi})
{\psi^{\uparrow}_{\sigma_1 \lambda_2}}(x, \kappa^\perp) \nonumber \\
&& = -{\alpha_s \over 2 \pi} C_f ln{Q^2 \over \mu^2}
(1-x) x^2 (1-x)^2
{ 1 \over 1-x} \Big [ { 1 \over x^2 (1-x)^2} - { 1 \over (1-x)^2} \Big ].
\end{eqnarray}
The first (second) term inside the square bracket arises from the
state with gluon helicity +1 (-1).
Thus we have the quark orbital helicity contribution in the dressed quark
state
\begin{eqnarray}
\Delta q_L(x,Q^2) = - {\alpha_s \over 2 \pi} C_f ln{Q^2 \over \mu^2} ~(1-x)
(1+x).
\label{qoh}
\end{eqnarray}
Similarly we get the gluon orbital helicity distribution defined in eq.
(\ref{go}) in the dressed
quark state
\begin{eqnarray}
\Delta g_L(1-x,Q^2) && = \sum_{\sigma_1, \lambda_2} ~\int d^2 \kappa^\perp
~~x~
{\psi^{\uparrow}_{\sigma_1 \lambda_2}}^*(x, \kappa^\perp)
(- i {\partial \over \partial \phi})
{\psi^{\uparrow}_{\sigma_1 \lambda_2}}(x, \kappa^\perp) \nonumber \\
&&= -{\alpha_s \over 2 \pi} C_f ln{Q^2 \over \mu^2} ~ x
(1+x).
\label{goh}
\end{eqnarray}
We note that helicity is conserved at the quark-gluon vertex.
For the initial quark of zero transverse momentum, total helicity of the
initial state is the intrinsic helicity of the initial quark, namely, $+ { 1
\over 2}$ in our case.
Since we have neglected quark mass effects, the final quark also has
intrinsic helicity $+ { 1\over 2}$. Thus total helicity conservation implies
that the contributions from gluon intrinsic helicity and quark and gluon
internal orbital helicities have to cancel. This is readily verified using eqs.
(\ref{gih}), (\ref{qoh}), and (\ref{goh}).
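Explicitly, adding eqs. (\ref{gih}), (\ref{qoh}) and (\ref{goh}),

```latex
\begin{eqnarray}
\Delta g(1-x,Q^2) + \Delta q_L(x,Q^2) + \Delta g_L(1-x,Q^2)
= {\alpha_s \over 2 \pi} C_f\, ln{Q^2 \over \mu^2}\, (1+x)
\Big [ 1 - (1-x) - x \Big ] = 0 .
\end{eqnarray}
```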
From eqs. (\ref{gih}), (\ref{qoh}) and (\ref{goh}) we extract the relevant splitting
functions. The splitting functions are
\begin{eqnarray}
P_{SS(gq)}(1-x) ~ &&= C_f~(1+x) ,\nonumber \\
P_{LS(qq)} (x) ~ &&= -~C_f~(1-x^2), \nonumber \\
P_{LS(gq)}(1-x) ~ &&= -C_f ~ x~(1+x).
\end{eqnarray}
We define the anomalous dimension $ A^n = \int_0^1 dx x^{n-1} P(x)$.
The anomalous dimensions are given by
\begin{eqnarray}
A^n_{SS(gq)} = C_f ~ { n+2 \over n(n+1)}, ~~
A^n_{LS(qq)} = -~ C_f ~ { 2 \over n(n+2)}, ~~ A^n_{LS(gq)} = -~ C_f ~{ n+4
\over n (n+1) (n+2)}.
\end{eqnarray}
These anomalous dimensions agree with those given in the recent work of
H\"{a}gler and Sch\"{a}fer\cite{sch}.
\section { Verification of helicity sum rule}
Helicity sum rule for the fermion target is given by
\begin{eqnarray}
{ 1 \over {\cal N}} \langle PS \mid \Big [ J^3_{q(i)} + J^3_{q(o)} +
J^3_{g(i)} + J^3_{g(o)} \Big ] \mid PS \rangle = \pm { 1 \over 2}.
\end{eqnarray}
For a boson target, the RHS of the above equation should be replaced by the
corresponding helicity.
Here we verify the correctness of our definitions of the distribution functions
in the context of the helicity sum rule, perturbatively, for a dressed quark
as well as a dressed gluon target.
transverse momenta of the target to be zero so that there is no net angular
momentum associated with the center of mass of the target. Using the field expansions,
given in Eqs.(\ref{psi}) and (\ref{ap}),
we have,
\begin{eqnarray}
J^3_{f(o)} && = i \sum_s \int {dk^+ d^2 k^\perp \over 2 (2 \pi)^3 k^+} \Bigg
[
b^\dagger(k,s) \Big [ k^2 { \partial \over \partial k^1} - k^1 { \partial
\over \partial k^2} \Big ] b(k,s)
+ d^\dagger(k,s) \Big [ k^2 { \partial \over \partial k^1} - k^1 { \partial
\over \partial k^2} \Big ] d(k,s) \Bigg ], \nonumber \\
J^3_{f(i)} && = {1 \over 2} \sum_\lambda \lambda
\int {dk^+ d^2 k^\perp \over 2 (2 \pi)^3 k^+} \Bigg
[b^\dagger(k,\lambda) b(k,\lambda)+
d^\dagger(k,\lambda) d(k,\lambda) \Bigg ], \nonumber \\
J^3_{g(o)} && = i \sum_{\lambda} \int { dk^+ d^2 k^\perp \over 2 ( 2 \pi)^3
k^+} a^\dagger (k, \lambda) \Big [ k^2 {\partial \over \partial k^1} - k^1
{\partial \over \partial k^2} \Big ] a(k, \lambda), \nonumber \\
J^3_{g(i)} && = \sum_{\lambda} \lambda \int { dk^+ d^2 k^\perp \over 2 ( 2 \pi)^3
k^+} a^\dagger (k, \lambda) a(k, \lambda).
\end{eqnarray}
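In momentum space the orbital pieces act as azimuthal derivatives: writing $k^1 = k^\perp \cos\phi$, $k^2 = k^\perp \sin\phi$, one has

```latex
\begin{eqnarray}
i \left ( k^2 { \partial \over \partial k^1} - k^1 { \partial \over \partial
k^2} \right ) = - i\, { \partial \over \partial \phi} ,
\end{eqnarray}
```

so the orbital helicity is read off as the $-i\, \partial / \partial \phi$ eigenvalue, exactly the operator used in the distribution functions of the previous sections.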
For a dressed quark target having helicity $+{1\over2}$ we get,
\begin{eqnarray}
{1\over {\cal N}}\langle P , \uparrow \mid J^3_{f(i)} \mid P, \uparrow
\rangle_q = && \int dx \Big[ {1\over 2}\delta(1-x) + {\alpha_s\over 2\pi}C_f
ln{Q^2\over \mu^2}\big[ {1+x^2 \over (1-x)_+}
+{3\over2}\delta(1-x)\big]\Big]\nonumber\\
= &&~~~{1\over2}\nonumber\\
{1\over {\cal N}}\langle P , \uparrow \mid J^3_{f(o)} \mid P, \uparrow
\rangle_q = && - {\alpha\over 2\pi}C_f
ln{Q^2\over \mu^2}\int dx ~(1-x)~(1+x)\nonumber\\
{1\over {\cal N}}\langle P , \uparrow \mid J^3_{g(i)} \mid P, \uparrow
\rangle_q = &&~~~ {\alpha\over 2\pi}C_f
ln{Q^2\over \mu^2}\int dx ~ (1+x)\nonumber\\
{1\over {\cal N}}\langle P , \uparrow \mid J^3_{g(o)} \mid P, \uparrow
\rangle_q = && - {\alpha\over 2\pi}C_f
ln{Q^2\over \mu^2}\int dx ~ x ~(1+x).
\end{eqnarray}
Adding all the contributions, we get,
\begin{equation}
{1\over {\cal N}}\langle P , \uparrow \mid J^3_{f(i)}
+J^3_{f(o)}+J^3_{g(i)}+J^3_{g(o)}
\mid P, \uparrow
\rangle_q =~~~{1\over2}.
\end{equation}
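This cancellation can be traced term by term: the $ln{Q^2\over \mu^2}$ coefficients of $J^3_{f(o)}$, $J^3_{g(i)}$ and $J^3_{g(o)}$ cancel pointwise in $x$, while that of $J^3_{f(i)}$ integrates to zero on its own,
\begin{eqnarray}
-(1-x)(1+x)+(1+x)-x(1+x) &&= -(1-x^2)+1+x-x-x^2~=~0, \nonumber \\
\int_0^1 dx \Big [ {1+x^2 \over (1-x)_+} + {3\over 2}\delta(1-x) \Big ] &&= -{3\over 2}+{3\over 2}~=~0, \nonumber
\end{eqnarray}
where the second line uses the plus prescription $\int_0^1 dx\, {f(x) \over (1-x)_+} = \int_0^1 dx\, {f(x)-f(1)\over 1-x}$.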
For a dressed gluon having helicity $+1$, the corresponding expressions
are worked out to be the
following.
\begin{eqnarray}
{1\over {\cal N}}\langle P , \uparrow \mid J^3_{f(i)} \mid P, \uparrow
\rangle_g = &&~~~0\nonumber\\
{1\over {\cal N}}\langle P , \uparrow \mid J^3_{f(o)} \mid P, \uparrow
\rangle_g = && {\alpha\over 2\pi}N_fT_f
ln{Q^2\over \mu^2}\int dx ~[x^2+ (1-x)^2]\nonumber\\
{1\over {\cal N}}\langle P , \uparrow \mid J^3_{g(i)} \mid P, \uparrow
\rangle_g = &&~~~ \psi_1^*\psi_1 \nonumber\\
=&&~~~1- {\alpha\over 2\pi}N_fT_f
ln{Q^2\over \mu^2}\int dx~ [x^2+ (1-x)^2]\nonumber\\
{1\over {\cal N}}\langle P , \uparrow \mid J^3_{g(o)} \mid P, \uparrow
\rangle_g = &&~~~0
\end{eqnarray}
Adding all the contributions, we get,
\begin{equation}
{1\over {\cal N}}\langle P , \uparrow \mid J^3_{f(i)}
+J^3_{f(o)}+J^3_{g(i)}+J^3_{g(o)}
\mid P, \uparrow
\rangle_g =~~~1.
\end{equation}
Note that in evaluating the above expression, we have used the
Fock-expansion of the target states. For the dressed quark we have used
Eq.(\ref{dsqs}), while for the gluon we have used a similar expansion but ignored
the two-gluon Fock sector for simplicity.
\section{Summary, conclusions and discussion}
We have presented a detailed analysis of the
light-front helicity operator (generator of rotations in the transverse
plane) in QCD.
We have explicitly shown that the
operator constructed from the manifestly gauge
invariant, symmetric energy momentum tensor in QCD, in the gauge $A^+=0$,
and after the elimination of constraint variables, is equal to the naive
canonical form of the light-front helicity operator plus surface terms. In
the topologically trivial sector, we can legitimately require the dynamical
fields to vanish at the boundary. This eliminates the residual gauge degrees
of freedom and removes the surface terms.
Next, we have defined non-perturbative quark and gluon orbital helicity
distribution functions as the Fourier
transforms of forward hadron matrix elements of appropriate bilocal operators
with bilocality only in the light-front longitudinal space.
We have calculated these distribution functions by replacing the hadron
target by a dressed parton, providing all the necessary details.
From these simple calculations we have
illustrated the utility of the newly defined distribution functions in the
calculation of
splitting functions and hence anomalous dimensions in perturbation theory.
We have also verified the helicity sum rule explicitly to the first non-trivial
order in perturbation theory.
Lastly, in an appendix, we have compared and contrasted the expressions
for internal
orbital helicity in non-relativistic and light-front (relativistic) cases.
Our calculation shows that the role played by particle masses in the internal
orbital angular momentum in the non-relativistic case is replaced by the
longitudinal momentum fraction in the relativistic case.
Although four terms appear in the expression of $L_3$ for the
individual particles in a two-body system, only the term proportional to the
total internal $L_3$ contributes due to transverse boost invariance of the
multi-parton wave-function in light-front dynamics. We also note the occurrence
of the longitudinal
momentum fraction $x_2$ ($x_1$) multiplied by the total internal $L_3$ in the
expression
of $L_3$ for particle one (two). This explains why one needs to take the first moment with
respect to $x$ as well as $(1-x)$ for the respective distributions in
obtaining the helicity sum rule\cite{ji}.
Our explicit demonstration that the operator
constructed from manifestly gauge
invariant, symmetric energy momentum tensor in QCD, in the gauge $A^+=0$,
and after the elimination of constraint variables and residual gauge freedom,
is equal to the naive
canonical form of the light-front helicity operator is facilitated by the
fact that in light-front theory only transverse gauge fields are dynamical
degrees of freedom. The conjugate momenta (color electric fields) are
constrained variables in the theory. Thus we
were able to show explicitly that the
resulting gauge fixed operator is free of interactions.
The question naturally arises as
to whether this result is also valid in other gauges. Several years ago, in
the context of magnetic monopole solutions,
it was shown\cite{cgw} that in the
Yang-Mills-Higgs system, quantized in the axial gauge $A_3=0$ using the
Dirac procedure, the angular momentum operator constructed from the manifestly
gauge invariant symmetric energy momentum tensor differs from the canonical
one only by surface terms.
In the study of QCD in $A_3 =0$ gauge, it has been shown\cite{bg} that
in the presence of surface terms, the Poincar\'e algebra holds only in the
physical subspace.
The situation in $A^0=0$ gauge or
in covariant gauges where unphysical degrees of freedom are present is to be
investigated. Another interesting problem to be studied
is the helicity conservation in
the topologically non-trivial sector of QCD and its implications, if any,
for deep inelastic scattering.
\acknowledgments
A.H. would like to thank Wei-Min Zhang for useful communications.
\section{Introduction}
Medical image segmentation is the basis for diagnosis, surgical planning, and treatment of diseases. Recent advances in deep learning~\cite{Bao2015Multi}\cite{Chen2017VoxResNet}\cite{Hao2016Deep}\cite{Hao2016DCAN}\cite{Xu2016Gland}\cite{Moeskops2016Automatic}\cite{Dong2016Fully}\cite{Zhang2015Deep} have achieved promising results on many biomedical image segmentation tasks, relying on large annotated datasets. However, differing from natural scene images, labeled medical data are rare and expensive to obtain, since annotating medical images is not only tedious and time consuming but can also only be performed effectively by medical experts.
To dramatically alleviate the common burden of manual annotation, weakly supervised segmentation algorithms~\cite{Hong2015Decoupled} and active learning~\cite{google}\cite{Wang2016A}\cite{Jain2016Active} have been proposed. However, these methods were developed for natural scene image analysis and cannot be easily transferred to biomedical settings, due to the large variations and scarce training data in biomedical applications. For biomedical images, Zhou et al.~\cite{Zhou2017Fine} presented fine-tuning of convolutional neural networks for colonoscopy frame classification, polyp detection, and pulmonary embolism (PE) detection. Lin et al.~\cite{Yang} presented an annotation suggestion method for lymph node ultrasound image and gland segmentation by combining FCNs and active learning. However, such methods only cut the number of annotation candidates, which is not enough for medical images, since even annotating a small amount of data takes a doctor a lot of time. As shown in Fig. 1, annotating MR images is particularly difficult: a doctor may need more than 20 hours to annotate a volumetric MR scan, and few experts are willing to do it, owing to the highly complex structures and small grayscale differences between tissue classes in MR images.\\
\begin{figure}[H]
\begin{center}
\includegraphics[width=1.\linewidth]{fig1}
\caption{(a) an original image; (b) complete ground truth; (c) ground truth of cerebrospinal fluid (CSF); (d) ground truth of grey matter (GM); (e) ground truth of white matter (WM).}
\end{center}
\end{figure}
In this paper, we propose a new criterion to evaluate the effort of doctors annotating medical images. There are two major components: (1) suggestive annotation to reduce the number of annotation candidates; (2) an annotation platform for fine annotation to alleviate the annotation effort on each candidate. We take MR brain tissue segmentation as an example to evaluate the proposed method. Extensive experiments using the well-known IBSR18 dataset\footnote{\url{https://www.nitrc.org/frs/?group_id=48}} and the MRBrainS18 Challenge dataset\footnote{\url{https://mrbrains18.isi.uu.nl/data/}} show that the proposed method attains state-of-the-art segmentation performance using only 60\% of the training data, while the annotation effort on each candidate is cut by at least 44\%, 44\%, and 47\% for CSF, GM, and WM, respectively. The remainder of this paper is organized as follows. In Section 2, we introduce the proposed method. Experiments and results are detailed in Section 3. Finally, the discussion and the main conclusions are presented in Sections 4 and 5, respectively.
\section{Method}
We first exploit suggestive annotation to reduce the number of annotation candidates. Then we employ our annotation platform to alleviate the annotation effort on each candidate, which is quantified by the proposed criterion.
\begin{figure}[H]
\begin{center}
\includegraphics[width=1.\linewidth]{fig2}
\caption{illustrating the overall of the proposed method.}
\end{center}
\end{figure}
\subsection{Suggestive annotation}
We present the annotation strategy by combining the U-shape network model and active learning. Fig. 3 illustrates the main ideas and steps of the proposed strategy. Starting with very little training data, we iteratively train a set of U-shape models. At the end of each stage, if the test results cannot meet the experts' requirements, we extract uncertainty estimates from these models to decide which data to annotate next. After acquiring the new annotated data, the next stage is started using all available annotated data.
\begin{figure}[H]
\begin{center}
\includegraphics[width=1.\linewidth]{fig3}
\caption{Flowchart of the suggestive annotation.}
\end{center}
\end{figure}
\textbf{U-shape network}
Fig. 4 shows the network architecture we use in this paper. Like the standard U-Net~\cite{Ronneberger2015U}, it has an analysis and a synthesis path, each with four resolution steps. In the analysis path, each layer contains two $3\times3$ convolutions, each followed by a rectified linear unit (ReLU), and then a $2\times2$ max pooling with stride 2 for down-sampling. In the synthesis path, each layer consists of a $2\times2$ up-convolution with stride one in each dimension, followed by two $3\times3$ convolutions, each followed by a ReLU. Shortcut connections from layers of equal resolution in the analysis path provide the essential high-resolution features to the synthesis path~\cite{Ronneberger2015U}. Differing from the standard U-Net~\cite{Ronneberger2015U}, in the last layer we use four $1\times1$ convolutions followed by a softmax activation to reduce the number of output channels to the number of labels, which is 4 in our case. Therefore, our network can segment the CSF, GM, and WM tissues at once. At the same time, to keep the same shape after convolution, we use same padding. The architecture has $3.1\times10^{7}$ parameters in total.
As suggested in~\cite{Szegedy2016Rethinking}, we avoid bottlenecks by doubling the number of channels already before max pooling; we also adopt this scheme in the synthesis path. The input size of the network is $64\times64$ and the output is $64\times64\times4$, where the four channels represent the probabilities of background, CSF, GM, and WM for each pixel.
\begin{figure}[H]
\begin{center}
\includegraphics[width=1.\linewidth]{fig4}
\caption{U-shape network architecture.}
\end{center}
\end{figure}
Based on the goal of maximizing the DSC (Dice similarity coefficient; higher is better) of the brain tissues, we directly use a DSC loss function, as follows:
\begin{equation}
L(y,y')=m-\sum_{i=0}^{m-1}DSC(y_i,y_i')
\end{equation}
where $y_i$ and $y_i'$ are the predicted and ground-truth maps for class $i$, respectively. Since our goal is to segment 4 classes, including CSF, GM, WM, and background, $m$ is 4 here.
In the segmentation reconstruction stage, we select the class with the maximum probability among the four classes and return the corresponding label for each pixel. We also tried using MSE (L2 loss) as the loss function, but its results were worse than those obtained with the DSC loss.\\
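A minimal NumPy sketch of this loss (the function name and array shapes are illustrative, not the paper's implementation) could read:

```python
import numpy as np

def dice_loss(y_pred, y_true, eps=1e-7):
    """Multi-class soft-Dice loss of Eq. (1): L = m - sum_i DSC_i.

    y_pred, y_true: arrays of shape (H, W, m) holding per-class
    probabilities and one-hot ground truth (m = 4 classes here)."""
    m = y_true.shape[-1]
    loss = float(m)
    for i in range(m):
        p, t = y_pred[..., i], y_true[..., i]
        loss -= (2.0 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)
    return loss
```

A perfect prediction drives the loss to zero, while a completely wrong one approaches $m$.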
\textbf{Uncertainty criterion}
We utilize uncertainty to determine the ``worthiness'' of a candidate for annotation. As mentioned above, using the DSC loss and softmax activation, we obtain 4 class probabilities for each pixel. Following the BvSB criterion~\cite{Joshi2009Multi}, which takes the difference between the two highest class probabilities of each pixel as a measure of uncertainty, we calculate the Average BvSB defined below, because we segment a whole image at a time.
\begin{equation}
Average~BvSB=\frac{1}{n}\sum_{i=1}^{n}\Big(p(y_{Best}|x_i)-p(y_{Second-Best}|x_i)\Big)
\end{equation}
where $p(y_{Best}|x_i)$ and $p(y_{Second-Best}|x_i)$ are the probabilities of pixel $x_i$ belonging to the best and second-best class, respectively, and $n$ is the total number of pixels. A lower Average BvSB means higher uncertainty. Averaging the BvSB over the image reduces the effect of noise and is easy to compute.
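In code, the criterion above reduces to a per-pixel top-two margin averaged over the image; a small NumPy sketch (illustrative only):

```python
import numpy as np

def average_bvsb(prob):
    """Average best-versus-second-best margin over an image.

    prob: array of shape (n_pixels, n_classes) of softmax outputs.
    Lower values mean higher uncertainty."""
    top2 = np.sort(prob, axis=1)[:, -2:]   # two largest probs per pixel
    return (top2[:, 1] - top2[:, 0]).mean()
```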
\textbf{Annotation strategy}
First, we train the U-shape network model on very little training data and use it to test the unlabeled data. If the test results cannot meet the experts' requirements, we use the uncertainty estimated by the trained U-shape network to decide which data to annotate next. By adding the new annotations to the original training data, we iteratively retrain the model until its performance is satisfactory. In this way, we obtain a stable model that achieves state-of-the-art performance by annotating only the most effective data instead of the full unlabeled set.
\begin{figure}[H]
\begin{center}
\includegraphics[width=1.\linewidth]{active}
\caption{The process of Annotation.}
\end{center}
\end{figure}
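The iterative procedure above can be sketched as a simple driver loop (all helper functions are hypothetical placeholders for the actual training, uncertainty scoring, and expert check):

```python
def suggest_annotations(train, unlabeled, train_fn, uncertainty_fn,
                        good_enough_fn, per_round=1):
    """Hypothetical driver for the suggestive-annotation loop: retrain,
    then move the most uncertain samples (lowest Average BvSB margin)
    from the unlabeled pool into the training set."""
    model = train_fn(train, warm_start=None)
    while unlabeled and not good_enough_fn(model):
        ranked = sorted(unlabeled, key=lambda s: uncertainty_fn(model, s))
        for s in ranked[:per_round]:      # lowest margin = most uncertain
            unlabeled.remove(s)
            train.append(s)
        # warm-start from the previous round instead of training from scratch
        model = train_fn(train, warm_start=model)
    return model, train
```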
\subsection{Fine annotation}
In the annotation stage of Fig. 3, the annotator does not need to annotate the candidates selected by active learning from scratch; they only need to correct the wrong predictions, as shown in Fig. 6.
\begin{figure}[H]
\begin{center}
\includegraphics[width=1.\linewidth]{fig5}
\caption{New annotation. (a) original images; (b) complete ground truths; (c) single class ground truths; (d) single class predictions; (e) overlap graphs of ground truth and prediction(the dull red lines mean the common part of ground truths and predictions, the blue lines mean the predictions and the yellow lines mean the ground truths); (f) extra parts needed to be annotated.}
\end{center}
\end{figure}
As illustrated in Fig. 7, the green lines are the predictions and the blue lines are the ground truths. The doctor only needs to correct the green segment $ab$ instead of annotating from scratch; the real annotation effort is therefore the time spent annotating segment $ab$. We calculate the saved effort as follows.
\begin{equation}
\setlength\belowdisplayskip{-8pt}
saved \quad efforts=\frac{C}{L}\times100\%
\end{equation}
where $L$ is the length of the ground-truth contour and $C$ is the length of the overlapping part of the ground truth and the prediction.
\begin{figure}[H]
\setlength{\belowcaptionskip}{-1cm}
\begin{center}
\includegraphics[width=6cm,height=5cm]{fig6}
\caption{Diagram of calculating annotation efforts (point a and b are the intersections).}
\end{center}
\end{figure}
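The saved-effort ratio above can be approximated on binary boundary masks, using pixel counts as a proxy for the curve lengths $L$ and $C$ (an illustrative sketch):

```python
import numpy as np

def saved_effort(gt_boundary, pred_boundary):
    """Saved annotation effort of Eq. (3): 100 * C / L, with pixel
    counts of binary contour masks standing in for curve lengths."""
    L = gt_boundary.sum()
    C = np.logical_and(gt_boundary, pred_boundary).sum()
    return 100.0 * C / L
```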
\section{Experiments and Results}
\subsection{The most effective annotation candidates}
To thoroughly evaluate our method in different scenarios, we apply it to the IBSR18 dataset and the MRBrainS18 Challenge dataset. The IBSR18 dataset consists of 18 T1-weighted MRI volumes (01-18) with corresponding ground truth (GT). We use 10 samples as the full training data and the other 8 samples as testing data. The MRBrainS18 Challenge provides 7 labeled volumes (1, 4, 5, 7, 14, 070, 148) as training data but no testing data. We divide the 7 annotated volumes into two parts (5 as the full training data and 2 as testing data) and use 2-fold cross-validation.
Using the full training data, we compare our method with several state-of-the-art methods, including Moeskops' multi-scale ($25^2$, $51^2$, $75^2$ pixels) patch-wise CNN method~\cite{Moeskops2016Automatic} and Chen's voxel-based residual network~\cite{Chen2017VoxResNet}, on the two datasets. The results are reported below.
\begin{table}[H]
\centering
\begin{tabular}{cccccc}
\toprule
Method & CSF & GM & WM &Average &Time(s)\\
\midrule
U-shape network(ours)&\textbf{89.75}&\textbf{91.18}&\textbf{91.80}&\textbf{90.91}&\textbf{40}\\
VoxResNet\cite{Chen2017VoxResNet}& 81.03 & 87.91 & 89.73 & 86.2 & 100\\
Multi-scale CNN\cite{Moeskops2016Automatic}& 63.01 & 80.53 & 82.16 & 75.23 & 3500\\
\bottomrule
\end{tabular}
\caption{Comparison with full training data for IBSR18 dataset segmentation(DSC: \%).}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
\toprule
Method& CSF& GM&WM&Average\\
\midrule
U-shape network(ours)&\textbf{90.78}&\textbf{83.98}&\textbf{87.45}&\textbf{87.40}\\
VoxResNet\cite{Chen2017VoxResNet}& 90.38& 82.97&84.80&86.05\\
\bottomrule
\end{tabular}
\caption{Comparison with full training data for MRBrainS18 dataset segmentation(DSC: \%).}
\end{table}
From Table 1 and Table 2, we can see that our U-shape model achieves a considerable improvement in all columns.
Then, we evaluate the effectiveness of the proposed annotation strategy. We use the DSC on the testing data as the experts' acceptance criterion and the original ground truth as the new annotation in Fig. 3. For the IBSR18 dataset, we randomly initialize the small training set T with 2 training volumes and regard the remaining 8 training volumes as unlabeled data, while for the MRBrainS18 Challenge dataset, we randomly initialize T with 1 training volume and treat the remaining 4 as unlabeled data. We query the most uncertain sample X each time. After that, we add the newly labeled sample X to the training set T and retrain the model until the stopping criterion is satisfied. To save training time, we retrain the model using the weights of the last round as initialization instead of training from scratch.
\begin{figure}[H]
\begin{center}
\includegraphics[width=1.\linewidth]{fig7}
\caption{Comparison using limited training data for MR brain tissue segmentation.}
\end{center}
\end{figure}
As shown in Fig. 8, we compare our method with random query (randomly requesting annotations). Our annotation strategy is consistently better than random query, and state-of-the-art performance can be achieved using only 60\% of the training data.
\subsection{Efforts evaluation on each annotation candidate}
As mentioned in Section 3.1, we randomly initialize the small training set T with 2 training volumes on the IBSR18 dataset and iteratively train the U-shape network to select the most effective sample for annotation. For the MRBrainS18 Challenge dataset, starting with a completely empty labeled set, we use transfer learning to obtain the first training sample. Using the annotation platform of Section 2.2, the annotator only needs to correct the wrong predictions. The saved annotation efforts are reported below.
\begin{table}[H]
\centering
\begin{tabular}{cccccc}
\toprule
Class& Iteration1& Iteration2&Iteration3&Iteration4&mean\\
\midrule
CSF&73.04&47.96&49.56&46.82&54.35\\
GM& 63.38& 39.68&43.57&41.58&47.05\\
WM& 62.80& 39.57&43.24&43.13&47.19\\
\bottomrule
\end{tabular}
\caption{Saved efforts (\%) on IBSR18 dataset.}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
\toprule
Class& Iteration1& Iteration2&Iteration3&mean\\
\midrule
CSF&2.46&56.28&73.71&44.15\\
GM& 34.87& 42.25&56.83&44.65\\
WM& 48.79& 49.03&49.31&49.04\\
\bottomrule
\end{tabular}
\caption{Saved efforts (\%) on MRBrainS18 Challenge dataset.}
\end{table}
\subsection{Implementation details and computation cost}
In our work, when using the full training data, the network was trained for 500 epochs on a single NVIDIA TitanX GPU. To prevent over-fitting, we applied early stopping: training was automatically terminated when the validation accuracy did not increase for 30 epochs (Fig. 9), which took approximately 3 hours in total. We used the glorot\_uniform initialization and the Adam optimizer in Keras. The segmentation runtime is 40-50 seconds for each testing volume (size $256\times128\times256$). In the retraining process, training time was about 1 hour thanks to initializing with the previous weights.
\begin{figure}[H]
\begin{center}
\includegraphics[width=8cm,height=6cm]{fig8}
\caption{Dice loss as a function of a training epoch for our proposed models.(label\_0\_dice-coef, label\_1\_dice-coef , label\_2\_dice-coef , label\_3\_dice-coef means the DSC of background, CSF, GM and WM respectively.)}
\end{center}
\end{figure}
\section{Discussion}
Comparing the segmentation results illustrated in Fig. 10, we observe that different models have different strengths. Although the U-shape network model performs better than the VoxResNet method in general, as the red boxes show, the green box indicates that the VoxResNet model also has its advantages.
\begin{figure}[H]
\begin{center}
\includegraphics[width=1.\linewidth]{fig9}
\caption{Visual results of different methods.}
\end{center}
\end{figure}
To explore the relationship between data and deep learning models, we combined the suggestive annotation strategy of Section 2.1 with the VoxResNet model. We find that the most effective training data selected by the VoxResNet model differ from those selected by the U-shape network model. For the U-shape network model, state-of-the-art performance can be achieved using volumes 03, 06, and 10-13 of the IBSR18 dataset as training data, while for the VoxResNet model the most effective training data are 01, 02, and 10-14. Meanwhile, for the MRBrainS18 dataset, the most effective data for the U-shape network model are 1, 4, and 148, while they are 1, 5, and 7 for the VoxResNet model. In other words, ``worthiness'' differs from model to model: each model has its own strengths and weaknesses. Therefore, combining the advantages of different models is a promising direction, which we will continue to investigate in future studies.
In this paper, we conducted experiments on brain MR data, which are difficult to annotate due to the highly complex structures and small grayscale differences between tissue classes. We will need to evaluate the proposed method on more data types and more clinical data in the future. Compared with MR, CT images have the advantage of high density resolution and are extensively used in radiotherapy. However, doctors always need to spend much time on delineation before making a radiotherapy plan, so our proposed method is expected to dramatically alleviate this burden. Moreover, besides the brain, our method can be applied to other regions, such as chest and abdominal CT.
\section{Conclusions}
In this paper, we propose a criterion to estimate the effort of doctors annotating medical images. There are two main contributions:
(1) an effective suggestive annotation strategy to select the most effective training data, which attains state-of-the-art performance using only 60\% of the training data; (2) an annotation platform to alleviate the annotation effort on each selected candidate, which cuts the annotation effort by at least 44\%, 44\%, and 47\% for CSF, GM, and WM, respectively.
\bibliographystyle{IEEEtranS}
\section{Introduction}
Nowadays, arbitrarily large datasets of high-resolution images have become available thanks to almost unlimited sources of data, such as videos acquired by autonomous vehicles with multi-camera rigs and Internet pictures uploaded by millions of users on Social Media. While this large amount of data can be used to effectively reconstruct a precise 3D view of the world, the increasing size and high redundancy introduce several challenges for 3D reconstruction. A traditional image-based 3D reconstruction pipeline is composed by two main steps: Structure From Motion (SFM) and Multi-View Stereo (MVS). SFM algorithms produce camera poses and sparse point clouds with raw images as input~\cite{1_schonberger2016structure}. Then, MVS is used to generate depth and normal maps for each image, which can be converted either to dense point clouds or meshed surfaces~\cite{2_seitz2006comparison}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig_1_final.pdf}
\vspace*{1mm}
\caption{Overview of the proposed approach.}
\label{fig:overview}
\end{figure*}
To deal with such efficiency issues, view selection algorithms have been proposed in order to select a representative subset of input images~\cite{3_mauro2014integer,4_mauro2014unified}. Moreover, each image shares common visibility information only with a local group of neighboring views, therefore the whole dataset can be divided in partially overlapping clusters to be processed simultaneously and independently~\cite{5_ladikos2009spectral,6_furukawa2010towards,7_mauro2013overlapping,8_zhang2015joint}. Several approaches have been developed to reconstruct large buildings using community photo collections~\cite{9_agarwal2011building,6_furukawa2010towards,8_zhang2015joint}. These works rely on unique and recurrent architectonic features that belong to the building of interest, when computing shared visibility information among images. However, urban scenarios exhibit a peculiar set of challenges and they have rarely been targeted in detail~\cite{10_akbarzadeh2006towards}. The presence of large portions of textureless surfaces, such as roads, vegetation or sky, makes it difficult to extract well-distributed and well-triangulated keypoints during the SFM phase.
In this paper, a novel framework for efficient view clustering and selection for images of urban scenarios is presented. The proposed approach is specifically devoted to city-scale 3D reconstruction and designed to solve the aforementioned issues, assuming an arbitrarily large dataset of images acquired by a moving vehicle. The method overview is shown in figure~\ref{fig:overview}: starting from the output of a SFM module, the view clustering algorithm builds uniform clusters in the observed world, while the view selection step computes the optimal subset of views in parallel for each cluster. Then, each cluster is reconstructed separately with any MVS method of choice and partial results are merged to get the full 3D model of the scene.
\section{Related Work\label{sect:related}}
Scalability issues for large-scale 3D reconstruction mainly arise in the literature in the context of architectural datasets collected from unstructured Internet images. Furukawa \textit{et al.}~\cite{6_furukawa2010towards} propose to perform global view selection on the whole set of images to remove redundancy and to build a visibility graph with the remaining cameras, whose pairwise similarities are collected in a matrix that represents the adjacency matrix of the visibility graph. Then, an optimization procedure is applied to iteratively divide the graph into clusters using normalized cuts, enforcing a size constraint, and to add cameras back if a coverage constraint is violated. This process is repeated until convergence.
Several other approaches are based on this visibility graph formulation. Ladikos \textit{et al.}~\cite{5_ladikos2009spectral} apply spectral graph theory to the similarity matrix and use mean shift to select the number of clusters, while Mauro \textit{et al.}~\cite{7_mauro2013overlapping} employ the game-theoretic model of dominant sets to find regular overlapping clusters. In a subsequent work~\cite{3_mauro2014integer}, Mauro \textit{et al.} propose to place selection after clustering and formulate an ILP problem with cameras as binary variables. The goal is to select the minimum number of views for each cluster, such that coverage and matchability are guaranteed. The same approach is adopted in this work. However, their formulation requires executing the expensive Bron-Kerbosch algorithm~\cite{11_bron1973algorithm} for each keypoint in the cluster. Since neighboring keypoints are likely to share the same camera subgraph, a more efficient alternative is provided in Section~\ref{sect:selection}.
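As a rough illustration of such a coverage-driven selection objective (a greedy set-cover heuristic, not the ILP formulation discussed here):

```python
def greedy_view_selection(visibility, min_cover=1):
    """Greedy stand-in for ILP view selection (illustration only):
    keep adding the camera that covers the most still-uncovered
    keypoints until every keypoint is seen by >= min_cover views.

    visibility: dict camera -> set of keypoint ids it observes."""
    need = {k: min_cover for kps in visibility.values() for k in kps}
    remaining = set(visibility)
    selected = []
    while remaining and any(v > 0 for v in need.values()):
        best = max(remaining,
                   key=lambda c: sum(need[k] > 0 for k in visibility[c]))
        remaining.discard(best)
        selected.append(best)
        for k in visibility[best]:
            need[k] -= 1
    return selected
```

Unlike the ILP, the greedy choice only approximates the minimum subset, but it makes the coverage constraint explicit.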
Furthermore, all the methods presented so far cluster images according on their relative visibility information and without considering the 3D structure of the scene. Zhang \textit{et al.}~\cite{8_zhang2015joint} suggest to perform joint camera clustering and surface segmentation, which are formulated as a constrained energy minimization problem. Similarly, the clustering algorithm proposed in this work operates directly in 3D by exploiting the approximately uniform distribution of poses and geometry as produced by a moving vehicle.
It must be underlined that the naive solution of clustering images using temporal information from videos~\cite{10_akbarzadeh2006towards} is not ideal, since it does not consider multi-camera settings, where optimal viewpoints might be acquired by different sensors at very different time instants, and it fails when certain regions of the world are observed multiple times, since new clusters would be created each time, despite belonging to the same region.
Therefore, the proposed approach makes the following contribution to existing literature: (i) a novel framework that targets specifically urban scenarios and data acquired by a moving vehicle, which exhibit a unique set of challenges with respect to architectural Internet datasets; (ii) a clustering algorithm that is independent from pairwise relationships between cameras; (iii) a more efficient formulation of the ILP problem for view selection with respect to~\cite{3_mauro2014integer}.
\section{View Clustering\label{sect:clustering}}
\begin{figure*}
\includegraphics[width = 0.99\textwidth]{clustering.pdf}
\caption{View clustering example: the raw 2D grid built from SFM (left), the full sequence clustered and filtered (center) and overlapping clusters in more detail (right). Each cluster is represented with its borders and in a different color.\label{fig:clustering}}
\end{figure*}
We assume that images are acquired by a moving vehicle with a sensor suite of $N_{cams}$ cameras with framerate $f$ and processed by a SFM module to obtain intrinsic and extrinsic parameters, as well as a set of sparse keypoints, for each image. These data are the input to the view clustering algorithm, which starts by building a 2D grid on the $(x,y)$ plane with block size $(x_b,y_b)$ and overlap $d_{overlap}$, given the input range of the sparse keypoints. This process is shown in figure \ref{fig:clustering} (left): since the majority of the blocks are empty and several contain only noisy keypoints very distant from the vehicle trajectory, a filtering step is required. To this end, each keypoint is assigned to the corresponding blocks simply by projecting it onto the $(x,y)$ plane, and blocks without associated points are removed. In this way, the sparse input point cloud is divided into partially overlapping sets of keypoints which can be processed independently and in parallel.
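A sketch of the overlapping block assignment (illustrative; block indices are derived directly from the overlap-expanded footprints):

```python
import math
from collections import defaultdict

def assign_keypoints(points, block, overlap):
    """Assign 3D keypoints to overlapping 2D grid blocks.

    points: iterable of (x, y, z) keypoints; block: (x_b, y_b) size;
    overlap: d_overlap. A point joins every block whose overlap-expanded
    footprint contains its (x, y) projection, so blocks that receive no
    point are simply never created."""
    xb, yb = block
    clusters = defaultdict(list)
    for p in points:
        x, y = p[0], p[1]
        for i in range(math.floor((x - overlap) / xb),
                       math.floor((x + overlap) / xb) + 1):
            for j in range(math.floor((y - overlap) / yb),
                           math.floor((y + overlap) / yb) + 1):
                clusters[(i, j)].append(p)
    return dict(clusters)
```

A point near a block border is duplicated into the neighboring block, which realizes the desired cluster overlap.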
At this point, cameras must be associated to clusters. Intuitively, the assignment should be based on the number and quality of keypoints seen by each camera in each cluster. However, in urban scenarios with large textureless areas, the SFM module typically produces very few keypoints, mostly concentrated in small textured patches of the world. Therefore, the 3D information of each non-empty cluster is augmented by sampling uniform points with resolution $r$ within the boundaries of the corresponding block. This allows cameras to be assigned to clusters within their field of view even if SFM did not extract any keypoint at that specific location.
Then, each set of points is projected onto each camera lying within a given distance from its centroid, and all the cameras that see at least one point in the cluster are associated with it. Finally, clusters with too few cameras are iteratively merged with their neighbors, until a minimum target set of views is reached for each cluster. This lower bound is set to 10 views in the experiments, in order to provide enough information for 3D reconstruction. The output of the view clustering algorithm is a set of $C$ independent clusters of cameras with common visibility, shown in figure \ref{fig:clustering} (center) for a full sequence and in figure \ref{fig:clustering} (right) in greater detail.
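The visibility test behind camera association can be sketched with a simple pinhole projection. The world-to-camera convention, the intrinsics layout, and the helper names are assumptions for illustration, not the paper's implementation.

```python
def project(R, t, f, cx, cy, p):
    """Pinhole projection of world point p; (R, t) maps world to camera."""
    X = [sum(R[i][k] * p[k] for k in range(3)) + t[i] for i in range(3)]
    if X[2] <= 0.0:                     # point behind the camera
        return None
    return (f * X[0] / X[2] + cx, f * X[1] / X[2] + cy)

def cameras_for_cluster(points, cameras, w=3840, h=1920):
    """Associate to the cluster every camera seeing at least one point."""
    selected = []
    for cam_id, (R, t, f, cx, cy) in cameras.items():
        for p in points:
            uv = project(R, t, f, cx, cy, p)
            if uv is not None and 0 <= uv[0] < w and 0 <= uv[1] < h:
                selected.append(cam_id)
                break                   # one visible point suffices
    return selected
```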
The key novelty of the proposed method is that it does not require the computation of complex pairwise relationships between cameras for the whole dataset, differently from previous literature~\cite{6_furukawa2010towards,3_mauro2014integer}. The use of the full similarity matrix scales quadratically as $O(N^2)$ and quickly becomes impractical in urban scenarios, where up to $N = 60 \times f \times N_{cams}$ images are acquired every minute. With the presented approach, a camera can be associated with at most $K$ neighboring clusters, and this redundancy is independent of the size of the scene. In the worst-case scenario, each cluster has $N_c = K \frac{N}{C}$ cameras associated. Even in this unlikely case, building $C$ separate and smaller visibility graphs is still significantly cheaper, requiring $O\left(\sum_{c=1}^{C} N_c^2\right)$ operations:
\begin{equation}
\sum_{c=1}^{C} N_c^2 = \frac{K^2N^2}{C} < N^2 \Longleftrightarrow K < \sqrt{C}
\label{eq:cond}
\end{equation}
In practical situations, a camera is associated with at most one cluster and its immediate neighbors (i.e. $K \leq 8$ by design), while the number of clusters quickly grows to several hundreds or thousands as the vehicle moves. This allows the approach to scale up to arbitrarily large scenarios, since the improvement gap grows as the dataset size increases. Furthermore, each block can be processed independently, thus boosting performance thanks to the high degree of parallelization that can be obtained.
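The worst-case argument of equation~\eqref{eq:cond} can be checked numerically; the figures below use the Seq.~5 size reported later in Table~\ref{tab:quant} together with the design bound $K \leq 8$.

```python
import math

def per_cluster_cost(N, C, K):
    """Worst-case pairwise work for C per-cluster visibility graphs.

    With N_c = K * N / C cameras per cluster, the total cost is
    C * N_c^2 = K^2 * N^2 / C, which beats the global N^2 iff K < sqrt(C).
    """
    Nc = K * N / C
    return C * Nc ** 2

N, C, K = 11956, 324, 8        # Seq. 5 size, with the design bound K <= 8
assert K < math.sqrt(C)        # efficiency condition K < sqrt(C) holds
assert per_cluster_cost(N, C, K) < N ** 2
```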
\section{View Selection\label{sect:selection}}
At the end of the view clustering algorithm, every camera is added to all clusters in which it sees associated points. This is likely to produce a highly redundant set of cameras for each block, since even a single point suffices to associate a camera with a cluster. A view selection algorithm is then needed in order to choose the optimal subset of views for each cluster. In this context, \textit{optimal} means the smallest subset of cameras guaranteeing that each point in the cluster is seen by at least $N_{vis}$ cameras and each camera has at least $N_{match}$ other cameras with which it can be successfully matched.
Two cameras are considered \textit{matchable} if they see a sufficient number of common points. This differs from previous literature, where the typical similarity measure is the average Gaussian-weighted triangulation angle between the two camera centers and the common keypoints~\cite{6_furukawa2010towards,3_mauro2014integer}. In urban scenarios with very sparse features, this is not a reliable metric, and it would require tuning the Gaussian parameters for each cluster, due to the high variability and sparsity of urban keypoints. Note that as the sampling resolution $r \rightarrow 0$, the number of common points essentially measures the intersection between the two camera frustums and the cluster itself.
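A sketch of this matchability measure, assuming each camera's visibility within the cluster is stored as a set of point ids; zeroing the diagonal is an assumption of the sketch, so that a camera does not count toward its own matchability.

```python
def similarity_matrix(visible):
    """Pairwise similarity as the number of shared cluster points.

    visible[i] is the set of point ids (keypoints plus sampled points)
    seen by camera i; the diagonal is left at zero.
    """
    n = len(visible)
    S = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            S[i][j] = S[j][i] = len(visible[i] & visible[j])
    return S
```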
Therefore, an Integer Linear Programming (ILP) problem is formulated for each cluster, with binary variables $x_i \in \{0,1\}$ representing cameras:
\begin{equation}
\begin{array}{ll@{}ll}
\min & \displaystyle\sum\limits_{i=1}^{N_{c}} x_{i} \\ [\bigskipamount]
\mathrm{s.t.} & \displaystyle\sum\limits_{i=1}^{N_{c}} x_{i} \geq N_{min} & \\ [\bigskipamount]
& \mathbf{A}_i^\top \mathbf{x} \geq 0 & \quad \forall i=1 ,\dots, N_{c} & \\ [\bigskipamount]
& \mathbf{B}_j^\top \mathbf{x} \geq N_{vis} & \quad \forall j = 1, \dots, P_c
\end{array}
\end{equation}
The constraint $\mathbf{A}_i^\top \mathbf{x} \geq 0$ requires each selected camera in the cluster to have at least $N_{match}$ matchable cameras. A linear formulation can be derived from the similarity matrix $\mathbf{S}_c$ of the considered cluster as follows. Let $\tilde{\mathbf{S}}_c = \mathbf{S}_c > 0$ be a binary matrix with $\tilde{s}_{c,ij} = 1$ if cameras $i$ and $j$ share common keypoints, and $0$ otherwise. The constraint vectors $\mathbf{A}_i$ are computed as the rows of the matrix $\mathbf{A} = \tilde{\mathbf{S}}_c - N_{match} \cdot \mathbf{I}$, where $\mathbf{I}$ is the identity matrix of size $N_c \times N_c$. This formulation effectively activates the constraint only for the selected cameras, ignoring the others.
The visibility constraint vectors $\mathbf{B}_j$ for each point in the cluster consist of binary coefficients $b_{ij} = 1$ if point $j$ is visible in camera $i$, and $0$ otherwise. This is the first improvement with respect to the ILP formulation in~\cite{3_mauro2014integer}, where the coverage constraint requires that each point be seen by at least one clique in the visibility subgraph associated with that point. That formulation implies that the Bron-Kerbosch algorithm for finding maximal cliques must be executed separately for each point. However, such a requirement is both inefficient and redundant, since the same set of cameras is likely to see multiple points.
As a second important difference, the efficiency of the algorithm is boosted further by providing a good initial guess to the ILP solver. A minimum number of cameras $N_{min}$ must be selected for each cluster, in order to guarantee enough information for the reconstruction. We select this threshold adaptively according to the cluster size and clamp it between boundary values $N_{low}$ and $N_{high}$. Then, we sort cameras by the number of visible points and force the solver to select the $N_{min}$ views with the best visibility.
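The whole selection problem can be illustrated on a toy cluster. Brute-force enumeration stands in here for the ILP solver (the paper uses OR-Tools) and is only viable for a handful of cameras, but it exercises the same constraints, including the fact that the matchability constraint is active only for selected cameras.

```python
from itertools import combinations

def select_views(S, B, N_vis=2, N_match=2, N_min=2):
    """Brute-force stand-in for the view-selection ILP on a toy cluster.

    S[i][j] counts points shared by cameras i and j (zero diagonal);
    B[j][i] = 1 if point j is visible in camera i.  Returns the smallest
    subset such that every point is seen by >= N_vis selected cameras
    and every *selected* camera has >= N_match matchable selected
    partners (the A x >= 0 constraint, active only when x_i = 1).
    """
    n = len(S)
    for size in range(max(N_min, 1), n + 1):
        for subset in combinations(range(n), size):
            sel = set(subset)
            ok_vis = all(sum(row[i] for i in sel) >= N_vis for row in B)
            ok_match = all(sum(1 for j in sel if j != i and S[i][j] > 0)
                           >= N_match for i in sel)
            if ok_vis and ok_match:
                return sel
    return None
```

With four mutually matchable cameras and two points, no pair of cameras can satisfy $N_{match}=2$, so the solver is forced up to a triple.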
\section{Results\label{sect:results}}
\subsection{Experimental Setup}
The proposed framework has been implemented in C++ and tested on a consumer Intel Core i5-5300U 2.30 GHz CPU. The code relies on the open-source library OR-Tools~\cite{ortools} for solving the ILP problem during view selection. The approach is independent of the specific choice of both the SFM module producing the required input and the MVS software generating the dense reconstruction. Camera poses and sparse keypoints are obtained by a custom implementation of SFM with bundle adjustment, while the state-of-the-art MVS algorithm proposed in~\cite{13_xu2020planar} has been chosen, with the implementation provided by the authors.
To the best of our knowledge, the only two available large-scale urban datasets with a multi-camera setting are nuScenes~\cite{14_nuscenes2019} and the recently released DDAD~\cite{15_packnet}. However, they contain short sequences (20 s and 5-10 s, respectively) at a low framerate (12 Hz and 10 Hz, respectively), making it difficult to evaluate the proposed approach on those data. Therefore, the algorithm is validated on custom sequences acquired in the city of Parma, Italy, representing diverse real-world situations. The vehicle is equipped with $N_{cams} = 7$ cameras with resolution $3840 \times 1920$ and framerate $f = 30$~Hz.
The following set of parameters has been used for experiments: block size $(x_b, y_b) = (20, 20)$ m, with $d_{overlap} = 2$ m; sampling resolution $r = 1$ m; $N_{vis} = N_{match} = 2$, $N_{low}~=~10$ and $N_{high}~=~30$ for view selection.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Dataset & Seq. 1 & Seq. 2 & Seq. 3 & Seq. 4 & Seq. 5\\
\hline\hline
\# views ($N$) & 5131 & 6782 & 8477 & 9912 & 11956 \\
\# keypoints ($P$) & 389621 & 321696 & 349616 & 372434 & 393246 \\
\# clusters ($C$) & 156 & 173 & 200 & 261 & 324 \\
\hline
$N$ after clustering & 27103 & 32757 & 39014 & 49758 & 58364 \\
$K$ after clustering & 5.28 & 4.83 & 4.60 & 5.02 & 4.88 \\
Avg. $N_c$ after clustering & 173.75 & 189.34 & 195.07 & 190.64 & 180.13 \\
$t_{clustering}$ (s) & 22.35 & 28.52 & 32.7 & 37.32 & 40.89 \\
\hline
$N$ after selection & 4951 & 5842 & 6266 & 8411 & 10138 \\
$K$ after selection & 0.96 & 0.86 & 0.74 & 0.84 & 0.85\\
Avg. $N_c$ after selection & 31.73 & 33.76 & 31.33 & 32.22 & 31.29 \\
$t_{selection}$ (s) & 50.54 & 76.13 & 92.4 & 124.85 & 144.02 \\
\hline
$t_{tot}$ (s) & 72.89 & 104.65 & 125.1 & 162.17 & 184.91\\
\hline
\end{tabular}
\end{center}
\caption{Quantitative results for each dataset and each stage of the pipeline.}
\label{tab:quant}
\end{table*}
\subsection{Quantitative Evaluation}
Table~\ref{tab:quant} provides a quantitative description of the input sequences and the algorithm results for each stage of the pipeline. The first thing to note is that urban data contain relatively few keypoints ($P$) compared to architectural image sets of similar size~\cite{6_furukawa2010towards}. This justifies the choices of sampling additional points for each cluster and avoiding similarity measures based on the triangulation angle. Secondly, the clustering phase produces extremely redundant data ($N$ after clustering), with each view assigned to approximately 5 different clusters on average ($K$ after clustering). Most of the clustering time ($t_{clustering}$) is spent on the camera association procedure, since several hundred views must be tested for each cluster. When processing extremely large sequences with millions of images, this phase can be naturally parallelized, since each cluster is independent of the others after the sampling step. Moreover, the optimization algorithm during view selection exploits the clustering redundancy to automatically remove useless views. The number of output views ($N$ after selection) is consistently lower than the input for each sequence, even considering that some cameras covering the overlap between two clusters are assigned to both. Finally, the efficiency condition imposed in equation~\eqref{eq:cond} ($K < \sqrt{C}$) is satisfied for each sequence by a large margin, and this gap increases with the dataset size.
Direct quantitative comparison with state-of-the-art approaches is difficult, since they all target a substantially different scenario \cite{8_zhang2015joint,6_furukawa2010towards}, where the assumption of uniformly distributed poses is violated. In terms of running time as a function of the number of input images, figure \ref{fig:run} shows that the proposed method is much faster and scales linearly with the dataset size, while \cite{8_zhang2015joint,6_furukawa2010towards} both require several hours to process a few thousand images. This demonstrates that they are not suited for urban applications, where thousands of images are acquired every minute by the vehicle. The considered ILP baseline \cite{3_mauro2014integer} shares the same issue, and numerical results are available only for datasets smaller by an order of magnitude: the reported runtime for~706 images divided into 36 clusters is around 3~minutes. However, the Bron-Kerbosch algorithm scales exponentially as $O(3^{\frac{n}{3}})$ with the number of cameras $n$. In our collected sequences, an instance of such an algorithm with $n \approx 10^2$ would be executed for each keypoint of each cluster, making the approach impractical even for very short scenes. On the other hand, our framework can process approximately 4000~images per minute.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{run_log.pdf}
\caption{Running time comparison between our method and state-of-the-art solutions \cite{8_zhang2015joint,6_furukawa2010towards}. The Y axis is in log-scale for better visualization.}
\label{fig:run}
\end{figure}
\subsection{Qualitative Evaluation}
Figure \ref{fig:results} provides a visualization of clustered cameras, before and after view selection, in order to show how redundancy is exploited and reduced by the optimization algorithm. Three main situations arise in urban data: (i) when all the views lie along a straight line outside the cluster boundaries (figure \ref{fig:qual_clust}, left), the ILP solver selects a well-distributed subset of cameras (figure \ref{fig:qual_sel}, left); (ii) when the vehicle trajectory intersects the cluster (figure \ref{fig:qual_clust}, center), two disjoint sets of cameras are selected (figure~\ref{fig:qual_sel}, center); (iii) for complex trajectories such as roundabouts (figure~\ref{fig:qual_clust}, right), the framework generalizes well by selecting cameras from diverse viewpoints (figure~\ref{fig:qual_sel}, right).
\begin{figure*}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{qual_clust.png}
\caption{Clustering: basic (left), intersecting (center) and complex (right). \label{fig:qual_clust}}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{qual_sel.png}
\caption{Selection: basic (left), intersecting (center) and complex (right). \label{fig:qual_sel}}
\end{subfigure}
\caption{From clustering to selection: each cluster has black dashed borders, keypoints (blue) and cameras with viewing direction (red).\label{fig:results}}
\end{figure*}
Furthermore, figure~\ref{fig:3d} shows that the proposed method effectively clusters images based on shared visual content, which makes it possible to produce a detailed 3D reconstruction of the world. Each point cloud has been cropped at the corresponding cluster boundaries and is stored in this way for the subsequent global fusion step. Only a qualitative evaluation of the resulting reconstruction is provided, since performance depends solely on the choice of the MVS algorithm. The goal of the presented approach is to show that 3D reconstruction can be achieved in very large-scale scenarios where processing all the images in a single batch is not practically feasible. For a detailed comparison of state-of-the-art MVS algorithms for large-scale outdoor scenes, the reader can refer to~\cite{tanks}.
\begin{figure*}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{6_full.png}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{22_full.png}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{128_full.png}
\end{subfigure}
\caption{Clusters of images (left) and corresponding 3D point cloud (right).}
\label{fig:3d}
\end{figure*}
Finally, a very short sequence with a few hundred images is considered, in order to have a relatively small instance of the problem for which full reconstruction in a single batch is still possible. Figure~\ref{fig:batch} shows a comparison of the point clouds obtained by considering the whole set of images at once (left) and by merging multiple clusters computed with the proposed algorithm (right). While ground truth for numerical evaluation is not available, it can be seen that our divide-and-conquer approach maintains a good reconstruction quality, while being able to scale up to entire cities, where batch reconstruction is not an option.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{pcl_comp_full.png}
\caption{Point cloud comparison between batch reconstruction (left) and merged local clusters computed with the proposed algorithm (right).}
\label{fig:batch}
\end{figure*}
\section{Conclusion\label{sect:conclu}}
In this work, a method for enabling city-scale 3D reconstruction from images acquired by a moving vehicle has been proposed. The algorithm builds a set of partially overlapping clusters over the vehicle trajectory, then selects the optimal subset of views to compute a 3D point cloud, independently for each cluster. All the local point clouds are then fused together in order to obtain a full 3D model of the sequence. The presented framework focuses on efficiency, by clustering images independently of their pairwise similarities and providing a novel formulation for the view selection step. These contributions reduce the processing time with respect to state-of-the-art methods that were not designed for the urban scenario, allowing the algorithm to scale up to arbitrarily large datasets.
\bibliographystyle{splncs04}
\section{Introduction} \label{sec:intro} \setcounter{equation}{0}
While the QCD gauge coupling is not perturbative at low energies, it is
nonetheless possible to formulate an expansion in terms of the parameter
$1/N_c$, where $N_c$ is the number of colors \cite{tHooft}. In recent years,
effective field theories for baryons have been constructed that exploit this
fact, allowing physical observables to be computed to any desired
order in $1/N_c$. For the ground state baryons, the SU(6) {\bf 56}-plet,
the large-$N_c$ approach has been used successfully to study SU(6)
spin-flavor symmetry \cite{DM,Jenk1,DJM1,CGO,Luty1},
masses \cite{DJM1,Jenk2,DJM2,JL}, magnetic
moments \cite{DJM1,DJM2,JL,JM,Luty2,DDJM}, and axial current matrix
elements \cite{DM,DJM1,DJM2,DDJM}.
Whether the large-$N_c$ framework works equally well in describing
the phenomenology of excited baryon multiplets is a question that is
under active investigation. Recent attention has focused on the $\ell=1$
orbitally-excited baryons, the SU(6) {\bf 70}-plet for $N_c=3$.
There have been studies of the masses \cite{Goity,CCGL}, strong
decays \cite{CGKM,PY2}, axial current matrix elements \cite{PY1}, as well
as the radiative decays of these states to ordinary nucleons \cite{CC}.
However, the radiative decays to $\Delta$ final states have not been
considered, and are of considerable interest, as we describe
below. These decays will be the main focus of this paper.
There is a good reason why much of the past literature \cite{old,gc} has
focused on the radiative decays of excited baryons to nucleons rather than
deltas: these are the decays for which there is experimental data. The
helicity amplitudes that describe the decays to nucleons are extracted
experimentally by considering instead the time-reversed process, pion
photoproduction. Information is available for the decays to nucleons simply
because the fixed targets used in experiment are made of nucleons, not
deltas. To study the decays of excited baryons (in our case $N^*$'s or
$\Delta^*$'s) to $\Delta \gamma$ not only requires that we produce enough
excited states to make up for the small electromagnetic branching fractions,
but also that we reconstruct a sufficient fraction of the events given the
final three decay products. The 1998 Review of Particle
Physics \cite{rpp98} lists no data for the $\Delta \gamma$ partial decay
widths, nor for the more challenging helicity amplitudes, which require
data on the angular distributions of the decays.
One reason why an analysis of the decays to $\Delta \gamma$ is of timely
interest is the possibility that the experimental situation may soon change,
given, for example, the ongoing work at the Continuous Electron Beam
Accelerator Facility (CEBAF). CEBAF's high integrated beam luminosity,
combined with the CEBAF Large Angle Spectrometer's (CLAS) efficiency for
detecting photons in the forward direction makes study of the decays
$N^*\rightarrow \Delta \gamma$ and $\Delta^* \rightarrow \Delta \gamma$
a possibility worthy of consideration. The Crystal Ball detector at
Brookhaven may also allow study of the $\Delta \gamma$ decays, with
excited baryons produced via a pion rather than a photon beam.
In this paper we will present our predictions for the $\Delta \gamma$ decay
amplitudes based on the large-$N_c$ operator analysis of Ref.~\cite{CC}. The
leading-order predictions following from single quark as well as single
plus multiquark interactions are presented in algebraic and numerical
form in the following section. Study of the $\Delta \gamma$ decays
can give us further information on the significance of
the multibody interactions that are included systematically in the
large-$N_c$ approach, but are less frequently taken into account in quark
model analyses. While we will not embark on any detailed accelerator
simulations to address the question of precisely how well the radiative
decays to $\Delta$'s can be measured at various facilities, we will provide
in the third section a discussion of how the amplitudes we predict can be
extracted from the differential decay widths measured in experiment. We
hope this will provide strong motivation for experimenters to explore how
well they can do. In the final section we summarize our conclusions.
\section{Decay Amplitudes} \label{sec:damps} \setcounter{equation}{0}
In this section, we present our predictions for the
$N^*\rightarrow \Delta \gamma$ and $\Delta^* \rightarrow \Delta \gamma$
helicity amplitudes. These follow directly from the large $N_c$
operator analysis of Ref.~\cite{CC}. The formulation of an effective
theory for baryons in the large-$N_c$ limit has been discussed extensively
in Refs.~\cite{CGO,CGKM}, so we will only provide a brief summary here:
Baryon states can be conveniently labeled by the SU(6)$\times$O(3)
quantum numbers of their valence quarks. For baryons of small total
spin within any given spin-flavor multiplet, this symmetry becomes
exact as $N_c \rightarrow \infty$, even if the valence quarks
are light compared to $\Lambda_{QCD}$. Thus, for the low spin states, this
spin-flavor space provides us with a basis for performing an operator
analysis. Operators with desired transformation properties may be formed
by taking products of spin-flavor generators, O(3) generators, momenta, and
polarizations of the states, and may involve one or more quark lines.
Operators that act on $n$ quark lines have coefficients suppressed by
$1/N_c^{n-1}$, reflecting the $n-1$ gluon exchanges necessary to generate
the operator in QCD. The $1/N_c$ power counting becomes nontrivial when one
takes into account that compensating factors of $N_c$ may arise in the matrix
elements of the operators, when a matrix element involves a coherent sum
over O($N_c$) quark lines. For low spin states, sums of the form
\begin{equation}
\sum_\alpha \sigma^i_\alpha \,\,\, ,
\end{equation}
where $\sigma^i$ is a Pauli spin matrix, are incoherent, and of order one.
On the other hand, sums of the form
\begin{equation}
\sum_\alpha \lambda^a_\alpha \,\,\, \mbox{ or } \,\,\,
\sum_\alpha \lambda^a_\alpha \sigma^i_\alpha \,\,\, ,
\end{equation}
where $\lambda$ is an SU(3) flavor matrix, are often coherent on at least
some of the states. To isolate the corrections to a physical observable
that appear at a given order in the $1/N_c$ expansion, one must take into
account both the factors of $1/N_c$ that appear in the Lagrangian, as well
as the compensating factors of $N_c$ that originate from taking matrix
elements.
The analysis of radiative decays in Ref.~\cite{CC} focused on the one-
and two-body operators that contribute to the helicity amplitudes at
leading order, ${\cal O}$($N_c^0$). In Coulomb gauge, the one-body
operators may be written:
\begin{equation}
a_1 Q_* \vec{\varepsilon}_m \cdot \vec{A} \,\,\, ,
\end{equation}
\begin{equation}
i b_1 Q_* \vec{\varepsilon}_m \cdot \vec{\nabla}
(\vec{\sigma}_*\cdot\vec{\nabla}
\times \vec{A}) \,\,\, ,
\end{equation}
\begin{equation}
i b_2 Q_* \vec{\sigma}_*\cdot \vec{\nabla}(\vec{\varepsilon}_m
\cdot \vec{\nabla}
\times \vec{A}) \,\,\, ,
\end{equation}
where the quark charge $Q$ is a matrix in SU($3$) flavor space,
$Q=$diag$(2/3,-1/3,-1/3)$, $\vec{\varepsilon}_m$ is the polarization
of the orbitally-excited quark, and an asterisk indicates that a given
spin or flavor matrix acts only on the excited quark line. We assume that the
derivatives in these operators are suppressed by the scale $\Lambda_{QCD}$,
which we have left implicit, for notational convenience. These operators
are in one-to-one correspondence with the three operators included in
conventional quark model analyses \cite{close}; the precise relationship is
given in Ref.~\cite{CC}. In addition to the operators above, a number of
potentially coherent two-body operators were included in the analysis.
The fits presented in Ref.~\cite{CC} demonstrated that the one-body
operators provide a reasonable description of the experimental data, with
only one of the two-body operators yielding a significant reduction in
the $\chi^2$ of the fit. Therefore, we include this two-body operator in
the present analysis,
\begin{equation}
c_3 (\sum_{\alpha \neq *} Q_\alpha \vec{\sigma}_\alpha )\cdot \vec{\sigma}_*
(\vec{\varepsilon}_m \cdot \vec{A}) \,\,\, ,
\end{equation}
where we have used the same labeling for undetermined coefficients as
in Ref.~\cite{CC}. Note that this operator has exactly the type of
spin-flavor sum that may be coherent on a large-$N_c$ baryon state.
The helicity amplitudes that we consider are defined by
\[
A_{-1/2} = K \, \xi \, \langle B^*, \,\, s_z=-\frac{1}{2} \,\, | H_{int}
| \,\, \gamma ,\,\, \epsilon_{+1}; \,\,B,\,\, s_z=-\frac{3}{2}\rangle
\]\[
A_{1/2} = K \, \xi \, \langle B^*, \,\, s_z=\frac{1}{2} \,\, | H_{int}
| \,\, \gamma ,\,\, \epsilon_{+1}; \,\,B,\,\, s_z=-\frac{1}{2}\rangle
\]\[
A_{3/2} = K \, \xi \, \langle B^*, \,\, s_z=\frac{3}{2} \,\, | H_{int}
|\,\, \gamma ,\,\, \epsilon_{+1}; \,\,B,\,\, s_z=\frac{1}{2}\rangle
\]\begin{equation}
A_{5/2} = K \, \xi \, \langle B^*, \,\, s_z=\frac{5}{2} \,\, | H_{int}
|\,\, \gamma ,\,\, \epsilon_{+1}; \,\,B,\,\, s_z=\frac{3}{2}\rangle \,\,\, ,
\label{eq:hampsdef}
\end{equation}
where the baryon states are relativistically normalized to $E/M_B$
particles per unit volume. Above, $s_z$ is the $z$-component of the spin in the
$B^*$ rest frame, where $B$ represents either an $N$ or $\Delta$; $K$ is a
kinematical factor given by $[4\pi\alpha m_{B^*}/(m^2_{B^*} - m^2_B)]^{1/2}$.
The factor $\xi$ is the sign of the $\pi B B^*$ vertex that would appear in the
tree-level contribution to $\pi B \rightarrow B \gamma$; this renders our
sign conventions consistent with our previous work \cite{CC}. Compared
to the $B^*\rightarrow N \gamma$ decays, two additional helicity amplitudes,
$A_{-1/2}$ and $A_{5/2}$, are required to compute physical observables. The
dependence of various differential decay widths on the $A_\lambda$ are given in
the following section.
Compared to our previous study of the $B^*\rightarrow N\gamma$ decays, the
computation involved in the current work is identical, except that (1)
we replace the spin-flavor wave function of the final state nucleon by that
of a $\Delta$, and (2) we compute one or two new matrix elements for each
decay. The results we obtain for the $A_\lambda$ by evaluating the matrix
elements of our four operators are presented in Table~\ref{table1}, for
$N_c=3$. There, $B_J^*$ and ${B_J^*}'$ represent baryon states with total
spin $J$ and total quark spin $1/2$ and $3/2$, respectively.
The physical baryon states, however, are not eigenstates of the total
quark spin. Two mixing angles are necessary to specify the $s=1/2$ and
$s=3/2$ nucleon mass eigenstates. We define
\begin{equation}
\left[\begin{array}{c} N(1535) \\ N(1650) \end{array} \right] =
\left[\begin{array}{cc} \cos\theta_{N1} & \sin\theta_{N1} \\
-\sin\theta_{N1} & \cos\theta_{N1} \end{array}\right]
\left[\begin{array}{c} N^*_{1/2} \\ {N^*}'_{1/2}\end{array} \right]
\end{equation}
and
\begin{equation}
\left[\begin{array}{c} N(1520) \\ N(1700) \end{array} \right] =
\left[\begin{array}{cc} \cos\theta_{N3} & \sin\theta_{N3} \\
-\sin\theta_{N3} & \cos\theta_{N3} \end{array}\right]
\left[\begin{array}{c} N^*_{3/2} \\ {N^*}'_{3/2}\end{array} \right] \,\,\, ,
\label{eq:tpt}
\end{equation}
as in Refs.~\cite{CGKM,CC}. Using fit values for the operator coefficients
and mixing angles, we can make numerical predictions for the $A_\lambda$, for
each physical baryon state.
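Because the helicity amplitudes are linear in the baryon states, the amplitudes of the mass eigenstates follow from the same two-by-two rotation applied to the quark-spin eigenstate amplitudes. A minimal sketch, with an illustrative function name:

```python
import math

def mass_eigenstate_amplitudes(theta, A_s, A_sp):
    """Rotate amplitudes for quark-spin eigenstates into mass eigenstates.

    With the mixing convention above (e.g. theta = theta_N1 for the
    J = 1/2 nucleons):
      A(lighter state)  =  cos(theta) * A_s + sin(theta) * A_sp
      A(heavier state)  = -sin(theta) * A_s + cos(theta) * A_sp
    where A_s and A_sp are the amplitudes for the s = 1/2 and s = 3/2
    quark-spin states, respectively.
    """
    c, s = math.cos(theta), math.sin(theta)
    return (c * A_s + s * A_sp, -s * A_s + c * A_sp)
```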
Our predictions are first presented in algebraic form in
Table~\ref{table1}, in the limit of no mixing. As in Ref.~\cite{CC}, we
absorb any factors of momentum/$\Lambda_{QCD}$ that appear in the operators
into our definitions of the fit coefficients; at leading order in $1/N_c$,
these factors are multiplicative constants over the entire baryon multiplet.
This redefinition is equivalent to replacing derivatives by unit vectors
$\hat{k}$. We present two sets of numerical predictions, corresponding
to (i) a fit that includes only the one-body operators, and (ii) a fit that
includes the one-body operators and the operator $c_3$. These are
shown in Tables~\ref{table2} and \ref{table3}, respectively. While
$c_3$ and $a_1$ appear in the same combination for the
$\Delta\gamma$ decays, the same is not true for the $N\gamma$ decays, and
thus the numerical fits are different. Note that
the experimentally measured masses are used in evaluating the kinematical
factor $K$. Our predictions correspond approximately to the fits presented
in Tables II and III of Ref.~\cite{CC}. There, the one-body fit treated
the mixing angles as free parameters, while the fit that included the operator
$c_3$ held the mixing angles fixed at the values determined from a
large-$N_c$ analysis of the decays $B^* \rightarrow B \pi$ \cite{CGKM}. The
mixing angles in both cases agreed within errors. For the sake of
consistency, we present all our predictions with the mixing angles
set to the values given in Ref.~\cite{CGKM}: $\theta_{N1}=0.61\pm 0.09$ and
$\theta_{N3}=3.04\pm 0.15$.
Our two sets of predictions are qualitatively similar. There are
a number of cases where the central values of amplitudes are noticeably
shifted by the inclusion of the operator $c_3$. However, taking into
account the uncertainties in the fit parameters, these differences are 2-4
standard deviation effects. It is worth pointing out that new experimental
data will lead to a reduction in the errors of our fit parameters, and hence
to a clearer distinction between the predictions with and without the
two-body operator effects. Nonetheless, Table~\ref{table2} contains our
most reliable predictions given the present data. We now consider how
these amplitudes may be extracted from experiment.
\begin{table}[ht]
\begin{center}
\begin{tabular}{lcccc}
\\ \hline\hline
& $\Delta_{1/2}^{*+}$ & $\Delta_{1/2}^{*0}$ & $p^*_{1/2}$ & ${p^*}'_{1/2}$
\\ \hline
$\tilde{A}_{-1/2}$
& $\frac{2}{3\sqrt{3}}b_1$ & 0 & $-\frac{2}{3\sqrt{3}} b_1$ &
$-\frac{1}{3\sqrt{3}}(3 a_1 - 2 b_1 +3 b_2 - 3 c_3)$ \\
$\tilde{A}_{1/2}$ & $-\frac{2}{9}(b_1+2 b_2)$ & 0
& $\frac{2}{9}(b_1+2 b_2)$ &
$-\frac{1}{9}(3 a_1-4 b_1 + b_2 - 3 c_3)$ \\ \hline
\end{tabular}
\begin{tabular}{lcccc}
\\ \hline\hline
& $\Delta_{3/2}^{*+}$ & $\Delta_{3/2}^{*0}$ & $p^*_{3/2}$ & ${p^*}'_{3/2}$
\\ \hline
$\tilde{A}_{-1/2}$ & $\frac{2\sqrt{2}}{3\sqrt{3}} b_1$
& 0 & $-\frac{2\sqrt{2}}{3\sqrt{3}}b_1$
& $\frac{2}{3\sqrt{15}}(3 a_1 + b_1 + 3 b_2 - 3 c_3)$ \\
$\tilde{A}_{1/2}$ & $\frac{2\sqrt{2}}{9}(b_1-b_2)$
& 0 & $-\frac{2\sqrt{2}}{9}(b_1-b_2)$
& $\frac{4}{9\sqrt{5}}(3 a_1 - b_1 + b_2 - 3 c_3)$ \\
$\tilde{A}_{3/2}$ & $-\frac{2\sqrt{2}}{3\sqrt{3}} b_2$
& 0 & $\frac{2\sqrt{2}}{3\sqrt{3}}b_2$ & $\frac{2}{3\sqrt{15}}
(3 a_1 - 3 b_1 -b_2 - 3 c_3)$ \\ \hline
\end{tabular}
\begin{tabular}{lc}
& ${p^*}'_{5/2}$ \\\hline\hline
$\tilde{A}_{-1/2}$ & $-\frac{1}{\sqrt{15}}(a_1 + 2 b_1 + b_2 - c_3)$ \\
$\tilde{A}_{1/2}$ & $- \frac{1}{3\sqrt{5}} (3 a_1 + 4 b_1 + b_2 - 3 c_3 )$ \\
$\tilde{A}_{3/2}$
& $-\frac{\sqrt{2}}{3\sqrt{5}} (3 a_1 + 2 b_1 - b_2 - 3 c_3)$ \\
$\tilde{A}_{5/2}$ & $-\frac{\sqrt{2}}{\sqrt{3}}(a_1-b_2 - c_3)$ \\ \hline
\end{tabular}
\caption{Helicity amplitude predictions in terms of the operator
coefficients $a_1$, $b_1$, $b_2$ and $c_3$, in the case of no mixing.
Here $\tilde{A}_\lambda$ is defined by $A_\lambda = K\xi \tilde{A}_\lambda$.
Amplitudes related to these by isospin have not been displayed.}
\label{table1}
\end{center}
\end{table}
\begin{table}[ht]
\begin{tabular}{lcccc}
\\ \hline\hline
& $A_{-1/2}$ & $A_{1/2}$ & $A_{3/2}$ & $A_{5/2}$ \\ \hline
$\Delta^+(1620)$ & $-0.042\pm 0.005$ & $0.073\pm 0.006$ & - & - \\
$\Delta^0(1620)$ & $0$ & $0$ & - & - \\
$\Delta^+(1700)$ & $-0.054\pm 0.007$ & $0.000\pm 0.005$ & $0.055\pm 0.006$ & - \\
$\Delta^0(1700)$ & $0$ & $0$ & $0$ & - \\
$p(1535)$ & $0.108\pm 0.010$ & $0.004\pm 0.006$ & - & - \\
$p(1650)$ & $-0.063\pm 0.007$ & $-0.128\pm 0.007$ & - & - \\
$p(1520)$ & $0.062\pm 0.009$ & $-0.016\pm 0.006$ & $-0.090\pm 0.008$
& - \\
$p(1700)$ & $0.043\pm 0.008$ & $0.123\pm 0.007$ & $0.169\pm 0.008$
& - \\
$p(1675)$ & $0.024\pm 0.008$ & $-0.019\pm 0.009$ & $-0.113\pm 0.009$
& $-0.258\pm0.012$ \\ \hline
\end{tabular}
\caption{Helicity amplitude predictions, in GeV$^{-1/2}$, using parameter
values from a one-body operator fit, approximately that of Table~II in
Ref.~\protect\cite{CC} (see the text): $a_1=0.615\pm 0.028$,
$b_1=-0.295\pm 0.038$, $b_2=-0.299\pm 0.032$, $\theta_{N1}=0.61$ (fixed),
$\theta_{N3}=3.04$ (fixed).}
\label{table2}
\end{table}
\begin{table}[ht]
\begin{tabular}{lcccc}
\\ \hline\hline
& $A_{-1/2}$ & $A_{1/2}$ & $A_{3/2}$ & $A_{5/2}$ \\ \hline
$\Delta^+(1620)$ & $-0.042\pm 0.005$ & $0.074\pm 0.006$ & - & - \\
$\Delta^0(1620)$ & $0$ & $0$ & - & - \\
$\Delta^+(1700)$ & $-0.055\pm 0.007$ & $0.001\pm 0.005$ & $0.057\pm 0.006$ & - \\
$\Delta^0(1700)$ & $0$ & $0$ & $0$ & - \\
$p(1535)$ & $0.144\pm 0.013$ & $0.024\pm 0.008$ & - & - \\
$p(1650)$ & $-0.107\pm 0.012$ & $-0.156\pm 0.007$ & - & - \\
$p(1520)$ & $0.057\pm 0.009$ & $-0.024\pm 0.006$ & $-0.098\pm 0.008$
& - \\
$p(1700)$ & $0.089\pm 0.013$ & $0.177\pm 0.013$ & $0.218\pm 0.013$
& - \\
$p(1675)$ & $0.002\pm 0.009$ & $-0.060\pm 0.013$ & $-0.173\pm 0.015$
& $-0.337\pm0.020$ \\ \hline
\end{tabular}
\caption{Helicity amplitude predictions, in GeV$^{-1/2}$, using parameter
values from the four parameter fit given in Table~III of
Ref.~\protect\cite{CC}: $a_1=0.816\pm 0.061$, $b_1=-0.299\pm 0.038$,
$b_2=-0.308\pm 0.032$, $c_3=-0.072\pm 0.020$, $\theta_{N1}=0.61$ (fixed),
$\theta_{N3}=3.04$ (fixed).}
\label{table3}
\end{table}
\section{Cross section formulas}
The total decay rate for $B^* \rightarrow \Delta + \gamma$ is
\begin{equation}
\Gamma_\gamma = {k^2 \over \pi} {2m_\Delta \over (2J+1) m_{B^*}}
\sum_{\lambda=-1/2}^{\lambda=J}
\left| A_\lambda \right|^2 ,
\end{equation}
where $k$ is the momentum of the outgoing photon in the
excited baryon rest frame. The formula is, of course, the
same as the analogous expression for $B^* \rightarrow N \gamma$, except
for the substitution of $m_\Delta$ for $m_N$ in the numerator and the
expanded range of $\lambda$.
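As a quick numerical illustration, the width formula can be coded directly. In the sketch below the amplitudes are the $p(1675)$ entries of Table~\ref{table2}; the masses and the resulting two-body photon momentum $k$ are rounded, illustrative inputs, so the printed width is indicative only.

```python
import math

# Gamma = (k^2 / pi) * 2 m_Delta / ((2J+1) m_B*) * sum_lambda |A_lambda|^2
# (GeV units; helicity amplitudes in GeV^{-1/2}).
def gamma_total(k, m_delta, m_bstar, J, amps):
    return (k**2 / math.pi) * 2.0 * m_delta / ((2 * J + 1) * m_bstar) \
        * sum(a * a for a in amps)

# p(1675), J = 5/2; k is the photon momentum in the B* rest frame.
m_bstar, m_delta = 1.675, 1.232
k = (m_bstar**2 - m_delta**2) / (2.0 * m_bstar)
width = gamma_total(k, m_delta, m_bstar, 2.5, [0.024, -0.019, -0.113, -0.258])
print(f"k = {k:.3f} GeV, Gamma_gamma = {1e3 * width:.2f} MeV")
```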
The helicity amplitudes $A_\lambda$ can be found
separately with more detailed measurement. We will record
some of the relevant formulas. Our goal will be to show
explicitly that a set of measurements can lead to separation
of the $|A_\lambda|$, rather than to do an exhaustive analysis
of, for example, interference with the non-resonant background.
The excited baryon may be produced in a photonic or pionic
reaction,
\begin{eqnarray}
\gamma + N \rightarrow B_J^* \rightarrow \Delta + \gamma ,
\nonumber \\
\pi + N \rightarrow B_J^* \rightarrow \Delta + \gamma ,
\end{eqnarray}
and, in either case, the $B_J^*$ will be tensor polarized, at
least for $J \ne 1/2$. For the case of the pionic reaction,
the pion brings in neither helicity nor angular momentum
projection along its direction of motion, so the helicity of
the excited baryon can only be $\pm 1/2$. In the photonic or
Compton reaction, the initial state can have total helicity
$\pm 1/2$ and $\pm 3/2$, which is reflected in the
possibilities available to the excited baryon. The
probabilities of finding the differing helicities in the excited
baryon are, however, not the same but are given by
\begin{eqnarray}
p_{1/2} &=& { | A_{1/2}(\gamma N \rightarrow B_J^*) |^2
\over
| A_{1/2}(\gamma N \rightarrow B_J^*) |^2 +
| A_{3/2}(\gamma N \rightarrow B_J^*) |^2} ,
\nonumber \\[1.2ex]
p_{3/2} &=& { | A_{3/2}(\gamma N \rightarrow B_J^*) |^2
\over
| A_{1/2}(\gamma N \rightarrow B_J^*) |^2 +
| A_{3/2}(\gamma N \rightarrow B_J^*) |^2} ,
\end{eqnarray}
for helicity magnitude $|\lambda_{B^*}| = 1/2$ and $3/2$,
respectively. We will suppose that the helicity amplitudes
for
$\gamma N \rightarrow B_J^*$ are well measured, so that the
numbers $p_\lambda$ are known.
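Given measured photocouplings, the weights follow immediately; this sketch uses hypothetical input values (not fit results) merely to exhibit the normalization $p_{1/2}+p_{3/2}=1$.

```python
def polarization_weights(a12, a32):
    """Tensor-polarization weights p_{1/2}, p_{3/2} for gamma N -> B*_J."""
    norm = a12 * a12 + a32 * a32
    return a12 * a12 / norm, a32 * a32 / norm

# Hypothetical photocoupling values, for illustration only:
p12, p32 = polarization_weights(0.062, -0.090)
print(f"p_1/2 = {p12:.3f}, p_3/2 = {p32:.3f}")
```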
Because of the tensor polarization, the excited baryon does
not decay isotropically. The angular distribution is given by
\begin{eqnarray}
{d\Gamma_\gamma \over d\Omega_\gamma} &&
(B_J^* \rightarrow \gamma \Delta) =
{k^2\over 4 \pi^2} {m_\Delta \over m_{B^*}}
\nonumber \\
&& \sum_{\lambda = -1/2}^{\lambda = J}
|A_\lambda|^2
\left\{ p_{1/2} \left[ |d^J_{1/2,\lambda}|^2 +
|d^J_{-1/2,\lambda}|^2 \right]
+
p_{3/2} \left[ |d^J_{3/2,\lambda}|^2 +
|d^J_{-3/2,\lambda}|^2 \right]
\right\} ,
\end{eqnarray}
where parity invariance has been used. The
$d^J_{M,\lambda} = d^J_{M,\lambda}(\theta_\gamma)$ are
elements of a matrix representation of rotations \cite{edmunds},
and $\theta_\gamma$ is the angle between the outgoing
photon and the incoming photon or pion in the rest frame of
the excited baryon (see Fig.~\ref{2stepdecay}). Any
$A_\lambda$ not further specified is for $B^* \rightarrow
\gamma \Delta$. If the excited baryon is produced in the
pionic reaction, the above formula is valid with $p_{1/2}$
set to unity and $p_{3/2}$ set to zero.
\begin{figure}
\centerline{ \epsfxsize=4.5 in \epsfbox{2stepdecay.eps} }
\vglue .3 in
\caption{Production and decay of an excited baryon $B^*$ in
its own rest frame. The angles $\theta_\gamma$,
$\phi$, and $\theta_N$ are indicated. The $\theta_N$ used in
the text is the corresponding angle after boosting to the rest frame of the
$\Delta$.}
\label{2stepdecay}
\end{figure}
Using the above expression, one can separate $|A_{3/2}|$ and
$|A_{5/2}|$. However, $|A_{-1/2}|^2$ and $|A_{1/2}|^2$ are
multiplied by the same kinematic factor, as one can see by
substituting $\lambda = \pm 1/2$ and using the symmetry of
the $d$-functions in the lower two indices. Hence, some
further measurement is needed to separate them.
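This degeneracy is easy to verify numerically. The sketch below hard-codes the $J=3/2$ Wigner $d$-functions that enter (in one common sign convention, which is immaterial since only squares appear) and checks that the kinematic weight multiplying $|A_{1/2}|^2$ equals that multiplying $|A_{-1/2}|^2$ at an arbitrary angle.

```python
import math

def d32(mp, m, th):
    # J = 3/2 Wigner d-functions needed below; signs follow one common
    # convention, irrelevant here because only |d|^2 enters.
    c, s = math.cos(th / 2), math.sin(th / 2)
    table = {
        (1.5, 0.5): -math.sqrt(3) * s * c * c,
        (1.5, -0.5): math.sqrt(3) * s * s * c,
        (0.5, 0.5): 0.5 * (3 * math.cos(th) - 1) * c,
        (0.5, -0.5): -0.5 * (3 * math.cos(th) + 1) * s,
    }
    # |d_{-m',-m}| = |d_{m',m}|, which suffices for squared magnitudes
    return table[(mp, m)] if (mp, m) in table else table[(-mp, -m)]

def weight(lam, th, p12, p32):
    # Bracketed factor multiplying |A_lambda|^2 in the angular distribution
    return (p12 * (d32(0.5, lam, th) ** 2 + d32(-0.5, lam, th) ** 2)
            + p32 * (d32(1.5, lam, th) ** 2 + d32(-1.5, lam, th) ** 2))
```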
Recall that the $\Delta$ decays dominantly into $N\pi$, so
the full reaction is
\begin{eqnarray}
(\pi {\rm\ or\ } \gamma) + N \rightarrow B^* \rightarrow
\gamma
+ \Delta
\rightarrow \gamma + N + \pi .
\end{eqnarray}
\noindent The angular distribution of the Delta decay
involves two more angles. The decays of the~$B^*$ and
of the $\Delta$ each define a plane, and the angle between
them is the azimuthal angle $\phi$. There is also a polar
angle
$\theta_N$, defined in the $\Delta$ rest
frame as the angle between the emerging $N$ and the $\Delta$
helicity axis inherited from the $B^*$ rest frame. The
angular distribution of the
$\Delta$ decay depends on its helicity. For example, if the
Delta has a definite helicity
$\lambda_\Delta$, the decay distribution is
proportional to
\begin{eqnarray}
{1\over 2} \left(1 + s_\lambda P_2 (\cos \theta_N) \right)
\end{eqnarray}
\noindent where $P_\ell$ is a Legendre polynomial, and
\begin{eqnarray}
s_\lambda = (-1)^{|\lambda_\Delta|-1/2}
= \left\{
\begin{array}{ll}
+1, & \lambda = 1/2,3/2 \\
-1, & \lambda = -1/2,5/2
\end{array}
\right. .
\end{eqnarray}
\noindent The last part follows using
$\lambda = 1 - \lambda_\Delta$. All four helicity
amplitudes can be separated if one measures the angular
distributions of the outgoing $N$ (or outgoing $\pi$) in
addition to that of the outgoing $\gamma$.
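The sign pattern of $s_\lambda$ follows mechanically from $\lambda_\Delta = 1 - \lambda$; a minimal sketch:

```python
def s_lambda(lam):
    # s = (-1)^{|lambda_Delta| - 1/2}, with lambda_Delta = 1 - lambda
    lam_delta = 1.0 - lam
    return (-1) ** round(abs(lam_delta) - 0.5)

signs = {lam: s_lambda(lam) for lam in (-0.5, 0.5, 1.5, 2.5)}
print(signs)
```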
We will give the formulas for the double differential
cross sections, including an explicit evaluation of the
$d$-functions, separately for $J = 1/2$,
$3/2$, and $5/2$:
\begin{eqnarray}
{d\Gamma_\gamma \left(B^*_{1/2}\right) \over
d \cos\theta_\gamma\, d\cos\theta_N } =
{k^2 \over 4 \pi} {m_\Delta \over m_{B^*}}
\left\{
\left| A_{-1/2} \right|^2
\left( 1 - P_2 (\cos\theta_N) \right)
+ \left| A_{ 1/2} \right|^2
\left( 1 + P_2 (\cos\theta_N) \right)
\right\} ,
\nonumber
\\
\end{eqnarray}
\begin{eqnarray}
{d\Gamma_\gamma \left(B^*_{3/2}\right) \over
d\cos\theta_\gamma\, d\cos\theta_N } =
{k^2 \over 4 \pi} {m_\Delta \over 2 m_{B^*}}
&\Bigg\{&
\left| A_{-1/2} \right|^2
\left( 1 +
[1 - 2p_{3/2}] P_2 (\cos\theta_\gamma) \right)
\left( 1 - P_2 (\cos\theta_N) \right)
\nonumber \\
&+& \left| A_{ 1/2} \right|^2
\left( 1 +
[1 - 2p_{3/2}] P_2 (\cos\theta_\gamma) \right)
\left( 1 + P_2 (\cos\theta_N) \right)
\nonumber \\
&+& \left| A_{ 3/2} \right|^2
\left( 1 -
[1 - 2p_{3/2}] P_2 (\cos\theta_\gamma) \right)
\left( 1 + P_2 (\cos\theta_N) \right) \Bigg\},
\nonumber \\
& &
\end{eqnarray}
\noindent and
\begin{eqnarray}
&& {d\Gamma_\gamma \left(B^*_{5/2}\right) \over
d\cos\theta_\gamma\, d\cos\theta_N } =
{k^2 \over 4 \pi} {m_\Delta \over 3 m_{B^*}} \times
\nonumber \\
&&\quad \Bigg\{
\left| A_{-1/2} \right|^2
\left( 1 + {8\over 7}
[1-{3\over 4}p_{3/2}] P_2 (\cos\theta_\gamma)
+ {6\over 7}
[1-{5\over 2}p_{3/2}] P_4 (\cos\theta_\gamma)
\right)
\left( 1 - P_2 (\cos\theta_N) \right)
\nonumber \\
&&\quad + \left| A_{ 1/2} \right|^2
\left( 1 + {8\over 7}
[1-{3\over 4}p_{3/2}] P_2 (\cos\theta_\gamma)
+ {6\over 7}
[1-{5\over 2}p_{3/2}] P_4 (\cos\theta_\gamma)
\right)
\left( 1 + P_2 (\cos\theta_N) \right)
\nonumber \\
&&\quad + \left| A_{ 3/2} \right|^2
\left( 1 + {2\over 7}
[1-{3\over 4}p_{3/2}] P_2 (\cos\theta_\gamma)
- {9\over 7}
[1-{5\over 2}p_{3/2}] P_4 (\cos\theta_\gamma) \right)
\left( 1 + P_2 (\cos\theta_N) \right)
\nonumber \\
& &\quad + \left| A_{ 5/2} \right|^2
\left( 1 - {10\over 7}
[1-{3\over 4}p_{3/2}] P_2 (\cos\theta_\gamma)
+ {3\over 7}
[1-{5\over 2}p_{3/2}] P_4 (\cos\theta_\gamma) \right)
\left( 1 - P_2 (\cos\theta_N) \right) \Bigg\} .
\nonumber \\
& &
\end{eqnarray}
\noindent Note that we have integrated over the azimuthal angle $\phi$
in deriving these decay widths. We see that we can now extract the magnitude
of each of the helicity amplitudes separately, the goal stated at the
beginning of this section. It is worth pointing out that we gain some,
but not complete, information on the relative signs of the amplitudes by
including the azimuthal angle dependence. To determine the remaining signs
requires a more detailed analysis, including for example polarizations
and/or interference with the nonresonant background; we will consider these
issues elsewhere. Again, to use these formulas for the pion induced
reaction, set $p_{3/2}=0$.
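As a consistency check on these expressions, integrating the $J=3/2$ double-differential width over both angles should recover the total width formula given at the start of this section, since the $P_2$ terms integrate to zero on $[-1,1]$. The sketch below does this with a midpoint rule, using arbitrary illustrative amplitudes and all masses and $k$ set to unity.

```python
import math

def P2(x):
    return 0.5 * (3 * x * x - 1)

def d2gamma_32(cg, cn, amps, p32, k=1.0, m_delta=1.0, m_bstar=1.0):
    # J = 3/2 double-differential width; t = 1 - 2 p_{3/2}
    a_m12, a_12, a_32 = amps
    t = 1.0 - 2.0 * p32
    pref = k * k / (4 * math.pi) * m_delta / (2 * m_bstar)
    return pref * (a_m12**2 * (1 + t * P2(cg)) * (1 - P2(cn))
                   + a_12**2 * (1 + t * P2(cg)) * (1 + P2(cn))
                   + a_32**2 * (1 - t * P2(cg)) * (1 + P2(cn)))

# Midpoint-rule integration over cos(theta_gamma) and cos(theta_N) in [-1, 1]
n = 200
xs = [-1 + (i + 0.5) * 2 / n for i in range(n)]
amps, p32 = (0.3, -0.2, 0.5), 0.4
total = sum(d2gamma_32(cg, cn, amps, p32) for cg in xs for cn in xs) * (2 / n)**2
expected = (1.0 / math.pi) * (2.0 / 4.0) * sum(a * a for a in amps)  # 2J+1 = 4
print(total, expected)
```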
\section{Discussion}
The advent of new experimental facilities makes the
measurement of excited baryon decay into $\Delta(1232) +
\gamma$ a real possibility. In this paper, we have
considered decays of the {\bf 70}-plet. There are 24 new
measurable amplitudes for {\bf 70} $\rightarrow \Delta
\gamma$, not counting amplitudes that are related using isospin
invariance.
Until now, the corresponding data on decays into $N \gamma$
has been obtained using time reversal invariance and
photoproduction. This possibility does not exist for
$\Delta \gamma$ decays, and the lack of data appears to have
engendered a paucity of theoretical study. However,
measurements of the {\bf 70} $\rightarrow \Delta \gamma$ are
interesting for several specific reasons.
First, they are a test of the SU(6) symmetry that
arises in the large-$N_c$ limit for baryons of low spin within
any given multiplet. The $\Delta$ and the nucleon are both members
of the {\bf 56}-plet, and thus the {\bf 70} $\rightarrow \Delta \gamma$
amplitudes are predictable in terms of the same SU(6)-breaking parameters
that determine the {\bf 70} $\rightarrow N \gamma$ decays. Considering
the one-body operators, there are three such parameters, and the fit
to the 19 $N \gamma$ decays is reasonably good. If one assumes, as in
the quark model, that only one-body decay operators are relevant,
then the predictions for the $\Delta \gamma$ decays given in this paper
provide 24 new opportunities to verify or vilify SU(6). Our predictions
also provide a means of discerning effects of the most important two-body
decay operators, those involving coherent sums over the large-$N_c$
baryon states.
Second, the {\bf 70} $\rightarrow \Delta \gamma$ decays allow us to
test the assumption that two-body operators proportional to
quark spin sums have matrix elements that are incoherent.
Since the $\Delta$ spin is larger than that of the nucleon, it is
possible that these subleading effects are not sufficiently suppressed
for $N_c=3$ to justify the large-$N_c$ operator analysis. Then we might
encounter large corrections not present in the $N\gamma$ decays. Such
large multibody operator effects would lead to significant deviations
from the predictions presented here, as well as a noticeable
breakdown of the naive quark model.
\begin{center}
{\bf Acknowledgments}
\end{center}
We thank David Armstrong, Nathan Isgur, and Ron Workman for useful
comments. CDC thanks the National Science Foundation for support under
grant PHY-9800741. CEC thanks the NSF for support under grant PHY-9600415.
\section{Introduction}
\begin{figure}
\includegraphics[width=1.0\columnwidth]{growthSequence_bubbletraj.pdf}
\caption{(A) A series of transmission electron microscopy images of a bubble growing under high, non-uniform confinement within a liquid cell, FIG. S1 and movie S1. The bubble formation occurred while imaging an aqueous solution of gold nanorods that contained a trace amount of the surfactant cetrimonium bromide (CTAB) with TEM at 300 keV, beam current $\sim$1--10 nA, and beam radius $\sim$2 $\mu$m. The bubble grows anisotropically and in the last frame detaches from the nucleation site (dashed white circle); the growth of this bubble is compared to theory in FIG. \ref{fig:compareshapes}. (B) Normalized intensity (contours) with bubble trajectories (data points), nucleation site (dashed ellipse), and approximate location of bubble breakup (solid line).}\label{fig:growthsequence}
\end{figure}
The motion and shape of droplets and bubbles in confined geometries have attracted considerable attention in the scientific community due to their importance in industrial processes, multi-phase flows through porous media, and, recently, microfluidic devices. Typically at the macroscale, pressure-driven flow or buoyancy drives the motion of bubbles and fluid-fluid interfaces. G. I. Taylor and Saffman \cite{Saffman1958,Taylor1959} and Tanveer and Saffman \cite{Tanveer1987} studied the migration of droplets in Hele-Shaw cells. Matched asymptotic methods have subsequently been used to elucidate the geometry of gas bubbles in cylindrical tubes \cite{Bretherton1961} and in Hele-Shaw cells when the continuous phase either perfectly wets the surface \cite{Park1984} or when contact line dynamics are at play \cite{Weinstein1990}.
In the absence of applied pressure gradients and when drops are small such that the Bond number is much less than unity, gradients in substrate elasticity \cite{Style2012,Style2013}, chemistry \cite{Chaudhury1992, Darhuber2005}, temperature, and electric field, among others, can spontaneously drive the motion of contact lines and interfaces. Similarly, geometric gradients can induce capillary forces and promote transport of drops/bubbles confined in tapered capillaries \cite{Renvoise2009} and residing on wires with varying cross-sections \cite{Hanumanthu2006, Lorenceau2004}. To minimize their surface energy, wetting drops seek confinement while non-wetting drops and bubbles avoid it. The dynamics of bubbles and droplets in tapered Hele-Shaw devices have been modeled and compared to experiment \cite{Jenson2014, Metz2009, Reyssat2014}. In these cases a dynamically created film wets the substrate. Furthermore, the mass of the bubble or drop is fixed and so the process is purely relaxational. Droplets and bubbles begin in a non-equilibrium configuration and transport to their equilibrium positions, where they can exist as spheres (for completely wetting substrates) or spherical sections (for partially wetting substrates) that satisfy the equilibrium contact angle \cite{Langbein2002}. Laplace pressure gradients drive the disperse phase and hydrodynamics either in the bulk or at the contact line provide dissipation, setting the time scale for the process.
In recent years, there has been considerable interest in surface-bound nanobubbles and their stability \cite{Alheshibri2016, Ball2012, Craig2011, Fang2016}. Understanding such bubbles has the potential to improve processes such as acoustic surface scrubbing \cite{Liu2008, Liu2009, Yang2011} and the design of surface textures that optimize the onset of nucleate boiling to enhance heat transfer \cite{Dong2014,Fazeli2015,Chu2012}. Free nanobubbles can also be used as contrast agents in ultrasonic imaging \cite{Cai2015}, and so processes which reliably produce nanobubbles could have applications in medical imaging as well. Due to the extremely high Laplace pressure of free bubbles, mass transfer-driven dissolution can dominate the dynamics when the bubbles are below their critical radius. As the radius of curvature $R$ decreases, the internal pressure $P$ and the equilibrium surface concentration $C_\text{s}$ rise, the latter in accordance with Henry's law, $C_\text{s}\propto P\propto R^{-1}$. This positive feedback results in rapid dissolution; a bubble 100 nm in radius will dissolve in $\sim$100 $\mu$s \cite{Epstein1950, Ljunggren1997}. In contrast to this prediction, surface-bound nanobubbles have been observed to persist for many hours \cite{Zhang2008, Seddon2011, Sun2016}. Researchers have proposed various mechanisms for this anomalous behavior, including a perpetual dynamic equilibrium \cite{Petsev2013,Seddon2011}, the stabilizing effect of organic contaminants \cite{Zhang2012}, and contact line pinning \cite{Weijs2013,Lohse2015,Maheshwari2016}. The constraint of a pinned contact line introduces a negative feedback by forcing the radius of curvature to increase as the mass of the bubble decreases. As the surface concentration of dissolved gases decreases and approaches the bulk concentration, mass transfer slows.
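To put the $C_\text{s}\propto P\propto R^{-1}$ feedback in numbers, the following sketch evaluates the internal pressure of a free spherical bubble from the Young-Laplace relation $\Delta P = 2\gamma/R$; the surface tension is an assumed air-water value.

```python
gamma_lv = 0.072   # N/m, assumed air-water surface tension
p_atm = 1.013e5    # Pa, ambient pressure

def pressure_ratio(radius):
    """(P_atm + 2 gamma / R) / P_atm for a free spherical bubble, R in meters."""
    return (p_atm + 2 * gamma_lv / radius) / p_atm

for radius in (1e-6, 1e-7, 1e-8):
    print(f"R = {radius:.0e} m: P_in/P_atm = {pressure_ratio(radius):.1f}")
```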
Inspired by this simple feedback mechanism between the geometry of a bubble and its \emph{growth}, we re-examine the transport of bubbles in tapered Hele-Shaw cells when mass transfer and contact line dynamics dominate the system behavior. We develop our model around experimental observations of nanobubbles $R\sim 10-100$ nm growing and migrating in tapered Hele-Shaw cells with plate gaps of $\sim$10-100s of nanometers, FIG. \ref{fig:growthsequence} \cite{Grogan2014}. The bubbles grow anomalously slowly when compared to classical mass transfer limited growth theories \cite{Epstein1950} and, despite their small size and velocities (the capillary numbers are in the range $10^{-9}-10^{-7}$ and Bond numbers $10^{-11}-10^{-9}$), are rarely spherical in shape. To explain both observations, we posit that the contact line is slow to relax compared to the rate of mass transfer and dominates bubble shape, growth rate, and translation in a regime where the gas-liquid interface is always at mechanical equilibrium (uniform Laplace pressure). We utilize the Blake-Haynes (BH) mechanism to model contact line movement. When a portion of the contact line is immobilized, teardrop-shaped bubbles result, similar to those observed in our experiments.
\section{Bubble Growth and Migration Model}
Our goal is to model the contact line evolution of a bubble growing in a supersaturated solution confined between two plates diverging at an angle $2\Phi$. FIG. \ref{fig:schematic}A depicts a top view of the bubble. The bubble is symmetric with respect to the $x$-axis. The inner contour (solid line) is the projection of the bubble's contact line with the confining plates onto the $xy$-plane and is defined with the polar coordinate $\rho\left(\psi,t\right)$, where $\psi$ is the azimuthal angle and $t$ is time. The intersection of the bubble's surface with the $xy$-plane is shown as the outer, dashed contour. FIG. \ref{fig:schematic}B depicts the cross-section of the bubble confined in the tapered conduit. The two plates are a distance $2h\left(\mathbf{x},t\right)$ apart. We restrict our analysis to cases when $R\left(t \right) \ll \min\limits_{\psi}\rho\left(\psi,t\right)$. In the limit of zero capillary and Bond numbers, the pressure inside the bubble is nearly uniform and dominated by the smallest radius of curvature $R$. Hence, we take $R$ to be $\psi$-independent. $\theta\left(\psi\right)$ is the dynamic contact angle between the continuous phase (liquid) and the plates. The contact angle may vary as a function of position.
\begin{figure}[h]
\includegraphics[width=\columnwidth]{schematic_3.pdf}
\caption{(A) Schematic of the geometry of our problem, with the bubble shape as seen from above (top) and a cross section of the interface as viewed from the local coordinate system (\emph{i}); the unit vectors of the latter are defined by the local tangent and normal vectors of the bubble's interface at the $xy$-plane; both the slope $\varphi$ and contact angle $\theta$ are local properties and depend on the polar angle $\psi$. (B) Top and cross-section (through the plane \emph{ii}) view of the initial condition used for the model, a sphere with a circular contact line that, when projected onto the $xy$-plane, is an ellipse with major and minor radii $\{a,a^*\}$.}\label{fig:schematic}
\end{figure}
The projection of a contact line point on the $x-y$ plane is given by the two-dimensional vector
\begin{equation}
{\bf{x}}\left( {\psi ,t} \right) = \left\{ {\rho \left( {\psi ,t} \right)\cos \psi + X\left( t \right),\rho \left( {\psi ,t} \right)\sin \psi } \right\}.
\label{eq:contactline}
\end{equation}
In the above, $X$ is the distance of the origin of $\rho$ from the center of the initial $\left(t=0\right)$ bubble geometry, which we will later define to be a sphere (eqn. \ref{eq:RIC}). $2h_0$ is the height of the conduit at $x=0$.
We define local planar coordinates aligned with the normal $\hat{\mathbf{n}}$ and tangent $\hat{\mathbf{t}}$ to the projection of the contact line on the $xy$-plane. Using FIG. \ref{fig:schematic}\emph{i} as a guide, we relate the radius of curvature $R$ to the slope of the wedge, contact angle, and channel height at the contact line in the plane that is both normal to the contact line at position $\mathbf{x}$ and the $xy$- plane.
\begin{equation}
h = R\cos \left( {\theta \left( \psi \right) - \varphi \left( \psi \right)} \right).
\label{eq:localheight1}
\end{equation}
Additionally,
\begin{equation}
h = {h_0} + x\tan \Phi = {h_0} + \left( {X + \rho \cos \psi } \right)\tan \Phi.
\label{eq:localheight2}
\end{equation}
Equating eqn. \ref{eq:localheight1} and \ref{eq:localheight2} and solving for the contact angle $\theta$ gives
\begin{equation}
\theta = {\cos ^{ - 1}}\left[ {\frac{{{h_0} + \left( {X + \rho \cos \psi } \right)\tan \Phi }}{R}} \right] + \varphi.
\label{eq:localcontactanlge}
\end{equation}
The local wedge angle $\varphi\left(\theta\right)$ relates to the global or maximum wedge angle $\Phi$ through
\begin{equation}
\tan \varphi = {\bf{\hat n}} \cdot {{\bf{\hat e}}_x}\tan \Phi
\label{eq:localslopeA}
\end{equation}
When ${\left. {{\bf{\hat n}}} \right|_{\psi = 0}} = {{\bf{\hat e}}_x}$ eqn. \ref{eq:localslopeA} yields ${\left. \varphi \right|_{\psi = 0}} = \Phi $; when ${\left. {{\bf{\hat n}}} \right|_{\psi = \pi /2}} = {{\bf{\hat e}}_y}$ , ${\left. \varphi \right|_{\psi = \pi /2}} = 0$. In radial coordinates, the normal vector to the contact line curve
\begin{equation}
{\bf{\hat n}} = \left\{ {\begin{array}{*{20}{c}}
{\rho \cos \psi + \frac{{\partial \rho }}{{\partial \psi }}\sin \psi }\\
{\rho \sin \psi - \frac{{\partial \rho }}{{\partial \psi }}\cos \psi }
\end{array}} \right\}\mathcal{N}^{-1},
\end{equation}
where
$\mathcal{N}= \sqrt{{\rho ^2} + {{\left( {{\partial \rho }/{{\partial \psi }}} \right)}^2}}$, which follows from
${\bf{\hat n}} = \left( {{d{\bf{\hat t}}}/{d\psi }} \right)/\left| {{d{\bf{\hat t}}}/{d\psi }} \right|$, where
${\bf{\hat t}} = \left( {{d{\bf{x}}}/{d\psi }} \right)/\left| {{d{\bf{x}}}/{d\psi }} \right|$ is the unit
tangent vector. The full equation for the local inclination angle
\begin{equation}
\varphi = \tan^{-1}\left[ {\tan \Phi} \left( {\rho \cos \psi + \frac{{\partial \rho }}{{\partial \psi }}\sin \psi } \right) \mathcal{N}^{-1} \right].
\label{eq:localslopeB}
\end{equation}
The full expression for the local contact angle is therefore given by substituting eqn. \ref{eq:localslopeB} into eqn. \ref{eq:localcontactanlge}. The projection of the linearized Blake-Haynes (BH) velocity \cite{Blake1969, Blake2006} for contact line motion onto the $x-y$ plane gives the normal velocity of the interface
\begin{equation}
\frac{{\partial {\mathbf{x}}}}{{\partial t}} \cdot {\bf{\hat n}} = {U_0}\cos \varphi \left( {\cos \theta - \cos {\theta _0}} \right),
\end{equation}
where
\begin{equation}
\frac{{\partial {\mathbf{x}}}}{{\partial t}} = \left\{ {\frac{{dX}}{{dt}} + \frac{{\partial \rho }}{{\partial t}}\cos \psi ,\frac{{\partial \rho }}{{\partial t}}\sin \psi } \right\}.
\label{eq:BH}
\end{equation}
$U_0=\gamma/\eta_{\text{CL}}$ is the contact line's velocity scale defined in terms of the contact line viscosity $\eta_{\text{CL}}$ and the surface tension of the gas-liquid interface $\gamma$. We assume that the continuous phase (water) only partially wets the solid substrate (silicon nitride) ($\theta_0>0$). Previously, the BH equation has been successfully applied to model the adsorption dynamics of colloids at interfaces \cite{Kaz2012}. Since the observed interfacial velocities in our experiments are well below the threshold required to support a dynamically-formed liquid film \cite{Brochard-Wyart2001}, we assume that the gaseous phase contacts the solid surfaces at all times. In the supplement, we compare bulk and contact line hydrodynamic dissipation with that of BH wetting/de-wetting and determine that the latter dominates the system. Although the disjoining pressure may influence the details of the interface geometry and play a role in wetting and de-wetting dynamics when the gap between the plates is small, this effect is not accounted for by the coarse-grain BH model.
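As a sanity check on the geometric relations above, the limits $\varphi|_{\psi=0}=\Phi$ and $\varphi|_{\psi=\pi/2}=0$ can be recovered numerically for a circular contact line ($\partial\rho/\partial\psi=0$); the opening half-angle below is an arbitrary illustrative value.

```python
import math

def phi_local(Phi, psi, rho, drho_dpsi):
    # tan(phi) = (n . e_x) tan(Phi), with the contact line normal written
    # in polar coordinates as in the text
    norm = math.hypot(rho, drho_dpsi)
    n_x = (rho * math.cos(psi) + drho_dpsi * math.sin(psi)) / norm
    return math.atan(n_x * math.tan(Phi))

Phi = 0.05  # illustrative half-opening angle, radians
print(phi_local(Phi, 0.0, 1.0, 0.0), phi_local(Phi, math.pi / 2, 1.0, 0.0))
```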
Substituting eqns. \ref{eq:localcontactanlge} and \ref{eq:localslopeB} into eqn. \ref{eq:BH} and linearizing for $\Phi\ll1$ and $\partial\rho/\partial\psi\ll1$ about $\Phi=0$ and $\partial\rho/\partial\psi=0$, we obtain
\begin{multline}
\frac{{\partial \rho }}{{\partial t}} + \frac{{dX}}{{dt}}\left( {\frac{{\sin \psi }}{\rho }\frac{{\partial \rho }}{{\partial \psi }} + \cos \psi } \right) + \mathcal{O}\left(\left( {{{\frac{{\partial\rho }}{{\partial\psi }}}}} \right)^2\right) =\\
U_0\left\{\frac{h_0}{R}+\Phi\left[\frac{X}{R}+\cos\psi\left(\frac{\rho}{R}-H\right)-\frac{\sin\psi H}{\rho}\frac{\partial\rho}{\partial\psi}\right.\right.\\
\left.\left.+\mathcal{O}\left(\left(\frac{\partial\rho}{\partial\psi}\right)^2\right)\right]-\cos\theta_0+\mathcal{O}\left(\Phi^2\right)\right\}
\label{eq:BHunpinned}
\end{multline}
where $H=\sqrt{1-\left(h_0/R\right)^2}$.
In our experiments, we observed bubbles with an advancing front and a pinned aft, FIG. \ref{fig:growthsequence}. To accommodate such situations, we modified the BH theory, introducing the weighting pre-factor $\left(1+\cos\psi\right)/2$ such that eqn. \ref{eq:BHunpinned} becomes
\begin{multline}
\frac{{\partial \rho }}{{\partial t}} + \frac{{dX}}{{dt}}\left( {\frac{{\sin \psi }}{\rho }\frac{{\partial \rho }}{{\partial \psi }} + \cos \psi } \right) =\\
U_0\frac{\left(1+\cos\psi\right)}{2}\left\{\frac{h_0}{R}+\Phi\left[\frac{X}{R}+\cos\psi\left(\frac{\rho}{R}-H\right)\right.\right.\\
\left.\left.-\frac{\sin\psi H}{\rho}\frac{\partial\rho}{\partial\psi}\right]-\cos\theta_0\right\}
\label{eq:BHpinned}
\end{multline}
The above weighting function smoothly reduces the contact line velocity from its maximum value at $\psi=0$ to zero at $\psi=\pi$ .
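A minimal sketch of this prefactor confirms that it decreases monotonically from full mobility at the advancing front to complete pinning at the aft:

```python
import math

def mobility_weight(psi):
    # (1 + cos psi)/2: 1 at psi = 0 (advancing front), 0 at psi = pi (pinned aft)
    return 0.5 * (1.0 + math.cos(psi))

samples = [mobility_weight(i * math.pi / 4) for i in range(5)]
print([round(w, 3) for w in samples])
```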
In defining our coordinate system, we introduced the variable $X$, denoting the position of the bubble's center, without providing a rigorous definition. We do so now by selecting $X\left(t\right)$ such that $\rho\left(0,t\right)=\rho\left(\pi,t\right)$ at all times. Equivalently, $\left.\partial\rho/\partial t \right|_{\psi=\pi}=\left.\partial\rho/\partial t \right|_{\psi=0}$ with $\rho\left(0,0\right)=\rho\left(\pi,0\right)$. Evaluating eqn. \ref{eq:BHunpinned} for $\psi=0$ and $\psi=\pi$, subtracting the results, and incorporating $\left(\partial\rho/\partial t\right)|_{\psi=\pi}=\left(\partial\rho/\partial t\right)|_{\psi=0}$ we obtain the evolution equation for $X$ for the unpinned case
\begin{equation}
2R\left(\frac{dX}{dt}+ {U_0}\Phi H \right)=
{U_0}\Phi \left( {\rho \left( {0,t} \right) + \rho \left( {\pi ,t} \right)} \right)
\label{eq:Xunpinned}
\end{equation}
Following the same procedure but with eqn. \ref{eq:BHpinned} gives the evolution for $X$ for the pinned case
\begin{multline}
R \left(2\frac{dX}{dt}+U_0 \Phi H \right)=\\
U_0\left\{\left[h_0+\Phi\left(X+\rho\left(0,t\right)\right)\right]-R\cos\theta_0\right\}
\label{eq:Xpinned}
\end{multline}
The evolution of the unpinned contact line is governed by eqns. \ref{eq:BHunpinned} and \ref{eq:Xunpinned}; the pinned contact line is governed by eqns. \ref{eq:BHpinned} and \ref{eq:Xpinned}. To complete our model, we still need to address the dependence of $R$ on time. For example, in an isobaric process, one would expect $\dot{R}=0$. In the supplement, we consider a Gedanken experiment, wherein the bubble's volume is controlled in a specific way $V\propto t^3$. Although perhaps not practical, this case is instructive as it admits geometrically self-similar growth in the asymptotic limit $t\rightarrow\infty$, which is helpful for understanding contact line dynamics when the growth rate is independent of $R$. Here, however, we are interested in mass transfer-driven growth, which we address below.
We begin by writing an expression for the volume of the bubble; it can be divided into two contributions, a nearly cylindrical portion
\begin{equation}
{V_\text{I}} =
\int\limits_0^{2\pi } {{\rho ^2}\left( {{h_0} + \rho \cos \psi {\mathop{\rm tan}\nolimits} \Phi } \right){\rm{d}}\psi }
\end{equation}
and the portion that bulges out from the contact line
\begin{equation}
{V_{\text{II}}} = \int\limits_0^{2\pi } {\frac{{{R^2}}}{2}\left[ {\pi - 2\theta + 2\varphi - \sin \left( {2\theta - 2\varphi } \right)} \right]{\cal N}{\rm{d}}\psi }
\label{eq:volII}
\end{equation}
In eqn. \ref{eq:volII}, we assumed $R/\rho\ll1$ to simplify the integrand. The integrals can be further simplified by applying, as before, the linearization, $\Phi\ll 1$ and $\partial\rho /\partial \psi \ll 1$, to yield
\begin{multline}
V=V_{\text{I}}+V_{\text{II}}=\\
\int\limits_0^{2\pi } {\frac{1}{2}\rho \left\{ {2{h_0}\rho + {R^2}\left[ {\pi - 2{{{\mathop{\rm cos}\nolimits} }^{ - 1}}\left( {\frac{{{h_0}}}{R}} \right) - \sin \left( {2{{{\mathop{\rm sin}\nolimits} }^{ - 1}}\left( {\frac{{{h_0}}}{R}} \right)} \right)} \right]} \right\}}\\
+ \Phi \left[ \frac{2}{3}{\rho ^3}\cos \psi - h_0^2\frac{{\partial \rho }}{{\partial \psi }}\sin \psi + {\rho ^2}\left( {\frac{{2h_0^2\cos \psi }}{{RH}} + X} \right)\right.\\
\left.+ \rho h_0^2\left( {\cos \psi + \frac{{2X}}{{RH}}} \right) \right]
+ \mathcal{O}\left( {{\Phi ^2}} \right)\mathcal{O}\left( {{{\frac{{\partial \rho }}{{\partial \psi }}}^2}} \right){\rm{d}}\psi
\label{eq:volume}
\end{multline}
Next, we derive the expression for the effective surface area of a bubble available for mass transport. While the bubble is not perfectly cylindrical, the radius of the outward bulge is small compared to the overall radius of the contact line. We therefore expect the concentration field around the bubble to be essentially the same as that around a cylindrical bubble. Hence, the surface area
\begin{equation}
S\sim\int\limits_0^{2\pi } {\int\limits_{ - h\left( \rho \right)}^{h\left( \rho \right)} \mathcal{N} {\rm{d}}z{\rm{d}}\psi }.
\end{equation}
Integration in the $z$-direction gives
\begin{equation}
S = 2\int\limits_0^{2\pi } {\left( {{h_0} + \left( {X + \rho \cos \psi } \right){\mathop{\rm tan}\nolimits} \Phi } \right)\mathcal{N} } {\rm{d}}\psi
\end{equation}
and linearizing yields
\begin{equation}
S=2\int\limits_0^{2\pi } {\rho \left( {{h_0} + \Phi \left( {X + \rho \cos \psi } \right)} \right) + \mathcal{O}\left( {{\Phi ^2}} \right) + \mathcal{O}{{\left( {\frac{{\partial \rho }}{{\partial \psi }}} \right)}^2}{\rm{d}}\psi }.
\label{eq:area}
\end{equation}
Next, we estimate the quasi-static total mass flux $\dot{n}$ into a bubble growing in a supersaturated solution with fixed far field concentration $C_{\infty}$ (at $\rho=\rho_{\infty}$) by solving the steady diffusion equation in cylindrical coordinates. We assume that the bubble's contact line is nearly circular with radius $\rho\sim\rho_0$ (where $\rho_0$ is the leading order coefficient describing the shape of the contact line and will be described in eqn. \ref{eq:expansion}). The gas concentration next to the bubble's surface $C_s=C\left(\rho=\rho_0\right)$ is given by a combination of Laplace's equation and Henry's law $C_s=K\left(P_{\infty}+\gamma/R\right)$, where $K$ is the Henry constant. In the above, we again took advantage of $R/\rho\ll1$ by assuming that only the confinement radius $R$ contributes to the Laplace pressure. For simplicity, we assume that the state of the gas inside the bubble is described with the ideal gas equation of state $PV=nBT$, where $P$ is the pressure, $B$ is the universal gas constant, and $T$ is the absolute temperature. Taking the time derivative of the state equation and using eqns. \ref{eq:volume} and \ref{eq:area} gives the differential equation that couples geometry and pressure (radius of curvature)
\begin{multline}
\left( {{P_\infty } + \frac{\gamma }{R}} \right)\dot V - \gamma V\frac{{\dot R}}{{{R^2}}} =\\
S\frac{{D\left[ {{C_\infty } - K\left( {{P_\infty } + \gamma /R} \right)} \right]}}{{\ln \left( {{\rho _0}/{\rho _\infty }} \right)}}BT
\label{eq:state}
\end{multline}
We define the supersaturation parameter $\alpha$ as the logarithm of the excess dissolved gas concentration relative to the surface concentration of the initial bubble
\begin{equation}
\alpha = {\log _{10}}\left(C_{\infty}/C_0 - 1 \right),
\label{eq:alpha}
\end{equation}
where ${C_0} = K\left( {{P_\infty } + \gamma /R\left( {t = 0} \right)} \right)$. When supersaturation is large, the system is driven at a rate that is incommensurate with the BH velocity and the contact angle goes to zero somewhere along the contact line. To avoid such a situation, we restrict our analysis to small and moderate supersaturations.
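For orientation, the magnitude of $\alpha$ can be connected to concrete concentrations. The sketch below uses the hydrogen Henry constant, surface tension, and ambient pressure quoted later in Section V; the initial radius of curvature and the far-field concentration are illustrative assumptions.

```python
import math

# Sketch: evaluating the supersaturation parameter alpha of eqn. (alpha).
# K, gamma, P_inf follow values quoted later in the paper; R0 and the
# far-field concentration are assumed figures for illustration.
K     = 7.74e-6      # Henry constant for H2, mol/(Pa m^3)
gamma = 40e-3        # surface tension, N/m
P_inf = 1.0e5        # ambient pressure, Pa
R0    = 100e-9       # assumed initial radius of curvature, m

C0 = K*(P_inf + gamma/R0)   # equilibrium concentration at the initial surface
C_inf = 1.002*C0            # far-field concentration 0.2% above C0 (assumed)
alpha = math.log10(C_inf/C0 - 1.0)
print(f"C0 = {C0:.3f} mol/m^3, alpha = {alpha:.2f}")
```

A far-field excess of a fraction of a percent already corresponds to $\alpha\approx-2.7$, comparable to the fitted values used later in the paper.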
For concreteness, we consider here circumstances when the initial geometry of the bubble is a spherical section with equilibrium contact angle along the entire contact line. While a spherical bubble does not satisfy our assumption $R/\rho\ll1$, it is the \emph{only} geometry that creates a uniform contact angle along the entire contact line. Any other geometry would introduce additional dynamics at early times, which we wish to avoid. While this initial state is somewhat artificial, we find that, in practice, the condition $R/\rho\ll1$ is reached quickly. FIG. \ref{fig:schematic}B depicts the geometry of the initial state of our bubble. Based on simple geometric considerations, we have the following relationship between the initial radius and the height of the channel at the center of the sphere
\begin{equation}
R_0=R\left(t=0\right)=h_0 \frac{\cos\Phi}{\cos\theta_0},
\label{eq:RIC}
\end{equation}
where $h_0$ is used in the definition of our coordinate system eqn. \ref{eq:localheight2}. The initial contact line shape is a circle with radius $R_0\sin\theta_0$. The projection of this circle onto the $x-y$ plane is an ellipse with major and minor radii $\left\{a,a^* \right\}$ given by
\begin{equation}
\left\{ {\begin{array}{*{20}{c}}
a\\
a^*
\end{array}} \right\} = \left\{ {\begin{array}{*{20}{c}}
{{{R}_0}\sin {\theta _0}}\\
{{{R}_0}\sin {\theta _0}\cos \Phi }
\end{array}} \right\} = \left\{ {\begin{array}{*{20}{c}}
{{h_0}\tan {\theta _0}\cos \Phi }\\
{{h_0}\tan {\theta _0}{{\cos }^2}\Phi }
\end{array}} \right\}
\end{equation}
In our polar coordinate system centered about $X$, the contact line is described by
\begin{equation}
\rho \left(\psi, {t = 0} \right) = \frac{{{h_0}\tan {\theta _0}{{\cos }^2}\Phi }}{{\sqrt {{{\cos }^2}\psi + {{\cos }^2}\Phi {{\sin }^2}\psi } }}
\label{eq:rhoIC}
\end{equation}
where the initial center of the bubble
\begin{equation}
X\left( {t = 0} \right) = - {h_0}\sin \Phi \cos \Phi.
\label{eq:XIC}
\end{equation}
As illustrated by FIG. \ref{fig:schematic}B, $X\left(t=0\right)$ is negative because the center of the initial contact line is to the left of the center of the initial sphere, which serves as the origin of the global coordinate system. To summarize, the problem statement consists of a partial differential equation (the BH relationship for the contact line velocity, eqn. \ref{eq:BHunpinned} for the unpinned case and eqn. \ref{eq:BHpinned} for the pinned case), the equation of state eqn. \ref{eq:state}, an evolution equation for the bubble's center (unpinned eqn. \ref{eq:Xunpinned} and pinned eqn. \ref{eq:Xpinned}), and initial conditions (eqns. \ref{eq:rhoIC} and \ref{eq:XIC}).
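The initial conditions can be cross-checked numerically: eqn. \ref{eq:rhoIC} should return the minor radius $a^*$ along the wedge axis ($\psi=0$) and the major radius $a$ transverse to it ($\psi=\pi/2$). The parameter values below are representative choices, not measurements.

```python
import math

# Numerical cross-check of the initial conditions, eqns. (RIC) and (rhoIC).
# h0, Phi, theta0 are representative values (not measurements).
h0, Phi, theta0 = 100e-9, 1e-3, math.radians(30.0)

R0 = h0*math.cos(Phi)/math.cos(theta0)        # eqn. (RIC): initial radius
a      = R0*math.sin(theta0)                   # major (transverse) radius
a_star = R0*math.sin(theta0)*math.cos(Phi)     # minor (axial) radius

def rho_ic(psi):
    """Initial contact line shape, eqn. (rhoIC)."""
    return (h0*math.tan(theta0)*math.cos(Phi)**2
            / math.sqrt(math.cos(psi)**2 + math.cos(Phi)**2*math.sin(psi)**2))

# psi = 0 points along the wedge axis (the tilt direction) -> minor radius;
# psi = pi/2 is transverse to it -> major radius.
print(f"R0 = {R0*1e9:.2f} nm")
print(f"rho(0)/a*   = {rho_ic(0.0)/a_star:.12f}")
print(f"rho(pi/2)/a = {rho_ic(math.pi/2)/a:.12f}")
```

Both ratios evaluate to unity to machine precision, confirming that the polar description of eqn. \ref{eq:rhoIC} reproduces the projected ellipse.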
To simplify the numerical solution of this hybrid system of equations, we assume that $\rho$ is a continuous function of $\psi$ and use a spectral decomposition of the contact line's position
\begin{equation}
\rho \left( {\psi ,t} \right) = \sum\limits_{n = 0}^N {{\rho _n}\left( t \right)\cos \left( {n\psi } \right)}.
\label{eq:expansion}
\end{equation}
Only cosine terms are used in the above expansion because we restrict our analysis to bubbles symmetric with respect to the principal axis of the wedge ($\psi=0,\pi$). We substitute eqn. \ref{eq:expansion} into the BH equation (eqns. \ref{eq:BHunpinned} or \ref{eq:BHpinned}) and require that it is satisfied in the sense of weighted residuals
\begin{equation}
\int\limits_0^{2\pi } {\cos \left( {n\psi } \right)\left( {\text{BH}_{\text{LHS}} - \text{BH}_{\text{RHS}}} \right){\rm{d}}} \psi = 0,{\rm{ }}n = 0,1,2 \cdots.
\end{equation}
We show terms for the case $N=2$ in the electronic supplement. The above decomposition has the virtue of not only reducing the system to a set of non-linear ordinary differential equations, but also of readily incorporating the integral (eqn. \ref{eq:state}) and evolution equations (eqn. \ref{eq:Xunpinned} or eqn. \ref{eq:Xpinned}) in the model. We find rapid convergence, and very few modes are needed in the absence of pinning (\emph{i.e.}, when bubbles are free to translate, the contact line is nearly circular at all times). In the presence of pinning, the model with $N=5$ functions well for moderate times and is able to model bubble growth up to the point at which bubbles detached in the experiment. For long times, higher frequency modes begin to dominate, which invalidates assumptions made in our model. While we only use cosine terms, the problem could be generalized to include sine terms if, for example, initial conditions with broken symmetry were of interest or if one wished to include additional forces acting transverse to the wedge axis; otherwise, there is no physical reason for information to flow into the odd functions, given the problem's symmetry.
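The mechanics of the weighted-residual step can be illustrated with a toy residual whose cosine content is known in advance; projecting onto $\cos(n\psi)$ then recovers each mode amplitude. The residual below is a stand-in, not the actual BH residual.

```python
import numpy as np

# Sketch of the weighted-residual projection that reduces the BH equation
# to ODEs for the mode amplitudes rho_n(t). A toy residual with known
# cosine content stands in for BH_LHS - BH_RHS.
psi = np.linspace(0.0, 2*np.pi, 2001)

def project(residual, n):
    """Closed integral of cos(n psi) * residual(psi), trapezoidal quadrature."""
    y = np.cos(n*psi)*residual
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(psi)))

residual = 0.3*np.cos(psi) - 0.1*np.cos(2*psi)   # known content: modes 1 and 2
coeffs = [project(residual, n) for n in range(6)]

# For n >= 1 the cosines are orthogonal with norm pi, so coeffs[n]/pi
# recovers the amplitude of mode n:
print(round(coeffs[1]/np.pi, 6), round(coeffs[2]/np.pi, 6))  # -> 0.3 -0.1
```

In the actual model the same projection is applied to the nonlinear BH residual, yielding one coupled ODE per retained mode.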
\section{Experimental Method and Observations}
We imaged bubble nucleation, growth, and migration in our custom-made liquid cell (the nanoaquarium) \cite{Grogan2010,Grogan:2011dj, Grogan2014} with a transmission electron microscope (TEM). The liquid cell consists of a thin (nominally 200 nm) liquid layer confined and hermetically sealed between two very thin (50 nm) silicon nitride membranes (100 $\mu$m $\times$ 100 $\mu$m). See FIG. S1 in the electronic supplement for a schematic depiction of our liquid cell. Additional details on the structure of the liquid cell and its fabrication are available in Grogan and Bau \cite{Grogan2010}. The liquid is sealed from the vacuum of the electron microscope, and the entire assembly is thin enough to be transparent to electrons. In our experiments, the liquid cell is filled with a solution of water with a trace amount of the surfactant cetrimonium bromide (CTAB). The liquid cell is imaged at 300 kV using a beam of diameter 4 $\mu$m and current 1 nA. FIG. \ref{fig:growthsequence} (movie S1) shows a series of bubbles nucleating from a location that is most likely a defect, such as a pit, on the interior surface of one of the liquid cell windows. These bubbles form because of radiolysis of water by the electron beam, which generates gaseous species. The primary species are hydrogen and oxygen \cite{Schneider2014}; their production induces bubble nucleation and sustains bubble growth. Liquid cell electron microscopy is a new technique that can provide nanometer resolution of aqueous samples \cite{Ross2015}. In terms of imaging fluid dynamics phenomena, the technique is still evolving: the liquid layer geometry cannot be defined accurately by the microscopist, and the nucleation sites for bubbles are determined by random defects in the silicon nitride windows. This restricts our observations to somewhat uncontrolled experiments and precludes quantitative comparison between our theoretical predictions and data.
Nevertheless, the liquid cell provides sufficient information to allow us to qualitatively compare our theoretical predictions with experimental data. TEM liquid cell microscopy has been used previously to image condensation \cite{Bhattacharya:2014cr}, motion of droplets \cite{Mirsaidov2012a}, dewetting phenomena \cite{Mirsaidov2012b}, and bubble formation by heating \cite{White2012}; however, the physics behind these phenomena has not been modeled extensively and appears distinct from the phenomena discussed here.
\begin{figure}[h]
\includegraphics[width=\columnwidth]{growthstages_modified_2.pdf}
\caption{(A-C) A series of schematics (1st column), bright field electron micrographs (2nd column), and intensity profiles taken along the midline of the bubble indicated on the images (3rd column) of bubbles growing under confinement. The bubble in (C) is pinned to a nucleation site.}\label{fig:growthstages}
\end{figure}
\begin{figure}
\includegraphics[width=3in]{naqrm_collapse_nano.pdf}
\caption{(A) and (B), respectively, show a microscopy image and corresponding intensity profiles (\emph{i} is taken along the centerline of the bubble, while \emph{ii} is taken parallel to it in a nearby region of the device). Since the transmission of (\emph{ii}) is flat and similar to that of the bubble (\emph{i}), we conclude that the membranes have come into contact along the line labeled in (A) and (B).}
\label{fig:collapsenano}
\end{figure}
FIG. \ref{fig:growthstages}A-C shows bright field electron microscope images of bubbles growing under confinement in our liquid cell. Three bubbles at different stages of their growth are shown, along with a schematic side view (first column) that interprets the intensity profiles (third column). The scattering of electrons by the irradiated medium, and hence the darkness of the image, scales with the integrated atomic number density along the beam's path, or in our case, the local thickness of water. Darker (lower intensity) sections of the image therefore indicate a thicker liquid layer. The flat intensity profile of the bubble in FIG. \ref{fig:growthstages}C indicates that the bubble has contacted both membranes, while the ``shaded'' bubbles in (A) and (B) indicate that electrons encounter both liquid and vapor along their path and that the bubbles have therefore not contacted both surfaces.
When we observe nucleation, the bubbles appear to emanate periodically from a presumed impurity in the silicon nitride film. FIG. \ref{fig:growthsequence} shows snapshots of bright field electron microscope micrographs of a bubble growing under high confinement (movie S1). We compare our theory to bubbles like these. Nucleated bubbles are first observed when they are $\sim80$ nm in diameter and depart when they are $\sim250$ nm in size. The bubbles depart by breaking into two bubbles: a smaller bubble that remains attached to the nucleation site and a larger bubble that continues to translate. The process repeats with a frequency of 0.3 Hz. The reproducibility of the process from one bubble to the next suggests quasi-static far field supersaturation \cite{Grogan2014}. Indeed, theory suggests that the radiolysis process is self-regulating and that the concentrations of radiolysis products achieve steady state \cite{Schneider2014}. Since we cannot measure the dissolved gas concentration, we use it as a fitting parameter $\alpha$ (eqn. \ref{eq:alpha}) in our model.
The experimental data suggest that, locally, bubbles always migrate in the same direction, towards lower confinement, FIG. \ref{fig:growthsequence}B. In support of this, we show a bubble at a different nucleation site in FIG. \ref{fig:collapsenano}A; FIG. \ref{fig:collapsenano}B depicts two intensity profiles along parallel lines (\emph{i}) and (\emph{ii}), shown as arrows in FIG. \ref{fig:collapsenano}A. Profile (\emph{i}) includes the bubble, and profile (\emph{ii}) runs alongside the bubble. The recorded intensity declines as we go from left to right along line (\emph{ii}), suggesting that the thickness of the liquid layer increases from left to right. Furthermore, the similarity of the transmitted intensity of the bubble to that of the region from which it emanates indicates a very thin liquid layer and possibly membrane contact. During loading of the liquid cell, capillary forces can pull the silicon nitride membranes into close proximity. We show such collapsed membranes in the supplement at lower magnification using optical microscopy (FIG. S2). FIG. \ref{fig:collapsenano} and FIG. S3 show that bubbles grow and migrate in the tapered conduit in the direction of diverging plates. This is the situation for which we built our model in Section II above. As in FIG. \ref{fig:growthstages}C, the detected intensity along the bubble (\emph{i}) is nearly uniform and the transition from the bubble to the bulk liquid is relatively rapid compared to the size of the bubble's plateau; one can therefore infer that the bubble is highly confined, justifying the assumption $R/\rho \ll 1$ made in the theory section.
Using simple image processing algorithms, we automate measurements of bubble features such as centroid position, area, and contact line shape, and stitch together bubble trajectories using established techniques \cite{Schneider:2016hm,Blair}. In Section V, we compare experimental observations with theoretical predictions.
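A minimal sketch of this metrology step is shown below on a synthetic bright-field frame (the bubble transmits more electrons and appears bright against the darker liquid). The frame, threshold, and sizes are illustrative; the actual pipeline additionally links detections across frames with the cited particle-tracking tools.

```python
import numpy as np

# Illustrative bubble metrology on a synthetic bright-field frame:
# threshold the bright bubble, then extract centroid and equivalent radius.
ny, nx = 256, 256
yy, xx = np.mgrid[0:ny, 0:nx]
cx, cy, r_true = 140.0, 100.0, 30.0
frame = 0.4*np.ones((ny, nx))                        # darker liquid background
frame[(xx - cx)**2 + (yy - cy)**2 < r_true**2] = 0.9 # bright bubble

mask = frame > 0.65                                  # illustrative threshold
area_px = mask.sum()
cx_meas = (xx*mask).sum()/area_px                    # mask centroid, x
cy_meas = (yy*mask).sum()/area_px                    # mask centroid, y
r_equiv = np.sqrt(area_px/np.pi)                     # equivalent circular radius

print(f"centroid = ({cx_meas:.1f}, {cy_meas:.1f}), r = {r_equiv:.1f} px")
```

Contact line shape would follow by sampling the mask boundary in polar coordinates about the centroid, which maps directly onto the cosine-mode description of eqn. \ref{eq:expansion}.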
\section{Model Results and Discussion}
\begin{figure}[h]
\includegraphics[width=\columnwidth]{theory_2x2_2.pdf}
\caption{Model predictions for (A) bubble size, (B) velocity, (C) radius of curvature, and (D) aspect ratio for two different contact line viscosities, $\xi=10^2$ (gray) and $\xi=10^4$ (black), and slopes, $\Phi=10^{-4}$ (dotted) and $\Phi=10^{-3.5}$ (solid). In all cases $h_0=100$ nm and $\alpha=-6$. \label{fig:theory}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=\columnwidth]{contactangleevolution.pdf}
\caption{(A) Contact angle $\theta\left(\psi,t\right)$ relative to the equilibrium contact angle $\theta_0$ as a function of polar angle $\psi$ at a few different stages of bubble growth. (B) $\theta_0-\theta$ at the leading (solid) and trailing (dashed) edges of the bubble as a function of time; for $t>0.3$ s, dewetting occurs at the leading edge and wetting occurs at the trailing edge. Inset (\emph{i}) shows the short-time dynamics, in which the bubble expands in all directions. $h_0=100$ nm, $\alpha=-6$, $\xi=10^4$, and $\Phi=10^{-4}$. \label{fig:contactangle}}
\end{figure}
In all our theoretical predictions, we assume that the gaseous species is hydrogen and that the liquid cell is at room temperature ($T=298$ K) and pressure ($P=0.1$ MPa) \cite{Grogan2014}. We assume that, as bubbles grow, they do not significantly increase the pressure in the device since their volume is minuscule compared to that of the device. We use $\theta_0=30^{\circ}$ \cite{Arkles2006}, $K_{\text{H}_2}=7.74\times10^{-6}$ mol/Pa $\text{m}^3$ \cite{Sander2015}, $D_{\text{H}_2}$ $\text{m}^2$/s \cite{Cussler2009}, $\eta_{\text{CL}}=k_{\text{B}}T/\left(\kappa_0\lambda^3\right)=\xi\eta_0$ Pa s \cite{Petrov1997,Blake2006}, bulk viscosity $\eta_0=8.9\times 10^{-4}$ Pa s, and surface tension of the vapor-liquid interface $\gamma=40$ mN/m. We use a surface tension of the gas-liquid interface lower than that of pure water to account for the presence of trace amounts of the CTAB surfactant in our experiment \cite{Berg2010}. The negative value of $\alpha$ corresponds to a far field gas concentration that is only slightly above the equilibrium concentration at the surface of the initial bubble. In the expression for the contact line viscosity, $\lambda$ is the lattice spacing, $k_\text{B}$ is the Boltzmann constant, and $\kappa_0$ is the hopping frequency. We explore a range of dimensionless contact line viscosities ($\xi=\eta_{\text{CL}}/\eta_{0}=10^2-10^6$) because literature values vary greatly ($\lambda\sim10^{-10}-10^{-9}$ m and $\kappa_0\sim10^3-10^9$ $\text{s}^{-1}$ \cite{Blake2006}), are not known for our system, and may be altered by the presence of surfactants \cite{Petrov1997}. The local slope of the device is also not precisely known, but it is estimated to be in the range $\Phi=10^{-3}-10^{-2}$, as further discussed in the electronic supplement.
\begin{figure}[h]
\includegraphics[width=\columnwidth]{Plots3DPinnedUnpinnedExpCompare_4.pdf}
\caption{Comparison of measured (points) and predicted average bubble size $\rho_0$ as a function of time for several bubbles emerging from the same nucleation site (colors indicate different bubbles nucleating from the same site). Model results for unpinned (black) and pinned (gray) are shown along with the evolution of the radius of curvature $R$ (dashed). The open circle denotes the end of the validity of the pinned model. \label{fig:comparemetrics}}
\end{figure}
Before comparing to experiment, we examine the impact of some of the key parameters in the model, the slope of the channel $\Phi$ and the contact line viscosity $\xi$, in FIG. \ref{fig:theory}. We fix the initial channel height $h_0=100$ nm and supersaturation $\alpha=-4$. FIG. \ref{fig:theory} depicts (A) the bubble size $\rho_0$, (B) the velocity of the bubble's center of mass $\dot{X}$, (C) the radius of curvature $R$, and (D) the ratio of curvatures $R/\left(\rho_0+R\left(1-\sin\theta_0\right)\right)$ as functions of time when $\xi=10^2$ (gray line) and $10^4$ (black line) and when $\Phi=10^{-3.5}$ (solid line) and $10^{-4}$ (dotted line). The effect of the contact line viscosity is intuitive: the greater the dissipation at the contact line, the more the contact angle must be pushed out of equilibrium to achieve the same velocity, leading to slower growth. Interestingly, changing the slope of the taper also changes the time at which the bubble appreciably departs from its initial size. To understand why this is the case, we consider the short time dynamics more closely.
Initially, the bubble grows slowly because it takes time for the contact line to move. Contact-line motion is needed to enable the Laplace pressure and the equilibrium concentration of the dissolved gas next to the bubble's surface to decrease. At short times, before the contact line moves, the only way to accommodate mass is by \emph{decreasing} $R$, which bulges the bubble further beyond the contact line, increasing the Laplace pressure and slowing mass transfer. Thus, at short times, contact line resistance introduces a negative feedback mechanism on bubble growth. This dynamic is apparent when examining the contact angle distribution at short times ($t<0.3$ s) in FIG. \ref{fig:contactangle}: the local contact angle $\theta\left(\psi,t\right)$ is less than the equilibrium contact angle $\theta_0$ for all $\psi$, indicating that the bubble has displaced fluid in all directions.
According to BH theory, such a contact angle distribution will cause the contact line to advance into the liquid (de-wet) and give way to the growing bubble. While this occurs at all points on the contact line initially, the contact angle distribution eventually breaks symmetry, giving rise to a faster velocity at the leading edge, FIG. \ref{fig:contactangle}. As the interface at the leading edge moves, $h$ increases, allowing $R$ to increase, the equilibrium concentration of the dissolved gas to decrease, and the mass transport to increase. In other words, the increasing channel height provides a positive feedback mechanism that accelerates the bubble's growth once the contact line begins moving. The larger the opening slope or the smaller the contact line viscosity, the quicker the switch between negative and positive feedback occurs. As the bubble's geometry evolves, the contact angle distribution switches from being non-uniformly below the equilibrium contact angle $\theta_0$ (corresponding to de-wetting) to a portion of the rear contact line being greater than the equilibrium value. In other words, once the bubble begins translating down the conduit, de-wetting occurs at the leading edge ($\psi=0$) and wetting occurs at the trailing edge ($\psi=\pi$); FIG. \ref{fig:contactangle} tracks the evolution of the contact angle at these two locations. The moment of the sign change at the trailing contact line is made clear in the inset of FIG. \ref{fig:contactangle}B\emph{i} (dashed line).
Interestingly, while the velocity $\dot{X}$ initially increases (FIG. \ref{fig:theory}B), it achieves a maximum and then slowly declines as $t\rightarrow\infty$. This velocity scaling is correlated with the aspect ratio $\varepsilon=R/\left(\rho_0+R\left(1-\sin\theta_0\right) \right)$ tending towards unity, indicating that the bubble's geometry becomes more spherical over time. Since this also means that the contact angle distribution is becoming more uniform (tending towards $\theta\left(\psi\right)\rightarrow \theta_0$), we consider the limit of fast contact line relaxation, $\eta_{\text{CL}}=0$, in the supplement. In this case, the contact angle is always uniform and at equilibrium; the bubble is therefore a spherical section that grows self-similarly and translates down the conduit at the velocity required to satisfy the geometric constraints. In the supplement we show that $\dot{X}\propto t^{-1/2}$, just as in FIG. \ref{fig:theory}B at large times. For the parameters used, our theory therefore predicts that confinement and contact line dynamics become less important once bubbles are on the order of $\sim$ 100 $\mu$m - 1 mm in extent.
\begin{figure*}
\includegraphics[width=\textwidth]{PseudoSpectral_PlotProfilePinnedCompare_9.pdf}
\caption{(Top) Experimental observations of a bubble growing from a nucleation site (dashed white lines). (Bottom) Pinned (black lines, eqn. \ref{eq:BHpinned}) and unpinned (dashed lines, eqn. \ref{eq:BHunpinned}) model predictions compared to the contact line from the top row (thick gray lines).\label{fig:compareshapes}}
\end{figure*}
\section{Comparison with Experimental Observations}
FIG. \ref{fig:comparemetrics} compares our model predictions, with (gray) and without (black) pinning (modeled with the weighted eqn. \ref{eq:BHpinned}, which smoothly interpolates between a completely pinned contact line at $\psi=\pi$ and full mobility at $\psi=0$), to experimental observations of bubbles that are pinned to their nucleation site (symbols). While we know the bubbles to be pinned, for comparison we show predictions made by both the pinned and unpinned versions of the model. For both models, $\Phi=10^{-3}$, $h_0=80$ nm, $\alpha=-2.9$, and $\xi=5.4$. FIG. \ref{fig:comparemetrics}A depicts $\rho_0$ (solid line) and $R$ (dashed line) as functions of time. When $t<10$ s, we find no significant difference in the leading order growth rates predicted in the absence and presence of pinning. This is because the feedback of the higher order geometric terms $\rho_{1,2,\cdots}$ on $R$ is weak when the bubble is nearly circular. The open circle in FIG. \ref{fig:comparemetrics} indicates the end of the theoretical predictions for the pinned case; while the model fails at finite time, we note that in the experiment the bubble breaks apart prior to this point. In this paper, we do not consider instabilities that may lead to detachment. Before breakup, there is good qualitative agreement between the model and the experiment. In FIG. \ref{fig:compareshapes}, we more closely compare the predicted geometry of the contact line to observations of the same data set. We see that the theoretical predictions are in good agreement with the observations away from the pinning/nucleation site (dashed circle).
Finally, in FIG. \ref{fig:aspectratio} we examine the relationship between the size $\rho_0$ and the \emph{projected} aspect ratio $\mathscr{E}$ of the bubble (not to be confused with the aspect ratio $\varepsilon$ plotted in FIG. \ref{fig:comparemetrics}D)
\begin{equation}
\mathscr{E}=\left(\rho_0-\rho_2+\rho_4 \right)/\left(\rho_0+\rho_2+\rho_4\right)
\label{eq:aspectratio}
\end{equation}
where $\rho_{2,4}$ are shape corrections to $\rho_0$ (for $N=5$). We find that the faster growing bubbles (those with higher supersaturation) become elongated at smaller sizes. Our model therefore predicts that contact line dynamics and the tapered channel together introduce a growth rate-dependent geometric evolution, which would otherwise be absent in the zero capillary and Bond number regime we consider. Since bubble breakup likely depends on the extent to which the bubbles elongate, we posit that the bubble creation frequency and departure size will depend on supersaturation. Our current single-curvature model does not allow us to model the breakup mechanism. Before breakup, a neck forms, which invalidates the assumptions of our model; however, controlled experiments on droplet breakup in tapered conduits in the quasi-static regime strongly implicate a geometric instability \cite{Dangla2013}.
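When the odd modes are neglected, eqn. \ref{eq:aspectratio} has a simple geometric reading: it is the ratio of the bubble's transverse extent $\rho(\pi/2)=\rho_0-\rho_2+\rho_4$ to its axial extent $\rho(0)=\rho_0+\rho_2+\rho_4$. A small sketch with hypothetical mode amplitudes:

```python
import math

# With only even cosine modes retained, the projected aspect ratio of
# eqn. (aspectratio) equals rho(pi/2)/rho(0), i.e. the transverse-to-axial
# extent ratio. Mode amplitudes below are hypothetical.
def aspect_ratio(rho0, rho2, rho4):
    return (rho0 - rho2 + rho4)/(rho0 + rho2 + rho4)

def rho(psi, rho0, rho2, rho4):
    return rho0 + rho2*math.cos(2*psi) + rho4*math.cos(4*psi)

modes = (1.0, 0.15, 0.02)            # bubble elongated along the wedge axis
E = aspect_ratio(*modes)
assert abs(E - rho(math.pi/2, *modes)/rho(0.0, *modes)) < 1e-12
print(f"E = {E:.3f}")                # E < 1 for an axially elongated bubble
```

A positive $\rho_2$ therefore maps directly onto $\mathscr{E}<1$, the elongated teardrop shapes observed experimentally.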
\begin{figure}[h]
\includegraphics[width=\columnwidth]{PsuedoSpectral_AspectRatioParametricExperimentCompare_6.pdf}
\caption{Model predictions (lines) of bubble size $\rho_0$ as a function of aspect ratio $\mathscr{E}$ (eqn. \ref{eq:aspectratio}) for three different supersaturation values compared to experiment (points; colors indicate different bubbles nucleating from the same site). Arrows show the progression of time, and the circle highlights the point at which the bubbles break away from their nucleation sites ($t\sim$1-2 s).}
\label{fig:aspectratio}
\end{figure}
\section{Conclusion}
We imaged and analyzed the growth and motion of sub-micron bubbles in a supersaturated liquid confined between two narrowly separated, rigid, diverging plates in the asymptotic limit of zero capillary and Bond numbers. To image sub-micrometer-size bubbles, we used liquid cell electron microscopy, which provides spatial and temporal resolution that is not readily available from other types of microscopy. The motion of macroscopic bubbles is often driven by capillary and buoyancy forces, both of which are negligible at the length scales considered here. We hypothesize that bubble motion and growth are rate-limited by contact line dynamics. To predict contact line motion, bubble growth, and interface geometry, we use the Blake-Haynes mechanism to describe contact line motion as a function of contact angle. Variations in contact angle result from gas mass flow into the bubble under supersaturated conditions. Our theoretical predictions agree qualitatively with experimental observations. At early stages, bubbles grow slowly, as contact line-mediated curvature mitigates the positive feedback mechanism that would otherwise enhance mass transfer into the bubble and drive rapid growth. This is somewhat similar to one of the more recent mechanisms proposed to explain the longevity of surface-bound nano-bubbles, in which pinning of the contact line controls curvature \cite{Weijs2013,Lohse2015a}. At longer times, our model predicts that the growth rate of the bubbles accelerates due to positive feedback: the decreasing radius of curvature reduces the equilibrium gas concentration next to the bubbles' surfaces, enhancing mass transport, and the confinement decreases.
Our observations and model predictions have implications for various processes in which bubbles nucleate and grow within surface defects, fissures, or tailored nanostructures, such as those used for boiling and hydrolysis. Clearly, one could exploit geometry to clear bubbles; this much has been explored for microgravity applications \cite{Jenson2014}, albeit relying on driving forces that are negligible at the nanoscale of our experiments.
Perhaps more intriguing is our model's implication that bubble geometry depends on growth rate. This is somewhat unexpected given the overdamped regime but arises as the result of an additional time scale in the problem, the contact line relaxation. Growth-rate dependence is particularly apparent when aft contact line pinning is included in the model. Our prediction that the aspect ratio of growing teardrop bubbles can be controlled by controlling the supersaturation level is a novel mechanism that could be exploited in device design.
This paper has examined a fluid mechanical problem with the emerging experimental method of liquid cell electron microscopy. We demonstrate that liquid cell electron microscopy, with its few-nanometer spatial and video-rate temporal resolution, can provide meaningful data that would be difficult, if not impossible, to obtain by other means. Although our liquid cell imaging provided qualitative information to support our theoretical predictions, the method is still in its infancy in terms of yielding quantitative data. Some calibration tools exist, for example, the use of electron energy loss spectroscopy to measure liquid thickness \cite{Jungjohann2012}, but their accuracy is not sufficient for these types of experiments. We hope that, in the future, liquid cells will be better equipped with diagnostic tools, such as means to measure sufficiently accurately the distance between the confining plates and the conditions (pressure, temperature, chemical composition) of the liquid inside the liquid cell. Such developments would allow for better controlled experiments and enable quantitative comparisons between theoretical predictions and experimental data.
\section{Acknowledgements}
The research was supported, in part, by the National Science Foundation CBET 1066573 and the Nano/Bio Interface Center through the National Science Foundation NSEC DMR08-32802. All movies were recorded by Joseph M. Grogan at the IBM T. J. Watson Research Center.
\section{Introduction}
In studies of nonlinear wave dynamics in physical systems, nonlinear
Schr\"{o}dinger (NLS)-type equations play a prominent role. It is
known that a weakly nonlinear one-dimensional wave packet in a
generic physical system is governed by the NLS equation
\cite{Benney}. Hence this equation appears frequently in nonlinear
optics and water waves \cite{Agrawal_book, Hasegawa_book,
Ablowitz_Segur}. Recently, it has been shown that the nonlinear
interaction of atoms in Bose-Einstein condensates is governed by a
NLS-type equation as well (called Gross-Pitaevskii equation in the
literature) \cite{Dalfovo_1999}. In these physical systems, the
nonlinearity can be focusing or defocusing (i.e., the nonlinear
coefficient can be positive or negative), depending on the physical
situations \cite{Ablowitz_Segur} or the types of atoms in
Bose-Einstein condensates \cite{Dalfovo_1999}. When two wave packets
in a physical system or two types of atoms in Bose-Einstein
condensates interact with each other, their interaction then is
governed by two coupled NLS equations \cite{Agrawal_book,
Hasegawa_book,Ho_Shenoy,Pu_Bigelow_1,Pu_Bigelow_2,Goldstein_Meystre,Roskes,Menyuk_1987}.
The single NLS equation is exactly integrable
\cite{Zakharov_Shabat}. It admits bright solitons in the focusing
case, and dark solitons in the defocusing case. Its bright
$N$-soliton solutions were given in \cite{Zakharov_Shabat}, and its
dark $N$-soliton solutions can be found in \cite{Faddeev_book}. The
coupled NLS equations are also integrable when the nonlinear
coefficients have the same magnitudes
\cite{Manakov,Zakharov1982,Wang2010}. In these integrable cases, if
all nonlinear terms are of focusing type (i.e., the nonlinear
coefficients are all positive), the coupled NLS equations are the
focusing Manakov model which admits bright-bright solitons
\cite{Manakov}. If all nonlinear terms are of defocusing type (i.e.,
the nonlinear coefficients are all negative), the coupled NLS
equations are the defocusing Manakov model which admits bright-dark
and dark-dark solitons \cite{Sheppard_Kivshar_1997,RL,Ablowitz1}. If
the focusing and defocusing nonlinearities are mixed (i.e., the
nonlinear coefficients have opposite signs), these coupled NLS
equations admit bright-bright solitons \cite{Wang2010,KLTA} and
bright-dark solitons \cite{VKL_2008}. Existence of dark-dark
solitons in this mixed case has not been investigated yet.

Soliton interaction in these integrable generally coupled NLS
equations is a fascinating subject. In the focusing Manakov model,
an interesting phenomenon is that bright solitons change their
polarizations (i.e. relative energy distributions among the two
components) after collision \cite{Manakov}. In the coupled NLS
equations with mixed nonlinearities, energy can also transfer from
one soliton to another after collision \cite{KLTA}. In addition,
solitons can be reflected off by each other as well \cite{Wang2010}.
In the defocusing Manakov model, two bright-dark solitons can form a
stationary bound state, a phenomenon which does not occur for scalar
bright or dark solitons \cite{Sheppard_Kivshar_1997}. All these
interesting interaction behaviors can be described by multi-soliton
solutions in the underlying integrable system. In the focusing
Manakov model, $N$-bright-bright solitons were derived in
\cite{Manakov} by the inverse scattering transform method. In the
mixed-nonlinearity model, two- and three-bright-bright solitons and
two-bright-dark solitons were derived in \cite{KLTA,VKL_2008} by the
Hirota method, and $N$-bright-bright solitons were derived in
\cite{Wang2010} by the Riemann-Hilbert method. In the defocusing
Manakov model, $N$-bright-dark solitons were derived in
\cite{Sheppard_Kivshar_1997}, and degenerate two-dark-dark solitons
were derived in \cite{RL}, both by the Hirota method.

So far, progress on dark-dark solitons in the integrable generally
coupled NLS equations has been very limited. While dark-dark solitons
in the defocusing Manakov model were derived in \cite{RL}, we will
show that those two- and higher-dark-dark solitons are actually
degenerate and reducible to scalar dark solitons. In \cite{Ablowitz1},
the inverse scattering transform method was developed for dark
solitons in the defocusing Manakov model, but, as we will show in this
paper, that analysis cannot yield general dark-dark solitons either,
due to the choice of boundary conditions made there. To date, general
multi-dark-dark solitons in the coupled NLS equations have never been
reported, to our knowledge. As we will see, these general solutions
are not easy to obtain, due to non-trivial parameter constraints which
must be met.

In this paper, we comprehensively analyze dark-dark solitons and
their dynamics in the generally coupled integrable NLS equations.
First, we show that these coupled NLS equations can be obtained as a
reduction of the Kadomtsev-Petviashvili (KP) hierarchy. Then using
$\tau$-function solutions of the KP hierarchy, we derive the general
$N$-dark-dark solitons in terms of Gram determinants. These
dark-dark solitons exist in both the defocusing Manakov model and
the mixed-nonlinearity model. Recalling that bright-bright solitons
exist in the mixed-nonlinearity model as well \cite{Wang2010,RL}, we
see that the coupled NLS equations with mixed nonlinearities are the
rare integrable systems which admit both bright-bright and dark-dark
solitons. The dark-dark solitons obtained previously in
\cite{RL,Ablowitz1} for the defocusing Manakov model are only
degenerate cases of our general solutions. Next, we analyze
properties of these soliton solutions. For single dark-dark
solitons, we show that the degrees of ``darkness" in their two
components are different in general. When two dark-dark solitons
collide with each other, we show that energies in the two components
of each soliton transmit through completely. This contrasts with
collisions of bright-bright solitons in these same equations, where
polarization rotation, power transfer and soliton reflection can
occur \cite{Manakov,Wang2010,KLTA}. Thus dark-dark solitons are much
more robust than bright-bright solitons with regard to collisions. In
the case of mixed focusing and defocusing nonlinearities, an
interesting phenomenon is that two dark-dark solitons can form a
stationary bound state. This is the first report of
dark-dark-soliton bound states in integrable systems. However, three
or more dark-dark solitons cannot form bound states, as we will show
in this paper.

We should mention that this KP-hierarchy reduction for deriving
soliton solutions in integrable systems was first developed by the
Kyoto school in the 1970s \cite{DKJM}. So far, this
method has been applied to derive bright solitons in many equations
such as NLS, modified KdV, Davey-Stewartson equations \cite{T,Date,O}.
This method has also
been applied to derive $N$-dark solitons in the defocusing NLS
equation \cite{O}. But this reduction for dark-dark solitons in the
generally coupled NLS equations is more subtle and has never been
done before. In this paper, we will derive the general $N$-dark-dark
solitons by this KP-hierarchy reduction, combined with an in-depth use
of determinant identities. Compared to the inverse scattering
transform method \cite{Ablowitz1} and the Hirota method \cite{RL},
our treatment is much cleaner, and the solution formulae are more
elegant and general. Thus, the KP-reduction method has a distinct
advantage in the derivation of dark-soliton solutions.
\section{The $N$-dark-dark solitons}
The generally coupled integrable NLS equations we investigate in
this paper are
\begin{equation}
\begin{array}{ll}
iu_t=u_{xx}+(\delta|u|^2+\epsilon|v|^2)u, \\
iv_t=v_{xx}+(\delta|u|^2+\epsilon|v|^2)v,
\end{array} \label{(1.1)}
\end{equation}
where $\delta$ and $\epsilon$ are real coefficients. This system is
integrable \cite{Manakov,Zakharov1982,Wang2010}. By rescaling $u$ and
$v$, the nonlinear coefficients $\delta$ and $\epsilon$ can be
normalized to $\pm 1$ without loss of generality. When
$\epsilon=\delta=1$, this system is the focusing Manakov model which
supports bright-bright solitons \cite{Manakov}. When
$\epsilon=\delta=-1$, this system is the defocusing Manakov model
which supports bright-dark and dark-dark solitons
\cite{Sheppard_Kivshar_1997,RL,Ablowitz1}. When $\epsilon$ and
$\delta$ have opposite signs, the system exhibits mixed focusing and
defocusing nonlinearities. In this case, these equations support
bright-bright solitons \cite{Wang2010,KLTA}, bright-dark solitons
\cite{VKL_2008}, and dark-dark solitons (as we will see below).
In this section, we derive the general formulae for $N$-dark-dark
solitons in the integrable coupled NLS system \eqref{(1.1)}. The
basic idea is to treat Eq. \eqref{(1.1)} as a reduction of the KP
hierarchy. Then dark solitons in Eq. \eqref{(1.1)} can be obtained
from solutions of the KP hierarchy under this reduction. For this
purpose, let us first review Gram-type solutions for equations in
the KP hierarchy \cite{Hirota,MOS,OHTI}.
\begin{Lemma}
Consider the following equations in the KP hierarchy \cite{JM,DJM}
\begin{equation}
\begin{array}{ll}
(\frac{1}{2}D_xD_r-1)\tau(k)\cdot\tau(k)=-\tau(k+1)\tau(k-1),\\
(D_x^2-D_y+2aD_x)\tau(k+1)\cdot\tau(k)=0,
\end{array} \label{Lemma1-1}
\end{equation}
where $D$ is the Hirota derivative defined by
\begin{equation}
D_x^mD_y^n \hspace{0.03cm} f(x, y)\cdot g(x, y)\equiv
\left(\frac{\partial}{\partial x}-\frac{\partial}{\partial
x'}\right)^m \left(\frac{\partial}{\partial
y}-\frac{\partial}{\partial y'}\right)^n f(x, y) \hspace{0.06cm}
g(x', y')|_{x=x', \hspace{0.08cm} y=y'} \hspace{0.08cm},
\end{equation}
$a$ is a complex constant, $k$ is an integer, and $\tau(k)$ is a
function of three independent variables $(x,y,r)$. The Gram
determinant solution $\tau(k)$ of the above equations is given by
$$
\tau(k)=\det_{1\le i,j\le N}\Big(m_{ij}(k)\Big)
=\Big|m_{ij}(k)\Big|_{1\le i,j\le N},
$$
where the matrix element $m_{ij}(k)$ satisfies
\begin{equation}
\begin{array}{ll}
\partial_xm_{ij}(k)=\varphi_i(k)\psi_j(k),\\
\partial_ym_{ij}(k)=(\partial_x\varphi_i(k))\psi_j(k)
-\varphi_i(k)(\partial_x\psi_j(k)),\\
\partial_rm_{ij}(k)=-\varphi_i(k-1)\psi_j(k+1),\\
m_{ij}(k+1)=m_{ij}(k)+\varphi_i(k)\psi_j(k+1),
\end{array} \label{Lemma1-2}
\end{equation}\\
and $\varphi_i(k)$ and $\psi_j(k)$ are arbitrary functions
satisfying
\begin{equation}
\begin{array}{ll}
\partial_y\varphi_i(k)=\partial_x^2\varphi_i(k),\\
\varphi_i(k+1)=(\partial_x-a)\varphi_i(k),\\
\partial_y\psi_j(k)=-\partial_x^2\psi_j(k),\\
\psi_j(k-1)=-(\partial_x+a)\psi_j(k).
\end{array} \label{Lemma1-3}
\end{equation}
\end{Lemma}
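As a side illustration (added here, not part of the original derivation), the Hirota derivative defined above is easy to implement symbolically, and basic identities such as $D_x f\cdot f=0$ and $D_x^m\hspace{0.05cm} e^{p_1x}\cdot e^{p_2x}=(p_1-p_2)^m e^{(p_1+p_2)x}$ can then be checked directly. The following SymPy sketch assumes nothing beyond the definition:

```python
import sympy as sp

# Hirota bilinear derivative D_x^m D_y^n f(x,y).g(x,y): apply
# (d/dx - d/dx')^m (d/dy - d/dy')^n to f(x,y) g(x',y'), then set x'=x, y'=y
x, y, xp, yp = sp.symbols("x y x' y'")

def hirota_D(f, g, m=0, n=0):
    expr = f * g.subs({x: xp, y: yp})
    for _ in range(m):
        expr = sp.diff(expr, x) - sp.diff(expr, xp)
    for _ in range(n):
        expr = sp.diff(expr, y) - sp.diff(expr, yp)
    return sp.expand(expr.subs({xp: x, yp: y}))

p1, p2 = sp.symbols("p1 p2")
f, g = sp.exp(p1*x), sp.exp(p2*x)

# odd-order Hirota derivatives of f.f vanish identically
assert hirota_D(f, f, m=1) == 0
# D_x^m e^{p1 x}.e^{p2 x} = (p1 - p2)^m e^{(p1+p2) x}
assert sp.simplify(hirota_D(f, g, m=2) - (p1 - p2)**2*sp.exp((p1 + p2)*x)) == 0
```

The same routine can be used to spot-check the bilinear identities appearing throughout this section.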
Before proving this lemma, several remarks are in order. The first
equation in \eqref{Lemma1-1} is the bilinear equation for the
two-dimensional Toda lattice (see p.984 of \cite{JM} and p.4130 of
\cite{DJM}), and the second equation in \eqref{Lemma1-1} is the
lowest-degree bilinear equation in the 1st modified KP hierarchy
(see p.996 of \cite{JM}). Since the two-dimensional Toda lattice
hierarchy and modified KP hierarchies are closely related to the
(single-component) KP hierarchy, all these hierarchies will be
called the KP hierarchy in this paper. Regarding the parameter $a$
in the second equation in \eqref{Lemma1-1}, it corresponds to the
wave-number shift $k_0$ in \cite{JM} [see Eq. (10.3) there]. The
bilinear equation with this parameter was not explicitly written
down in \cite{JM}, but can be found in \cite{DJM} [see Eq. (N-3)
there]. This parameter can be formally removed by the Galilean
transformation for $y$ in \eqref{Lemma1-1}. But for our purpose, it
proves to be important to keep this parameter, as it will pave the
way for the introduction of another similar parameter $b$ in Lemma 2
later. In that case, $a$ and $b$ cannot be removed simultaneously
by a Galilean transformation, and they are essential for the
construction of non-degenerate dark-dark solitons in the generally
coupled NLS system (\ref{(1.1)}).
\noindent{\bf \em Proof of Lemma 1.} By using \eqref{Lemma1-2} and
\eqref{Lemma1-3}, we can verify that the derivatives and shifts of
the $\tau$ function are expressed by the bordered determinants as
follows
\[ \begin{array}{ll}
\partial_x\tau(k)=\left|\begin {array}{cc}
m_{ij}(k) &\varphi_i(k) \cr -\psi_j(k) &0 \end {array}\right|,\\
\partial_x^2\tau(k)=\left|\begin {array}{cc}
m_{ij}(k) &\partial_x\varphi_i(k) \cr -\psi_j(k) &0 \end
{array}\right| +\left|\begin {array}{cc} m_{ij}(k) &\varphi_i(k) \cr
-\partial_x\psi_j(k) &0 \end {array}\right|,\\
\partial_y\tau(k)=\left|\begin {array}{cc}
m_{ij}(k) &\partial_x\varphi_i(k) \cr -\psi_j(k) &0 \end
{array}\right| -\left|\begin {array}{cc} m_{ij}(k) &\varphi_i(k) \cr
-\partial_x\psi_j(k) &0 \end {array}\right|,\\
\partial_r\tau(k)=\left|\begin {array}{cc}
m_{ij}(k) &\varphi_i(k-1) \cr \psi_j(k+1) &0 \end {array}\right|,\\
(\partial_x\partial_r-1)\tau(k)=\left|\begin {array}{ccc} m_{ij}(k)
&\varphi_i(k-1) &\varphi_i(k) \cr \psi_j(k+1) &0 &-1 \cr
-\psi_j(k) &-1 &0 \end {array}\right|,\\
\tau(k+1)=\left|\begin {array}{cc}
m_{ij}(k) &\varphi_i(k) \cr -\psi_j(k+1) &1 \end {array}\right|,\\
\tau(k-1)=\left|\begin {array}{cc} m_{ij}(k) &\varphi_i(k-1) \cr
\psi_j(k) &1 \end {array}\right|,\\
(\partial_x+a)\tau(k+1)=\left|\begin {array}{cc}
m_{ij}(k) &\partial_x\varphi_i(k) \cr -\psi_j(k+1) &a \end {array}\right|,\\
(\partial_x+a)^2\tau(k+1)=\left|\begin {array}{cc} m_{ij}(k)
&\partial_x^2\varphi_i(k) \cr -\psi_j(k+1) &a^2 \end {array}\right|
+\left|\begin {array}{ccc} m_{ij}(k) &\partial_x\varphi_i(k)
&\varphi_i(k) \cr -\psi_j(k+1) &a &1 \cr -\psi_j(k) &0
&0\end {array}\right|,\\
(\partial_y+a^2)\tau(k+1)=\left|\begin {array}{cc} m_{ij}(k)
&\partial_x^2\varphi_i(k) \cr -\psi_j(k+1) &a^2 \end {array}\right|
-\left|\begin {array}{ccc} m_{ij}(k) &\partial_x\varphi_i(k)
&\varphi_i(k) \cr -\psi_j(k+1) &a &1 \cr -\psi_j(k) &0 &0\end
{array}\right|.
\end{array}
\]
Here the bordered determinants are defined as
\[\left|\begin {array}{cc} m_{ij} &\varphi_i \cr -\psi_j &0
\end {array}\right| \equiv
\left| \begin{array}{ccccc} m_{11} & m_{12} & \cdots & m_{1N} &
\varphi_1 \\
m_{21} & m_{22} & \cdots & m_{2N} & \varphi_2 \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
m_{N1} & m_{N2} & \cdots & m_{NN} & \varphi_N \\
-\psi_1 & -\psi_2 & \cdots & -\psi_N & 0 \end{array}\right|,
\]
and so on. By using the Jacobi formula of determinants
\cite{Hirota}, we obtain
the bilinear equations \eqref{Lemma1-1} from the above expressions. $\Box$\\
Using Lemma 1, we can obtain solutions to a larger class of
equations in the KP hierarchy below.
\begin{Lemma}
Consider the following equations in the KP hierarchy,
\begin{equation}
\begin{array}{ll}
(\frac{1}{2}D_xD_r-1)\tau(k,l)\cdot\tau(k,l)=-\tau(k+1,l)\tau(k-1,l),\\
(D_x^2-D_y+2aD_x)\tau(k+1,l)\cdot\tau(k,l)=0,\\
(\frac{1}{2}D_xD_s-1)\tau(k,l)\cdot\tau(k,l)=-\tau(k,l+1)\tau(k,l-1),\\
(D_x^2-D_y+2bD_x)\tau(k,l+1)\cdot\tau(k,l)=0,
\end{array} \label{(2.1)}
\end{equation}
where $a, b$ are complex constants, $k, l$ are integers, and
$\tau(k,l)$ is a function of four independent variables $(x,y,r,s)$.
The solution $\tau(k,l)$ to these equations is given by the Gram
determinant
\begin{equation}
\tau(k,l)=\det_{1\le i,j\le N}\Big(m_{ij}(k,l)\Big)
=\Big|m_{ij}(k,l)\Big|_{1\le i,j\le N}, \label{(2.2)}
\end{equation}
where the matrix element $m_{ij}(k,l)$ is defined by
\begin{equation}
\begin{array}{ll}
m_{ij}(k,l)=c_{ij}+\frac{1}{p_i+q_j}\varphi_i(k,l)\psi_j(k,l),\\
\varphi_i(k,l)=(p_i-a)^k(p_i-b)^le^{\xi_i},\\
\psi_j(k,l)=(-\frac{1}{q_j+a})^k(-\frac{1}{q_j+b})^le^{\eta_j},
\end{array} \label{(2.3)}
\end{equation}
with
\begin{equation}
\begin{array}{ll}
\xi_i=p_ix+p_i^2y+\frac{1}{p_i-a}r+\frac{1}{p_i-b}s+\xi_{i0},\\
\eta_j=q_jx-q_j^2y+\frac{1}{q_j+a}r+\frac{1}{q_j+b}s+\eta_{j0},
\end{array} \label{(2.4)}
\end{equation}
and $c_{ij}$, $p_i$, $q_j$, $\xi_{i0}$, $\eta_{j0}$ are complex
constants.
\end{Lemma}
Note that the system (\ref{(2.1)}) extends the previous system
(\ref{Lemma1-1}) by adding a new pair of independent variables
$(s,l)$ alongside the previous pair $(r,k)$.
\noindent {\bf \em Proof.} It is easy to see that functions
$m_{ij}(k,l)$, $\varphi_i(k,l)$ and $\psi_j(k,l)$ satisfy the
following differential and difference rules,
\begin{equation}
\begin{array}{ll}
\partial_xm_{ij}(k,l)=\varphi_i(k,l)\psi_j(k,l),\\
\partial_ym_{ij}(k,l)=(\partial_x\varphi_i(k,l))\psi_j(k,l)
-\varphi_i(k,l)(\partial_x\psi_j(k,l)),\\
\partial_rm_{ij}(k,l)=-\varphi_i(k-1,l)\psi_j(k+1,l),\\
m_{ij}(k+1,l)=m_{ij}(k,l)+\varphi_i(k,l)\psi_j(k+1,l),\\
\partial_y\varphi_i(k,l)=\partial_x^2\varphi_i(k,l),\\
\varphi_i(k+1,l)=(\partial_x-a)\varphi_i(k,l),\\
\partial_y\psi_j(k,l)=-\partial_x^2\psi_j(k,l),\\
\psi_j(k-1,l)=-(\partial_x+a)\psi_j(k,l).
\end{array} \label{(2.5)}
\end{equation}\\
Then from Lemma 1, we can verify the first two bilinear equations in
\eqref{(2.1)}. The other two equations in \eqref{(2.1)} are obtained
directly by replacing $a$, $k$, $r$ with $b$, $l$, $s$ in Eq.
(\ref{Lemma1-1}) of Lemma 1. $\Box$
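As an independent sanity check on Lemma 2 (an illustration added here, not part of the proof), the first two bilinear equations of (\ref{(2.1)}) can be verified directly for the simplest case $N=1$, $c_{11}=1$, using the explicit $\varphi_1$, $\psi_1$ of (\ref{(2.3)})--(\ref{(2.4)}); the rational parameter values below are arbitrary generic choices:

```python
import sympy as sp

x, y, r, s = sp.symbols("x y r s")
# generic rational parameter values for a spot check
p, q = sp.Rational(13, 10), sp.Rational(4, 5)
a, b = sp.Rational(2, 5), -sp.Rational(1, 2)

xi = p*x + p**2*y + r/(p - a) + s/(p - b)
eta = q*x - q**2*y + r/(q + a) + s/(q + b)

def tau(k, l):
    # N = 1 Gram "determinant": tau = c_11 + phi(k,l) psi(k,l)/(p+q), c_11 = 1
    phi = (p - a)**k * (p - b)**l * sp.exp(xi)
    psi = (-1/(q + a))**k * (-1/(q + b))**l * sp.exp(eta)
    return 1 + phi*psi/(p + q)

t0, t1 = tau(0, 0), tau(1, 0)

# (1/2) D_x D_r tau.tau - tau^2 + tau(k+1) tau(k-1), written out for k = 0
toda = (sp.diff(t0, x, r)*t0 - sp.diff(t0, x)*sp.diff(t0, r)
        - t0**2 + tau(1, 0)*tau(-1, 0))

# (D_x^2 - D_y + 2 a D_x) tau(k+1).tau(k), written out for k = 0
mkp = (sp.diff(t1, x, 2)*t0 - 2*sp.diff(t1, x)*sp.diff(t0, x)
       + t1*sp.diff(t0, x, 2) - sp.diff(t1, y)*t0 + t1*sp.diff(t0, y)
       + 2*a*(sp.diff(t1, x)*t0 - t1*sp.diff(t0, x)))

# both bilinear residues vanish at generic sample points
for pt in ({x: 0.2, y: 0.1, r: -0.3, s: 0.4},
           {x: -1.0, y: 0.5, r: 0.2, s: -0.7}):
    assert abs(complex(toda.subs(pt).evalf())) < 1e-10
    assert abs(complex(mkp.subs(pt).evalf())) < 1e-10
```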
Next, we perform a reduction of the bilinear system (\ref{(2.1)}) in
the KP hierarchy. Solutions to the reduced bilinear equations are
given below.
\begin{Theorem}
Assume that $f$ is a real function, and $g,h$ are complex functions,
of the real variables $x$ and $t$. Then the following
bilinear equations
\begin{equation}
\begin{array}{ll} (D_x^2+\delta|\mu|^2+\epsilon|\nu|^2)f\cdot f=\delta|\mu|^2
g\bar g+\epsilon|\nu|^2 h\bar h,\\
(iD_t+D_x^2+2icD_x)g\cdot f=0,\\
(iD_t+D_x^2+2idD_x)h\cdot f=0,
\end{array} \label{(2.6)}
\end{equation}
where $\delta$, $\epsilon$, $c$ and $d$ are real constants, $\mu$
and $\nu$ are complex constants, and the overbar `$\ \bar{}\ $'
represents complex conjugate, admit the following solutions,
\begin{equation}
\begin{array}{ll}
f=\Big|\delta_{ij}+\frac{1}{p_i+\bar p_j}
e^{\xi_i+\bar{\xi}_j}\Big|,\\
g=\Big|\delta_{ij}+\frac{1}{p_i+\bar p_j}(-\frac{p_i-ic}{\bar
p_j+ic}) e^{\xi_i+\bar\xi_j}\Big|,\\
h=\Big|\delta_{ij}+\frac{1}{p_i+\bar p_j}(-\frac{p_i-id}{\bar
p_j+id}) e^{\xi_i+\bar\xi_j}\Big|,
\end{array} \label{(2.7)}
\end{equation}
where
\begin{equation}
\xi_j=p_jx+ip_j^2t+\xi_{j0},
\end{equation}
$p_j$ are complex constants satisfying the constraint
\begin{equation} \label{constraint1}
\frac{\delta|\mu|^2}{|p_j-ic|^2}+\frac{\epsilon|\nu|^2}{|p_j-id|^2}=-2,
\end{equation}
and $\xi_{j0}$ are arbitrary complex constants.
\end{Theorem}
\noindent {\bf \em Proof.} In Lemma 2, if one assumes $x,r,s$ are
real, $y,a,b$ are pure imaginary, $k,l$ are integers, and $ q_j=\bar
p_j, \eta_{j0}=\bar\xi_{j0}, c_{ji}=\bar c_{ij},$ then we have
\begin{equation} \label{conjugation_constraint_0}
\eta_j=\bar\xi_j, \quad m_{ji}(k,l)=\overline{m_{ij}(-k,-l)}, \quad
\tau(k,l)=\overline{\tau(-k,-l)}.
\end{equation}
Therefore, defining
\begin{equation}
c_{ij}=\delta_{ij}, \quad {\rm Re}(p_i)>0, \quad f=\tau(0,0), \quad
g=\tau(1,0), \quad h=\tau(0,1),
\end{equation}
where $\delta_{ij}$ is 1 when $i=j$ and 0 otherwise, then
\begin{equation}
f=\Big|m_{ij}(0,0)\Big| =\Big|\delta_{ij}+\frac{1}{p_i+\bar
p_j}e^{\xi_i+\bar\xi_j}\Big|, \qquad \bar g=\tau(-1,0), \qquad \bar
h=\tau(0,-1),
\end{equation}
and
\begin{equation}
\begin{array}{ll}
(\frac{1}{2}D_xD_r-1)f\cdot f=-g\bar g,\\
(\frac{1}{2}D_xD_s-1)f\cdot f=-h\bar h,\\
(D_x^2-D_y+2aD_x)g\cdot f=0,\\
(D_x^2-D_y+2bD_x)h\cdot f=0.
\end{array} \label{(2.8)}
\end{equation}
Under the above reduction, the solution (\ref{(2.2)}) for $\tau$ can
be rewritten as
\begin{eqnarray}
\tau(k,l) & = & \Big|\delta_{ij}+\frac{1}{p_i+\bar p_j}
(-\frac{p_i-a}{\bar p_j+a})^k(-\frac{p_i-b}{\bar p_j+b})^l
e^{\xi_i+\bar\xi_j}\Big| \nonumber
\\
&=&e^{\xi_1+\cdots+\xi_N+\bar\xi_1+\cdots+\bar\xi_N}
\Big|\delta_{ij}e^{-\xi_i-\bar\xi_i}+\frac{1}{p_i+\bar p_j}
(-\frac{p_i-a}{\bar p_j+a})^k(-\frac{p_i-b}{\bar p_j+b})^l\Big|,
\label{(2.800)}
\end{eqnarray}
with
$$
\xi_i+\bar\xi_i=(p_i+\bar p_i)x+(p_i^2-\bar p_i^2)y
+(\frac{1}{p_i-a}+\frac{1}{\bar p_i+a})r
+(\frac{1}{p_i-b}+\frac{1}{\bar p_i+b})s+\xi_{i0}+\bar\xi_{i0}.
$$
Thus if $p_i$ satisfies the constraint
\begin{equation}
\delta|\mu|^2(\frac{1}{p_i-a}+\frac{1}{\bar p_i+a})
+\epsilon|\nu|^2(\frac{1}{p_i-b}+\frac{1}{\bar p_i+b}) =-2(p_i+\bar
p_i), \label{(2.80)}
\end{equation}
i.e.,
\begin{equation} \label{constraint2}
\frac{\delta|\mu|^2}{(p_i-a)(\bar
p_i+a)}+\frac{\epsilon|\nu|^2}{(p_i-b)(\bar p_i+b)}=-2,
\end{equation}
then from Eqs. (\ref{(2.800)})-(\ref{(2.80)}), one gets
\begin{equation}
(\delta|\mu|^2\partial_r+\epsilon|\nu|^2\partial_s)\tau(k,l)=-2\partial_x\tau(k,l).
\label{(2.81)}
\end{equation}
Using $f=\tau(0,0)$, this equation gives
\begin{equation}
\delta|\mu|^2f_r+\epsilon|\nu|^2f_s=-2f_x. \label{(2.82)}
\end{equation}
Differentiation of (\ref{(2.82)}) with respect to $x$ gives
\begin{equation}
\delta|\mu|^2f_{xr}+\epsilon|\nu|^2f_{xs}=-2f_{xx}. \label{(2.83)}
\end{equation}
The first two equations of \eqref{(2.8)} are just
\begin{equation}
f_{xr}f-f_xf_r-f^2=-g\bar g,\label{(2.84)}
\end{equation}
\begin{equation}
f_{xs}f-f_xf_s-f^2=-h\bar h\label{(2.85)}.
\end{equation}
So from Eqs. \eqref{(2.82)}-\eqref{(2.85)}, we have
\begin{equation}
2f_{xx}f-2f_x^2+(\delta|\mu|^2+\epsilon|\nu|^2)f^2=\delta|\mu|^2g\bar
g+\epsilon|\nu|^2h\bar h\label{(2.86)},
\end{equation}
which is just
\begin{equation}
(D_x^2+\delta|\mu|^2+\epsilon|\nu|^2)f\cdot f=\delta|\mu|^2 g\bar
g+\epsilon|\nu|^2 h\bar h.
\end{equation}
Finally, denoting
\begin{equation}
y=it, \quad a=ic, \quad b=id,
\end{equation}
with $t$, $c$ and $d$ real, the second and third equations in
(\ref{(2.6)}) and (\ref{(2.7)}) are obtained directly from Lemma 2,
and the constraint (\ref{constraint1}) is obtained directly from Eq.
(\ref{constraint2}). Theorem 1 is then proved. $\Box$
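As a numerical spot check of Theorem 1 (illustrative only; all parameter values below are arbitrary), one can take $N=1$, pick mixed nonlinearities $\delta=-1$, $\epsilon=1$, solve the constraint (\ref{constraint1}) for ${\rm Re}(p_1)$, and verify that the solutions (\ref{(2.7)}) satisfy the three bilinear equations (\ref{(2.6)}) at a generic point:

```python
import numpy as np
import sympy as sp

x, t = sp.symbols("x t", real=True)

# arbitrary mixed-nonlinearity parameters (delta*eps < 0)
delta, eps = -1.0, 1.0
mu, nu = 1.0, 1.0
c, d = 0.0, 1.0
b1 = 0.2                                   # chosen Im(p_1)

# the constraint delta|mu|^2/|p-ic|^2 + eps|nu|^2/|p-id|^2 = -2
# becomes a quadratic in X = Re(p_1)^2 after clearing denominators
P, Q = (b1 - c)**2, (b1 - d)**2
roots = np.roots([2.0,
                  2*P + 2*Q + delta*mu**2 + eps*nu**2,
                  2*P*Q + delta*mu**2*Q + eps*nu**2*P])
X = max(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)
p = complex(np.sqrt(X), b1)
pb = p.conjugate()
assert abs(delta*mu**2/abs(p - 1j*c)**2 + eps*nu**2/abs(p - 1j*d)**2 + 2) < 1e-10

# N = 1 functions of (2.7), with xi_1 = p x + i p^2 t and xi_10 = 0
E = sp.exp((p + pb)*x + sp.I*(p**2 - pb**2)*t) / (p + pb)
f = 1 + E
g, gb = 1 - (p - 1j*c)/(pb + 1j*c)*E, 1 - (pb + 1j*c)/(p - 1j*c)*E
h, hb = 1 - (p - 1j*d)/(pb + 1j*d)*E, 1 - (pb + 1j*d)/(p - 1j*d)*E

# the three bilinear equations (2.6), written out in ordinary derivatives
eq1 = (2*(sp.diff(f, x, 2)*f - sp.diff(f, x)**2)
       + (delta*mu**2 + eps*nu**2)*f**2 - delta*mu**2*g*gb - eps*nu**2*h*hb)
eq2 = (sp.I*(sp.diff(g, t)*f - g*sp.diff(f, t)) + sp.diff(g, x, 2)*f
       - 2*sp.diff(g, x)*sp.diff(f, x) + g*sp.diff(f, x, 2)
       + 2*sp.I*c*(sp.diff(g, x)*f - g*sp.diff(f, x)))
eq3 = (sp.I*(sp.diff(h, t)*f - h*sp.diff(f, t)) + sp.diff(h, x, 2)*f
       - 2*sp.diff(h, x)*sp.diff(f, x) + h*sp.diff(f, x, 2)
       + 2*sp.I*d*(sp.diff(h, x)*f - h*sp.diff(f, x)))

pt = {x: 0.37, t: 0.21}
for eq in (eq1, eq2, eq3):
    assert abs(complex(eq.subs(pt).evalf())) < 1e-9
```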

Now we transform the bilinear equations \eqref{(2.6)} in Theorem 1
into a nonlinear form. To do so, we set
\begin{equation}
\tilde{u}=\mu\frac{g}{f},\qquad \tilde{v}=\nu\frac{h}{f},
\label{(2.11)}
\end{equation}
where $f,g,h$ satisfy Eq. (\ref{(2.6)}). From (\ref{(2.11)}), we
have
$$
(D_tg\cdot f)/f^2=\tilde{u}_t/\mu, \quad (D_th\cdot
f)/f^2=\tilde{v}_t/\nu,
$$
$$
(D_xg\cdot f)/f^2=\tilde{u}_x/\mu, \quad (D_xh\cdot
f)/f^2=\tilde{v}_x/\nu,
$$
\begin{equation}
(D_x^2g\cdot f)/f^2=\tilde{u}_{xx}/\mu+(\tilde{u}/\mu)(D_x^2f\cdot
f)/f^2,\label{(2.12)}
\end{equation}
$$(D_x^2h\cdot
f)/f^2=\tilde{v}_{xx}/\nu+(\tilde{v}/\nu)(D_x^2f\cdot f)/f^2.
$$
The first bilinear equation in \eqref{(2.6)} is
$$
D_x^2f\cdot f=-(\delta|\mu|^2+\epsilon|\nu|^2)f^2+\delta|\mu|^2
g\bar g+\epsilon|\nu|^2 h\bar h
$$
which can be further rewritten as
\begin{equation}
(D_x^2f\cdot
f)/f^2=-(\delta|\mu|^2+\epsilon|\nu|^2)+\delta|\tilde{u}|^2
+\epsilon|\tilde{v}|^2. \label{(2.13)}
\end{equation}
The second bilinear equation in \eqref{(2.6)} is just
\begin{equation}
\frac{(D_x^2+iD_t+2icD_x)g\cdot f}{f^2}=0. \label{(2.14)}
\end{equation}
Substituting \eqref{(2.12)} into \eqref{(2.14)}, we have
\begin{equation}
i\tilde{u}_t+\tilde{u}_{xx}+\tilde{u}(D_x^2f\cdot
f)/f^2+2ic\tilde{u}_x=0. \label{(2.15)}
\end{equation}
In the same way, from the third bilinear equation in \eqref{(2.6)}
we have
\begin{equation}
i\tilde{v}_t+\tilde{v}_{xx}+\tilde{v}(D_x^2f\cdot
f)/f^2+2id\tilde{v}_x=0. \label{(2.16)}
\end{equation}
Substituting \eqref{(2.13)} into \eqref{(2.15)} and
\eqref{(2.16)}, we get
\begin{equation}
\begin{array}{ll}
i\tilde{u}_t+2ic\tilde{u}_x+\tilde{u}_{xx}+\tilde{u}[-\delta|\mu|^2-\epsilon|\nu|^2+\delta|\tilde{u}|^2+\epsilon|\tilde{v}|^2]=0, \\
i\tilde{v}_t+2id\tilde{v}_x+\tilde{v}_{xx}+\tilde{v}[-\delta|\mu|^2-\epsilon|\nu|^2+\delta|\tilde{u}|^2+\epsilon|\tilde{v}|^2]=0.
\end{array} \label{(2.17)}
\end{equation}
Letting
\[\tilde{u}=ue^{i[(-\delta|\mu|^2-\epsilon|\nu|^2+c^2)t-cx]}, \]
\[\tilde{v}=ve^{i[(-\delta|\mu|^2-\epsilon|\nu|^2+d^2)t-dx]},\]
Eqs. \eqref{(2.17)} are then transformed into
\begin{equation}
\begin{array}{ll}
iu_t+u_{xx}+(\delta|u|^2+\epsilon|v|^2)u=0, \\
iv_t+v_{xx}+(\delta|u|^2+\epsilon|v|^2)v=0,
\end{array} \label{(2.18)}
\end{equation}
which have $N$-dark-dark soliton solutions of the form
\begin{equation}\label{(2.19)}
\begin{array}{ll}
u=\mu e^{i[cx+(\delta|\mu|^2+\epsilon|\nu|^2-c^2)t]}\frac{g_N}{f_N},
\\
v=\nu e^{i[dx+(\delta|\mu|^2+\epsilon|\nu|^2-d^2)t]}\frac{h_N}{f_N},
\end{array}
\end{equation}
with $f_N,g_N,h_N$ given by \eqref{(2.7)}. Finally, taking
$t\rightarrow-t$, Eqs. \eqref{(2.18)} become the generally coupled
NLS equations \eqref{(1.1)}. Hence we immediately have the following
theorem for solutions of Eq. (\ref{(1.1)}).
\begin{Theorem}
The $N$-dark-dark soliton solutions for the generally coupled NLS
equations \eqref{(1.1)} are
\begin{equation}
\begin{array}{ll}u=\mu e^{i[cx-(\delta|\mu|^2+\epsilon|\nu|^2-c^2)t]}\frac{G_N}{F_N},\\
v=\nu e^{i[dx-(\delta|\mu|^2+\epsilon|\nu|^2-d^2)t]}\frac{H_N}{F_N},
\end{array} \label{(2.9)}
\end{equation}
where
\begin{equation}
\begin{array}{ll}
F_N=\Big|\delta_{ij}+\frac{1}{p_i+\bar p_j}
e^{\theta_i+\bar\theta_j}\Big|_{N\times N},\\
G_N=\Big|\delta_{ij}-\frac{1}{p_i+\bar p_j}\frac{p_i-ic}{\bar
p_j+ic} e^{\theta_i+\bar\theta_j}\Big|_{N\times N},\\
H_N=\Big|\delta_{ij}-\frac{1}{p_i+\bar p_j}\frac{p_i-id}{\bar
p_j+id} e^{\theta_i+\bar\theta_j}\Big|_{N\times N},
\end{array} \label{(2.10)}
\end{equation}
\[\theta_j=p_jx-ip_j^2t+\theta_{j0},\]
$c,d$ are real constants, $\mu,\nu,p_j,\theta_{j0}$ are complex
constants, and these constants satisfy the following constraints
\begin{equation} \label{theorem2_constraints}
\frac{\delta|\mu|^2}{|p_j-ic|^2}+\frac{\epsilon|\nu|^2}{|p_j-id|^2}=-2,
~~~~~j=1,2,\cdots,N.
\end{equation}
\end{Theorem}
These solitons are dark-dark solitons, i.e., both $u$ and $v$
components are dark solitons, because it is easy to verify that
\begin{equation} \label{bc}
\begin{array}{ll}
u \to \mu e^{i[cx-(\delta|\mu|^2+\epsilon|\nu|^2-c^2)t +
\phi_{\pm}]}, \\
v \to \nu e^{i[dx-(\delta|\mu|^2+\epsilon|\nu|^2-d^2)t+\chi_\pm]},
\end{array} \quad x\to \pm\infty,
\end{equation}
where $\phi_\pm$ and $\chi_\pm$ are phase constants. Thus the $u$
and $v$ solutions approach constant amplitudes $|\mu|$ and $|\nu|$
at large distances. When $\delta>0$ and $\epsilon>0$, which
correspond to self-focusing nonlinearities for both $u$ and $v$
components in Eqs. (\ref{(1.1)}), the constraints
(\ref{theorem2_constraints}) cannot be satisfied, thus dark-dark
solitons cannot exist, as expected. When $\delta<0$ and
$\epsilon<0$, which correspond to self-defocusing nonlinearities for
both $u$ and $v$ components, dark-dark solitons can exist, as shown
in Ref. \cite{Ablowitz1}.
that, when $\delta$ and $\epsilon$ have opposite signs, which
correspond to mixed focusing and defocusing nonlinearities in the
$u$ and $v$ equations, the constraints (\ref{theorem2_constraints})
can still be satisfied, hence dark-dark solitons can still exist.
This phenomenon will be demonstrated in more detail in the next
section. Interestingly, when $\delta$ and $\epsilon$ have opposite
signs, Eqs. (\ref{(1.1)}) also admit bright-bright solitons
\cite{Wang2010}. Thus Eqs. (\ref{(1.1)}) with opposite signs of
$\delta$ and $\epsilon$ are the rare equations which support both
dark-dark and bright-bright solitons.
The parameter constraints (\ref{theorem2_constraints}) can be solved
explicitly, so that solutions (\ref{(2.9)}) can be expressed in
terms of free parameters only. Let us write
\[p_j=a_j+ib_j, \]
where $a_j$ and $b_j$ are the real and imaginary parts of $p_j$.
Then Eq. (\ref{theorem2_constraints}) becomes
\begin{equation} \label{aj_equation}
\frac{\delta|\mu|^2}{a_j^2+(b_j-c)^2}+\frac{\epsilon|\nu|^2}{a_j^2+(b_j-d)^2}=-2.
\end{equation}
After clearing denominators, this equation becomes a quadratic in
$a_j^2$, whose roots give $a_j^2$ explicitly as
\begin{eqnarray} \label{aj_formula}
a_j^2 & = & \frac{1}{2}\left\{-\left[(b_j-c)^2+(b_j-d)^2+\frac{1}{2}\delta |\mu|^2+\frac{1}{2}\epsilon
|\nu|^2\right] \right. \nonumber \\
& & \hspace{0.5cm} \left. \pm \sqrt{\left[(b_j-c)^2-(b_j-d)^2+\frac{1}{2}\delta |\mu|^2-\frac{1}{2}\epsilon
|\nu|^2\right]^2+\delta\epsilon |\mu|^2|\nu|^2} \right\}.
\end{eqnarray}
Here $\delta, \epsilon, \mu, \nu, c, d$ and $b_j$ are all free
parameters as long as the quantity under the square root of
(\ref{aj_formula}) as well as the whole right hand side of
(\ref{aj_formula}) are non-negative. If $a_j\le 0$, we will see that
the soliton solution (\ref{(2.9)}) would be singular. Thus in this
paper, we will always take $a_j>0$ to avoid this singularity.
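As a concrete illustration (added here; all numerical values are arbitrary), the following NumPy sketch evaluates the single ($N=1$) dark-dark soliton of (\ref{(2.9)})--(\ref{(2.10)}) for mixed nonlinearities $\delta=-1$, $\epsilon=1$, with $a_1$ computed from (\ref{aj_formula}), and confirms that $|u|$ and $|v|$ approach the backgrounds $|\mu|$, $|\nu|$ as $x\to\pm\infty$ while dipping to different depths at the soliton center, i.e., the two components have different degrees of ``darkness":

```python
import numpy as np

delta, eps = -1.0, 1.0        # mixed focusing/defocusing nonlinearities
mu, nu = 1.0, 1.0             # background amplitudes |mu|, |nu|
c, d = 0.0, 1.0
b1 = 0.2                      # b_1 = Im(p_1), a free parameter

# a_1^2 from the explicit root formula above (the '+' branch)
P, Q = (b1 - c)**2, (b1 - d)**2
A = P + Q + 0.5*delta*mu**2 + 0.5*eps*nu**2
B = P - Q + 0.5*delta*mu**2 - 0.5*eps*nu**2
a1 = np.sqrt(0.5*(-A + np.sqrt(B**2 + delta*eps*mu**2*nu**2)))
p = a1 + 1j*b1

# the parameter constraint of Theorem 2 holds
assert abs(delta*mu**2/abs(p - 1j*c)**2 + eps*nu**2/abs(p - 1j*d)**2 + 2) < 1e-12

x = np.linspace(-25.0, 25.0, 4001)
t = 0.0
theta = p*x - 1j*p**2*t                                  # theta_10 = 0
E = np.exp(theta + np.conj(theta)) / (p + np.conj(p))    # real-valued
F = 1 + E
G = 1 - (p - 1j*c)/(np.conj(p) + 1j*c)*E
H = 1 - (p - 1j*d)/(np.conj(p) + 1j*d)*E

u = mu*np.exp(1j*(c*x - (delta*mu**2 + eps*nu**2 - c**2)*t))*G/F
v = nu*np.exp(1j*(d*x - (delta*mu**2 + eps*nu**2 - d**2)*t))*H/F

# constant background amplitudes at both infinities ...
assert abs(np.abs(u[0]) - mu) < 1e-6 and abs(np.abs(u[-1]) - mu) < 1e-6
assert abs(np.abs(v[0]) - nu) < 1e-6 and abs(np.abs(v[-1]) - nu) < 1e-6
# ... with intensity dips of different depths in the two components
assert np.abs(u).min() < 0.5 < np.abs(v).min() < 1.0
```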

We would like to make four remarks here. The first remark is on the
above derivation of dark solitons through KP-hierarchy reduction.
This derivation is non-trivial. To better understand it, we can
split it into two parts. One part is the reduction of the bilinear
equations (\ref{(2.6)}) of the generally coupled NLS equations
\eqref{(1.1)} from the KP-hierarchy equations (\ref{(2.1)}). The
other part is the reduction of the soliton solutions to the bilinear
equations (\ref{(2.6)}) from the $\tau$-solutions (\ref{(2.2)}) of
the KP-hierarchy equations (\ref{(2.1)}). In the first part, when we
impose on the $\tau$-functions the conjugation constraint [see
(\ref{conjugation_constraint_0})]
\begin{equation} \label{conjugation_constraint}
\tau(k,l)=\overline{\tau(-k,-l)},
\end{equation}
and the linear constraint [see (\ref{(2.81)})]
\begin{equation} \label{linear_constraint}
(\delta|\mu|^2\partial_r+\epsilon|\nu|^2\partial_s)\tau(k,l)=-2\partial_x\tau(k,l),
\end{equation}
and set
\[ f=\tau(0,0), \quad g=\tau(1,0), \quad h=\tau(0, 1), \quad y=it,
\quad a=ic, \quad b=id,
\]
with $t, c, d$ being real, then one can readily verify that the
KP-hierarchy equations (\ref{(2.1)}) reduce to the bilinear
equations (\ref{(2.6)}) of the coupled NLS equations \eqref{(1.1)}.
In the second part, in order for the $\tau$-functions (\ref{(2.2)})
to satisfy the conjugation constraint
(\ref{conjugation_constraint}), it is sufficient to require [see
(\ref{conjugation_constraint_0})]
\begin{equation} \label{mji_ij}
m_{ji}(k,l)=\overline{m_{ij}(-k,-l)}.
\end{equation}
A sufficient condition for (\ref{mji_ij}) to hold is that
\begin{equation} \label{cqeta_cond}
c_{ij}=\delta_{ij}, \quad q_j=\bar p_j, \quad \eta_j=\bar{\xi}_j,
\quad \eta_{j0}=\bar\xi_{j0},
\end{equation}
$x,r,s$ are real, and $y,a,b$ are pure imaginary. These conditions
are the same ones we imposed at the beginning of the proof of
Theorem 1. Under these conditions, the $\tau$-solutions
(\ref{(2.2)}) of the KP-hierarchy equations (\ref{(2.1)}) then
reduce to the solutions (\ref{(2.7)}) for the bilinear equations
(\ref{(2.6)}) of the coupled NLS equations \eqref{(1.1)}. In order
for the $\tau$-functions (\ref{(2.2)}) to satisfy the linear
constraint (\ref{linear_constraint}), by rewriting these
$\tau$-functions as (\ref{(2.800)}) and inserting them into this
linear constraint, we then get the parameter constraint
(\ref{constraint2}), which is equivalent to the parameter constraint
(\ref{constraint1}) in Theorem 1. Splitting the earlier derivation of
dark solitons into these two parts helps to clarify it and make it
easier to follow.

The second remark is on the solution form (\ref{(2.9)}) of dark
solitons in the generally coupled NLS equations \eqref{(1.1)}. It is
known that the NLS equation of focusing type is a reduction of the
two-component KP hierarchy (see \cite{JM}, page 966 and 999), and
the NLS equation of defocusing type is a reduction of the
single-component KP hierarchy \cite{O}. It is also known that
solutions to the single-component KP hierarchy can be expressed as
single Wronskians \cite{Hirota,Freeman_Nimmo_1983,Nimmo_1989}, and
solutions to the two-component KP hierarchy can be expressed as
double Wronskians \cite{Freeman_1990}. Thus $N$-bright solitons in
the focusing NLS equation can be expressed as double Wronskians
\cite{Nimmo,Freeman}, and $N$-dark solitons in the defocusing NLS
equation can be expressed as single Wronskians \cite{O}. These
Wronskian solutions can also be expressed as Gram-type determinants
\cite{Hirota,MOS,Freeman_Nimmo_1983,Na,Ni}. For the vector
generalization \eqref{(1.1)} of the NLS equation, in order to obtain
its $N$-bright-soliton solutions, one should increase the number of
components, and take \eqref{(1.1)} as a reduction of the
three-component KP hierarchy. Thus $N$-bright solitons in
\eqref{(1.1)} can be expressed as three-component Wronskians (or the
corresponding Gram-type determinants \cite{Hirota}). But to obtain
$N$-dark solitons in Eqs. \eqref{(1.1)}, one should instead add extra
copies of independent variables, $(r,k)$ and $(s,l)$, in the
single-component KP hierarchy [see Eqs. (\ref{(2.1)})]; thus
$N$-dark solitons in Eqs. \eqref{(1.1)} can still be expressed as a
single Wronskian (or the corresponding Gram determinant), as we have
done above.

The third remark concerns a comparison of the KP-hierarchy
reduction method and the inverse scattering method for deriving
dark-soliton solutions. As is well known, the inverse scattering
method is another way to derive soliton solutions. For bright
solitons, the inverse scattering method (or its modern
Riemann-Hilbert formulation) is a powerful way to derive such
solutions (see \cite{Zakharov_book, Shchesnovich_Yang} for
instance). Recently, bright-bright $N$-solitons in a very general
class of integrable coupled NLS equations were easily derived by
this method \cite{Wang2010}, and Eqs. \eqref{(1.1)} are special
cases of such general equations. But for dark solitons, the inverse
scattering method is more difficult due to non-vanishing boundary
conditions, which create branch cuts and other related intricacies
in the scattering process \cite{Faddeev_book}. In \cite{Ablowitz1},
the inverse scattering transform analysis was developed for the
defocusing Manakov equations [$\delta=\epsilon=-1$ in \eqref{(1.1)}]
with non-vanishing boundary conditions. But in their analysis, the
boundary conditions (\ref{bc}) were taken such that $c=d$ [see their
equation (2.3)] (actually $c=d=0$ was taken there, but the case of
$c=d\ne 0$ can be reduced to the case of $c=d=0$ through a Galilean
transformation). When $c=d$, one can see from our general formula
(\ref{(2.9)}) that $u$ and $v$ are simply proportional to each
other, thus their inverse scattering analysis could only obtain
degenerate dark-dark solitons which are reducible to scalar dark
solitons in the defocusing NLS equation. In order to derive the more
general dark-dark solitons (\ref{(2.9)}) with $c\ne d$, the inverse
scattering method would be even more complicated than that in
\cite{Ablowitz1}. Comparatively, the KP-hierarchy reduction method
we used above is free of these difficulties, and is thus a simpler
method for deriving dark-soliton solutions.
Our last remark is on dark solitons in an even more general coupled
NLS system
\begin{equation} \label{more_general}
\begin{array}{ll}
iu_{t}=u_{xx}+ \left( \delta |u|^{2}+\epsilon |v|^{2}+\gamma u\bar
v+\bar \gamma \bar uv \right)u, \\ iv_{t}=v_{xx}+ \left(
\delta|u|^{2}+\epsilon|v|^{2}+\gamma u\bar v+\bar \gamma \bar uv
\right)v,
\end{array}
\end{equation}
where $\delta, \epsilon$ are real constants as in \eqref{(1.1)}, and
$\gamma$ is a complex constant. If $\gamma=0$, (\ref{more_general})
reduces to \eqref{(1.1)}. This more general coupled NLS system
(\ref{more_general}) is also integrable. Its Lax pair as well as
$N$-bright-bright solitons are given in \cite{Wang2010}. To explore
dark-dark solitons in this system, we look for solutions with the
following large-distance asymptotics [as in (\ref{(2.9)})]
\begin{equation}
\left\{
\begin{array}{ll}u \to \mu e^{i[cx-\omega t]}, \\
v\to \nu e^{i[dx-\kappa t]},
\end{array} \right. \quad x \to -\infty, \label{large_x}
\end{equation}
where $\mu, \nu$ are non-zero complex constants, and $c, d, \omega,
\kappa$ are real constants. Inserting this asymptotic solution into
(\ref{more_general}), we see that due to the $\gamma$-terms, Eqs.
(\ref{more_general}) can hold only if $c=d$, and $\omega=\kappa$.
Based on the previous solutions (\ref{(2.9)}), this would imply that
the $u$ and $v$ components of dark-dark solitons in the general
system (\ref{more_general}) must be proportional to each other, thus
are equivalent to scalar dark solitons in the defocusing NLS
equation. Apart from these trivial dark-dark solitons, Eqs.
(\ref{more_general}) do not admit other dark-dark solitons of the
form (\ref{large_x}) when $\gamma\ne 0$. This is a dramatic
difference between the cases of $\gamma=0$ and $\gamma\ne 0$ in Eqs.
(\ref{more_general}). Whether the general system
(\ref{more_general}) admits dark-dark solitons with background
asymptotics different from (\ref{large_x}) is still unclear.
\section{Dynamics of dark solitons}
In what follows, we investigate the dynamics of single-dark-soliton
and two-dark-soliton solutions in the generally coupled NLS
equations \eqref{(1.1)}. In the analysis of these solutions,
$\delta$ and $\epsilon$ will be treated as arbitrary parameters. In
the illustrations of solutions in the figures, we will pick
\begin{equation}
\delta=1, \quad \epsilon=-1,
\end{equation}
which correspond to mixed focusing and defocusing nonlinearities.
The reason for this choice is that dark solitons under such mixed
nonlinearities have never been studied before. We will show that
under these mixed nonlinearities, some novel phenomena (such as the
existence of two-dark-soliton bound states) arise. Soliton
dynamics under other $\delta$ and $\epsilon$ values, such as in the
defocusing Manakov equations where $\delta=\epsilon=-1$, will also
be briefly discussed when appropriate.
\subsection{Single dark solitons}
In order to get single dark solitons in Eqs. \eqref{(1.1)}, we set
$N = 1$ in the formula \eqref{(2.9)}. After simple algebra, these
single dark solitons can be written as
\begin{equation}
\begin{array}{ll}
u=\frac{1}{2}\mu e^{i[cx-(\delta|\mu|^2+\epsilon|\nu|^2-c^2)t]}
\left[1+y_1+(y_1-1)\tanh(\frac{\theta_1+\bar\theta_1+\rho_1}{2})\right],
\end{array} \label{(3.1)}
\end{equation}
\begin{equation}
\begin{array}{ll}
v=\frac{1}{2}\nu e^{i[dx-(\delta|\mu|^2+\epsilon|\nu|^2-d^2)t]}
\left[1+z_1+(z_1-1)\tanh(\frac{\theta_1+\bar\theta_1+\rho_1}{2})\right],
\end{array} \label{(3.2)}
\end{equation}
where
\[\theta_1=p_1x-ip_1^2t+\theta_{10}, \quad e^{\rho_1}=1/(p_1+\bar p_1),\]
\[y_1=(ic-p_1)/(ic+\bar p_1), \quad z_1=(id-p_1)/(id+\bar p_1),\]
and $\mu,\nu,p_1,\theta_{10}$ are complex constants satisfying
\begin{equation} \label{constraint_1soliton}
\frac{\delta|\mu|^2}{|p_1-ic|^2}+\frac{\epsilon|\nu|^2}{|p_1-id|^2}=-2,
\end{equation}
or equivalently, $a_1$ is given by formula (\ref{aj_formula}), where
$p_1=a_1+ib_1$. This soliton would be singular if $p_1+\bar p_1\le
0$, i.e., $a_1\le 0$. Thus we will require $a_1>0$ below to avoid
singular solutions. It is easy to see that the intensity functions
$|u|$ and $|v|$ of these dark solitons move at velocity
$-2\hspace{0.06cm}b_1$. In addition, they approach constant
amplitudes $|\mu|$ and $|\nu|$ respectively as $x\to \pm \infty$. As
$x$ varies from $-\infty$ to $+\infty$, the phases of the $u$ and
$v$ components acquire shifts in the amount of $2\phi_1$ and
$2\chi_1$, where
\begin{equation}
y_1=e^{2i\phi_1}, \quad z_1=e^{2i\chi_1},
\end{equation}
i.e., $2\phi_1$ and $2\chi_1$ are the phases of constants $y_1$ and
$z_1$ respectively. Without loss of generality, we restrict $-\pi <
2\phi_1, 2\chi_1 \le \pi$, i.e., $-\pi/2< \phi_1, \chi_1\le \pi/2$.
At the center of the soliton where $\theta_1+\bar\theta_1+\rho_1=0$,
intensities of the two components are
\begin{equation}
|u|_{center}=|\mu|\cos\phi_1, \quad |v|_{center}=|\nu|\cos\chi_1.
\end{equation}
These center intensities are lower than the background intensities
$|\mu|$ and $|\nu|$, thus these solitons are dark solitons. Notice
that the center intensities of the $u$ and $v$ solutions are
controlled by their respective phase shifts $2\phi_1$ and $2\chi_1$,
thus these phase shifts dictate how ``dark'' the center is. This
general single dark-dark soliton (\ref{(3.1)})-(\ref{(3.2)}) has
been derived for the defocusing Manakov model before by the Hirota
method in \cite{RL,Sheppard_Kivshar_1997}. In particular, a
parameter constraint similar to (\ref{constraint_1soliton}) was
given in \cite{RL}. If $c=d$, then $y_1=z_1$, hence $\phi_1=\chi_1$.
In this case, the $u$ and $v$ components are proportional to each
other, and have the same degrees of darkness at the center. This
soliton is equivalent to a scalar dark soliton in the defocusing NLS
equation, thus is degenerate. It is noted that the single-dark-dark
soliton derived in \cite{Ablowitz1} [see Eq. (5.8) there]
corresponds to this degenerate type of dark-dark solitons. To
illustrate, we take
\begin{equation} \label{parameter_S1}
\mu=1, \quad \nu=2, \quad c=d=0, \quad p_1=\sqrt{1.5}, \quad
\theta_{10}=0,
\end{equation}
which satisfy the constraint (\ref{constraint_1soliton}).
Intensities of the solution (\ref{(3.1)})-(\ref{(3.2)}) are
displayed in Fig. \ref{fig_1soliton}(a). This soliton is stationary,
and both its $u$ and $v$ components are black (with zero intensity)
at the soliton center.
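As an illustrative numerical sanity check (not part of the original derivation; the variable names below are our own bookkeeping), the parameter set (\ref{parameter_S1}) can be substituted directly into the constraint (\ref{constraint_1soliton}) with $\delta=1$, $\epsilon=-1$:

```python
import math

# Check that (parameter_S1) satisfies the one-soliton constraint
#   delta*|mu|^2/|p1 - i*c|^2 + epsilon*|nu|^2/|p1 - i*d|^2 = -2.
delta, eps = 1.0, -1.0
mu, nu, c, d = 1.0, 2.0, 0.0, 0.0
p1 = complex(math.sqrt(1.5), 0.0)

lhs = (delta * abs(mu)**2 / abs(p1 - 1j*c)**2
       + eps * abs(nu)**2 / abs(p1 - 1j*d)**2)
print(lhs)  # close to -2, as required by the constraint
```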
Non-degenerate single dark-dark solitons in Eqs. \eqref{(1.1)},
however, are such that $c\ne d$. The $u$ and $v$ components in these
solitons are not proportional to each other, thus are not reducible
to scalar single dark solitons in the defocusing NLS equation. Since
$c\ne d$, $y_1\ne z_1$, thus $\phi_1\ne \chi_1$. This means that the
$u$ and $v$ components in these non-degenerate solitons have
different degrees of darkness at the soliton center. To illustrate, we take
\begin{equation} \label{parameter_S2}
\mu=1, \quad \nu=2, \quad c=0, \quad d=0.5, \quad p_1=1.0679,
\end{equation}
which also satisfy the constraint (\ref{constraint_1soliton}).
Here the $p_1$ value is obtained from the formula (\ref{aj_formula})
with the plus sign and $b_1=0$. Intensities of this soliton are
displayed in Fig. \ref{fig_1soliton}(b). This soliton is also
stationary. At its center, the $u$ component is black, but the $v$
component is only gray. This type of non-degenerate single dark-dark
solitons in the coupled NLS system (\ref{(1.1)}) has not been
obtained before (to our knowledge).
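The different degrees of darkness quoted above can likewise be checked numerically. The following sketch evaluates $y_1$, $z_1$, and the center intensities $|\mu|\cos\phi_1$, $|\nu|\cos\chi_1$ from the formulas given earlier (all variable names are ours):

```python
import cmath, math

# For (parameter_S2): y1 = (ic - p1)/(ic + conj(p1)),
#                     z1 = (id - p1)/(id + conj(p1));
# 2*phi1 and 2*chi1 are the phases of y1 and z1, and the center
# intensities are |mu|*cos(phi1) and |nu|*cos(chi1).
mu, nu, c, d = 1.0, 2.0, 0.0, 0.5
p1 = complex(1.0679, 0.0)

y1 = (1j*c - p1) / (1j*c + p1.conjugate())
z1 = (1j*d - p1) / (1j*d + p1.conjugate())
phi1 = cmath.phase(y1) / 2.0
chi1 = cmath.phase(z1) / 2.0

u_center = abs(mu) * math.cos(phi1)  # essentially 0: u is black at the center
v_center = abs(nu) * math.cos(chi1)  # positive: v is only gray at the center
print(u_center, v_center)
```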
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.75\textwidth]{fig1.eps}
\end{center}
\caption{Single dark-dark solitons in Eqs. (\ref{(1.1)}) with
$\delta=1, \epsilon=-1$: (a) a degenerate soliton with parameters
(\ref{parameter_S1}); (b) a non-degenerate soliton with parameters
(\ref{parameter_S2}).} \label{fig_1soliton}
\end{figure}
In the defocusing Manakov equations where $\delta=\epsilon=-1$,
their degenerate and non-degenerate single dark solitons
qualitatively resemble those shown in Fig. \ref{fig_1soliton}, and
are thus not shown.
\subsection{Collision of two dark solitons}
Two-dark-soliton solutions in system \eqref{(1.1)} correspond to
$N=2$ in the general formula \eqref{(2.9)}. In this case, we have
\begin{equation}
u=\mu e^{i[cx-(\delta|\mu|^2+\epsilon|\nu|^2-c^2)t]}\frac{G_2(x,
t)}{F_2(x, t)}, \label{(3.5)}
\end{equation}
\begin{equation}
v=\nu e^{i[dx-(\delta|\mu|^2+\epsilon|\nu|^2-d^2)t]}\frac{H_2(x,
t)}{F_2(x, t)}, \label{(3.6)}
\end{equation}
where
\begin{eqnarray}
F_2(x,t) & = &
1+e^{\theta_1+\bar\theta_1+\rho_1}+e^{\theta_2+\bar\theta_2+\rho_2}
+r e^{\theta_1+\bar\theta_1+\theta_2+\bar\theta_2+\rho_1+\rho_2},
\\
G_2(x,t) & = &
1+y_1e^{\theta_1+\bar\theta_1+\rho_1}+y_2e^{\theta_2+\bar\theta_2+\rho_2}+
ry_1y_2e^{\theta_1+\bar\theta_1+\theta_2+\bar\theta_2+\rho_1+\rho_2},
\\
H_2(x,t) & = &
1+z_1e^{\theta_1+\bar\theta_1+\rho_1}+z_2e^{\theta_2+\bar\theta_2+\rho_2}+
rz_1z_2e^{\theta_1+\bar\theta_1+\theta_2+\bar\theta_2+\rho_1+\rho_2},
\end{eqnarray}
\begin{equation} \label{rhoj}
\theta_j=p_j x-ip_j^2t+\theta_{j0}, \quad e^{\rho_j} =1/(p_j+\bar
p_j),
\end{equation}
\begin{equation} \label{yjzj}
y_j=(ic-p_j)/(ic+\bar p_j), \quad z_j=(id-p_j)/(id+\bar p_j),
\end{equation}
\begin{equation} \label{rdef}
r=1-(p_1+\bar p_1)(p_2+\bar p_2)/|p_1+\bar p_2|^2,
\end{equation}
and $\mu, \nu, p_1, p_2, \theta_{10},\theta_{20}$ are complex
constants satisfying the constraint (\ref{theorem2_constraints})
with $j=1, 2$, or equivalently, $a_j$ is given by the formula
(\ref{aj_formula}), where $p_j=a_j+ib_j$.
In generic cases where $\mbox{Im}(p_1) \ne \mbox{Im}(p_2)$, these
solutions describe the collision of two dark-dark solitons. To
demonstrate these collisions, we take parameters
\begin{equation} \label{ic_fig1}
\mu=1, \hspace{0.15cm} \nu=2, \hspace{0.15cm} c=0, \hspace{0.15cm}
d=0.5, \hspace{0.15cm} p_1=0.8426 - 0.2i, \hspace{0.15cm} p_2=1.1801
+ 0.2i, \hspace{0.15cm} \theta_{10}=\theta_{20}=0.
\end{equation}
Here the real parts of $p_1$ and $p_2$ are obtained from the formula
(\ref{aj_formula}) with the plus sign. The corresponding two
dark-dark soliton solution \eqref{(3.5)}-\eqref{(3.6)} is shown in
Fig. \ref{fig_collision}. We can see that after collision, the two
dark solitons pass through each other without any change of shape
and velocity in either of the two components. Hence the degrees of
darkness in each soliton do not change after collision, which means
that there is no energy transfer from one component to the other
inside each soliton after collision. In addition, there is no energy
transfer from one soliton to the other after collision either. This
complete transmission of the dark solitons' energy in both
components after collision occurs not only for $\delta=1$ and
$\epsilon=-1$ as in Fig. \ref{fig_collision}, but also for all other
$\delta$ and $\epsilon$ values. Thus it is a common phenomenon of
the generally coupled NLS system (\ref{(1.1)}). For instance, it
also happens in the defocusing Manakov equations where
$\delta=\epsilon=-1$.
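The collision parameters (\ref{ic_fig1}) can be checked in the same spirit, assuming the constraint (\ref{theorem2_constraints}) takes the same form as (\ref{constraint_1soliton}) with $p_1$ replaced by $p_j$ (a sketch with our own variable names):

```python
# Each p_j of (ic_fig1) should make the left-hand side of the constraint
# equal to -2, up to the rounding of the quoted digits.
delta, eps = 1.0, -1.0
mu, nu, c, d = 1.0, 2.0, 0.0, 0.5
residuals = []
for p in (complex(0.8426, -0.2), complex(1.1801, 0.2)):
    lhs = (delta * abs(mu)**2 / abs(p - 1j*c)**2
           + eps * abs(nu)**2 / abs(p - 1j*d)**2)
    residuals.append(lhs + 2.0)
print(residuals)  # both residuals are small
```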
This complete transmission of dark-dark solitons' energy in both
components is a remarkable phenomenon, because it is in stark
contrast with collisions of bright-bright solitons in the same
coupled NLS system (\ref{(1.1)}). Indeed, for bright-bright solitons
in the focusing Manakov system (with $\delta=\epsilon=1$),
polarization rotations take place after collision, hence energy has
transferred from one component to the other in each soliton
\cite{Manakov}. For bright-bright solitons in the more general
coupled NLS system (\ref{more_general}) (such as $\delta=1$ and
$\epsilon=-1$ above), energy can also transfer from one soliton to
another after collision \cite{Wang2010}. Thus collisions between
bright-bright solitons and between dark-dark solitons in the coupled
NLS system (\ref{(1.1)}) are distinctly different.
The reason for this complete energy transmission in all components
in dark-soliton collisions is that the intensity profile of each
dark-dark soliton is completely characterized by the background
parameters $\mu, \nu, c, d$ and the soliton parameter $p_j$ [see
Eqs. (\ref{(3.1)})-(\ref{(3.2)})]. These background parameters are
the same for both colliding solitons, and clearly do not change
before and after collision. The soliton parameter $p_j$ corresponds
to the spectral discrete eigenvalue in the inverse scattering
transform method, and is a constant of motion throughout collision.
Consequently, the intensity profile of each dark-dark soliton (in
both $u$ and $v$ components) cannot change before and after
collision. This property indicates that dark solitons are more
robust than bright solitons with regard to collision. The positions
of dark solitons do shift after collision though, as can be seen
clearly in Fig. \ref{fig_collision}. This position shift is always
toward the soliton's moving direction, which is the same as
collisions of bright solitons in the NLS equation
\cite{Zakharov_Shabat}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.8\textwidth]{fig2ab.eps}
\vspace{0.6cm}
\includegraphics[width=0.8\textwidth]{fig2cd.eps}
\end{center}
\caption{Collision of two dark-dark solitons in Eqs. \eqref{(1.1)}
with $\delta=1, \epsilon=-1$ and parameters (\ref{ic_fig1}). The
upper row shows the $(x, t)$ evolution, and the lower row shows the
intensity profiles before and after collision: $t=-10$ (solid);
$t=10$ (dashed). } \label{fig_collision}
\end{figure}
\section{Dark-dark-soliton bound states}
In studies of dark solitons, multi-dark-soliton bound states are an
interesting subject. In the defocusing NLS equation, two dark
solitons repel each other, thus cannot form a bound state
\cite{Kivshar_1993}. In the defocusing Manakov model,
multi-bright-dark-soliton bound states were reported in
\cite{Sheppard_Kivshar_1997}. Some of those bound states are
stationary, while the others are not. So far,
multi-dark-dark-soliton bound states have never been reported in
integrable systems. In a non-integrable system, namely, the
second-harmonic-generation (SHG) system, two-dark-dark-soliton bound
states do exist, as was reported in \cite{Buryak_Kivshar_1995}. In
this system, single dark-dark solitons with non-monotonic tails
exist. When two such dark-dark solitons weakly overlap with each
other and interact, their non-monotonic tails create local minima in
the effective interaction potential, hence the two dark-dark
solitons can form stationary bound states. In addition, some of
these bound states are stable \cite{Buryak_Kivshar_1995}.
In this section, we show that in the generally coupled NLS system
(\ref{(1.1)}), when both $\delta$ and $\epsilon$ are negative, i.e.,
all nonlinearities are defocusing (the defocusing Manakov model),
multi-dark-dark-soliton bound states cannot exist. But for mixed
focusing and defocusing nonlinearities, where $\delta$ and
$\epsilon$ have opposite signs, two-dark-dark-soliton bound states
do exist and are stationary. To our knowledge, this is the first
report of multi-dark-dark-soliton bound states in integrable
systems. Properties and physical origins of these stationary bound
states in the mixed-nonlinearity model (\ref{(1.1)}) are quite
different from the stationary bright-dark-soliton bound states in
the defocusing Manakov model \cite{Sheppard_Kivshar_1997} and
stationary dark-dark-soliton bound states in the non-integrable SHG
model \cite{Buryak_Kivshar_1995}, as we will explain later in this
section.
To obtain dark-dark-soliton bound states, the two dark solitons in
the solution (\ref{(3.5)})-(\ref{(3.6)}) should have the same
velocity, i.e., $\mbox{Im}(p_1)=\mbox{Im}(p_2)$ (or $b_1=b_2$), so
that the two constituent dark solitons can stay together for all
times. In order for this to happen, two different (positive) values
$a_1$ and $a_2$ from Eq. (\ref{aj_equation}) must exist for the same
values of $b_1=b_2$. When $\delta$ and $\epsilon$ are both negative,
where the nonlinearities are all defocusing, this is not possible.
The reason is that when $\delta<0$ and $\epsilon<0$, the function on
the left side of Eq. (\ref{aj_equation}) is an increasing function
of $a_j^2$. Thus for this function to reach the value level of $-2$
on the right side of Eq. (\ref{aj_equation}), there is at most one
$a_j^2$ solution, hence at most one positive $a_j$ value. This means
that when nonlinearities are all defocusing (i.e., the defocusing
Manakov model with $\delta=\epsilon=-1$), there are no
multi-dark-dark-soliton bound states. However, when $\delta$ and
$\epsilon$ have opposite signs, where focusing and defocusing
nonlinearities are mixed, the function on the left side of Eq.
(\ref{aj_equation}) may become non-monotone in $a_j^2$, hence it
becomes possible for Eq. (\ref{aj_equation}) to admit two different
positive values $a_1$ and $a_2$ for the same values of $b_1=b_2$
(see below). In the formula (\ref{aj_formula}), these different
$a_1$ and $a_2$ values correspond to the plus and minus signs
respectively. In this case, two-dark-dark-soliton bound states would
exist, and this is a new phenomenon in the coupled NLS equations
(\ref{(1.1)}) under mixed focusing and defocusing nonlinearities.
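This two-root scenario can be illustrated concretely. Writing $p_j=a_j+ib_j$ and assuming Eq. (\ref{aj_equation}) is simply the constraint (\ref{constraint_1soliton}) expressed through $a_j$ and $b_j$, the $b_j=0$ case with $\mu=1$, $\nu=2$, $c=0$, $d=0.5$, $\delta=1$, $\epsilon=-1$ reduces to a quadratic in $s=a_j^2$ (a sketch; all symbols below are our own bookkeeping):

```python
import math

# Constraint in terms of s = a^2 (with b = 0):
#   delta*mu^2/(s + (b-c)^2) + eps*nu^2/(s + (b-d)^2) = -2.
# Clearing denominators gives the quadratic A*s^2 + B*s + C = 0.
delta, eps = 1.0, -1.0
mu2, nu2, c, d, b = 1.0, 4.0, 0.0, 0.5, 0.0
q1, q2 = (b - c)**2, (b - d)**2

A = 2.0
B = 2.0*(q1 + q2) + delta*mu2 + eps*nu2
C = 2.0*q1*q2 + delta*mu2*q2 + eps*nu2*q1

disc = B*B - 4.0*A*C
s_roots = [(-B + math.sqrt(disc)) / (2*A), (-B - math.sqrt(disc)) / (2*A)]
a_roots = sorted(math.sqrt(s) for s in s_roots if s > 0)
print(a_roots)  # two distinct positive roots -> a bound state is possible
```

The two roots reproduce the values $p_1=1.0679$ and $p_2=0.3311$ used in (\ref{ic_bound_state}) below.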
Physically, these results on bound states in Eqs. (\ref{(1.1)}) can
be heuristically understood as follows. We know that in the scalar
defocusing NLS equation, two dark solitons repel each other. In the
coupled NLS system (\ref{(1.1)}), if $\delta$ and $\epsilon$ are
both negative, all nonlinearities are defocusing, hence two
dark-dark solitons still repel each other, and no bound states can
be formed. However, if $\delta$ and $\epsilon$ have opposite signs,
parts of the nonlinear terms are focusing, and the other parts
defocusing. While the defocusing terms repel two dark solitons, the
focusing terms do just the opposite, which is to attract two dark
solitons. Thus, when these repulsive and attractive forces balance
each other, two dark-dark solitons then can form a stationary bound
state. This physical mechanism for the existence of
dark-dark-soliton bound states is quite different from that in the
SHG model \cite{Buryak_Kivshar_1995} (see earlier text).
Next we examine these two-dark-dark-soliton bound states in more
detail. Through a Galilean transformation (i.e., in the coordinate
system moving with the common soliton velocity), this common
velocity can be reduced to zero. Hence $p_1$ and $p_2$ become real
parameters. In this case, it is easy to see that this bound state
becomes
\begin{equation}
\begin{array}{ll}u=\mu e^{i[cx-(\delta|\mu|^2+\epsilon|\nu|^2-c^2)t]}\frac{G_2(x)}{F_2(x)},\\
v=\nu
e^{i[dx-(\delta|\mu|^2+\epsilon|\nu|^2-d^2)t]}\frac{H_2(x)}{F_2(x)},
\end{array}
\end{equation}
where
\begin{eqnarray}
F_2(x) & = & 1+e^{2p_1x+2\alpha_1+\rho_1}+e^{2p_2x+2\alpha_2+\rho_2}
+r e^{2p_1x+2p_2x+2\alpha_1+2\alpha_2+\rho_1+\rho_2},
\\
G_2(x) & = &
1+y_1e^{2p_1x+2\alpha_1+\rho_1}+y_2e^{2p_2x+2\alpha_2+\rho_2}+
ry_1y_2e^{2p_1x+2p_2x+2\alpha_1+2\alpha_2+\rho_1+\rho_2},
\\
H_2(x) & = &
1+z_1e^{2p_1x+2\alpha_1+\rho_1}+z_2e^{2p_2x+2\alpha_2+\rho_2}+
rz_1z_2e^{2p_1x+2p_2x+2\alpha_1+2\alpha_2+\rho_1+\rho_2},
\end{eqnarray}
$\alpha_j=\mbox{Re}(\theta_{j0})$, and $\rho_j, y_j, z_j, r$ are as
given in Eqs. (\ref{rhoj})-(\ref{rdef}). Notice that functions $F_2,
G_2$ and $H_2$ are time-independent, thus this bound state is
actually stationary. This is analogous to certain
bright-dark-soliton bound states in the defocusing Manakov model
\cite{Sheppard_Kivshar_1997} and dark-dark-soliton bound states in
the SHG model \cite{Buryak_Kivshar_1995}. An important feature of
these present bound states is that, as $x$ moves from $-\infty$ to
$+\infty$, these states acquire non-zero phase shifts. Indeed, it is
easy to see from the above solution formula that the phase shifts of
the $u$ and $v$ components are
\begin{equation} \label{phase_shift}
u\mbox{-phase shift}=2\phi_1+2\phi_2, \quad v\mbox{-phase
shift}=2\chi_1+2\chi_2,
\end{equation}
where $2\phi_j$ and $2\chi_j$ are the phases of $y_j$ and $z_j$
respectively. In other words, the total phase shifts of the bound
state are equal to the sum of the individual phase shifts of the two
constituent dark solitons, which are non-zero in general. This
contrasts with stationary bright-dark-soliton bound states in the
defocusing Manakov model \cite{Sheppard_Kivshar_1997} and
dark-dark-soliton bound states in the SHG model
\cite{Buryak_Kivshar_1995}, where phase shifts of the dark
components across the soliton are all zero.
To demonstrate these stationary two-dark-soliton bound states, we
take parameters
\begin{equation} \label{ic_bound_state}
\mu=1, \hspace{0.15cm} \nu=2, \hspace{0.15cm} c=0, \hspace{0.15cm}
d=0.5, \hspace{0.15cm} p_1=1.0679, \hspace{0.15cm} p_2=0.3311,
\hspace{0.15cm} \alpha_1=\alpha_2=0.
\end{equation}
Here $p_1$ and $p_2$ are obtained from the formula
(\ref{aj_formula}) with $b_1=b_2=0$. The corresponding bound state
is displayed in Fig. \ref{fig_bound_state} (upper row). In this
bound state, the $u$-component is double-dipped (i.e., has a double
hole), signifying this is a two-soliton bound state, while the
$v$-component is single-dipped. By adjusting $\alpha_1$ and
$\alpha_2$ values, we can obtain bound states where both $u$ and $v$
components are double-dipped. For instance, when we take
$\alpha_1=-\alpha_2=2$ instead of zero in (\ref{ic_bound_state}), we
get such a bound state which is shown in the lower row of Fig.
\ref{fig_bound_state}. For both bound states, the total phase shift
of the $u$-component is zero, and the total phase shift of the
$v$-component is 3.4355, as can be calculated from formula
(\ref{phase_shift}).
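The quoted total phase shifts can be reproduced numerically from the definitions of $y_j$ and $z_j$ (a sketch; for the real $p_j$ used here, $\bar p_j=p_j$):

```python
import cmath, math

# Total phase shifts (\ref{phase_shift}) for the bound state (ic_bound_state):
# sum the phases of y_j = (ic - p_j)/(ic + p_j) and z_j = (id - p_j)/(id + p_j).
c, d = 0.0, 0.5
p_values = [1.0679, 0.3311]

u_shift = sum(cmath.phase((1j*c - p) / (1j*c + p)) for p in p_values)
v_shift = sum(cmath.phase((1j*d - p) / (1j*d + p)) for p in p_values)
# The u-shift vanishes modulo 2*pi; the v-shift is about 3.44.
print(u_shift % (2*math.pi), v_shift)
```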
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.85\textwidth]{fig3ab.eps}
\vspace{0.5cm}
\includegraphics[width=0.85\textwidth]{fig3cd.eps}
\end{center}
\caption{Two examples of two-dark-soliton bound states in Eqs.
\eqref{(1.1)} with $\delta=1, \epsilon=-1$. Upper row: bound state
with parameters (\ref{ic_bound_state}); lower row: bound state with
parameters (\ref{ic_bound_state}) except that
$\alpha_1=-\alpha_2=2$. The left two panels show $(x, t)$ evolution
of $|u|$ and $|v|$ components, and the right panel shows the
stationary intensity profiles. } \label{fig_bound_state}
\end{figure}
From the above analytical formulae and Fig. \ref{fig_bound_state},
we can see that these stationary two-dark-soliton bound states have
six free parameters, $\mu, \nu, c, d, \alpha_1$ and $\alpha_2$ [the
positive $p_1$ and $p_2$ values are determined from formula
(\ref{aj_formula}) by setting $b_1=b_2=0$]. The first four
parameters characterize the background intensities and phase
gradients, while the parameters $\alpha_1$ and $\alpha_2$ control
the positions of the two dark solitons.
The above dark-soliton bound states in the integrable coupled NLS
system (\ref{(1.1)}) possess properties which are very different
from those in dark-soliton bound states in the non-integrable SHG
model \cite{Buryak_Kivshar_1995}. First, the bound states in the SHG
model are formed by identical dark solitons (see Fig. 4 in
\cite{Buryak_Kivshar_1995}), but the bound states in the coupled NLS
system are formed by different dark solitons since $p_1\ne p_2$ (see
lower row of Fig. \ref{fig_bound_state} in this paper). Second, the
bound states in the SHG model have zero phase shifts from one end to
the other, but the phase shifts of bound states in the coupled NLS
system (\ref{(1.1)}) are non-zero in general [see Eqs.
(\ref{phase_shift})]. Thirdly, the bound states in the SHG model
have non-zero binding energy, hence can be stable against
perturbations \cite{Buryak_Kivshar_1995}. But the bound states in
the present coupled NLS system have zero binding energy. Thus under
perturbations, the two constituent dark solitons in these bound
states generically will split apart, analogously to bright-soliton
bound states in the focusing NLS equation.
At this point, one may wonder if three- and higher-dark-dark-soliton
bound states exist in the coupled NLS system (\ref{(1.1)}). It turns
out that such bound states cannot exist. The reason is that, in a
bound state, velocities of all constituent solitons must be the
same, i.e., all $b_j$ [i.e., $\mbox{Im}(p_j)$] must be the same. In
order for three- and higher-dark-soliton bound states to exist,
formula (\ref{aj_formula}) must give at least three distinct
positive solutions $a_j$ for the same $b_j$ value. This is clearly
impossible, since formula (\ref{aj_formula}) can give at most two
distinct positive $a_j$ values when the plus and minus signs are
taken. Consequently, three- and higher-dark-dark-soliton bound
states can not exist in Eqs. (\ref{(1.1)}). Note that in the
defocusing Manakov model, non-stationary three and higher
bright-dark-soliton bound states exist, but stationary three and
higher bright-dark-soliton bound states do not
\cite{Sheppard_Kivshar_1997}; while in the non-integrable SHG
system, stationary three and higher dark-dark-soliton bound states
do exist \cite{Buryak_Kivshar_1995}.
\section{Summary and discussion}
In this paper, we have investigated dark-dark solitons in the
integrable generally coupled NLS system (\ref{(1.1)}). By reducing
the Gram-type solution of the KP hierarchy, we derived the general
$N$-dark-dark solitons in this system. We showed that the dark-dark
solitons derived previously in the literature are only degenerate
cases of these general soliton solutions. We have also shown that
when these solitons collide with each other, energies in both
components of the solitons completely transmit through. This
behavior contrasts with bright-bright solitons in this system, where
polarization rotation and soliton reflection can occur after
collision. In addition, we have shown that when focusing and
defocusing nonlinearities are mixed, two dark-dark solitons can form
a stationary bound state. These results will be useful for many
physical subjects such as nonlinear optics, water waves and
Bose-Einstein condensates, where the coupled NLS equations often
arise.
The dark-dark solitons obtained in this paper for the generally
coupled NLS system (\ref{(1.1)}) are useful for other purposes as
well. For instance, it is known that from dark solitons of the
defocusing NLS equation, one can obtain homoclinic solutions of the
focusing NLS equation through simple variable transformations
\cite{Ablowitz_homoclinic}. Thus, from these dark-dark solitons in
this paper, we can obtain homoclinic solutions to these generally
coupled NLS equations (\ref{(1.1)}). Since solutions near homoclinic
orbits often exhibit chaotic dynamics \cite{Ablowitz_homoclinic},
the homoclinic solutions for the generally coupled NLS equations
(\ref{(1.1)}) then can serve as the starting point to understand
chaotic behaviors in these systems.
Lastly, we would like to mention that $N$-bright-bright and
$N$-bright-dark solitons in the coupled NLS equations (\ref{(1.1)})
can also be obtained by the KP-hierarchy reduction method. But those
reductions will be different from the ones in this paper for
dark-dark solitons, and will be left for future studies.
\vskip .2cm \noindent{\bf Acknowledgments}
We thank Dr. Xingbiao Hu for helpful discussions. The work of Y.O.
was partly supported by JSPS Grant-in-Aid for Scientific Research
(B-19340031, S-19104002). The work of D.S.W. was supported by China
Postdoctoral Science Foundation. The work of J.Y. was supported in
part by the (U.S.) Air Force Office of Scientific Research under
grant USAF 9550-09-1-0228 and the National Science Foundation under
grant DMS-0908167.
\section{Introduction}
The Standard Model (SM) of Particle Physics cannot account for the apparent matter-antimatter asymmetry of the Universe. Physics beyond the SM is required, and it is probed either by employing high energies (\textit{e.g.}, at the LHC), or by striving for ultimate precision and sensitivity (\textit{e.g.}, in the search for electric dipole moments). Permanent electric dipole moments (EDMs) of particles violate both time reversal $(\mathcal{T})$ and parity $(\mathcal{P})$ invariance, and are, via the $\mathcal{CPT}$-theorem, also $\mathcal{CP}$-violating. Finding an EDM would be a strong indication for physics beyond the SM, and pushing upper limits further provides crucial tests for any corresponding theoretical model, \textit{e.g.}, SUSY.
Up to now, EDM searches have mostly focused on neutral systems (neutrons, atoms, and molecules). Storage rings, however, offer the possibility to measure EDMs of charged particles by observing the influence of the EDM on the spin motion in the ring. These \textit{direct} searches for, \textit{e.g.}, proton and deuteron EDMs bear the potential to reach sensitivities beyond $\SI{e-29}{e.cm}$. Since the Cooler Synchrotron COSY\footnote{The synchrotron and storage ring COSY accelerates and stores unpolarized and polarized proton or deuteron beams in the momentum range of 0.3 to \SI{3.65}{GeV/c}\,\cite{PhysRevSTAB.18.020101}.} at the Forschungszentrum J\"ulich provides polarized protons and deuterons up to momenta of 3.7 GeV/c, it constitutes an ideal testing ground and a starting point for such an experimental program.
The investigations presented here, carried out in the framework of the JEDI Collaboration\footnote{J\"ulich Electric Dipole moment Investigations\,\cite{jedi-collaboration}.}, are relevant for the preparation of the deuteron EDM measurement\,\cite{Rathmann:2019vtb}. A radio-frequency (RF) Wien filter (WF)\,\cite{Slim:2016pim,Slim:2016dct,Slim:2017bic} makes it possible to carry out EDM measurements in a conventional \textit{magnetic} machine like COSY. The idea is to look for an EDM-driven resonant rotation of the deuteron spins from the horizontal to vertical direction and vice versa, generated by the RF Wien filter at the spin precession frequency. The RF Wien filter \textit{per se} is transparent to the EDM of the particles, its net effect is a frequency modulation of the spin tune, the number of spin precessions per turn. This modulation couples to the EDM precession in the static motional electric field of the ring, and generates an EDM-driven up-down oscillation of the beam polarization\,\cite{PhysRevSTAB.16.114001}.
The search for EDMs of protons, deuterons, and heavier nuclei using storage rings\,\cite{srEDM-collaboration,jedi-collaboration} is part of an extensive world-wide effort to push further the frontiers of precision spin dynamics of polarized particles in storage rings. In this context, the JEDI results prompted the formation of the new CPEDM collaboration\footnote{Charged Particle Electric Dipole Moment Collaboration, \url{http://pbc.web.cern.ch/edm/edm-default.htm}}, which aims at the development of a purely electric prototype storage ring, with drastically enhanced sensitivities to the EDM of protons and deuterons, compared to what is presently feasible at COSY\,\cite{1812.08535,Rathmann:2019vtb}.
Precision experiments, such as the EDM searches, demand an understanding of the spin dynamics with unprecedented accuracy, keeping in mind that the ultimate aim is to measure EDMs with a sensitivity up to 15 orders of magnitude better than the magnetic dipole moment (MDM) of the stored particles.
The physics of the applied approach, called \textit{RF Wien filter mapping}, will be discussed further in a separate forthcoming publication. The theoretical understanding of the method and its experimental exploitation are prerequisites for the planned EDM experiments at COSY\,\cite{jedi-collaboration}, and will also have an impact on the design of future dedicated EDM storage rings\,\cite{1812.08535}.
This paper discusses various polarization effects that are induced by the RF Wien filter and static solenoids in the ring. The approach taken here strongly simplifies the machine lattice, and deals \textit{solely} with spin rotations \textit{on the closed orbit}\,\cite{lee1997spin, Mane_2005}, described by the $\mathbf{SO(3)}$ formalism. One aim of the work is to obtain a basic understanding about the interplay of spin rotations in a magnetic ring equipped with an RF Wien filter and solenoid magnets, under the simplifying assumption mentioned above. In an ideal machine with perfect alignment of the magnetic elements, the spin rotations on the closed orbit are generated primarily by the dipole magnets, therefore, for the time being, spin rotations in the quadrupole magnets are not considered.
As we shall demonstrate below, even with an idealized ring, the parametric RF resonance-driven spin rotations reveal quite a rich pattern of spin dynamics. Our results set the background for more realistic spin tracking calculations, based on recent geodetic surveys of COSY that make available position offsets, roll, and inclination parameters for the quadrupole and dipole magnets. The treatment of the spin transport through these individually misaligned magnetic elements can, however, be readily incorporated in the applied matrix formalism. Besides that, the spin dynamics simulations carried out in the framework of the present paper will serve as a valuable crosscheck of the analytic approximate treatment of the parametric spin resonance, based on the Bogolyubov-Krylov-Mitropolsky averaging technique\,\cite{Bogolyubov}.
The JEDI collaboration is presently implementing a beam-based alignment scheme at COSY, which aims at optimizing the beam-transfer properties of the quadrupole and dipole magnets in the ring, in order to make the beam orbit as planar as possible\,\cite{TWagner}. Once this is accomplished, the spin dynamics in the ring will be largely governed by the misaligned dipoles alone. Effectively, the approach described here will thus appropriately describe an EDM experiment using an RF Wien filter in a beam-based aligned ring.
The paper is organized as follows. In Sec.\,\ref{sec:spin-rotations-in-the-ring}, the effect of an EDM on the spin evolution in a ring is discussed in terms of the Thomas-BMT equation. The inclusion of an RF Wien filter in an otherwise ideal ring is treated in Sec.\,\ref{sec:RF-Wien-filter-plus-ring}, while the polarization evolution with an RF Wien filter and additional solenoids is discussed in Sec.\,\ref{sec:polarization-evolution-with-RF-Wien-filter-and-solenoids}. The main findings are summarized in the conclusions in Sec.\,\ref{sec:conclusions}. A brief outlook on additional aspects that are planned to be investigated in the near future using the present simulation approach is also given.
\section{Spin rotations in the ring}
\label{sec:spin-rotations-in-the-ring}
\subsection{Thomas-BMT equation}
Below, the basic formalism to describe the spin evolution in electric and magnetic fields is briefly reiterated. The generalized form of the Thomas-BMT equation describes the spin motion of a particle with spin $\vec S$ in an arbitrary electric ($\vec E$) and magnetic field ($\vec B$). Including EDMs (in SI units), it reads\,\cite{Fukuyama:2013ioa},
\begin{equation}
\frac{\text{d} \vec S}{\text{d} t} = \underbrace{\left(\vec \Omega^\text{MDM} + \vec \Omega^\text{EDM} \right) }_{ \displaystyle = \vec \Omega^\text{tot}} \times \vec S\,,
\label{eq:BMT-EDM-MDM}
\end{equation}
where
\begin{widetext}
\begin{equation}
\begin{split}
\vec \Omega^\text{MDM} & = -\frac{q}{m} \left[ \left( G + \frac{1}{\gamma} \right) \vec B - \frac{G \gamma}{\gamma +1} \left(\vec \beta \cdot \vec B \right)\vec \beta - \left(G + \frac{1}{\gamma +1} \right) \frac{\vec \beta \times \vec E}{c}\right]\,, \\
\vec \Omega^\text{EDM} & = -\frac{q}{mc} \frac{\eta_\text{EDM}}{2} \left[\vec E - \frac{\gamma}{\gamma +1} \left(\vec \beta \cdot \vec E \right)\vec \beta + c\vec \beta \times \vec B \right]\,.
\label{eq:OmegaMDM-EDM}
\end{split}
\end{equation}
\end{widetext}
Here $m$, $\gamma$, and $\beta$ are the mass, Lorentz factor, and the velocity of a particle in units of the speed of light $c$ in vacuum, $\vec S$ is given in the particle rest frame, and the fields $\vec E$ and $\vec B$ are in the laboratory system. The magnetic dipole moment $\vec \mu$ (MDM) and the electric dipole moment $\vec d$ (EDM) are defined via the dimensionless Land\'e-factor $g$ and $\eta_\text{EDM}$
\begin{equation}
\vec \mu = g \frac{q}{2m} \vec S, \quad \text{and} \quad \vec d = \eta_\text{EDM} \frac{q}{2m\,c} \vec S\,,
\label{eq:defninitions-eta-mu}
\end{equation}
and the magnetic anomaly is given by
\begin{equation}
G = \frac{g-2}{2}\,.
\end{equation}
\begin{table*}[htb]
\renewcommand{\arraystretch}{1.25}
\caption{\label{tab:list-of-parameters} Parameters of the deuteron kinematics, the COSY ring, the deuteron elementary quantities, the electric dipole moment (EDM) assumed, and the field integrals of the idealized RF Wien filter (to eight decimal places). The deuteron momentum $P$ is used to specify the deuteron kinetic energy $T$, and the Lorentz factors $\beta$ and $\gamma$. The COSY circumference $\ell_\text{COSY}$ is used to specify the COSY revolution frequency $f_\text{rev}$ and the spin-precession frequency $f_s$. The deuteron mass $m$ and the deuteron $g$ factor, taken from the NIST database\,\cite{nist} (not from the most recent one), are used to specify $G$. The deuteron EDM $d$ is used to quantify $\eta_\text{EDM}$ and $\xi_\text{EDM}$.}
\begin{ruledtabular}
\begin{tabular}{lll}
\textbf{Quantity} & & \textbf{Value} \\ \hline
deuteron momentum (lab) & $P$ & \SI{970.00000000}{MeV \per c} \\
deuteron energy (lab) & $T$ & \SI{235.97981668}{MeV} \\
Lorentz factor (lab) & $\beta$ & \num{0.45936891} \\
Lorentz factor (lab) & $\gamma$ & \num{1.12581478} \\ \hline
COSY circumference & $\ell_\text{COSY}$ & \SI{183.57200000}{m} \\
COSY revolution frequency & $f_\text{rev}$ & \SI{750197.93487176}{Hz} \\
COSY spin precession frequency & $f_s$ & \SI{120764.75147311}{Hz} \\ \hline
deuteron mass & $m$ & \SI{1875.61279300}{MeV} \\
deuteron $g$ factor & $g$ & \num{1.71402546}\\
deuteron $G = (g-2)/2$ & $G$ & \num{-0.14298727} \\ \hline
deuteron EDM & $d$ & \SI{e-20}{e.cm}\\
deuteron dimensionless $\eta_\text{EDM}$
& $\eta_\text{EDM}$ & \num{1.90102028e-06} \\
deuteron EDM tilt angle & $\xi_\text{EDM}$ & \num{-3.05366207e-06} \\ \hline
RF Wien filter field amplification factor & $f_\text{ampl}$ & \num{e3} \\
RF Wien filter electric field integral & $\int E^\text{WF}_x \text{d} z$ & \SI{2.20000000e+06}{V} \\
RF Wien filter magnetic field integral & $\int B^\text{WF}_y \text{d} z$ & \SI{1.59749820e-02}{T.m} \\
RF Wien filter length & $\ell_\text{WF}$ & \SI{1.55000000}{m}
\end{tabular}
\end{ruledtabular}
\end{table*}
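The kinematic entries of Table\,\ref{tab:list-of-parameters} all follow from the deuteron momentum $P$, the mass $m$, the $g$ factor, and the ring circumference $\ell_\text{COSY}$. As a numerical cross-check of the tabulated values (a minimal sketch, not part of the analysis code; the inputs are hard-coded from the table):

```python
import math

c = 299_792_458.0          # speed of light [m/s]
P = 970.0                  # deuteron momentum [MeV/c]
m = 1875.612793            # deuteron mass [MeV/c^2]
g = 1.71402546             # deuteron g factor
l_cosy = 183.572           # COSY circumference [m]

E = math.hypot(P, m)       # total energy [MeV]
T = E - m                  # kinetic energy [MeV]
gamma = E / m              # Lorentz factor
beta = P / E               # velocity in units of c
G = (g - 2) / 2            # magnetic anomaly
f_rev = beta * c / l_cosy  # revolution frequency [Hz]
f_s = abs(G * gamma) * f_rev  # spin-precession frequency [Hz]
print(T, beta, gamma, f_rev, f_s)
```

The printed values reproduce Table\,\ref{tab:list-of-parameters} to the quoted precision; the tiny $1/\cos\xi_\text{EDM}$ correction to $f_s$ is far below the rounding.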
\subsection{EDM tilt angle $\xi$ from Thomas-BMT-equation}
In an ideal machine without unwanted magnetic fields, the axis about which the particle spins precess is given by the purely vertical magnetic field $\vec B =\vec B_\perp = B_\perp \cdot \vec e_y$. Equating the COSY angular orbit frequency $\Omega_\text{rev} = 2 \pi f_\text{rev}$ and the relativistic cyclotron angular frequency
\begin{equation}
\begin{split}
\vec \Omega_\text{rev} & = \left(
\begin{array}{c}
0 \\
2\pi\cdot f_\text{rev} \\
0
\end{array}
\right) =
\vec \Omega_\text{cyc} \\
& = -\frac{q}{\gamma \, m} \left( \vec B_\perp - \frac{\vec \beta \times \vec E}{\beta^2 c} \right)\,,
\label{eq:cyclotron-frequency}
\end{split}
\end{equation}
yields, for $\vec E = 0$ with the parameters given in Table\,\ref{tab:list-of-parameters}, a vertical magnetic field of
\begin{equation}
\vec B_\perp =
\left(
\begin{array}{c}
0 \\
\num{1.1075e-01}\\
0
\end{array}
\right)
\, \si{T}\,,
\label{eq:Bfromcyclotron}
\end{equation}
which can be considered as the field that corresponds to an equivalent COSY ring where the magnetic fields are evenly distributed.
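The field value in Eq.\,(\ref{eq:Bfromcyclotron}) follows from the cyclotron condition alone; a short numerical check (plain Python, with the elementary charge and the deuteron mass converted to SI units, and the inputs taken from Table\,\ref{tab:list-of-parameters}):

```python
import math

c = 299_792_458.0                # speed of light [m/s]
e = 1.602176634e-19              # elementary charge [C]
m_kg = 1875.612793e6 * e / c**2  # deuteron mass [kg]
gamma = 1.12581478               # Lorentz factor
f_rev = 750197.93487176          # revolution frequency [Hz]

# |Omega_cyc| = q B_perp / (gamma m)  =>  B_perp = 2 pi f_rev gamma m / q
B_perp = 2 * math.pi * f_rev * gamma * m_kg / e
print(B_perp)
```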
Inserting $\vec B$ from Eq.\,(\ref{eq:Bfromcyclotron}) and $\vec E = 0$ into Eq.\,(\ref{eq:OmegaMDM-EDM}), yields for the angular frequencies in the particle rest system
\begin{equation}
\begin{split}
\vec \Omega^\text{tot} & = \vec \Omega^\text{MDM} + \vec \Omega^\text{EDM} =
-\frac{q}{m}
\left(
\begin{array}{c}
\frac{1}{2}\eta_\text{EDM} \beta \\
G + \frac{1}{\gamma} \\
0
\end{array}
\right) B_\perp \\
& =
\left(
\begin{array}{c}
\num{-2.3171} \\
\num{3954845.3298} \\
\num{0.0000}
\end{array}
\right)\, \si{\per \second}.
\end{split}
\end{equation}
In the laboratory system, however, we observe with the parameters of Table\,\ref{tab:list-of-parameters} the precession frequency with respect to the cyclotron motion of the momentum,
\begin{equation}
\begin{split}
\vec \Omega^\text{Lab} & = \vec \Omega^\text{tot} - \vec \Omega_\text{rev} =
-\frac{q}{m}
\left(
\begin{array}{c}
\frac{1}{2}\eta_\text{EDM} \beta \\
G \\
0
\end{array}
\right)B_\perp \\
& =
\left(
\begin{array}{c}
\num{-2.3171} \\
\num{-758787.3121} \\
\num{0.0000}
\end{array}
\right) \, \si{\per \second}
\,,
\label{eq:omega-tot-lab}
\end{split}
\end{equation}
where $\vec \Omega_\text{rev}$ denotes the COSY angular frequency along $\vec e_y$.
The spin-precession frequency yields the familiar value of
\begin{equation}
\frac{\vec \Omega^\text{Lab}}{2 \,\pi} =
\left(
\begin{array}{c}
\num{-0.3688} \\
\num{-120764.7515} \\
\num{0.0000}
\end{array}
\right) \, \si{\per \second}\,,
\end{equation}
which is also listed in Table\,\ref{tab:list-of-parameters}. The angle by which the stable spin axis is tilted, \textit{i.e.}, the angle between $\vec \Omega^\text{Lab}$ and $\vec e_y$ is obtained by evaluating
\begin{equation}
\xi = \arctan \left| \frac{\vec \Omega^\text{Lab} \times \vec e_y}{\vec \Omega^\text{Lab} \cdot \vec e_y} \right| \,.
\label{eq:xiEDM2}
\end{equation}
Inspecting Eq.\,(\ref{eq:omega-tot-lab}), the effect of an EDM in a magnetic machine can be expressed by the tilt of the stable spin axis away from the vertical orientation in the ring, given by\footnote{In Eq.\,(\ref{eq:xiEDM}), an additional factor of 2 has been inserted in the denominator, correcting Eq.\,(10) of\,\cite{PhysRevAccelBeams.20.072801}.}
\begin{equation}
\tan \xi_\text{EDM} = \frac{\eta_\text{EDM} \, \beta }{2 G}\,.
\label{eq:xiEDM}
\end{equation}
For an assumed EDM of $d = \SI {1e-20}{e.cm}$, and for deuterons at a momentum of \SI{970}{MeV \per c}, Eqs.\,(\ref{eq:defninitions-eta-mu}) and (\ref{eq:xiEDM}) yield $\xi_\text{EDM}$ and $\eta_\text{EDM}$, as listed in Table\,\ref{tab:list-of-parameters}.
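The conversion between $d$, $\eta_\text{EDM}$, and $\xi_\text{EDM}$ can be cross-checked numerically. With $d$ given in units of $e\,\text{cm}$ and $|\vec S| = \hbar$ in Eq.\,(\ref{eq:defninitions-eta-mu}), only $\hbar c$ and the deuteron mass are needed (a sketch with hard-coded constants, not part of the analysis code):

```python
import math

hbar_c = 197.3269804e-13   # hbar * c [MeV cm]
m = 1875.612793            # deuteron mass [MeV/c^2]
beta = 0.45936891          # deuteron velocity in units of c
G = -0.14298727            # deuteron magnetic anomaly
d = 1e-20                  # assumed deuteron EDM [e cm]

# d = eta * q * |S| / (2 m c) with |S| = hbar  =>  eta = 2 m c^2 d / (hbar c)
eta = 2 * m * d / hbar_c
xi = math.atan(eta * beta / (2 * G))   # EDM tilt angle, Eq. (xiEDM) [rad]
print(eta, xi)
```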
\subsection{Rotation matrices}
\label{sec:rotation-matrices}
Our description of the spin dynamics is based on the $\mathbf{SO(3)}$ formalism. A rotation by an angle $\theta$ around an arbitrary axis given by the unit vector $\vec n = (n_1, n_2, n_3)$ is described by the matrix\,\cite{MATHEDUC.06151274}
\begin{equation}
\mathbf{R}(\vec n, \theta) =
\left(
\begin{array}{ccc}
b_{11} & b_{12} & b_{13} \\
b_{21} & b_{22} & b_{23} \\
b_{31} & b_{32} & b_{33}
\end{array}
\right)\,,
\label{eq:generic-rotation-matrix1}
\end{equation}
with
\begin{equation}
\begin{split}
b_{11} & = \cos \theta + {n_1}^2 (1-\cos \theta) \\
b_{12} & = n_1 n_2 (1 - \cos \theta) - n_3 \sin \theta \\
b_{13} & = n_1 n_3 (1 - \cos \theta) + n_2 \sin \theta \\
b_{21} & = n_1 n_2 (1 - \cos \theta) + n_3 \sin \theta \\
b_{22} & = \cos \theta + {n_2}^2 (1 - \cos \theta) \\
b_{23} & = n_2 n_3 (1 - \cos \theta) - n_1 \sin \theta \\
b_{31} & = n_1 n_3 (1 - \cos \theta) - n_2 \sin \theta \\
b_{32} & = n_2 n_3 (1 - \cos \theta) + n_1 \sin \theta \\
b_{33} & = \cos \theta + {n_3}^2 (1 - \cos \theta) \,.
\end{split}
\label{eq:generic-rotation-matrix2}
\end{equation}
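Eqs.\,(\ref{eq:generic-rotation-matrix1}) and (\ref{eq:generic-rotation-matrix2}) are Rodrigues' rotation formula written out element by element. A direct transcription, together with checks of the defining $\mathbf{SO(3)}$ properties (unit determinant and invariance of the rotation axis), might look as follows:

```python
import math

def rotation_matrix(n, theta):
    """Rotation by angle theta about the unit axis n = (n1, n2, n3)."""
    n1, n2, n3 = n
    c, s, v = math.cos(theta), math.sin(theta), 1.0 - math.cos(theta)
    return [
        [c + n1 * n1 * v, n1 * n2 * v - n3 * s, n1 * n3 * v + n2 * s],
        [n1 * n2 * v + n3 * s, c + n2 * n2 * v, n2 * n3 * v - n1 * s],
        [n1 * n3 * v - n2 * s, n2 * n3 * v + n1 * s, c + n3 * n3 * v],
    ]

axis = (0.0, 1.0, 0.0)             # vertical axis e_y
R = rotation_matrix(axis, 0.3)     # arbitrary test angle [rad]

# det R = 1 (expanded along the first row) and R * axis = axis
det = (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
       - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
       + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))
axis_image = [sum(R[i][j] * axis[j] for j in range(3)) for i in range(3)]
print(det, axis_image)
```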
\subsection{One turn spin rotation matrix with EDM}
With a non-vanishing EDM, in the rotation matrix of Eq.\,(\ref{eq:generic-rotation-matrix1}), the spins do not precess anymore around the vertical axis $\vec e_y $, but rather around the direction given by
\begin{equation}
\vec c\,(\xi_\text{EDM} ) =
\left(
\begin{array}{c}
c_1 \\
c_2 \\
c_3
\end{array}
\right)
=
\left(
\begin{array}{c}
\sin \xi_\text{EDM} \\
\cos \xi_\text{EDM} \\
0
\end{array}
\right)
\,.
\label{eq:n1-n2-n3-withEDM}
\end{equation}
Therefore, the ring rotation matrix can be obtained by inserting into Eq.\,(\ref{eq:generic-rotation-matrix1}) the coefficients $c_1$, $c_2$, $c_3$ from Eq.\,(\ref{eq:n1-n2-n3-withEDM}), and by setting
\begin{equation}
\theta := \theta(t) = \omega_s \, t = 2 \pi f_s \, t\,.
\label{eq:thetaoft}
\end{equation}
Here, the time $t$ is defined by the number of momentum revolutions $n$ in the ring,
\begin{equation}
t = n\cdot T_\text{rev} = \frac{n}{f_\text{rev}}\,.
\label{eq:connection-between-n-and-t}
\end{equation}
The spin-precession frequency $f_s$, related to $\vec \Omega^\text{Lab}$ introduced in Eq.\,(\ref{eq:omega-tot-lab}), can be expressed also via
\begin{equation}
f_s = \frac{\Omega^\text{Lab}}{2\pi} = \frac{ G \gamma }{ \cos \xi_\text{EDM}} \cdot f_\text{rev} \,,
\label{eq:spin-precession-frequency}
\end{equation}
where $f_\text{rev}$ denotes the revolution frequency. A negative $G$ factor indicates that the precession proceeds opposite to the orbit revolution.
Thus, a one-turn matrix including the EDM effect is obtained by inserting $\theta(t)$ from Eq.\,(\ref{eq:thetaoft}) into Eq.\,(\ref{eq:generic-rotation-matrix1}) at $t = T_\text{rev} = 1/f_\text{rev}$. For comparison with numerical simulations, the ring matrix is explicitly given below (to four decimal places) for the parameters listed in Table\,\ref{tab:list-of-parameters},
\begin{small}
\begin{equation}
\begin{split}
& \mathbf{U}_\text{ring}\left(\vec c, T_\text{rev}\right) =\\
&\left(
\begin{array}{ccc}
\num{5.3063e-01} & \num{-1.4333e-06} & \num{-8.4760e-01} \\
\num{-1.4333e-06} & \num{1.0000e+00} & \num{-2.5883e-06} \\
\num{8.4760e-01} & \num{2.5883e-06} & \num{5.3063e-01}
\end{array}
\right)\,.
\end{split}
\end{equation}
\end{small}
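This one-turn matrix can be reproduced directly from Eqs.\,(\ref{eq:generic-rotation-matrix1})--(\ref{eq:thetaoft}); a numerical sketch (the signed spin tune $G\gamma < 0$ fixes the sense of the precession):

```python
import math

G, gamma = -0.14298727, 1.12581478
xi = -3.05366207e-06                             # EDM tilt angle [rad]
theta = 2 * math.pi * G * gamma / math.cos(xi)   # spin rotation per turn [rad]
n1, n2, n3 = math.sin(xi), math.cos(xi), 0.0     # precession axis c

# Rodrigues formula, Eq. (generic-rotation-matrix2)
c, s, v = math.cos(theta), math.sin(theta), 1.0 - math.cos(theta)
U_ring = [
    [c + n1 * n1 * v, n1 * n2 * v - n3 * s, n1 * n3 * v + n2 * s],
    [n1 * n2 * v + n3 * s, c + n2 * n2 * v, n2 * n3 * v - n1 * s],
    [n1 * n3 * v - n2 * s, n2 * n3 * v + n1 * s, c + n3 * n3 * v],
]
for row in U_ring:
    print(["%.4e" % x for x in row])
```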
\subsection{Polarization evolution in the ring}
The evolution of the polarization vector $\vec S_1$ as a function of time in the ideal bare ring is then described by
\begin{equation}
\vec S_1(t) = \mathbf{U}_\text{ring}(\vec c, t) \times \vec S_0\,,
\label{eq:polarization-evolution-without-WF}
\end{equation}
where $\vec S_0$ denotes the initial polarization vector.
Figure\,\ref{fig:tilt-angle-xi} shows the situation when the spin rotation axis $\vec c$, defined by Eq.\,(\ref{eq:n1-n2-n3-withEDM}), is tilted with respect to the normal to the ring plane $\vec n$ ($y$-axis in the figure)\footnote{ Here, it is supposed that the polarimeter is ideally aligned to the physical ring plane so that the left-right asymmetry measures $p_y(t)$, and the up-down asymmetry measures $p_x(t)$.}.
\begin{figure}[tb]
\vspace{-1.5cm}
\tdplotsetmaincoords{120}{40}
{\centering
\begin{tikzpicture}[scale=1.1,tdplot_main_coords,]
\draw[very thick,-stealth] (0,0,0) -- (-4,0,0) node[anchor=north east]{$x$};
\draw[very thick,-stealth] (0,0,0) -- (0,0,3) node[anchor=north east]{$y$};
\draw[very thick,-stealth] (0,0,0) -- (0,-4,0) node[anchor=north east]{$z$ (beam)};
\begin{scope}[canvas is yx plane at z=0]
\draw[black] (0,0) ellipse (3cm and 3cm);
\draw[magenta,-stealth, very thick] (0,13) [partial ellipse=290:245:13cm and 13cm];
\end{scope}
\tdplotdefinepoints(0,0,0)(-1,0,0)({-cos(38.23)},0,{sin(38.23)})
\tdplotdrawpolytopearc[very thick, red, -stealth]{3}{red,above, yshift=+0.35cm, xshift = 0.2cm}{$\xi_{\rm EDM}$}
\draw[blue,very thick,-stealth] (0,0,0) -- ({3*sin(38.23)},0,{3*cos(38.23)}) node[anchor=north west]{$y' \parallel \vec c$};
\draw[blue, thick, dotted] ({3*sin(38.23)},0,-0.1) -- ({3*sin(38.23)},0,{3*cos(38.23)});
\tdplotsetrotatedcoords{0}{100}{0}
\begin{scope}[tdplot_rotated_coords,canvas is yz plane at x=0]
\draw[blue,dashed] (0,-3) -- (0,3);
\draw[blue,dashed] (-3,0) -- (3,0);
\draw[thick, blue] (0,0) ellipse (3cm and 3cm);
\draw[red,-stealth, very thick] (0,0) -- ({3*cos(70)},{3*sin(70)}) node[anchor=north, pos=0.65]{$\vec p(t)$};
\draw[blue,very thick,-stealth] (0,0,0) -- (0,-4,0) node[anchor=north east]{$x'$};
\end{scope}
\end{tikzpicture}}
\begin{center}
\caption{\label{fig:tilt-angle-xi} The beam particles move along the $z$ direction. In the presence of an EDM, \textit{i.e.}, $\xi_\text{EDM} > 0$, the spins precess around the $\vec c$ axis, and an oscillating vertical polarization component $p_y(t)$ is generated, as shown in Fig.\,\ref{fig:only-EDM-no-WF}. }
\end{center}
\end{figure}
In Fig.\,\ref{fig:only-EDM-no-WF}, the solutions of $\vec S_1(t)$ from Eq.\,(\ref{eq:polarization-evolution-without-WF}) for two different initial in-plane polarization vectors $\vec S_0$ are shown for 10 turns.
\begin{figure*}[htb]
\centering
\subfigure[\label{fig:only-EDM-no-WFa} Polarization evolution of $p_x$, $p_z$ (upper panel), and $p_y$ (lower panel) for the initial spin vector $\vec S_0 = (0,0,1)$.]{\includegraphics[width=\columnwidth]{plot-WF-off-pz.jpg}}
\hspace{0.3cm}
\subfigure[\label{fig:only-EDM-no-WFb} Same as panel (a), but for $\vec S_0 = (1,0,0)$.]{\includegraphics[width=\columnwidth]{plot-WF-off-px.jpg}}
\caption{ \label{fig:only-EDM-no-WF} Polarization evolution during idle precession for 10 turns in an ideal ring using Eq.\,(\ref{eq:polarization-evolution-without-WF}) and the parameters listed in Table\,\ref{tab:list-of-parameters}. Panel (a) shows $p_x(t)$, $p_z(t)$ and $p_y(t)$ for an initially longitudinal polarization, and panel (b) the same for sideways polarization. The bunch revolution is indicated as well. The magnitude of the $p_y$ oscillation amplitude corresponds to the tilt angle $\xi_\text{EDM}$ (see also Eq.\,(\ref{eq:n1-n2-n3-withEDM}) and Fig.\,\ref{fig:tilt-angle-xi}).}
\end{figure*}
It is clearly visible that the polarization evolution proceeds counterclockwise with respect to the clockwise rotation of the particles in the ring, since the deuteron $G$ factor is negative.
\section{RF Wien filter in a ring}
\label{sec:RF-Wien-filter-plus-ring}
\subsection{Electric and magnetic fields of the RF Wien filter}
The RF Wien filter, described in\,\cite{Slim:2016pim}, has been designed to manipulate the spins of the stored particles while avoiding, as much as possible, any effect on the beam orbit. To this end, great care was taken to minimize the unwanted field components of the Wien filter and to characterize them via the Polynomial Chaos Expansion\,\cite{Slim201752}. In EDM mode, the main component of the magnetic induction $\vec B^\text{WF}$ is oriented along the $y$-axis, and the main component of the electric field $\vec E^\text{WF}$ along the $x$-axis.
In order to avoid betatron oscillations of the beam, the magnetic and electric fields must be matched to each other to provide a vanishing Lorentz force $\vec F_\text{L}$ (see Eq.\,(3) of \cite{Slim:2016pim}),
\begin{equation}
\vec F_\text{L} = 0 \quad \Longleftrightarrow \quad \vec E_x^\text{WF} + c \vec \beta \times \vec B_y^\text{WF} = 0\,.
\label{eq:vanishing-Lorentz-force}
\end{equation}
According to a full-wave simulation (FWS) \footnote{CST Microwave Studio - Computer Simulation Technology AG, Darmstadt, Germany, \url{http://www.cst.com}.}, including the ferrite cage (see label 6 in Fig.\,1 of\,\cite{Slim:2016pim}), for an input power of $\SI{1}{kW}$, a field integral of $\vec B^\text{WF}$ along the beam axis of
\begin{equation}
\int_{-\ell_\text{WF}/2}^{\ell_\text{WF}/2} \vec B^\text{WF} \text{d} z =
\left(\begin{array}{ccc}
2.73 \times 10^{-9}\\
2.72 \times 10^{-2}\\
6.96 \times 10^{-7}\\
\end{array}
\right)\,\text{T\,mm}
\end{equation}
is obtained. Here, the active length of the RF Wien filter\,\cite{Slim:2016pim}, denoted by
\begin{equation}
\ell_\text{WF} = \SI{1550}{mm}\,,
\label{eq:effective-length-WF}
\end{equation}
is defined as the region, where the fields are non-zero. Under these conditions, the corresponding integrated electric field components with ferrites are
\begin{equation}
\int_{-\ell_\text{WF}/2}^{\ell_\text{WF}/2} \vec E^\text{WF} \text{d} z =
\left(\begin{array}{rrr}
3324.577 \\
0.018\\
0.006\\
\end{array}
\right)\,\text{V}\,.
\end{equation}
The design and construction of the RF Wien filter includes a ferrite cage surrounding the electrodes, which improves the field homogeneity and increases the magnitude of the fields\,\cite{Slim:2016pim}. However, in order to simplify the installation, the RF Wien filter was installed at COSY without ferrites, and in addition, it was decided to proceed without ferrites until a first direct deuteron EDM measurement is available.
For this situation \textit{without ferrites}, and for an input power of $\SI{1}{kW}$ [ignoring the unwanted components of the field integrals ($B^\text{WF}_x$, $B^\text{WF}_z$, and $E^\text{WF}_y$, $E^\text{WF}_z$)], one obtains from the full-wave simulation (FWS)
\begin{equation}
\begin{split}
\text{EDL}_x^\text{FWS} &= \int_{-\ell_\text{WF}/2}^{\ell_\text{WF}/2} E^\text{WF}_x \text{d} z \\
& = \SI{2204.677323}{\volt}\,, \text{and} \\
\text{BDL}_y^\text{FWS} & = \int_{-\ell_\text{WF}/2}^{\ell_\text{WF}/2} B^\text{WF}_y \text{d} z \\
& = \SI{1.598492e-5}{\tesla \meter} \,.
\end{split}
\end{equation}
\vspace{0.5cm}
The ratio of electric and magnetic field integrals from the FWS yields
\begin{equation}
\frac{1}{ \beta c} \cdot \frac{\text{EDL}_x^\text{FWS}}{\text{BDL}_y^\text{FWS}} = 1.0015 \,,
\label{eq:EB-ratio}
\end{equation}
which should ideally be equal to unity. The subsequent calculations use the field integrals of an \textit{idealized} WF with vanishing Lorentz force $\vec F_\text{L}$, given in the last column of Table~\ref{tab:WF-parameters}.
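The matching condition of Eq.\,(\ref{eq:vanishing-Lorentz-force}) fixes the ratio of the two field integrals to $\beta c$; the numbers in Eq.\,(\ref{eq:EB-ratio}) and in the last column of Table~\ref{tab:WF-parameters} can be verified as follows (a sketch with hard-coded inputs):

```python
beta_c = 0.45936891 * 299_792_458.0    # deuteron velocity [m/s]

EDL_fws = 2204.677323    # E-field integral, full-wave simulation [V]
BDL_fws = 1.598492e-5    # B-field integral, full-wave simulation [T m]
ratio = EDL_fws / (beta_c * BDL_fws)   # Eq. (EB-ratio), ideally 1

EDL_ideal = 2.2e3                  # idealized Wien filter [V]
BDL_ideal = EDL_ideal / beta_c     # matched value, last column of the table [T m]
print(ratio, BDL_ideal)
```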
A field amplification factor $f_\text{ampl}$ is applied in the simulations to increase the field integrals of the ideal RF Wien filter (last column of Table~\ref{tab:WF-parameters}), so that
\begin{equation}
\begin{split}
\left.{\int E^\text{WF}_x \text{d} z}\right|_\text{used} & = f_\text{ampl} \cdot \left.{\int E^\text{WF}_x \text{d} z}\right|_\text{ideal} \\
\left.{\int B^\text{WF}_y \text{d} z}\right|_\text{used} & = f_\text{ampl} \cdot \left.{\int B^\text{WF}_y \text{d} z}\right|_\text{ideal}\,.
\end{split}
\end{equation}
The field amplification allows one to speed up the simulation calculations accordingly, \textit{without} affecting other aspects of the spin dynamics of the polarization evolution in the ring. In the description of the spin evolution via spin rotations on the closed orbit, momentum and position kicks are not considered.
\begin{table*}[htb]
\renewcommand{\arraystretch}{1.25}
\caption{\label{tab:WF-parameters} Values for the main electric and magnetic field integrals from the full wave simulation
with and without ferrites for an input power of \SI{1}{kW} where $\vec B^\text{WF} \parallel \vec e_y$. The last column lists the electric and magnetic field integrals of an idealized Wien filter used in the simulations. In this case, the unwanted field components vanish, \textit{i.e.}, $\int E^\text{WF}_y \text{d} z = \int E^\text{WF}_z \text{d} z = \int B^\text{WF}_x \text{d} z =\int B^\text{WF}_z \text{d} z = 0$.}
\begin{ruledtabular}
\begin{tabular}{rrrr}
Field integrals RF Wien filter & $\text{with ferrites}$ & \multicolumn{2}{c}{without ferrites} \\
& (real WF) & (real WF) & (idealized Wien filter) \\ \hline
$\int E^\text{WF}_x \text{d} z$ [V] & $\num{3.325e3}$ & $\num{2.204677e3}$ & $\num{2.20000000e+03}$ \\
$\int B^\text{WF}_y \text{d} z$ [T m] & $\num{2.720e-5}$ & $\num{1.598492e-5}$ & $\num{1.59749820e-05}$
\end{tabular}
\end{ruledtabular}
\end{table*}
\subsection{Rotations induced by the RF Wien filter}
The effect of the RF Wien filter on the polarization evolution in the ring is implemented by an additional rotation matrix. The spin rotation in the Wien filter depends on the applied field integrals (right column of Table\,\ref{tab:WF-parameters}), multiplied by the factor $f_\text{ampl}$.
\vspace{0.1cm}
\subsubsection{Spin rotation angle in the Wien filter}
In the following, the spin rotation angle $\psi_\text{WF}$ in the RF Wien filter is calculated numerically using the Thomas-BMT equation of Eqs.\,(\ref{eq:BMT-EDM-MDM}) and (\ref{eq:OmegaMDM-EDM}) with
$\vec \Omega^\text{EDM} = 0$. We start with an initial spin vector
\begin{equation}
\vec S_\text{in} =
\left(
\begin{array}{c}
0\\
0\\
1
\end{array}
\right)\,,
\end{equation}
and we compute the final polarization vector $\vec S_\text{fin}$ via
\begin{equation}
\frac{\Delta \vec S}{\Delta t} = \frac{\vec S_\text{fin} - \vec S_\text{in}}{\Delta t}
= \vec \Omega^\text{MDM} \times \vec S_\text{in}\,.
\end{equation}
Electric and magnetic field vectors for $\vec \Omega^\text{MDM}$ in Eq.\,(\ref{eq:OmegaMDM-EDM}) are obtained by computing the average fields from the idealized field integrals of the RF Wien filter (last column of Table\,\ref{tab:WF-parameters}), given by
\begin{equation}
\begin{split}
\vec E^\text{WF} & =
\left(
\begin{array}{c}
\frac{\int E^\text{WF}_x \text{d} z}{ \ell_\text{WF} }\\
0\\
0
\end{array}
\right)
\,, \text{and } \\
\vec B^\text{WF} & =
\left(
\begin{array}{c}
0\\
\frac{ \int B^\text{WF}_y \text{d} z}{ \ell_\text{WF}}\\
0
\end{array}
\right)
\,,
\end{split}
\end{equation}
where the effective length of the Wien filter is taken from Eq.\,(\ref{eq:effective-length-WF}). These conditions provide for a vanishing Lorentz force $\vec F_\text{L}$ [see also Eq.\,(\ref{eq:vanishing-Lorentz-force})].
After passing the RF Wien filter once, the final polarization vector is given by
\begin{equation}
\begin{split}
\vec S_\text{fin} & = \left( \vec \Omega^\text{MDM} \times \vec S_\text{in} \right) \cdot \Delta t + \vec S_\text{in} \\
& \approx \left( \vec \Omega^\text{MDM} \times \vec S_\text{in} \right) \cdot \frac{\ell_\text{WF}}{\beta \, c} + \vec S_\text{in}\,,
\end{split}
\end{equation}
and, after normalizing $\vec S_\text{fin}$ to unity, the angle between $\vec S_\text{in}$ and $\vec S_\text{fin}$ is determined from the four-quadrant inverse tangent
\begin{equation}
\arctantwo \left( \vec S_\text{in} \times \vec S_\text{fin} , \vec S_\text{in} \cdot \vec S_\text{fin} \right) =
\left(
\begin{array}{c}
\num{0.000000} \\
\psi_\text{WF} \\
\num{0.000000}
\end{array}
\right) \,,
\end{equation}
with
\begin{equation}
|\psi_\text{WF}| = \SI{3.75845773e-06}{\radian}\,.
\label{eq:psi_WF}
\end{equation}
The spin-rotation angle in the RF Wien filter, divided by the idealized transverse magnetic field integral from Table\,\ref{tab:WF-parameters}, yields
\begin{equation}
\frac{|\psi_\text{WF}|}{\int B^\text{WF}_y \text{d} z}
= \SI{2.35271485e-01}{\radian \per \tesla \per \meter}
\,.
\end{equation}
Validating the numerical result for the spin-rotation angle $\psi_\text{WF}$ in the RF Wien filter obtained in Eq.\,(\ref{eq:psi_WF}) against the analytic expression, given in \cite[Eq.\,(13)]{PhysRevAccelBeams.20.072801}, yields
\begin{equation}
\begin{split}
\Omega_\text{WF} \cdot \Delta t = \psi_\text{WF}
& = - \frac{q}{m} \cdot \frac{(1 + G)}{\gamma^2} \cdot B^\text{WF} \cdot \frac{\ell_\text{WF}}{\beta c} \\
& = - \frac{q}{m} \cdot \frac{(1 + G)}{\gamma^2 \beta c} \int B_\perp \text{d} \ell \\
& = - \frac{q}{m} \cdot \frac{(1 + G)}{\gamma^2 \beta^2 c^2} \int E_\perp \text{d} \ell \\
& = \SI{-3.75845773e-06}{\radian} \,,
\label{eq:psi-angle-in-WF}
\end{split}
\end{equation}
where the time interval $\Delta t$ in the Wien filter has been expressed through the length $\ell_\text{WF}$.
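The analytic expression of Eq.\,(\ref{eq:psi-angle-in-WF}) is straightforward to evaluate numerically (SI units; the unamplified idealized field integral from Table\,\ref{tab:WF-parameters} is used):

```python
e = 1.602176634e-19              # elementary charge [C]
c = 299_792_458.0                # speed of light [m/s]
m_kg = 1875.612793e6 * e / c**2  # deuteron mass [kg]
G, gamma, beta = -0.14298727, 1.12581478, 0.45936891
BDL = 1.59749820e-5              # idealized B-field integral [T m]

# psi_WF = -(q/m) * (1 + G) / (gamma^2 beta c) * integral(B_perp dl)
psi_WF = -(e / m_kg) * (1 + G) / (gamma**2 * beta * c) * BDL
print(psi_WF)
```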
The spin rotation angle in the RF Wien filter, given in Eq.\,(\ref{eq:psi-angle-in-WF}), constitutes an upper limit, which corresponds to a situation when a sharp $\delta$-function-like bunch passes through the device. Realistically, the bunch distribution has to be folded in, and the spin-rotation angle will be reduced correspondingly.
\subsubsection{RF Wien filter rotation matrix}
The spin-rotation angle of the RF Wien filter changes as a function of time according to
\begin{equation}
\psi (t) = \psi_\text{WF} \cos \left(\omega_\text{WF} \cdot t + \phi_\text{RF} \right)\,,
\label{eq:psi-of-t-in-wien-filter-including-phase}
\end{equation}
where
\begin{equation}
\omega_\text{WF} = 2 \pi f_\text{WF}\,.
\label{eq:omega-of-wien-filter}
\end{equation}
The Wien filter is operated on some harmonic of the spin-precession frequency $f_s$ [Eq.\,(\ref{eq:spin-precession-frequency})], given by
\begin{equation}
f_\text{WF} = \left( K + \frac{G \gamma}{\cos \xi_\text{EDM}} \right) \cdot f_\text{rev}\,, \quad K \in \mathbb{Z} \,.
\label{eq:WF-frequencies}
\end{equation}
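For the deuteron parameters of Table\,\ref{tab:list-of-parameters}, the harmonics of Eq.\,(\ref{eq:WF-frequencies}) are readily enumerated (the tiny $1/\cos\xi_\text{EDM}$ correction is neglected in this sketch):

```python
f_rev = 750197.93487176                 # revolution frequency [Hz]
spin_tune = -0.14298727 * 1.12581478    # G * gamma

# Wien filter frequencies f_WF = (K + G*gamma) * f_rev for the lowest harmonics
for K in range(-2, 3):
    print(K, (K + spin_tune) * f_rev)
```

For $K = 0$, the magnitude of $f_\text{WF}$ reproduces the spin-precession frequency $f_s$ of the table.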
The RF Wien filter rotation matrix is given by
\begin{equation}
\mathbf{U}_\text{WF} (t) = \mathbf{R}(\vec n_\text{WF}, \psi(t))\,,
\label{eq:Wien-filter-matrix}
\end{equation}
where in the generic case, $\vec n_\text{WF}$ is a unit vector along the magnetic field of the Wien filter. The case
\begin{equation}
\vec n_\text{WF} = \vec e_y\,,
\label{eq:definition-of-EDM-mode}
\end{equation}
for instance, denotes the Wien filter EDM mode. The RF Wien filter matrix $\mathbf{U}_\text{WF} (t)$ is only evaluated once per turn when the condition
\begin{equation}
\mod(t,T_\text{rev}) \equiv 0
\label{eq:mod-condition-to-evaluate-wf-once-per-turn}
\end{equation}
is met stroboscopically; otherwise, the implemented function returns the $\mathbf{I}_3$ unit matrix.
When the Wien filter is rotated around the beam axis ($z$) by some angle $\phi_\text{rot}^\text{WF}$, so that
\begin{equation}
\renewcommand{\arraystretch}{1.25}
\begin{split}
\vec n_\text{WF} & = \vec n_\text{WF}\left(\phi_\text{rot}^\text{WF}\right) = \mathbf{R}(\vec e_z, \phi_\text{rot}^\text{WF}) \times \vec e_y \\
& =
\left(\begin{array}{ccc}
\cos\left(\phi_\text{rot}^\text{WF}\right) & -\sin\left(\phi_\text{rot}^\text{WF}\right) & 0\\
\sin\left(\phi_\text{rot}^\text{WF}\right) & \cos\left(\phi_\text{rot}^\text{WF}\right) & 0\\
0 & 0 & 1
\end{array}\right)\times \vec e_y\,,
\end{split}
\label{eq:phi-rot-wien-filter-physical-rotation}
\end{equation}
the oscillations also receive a contribution from the rotation of the MDM in the horizontal magnetic field.
\subsection{Polarization evolution in the ring with RF Wien filter}
The evolution of the polarization vector $\vec S$ as a function of time $t$ in the ring with RF Wien filter can be numerically evaluated via
\begin{equation}
\begin{split}
\vec S_2(t) =
& \underbrace{\mathbf{U}_\text{ring} (\vec c, t - n\cdot T_\text{rev})}_{\text{rest of last turn}} \\
& \times \underbrace{\left[ \mathbf{U}_\text{WF} (t=n \cdot T_\text{rev}) \times \mathbf{U}_\text{ring} (\vec c, T_\text{rev}) \right]}_{\text{turn n}} \\
& \times\ldots \\
& \times\underbrace{\left[ \mathbf{U}_\text{WF} (t=2\cdot T_\text{rev}) \times \mathbf{U}_\text{ring} (\vec c, T_\text{rev}) \right]}_{\text{turn 2}} \\
& \times\underbrace{\left[ \mathbf{U}_\text{WF} (t= T_\text{rev}) \times \mathbf{U}_\text{ring} (\vec c, T_\text{rev}) \right]}_{\text{turn 1}} \times \vec S_0\,.
\end{split}
\label{eq:polarization-evolution-with-WF}
\end{equation}
The corresponding situation is illustrated in Fig.\,\ref{fig:sketch-ring-and-WF}. The spin rotations in the ring can be described by $\mathbf{U}_\text{ring}$. A turn begins with the revolution in the ring, and it ends with one pass through the RF Wien filter.
Between two successive points in time at which a particle encounters the RF Wien filter, its spin is just idly precessing in the machine.
According to Eq.\,(\ref{eq:polarization-evolution-with-WF}), the spin motion is stroboscopic in the sense that the spin rotation follows the angle $\psi(t)$ of the RF Wien filter [Eq.\,(\ref{eq:psi-of-t-in-wien-filter-including-phase})] turn-by-turn. The RF Wien filter therefore induces a stroboscopic turn-by-turn conversion of the transverse in-plane polarization into a vertical one (or vice versa). Using the Bogolyubov-Krylov-Mitropolsky (BKM) averaging method\,\cite{Bogolyubov}, the turn-by-turn evolution of the polarization can be approximated by the continuous dependence on the revolution number, given by $ n = f_\text{rev} \cdot t$ [Eq.\,(\ref{eq:connection-between-n-and-t})]. For the generic orientation of the RF Wien filter, the BKM averaged buildup of the vertical polarization proceeds with the resonance tune (or strength)\,\cite{PhysRevAccelBeams.20.072801}
\begin{equation}
\varepsilon^\text{EDM} = \frac{1}{4\pi} \left| \vec c \times \vec n_\text{WF} \right| \cdot \psi_\text{WF}\,.
\label{eq:resonance-tune}
\end{equation}
The direct simulations using Eq.\,(\ref{eq:polarization-evolution-with-WF}), discussed below, will furnish important crosschecks with respect to the accuracy of the analytic approximations based on the BKM averaging.
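The stroboscopic structure of Eq.\,(\ref{eq:polarization-evolution-with-WF}) can be sketched as follows. This is a deliberately simplified toy model, not the full simulation: $\mathbf{U}_\text{ring}$ is reduced to a single precession about the vertical axis with an assumed spin tune per turn, and the Wien filter kick is a small rotation about its magnetic axis whose angle follows the RF phase.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis (Rodrigues' formula)."""
    u = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -u[2], u[1]],
                  [u[2], 0, -u[0]],
                  [-u[1], u[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def evolve(S0, n_turns, nu_s, psi_wf, f_wf, T_rev, n_wf=(0, 1, 0)):
    """Toy turn-by-turn spin evolution: idle precession in the ring followed
    by one Wien-filter kick per turn, cf. Eq. (polarization-evolution-with-WF).
    Default n_wf = e_y corresponds to a vertical Wien-filter field."""
    S = np.asarray(S0, float)
    U_ring = rot((0, 1, 0), 2 * np.pi * nu_s)   # one ring turn, spin tune nu_s
    for n in range(1, n_turns + 1):
        # kick angle follows the RF phase of the Wien filter turn by turn
        U_wf = rot(n_wf, psi_wf * np.cos(2 * np.pi * f_wf * n * T_rev))
        S = U_wf @ (U_ring @ S)                 # turn n
    return S
```

Each pass multiplies the spin vector by the ring matrix and the momentary Wien-filter matrix; since both are orthogonal rotations, $|\vec S|$ is conserved to machine precision.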
\begin{figure}[tb]
\centering
\resizebox{\columnwidth}{!}{
\begin{tikzpicture}[scale=1,cap=round,>=latex]
\filldraw (0,2) circle (1pt);
\filldraw (0,-2) circle (1pt);
\centerarc[black,thick](0,2)(0:180:3);
\centerarc[black,thick](0,-2)(180:360:3);
\draw[dashed,thick] (-3,2) -- (-3,-2);
\draw[dashed,thick] (3,2) -- (3,-2 );
\draw[very thin, gray] (-4.8,2) -- (4,2);
\draw[very thin, gray] (-4,-2) -- (4,-2);
\foreach \i in {1,2,...,12} {
\filldraw[black] ({-3*cos(172.5 - (\i-1) * 15)},{3*sin(172.5 - (\i-1) * 15)+2}) circle (2pt);
\draw ({-3.6*cos(7.5 + (\i-1) * 15)},{3.6*sin(7.5 + (\i-1) * 15)+2}) node {D$_{\i}$};
\draw[very thin, gray] (0,2) -- ({-3.2*cos((\i-1) * 15)},{3.2*sin((\i-1) * 15)+2});
}
\foreach \i in {13,14,...,24} {
\filldraw[black] ({-3*cos(172.5 - (\i-1) * 15)},{3*sin(172.5 - (\i-1) * 15) - 2}) circle (2pt);
\draw ({-3.6*cos(7.5 + (\i-1) * 15)},{3.6*sin(7.5 + (\i-1) * 15) - 2}) node {D$_{\i}$};
\draw[very thin, gray] (0,-2) -- ({-3.2*cos((\i-1) * 15)},{3.2*sin((\i-1) * 15) - 2});
}
\centerarc[-stealth,dashed, very thick](0,2)(180:150:4.5);
\draw[black,very thick] (-7.0,2.1) node[anchor=west]{$t = 0, \, T_\text{rev}$,};
\draw[black,very thick] (-6.2,1.6) node[anchor=west]{\ldots, $n\cdot T_\text{rev}$};
\draw[thick] (-4.8,2) -- (-4.2,2);
\filldraw[red] (-3,1.5) circle(3pt);
\draw (-1.75,1.5) node {Wien filter};
\draw (-5.3,3.9) node {$t \in (0, T_\text{rev})$};
\end{tikzpicture}}
\caption{\label{fig:sketch-ring-and-WF}
Sequence of elements in the ring, corresponding to Eq.\,(\ref{eq:polarization-evolution-with-WF}). The D$_i$ ($i = 1, \ldots, 24$) indicate the 24 dipole magnets of COSY. The counting of $t$ begins with one turn in the ring, and, as indicated, the Wien filter is passed at the end of each revolution. For the discussion presented here, the dashed lines have zero length.}
\end{figure}
\subsection{Radial magnetic RF field in the Wien filter}
\subsubsection{Driven oscillations and resonance strength $\varepsilon^\text{\rm MDM}$}
As an illustration of the principal features of the polarization evolution, we consider the RF Wien filter operated in the so-called MDM mode, with the magnetic field along $-\vec e_x$, \textit{i.e.}, with $\phi_\text{rot}^\text{WF} = \SI{90}{\degree}$ [see Eq.\,(\ref{eq:phi-rot-wien-filter-physical-rotation})], and with initial polarization $\vec S_0 = -\vec e_y$.
For this configuration, driven oscillations were simulated using the function for $\vec S_2(t)$, given in Eq.\,(\ref{eq:polarization-evolution-with-WF}), and the conditions of Table\,\ref{tab:list-of-parameters}. One example for $K = -1$ is shown in Fig.\,\ref{fig:driven-oscillations}. Subsequently, the simulated oscillation data were fitted using the function
\begin{equation}
f(t) = p_y(t) = a \cdot \sin (bt + c) + d\,.
\label{eq:fitted-function-driven-oscillations}
\end{equation}
\begin{figure}[tb]
\centering
\includegraphics[width=1\columnwidth]{plot-driven-oscillation-k-equal-minus-1.jpg}
\caption{\label{fig:driven-oscillations} Simulated driven oscillation on resonance ($\Delta f_\text{WF} = 0$) using $\vec S_2(t)$ from Eq.\,(\ref{eq:polarization-evolution-with-WF}) with initial vertical polarization $\vec S_0 = -\vec e_y$, $\phi_\text{RF} = 0$ [Eq.\,(\ref{eq:WF-frequencies})], and $\phi_\text{rot}^\text{WF} = \SI{90}{\degree}$ [Eq.\,(\ref{eq:phi-rot-wien-filter-physical-rotation})] for the parameters given in Table\,\ref{tab:list-of-parameters} and for the harmonic $K = -1$. The plot contains \SI{101}{points} for a total of \SI{10000}{turns}.}
\end{figure}
The quality of the fit to the numerical data is evaluated in terms of squared deviations via
\begin{equation}
\text{SSE} = \sum_{i=1}^{n_\text{points}} w_i \left[ p_y(t_i) - f(t_i)\right] ^2\,,
\label{eq:SSE}
\end{equation}
where the weight factors are $w_i=1$, and $p_y(t) = \vec e_y \cdot \vec S_2(t)$. In the last row of Table\,\ref{tab:driven-oscillations-different-K}, the reduced $\chi^2 = \text{SSE}/\text{ndf}$ is given, where $n_\text{points} = 101$, and $\text{ndf} = n_\text{points} - 4 = 97$, since the fitted function in Eq.\,(\ref{eq:fitted-function-driven-oscillations}) has four parameters.
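A minimal sketch of this fitting procedure (Python with SciPy, applied here to synthetic data standing in for the simulated $p_y(t_i)$; amplitude, frequency, and noise level are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def f(t, a, b, c, d):
    # Eq. (fitted-function-driven-oscillations)
    return a * np.sin(b * t + c) + d

# synthetic stand-in for the simulated oscillation data p_y(t_i)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.07, 101)                   # 101 points
p_y = np.sin(1409.78 * t + 0.5 * np.pi) + 1e-4 * rng.standard_normal(t.size)

popt, pcov = curve_fit(f, t, p_y, p0=[1.0, 1400.0, 1.5, 0.0])
sse = np.sum((p_y - f(t, *popt)) ** 2)            # unit weights w_i = 1
chi2_red = sse / (t.size - 4)                     # ndf = n_points - 4
```

The parameter uncertainties follow from `np.sqrt(np.diag(pcov))`.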
\begin{table}[t]
\renewcommand{\arraystretch}{1.25}
\caption{\label{tab:driven-oscillations-different-K} Typical fit results of a simulated driven oscillation, shown in Fig.\,\ref{fig:driven-oscillations}, using \SI{10000}{turns} with \SI{101}{data points}. The other four cases, $K = 0$, $1$, and $\pm 2$, yield identical values within the given precision. $\text{SSE}/ \text{ndf}$ denotes the sum of squared deviations, computed using Eq.\,(\ref{eq:SSE}), divided by the number of degrees of freedom (ndf).}
\begin{ruledtabular}
\begin{tabular}{rl}
$K$ & $-1$ \\
$f_\text{ampl}$ & \num{e3} \\
$f_\text{WF}$ & \SI{870962.6863}{Hz} \\ \hline
$a$ & $(\num{10000 \pm 2 }) \cdot \num{e-4}$ \\
$b$ & \SI{1409.7817 \pm 0.0470 }{\per \second} \\
$c$ & \SI{0.4997 \pm 0.0001 }{\pi} \\
$d$ & (\num{0.0000 \pm 0.0001 }) \\
$\text{SSE}/ \text{ndf}$ & \num{3.801e-07}
\end{tabular}
\end{ruledtabular}
\end{table}
The resulting angular oscillation frequency $\Omega^\text{driven} = b$, given in Table\,\ref{tab:driven-oscillations-different-K}, was obtained using the field integrals listed in Table\,\ref{tab:list-of-parameters}. The quoted uncertainties correspond to a simulation requiring a computation time of about \SI{40}{s}\footnote{Lenovo T460s, all calculations use 64-bit double-precision floating point numbers, for which the $\text{machine epsilon} = \num{2.2e-16} = 2^{-52}$.}. The oscillation frequency, normalized to the real magnetic field integral, yields
\begin{equation}
\frac{\Omega^\text{driven}}{\int B^\text{WF}_y \text{d} z \cdot f_\text{ampl}} = (\num{88.249} \pm 0.003) \,\, \si{\per \second \per \tesla \per \metre}\,.
\end{equation}
The driven oscillations of the vertical polarization $p_y(t)$ (Fig.\,\ref{fig:driven-oscillations}) are induced by the horizontal magnetic field of the RF Wien filter that couples to the deuteron MDM. Since the device is operated exactly at the spin-precession frequency, the associated resonance strength or \textit{resonance tune}\,\cite{PhysRevAccelBeams.20.072801} can conveniently be expressed via
\begin{equation}
\varepsilon^\text{MDM} = \frac{\Omega^\text{driven}}{\Omega^\text{rev} \cdot f_\text{ampl}} = \num{3e-07} \pm \num{6e-13} \,.
\end{equation}
\subsubsection{Width of the spin resonance}
\label{sec:Width-of-spin-resonance}
The detuning of the frequency at which the RF Wien filter is operated can be parametrized by substituting in Eq.\,(\ref{eq:omega-of-wien-filter})
\begin{equation}
f_\text{WF} \rightarrow f_\text{WF} + \Delta f_\text{WF}\,.
\label{eq:off-resonance-substitution}
\end{equation}
As shown in Fig.\,\ref{fig:off-resonance}, the resulting oscillation pattern is modified. Specifically, the oscillation amplitude of $p_y(t)$ in Eq.\,(\ref{eq:fitted-function-driven-oscillations}) is altered. The argument of the sine function is subjected to the substitution
\begin{equation}
b \cdot t = \Omega^\text{driven} \cdot t \rightarrow \Omega^\text{driven} \cdot\frac{ \sin(2\pi\Delta f_\text{WF} \cdot t)}{2\pi\Delta f_\text{WF}}\,,
\end{equation}
which can readily be derived from Eqs.\,(A7) and (A8) of\,\cite{PhysRevAccelBeams.20.072801}.
From a number of such simulations,
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{WF-off-resonance.jpg}
\caption{\label{fig:off-resonance} Off-resonance driven oscillations with $\Delta f_\text{WF} = \SI{200}{\hertz}$ in Eq.\,(\ref{eq:off-resonance-substitution}) for the conditions of Table\,\ref{tab:list-of-parameters}.
}
\end{figure}
the oscillation amplitudes and the oscillation frequencies as function of $\Delta f_\text{WF}$ are obtained by fitting. In order to reduce the time required for the simulations, again a field amplification factor of $f_\text{ampl} = \num{e3}$ was used. This leads to oscillations that are faster by the same factor. The simulated data can be described by a Lorentz curve of the form,
\begin{equation}
L(f_\text{WF}) = \frac{a}{\left( \frac{\displaystyle \Gamma}{ \displaystyle 2} \right)^2 + (\underbrace{f_\text{WF} - f_s}_{\Delta f_\text{WF}})^2}\,.
\label{eq:Breit-Wigner-resonance}
\end{equation}
The left panel of Fig.\,\ref{fig:resonance-width} shows the simulated spin resonance, already corrected for the field amplification factor. For all harmonic excitations used in the RF Wien filter, the simulations yield, within the given errors, the same width of
\begin{equation}
\Gamma = \SI{0.4488 \pm 0.0001}{Hz} \,.
\label{eq:Breit-Wigner-resonance-numerical-value}
\end{equation}
Using the nominal fields of the RF Wien filter (right column of Table\,\ref{tab:WF-parameters} and $f_\text{ampl}=1$), the driven oscillations have a frequency of
\begin{equation}
\frac{\Omega^\text{driven}}{f_\text{ampl}} = \SI{1.4105 \pm 0.0006}{Hz} \,.
\end{equation}
The two panels on the right show that a quadratic fit to the driven oscillation frequency should only be used in a narrow region around the minimum.
\begin{figure*}[tb]
\centering
\includegraphics[width=.75\textwidth]{WF-width-resonance-frequency-K-equal-minus1.jpg}
\caption{\label{fig:resonance-width} The left panel shows the amplitude $a$ of simulated driven oscillations as function of the frequency change $\Delta f_\text{WF}$. The oscillation amplitudes were extracted from fits using Eq.\,(\ref{eq:fitted-function-driven-oscillations}). The full width at half maximum of the fitted Breit-Wigner resonance [Eq.\,(\ref{eq:Breit-Wigner-resonance})] is indicated, and the resonance curves for $K = 0$, $\pm 1$, and $\pm 2$ are very similar. Both panels on the right show the frequency of the driven oscillations as function of $\Delta f_\text{WF}$ together with a parabolic fit.
}
\end{figure*}
The quality factor of an underdamped oscillator $Q$ is defined as
\begin{equation}
Q = \frac{f^\text{driven}}{\Delta f^\text{driven}}\,,
\end{equation}
where $\Delta f^\text{driven}$ is the full width at half maximum, and $f^\text{driven}$ is the resonance frequency. Thus, at a deuteron momentum of $P = \SI{970}{MeV/c}$, a theoretical estimate of the $Q$ value of the oscillating deuteron spins in the machine amounts to
\begin{equation}
Q = \frac{\num{120764.751}}{\num{0.4488}} \approx \num{270000}\,.
\end{equation}
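This estimate follows directly from the two numbers quoted above (a trivial numerical check):

```python
f_driven = 120764.751   # spin-precession (resonance) frequency in Hz
Gamma = 0.4488          # full width at half maximum in Hz
Q = f_driven / Gamma    # quality factor of the oscillating deuteron spins
```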
\subsection{Vertical magnetic field in the RF Wien filter}
With a vertical magnetic field in the RF Wien filter ($\vec n_\text{WF} = \vec e_y$), the cross product in the expression for the spin-resonance strength [Eq.\,(\ref{eq:resonance-tune})] becomes
\begin{equation}
\left| \vec c \times \vec n_\text{WF} \right| = \sin \xi_\text{EDM}\,.
\end{equation}
In this case, the experimental determination of the resonance strength $\varepsilon^\text{EDM}$ amounts to the determination of the tilt angle $\xi_\text{EDM}$ and of the associated EDM, via Eqs.\,(\ref{eq:xiEDM}) and (\ref{eq:defninitions-eta-mu}).
\subsubsection{Polarization evolution with development of $p_y(t)$}
In the following, the polarization buildup in the machine is addressed. The interplay of the different frequencies involved is illustrated in Fig.\,\ref{fig:different-frequencies-with-Wien-filter}.
\begin{figure}[htb]
\centering
\includegraphics[width = \columnwidth]{WF-different-frequencies-overview-buildup.jpg}
\caption{\label{fig:different-frequencies-with-Wien-filter} Horizontal and longitudinal polarization components $p_x(t)$ and $p_z(t)$ during the first ten turns in the machine, as described by $\vec S_2(t)$ using Eq.\,(\ref{eq:polarization-evolution-with-WF}) for the $K=-1$ harmonic and an initial polarization vector $\vec S_0$ in the horizontal ($xz$) plane. The magnetic field $\vec B^\text{WF}$ of the RF Wien filter points along $\vec e_y$, and $f_\text{ampl} = \num{e3}$. The evolution of $p_y(t)$ for the same initial condition $\vec S_0 = (0,0,1) $ is shown in Fig.\,\ref{fig:only-EDM-no-WFa}. Also indicated are the bunch revolution and the Wien filter RF frequency, and the corresponding RF amplitude when the beam bunch meets the Wien filter RF (\textcolor{magenta}{$\bullet$}).
}
\end{figure}
The same situation as in Fig.\,\ref{fig:different-frequencies-with-Wien-filter} is depicted in Fig.\,\ref{fig:vertical-buildup-with-Wien-filter}; the only difference is the larger number of turns. The graph illustrates the experimental signature of an EDM, namely a non-vanishing slope of the vertical polarization $p_y(t)$. This slope describes the steady out-of-plane rotation of the polarization vector on top of the oscillations shown in the bottom panels of Fig.\,\ref{fig:only-EDM-no-WF}, where the oscillation amplitude $A$ perfectly matches the angle $\xi_\text{EDM}$ used in the simulation (see Table\,\ref{tab:list-of-parameters}).
The slope can be determined by fitting using
\begin{equation}
p_y(t) = A \cdot \sin(2 \pi f_s \cdot t + \phi) + B \cdot t + C\,,
\label{eq:full-wave-fit}
\end{equation}
where $f_s$ is not a fit parameter, but taken from Eq.\,(\ref{eq:spin-precession-frequency}).
\begin{figure}[htb]
\centering
\includegraphics[width = \columnwidth]{plot-vertical-buildup.jpg}
\caption{\label{fig:vertical-buildup-with-Wien-filter} Buildup of a vertical polarization component for the conditions as indicated. The amplitude of the oscillating $p_y(t)$ corresponds to the EDM tilt angle $\xi_\text{EDM}$, given in Table\,\ref{tab:list-of-parameters}. The red line is a fit to the data using Eq.\,(\ref{eq:full-wave-fit}) that yields an initial slope of $\left. \text{d} p_y(t) / \text{d} t \right|_{t=0} = B = (\num{4305.059 \pm 5.268}) \times \num{e-6}\,\si{\per \second}$ (for $f_\text{ampl} = \num{e3}$).
}
\end{figure}
Thus, using the above parametrization, the initial slope is given by
\begin{equation}
\left. \dot p_y(t) \right|_{t=0} = B\,.
\label{eq:linear-slope-fit-with-offset}
\end{equation}
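A point worth noting in Eq.\,(\ref{eq:full-wave-fit}) is that $f_s$ enters as a fixed input while $A$, $\phi$, $B$, and $C$ are fitted. A sketch of such a constrained fit (Python with SciPy, on synthetic data; the amplitude, slope, and noise level are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

f_s = 120765.0  # spin-precession frequency in Hz: fixed input, not fitted

def f(t, A, phi, B, C):
    # Eq. (full-wave-fit): fast oscillation plus linear vertical buildup
    return A * np.sin(2 * np.pi * f_s * t + phi) + B * t + C

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2e-4, 2001)            # a few dozen spin periods
p_y = (1e-4 * np.sin(2 * np.pi * f_s * t + 0.3)
       + 4.3e-3 * t + 1e-7 * rng.standard_normal(t.size))

popt, _ = curve_fit(f, t, p_y, p0=[1e-4, 0.0, 0.0, 0.0])
slope_B = popt[2]                           # dp_y/dt at t = 0
```

Fixing $f_s$ removes the strong correlation between frequency and phase, so the secular slope $B$ can be extracted even when the buildup per oscillation period is tiny.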
\subsubsection{$p_y(t)$ dependence on the phases $\phi_\text{\rm RF}$ and $\phi_{S_0^x}$}
The RF phase $\phi_\text{RF}$ is introduced in Eq.\,(\ref{eq:psi-of-t-in-wien-filter-including-phase}). During a real experiment, this phase needs to be maintained by a \textit{phase-locking} system (for details see\,\cite{PhysRevLett.119.014801}). Another way to parametrize the same effect is via the angle $\phi_{S_0^x} = \angle(\vec S_0, \vec e_x)$, which is illustrated in Fig.\,\ref{fig:definition-phi_SX}.
Within the formalism described in\,\cite{PhysRevAccelBeams.20.072801}, it is the interplay between the stable spin axis $\vec c$ at the RF Wien filter and its magnetic axis $\vec n_\text{WF}$ ($\parallel \vec B^\text{WF}$) that controls, via $\vec c \times \vec n_\text{WF}$, the orientation of $\vec S_0$. On the other hand, one could start by fixing the orientation of $\vec S_0$ by picking some angle $\phi_{S_0^x}$. The resulting evolution of $p_y(t)$, however, must be the same, except for a possible constant shift between the two phases $\phi_\text{RF}$ and $\phi_{S_0^x}$.
\begin{figure*}[tb]
\centering
\subfigure[\label{fig:definition-phi_SX} Azimuthal angle $\phi_{S_0^x}$.]
{
\begin{tikzpicture}[scale=1.43,cap=round,>=latex]
\draw (0,0) circle (4pt);
\draw[-stealth, very thick] (1,0) -- (-3,0) node[anchor=north east]{$\vec{e}_x$};
\draw[-stealth, very thick] (0, -1) -- (0,2) node[anchor=north east]{$\vec{e}_z$};
\centerarc[blue,thick,-stealth](0,0)(180:140:1.5);
\node[blue, yshift = -0.5cm] at (-2.4,1.2) {$\phi_{S_0^x} = \angle(\vec{S_0},\vec{e}_x)$};
\draw[red,very thick,-stealth] (0,0) -- ({2.5*cos(140)},{2.5*sin(140)}) node[anchor=west, yshift = 0.2cm, xshift = 0.2cm]{$\vec{S_0}$};
\end{tikzpicture}
}
\hspace{0.1cm}
\subfigure[\label{fig:inclination-angle-alpha} $\vec S_y(t)$ and inclination angle $\alpha$
]
{ \def38.23{38.23}
\centering
\tdplotsetmaincoords{65}{250}
\begin{tikzpicture} [scale=6.05, tdplot_main_coords, axis/.style={->,blue,thick},
vector/.style={-stealth,red,very thick},
vector guide/.style={dashed,red,thick}]
\coordinate (O) at (0,0,0);
\pgfmathsetmacro{\ax}{0.7}
\pgfmathsetmacro{\ay}{0.6}
\pgfmathsetmacro{\az}{0.3}
\pgfmathsetmacro{\aX}{0.84}
\pgfmathsetmacro{\aY}{0.72}
\pgfmathsetmacro{\aZ}{0.0}
\coordinate (P) at (\ax,\ay,\az);
\draw[axis] (0,0,0) -- (1,0,0) node[anchor= west, yshift = 0.2cm]{$z$};
\draw[axis] (0,0,0) -- (0,0.8,0) node[anchor=north west]{$x$};
\draw[axis] (0,0,0) -- (0,0,0.5) node[anchor=west]{$y$};
\draw[very thick, -stealth] (O) -- (P) node[anchor = east, yshift = 0.3cm, xshift = +1cm ]{$\vec{S}(t)$};
\draw[-stealth, very thick] (O) -- (\ax,\ay,0) node[anchor = north, xshift = -0.4cm, yshift = 0cm ]{$\vec{S}_{xz}(t)$};
\draw[red, very thick, -stealth] (\ax,\ay,0) -- (P) node[anchor = east, yshift = -0.3cm,]{$\vec{S}_{y}(t)$};
\draw[gray, very thin] (\ax,\ay,0) -- (0,\ay,0);
\draw[gray, very thin] (\ax,\ay,0) -- (\ax,0,0);
\tdplotdefinepoints(0,0,0)(\ax, \ay, 0)(\ax, \ay, \az)
\tdplotdrawpolytopearc[very thick, red, -stealth]{0.5}{red, below, yshift=+0.5cm, xshift = -0.45cm}{$\alpha(t)$}
\tdplotdefinepoints(0,0,0)(0,0.5,0)(\ax,\ay,0)
\tdplotdrawpolytopearc[thick, -stealth]{0.4}{anchor = east, yshift=+0.1 cm, xshift = -0.4cm}{$\phi_{S^x_0}$}
\end{tikzpicture}
}
\caption{\label{fig:definition-phis} Panel (a): Definition of the in-plane initial spin orientation angle $\phi_{S^x_0}$, and (b) relation between $\vec S_y(t)$ and the out-of-plane inclination angle $\alpha(t)$. }
\end{figure*}
The buildup of a vertical polarization component, which is equivalent to a rotation of the polarization vector out of the ring plane due to the EDM, has been computed for a set of random azimuthal angles $\phi_{S_0^x}$ and $\phi_\text{RF}$. The results are shown in Fig.\,\ref{fig:initial-slope-as-function-of-phi}.
\begin{figure}[htb]
\centering
\includegraphics[width = \columnwidth]{initial-slope-as-function-of-phi.jpg}
\caption{\label{fig:initial-slope-as-function-of-phi}
The red (blue) curve shows the initial slope as function of 25 random values of $\phi_{S_0^x}$ ($\phi_\text{RF}$), using a field amplification factor $f_\text{ampl} = \num{e3}$. The simulated data are fitted using the functions indicated in the inset. The resulting parameters are listed in Table\,\ref{tab:summary-slopes-ideal-ring}. Each data point is obtained from a graph like the one shown in Fig.\,\ref{fig:vertical-buildup-with-Wien-filter}, but for \SI{10000}{turns} and \SI{501}{points}. }
\end{figure}
\begin{table}[tb]
\renewcommand{\arraystretch}{1.25}
\caption{\label{tab:summary-slopes-ideal-ring}
Summary of parameters obtained (for $K = -1$) via fitting the oscillatory patterns of the initial slopes shown in Fig.\,\ref{fig:initial-slope-as-function-of-phi} as function of $\phi_{S_0^x}$ and $\phi_\text{RF}$, still including the factor $f_\text{ampl} = \num{e3}$. For the other harmonics ($K=0$, $1$, and $\pm 2$), within the given uncertainties, the same values are obtained. }
\begin{ruledtabular}
\begin{tabular}{lll}
& $\phi_\text{RF}$ & $\phi_{S_0^x}$ \\ \hline
$a$ & $(\num{4309.884 \pm 2.945}) \times \num{e-6}$ & $(\num{4304.623 \pm 2.290}) \times \num{e-6}$ \\
$b$ & $(\num{15711.584 \pm 6.254}) \times \num{e-4}$ & $(\num{-17.686 \pm 3.637}) \times \num{e-4}$ \\
$c$ & $(\num{ 8.516 \pm 2.075}) \times \num{e-6}$ & $(\num{0.367 \pm 1.280}) \times \num{e-6}$ \\
$\frac{\chi^2}{\text{ndf}}$ & \num{4.2e-17} & \num{5.9e-17}
\end{tabular}
\end{ruledtabular}
\end{table}
Within the given uncertainties, the two simulated data sets for $\phi_{S_0^x}$ and $\phi_\text{RF}$, as expected, yield the \textit{same} results. The only difference is a phase shift of $\pi/2$ between $f(\phi_{S_0^x})$ and $g(\phi_\text{RF})$. The weights that are used to find the optimum parameters are all equal in the two data sets.
Correcting the initial slope parameter $a$ in Table\,\ref{tab:summary-slopes-ideal-ring} for the field amplification factor used in the simulation yields a prediction for the initial slope that one would expect in an ideal ring in the presence of an EDM of $d = \SI{e-20}{e.cm}$. For an initial polarization $|\vec S_0| = 1$, with the parameters for the idealized RF WF, given in the last column of Table\,\ref{tab:WF-parameters}, one obtains
\begin{equation}
\left. \dot p_y(t) \right|_{t=0} = \frac{a(\phi_{S_0^x})}{f_\text{ampl}}
= (\num{4.305 \pm 0.002}) \times \num{e-6} \, \si{\per \second}\,.
\label{eq:dpydt-estimate}
\end{equation}
Since the comparison of $\dot{p}_y(t) |_{t=0}$ with experiment requires knowledge of the magnitude of $\vec S(t)$, the approach taken in\,\cite{PhysRevAccelBeams.21.042002} is convenient, because the angle of the out-of-plane rotation $\alpha$ is \textit{independent} of the magnitude of the beam polarization. The quantity of interest, indicated in Fig.\,\ref{fig:inclination-angle-alpha}, in that case is $\left. \dot\alpha (t) \right|_{t=0}$. The polarimeter measures $p_y(t)$, irrespective of the in-plane polarization $p_{xz}(t)$, given by
\begin{equation}
p_{xz}(t) = \sqrt{ p_{xz}(0)^2 - p_y(t)^2 }\,.
\end{equation}
From this it follows that
\begin{equation}
\begin{split}
\dot{\alpha}(t) & = \frac{\text{d}}{\text{d} t} \arctan\left[ \frac{p_y(t)} {p_{xz}(t)} \right] \\
\Rightarrow \left. \dot\alpha (t) \right|_{t=0} & = \frac{\left.\dot{p}_y(t)\right|_{t=0}} {{p}_{xz}(0)}\,.
\end{split}
\end{equation}
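The intermediate step, which uses the quotient rule together with $p_{xz}(t)^2 + p_y(t)^2 = p_{xz}(0)^2$, reads

```latex
\begin{equation*}
\dot{\alpha}(t)
 = \frac{\dot{p}_y(t)\, p_{xz}(t) - p_y(t)\, \dot{p}_{xz}(t)}{p_{xz}(t)^2 + p_y(t)^2}
 = \frac{\dot{p}_y(t)\, p_{xz}(t) - p_y(t)\, \dot{p}_{xz}(t)}{p_{xz}(0)^2}\,,
\end{equation*}
```

so that at $t = 0$, where $p_y = 0$ and $p_{xz} = p_{xz}(0)$, only the first term in the numerator survives.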
\subsubsection{Initial slope versus slow oscillation amplitude}
Figure\,\ref{fig:initial-slopes-from-different-EDMsa} shows the initial slopes for four different assumed EDMs, for an ideal ring and an idealized Wien filter, based on the conditions listed in Table\,\ref{tab:list-of-parameters}. The EDMs manifest themselves twofold, namely in different slopes and in larger amplitudes of the fast oscillation.
\begin{figure*}[tb]
\centering
\subfigure[\label{fig:initial-slopes-from-different-EDMsa} Vertical polarization as function of time for four different EDMs for $\vec S_0 = (0,0,1)$.]{\includegraphics[width=0.47\textwidth]{initial-slopes-from-different-EDMs.jpg}}
\hspace{0.3cm}
\subfigure[\label{fig:initial-slopes-from-different-EDMsb} Oscillation pattern for a large EDM and a large field amplification factor. The parametrization of the red curve in panel (b) is given in Eq.\,(\ref{eq:parametrization-of-red-curve}). ]
{\includegraphics[width=0.49\textwidth]{full-oscillation-with-large-EDM.jpg}}
\caption{\label{fig:initial-slopes-from-different-EDMs} Various EDM-induced oscillation patterns for short (panel a) and long (panel b) evolution times, using different amplification factors and values of the EDM. }
\end{figure*}
The linear slopes in Fig.\,\ref{fig:initial-slopes-from-different-EDMsa} are of course just the very beginning of a sinusoidal oscillation that becomes visible only when the EDM becomes large, as depicted in Fig.\,\ref{fig:initial-slopes-from-different-EDMsb}, where
\begin{equation}
d = \SI{e-15}{e.cm}
\end{equation}
has been used in the simulation.
The initial slope of the vertical polarization component is related to the strength of the EDM spin resonance. Another way to obtain this information is to vary the RF phase $\phi_\text{RF}$, as indicated in Fig.\,\ref{fig:initial-slope-as-function-of-phi}. The initial slope can of course also be obtained from the slow oscillation. The vertical polarization can be described by
\begin{equation}
p_y(t) = a \sin (\omega t) \cdot \cos \phi_\text{RF}\,,
\label{eq:ansatz-for-py(t)}
\end{equation}
which respects the property that for any $\phi_\text{RF}$, $p_y(t)|_{t=0} = 0$. The derivative of $p_y(t)$ with respect to time is
\begin{equation}
\begin{split}
\dot{p}_y(t) & = a \omega \cos(\omega t) \cdot \cos \phi_\text{RF} \\
\Rightarrow \dot{p}_y(t)|_{t=0} & = a \omega \cdot \cos \phi_\text{RF} = (\num{3933 \pm 19})\,\si{\per \second}\,,
\label{eq:initial-slope-from-full-oscillation}
\end{split}
\end{equation}
where the value given corresponds to the situation shown in Fig.\,\ref{fig:initial-slopes-from-different-EDMsb}.
Numerically, the red curve in Fig.\,\ref{fig:initial-slopes-from-different-EDMsb} has been parametrized by the function
\begin{equation}
f(t) = p_y(t) = a \sin(\omega \cdot t + \phi)\,.
\end{equation}
It turns out that the amplitude of the averaged oscillation [red curve in Fig.\,\ref{fig:initial-slopes-from-different-EDMsb}] can be determined directly from the tilt angle of the stable spin axis due to the EDM, via
\begin{equation}
a = \cos\left( \xi_\text{EDM}(d = \SI{e-15}{e.cm}) \right) = \num{0.9564}\,.
\end{equation}
With $\xi_\text{EDM}(d = \SI{e-15}{e.cm}) = \num{-0.296373}$, within the errors, one obtains a perfect match to the value of $a$ given by
\begin{equation}
\begin{split}
a & = \num{0.9560 \pm 0.0038}\,, \\
\omega & = \SI{4114.3813 \pm 11.8908}{\per \second}\,, \text{ and} \\
\phi & = \num{-0.0034 \pm 0.0082}\,.
\end{split}
\label{eq:parametrization-of-red-curve}
\end{equation}
The envelope $b^\text{osc}(t)$ of the fast oscillations is perfectly consistent with the law
\begin{equation}
b^\text{osc}(t) = \sin \xi_\text{EDM}(d) \cdot \cos (\omega t) \,.
\end{equation}
According to\,\cite{PhysRevAccelBeams.20.072801}, the EDM induced angular oscillation frequency $\omega$ in Eq.\,(\ref{eq:ansatz-for-py(t)}) can be expressed through the EDM resonance strength $\varepsilon^\text{EDM}$ and the angular revolution frequency $ \omega_\text{rev}$, via
\begin{equation}
\omega = \varepsilon^\text{EDM} \cdot \omega_\text{rev}\,.
\end{equation}
In terms of the initial slope, the resonance strength is given by
\begin{equation}
\varepsilon^\text{EDM} = \frac{\dot{p}_y(t)|_{t=0}}{a \cos \phi_\text{RF} } \frac{1}{ \omega_\text{rev} } \,.
\label{eq:resaonce-strength-from-pydot}
\end{equation}
While the slopes can easily be determined as function of $\phi_\text{RF}$, the latter method using Eq.\,(\ref{eq:resaonce-strength-from-pydot}) clearly also requires knowledge of the oscillation amplitude $a$. Knowing the initial slopes alone does not allow one to determine the resonance strength $\varepsilon^\text{EDM}$.
Using the technique of varying $\phi_\text{RF}$, as shown in Fig.\,\ref{fig:initial-slope-as-function-of-phi}, the fit presented in Fig.\,\ref{fig:initial-slope-as-function-of-phi-large-EDM} yields an initial slope of
\begin{equation}
\left.\dot p_y(t)\right|_{t=0} = (\num{3959 \pm 35})\,\si{\per \second}\,,
\end{equation}
which agrees numerically well within errors with the value given in the last line of Eq.\,(\ref{eq:initial-slope-from-full-oscillation}).
\begin{figure}[tb]
\centering
\includegraphics[width = \columnwidth]{initial-slope-as-function-of-phi-large-EDM.jpg}
\caption{\label{fig:initial-slope-as-function-of-phi-large-EDM}
Initial slope as function of 24 random values of $\phi_\text{RF}$ using a field amplification factor $f_\text{ampl} = \num{e4}$ and the indicated EDM. The simulated data are fitted using the function indicated in the inset. The resulting parameters are $a = (\num{3959.122 \pm 35.344})$, $b = (\num{4135.901 \pm 0.009})$, and $c = (\num{39.861 \pm 25.995})$. Each data point is obtained from a graph like the one shown in Fig.\,\ref{fig:vertical-buildup-with-Wien-filter}, but for 10 turns and 1001 points.
}
\end{figure}
\subsubsection{Determination of the running spin tune, based on the polarization evolution $\vec S_2(t)$}
\label{sec:determination-of-spintune}
The standard definition of the spin tune as a rotation around the local stable spin axis $\vec n_s$ at every point in the machine does not involve a time dependence of the polarization evolution, such as the one generated by the RF Wien filter. When a time-dependent polarization is involved, the term \textit{running} or \textit{instantaneous spin tune} is used in the following. In that case, the direction of $\vec n_s$ also changes as a function of time, \textit{i.e.}, $\vec n_s \equiv \vec n_s(t)$ (see further Sec.\,\ref{sec:determination-spin-closed-orbit}).
Using the numerical simulations for $\vec S_2(t)$, or any other spin-evolution function, one can numerically determine the running spin tune in the following way. For this one needs three spin vectors from the spin-evolution function, say
\begin{equation}
\begin{split}
\vec a & = \vec S_2(t)\,, \\
\vec b & = \vec S_2(t+T_\text{rev})\,, \text{ and } \\
\vec c & = \vec S_2(t + 2\cdot T_\text{rev})\,.
\end{split}
\end{equation}
Using these three vectors, two more vectors are constructed,
\begin{equation}
\vec d(t) = \vec a - \vec b \quad \text{ and } \quad \vec e(t) = \vec a - \vec c \,.
\end{equation}
The in-plane angle between $\vec d(t)$ and $\vec e(t)$ can be used to determine the running, time-dependent spin tune $\nu_s(t)$. To this end, we define the normal vector $\vec N$ of the plane that contains $\vec d$ and $\vec e$,
\begin{equation}
\vec N = \frac{\vec d \times \vec e}{\left |\vec d \times \vec e \,\right|}\,,
\label{eq:definition-of-normal-vector-N}
\end{equation}
that corresponds to the \textit{instantaneous} (running) spin axis. Using $\vec N$, we find the in-plane components of $\vec b$ and $\vec c$, via
\begin{equation}
\vec b_\perp = \vec b \times \vec N \quad \text{ and } \quad \vec c_\perp = \vec c \times \vec N\,.
\end{equation}
The normalized versions of these vectors are called
\begin{equation}
\vec f = \frac{\vec b_\perp}{| \vec b_\perp |} \quad \text{ and } \quad \vec g = \frac{\vec c_\perp}{| \vec c_\perp |}\,,
\end{equation}
and the running spin tune is determined from
\begin{equation}
\nu_s (t) = \frac{1}{2\pi} \frac{G}{|G|} \arctan \left| \frac{\vec f(t) \times \vec g(t) }
{\vec f(t) \cdot \vec g(t)} \right|\,.
\label{eq:spin-tune-calculated-from-S8}
\end{equation}
The factors in front of the $\arctan$ ensure that $\nu_s(t)$ carries the correct sign, based on the $G$ factor and the number of spin precessions per turn.
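The procedure of Eqs.\,(\ref{eq:definition-of-normal-vector-N})--(\ref{eq:spin-tune-calculated-from-S8}) can be sketched compactly as follows (Python with NumPy; the deuteron value $G \approx \num{-0.143}$ used as a default is an assumption of this illustration):

```python
import numpy as np

def running_spin_tune(a, b, c, G=-0.143):
    """Running spin tune from three spin vectors one revolution apart,
    following Eqs. (definition-of-normal-vector-N) to
    (spin-tune-calculated-from-S8)."""
    d, e = a - b, a - c
    N = np.cross(d, e)
    N /= np.linalg.norm(N)        # instantaneous (running) spin axis
    f = np.cross(b, N)
    f /= np.linalg.norm(f)        # normalized in-plane component of b
    g = np.cross(c, N)
    g /= np.linalg.norm(g)        # normalized in-plane component of c
    angle = np.arctan(np.linalg.norm(np.cross(f, g)) / abs(np.dot(f, g)))
    return np.sign(G) * angle / (2.0 * np.pi)
```

For spins precessing rigidly about $\vec e_y$, the recovered $|\nu_s|$ equals the per-turn precession angle divided by $2\pi$ (modulo the branch of the $\arctan$).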
\begin{figure*}[htb]
\centering
\includegraphics[width = 1\textwidth]{spin-tune-plot.jpg}
\caption{\label{fig:spin-tunes}
\small The graph on the \textbf{left} shows the spin tune in the machine, calculated using Eq.\,(\ref{eq:spin-tune-calculated-from-S8}) at the conditions of Table\,\ref{tab:list-of-parameters} for $d = 0$ (red) and $d = \SI{1e-20}{e.cm}$ (blue), when the RF Wien filter is switched OFF. On the \textbf{right}, the RF Wien filter is switched ON in EDM mode with $f_\text{ampl} = 1$. The red curve shows the spin oscillation frequency $f_s$ from Eq.\,(\ref{eq:spin-precession-frequency}), and the blue line the running spin tune difference $\nu_s(t) - \nu_s^{(1)}$ for each turn. It should be noted that the initial spin vector $\vec S_0$ is not in the ring ($xz$) plane (see Fig.\,\ref{fig:tilt-angle-xi}).}
\end{figure*}
As a cross check of the algorithm, with the RF Wien filter switched off, Eq.\,(\ref{eq:spin-tune-calculated-from-S8}) yields, for the beam conditions given in Table\,\ref{tab:list-of-parameters},
\begin{widetext}
\begin{equation}
\begin{split}
\text{for } d = 0: \quad \nu_s^{(0)} = G \gamma = & -\num{1.609771846321990e-01} \,, \\
\text{for } d = \SI{1e-20}{e.cm}: \quad \nu_s^{(1)} = \frac{G \gamma}{\cos\xi_\text{EDM}} = & -\num{1.609771846329495e-01} \,,\text{and } \\
\Delta \nu_s = \nu_s^{(0)} - \nu_s^{(1)} = & +\num{7.505e-13}\,,
\end{split}
\label{eq:spin-tunes-numerical-values}
\end{equation}
\end{widetext}
where all three numbers have been calculated using Eq.\,(\ref{eq:spin-tune-calculated-from-S8}). As an additional cross check, the difference of the spin tunes
\begin{equation}
\frac{\nu_s^{(0)}}{\cos \xi_\text{EDM}} - \nu_s^{(1)} \approx \num{e-16}\,,
\end{equation}
which is very close to the achievable machine precision\footnotemark[4].
As prescribed by $\vec S_2(t)$ in Eq.\,(\ref{eq:polarization-evolution-with-WF}), the spin tune remains constant within each turn (see Fig.\,\ref{fig:spin-tunes}). When the RF Wien filter is switched on, the additional spin rotation in the time-varying RF field makes the spin tune jump from turn to turn. The oscillation amplitude of the spin-tune variation induced by the RF Wien filter at a power of \SI{1}{kW} (see Table\,\ref{tab:list-of-parameters}) is consistent with the expectation from the spin-rotation formalism
\begin{equation}
a = (\num{5.7 \pm 0.2}) \times \num{e-7} \approx \frac{|\psi_\text{WF}|}{2\pi} = \num{6.0e-7}\,.
\end{equation}
The \textit{average} spin tune, however, remains constant.
\vspace{0.1cm}
\subsubsection{Instantaneous spin orbit determination based on $\vec S_2(t)$ \label{sec:determination-spin-closed-orbit}}
The running spin orbit vector $\vec n_s$ can be easily determined from the procedure of the previous section, using the normal vector $\vec N$, defined in Eq.\,(\ref{eq:definition-of-normal-vector-N}),
\begin{equation}
\vec n_s (t) = \vec N(t) \,.
\label{eq:spin-closed-orbit-vector-as-fct-of-time}
\end{equation}
Similarly to the running (instantaneous) spin tune, the instantaneous spin orbit (running spin axis) exhibits oscillatory in-plane components.
\vspace{0.1cm}
\section{Polarization evolution with RF Wien filter and solenoids}
\label{sec:polarization-evolution-with-RF-Wien-filter-and-solenoids}
\subsection{Evolution equation with additional static solenoids}
In the course of this paper, with the RF Wien filter in EDM mode ($\vec B^\text{WF} \parallel \vec e_y$), the EDM interaction with the motional electric field in the ring was the only source of up-down spin oscillations.
In the following, two static solenoids in the straight sections will be added to the ring. In addition, we shall allow for rotations of the RF Wien filter around the longitudinal $\vec e_z$ (momentum) direction. Such rotations induce a radial magnetic RF field and, in conjunction with the solenoidal magnetic fields, mix the EDM- and MDM-induced rotations. The idea, common to all EDM experiments, is to disentangle the EDM signal by extrapolating to a vanishing MDM contribution\,\cite{Afach:2015sja,Afach:2015ima}.
With two static solenoids added to the ring, the resulting sequence of elements is depicted in Fig.\,\ref{fig:ring-with-two-more-solenoids}.
\begin{figure*}[htb]
\centering
\resizebox{0.75\textwidth}{!}{
\begin{tikzpicture}[scale=1,cap=round,>=latex]
\filldraw (0,2) circle (1pt);
\filldraw (0,-2) circle (1pt);
\centerarc[black,thick](0,2)(0:180:3);
\centerarc[black,thick](0,-2)(180:360:3);
\draw[dashed,thick] (-3,2) -- (-3,-2);
\draw[dashed,thick] (3,2) -- (3,-2 );
\draw[very thin, gray] (-4.8,2) -- (4,2);
\draw[very thin, gray] (-4,-2) -- (4,-2);
\foreach \i in {1,2,...,12} {
\filldraw[black] ({-3*cos(172.5 - (\i-1) * 15)},{3*sin(172.5 - (\i-1) * 15)+2}) circle (2pt);
\draw ({-3.6*cos(7.5 + (\i-1) * 15)},{3.6*sin(7.5 + (\i-1) * 15)+2}) node {D$_{\i}$};
\draw[very thin, gray] (0,2) -- ({-3.2*cos((\i-1) * 15)},{3.2*sin((\i-1) * 15)+2});
}
\foreach \i in {13,14,...,24} {
\filldraw[black] ({-3*cos(172.5 - (\i-1) * 15)},{3*sin(172.5 - (\i-1) * 15) - 2}) circle (2pt);
\draw ({-3.6*cos(7.5 + (\i-1) * 15)},{3.6*sin(7.5 + (\i-1) * 15) - 2}) node {D$_{\i}$};
\draw[very thin, gray] (0,-2) -- ({-3.2*cos((\i-1) * 15)},{3.2*sin((\i-1) * 15) - 2});
}
\centerarc[-stealth,dashed, very thick](0,2)(180:150:4.5);
\draw[black,very thick] (-7.0,2.1) node[anchor=west]{$t = 0, \, T_\text{rev}$,};
\draw[black,very thick] (-6.2,1.6) node[anchor=west]{\ldots, $n \cdot T_\text{rev}$};
\draw[black,very thick] (4.3,2.1) node[anchor=west]{$t = 0.5 \cdot T_\text{rev}$, $1.5 \cdot T_\text{rev}$,};
\draw[black,very thick] (4.3,1.6) node[anchor=west]{\ldots, $ (n + \frac{1}{2}) \cdot T_\text{rev}$};
\draw[black,very thick] (4.3,-1.6) node[anchor=west]{$t = 0.5 \cdot T_\text{rev}$, $1.5 \cdot T_\text{rev}$,};
\draw[black,very thick] (4.3,-2.1) node[anchor=west]{\ldots, $ (n + \frac{1}{2}) \cdot T_\text{rev}$};
\draw[thick] (-4.8,2) -- (-4.2,2);
\filldraw[red] (-3,1.5) circle(3pt);
\draw (-1.9,1.5) node {Wien filter};
\draw (-5.3,3.9) node {$t \in (0, T_\text{rev})$};
\filldraw[blue] (-3,0) circle(3pt);
\draw (-2.5,0) node {S$_2$};
\filldraw[blue] (3,0) circle(3pt);
\draw (2.5,0) node {S$_1$};
\end{tikzpicture}}
\caption{\label{fig:ring-with-two-more-solenoids}
Sequence of elements in the ring, corresponding to Eq.\,(\ref{eq:polarization-evolution-with-WF-and-two-solenoids}), including besides the RF Wien filter, also two static solenoids
S$_1$ and S$_2$.
}
\end{figure*}
The one-turn ring matrix can be split into two arcs, one arc made of the dipole magnets D$_1$ to D$_{12}$, and the second arc made of dipoles D$_{13}$ to D$_{24}$. Since
\begin{equation}
\mathbf{U}_\text{ring}\left(\vec c, T_\text{rev}\right) = \mathbf{U}_\text{ring}^\text{arc\,2}\left(\vec c, T_\text{rev}/2 \right) \times \mathbf{U}_\text{ring}^\text{arc\,1}\left(\vec c, T_\text{rev}/2 \right)\,,
\label{eq:evolution-equation-with-additional-solenoids}
\end{equation}
the two additional solenoids can be inserted before and behind arc\,2, leading to
\begin{widetext}
\begin{equation}
\mathbf{U}_\text{ring}^\text{2\,sol}\left(\vec c, T_\text{rev}, \chi_\text{rot}^{\text{S}_1}, \chi_\text{rot}^{\text{S}_2} \right)
= \mathbf{R} \left(\vec e_z, \chi_\text{rot}^{\text{S}_2}\right)
\times \mathbf{U}_\text{ring}^\text{arc\,2}\left(\vec c, T_\text{rev}/2 \right)
\times \mathbf{R}\left(\vec e_z, \chi_\text{rot}^{\text{S}_1}\right)
\times \mathbf{U}_\text{ring}^\text{arc\,1}\left(\vec c, T_\text{rev}/2 \right)\,,
\label{eq:one-turn-matrix-with-two-solenoids}
\end{equation}
\end{widetext}
invoking again the generic rotation matrix $\mathbf{R}(\vec e_z, \chi_\text{rot})$ from Eq.\,(\ref{eq:generic-rotation-matrix1}).
In a similar fashion as in Eq.\,(\ref{eq:polarization-evolution-with-WF}), one can write for the polarization evolution,
\begin{widetext}
\begin{equation}
\begin{split}
\vec S_3(t) =
& \underbrace{\mathbf{U}_\text{ring} (\vec c, t - n\cdot T_\text{rev})}_{\text{rest of last turn}} \times
\underbrace{\left[ \mathbf{U}_\text{WF} (t=n \cdot T_\text{rev}) \times \mathbf{U}_\text{ring}^\text{2\,sol} \left(\vec c, T_\text{rev},\chi_\text{rot}^{\text{S}_1}, \chi_\text{rot}^{\text{S}_2} \right) \right]}_{\text{turn n}} \\
& \times \ldots \\
& \times \underbrace{\left[ \mathbf{U}_\text{WF} (t=2\cdot T_\text{rev}) \times \mathbf{U}_\text{ring}^\text{2\,sol} \left(\vec c, T_\text{rev},\chi_\text{rot}^{\text{S}_1}, \chi_\text{rot}^{\text{S}_2} \right) \right]}_{\text{turn 2}} \times
\underbrace{\left[ \mathbf{U}_\text{WF} (t= T_\text{rev}) \times \mathbf{U}_\text{ring}^\text{2\,sol} \left(\vec c, T_\text{rev},\chi_\text{rot}^{\text{S}_1}, \chi_\text{rot}^{\text{S}_2}\right) \right]}_{\text{turn 1}} \times \vec S_0\,.
\end{split}
\label{eq:polarization-evolution-with-WF-and-two-solenoids}
\end{equation}
\end{widetext}
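The nested turn-by-turn structure of Eq.\,(\ref{eq:polarization-evolution-with-WF-and-two-solenoids}) can be sketched as a simple loop. The toy model below is not the simulation used in this paper: both arcs are idealized as pure spin precessions about the vertical axis, and the Wien filter kick is a small rotation about an assumed (vertical) axis, sampled once per turn; all names are illustrative.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis by 'angle' (Rodrigues formula)."""
    a = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0., -a[2], a[1]],
                  [a[2], 0., -a[0]],
                  [-a[1], a[0], 0.]])
    return np.eye(3) + np.sin(angle) * K + (1. - np.cos(angle)) * (K @ K)

EY, EZ = np.array([0., 1., 0.]), np.array([0., 0., 1.])

def one_turn_two_sol(nu_s, chi1, chi2):
    """U_ring^(2 sol): half arc, solenoid S1, half arc, solenoid S2."""
    half_arc = rot(EY, np.pi * nu_s)       # spin precession over half a turn
    return rot(EZ, chi2) @ half_arc @ rot(EZ, chi1) @ half_arc

def evolve_S3(S0, nu_s, chi1, chi2, psi_wf, nu_wf, n_turns):
    """Apply ring + solenoids each turn, then one Wien filter kick,
    mimicking the bracketed per-turn factors of the evolution equation."""
    S = np.asarray(S0, float)
    for n in range(1, n_turns + 1):
        S = one_turn_two_sol(nu_s, chi1, chi2) @ S
        # WF kick about its (here: vertical) axis; nu_wf is the WF
        # frequency in units of f_rev, the phase is sampled once per turn
        S = rot(EY, psi_wf * np.cos(2.0 * np.pi * nu_wf * n)) @ S
    return S
```

Since every factor is an orthogonal matrix, the polarization magnitude is preserved exactly, which provides a simple consistency check of any implementation.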
\subsection{Spin-rotation angle in a static solenoid}
In a solenoidal magnet with a field integral $\text{BDL} = \int B_\parallel \text{d} \ell$, the spins are rotated around the longitudinal direction $\vec e_z$, and the rotation angle is given by
\begin{equation}
\chi_\text{rot}^{\text{Sol}} = - \frac{q}{m} \cdot \frac{(1 + G)}{\gamma \beta c} \int B_\parallel \text{d} \ell \,.
\end{equation}
The spin rotation angle in the solenoid for deuterons at a momentum of $P=\SI{970}{MeV \per c}$, normalized to the magnetic field integral, amounts to
\begin{equation}
\frac{\chi_\text{rot}^{\text{Sol}}}{\int B_\parallel \text{d} \ell} = \SI{-0.264872}{\radian \per \tesla \per \meter}\,.
\end{equation}
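As a minimal cross check, the quoted number can be reproduced from the formula above, using standard particle-data values for the deuteron (assumed here, not quoted from Table\,\ref{tab:list-of-parameters}):

```python
# Deuteron constants (standard values, assumed for this estimate)
QE    = 1.602176634e-19     # elementary charge [C]
MD_C2 = 1875.612928         # deuteron rest energy [MeV]
G_D   = -0.142987           # deuteron anomalous magnetic moment
C     = 299792458.0         # speed of light [m/s]

p_mev = 970.0                         # beam momentum [MeV/c]
beta_gamma = p_mev / MD_C2            # = p / (m c)
q_over_m = C**2 / (MD_C2 * 1e6)       # q/m = q c^2 / (m c^2) [C/kg]

# chi / int(B dl) = -(q/m) (1 + G) / (gamma beta c)   [rad / (T m)]
chi_per_bdl = -q_over_m * (1.0 + G_D) / (beta_gamma * C)
print(chi_per_bdl)                    # close to -0.2649 rad / (T m)
```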
\subsection{Spin tune and spin closed orbit with solenoids using $\vec S_3(t)$}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{spin-tune-change-as-fct-of-chi1-and-chi2.jpg}
\caption{\label{fig:spin-tune-change-as-fct-of-chi1-and-chi2}
\small Change of the spin tune $\Delta \nu_s(\chi_1, \chi_2)$ for deuterons using solenoids in the machine (see Fig.\,\ref{fig:ring-with-two-more-solenoids}) under the conditions of Table\,\ref{tab:list-of-parameters} using Eq.\,(\ref{eq:spin-tune-calculated-from-S8}) and $\vec S_3(t)$ from Eq.\,(\ref{eq:polarization-evolution-with-WF-and-two-solenoids}). Panels (a) and (c) show for $d=0$ $\Delta \nu_s(\chi_1, \chi_2) = \nu_s(t) - \nu_s^{(0)}$, while (b) and (d) show for $d=\SI{e-20}{e.cm}$ $\Delta \nu_s(\chi_1, \chi_2) = \nu_s(t) - \nu_s^{(1)}$.
Panel (a) and (b): $\chi_2=0$, c): $\chi_1=\chi_2$, and (d): $\chi_1 = -\chi_2$. $\nu_s^{(0)}$ and $\nu_s^{(1)}$ are given in the inserts [see also Eq.\,(\ref{eq:spin-tunes-numerical-values})]. Residuals show the difference between the simulations (\textcolor{blue}{$\circ$}) and the approximations from Eq.\,(\ref{eq:spin-tune-change-as-fct-of-chi1-and-chi2}) (red lines).
}
\end{figure*}
In the following, the abbreviation, \textit{e.g.}, $\chi_\text{rot}^{\text{Sol 1}} = \chi_1$ is used. For an ideal ring, free of magnetic imperfections, the spin tune change $\Delta \nu_s (\chi_1, \chi_2)$ due to the solenoids S$_1$ and S$_2$ in the ring (see Fig.\,\ref{fig:ring-with-two-more-solenoids}) follows from Eq.\,(30) of Ref.\,\cite{PhysRevAccelBeams.20.072801}, whose left-hand side can be approximated by $\pi \Delta \nu_s(\chi_1, \chi_2) \cdot \sin(\pi \nu_s^0)$, where $\nu_s^0$ denotes the unperturbed spin tune in the machine. For small spin-rotation angles in the solenoids, Eq.\,(30) thus yields
\begin{equation}
\Delta \nu_s(\chi_1, \chi_2) = \frac{2\chi_1\chi_2 + \cos\left( \pi \nu_s^0 \right) \cdot \left( \chi_1^2 + \chi_2^2 \right) }{8\pi\sin \left(\pi\nu_s^0 \right) }\,.
\label{eq:spin-tune-change-as-fct-of-chi1-and-chi2}
\end{equation}
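Equation\,(\ref{eq:spin-tune-change-as-fct-of-chi1-and-chi2}) can also be checked numerically against an exact one-turn spin-rotation matrix. In the illustrative sketch below (not the simulation code used here), each arc is modeled as a pure precession about the vertical axis, and the spin tune is extracted from the trace of the one-turn matrix:

```python
import numpy as np

def rot(axis, angle):
    """Rodrigues rotation matrix about a unit axis."""
    a = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0., -a[2], a[1]], [a[2], 0., -a[0]], [-a[1], a[0], 0.]])
    return np.eye(3) + np.sin(angle) * K + (1. - np.cos(angle)) * (K @ K)

def dnu_exact(nu0, chi1, chi2):
    """Spin-tune change of: half arc - solenoid S1 - half arc - solenoid S2."""
    ey, ez = np.array([0., 1., 0.]), np.array([0., 0., 1.])
    half_arc = rot(ey, np.pi * nu0)
    M = rot(ez, chi2) @ half_arc @ rot(ez, chi1) @ half_arc
    cos_2pinu = (np.trace(M) - 1.0) / 2.0   # full rotation angle from trace
    nu = np.sign(nu0) * np.arccos(np.clip(cos_2pinu, -1., 1.)) / (2.0 * np.pi)
    return nu - nu0

def dnu_approx(nu0, chi1, chi2):
    """Small-angle approximation of the text."""
    return (2. * chi1 * chi2 + np.cos(np.pi * nu0) * (chi1**2 + chi2**2)) \
        / (8. * np.pi * np.sin(np.pi * nu0))
```

For solenoid rotation angles of order \SI{1}{\degree}, the exact and approximate spin-tune changes agree to better than one part in $10^{3}$ of $\Delta\nu_s$.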
In order to validate the spin evolution equation for $\vec S_3(t)$, given in Eq.\,(\ref{eq:polarization-evolution-with-WF-and-two-solenoids}), in Fig.\,\ref{fig:spin-tune-change-as-fct-of-chi1-and-chi2} the spin tune changes $\Delta \nu_s$ are compared to the approximation of Eq.\,(\ref{eq:spin-tune-change-as-fct-of-chi1-and-chi2}) for four different cases.
\subsection{Spin-closed orbit in a non-ideal lattice}
Static solenoids or magnetic imperfections in the ring affect the spin-closed orbit vector $\vec n_s = \vec c$\, in the machine. The situation is similar to the one depicted in Fig.\,\ref{fig:tilt-angle-xi}, where, however, only the tilt due to the EDM was taken into account. The effect of static solenoids in the ring can be numerically evaluated using Eq.\,(\ref{eq:spin-closed-orbit-vector-as-fct-of-time}) with $\vec S_3(t)$ [Eq.\,(\ref{eq:polarization-evolution-with-WF-and-two-solenoids})].
Since the time $t$ begins to count right behind the RF Wien filter (see Fig.\,\ref{fig:ring-with-two-more-solenoids}), evaluation of Eq.\,(\ref{eq:spin-closed-orbit-vector-as-fct-of-time}) at $t = T_\text{rev}$ (or integer multiples of $T_\text{rev}$ [see Eq.\,(\ref{eq:mod-condition-to-evaluate-wf-once-per-turn})]), yields the orientation of the spin-closed orbit vector $\vec c$ at the RF Wien filter
\begin{equation}
\vec c = \vec n_s(t=T_\text{rev})\,.
\end{equation}
Figure\,\ref{fig:spinclosedorbitatWF} shows how the axis $\vec c = (c_x, c_y, c_z)$ is affected by the solenoids S$_1$ and S$_2$, and the presence of an EDM $d$.
\begin{figure*}[htb]
\centering
\includegraphics[width=\textwidth]{plot-spinclosedorbitatWienfilter.jpg}
\caption{\label{fig:spinclosedorbitatWF} Six panels showing the components of $\vec c = (c_x, c_y, c_z)$ for different combinations of rotations in the solenoids, for deuterons at a momentum of $P=\SI{970}{MeV \per c}$.
}
\end{figure*}
For reference, a number of special cases are numerically evaluated in Table\,\ref{table:various-chis-and-SCO-vector}.
\begin{table*}[htb]
\begin{ruledtabular}
\caption{ \label{table:various-chis-and-SCO-vector}
\small Components of the spin closed orbit vector $\vec c = (c_x, c_y, c_z)$ right at the RF Wien filter, for different settings of the solenoids S$_1$ and S$_2$ in the machine (see Fig.\,\ref{fig:ring-with-two-more-solenoids}).}
\begin{tabular}{rrrrrr}
$\chi_1$\,[\si{\degree}] & $\chi_2$\,[\si{\degree}] & $d$\,[e\, cm] & $c_x$ & $c_y$ & $c_z$ \\\hline
$0$ & $0$ & 0 & \num{0.000000e+00} & \num{1.000000e+00} & \num{0.000000e+00} \\
$0$ & $0$ & \num{e-20} & \num{-3.053662e-06} & \num{1.000000e+00} & \num{4.255557e-17} \\
$1$ & $0$ & \num{e-20} & \num{-3.053167e-06} & \num{9.998378e-01} & \num{1.801136e-02} \\
$0$ & $1$ & \num{e-20} & \num{-8.728505e-03} & \num{9.998378e-01} & \num{1.575676e-02} \\
$1$ & $1$ & \num{e-20} & \num{-8.724615e-03} & \num{9.993921e-01} & \num{3.375307e-02} \\
$1$ & $-1$ & \num{e-20} & \num{8.723460e-03} & \num{9.999594e-01} & \num{2.254871e-03} \\
$-1$ & $1$ & \num{e-20} & \num{-8.729567e-03} & \num{9.999594e-01} & \num{-2.254871e-03} \\
$-1$ & $-1$ & \num{e-20} & \num{8.718511e-03} & \num{9.993922e-01} & \num{-3.375307e-02}
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{figure*}[t]
\centering
\subfigure[\label{fig:ResonanceStrength-3by3a}
$\left(\phi_\text{rot}^\text{WF}, \chi_\text{rot}^\text{Sol\,1}\right) = \left(\SI{-1}{\degree}, \SI{-1}{\degree}\right)$]
{\includegraphics[width=0.47\textwidth]{plots-Sout10-3by3-1.jpg}}
\hspace{0.2cm}
\subfigure[\label{fig:ResonanceStrength-3by3e}
$\left(\phi_\text{rot}^\text{WF}, \chi_\text{rot}^\text{Sol\,1}\right) = \left(\SI{0}{\degree}, \SI{0}{\degree}\right)$]
{\includegraphics[width=0.47\textwidth]{plots-Sout10-3by3-middle-panel.jpg}}
\caption{
\label{fig:ResonanceStrength-3by3}
Two examples for the evolution of $p_y(t)$ using $\vec S_3(t)$ from Eq.\,(\ref{eq:polarization-evolution-with-WF-and-two-solenoids}) for different combinations of Wien filter and solenoid spin rotation angle, denoted by
$\left(\phi_\text{rot}^\text{WF},\chi_\text{rot}^\text{Sol\,1}\right)$, where $\chi_\text{rot}^\text{Sol\,2} = 0$. The parameters used for the calculation are indicated in each panel. For the beam, the conditions of Table\,\ref{tab:list-of-parameters} apply. The Wien filter is operated at harmonic $K = -1$. The EDM assumed in panel (b) is $1000$ times larger than in (a). The ratio of the fitted oscillation amplitudes in panels (a) and (b) is compatible with the expectation of a factor $\sqrt{2}/2$ [see Eq.\,(\ref{eq:amplitude-ratio-sqrt2-over-2})].
}
\end{figure*}
\subsection{Strength of the EDM resonance}
As depicted in Fig.\,\ref{fig:spin-tunes}, and already discussed in Sec.\,\ref{sec:determination-of-spintune}, the operation of the RF Wien filter modulates the spin tune. While the \textit{average} spin tune is equal to the one obtained when the RF Wien filter is switched off, solenoids and magnet misalignments in the ring do affect the spin tune. Therefore, the spin-precession frequency, and thus the frequency at which the RF Wien filter should be operated, differs from the unperturbed one. The spin tune $\nu_s$ must be determined anew for every solenoid setting to ensure that the resonance frequency of the RF Wien filter is given by
\begin{equation}
f_\text{WF} = \left( K + \nu_s \right) \cdot f_\text{rev}\,, \quad K \in \mathbb{Z} \,,
\label{eq:WF-frequencies2}
\end{equation}
and this frequency needs to be used in $\psi(t)$ [Eq.\,(\ref{eq:psi-of-t-in-wien-filter-including-phase})], as it controls the RF Wien filter spin-rotation matrix $\mathbf{R}(\vec n_\text{WF}, \psi(t))$ [Eq.\,(\ref{eq:Wien-filter-matrix})].
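As a numerical illustration of Eq.\,(\ref{eq:WF-frequencies2}), the sketch below lists candidate Wien filter frequencies for a few harmonics; the revolution frequency used is a placeholder, not the value from Table\,\ref{tab:list-of-parameters}:

```python
# Candidate RF Wien filter frequencies from f_WF = (K + nu_s) * f_rev.
f_rev = 750.0e3          # revolution frequency [Hz] (assumed placeholder)
nu_s = -0.160977         # deuteron spin tune, cf. the text

for K in (-2, -1, 0, 1):
    f_wf = (K + nu_s) * f_rev
    print(f"K = {K:+d}: |f_WF| = {abs(f_wf) / 1e3:.3f} kHz")
```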
The EDM resonance strength $\varepsilon^\text{EDM}$, actually a \textit{resonance tune}, is defined as the ratio of the angular frequency of the vertical polarization oscillation $\Omega^{p_y}$ induced by the EDM relative to the orbital angular frequency $\Omega^\text{rev}$,
\begin{equation}
\varepsilon^\text{EDM} = \frac{\Omega^{p_y}}{\Omega^\text{rev}}\,.
\label{eq:resonance-strength}
\end{equation}
Since $\Omega^{p_y}$ corresponds to $\omega$ [first line in Eq.\,(\ref{eq:initial-slope-from-full-oscillation})], the resonance strength can in principle be determined from a single observation of $\Omega^{p_y}$.
Alternatively, the resonance strength can be determined from the last line in Eq.\,(\ref{eq:initial-slope-from-full-oscillation}) via
\begin{equation}
\varepsilon^\text{EDM} = \frac{\left. \dot p_y(t) \right|_{t=0}}{a\,\cos\phi} \cdot \frac{1}{\Omega^\text{rev}} \,,
\label{eq:resonance-strength-from-phi-variation}
\end{equation}
but this requires the initial slopes to be determined as a function of, \textit{e.g.}, $\phi = \phi_\text{RF}$. The statistical aspects of this procedure will be further elucidated in Sec.\,\ref{sec:comparison-of-different-epslion-extractions}.
\subsubsection{Evolution of $p_y(t)$ as function of $\phi_\text{rot}^\text{\rm WF}$ and $\chi_\text{rot}^\text{\rm Sol\,1}$}
\label{sec:Evolution-of-py-as-function-of-phiWF-and-chiSol}
The EDM resonance strength $\varepsilon^\text{EDM}$ [Eq.\,(\ref{eq:resonance-strength})] manifests itself in the oscillation frequency, as illustrated in Fig.\,\ref{fig:ResonanceStrength-3by3} for two pairs of Wien filter rotation angle and spin-rotation angle in solenoid S$_1$, $(\phi_\text{rot}^\text{WF},\chi_\text{rot}^\text{Sol\,1})$, where $\chi_\text{rot}^\text{Sol\,2} = 0$.
The resulting oscillation pattern of $p_y$ is fitted using
\begin{equation}
f(t) = a \sin (\omega\,t + \phi) +b \,,
\label{eq:function-fitted-to-oscillation-pattern-to-get-resonance-strength}
\end{equation}
The fitted amplitude $a$ and frequency $\omega$ are given in each panel, together with various other parameters. The calculation for the ideal ring situation in panel (b) uses a $\num{1000}$ times larger assumed EDM value of $d = \SI{e-17}{e.cm}$ and a larger number of turns, $n_\text{turns} = \num{100000}$, in order to make the oscillations of $p_y(t)$ visible as well.
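The extraction of $\varepsilon^\text{EDM}$ from such an oscillation pattern can be sketched as follows, here on synthetic stand-in data (all numbers, including the revolution frequency, are illustrative). The fit model is that of Eq.\,(\ref{eq:function-fitted-to-oscillation-pattern-to-get-resonance-strength}), and the resonance tune follows from Eq.\,(\ref{eq:resonance-strength}):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, omega, phi, b):
    # f(t) = a sin(omega t + phi) + b
    return a * np.sin(omega * t + phi) + b

# Synthetic p_y(t) data standing in for a simulated slow EDM oscillation
rng = np.random.default_rng(1)
t = np.linspace(0.0, 50.0, 200)                 # [s]
py = model(t, 2.1e-6, 0.31, 0.4, 1.0e-8)
py += rng.normal(0.0, 1.0e-8, t.size)           # small measurement noise

popt, _ = curve_fit(model, t, py, p0=[2e-6, 0.3, 0.0, 0.0])
a_fit, omega_fit = popt[0], popt[1]

Omega_rev = 2.0 * np.pi * 750.0e3               # assumed revolution frequency [rad/s]
eps_edm = abs(omega_fit) / Omega_rev            # resonance tune
```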
\subsubsection{Comparison of $\varepsilon^\text{\rm EDM}$ from $\Omega^{p_y}$ and $\dot p_y(t)|_{t=0}$ by variation of $\phi_\text{\rm RF}$}
\label{sec:comparison-of-different-epslion-extractions}
One would expect that the variation of the RF phase $\phi_\text{RF}$ affects the resulting oscillation amplitudes $a$ and offsets $b$ of Fig.\,\ref{fig:ResonanceStrength-3by3}, while the oscillation frequencies $\omega$, and thus the resonance strengths $\varepsilon^\text{EDM}$, remain unchanged.
In the panels of Fig.\,\ref{fig:ResonanceStrength-3by3-phiRF-variation}, for the same combinations of $\left( \phi_\text{rot}^\text{WF}, \chi_\text{rot}^\text{Sol\,1} \right)$ shown in Fig.\,\ref{fig:ResonanceStrength-3by3}, $\dot p_y(t)|_{t=0}$ and the oscillation frequency $\omega$ are computed for 36 randomly picked values of $\phi_\text{RF}$. The graphs illustrate that in the presence of solenoid fields and RF Wien filter misalignments, the determination of $\dot p_y(t)|_{t=0}$ by variation of $\phi_\text{RF}$, making use of Eq.\,(\ref{eq:resonance-strength-from-phi-variation}), yields results comparable to the direct determination of the resonance strength from the oscillation frequency $\Omega^{p_y}$ via Eq.\,(\ref{eq:resonance-strength}). The oscillation amplitudes $a$ and $\dot{p}_y|_{t=0}$ exhibit an identical dependence on $\phi_\text{RF}$, while the obtained resonance tune $\varepsilon^\text{EDM}$ remains constant over the whole range of $\phi_\text{RF}$.
\begin{figure*}[htb]
\centering
\subfigure[\label{fig:ResonanceStrength-3by3-phiRF-variation-a}
$(\phi_\text{rot}^\text{WF}, \chi_\text{rot}^\text{Sol\,1}) = (\SI{-1}{\degree}, \SI{-1}{\degree})$]
{\includegraphics[width=0.49\textwidth]{plot-out3by3RFphases-Sout10-set-1000.jpg}}
\hspace{0.1cm}
\subfigure[\label{fig:ResonanceStrength-3by3-phiRF-variation-e}
$(\phi_\text{rot}^\text{WF}, \chi_\text{rot}^\text{Sol\,1}) = (\SI{0}{\degree}, \SI{0}{\degree})$, $n_\text{turns} = \num{e5}$, $d = \SI{e-17}{e.cm}$.]
{\includegraphics[width=0.49\textwidth]{plot-out3by3RFphases-Sout10-set-1004.jpg}}
\caption{ \label{fig:ResonanceStrength-3by3-phiRF-variation} Two examples showing 36 random values of $\phi_\text{RF}$ that are used to obtain the resonance strengths $\varepsilon^\text{EDM}$ from graphs like those shown in Fig.\,\ref{fig:ResonanceStrength-3by3} using Eqs.\,(\ref{eq:resonance-strength}) and (\ref{eq:resonance-strength-from-phi-variation}) for combinations of the Wien filter and solenoid spin rotation angle, denoted by $\left(\phi_\text{rot}^\text{WF},\chi_\text{rot}^\text{Sol\,1}\right)$. Depicted here as function of the randomly chosen $\phi_\text{RF}$ are the extracted initial slopes $\dot p_y(t)|_{t=0}$, $\omega = \Omega^{p_y}$, and the amplitude $a$ of the $p_y$ oscillation [Eq.\,(\ref{eq:function-fitted-to-oscillation-pattern-to-get-resonance-strength})]. The parameters used for the calculation are $n_\text{turns} = \num{2e4}$, $n_\text{points} = 200$, and $d = \SI{e-20}{e.cm}$. In panel (b), $n_\text{turns} = \num{e5}$, and the assumed EDM is $d = \SI{e-17}{e.cm}$ , \textit{i.e.}, $1000$ times larger than in (a), in order to enhance the effect. For the beam, the conditions of Table\,\ref{tab:list-of-parameters} apply. The RF Wien filter is operated at harmonic $K = -1$. The extracted resonance strengths are summarized in Table\,\ref{tab:comparison-Omega_y-todot-py}.}
\end{figure*}
The resonance strengths extracted from $\dot p_y(t)|_{t=0}$ and $\Omega^{p_y}$ make use of the very same simulated data. The results are summarized in Table\,\ref{tab:comparison-Omega_y-todot-py}, where numbers that should match are set in the same color. Although the different extraction methods show good overall agreement, the uncertainties of $\varepsilon^\text{EDM}(\Omega^{p_y})$ are smaller than those of $\varepsilon^\text{EDM}(\dot p_y|_{t=0})$ by a factor of at least $20$. The reason is that frequencies can in general be measured more accurately than other quantities, and the determination of $\varepsilon^\text{EDM}(\Omega^{p_y})$ involves fewer uncertainties in the error propagation. The most accurate determinations are obtained from $\Omega^{p_y}$ when $\chi_\text{rot}^\text{Sol\,1} = 0$.
\begin{table*}[!]
\caption{\label{tab:comparison-Omega_y-todot-py} Resonance strengths extracted from Fig.\,\ref{fig:ResonanceStrength-3by3-phiRF-variation} for nine different combinations $\left( \phi_\text{rot}^\text{WF}, \chi_\text{rot}^\text{Sol\,1} \right)$ for an otherwise ideal COSY ring assuming a deuteron EDM of $d = \SI{e-20}{e.cm}$ (for (b), at $(\SI{0}{\degree}, \SI{0}{\degree})$, $d = \SI{e-17}{e.cm}$). The beam conditions are given in Table\,\ref{tab:list-of-parameters} using the \textit{real} field magnitudes of the RF Wien filter, since $f_\text{ampl}$ has been divided out. For the calculations, $n_\text{turns} = \num{2e4}$ and $n_\text{points} = 200$, except for (b), where $n_\text{turns} = \num{e5}$.}
\renewcommand{\arraystretch}{1.25}
\begin{ruledtabular}
\begin{tabular}{rlccc}
\multicolumn{2}{c}{[$\num{e-11}$ Hz]} & \multicolumn{3}{c}{ $\left(\phi_\text{rot}^\text{WF},\chi_\text{rot}^\text{Sol\,1}\right)$}\\\hline
& & $(-\SI{1}{\degree}, -\SI{1}{\degree})$ & $\left( \SI{0}{\degree}, -\SI{1}{\degree} \right)$ & $(\SI{1}{\degree}, -\SI{1}{\degree})$ \\
\multirow{ 2}{*}{$\varepsilon^\text{EDM}$} & from $\dot p_y|_{t=0}$ & \textcolor{blue}{$\num{745.563 \pm 3.910}$} & \textcolor{magenta}{$\num{539.778 \pm 1.695}$} & \textcolor{blue}{$\num{750.455 \pm 3.312}$} \\
& from $\Omega^{p_y}$ & \textcolor{blue}{$\num{750.017 \pm 0.117}$} & \textcolor{magenta}{$\num{538.659 \pm 0.099}$} & \textcolor{blue}{$\num{749.840 \pm 0.128}$} \\\hline
& & $(-\SI{1}{\degree}, \SI{0}{\degree})$ & $(\SI{0}{\degree}, \SI{0}{\degree})$ & $(\SI{1}{\degree}, \SI{0}{\degree})$ \\
\multirow{ 2}{*}{$\varepsilon^\text{EDM}$} & from $\dot p_y|_{t=0}$ & \textcolor{red}{$\num{517.167 \pm 2.741}$} & $(\num{90.251 \pm 0.404})\cdot \num{e-3}$ & \textcolor{red}{$\num{518.440 \pm 2.284}$} \\
& from $\Omega^{p_y}$ & \textcolor{red}{$\num{521.890 \pm 0.001}$} & $(\num{91.312 \pm 0.005})\cdot \num{e-3}$ & \textcolor{red}{$\num{521.681 \pm 0.001}$} \\\hline
& & $(-\SI{1}{\degree}, \SI{1}{\degree})$ & $(\SI{0}{\degree}, \SI{1}{\degree})$ & $(\SI{1}{\degree}, \SI{1}{\degree})$ \\
\multirow{ 2}{*}{$\varepsilon^\text{EDM}$} & from $\dot p_y|_{t=0}$ & \textcolor{blue}{$\num{748.511 \pm 3.249}$} & \textcolor{magenta}{$\num{540.799 \pm 3.136}$} & \textcolor{blue}{$\num{749.413 \pm 3.891}$}\\
& from $\Omega^{p_y}$ & \textcolor{blue}{$\num{749.960 \pm 0.121}$} & \textcolor{magenta}{$\num{538.619 \pm 0.129}$} & \textcolor{blue}{$\num{749.842 \pm 0.113}$}
\end{tabular}
\end{ruledtabular}
\end{table*}
In the following, we briefly comment on some features of the results obtained so far (Fig.\,\ref{fig:ResonanceStrength-3by3}, Table\,\ref{tab:comparison-Omega_y-todot-py}). We observe that numerically $2\sin \pi \nu_s = 1.0041 \simeq 1$. Then, according to Appendix\,\ref{sec:appendixA}, we expect
\begin{equation}
a( \SI{-1}{\degree}, \SI{-1}{\degree}) = \cos\left(\frac{\pi}{4}\right) \cdot a( \SI{0}{\degree}, \SI{0}{\degree})\,,
\label{eq:amplitude-ratio-sqrt2-over-2}
\end{equation}
in good agreement with the results shown in Fig.\,\ref{fig:ResonanceStrength-3by3-phiRF-variation}. The resonance tunes determined from $\dot{p}_y|_{t=0}$ and from $\Omega^{p_y}$ are consistent with each other. Because $2\sin \pi \nu_s \simeq 1$ and the EDM contribution is small, the approximate equalities
\begin{equation}
\begin{split}
\varepsilon^\text{EDM}(\SI{-1}{\degree},\SI{-1}{\degree}) & = \varepsilon^\text{EDM}(\SI{1}{\degree},\SI{1}{\degree}) \,,\text{ and} \\
\varepsilon^\text{EDM}(\SI{\pm 1}{\degree},\SI{-1}{\degree}) & = \sqrt{2} \cdot \varepsilon^\text{EDM}(\SI{-1}{\degree},\SI{0}{\degree})
\end{split}
\label{eq:simple-relationships}
\end{equation}
hold.
\subsection{Resonance strength $\varepsilon^\text{EDM}$ for random points $\left(\phi_\text{rot}^\text{WF},\chi_\text{rot}^\text{Sol\,1}\right)$}
\begin{figure*}[tb]
\centering
\subfigure[\label{fig:ResonanceStrengthc} $\varepsilon^\text{EDM}$ for $d = \SI{e-18}{e.cm}$.]
{\includegraphics[width=0.95\columnwidth]{Resonance-Strength-3D-Small.jpg}}
\hspace{0.3cm}
\subfigure[\label{fig:ResonanceStrengthd} Contour plot of panel (a).]
{\includegraphics[width=0.95\columnwidth]{Resonance-Strength-3D-Small-contour.jpg}}
\caption{ \label{fig:ResonanceStrength}
Panels (a) and (b) show the resonance strengths $\varepsilon^\text{EDM}$ on a grid in the range $\phi^\text{WF}_\text{rot} = [-0.1\,\si{\degree}, \ldots, +0.1\,\si{\degree}]$ and $\chi^{\text{Sol}\, 1}_\text{rot} = [-0.1\,\si{\degree}, \ldots, +0.1\,\si{\degree}]$ with an assumed EDM of $d = \SI{e-18}{e.cm}$. Each point in panels (a) and (b) is obtained from a calculation with
$n_\text{turns} = \num{200000}$ and $n_\text{points} = 100$. }
\end{figure*}
The resonance strengths shown in Fig.\,\ref{fig:ResonanceStrength} are obtained using the fit function of Eq.\,(\ref{eq:function-fitted-to-oscillation-pattern-to-get-resonance-strength}) ($\omega = \Omega^{p_y}$) and then Eq.\,(\ref{eq:resonance-strength}) for a set of randomly chosen pairs of $(\phi_\text{rot}^\text{WF},\chi_\text{rot}^\text{Sol\,1})$ and $\chi_\text{rot}^\text{Sol\,2} = 0$. For all points, $\phi_\text{RF}=0$ and $\vec S_0 = (0,0,1)$.
The evolution function $\vec S_3(t)$ [Eq.\,(\ref{eq:polarization-evolution-with-WF-and-two-solenoids})] includes the ideal ring with solenoid S$_1$ and the RF Wien filter. For the assumed EDM of $\SI{e-18}{e.cm}$, the EDM tilt angle amounts to $\xi_\text{EDM} \approx \SI{300}{\micro \radian}$, and in the angular range $\phi^\text{WF}_\text{rot} = [-0.1\,\si{\degree}, \ldots, +0.1\,\si{\degree}]$, $\chi^{\text{Sol}\, 1}_\text{rot} = [-0.1\,\si{\degree}, \ldots, +0.1\,\si{\degree}]$, and $\chi_\text{rot}^\text{Sol\,2} = 0$, the corresponding shift of the pattern is clearly visible in Fig.\,\ref{fig:ResonanceStrengthd}.
The relative uncertainties of the points shown in Fig.\,\ref{fig:ResonanceStrength} were obtained from the fits. In
panels\,\ref{fig:ResonanceStrengthc} and \ref{fig:ResonanceStrengthd}, $\Delta \varepsilon^\text{EDM}/ \varepsilon^\text{EDM}$ ranges from \num{2.0e-05} to \num{4.1e-2}.
For the set of points $\left(\phi_\text{rot}^\text{WF}, \chi_\text{rot}^\text{Sol\,1}\right)$ shown in Fig.\,\ref{fig:ResonanceStrength}, the initial spin tunes $\nu_s$, \textit{i.e.}, before the RF WF is turned on, are shown in Fig.\,\ref{fig:spintune}. The result indicates the familiar quadratic dependence $\Delta \nu_s(\chi_1, \chi_2 = 0) \propto \chi_1^2$, described by Eq.\,(\ref{eq:spin-tune-change-as-fct-of-chi1-and-chi2}).
\begin{figure}[tb]
\centering
\includegraphics[width=1\columnwidth]{Resonance-Strength-3D-Small-spintune.jpg}
\caption{ \label{fig:spintune} Initial spin tunes $\nu_s$ for the angular intervals $\phi^\text{WF}_\text{rot} = \chi^{\text{Sol}\, 1}_\text{rot} = [-0.1\,\si{\degree}, \ldots, +0.1\,\si{\degree}]$ for the data points $\left( \phi^\text{WF}_\text{rot}, \chi^{\text{Sol}\, 1}_\text{rot} \right)$ shown in Figs.\,\ref{fig:ResonanceStrengthc} and \ref{fig:ResonanceStrengthd} with an assumed EDM of $d = \SI{e-18}{e.cm}$.}
\end{figure}
\subsection{Characterization of $\varepsilon^\text{EDM}\left(\phi_\text{rot}^\text{WF}, \chi_\text{rot}^\text{Sol\,1}\right)$ }
\subsubsection{Operation of RF Wien filter exactly on resonance}
In this section, the contour of the surface $\varepsilon^\text{EDM}\left(\phi_\text{rot}^\text{WF}, \chi_\text{rot}^\text{Sol\,1}\right)$, shown in Fig.\,\ref{fig:ResonanceStrengthc}, is compared to the theoretical expectation given in Eq.\,(\ref{eq:EpsilonMap}). The functional dependence describes a quadratic surface, also known as an \textit{elliptic paraboloid}, and is used here in the form
\begin{equation}
\begin{split}
\left({\varepsilon^\text{EDM}}\right)^2 = \quad & A \cdot \left( \phi_\text{rot}^\text{WF} - \phi_0 \right)^2 \\
&+ B\cdot \left(\frac{\chi_\text{rot}^\text{Sol\,1}}{2\sin\pi\nu_s^{(2)}} + \chi_0\right)^2 + C \,,
\label{eq:surface-fit-function-for-epsilon}
\end{split}
\end{equation}
where the unperturbed spin tune $\nu_s^{(2)}$ for the EDM of $d = \SI{e-18}{e.cm}$, assumed in the simulation, is given by
\begin{equation}
\begin{split}
\nu_s^{(2)} & = \num{-0.160977192137641}\,, \text{and} \\
2\sin\pi\nu_s^{(2)} & = -\num{0.968883216683076}\,.
\end{split}
\end{equation}
It should be emphasized that the simulations shown in Fig.\,\ref{fig:ResonanceStrength} reflect the situation when the RF Wien filter is operated \textit{exactly on resonance}. During the corresponding EDM experiments in the ring, however, a spin-tune feedback is imperative to maintain the resonance condition, \textit{i.e.}, the spin-precession frequency in Eq.\,(\ref{eq:psi-of-t-in-wien-filter-including-phase}), over long periods of time using the measured spin tune\,\cite{PhysRevLett.115.094801}. Maintaining phase \textit{and} frequency while the RF Wien filter is actively operating turns out to be considerably more difficult, and more sophisticated approaches, beyond those outlined in\,\cite{PhysRevLett.119.014801}, are presently being pursued by the JEDI collaboration. Only such a phase \textit{and} frequency lock during a measurement cycle makes it possible to take full advantage of the large spin-coherence time (SCT) of $\tau_\text{SCT} \simeq \SI{1000}{s}$, achieved by JEDI at COSY\,\cite{Guidoboni:2016bdn,Guidoboni:2017ayl}.
The result of a fit without weighting is shown in Fig.\,\ref{fig:ResonanceStrengthc-FIT}. It should be noted that within the uncertainties obtained from the fit, $A=B$, while $C$ and $\chi_0$ are compatible with zero. Here, $\chi_0$ represents a primordial tilt of the stable spin axis at the RF Wien filter along the horizontal axis, $c_x$. For the model ring, one would expect
\begin{equation}
\chi_0 = 0 = c_x\,,
\end{equation}
a property which is nicely returned by the fit shown in Fig.\,\ref{fig:ResonanceStrengthc-FIT}.
\begin{figure*}[htb]
\centering
\subfigure[\label{fig:ResonanceStrengthc-FIT} Fit to the surface of $\left(\varepsilon^\text{EDM}\right)^2$, shown in Fig.\,\ref{fig:ResonanceStrengthc}, using Eq.\,(\ref{eq:surface-fit-function-for-epsilon}). The resonance strengths have been scaled by a factor around \num{6e9}.]
{\includegraphics[width=\columnwidth]{Resonance-Strength-3D-Small-oscillation-amplitudes-FIT.jpg}}
\subfigure[\label{fig:ResonanceStrengthc-FIT2} Fit to the simulated data from Fig.\,\ref{fig:ResonanceStrengthc}, using Eq.\,(\ref{eq:surface-fit2-function-for-epsilon}) with $F = \num{e20}$.]
{\includegraphics[width=\columnwidth]{Resonance-Strength-3D-Small-oscillation-amplitudes-FIT2.jpg}}
\caption{\label{fig:ResonanceStrengthc-FIT-and-FIT2} Fits to the simulated data for the resonance strength $\left(\varepsilon^\text{EDM}\right)^2$ as function of $\left(\phi_\text{rot}^\text{WF}, \chi_\text{rot}^\text{Sol\,1}\right)$.}
\end{figure*}
In addition, the fit to the simulated data is expected to return $\phi_0 = \left|\xi_\text{EDM}(d = \SI{e-18}{e.cm})\right| = \SI{0.3054}{\milli \radian}$, given by Eq.\,(\ref{eq:xiEDM}), and the fitted result
\begin{equation}
\phi_0 = (\num{0.3054} \pm \num{0.0002}) \, \si{{\milli \radian}}
\end{equation}
returns this value accurately.
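As an illustration of the procedure, the elliptic-paraboloid fit of Eq.\,(\ref{eq:surface-fit-function-for-epsilon}) can be sketched with a standard least-squares routine. The snippet below is not the analysis code used here; it fits synthetic, noise-free data on an assumed grid, with the true parameter values taken from the text ($A=B$, $\phi_0 = \SI{0.3054}{\milli\radian}$, $\chi_0 = C = 0$).

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch only: fit the elliptic paraboloid
#   eps^2 = A*(phi - phi0)^2 + B*(chi/(2 sin(pi nu_s)) + chi0)^2 + C
# to synthetic data. Grid ranges and true values are assumptions.
NU_S = -0.160977192137641              # unperturbed spin tune nu_s^(2)
SCALE = 2.0 * np.sin(np.pi * NU_S)     # 2 sin(pi nu_s) = -0.96888...

def eps_squared(X, A, B, phi0, chi0, C):
    phi, chi = X
    return A * (phi - phi0) ** 2 + B * (chi / SCALE + chi0) ** 2 + C

phi = np.linspace(-3.0, 3.0, 21)       # phi_rot^WF grid [mrad]
chi = np.linspace(-3.0, 3.0, 21)       # chi_rot^Sol1 grid [mrad]
P, X = np.meshgrid(phi, chi)
truth = (1.0, 1.0, 0.3054, 0.0, 0.0)   # A = B, phi0 = |xi_EDM|, chi0 = C = 0
data = eps_squared((P.ravel(), X.ravel()), *truth)

popt, _ = curve_fit(eps_squared, (P.ravel(), X.ravel()), data,
                    p0=(0.5, 0.5, 0.0, 0.0, 0.0))
A_fit, B_fit, phi0_fit, chi0_fit, C_fit = popt
```

For noise-free data the fit recovers $\phi_0$ and $A=B$ essentially exactly; for the simulated data, weights derived from the $\Omega^{p_y}$ uncertainties are used instead.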
\subsubsection{Validation of the scale of $\varepsilon^\text{\rm EDM}$}
The fit with the elliptic paraboloid, shown in Fig.\,\ref{fig:ResonanceStrengthc-FIT}, indicates that the surface is described with $A = B$. In the following, the first fit function from Eq.\,(\ref{eq:surface-fit-function-for-epsilon}) is slightly altered, yielding
\begin{equation}
\begin{split}
\left({\varepsilon^\text{EDM}}\right)^2 = \frac{A}{F} \cdot
\Biggl[ & \left( \phi^\text{WF}_\text{rot} - \phi_0 \right)^2 \\
& + \left(\frac{\chi_\text{rot}^\text{Sol\,1}}{2\sin\pi\nu_s^{(2)}} + \chi_0\right)^2 \Biggr] + C\,,
\label{eq:surface-fit2-function-for-epsilon}
\end{split}
\end{equation}
where a factor $F = \num{e20}$ has been used to scale the resonance strength. The second fit now uses weights derived from the uncertainty of the fitted $\Omega^{p_y}$ using Eq.\,(\ref{eq:resonance-strength}). The resulting fit is shown in Fig.\,\ref{fig:ResonanceStrengthc-FIT2}. The agreement between the theoretical model and the simulated data is good, with $\chi^2/\text{ndf} = 374.4/194 = 1.9$.
According to Eq.\,(\ref{eq:EpsilonMap}), the factor in front of the brackets in Eq.\,(\ref{eq:surface-fit2-function-for-epsilon}) reads
\begin{equation}
\frac{A}{F} = k \stackrel{!}{=} \frac{ \psi_\text{WF}^2}{16\pi^2} \,,
\end{equation}
where the Wien filter rotation angle $\psi_\text{WF}$ from Eq.\,(\ref{eq:psi-angle-in-WF}) is used. Inserting the numerical value of $A$ from the fit (inset Fig.\,\ref{fig:ResonanceStrengthc-FIT2}), and taking into account that the results are in \si{\milli \radian}, the ratio
\begin{equation}
\frac{A \cdot \num{e6}}{F \cdot k} = \num{9.9954e-01}
\end{equation}
yields the expected value near unity, which validates the scaling factor in Eq.\,(\ref{eq:EpsilonMap}).
The second fit yields a similar value for
\begin{equation}
\begin{split}
\phi_0 & = (\num{0.30534} \pm \num{0.00005}) \, \si{\milli \radian} \\
& \approx \left|\xi_\text{EDM}(d = \SI{e-18}{e.cm})\right| \,,
\end{split}
\end{equation}
compared to the first fit, shown in Fig.\,\ref{fig:ResonanceStrengthc-FIT}, and $\chi_0$ and $C$ are both compatible with zero.
\section{Conclusions and outlook}
\label{sec:conclusions}
The $\mathbf{SO(3)}$ matrix formalism used here to describe the spin rotations on the closed orbit, \textit{i.e.,} the spin dynamics of the interplay of an RF Wien filter with a machine lattice that includes solenoids, proved very valuable. The general features of the deuteron EDM experiment at COSY can be obtained rather immediately. Of course, the approach taken is no replacement for more advanced spin-tracking codes, but the results obtained here can be applied to benchmark those codes.
In addition, it should be noted that the JEDI collaboration is presently applying beam-based alignment techniques to improve the knowledge about the absolute beam positions in COSY\,\cite{TWagner}. Once this is accomplished, the approach described here to parametrize the spin rotations \textit{solely} on the basis of the closed orbit, will become more realistic.
The polarization evolution in the ring in the presence of an RF Wien filter that is operated on resonance, in terms of the \textit{resonance tune} or \textit{resonance strength} $\varepsilon^\text{EDM}$, is theoretically well understood. This will allow us in the future to investigate the effects of increasingly smaller magnetic imperfections, either through additional \textit{solenoidal} fields in the ring, or by \textit{transverse} magnetic fields via the rotation of the RF Wien filter around its axis.
In the near future, it is planned to incorporate into the developed matrix formalism also dipole magnet displacement and rotation parameters, available from a recent survey at COSY. This will allow us to determine the orientation of the stable spin axis of the machine at the location of the RF Wien filter, and to extract the EDM from a measurement of the resonance strengths as function of $\left(\phi_\text{rot}^\text{WF}, \chi_\text{rot}^\text{Sol\,1}\right)$. It is possible to incorporate the spin rotations from misplaced and rotated quadrupole magnets on the closed orbit into the formalism as well.
An approach based on the polynomial chaos expansion has been successfully applied to determine a hierarchy of uncertainties during the construction of the RF Wien filter\,\cite{Slim:2017bic}. Such a methodology, in conjunction with the spin-tracking approach based on the matrix formalism outlined here, can be employed to efficiently generate a hierarchy of uncertainties for the EDM prototype ring\,\cite{1812.08535} from the different design parameters of the ring.
The spin-tracking approach used here shall also be applied to study various aspects of the presently applied spin-tune feedback system, used to phase-lock the spin precession to the RF of the Wien filter\,\cite{PhysRevLett.119.014801}.
\section*{Acknowledgment}
This work has been performed in the framework of the JEDI collaboration and is supported by an ERC Advanced-Grant of the European Union (proposal number 694340). The work of N.N.N. was a part of the Russian MOS program 0033-2019-0005. Numerous discussions with colleagues went into this document, foremost we would like to acknowledge those with Volker Hejny, Alexander Nass, Jörg Pretz, and Artem Saleev.
\begin{appendix}
\section{Dependence of the EDM resonance strength on $\phi^\text{WF}$ and $\chi^\text{Sol\,1}$}
\label{sec:appendixA}
The functional dependence of a physical rotation of the Wien filter around the beam axis by $\phi^\text{WF}_\text{rot}$ and of a spin rotation in static solenoids (see Fig.\,\ref{fig:sketch-ring-and-WF}) by $\chi_\text{rot}^{\text{Sol}_{1,2}}$ on the resonance strength $\varepsilon^\text{EDM}$ [Eq.\,(\ref{eq:resonance-strength})] is discussed.
At the location of the polarimeter, only the vertical and radial components of the beam polarization [$S_y(t)$ and $S_x(t)$] can be determined. At the RF Wien filter, the orientation of the stable spin axis is denoted by $\vec c$, and in EDM mode the direction of the magnetic field by $\vec n_\text{WF}$ [see Eq.\,(\ref{eq:definition-of-EDM-mode})]. The in-plane $S_x(t)$ thus obviously depends on $[\vec n_\text{WF} \times \vec c\,]$.
In an ideal all-magnetic ring under consideration, the stable spin axis is close to the vertical direction $\vec e_y$,
\begin{equation}
\begin{split}
\vec{c} & = \cos\xi_\text{EDM} \cdot \vec e_y + \sin\xi_\text{EDM} \cdot \vec e_x \\
& \approx \vec e_y + \xi_\text{EDM} \cdot \vec e_x\,.
\end{split}
\end{equation}
In EDM mode, the magnetic axis of the RF Wien filter can be approximated by
\begin{equation}
\begin{split}
\vec n_\text{WF} & = \cos \phi^\text{WF}_\text{rot} \cdot \vec e_y + \sin \phi^\text{WF}_\text{rot} \cdot \vec e_x \\
& \approx \vec e_y + \phi^\text{WF}_\text{rot} \cdot \vec e_x \,.
\end{split}
\end{equation}
The stable spin axis $\vec c$ can be manipulated by static solenoids in the ring, and the drift solenoids $S_{1,2}$ of the electron coolers (or the Siberian snake instead of S$_1$) generate the spin kicks $\chi_{1,2}$. When both solenoids $S_{1,2}$ are turned on, one can write for the stable spin axis
\begin{equation}
\begin{split}
c_x & = \xi_\text{EDM} + \frac{1}{2}\chi_2\, , \\
c_z & = \frac{1}{2\sin \pi \nu_s}( \chi_1 + \chi_2 \cos \pi \nu_s)\, .
\end{split}
\end{equation}
In case solenoid S$_2$ is off ($\chi_2 = 0$), one obtains
\begin{equation}
[\vec n_\text{WF} \times \vec c\,] = \left( \xi_\text{EDM} - \phi^\text{WF}_\text{rot} \right) \vec e_x + \frac{\chi_1}{2\sin \pi \nu_s} \vec e_z \,.
\end{equation}
Thus the resonance strength squared can be written as a sum of two independent quadratic functions,
\begin{equation}
\left(\varepsilon^\text{EDM}\right)^2 = \frac{\psi_\text{WF}^{2}}{16\pi^2} \left[ \left(\xi_\text{EDM} - \phi^\text{WF}_\text{rot}\right)^2 + \left(\frac{ \chi_1}{2\sin \pi \nu_s}\right)^2 \right]\,, \label{eq:EpsilonMap}
\end{equation}
where $\psi_\text{WF}$ is defined in Eq.\,(\ref{eq:psi-angle-in-WF}).
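For orientation, Eq.\,(\ref{eq:EpsilonMap}) is simple to evaluate numerically. The sketch below (with assumed angle values, all in radians) shows that the resonance strength vanishes when $\phi^\text{WF}_\text{rot}=\xi_\text{EDM}$ and $\chi_1=0$, and is lifted again by a solenoid spin kick.

```python
import numpy as np

# Sketch of Eq. (EpsilonMap); all numerical inputs below are assumptions.
def resonance_strength(psi_WF, xi_EDM, phi_rot, chi_1, nu_s):
    """epsilon^EDM from the quadratic form; angles in radians."""
    term_x = (xi_EDM - phi_rot) ** 2
    term_z = (chi_1 / (2.0 * np.sin(np.pi * nu_s))) ** 2
    return np.sqrt(psi_WF ** 2 / (16.0 * np.pi ** 2) * (term_x + term_z))

nu_s = -0.160977192137641
# At the minimum (phi_rot = xi_EDM, chi_1 = 0) the strength vanishes:
eps_min = resonance_strength(1.0e-3, 0.3054e-3, 0.3054e-3, 0.0, nu_s)
# A solenoid spin kick chi_1 != 0 lifts it again:
eps_sol = resonance_strength(1.0e-3, 0.3054e-3, 0.3054e-3, 1.0e-3, nu_s)
```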
\end{appendix}
\bibliographystyle{apsrev4-2}
\section{Introduction}
\label{sec:intro}
An increasing number of remote sensing instruments measure the
polarization state of electromagnetic radiation. Therefore,
polarized radiative transfer codes
are required to interpret and analyze the measurements and to
develop retrieval algorithms. Sensors that measure polarization from
space are, e.g., the Polarization and Directionality of the Earth's
Reflectances (POLDER) instrument onboard PARASOL (Polarization and
Anisotropy of Reflectances for Atmospheric Sciences coupled with
Observations from a Lidar) \citep{deschamps1994} and the Thermal and
Near Infrared Sensor for
Carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) on the Greenhouse gas
Observing SATellite GOSAT \citep{kuze2009}.
Future missions include
e.g. the Climate Absolute Radiance and Refractivity Observatory
(CLARREO) \citep{wielicki2013} and the Multi-Viewing Multi-Channel
Multi-Polarization Imaging mission (3MI) on METOP-SG (Meteorological
Operational Satellite - Second Generation).
All-sky imaging systems are available to measure
the polarized radiance distribution; such systems are
described for instance by \citet{liu1997}, \citet{kreuter2009}
and references therein.
The Research Scanning Polarimeter (RSP) \citep{Cairns1999, Cairns2003} has been used
for ground-based as well as airborne aerosol measurements.
Other multi-channel polarimetric instruments are the Airborne
Multiangle SpectroPolarimetric Imager (AirMSPI) \citep{diner2012} and
the Observing System Including PolaRisation in the Solar Infrared
Spectrum (OSIRIS) \citep{auriol2008}.
The commercially available ground-based polarimeter, CE318-DP,
developed by CIMEL Electronic (Paris,
France) is now available at several AERONET stations \citep{Li2014}.
A large number of models for
polarized radiative transfer have been developed in the last years for
various specific applications. They mostly have been validated against
existing benchmark data; e.g., \citet{coulson1960} and \citet{nataraj2009} for
Rayleigh scattering; e.g., \citet{dehaan1987,WaubenW1994a,garcia1989} for
layers including aerosols; and \citet{Kokhanovsky2010} for cloud and
aerosol scattering including realistic phase matrices.
More references to published benchmark results are given on the IPRT website,
section ``benchmark results''. However, all
existing benchmark results are limited to one or two plane-parallel
layers with an underlying Lambertian surface. To simulate the
measurements of the above mentioned sensors, far more realistic
settings are required. Reasonable height profiles of
molecules, aerosols and clouds should be taken into account. For
clear-sky atmospheres, a plane-parallel model geometry is a
reasonable approximation. When clouds are analyzed it is also
important to look into effects resulting from the geometrical structure of
clouds, commonly called 3D-effects, hence validated 3D vector
radiative transfer codes are required. In order to simulate limb
observations, fully spherical vector codes are needed. Polarization by
the surface must be considered, in particular for aerosol remote
sensing from space.
In order to support model developers and to set standards for
polarized radiative transfer modeling the International Radiation Commission
(IRC) has established the working group ``International Polarized
Radiative Transfer'' (IPRT) which is charged by the task to provide benchmark data for
polarized radiative transfer simulations for realistic atmospheric
setups as needed to simulate the current and future satellite,
airborne and ground-based polarimetric sensors.
In order to establish this benchmark dataset a model intercomparison
project has been launched. This paper summarizes the results from the
first phase of the project. Six vector radiative transfer models from various
international institutions have participated. The models use different
approaches to solve the vector radiative transfer equation, among them
are deterministic approaches based on discrete ordinates or spherical
harmonics and also statistical approaches based on Monte Carlo
methods. The test cases in the first phase include simple one-layer
setups, cases with polarized surface reflection, and cases with
realistic height profiles of molecules and aerosol particles.
The focus of this intercomparison project is the establishment of
benchmark results, therefore all models were run in high accuracy
mode. For realistic applications with limited computational time
the models are usually run with lower accuracy.
The first intercomparisons between the models revealed several large differences,
some of them due to model errors which were fixed in the course of this
project. The participants were allowed to provide corrected or more
accurate data. Finally, very good agreement for all test cases has
been found for most models. The commonly
established benchmark results are available at the IPRT website
(\url{http://www.meteo.physik.uni-muenchen.de/~iprt}).
The next phase of the intercomparison project will start soon with
focus on 3D radiative transfer.
\section{Radiative transfer models}
\begin{table*}[htbp]
\centering
\caption{Overview of radiative transfer models}
\label{tab:rte_solvers}
\smallskip
\begin{tabularx}{1.0\hsize}{lp{3cm}lp{2.2cm}X}
\hline
model name & method & geometry & arbitrary \newline output altitude &
references \\ \hline
3DMCPOL & Monte Carlo & 1D/3D & no & \citet{cornet2010,fauchez2014} \\
IPOL & discrete ordinate & 1D & no &
\url{ftp://climate1.gsfc.nasa.gov/skorkin/IPOL/} \\
MYSTIC & Monte Carlo & 1D/3D$^{(a)}$ & yes & \citet{mayer2009, emde2010} \\
Pstar & discrete ordinate & 1D & yes & \citet{ota2010} \\
SHDOM & spherical harmonics \newline discrete ordinate & 1D/3D & yes & \citet{Evans1998} \\
SPARTA & Monte Carlo & 1D/3D & no & \citet{Barlakas2014} \\ \hline
\multicolumn{5}{l}{$^{(a)}$MYSTIC includes fully spherical
geometry for 1D and 3D.} \\
\end{tabularx}
\end{table*}
\subsection{3DMCPOL}
\label{sec:3DMCPOL}
3DMCPOL is a forward Monte-Carlo model for radiative transfer in
three-dimensional atmosphere. It can compute the reflected or transmitted
Stokes vector as well as upwelling and downwelling fluxes. Initially
developed for solar radiation \citep{cornet2010}, it was recently
extended to thermal radiation \citep{fauchez2014}. To save time and
for an accurate computation of radiances, it uses the Local Estimate
Method \citep{marshak2005,mayer2009}. The medium is
divided into voxels (3D pixels) with constant cloud and aerosol
optical properties, that are the extinction coefficient, the single
scattering albedo, the phase function and the cloud temperature. For
highly peaked phase functions, the truncation of \citet{potter1970}
is implemented, and we also recently added the variance reduction method of
\citet{buras2011}.
Atmospheric profiles including temperature, pressure and
absorption coefficient of a correlated k-distribution can also be
specified. The molecular scattering is computed automatically
according to the pressure profile. A depolarization factor can be
specified. To save substantial time, the absorption computation is
done following the Equivalence Theorem \citep{partain2000, emde2011}:
the computation of radiative transfer through the
scattering medium is done once and the radiances are attenuated
according to the absorption coefficient of the k-distribution along
the geometrical path of the photons. A heterogeneous surface can also
be specified with Lambertian reflection, ocean or snow bidirectional
function. 3DMCPOL applications concern mainly the cloud
heterogeneities effects on total and polarized radiances and the
errors on retrieved parameters from passive sensors. For example, for
the polarized and multi-angular radiometer POLDER3/PARASOL, 3DMCPOL
was used to study on synthetic data the bias for retrieved optical
thickness and effective radius \citep{cornet2013} and also to test
aerosol above cloud retrieval \citep{waquet2013}. Studies on the
thermal radiation were also conducted to assess the bias due to cirrus
heterogeneity on the brightness temperature measured by the radiometer
IIR/CALIPSO \citep{fauchez2014} and on the retrieved effective
optical thickness and effective diameters \citep{fauchez2015}.
For this intercomparison project 10$^8$ photons were used for
simulations without ocean and 10$^7$ photons for
simulations including an ocean surface. For ocean reflection,
the polarized bidirectional reflectance distribution function
and corresponding probability densities are computed at the beginning of the
simulation and during the calculation interpolations are done to
obtain the new direction and the contribution to the top of the
atmosphere. These interpolations increase computational time,
therefore fewer photons were used for the simulations with ocean.
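The local estimate technique used by 3DMCPOL (and by the other Monte Carlo models below) can be sketched in a few lines: at every scattering event, a weighted contribution towards the detector direction is accumulated. The function below is a minimal scalar illustration, not 3DMCPOL code.

```python
import numpy as np

# Minimal scalar sketch of the local estimate method (not 3DMCPOL code):
# at each scattering event the photon contributes towards the detector
# with its weight times the phase function value in the detector
# direction, attenuated along the optical path to the sensor.
def local_estimate(weight, phase_value, tau_to_detector):
    """Detector contribution of one event: w * P(Theta)/(4*pi) * exp(-tau)."""
    return weight * phase_value / (4.0 * np.pi) * np.exp(-tau_to_detector)

# Isotropic scattering (P = 1) seen through optical depth tau = 1:
contrib = local_estimate(1.0, 1.0, 1.0)
```

In the polarized case the scalar phase function value is replaced by the phase matrix applied to the photon's Stokes vector.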
\subsection{IPOL}
\label{sec:IPOL}
IPOL is a radiative transfer code that computes Intensity and
POLarization of radiation reflected from or transmitted through the
Earth atmosphere over a reflecting surface. The radiation field inside the
atmosphere is not computed, thus saving computation time and
memory. The code is suitable for remote sensing systems located on the
ground and aboard satellites or high altitude aircrafts. The code is
written in Fortran 90/95, requires external BLAS-LAPACK libraries, and
is freely available for downloading from
\url{ftp://climate1.gsfc.nasa.gov/skorkin/IPOL/}.
Following libRadtran and XRTM
(\url{http://reef.atmos.colostate.edu/~gregm/xrtm/}), IPOL will soon
incorporate several solvers for the vector radiative transfer
equation. This allows for fast yet accurate computation in a variety
of scenarios. In this intercomparison, only the discrete ordinates
solver is validated. The next solver to be included in the IPOL
project, SORD (Successive ORDers of scattering), currently undergoes
intensive testing. SORD is already available at
\url{ftp://climate1.gsfc.nasa.gov/skorkin/SORD/}.
In IPOL, the system of coordinates and direction of positive rotation
of the frame of reference is defined exactly following
\citet[p.~11 and Sec.~3.2]{hovenier2004}.
The Stokes vector is computed at arbitrary viewing
directions, except for the horizon, using the dummy-node technique
\citep{chalhoub2000}. Singular value decomposition is used to
solve the system of equations at Gauss and dummy nodes. Scaling
transformation \citep{karp1980} stabilizes the solution of the system
for an arbitrary atmospheric optical thickness. Layers with different
optical properties and a reflecting surface are bound together using
the matrix-operator method \citep{nakajima86, Plass1973, ota2010}.
Single scattering path radiance and reflection of
the direct solar beam from the surface are computed analytically. In
order to avoid errors in the aureole \citep{korkin2012}, none of
the phase function truncation techniques \citep{rozanov2010}
has been implemented so far. IPOL ignores atmospheric curvature, 3D
effects, and thermal emission.
We have tested IPOL against the vector codes APC \citep{korkin2013},
RT3 \citep{evans1991}, SCIATRAN \citep{rozanov2013}, and the published
results (see references in the Introduction). The radiative transfer
code SHARM \citep{lyapustin2005} was used to test the total
intensity. Scalar surface models and interface for IPOL were adapted
from SHARM as well.
IPOL was run with 16 streams (half-sphere) for all Rayleigh and the spherical
aerosol case, with 128 streams for the spheroidal aerosol cases,
and with 256 streams for all cloud cases to ensure high accuracy of benchmark results.
\subsection{MYSTIC}
\label{sec:MYSTIC}
The radiative transfer model MYSTIC (Monte-Carlo code for the
phYsically correct Tracing of photons in Cloudy atmospheres)
\citep{mayer2009} is a
versatile Monte-Carlo code for atmospheric radiative transfer which is
operated as one of several radiative transfer solvers of the
libRadtran software package \citep{mayer2005}.
The 1D version of MYSTIC is freely
available at \url{http://www.libradtran.org}. MYSTIC may be used to
calculate polarized solar and thermal radiances, and also for
irradiances, actinic fluxes and heating rates \citep{klinger2014}.
The model has been used
extensively to generate realistic synthetic measurements for the
validation of various retrieval algorithms for cloud and aerosol
properties \citep{davis2013, bugliaro2011}. Further application fields
are e.g. photochemistry \citep{suminska2012} or remote
sensing of exo-planets. MYSTIC allows the definition of arbitrarily
complex 3D clouds and aerosols, an inhomogeneous surface albedo and
topography. Polarized surface reflection is also included. The model
can be operated in fully spherical geometry \citep{emde2007},
hence it can also be used for limb sounding applications.
Polarization has been included by combining various methods
\citep{emde2010}. The local
estimate method \citep{marchuk1980, marshak2005} has been adapted to
account for polarization, which is essential for accurate radiance
simulations. An importance sampling method similar to \citet{collins1972}
is used to sample the
photon direction after scattering or surface reflection, the
probability of which depends
not only on the scattering angle (as in scalar radiative transfer) but
also on the relative azimuth angle between incident and scattered
direction. Sophisticated variance reduction methods are included
\citep{buras2011} which allow the calculation of unbiased radiances for scattering
media characterized by strongly peaked phase functions without any
approximations. It is also possible to calculate polarized radiances
in high spectral resolution efficiently \citep{emde2011}.
For all cloudless simulations shown in this intercomparison $10^8$ photons were run
and for the simulations including clouds $10^7$ photons were
used. For clouds fewer photons were used because of the much larger
computational time due to multiple scattering. Even though only $10^7$
photons were used, the results are not too noisy because of the
sophisticated variance reduction methods included in MYSTIC.
\subsection{Pstar}
\label{sec:Pstar}
The radiative transfer (RT) code, Pstar, has been developed to
simulate the polarized radiation field of a vertically inhomogeneous 1D
system as approximated by several homogeneous layers \citep{ota2010}.
Pstar has been used to simulate polarized solar and thermal
radiation as measured by satellite, and to develop an aerosol
retrieval algorithm, an atmospheric correction algorithm, and a vicarious
calibration system that include the polarization effect
(e.g. \citet{fukuda2013, murakami2013}). The RT scheme of Pstar is
constructed using the discrete ordinate method and the matrix operator
method. The discrete ordinate method is applied to each homogeneous
layer in order to obtain the reflection/transmission matrices and the
source vector of the layer. Then, the matrix operator method is
applied to all layers to obtain the radiation field of the
multi-layered system. The Stokes parameters at any interfaces between
the homogeneous layers as well as at the top of the atmosphere and at
any propagation direction are obtained by post-processing using the
source function integration technique. Finally, more accurate Stokes
parameters are obtained using the single scattering correction
procedure. This RT scheme is originally based on the formulations of
\citet{nakajima86, nakajima88}, which are implemented as the scalar
RT code series of System for Transfer of Atmospheric Radiation (STAR)
\citep{ruggaber1994}. \citet{ota2010} have extended the RT
formulation to express the polarized radiation field and
implemented it in Pstar code. The extended RT scheme is constructed to be
flexible for a vertically inhomogeneous system including the oceanic
layers as well as the ocean surface. Accordingly, Pstar can be used to
simulate the radiation field in the coupled atmosphere-ocean system
including the polarization effect. Pstar computes all four Stokes
parameters in the vector mode, although only the total radiance (I) is
obtained in the scalar mode. Furthermore, the semi-vector mode that
computes the three Stokes parameters (I, Q, and U) on the basis of the
3x3 phase matrix approximation is available. The computation of eigen
solutions of the discrete ordinate method is one of the most time-consuming
parts. In the vector mode, the direct decomposition method
\citep{ota2010}
is used in order to acquire the complex eigen solutions,
which is necessary to calculate the Stokes parameter V
accurately. However, in the scalar and semi-vector modes, a
square-root decomposition technique as described by \citet{nakajima86}
is invoked to obtain the real eigen solutions
efficiently. In the inter-comparison of this paper, the vector mode
was used to compute four Stokes parameters. In the Rayleigh scattering
cases, 15 streams were used for both single and multi-layer
conditions. In the aerosol and cloud scattering cases, 90 streams were
used for single-layer cases and 30 streams for multi-layer cases.
The number of streams refers to the half-sphere.
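The matrix-operator (adding) step that Pstar uses to combine homogeneous layers can be illustrated with a scalar toy model. In the sketch below the $1\times1$ "matrices" are placeholders for the discrete-ordinate reflection/transmission operators, and identical, symmetric layers are assumed; this is an illustration, not Pstar code.

```python
import numpy as np

# Toy sketch of the adding step of the matrix-operator method for two
# symmetric layers with reflection/transmission operators (R1, T1) and
# (R2, T2). The 1x1 matrices below stand in for the discrete-ordinate
# operators actually used.
def add_layers(R1, T1, R2, T2):
    I = np.eye(R1.shape[0])
    M = np.linalg.inv(I - R1 @ R2)     # sums the inter-layer reflections
    R = R1 + T1 @ R2 @ M @ T1          # combined reflection
    T = T2 @ M @ T1                    # combined transmission
    return R, T

# Two identical conservative layers (R + T = 1, no absorption):
R1 = np.array([[0.1]]); T1 = np.array([[0.9]])
R, T = add_layers(R1, T1, R1, T1)
total = float(R + T)                   # energy conservation: total = 1
```

The geometric series of inter-layer reflections is summed exactly by the matrix inverse, which is why the method handles arbitrarily thick stacks of layers.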
\subsection{SHDOM}
\label{sec:shdom}
The spherical harmonics discrete ordinate method (SHDOM) was developed
for unpolarized 3D atmospheric radiative transfer \citep{Evans1998}.
SHDOM is a non-Monte Carlo method in that the radiation field in the
domain is discretized and solved for iteratively. The source function
is discretized with a spherical harmonics series for the angular aspect
and grid points in a cartesian geometry for the spatial aspect. For
computational efficiency an adaptive grid is used in which additional grid
points may be added to the regular base grid where the source function
is changing rapidly, such as at illuminated cloud boundaries. The
solution iterations consist of 1) transforming the source function from
spherical harmonics to discrete ordinates, 2) integrating the source
function along discrete ordinates to obtain the radiance field, 3)
transforming the radiance field to spherical harmonics, and 4) computing
the source function (including the scattering integral) efficiently in
spherical harmonics from the radiance. A sequence acceleration method
is used to speed up convergence of the iterations. For highly-peaked
phase functions the delta-M method is used and the output radiance is
computed with the TMS method of \citet{nakajima88}, in which the single
scattering contribution is calculated with the exact phase function
instead of the truncated spherical harmonics approximation. SHDOM does
not implement higher order scattering corrections and thus does not
provide accurate results for highly-peaked (i.e. cloud) phase functions
in the solar aureole region.
The SHDOM model can calculate the radiance field from solar and/or
thermal emission sources of radiation. The extinction and single
scattering albedo of the medium are specified on a 3D grid and
trilinearly interpolated between grid points. Instead of specifying the
phase matrix at every grid point, to save memory, a table of expansion
coefficients for many phase matrices is input, and each grid point has a
specified index into the phase matrix table. Once the SHDOM iterations
are completed, radiances in many directions on a grid at any height,
hemispheric fluxes, net fluxes, mean radiances, and net flux convergence
may be efficiently computed. Several types of bidirectional reflection
distribution function (BRDF) models for the surface are implemented, and
their parameters may vary across the domain. A k-distribution approach
is used to integrate across spectral bands. For large 3D domains SHDOM
may be run on multiple processors using the Message Passing Interface
\citep{pincus2009}. SHDOM is distributed from
{\tt http://coloradolinux.com/shdom/}.
Recently polarization capability was added to SHDOM using the real
generalized spherical harmonics method of \citet{doicu2013}. Key pieces
of Adrian Doicu's VSHDOM research code were adapted for use in polarized
SHDOM. The generalized spherical harmonics basis uses 4x4 matrices,
$\mathsf{Y}_{lm}(\mu,\phi)$ with 6 non-zero elements (including a 2x2
block for $Q$ and $U$). The angles $\mu=\cos(\theta)$ and $\phi$
describe the radiance direction, where $\theta$ is the
zenith angle and $\phi$ is the azimuth angle in polar coordinates. The elements of the
$\mathsf{Y}_{lm}$ matrix
are various Wigner d-functions in $\mu$ multiplied by Fourier functions
in $\phi$. The radiance and source function are represented by vectors
with $N_{stokes}=1$ (scalar), 3, or 4 elements, and thus the memory use
for a polarized calculation is about $N_{stokes}$ times that for a
scalar calculation. SHDOM has special purpose subroutines for the
unpolarized case ($N_{stokes}=1$) so the polarized code serves
efficiently for scalar calculations. There are two polarized surface
reflection models: Fresnel surface with waves \citep{mishchenko1997} used
for ocean, and depolarizing modified-RPV (including the hotspot) with
polarizing Fresnel reflection from randomly oriented microfacets
\citep{diner2012} used for land. When enough memory is available, the
surface BRDF is precomputed for all incoming and outgoing discrete
ordinates, greatly speeding up computation for uniform non-Lambertian
surfaces. The polarized SHDOM distribution includes Mie and T-matrix
\citep{Mishchenko1998b} codes for generating SHDOM scattering tables from
spherical or spheroidal/cylindrical shaped particles, respectively. The
six unique elements of the phase matrices are represented as series
expansions in Wigner d-functions (the $I-I$ and $V-V$ elements are
standard Legendre polynomials).
The SHDOM results shown below are for a high resolution run
with the number of discrete ordinates in zenith and azimuth angles of
$N_{mu}=128$ and $N_{phi}=256$ and the cell splitting accuracy of
0.00003.
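As a concrete example of such series expansions, the $I$-$I$ element of the Rayleigh phase matrix, $P(\mu)=\tfrac{3}{4}(1+\mu^2)$, has the exact Legendre coefficients $\chi_0=1$, $\chi_2=\tfrac{1}{2}$ (all others zero, with the normalization $\tfrac{1}{2}\int_{-1}^{1}P(\mu)\,d\mu=1$). The sketch below recovers them numerically by Gauss-Legendre quadrature.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Sketch: Legendre expansion coefficients of the Rayleigh phase
# function P(mu) = 3/4 (1 + mu^2), normalized so that
# (1/2) * int_{-1}^{1} P(mu) dmu = 1 (hence chi_0 = 1).
nodes, weights = leggauss(64)
phase = 0.75 * (1.0 + nodes ** 2)

def legendre_coeff(l):
    """chi_l = (2l+1)/2 * int P(mu) P_l(mu) dmu via Gauss-Legendre."""
    Pl = Legendre.basis(l)(nodes)
    return (2 * l + 1) / 2.0 * np.sum(weights * phase * Pl)

coeffs = [legendre_coeff(l) for l in range(4)]
# Exact values: chi_0 = 1, chi_1 = 0, chi_2 = 1/2, chi_3 = 0
```

The other phase-matrix elements require Wigner d-functions instead of plain Legendre polynomials, but the quadrature structure is the same.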
\subsection{SPARTA}
\label{sec:sparta}
The Solver for Polarized Atmospheric Radiative Transfer Applications
(SPARTA) is a new three-dimensional
(3D) vector radiative transfer model introduced in
\citet{Barlakas2014}. When finished it will become freely available.
The model is based on the statistical Monte
Carlo method (in the forward scheme) and calculates column-response
pixel-based polarized radiances for 3D inhomogeneous
cloudless and cloudy atmospheres. Hence, it is well suited for use in
remote sensing applications. SPARTA is based on the established
scalar Monte Carlo model of the Institute for Marine Research at the
UNIversity of Kiel (MC-UNIK, \citealt{Macke1999}). MC-UNIK has been
extended to take into account the polarization state of the
electromagnetic radiation due to multiple scattering by randomly
oriented non-spherical particles, i.e., coarse mode dust particles or
ice particles. The SPARTA model considers a 3D Cartesian domain with a
cellular structure. The latter is divided into grid-boxes, characterized
by a volume extinction coefficient $\beta_{\mathrm{ext}}$
or a scattering coefficient $\beta_{\mathrm{sca}}$, a scattering phase
matrix {\bf P}($\Theta$) with a scattering angle $\Theta$, and a
single scattering albedo $\omega_{\mathrm{o}}$. Directions are
specified by the azimuth and zenith angles.
Free path
lengths are simulated by random number processes as outlined by
\citet{marchuk1980}, with attenuation described by the
Beer-Lambert law. Scattering directions are calculated according to an
importance sampling method
\citep{collins1972,marchuk1980,emde2010}. Absorption is taken into
account by decreasing the initial photon weight according to the
Beer-Lambert law, using the absorption coefficient integrated along
the photon path. The surface contribution is calculated assuming isotropic
reflection (Lambertian surface) or ocean reflection as outlined by
\citet{mishchenko1997}. In order to obtain precise radiance
calculations for each wavelength, the so-called Local Estimate Method
(LEM) has been applied
\citep{collins1972,marchuk1980,marshak2005}.
Other variance reduction methods have not been implemented so far.
The number of photons used for all test cases in this
intercomparison was $10^{8}$.
\section{Definition of test cases}
\subsection{Model coordinate system and Stokes vector}
For all test cases the Stokes parameters, which are defined as time
averages of linear combinations of the electromagnetic field vector
\citep{chandrasekhar50, hansen1974,
mishchenko2002, wendisch2012}, are calculated:
\begin{eqnarray}
\begin{pmatrix}
I \\ Q \\ U \\ V
\end{pmatrix}
= \frac{1}{2}\sqrt{\frac{\epsilon}{\mu_p}}
\begin{pmatrix}
E_l E_l^\ast + E_r E_r^\ast \\
E_l E_l^\ast - E_r E_r^\ast \\
-E_l E_r^\ast - E_r E_l^\ast \\
i(E_l E_r^\ast- E_r E_l^\ast)
\end{pmatrix}
\label{eq:stokes}
\end{eqnarray}
Here, $E_l$ and $E_r$ are the components of the electric field vector
paralle{\bf l} and perpendicula{\bf r} to
the reference plane, respectively. The pre-factor on the right hand
side contains the electric permittivity $\epsilon$ and the magnetic
permeability $\mu_p$.
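The linear combinations in Eq.~(\ref{eq:stokes}) can be sketched numerically; the following illustrative helper (the function name is not part of any of the codes) drops the constant pre-factor $\frac{1}{2}\sqrt{\epsilon/\mu_p}$:

```python
import numpy as np

def stokes_from_fields(E_l, E_r):
    """Stokes parameters (I, Q, U, V) from the complex field components
    parallel (E_l) and perpendicular (E_r) to the reference plane,
    following the Stokes-vector definition in the text; the constant
    pre-factor is omitted."""
    I = (E_l * np.conj(E_l) + E_r * np.conj(E_r)).real
    Q = (E_l * np.conj(E_l) - E_r * np.conj(E_r)).real
    U = (-E_l * np.conj(E_r) - E_r * np.conj(E_l)).real
    V = (1j * (E_l * np.conj(E_r) - E_r * np.conj(E_l))).real
    return I, Q, U, V

# Circularly polarized light (E_r = i E_l) gives Q = U = 0 and |V| = I;
# fully linearly polarized light parallel to the plane gives Q = I.
I, Q, U, V = stokes_from_fields(1.0 + 0j, 1j)
```
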
The model coordinate system is defined by the vertical (z-axis), the Southern
direction (x-axis) and the Eastern direction (y-axis).
The Stokes vector is defined in the reference frame spanned by the
z-axis and the propagation direction of the radiation.
The sign of Stokes parameters $U$ and $V$ depends on the definition of
the model coordinate system. The results shown in this paper are for
the coordinate system as defined in the books by
\citet{hovenier2004} and \citet{mishchenko2002}. The sign of $U$ and
$V$ changes when the viewing azimuthal angle definition is changed from
anti-clockwise to clockwise and also when the definition of the
viewing zenith angle is with respect to the downward normal instead of
the upward normal.
The models IPOL, SHDOM and 3DMCPOL use the definition according to
\citet{hovenier2004}. SPARTA uses a
different coordinate system but the signs are consistent with
\citet{hovenier2004}. MYSTIC and Pstar also use different coordinate
systems and obtain opposite signs for $U$ and $V$; all results for
these Stokes components shown
in this paper have been multiplied by $-1$.
The position of the sun is defined by the vector pointing from the
surface to the sun position, i.e. the direction opposite to the
propagation direction of the incoming radiation.
The degree of polarization $P$ is defined as follows:
\begin{equation}
\label{eq:pol_deg}
P=\frac{\sqrt{(Q^2+U^2+V^2)}}{I}
\end{equation}
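As a minimal numerical sketch of Eq.~(\ref{eq:pol_deg}) (the function name is illustrative):

```python
import numpy as np

def degree_of_polarization(I, Q, U, V):
    """Degree of polarization P = sqrt(Q^2 + U^2 + V^2) / I."""
    return np.sqrt(Q**2 + U**2 + V**2) / I

# Unpolarized light (Q = U = V = 0) gives P = 0;
# fully polarized light gives P = 1.
```
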
The definition of the test cases can also be found at
\url{http://www.meteo.physik.uni-muenchen.de/~iprt}.
\subsection{Test cases including a single layer}
The first set of test cases are for a single layer including
different atmospheric constituents, i.e. molecules, aerosols and cloud
droplets. There are two cases including surface reflection, one is for
a Lambertian surface and the other includes an ocean reflectance
matrix.
\subsubsection{A1 -- Rayleigh scattering}
\label{sec:a1_setup}
The most simple setup contains one layer with scattering
(non-absorbing) molecules. The radiation field is calculated at the
top and at the bottom of the layer for various sun positions and an
optical thickness of 0.5 (see Tab.~\ref{tab:a1_setup}).
Viewing zenith angles range from 0\ensuremath{^\circ}\ to 80\ensuremath{^\circ}\ at the bottom
and from 100\ensuremath{^\circ}\ to 180\ensuremath{^\circ}\ at the top of the layer with an increment of
5\ensuremath{^\circ}. Viewing azimuth angles range from 0\ensuremath{^\circ}\ to
360\ensuremath{^\circ}\ with an increment of 5\ensuremath{^\circ}.
This test is partly contained in
the tables by \citet{coulson1960, nataraj2009}. We also include
non-zero solar azimuth angles to test whether the models use consistent
coordinate systems to define the Stokes vector; this will be
particularly important for future intercomparisons in
three-dimensional geometry. Also we
include a non-zero Rayleigh depolarization factor as defined by
\citet{hansen1974}, who defines the Rayleigh phase matrix as
follows:
\begin{align}
\label{eq:rayleigh_phase_matrix}
& {\bf P}(\Theta) = \\
& \begin{array}{l}
\Delta \left[
\begin{array}{cccc}
\frac{3}{4}(1+\cos^2\Theta) & -\frac{3}{4}\sin^2\Theta & 0 & 0 \\
-\frac{3}{4}\sin^2\Theta & \frac{3}{4}(1+\cos^2\Theta) & 0 & 0 \\
0 & 0 & \frac{3}{2}\cos \Theta & 0 \\
0 & 0 & 0 & \Delta' \frac{3}{2}\cos \Theta
\end{array}
\right] \nonumber \\[6ex]
\hspace{10ex}+(1-\Delta)\left[
\begin{array}{cccc}
\ 1\ &\ 0\ &\ 0\ &\ 0\ \\
\ 0\ &\ 0\ &\ 0\ &\ 0\ \\
\ 0\ &\ 0\ &\ 0\ &\ 0\ \\
\ 0\ &\ 0\ &\ 0\ &\ 0\
\end{array}
\ \right],
\end{array}
\end{align}
where
\begin{eqnarray}
\label{eq:depol}
\Delta=\frac{1-\delta}{1+\delta/2}, \qquad
\Delta'=\frac{1-2\delta}{1-\delta},
\end{eqnarray}
and $\delta$ is the depolarization factor that accounts for the
anisotropy of the molecules. $\Theta$ is the scattering angle,
i.e. the angle between incoming and scattered directions.
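Equations~(\ref{eq:rayleigh_phase_matrix}) and (\ref{eq:depol}) translate directly into a small routine; the following sketch (function name illustrative) reproduces, for example, the well-known single-scattering maximum degree of polarization $(1-\delta)/(1+\delta)$ at $\Theta=90\ensuremath{^\circ}$:

```python
import numpy as np

def rayleigh_phase_matrix(Theta, delta=0.0):
    """4x4 Rayleigh phase matrix for scattering angle Theta (radians)
    and depolarization factor delta."""
    D = (1.0 - delta) / (1.0 + delta / 2.0)
    Dp = (1.0 - 2.0 * delta) / (1.0 - delta)
    c = np.cos(Theta)
    a = 0.75 * (1.0 + c**2)
    b = -0.75 * np.sin(Theta)**2
    P = D * np.array([[a, b, 0.0, 0.0],
                      [b, a, 0.0, 0.0],
                      [0.0, 0.0, 1.5 * c, 0.0],
                      [0.0, 0.0, 0.0, Dp * 1.5 * c]])
    P[0, 0] += 1.0 - D   # isotropic depolarization term
    return P
```

For $\delta=0.1$ the ratio $-P_{12}/P_{11}$ at $\Theta=90\ensuremath{^\circ}$ is about 0.82 instead of 1, consistent with the reduced maximum polarization for anisotropic molecules.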
\begin{table}[h]
\centering
\begin{tabularx}{\columnwidth}{|X|X|X|}
\hline
solar zenith \newline angle $\theta_0$ & solar azimuth \newline angle $\phi_0$ &
depolarization factor $\delta$
\\ \hline
0\ensuremath{^\circ} & 65\ensuremath{^\circ} & 0.0 \\
30\ensuremath{^\circ} & 0\ensuremath{^\circ} & 0.03 \\
30\ensuremath{^\circ} & 65\ensuremath{^\circ} & 0.1 \\
\hline
\end{tabularx}
\caption{Sun positions and depolarization factors for test case A1,
pure Rayleigh scattering layer.}
\label{tab:a1_setup}
\end{table}
As an example Fig.~\ref{fig:a1_rayleigh} shows the radiation field for
$\theta_0$=30\ensuremath{^\circ}, $\phi_0$=65\ensuremath{^\circ}, and $\delta$=0.1. The $Q$-component of
the Stokes vector is negative in the principal plane. The
$U$-component is zero in the principal plane and becomes positive
in the clockwise azimuthal direction and negative in the
counter-clockwise direction. The maximum degree of polarization
$P$ is clearly visible at a scattering angle of 90\ensuremath{^\circ}.
\begin{figure*}[t]
\centering
\includegraphics[width=.8\hsize]{./mystic_alt1.png}\\[1ex]
\includegraphics[width=.8\hsize]{./mystic_alt0.png}
\caption{Results (MYSTIC)
for Rayleigh scattering layer with an optical thickness of 0.5 and
a depolarization factor of 0.1. The sun position is
($\theta_0$=30\ensuremath{^\circ}, $\phi_0$=65\ensuremath{^\circ}).
}
\label{fig:a1_rayleigh}
\end{figure*}
\subsubsection{A2 -- Rayleigh scattering above Lambertian surface}
\label{sec:a2_setup}
This test case includes one layer with non-absorbing molecules with an
optical thickness of 0.1 above a Lambertian surface with albedo
0.3. The Rayleigh depolarization factor is 0.03 and the sun position
is $(\theta_0=50\ensuremath{^\circ}, \phi_0=0\ensuremath{^\circ})$. The viewing directions are
as in Sec.~\ref{sec:a1_setup}. The only difference is that viewing
azimuths only range from 0\ensuremath{^\circ}\ to 180\ensuremath{^\circ}\ because the
radiation field is symmetric about the principal plane and the solar
azimuth angle is 0\ensuremath{^\circ}. The top row in Fig.~\ref{fig:a_cases} shows the
transmittance (top part of polar plots) and the reflectance (bottom
part of polar plots). The transmitted radiation field is still highly polarized
whereas the degree of polarization $P$ becomes much smaller in the
reflected field (from $\sim$60\% in the maximum to $\sim$30\%) because the total intensity becomes
higher due to surface reflection. The order of magnitude of the Stokes
components $Q$ and $U$ is similar in transmitted and reflected
radiation fields.
\subsubsection{A3 -- Spherical aerosol particles}
\label{sec:a3_setup}
\begin{figure}[h]
\centering
\includegraphics[width=1.\hsize]{./all_phamat.pdf}
\caption{Phase matrices of spherical and spheroidal aerosol
particles, and cloud droplets.}
\label{fig:phamats}
\end{figure}
Here we calculate the transmitted and the reflected radiance fields
for a layer including spherical aerosol particles. The optical
thickness of the layer is 0.2 and the sun position is
$(\theta_0=40\ensuremath{^\circ}, \phi_0=0\ensuremath{^\circ})$.
The aerosol microphysical properties
correspond to typical water soluble aerosol for a relative humidity of
50\% at 350~nm as provided in the OPAC database \citep{hess1998,emde2010}: the complex
refractive index is 1.422-2.649\ee{-3}$i$, the size distribution is
log-normal with a mode radius of 26.2~nm and a width of 2.24, and the
mass density is 1.42~g\,cm$^{-3}$. The optical properties including the
phase matrix $P$ are calculated using the Mie tool of the libRadtran
package \citep{mayer2005, wiscombe80a}. The phase matrix for
randomly oriented particles depends only
on the scattering angle $\Theta$:
\begin{eqnarray}
\label{eq:a3_phase_matrix}
\begin{array}{l}
{\bf P}(\Theta) =
\left[
\begin{array}{cccc}
P_1(\Theta) & P_2(\Theta) & 0 & 0 \\
P_2(\Theta) & P_5(\Theta) & 0 & 0 \\
0 & 0 & P_3(\Theta) & P_4(\Theta) \\
0 & 0 & -P_4(\Theta) & P_6(\Theta)
\end{array}
\right]
\end{array}
\end{eqnarray}
The black dashed-dotted lines in Fig.~\ref{fig:phamats} show the phase matrix
elements $P_1$--$P_4$. For spherical particles the phase matrix has
only four independent elements: $P_5$ is equal to $P_1$
and $P_6$ is equal to $P_3$.
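The block structure of Eq.~(\ref{eq:a3_phase_matrix}) can be assembled from the four independent elements; a sketch (the element values are placeholders and would come from a Mie calculation):

```python
import numpy as np

def spherical_phase_matrix(P1, P2, P3, P4):
    """Phase matrix of spherical particles at one scattering angle:
    four independent elements, with P5 = P1 and P6 = P3."""
    return np.array([[P1, P2, 0.0, 0.0],
                     [P2, P1, 0.0, 0.0],
                     [0.0, 0.0, P3, P4],
                     [0.0, 0.0, -P4, P3]])
```
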
The expansion moments over generalized spherical functions (see \citet[Sec. 2.8]{hovenier2004})
for the phase matrix have also been made available for models which require
those as input.
The radiance fields for this case are shown in the second row of
Fig.~\ref{fig:a_cases}. The maximum degree of polarization can be seen
at a scattering angle of approximately 100\ensuremath{^\circ}. The $V$-component
of the Stokes vector is non-zero because scattering by spherical
particles produces circular polarization. The size parameter of these
aerosol particles is small; therefore we do not see strong forward
scattering in the radiance field $I$.
\subsubsection{A4 -- Spheroidal aerosol particles}
\label{sec:a4_setup}
A similar scenario as given in Sec.~\ref{sec:a3_setup} is calculated
for a layer including prolate spheroids with an aspect ratio of
3. Again a log-normal size distribution was assumed, with a mode
radius of 390~nm and a width of 2. The complex refractive index is
1.52-0.01$i$ and the mass density is 2.6~g\,cm$^{-3}$.
The optical properties at 350~nm were calculated from a spheroid scattering data
base as described by \citet[Sec. 3.2]{gasteiger2011}. The scattering data base was
created using the T-matrix code by \citet{Mishchenko1998b} and the
geometric optics code by \citet{yang2007}.
When the asphericity of the aerosol particles is considered the
scattering phase matrix has six independent
elements.
The red dashed lines in Fig.~\ref{fig:phamats} show the phase matrix elements for the
spheroidal particles. In contrast to the spherical aerosol particles
used in Sec.~\ref{sec:a3_setup} the phase function $P_1$ shows much
more forward scattering as the size parameter, i.e. the ratio between
particle size and wavelength, is larger. Also we see a
positive maximum in $P_2/P_1$ at a scattering angle of approximately
170\ensuremath{^\circ}, whereas for Rayleigh scattering and for the small spherical
aerosol particles this ratio is always negative.
For this case the sun position is $(\theta_0=40\ensuremath{^\circ},
\phi_0=0\ensuremath{^\circ})$ and the optical thickness is 0.2.
The third row in Fig.~\ref{fig:a_cases} shows the results for this
case. The reflected radiance field (upper half of the polar plots)
shows two maxima in the degree of polarization which are at the
scattering angles of the minima and maxima in the ratio $P_2/P_1$.
The transmitted radiance field shows high radiances $I$ in the
forward scattering region. The $Q$ and $U$ components show a
characteristic pattern in the forward scattering region.
\subsubsection{A5 -- Liquid water cloud}
\label{sec:a5_setup}
This test case includes a single layer with typical cloud droplets. The
scattering phase matrix (see blue lines in Fig.~\ref{fig:phamats}) has been
computed using the Mie tool of libRadtran for a wavelength of 800~nm
assuming a gamma size distribution with an effective radius of 10~$\mu$m and a width of
0.1. The scattering phase matrix has the same structure as the one for
spherical aerosol particles (see Eq.~\ref{eq:a3_phase_matrix}).
The cloud optical thickness is set to 5. This test case is used
to check whether features of the phase matrix, e.g.\ the cloudbow, can be
simulated accurately. This is particularly important because
retrievals of the cloud effective radius from polarized observations
use the position and the width of the cloudbow \citep{breon2005,
alexandrov2012}. Furthermore, we want to check whether the forward
scattering peak of highly asymmetric phase matrices can be taken into
account accurately by the models. The sun position is at
$(\theta_0=50\ensuremath{^\circ}, \phi_0=0\ensuremath{^\circ})$.
The radiance is calculated at an
angular resolution of 1\ensuremath{^\circ}\ in the principal plane
(i.e. $\phi\in[0\ensuremath{^\circ}, 180\ensuremath{^\circ}]$) at the top and at the bottom of
the layer. The same angular resolution is calculated in the almucantar
plane (i.e. $\theta=50\ensuremath{^\circ}$ and $\phi\in[0\ensuremath{^\circ},1\ensuremath{^\circ},...,180\ensuremath{^\circ}]$).
The reflected and transmitted radiances in the principal and
almucantar planes are shown in Fig.~\ref{fig:a5_principal} and
Fig.~\ref{fig:a5_almucantar} respectively. The $I$-component of the Stokes vector
shows the strong forward scattering peak. The cloudbow can be seen in
the $I$- and much more pronounced in the $Q$-component of the Stokes
vector. The reason is that $Q$ is less affected by multiple scattering
than $I$, because photons that are scattered multiple times have
random polarization states.
\begin{figure}[h]
\centering
\includegraphics[width=1.\hsize]{./A5_principal_plane.pdf}
\caption{Transmitted and reflected radiance (IPOL simulations) for a
cloud layer. The viewing directions $\theta$ are in the solar
principal plane ($\phi\in[0\ensuremath{^\circ}, 180\ensuremath{^\circ}]$).}
\label{fig:a5_principal}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.9\hsize]{./A5_almucantar_plane.pdf}
\caption{Transmitted and reflected radiance (IPOL simulations) for a
cloud layer. Viewing directions are in the almucantar plane
($\theta=\theta_0, \phi\in[0\ensuremath{^\circ},1\ensuremath{^\circ},...,360\ensuremath{^\circ}]$).}
\label{fig:a5_almucantar}
\end{figure}
\subsubsection{A6 -- Rayleigh atmosphere above ocean surface}
\label{sec:a6_setup}
In order to test whether the surface reflection matrix is correctly
included in the models, the radiance field is calculated for a
Rayleigh scattering layer (optical thickness 0.1, Rayleigh
depolarization factor 0.03) with an underlying ocean surface. The
reflectance matrix is calculated using a combination of Fresnel
equations and wave distribution including shadowing effects as
implemented by \citet{mishchenko1997}. The real part of the refractive index of water
is assumed to be 1.33 and the imaginary part is zero. The wind speed
is assumed to be 2~m/s. The sun position is $(\theta_0=45\ensuremath{^\circ},
\phi_0=0\ensuremath{^\circ})$.
The last row of Fig.~\ref{fig:a_cases} shows the results for this
case. The sun-glint is clearly visible in the reflected radiance
($I$-, $Q$- and $U$-components of the Stokes vector). Note that the
sign of $Q$ and $U$ of the reflection in the sun-glint is the same as
for Rayleigh scattering.
\begin{figure*}[t]
\centering
\includegraphics[width=1.\hsize]{./mystic_A2.png}\\[0.8cm]
\includegraphics[width=1.\hsize]{./mystic_A3.png}\\[0.8cm]
\includegraphics[width=1.\hsize]{./mystic_A4.png}\\[0.8cm]
\includegraphics[width=1.\hsize]{./mystic_A6.png}
\caption{Results for Lambertian surface reflection (A2, 1st
row), spherical aerosol particles (A3, 2nd row), aspherical aerosol
particles (A4, third row), and ocean reflectance (A6, 4th row). For
details about the setup please refer to the text,
Secs.~\ref{sec:a2_setup}--\ref{sec:a6_setup}. The top part of the
polar plots shows the Stokes components $I$, $Q$, $U$ and $V$ and the degree of
polarization $P$ of the reflected radiance field $R$ and the bottom part
shows the transmitted radiance field $T$. All results shown
here are calculated using MYSTIC.
}
\label{fig:a_cases}
\end{figure*}
\subsection{Test cases with realistic atmospheric profiles}
All following test cases are for the US-standard atmosphere
\citep{anderson1986} from 0 to 30 km altitude. The atmosphere is
divided into 30 layers with a thickness of 1~km. The radiance field is
calculated at the surface, at the top of the atmosphere and at an
altitude of 1~km. The radiance is calculated for viewing zenith angles
from 0\ensuremath{^\circ}\ to 80\ensuremath{^\circ}\ (up-looking) and from 100\ensuremath{^\circ}\ to
180\ensuremath{^\circ}\ (down-looking) and for viewing azimuth angles from 0\ensuremath{^\circ}\
to 180\ensuremath{^\circ}. The angular resolution in zenith and azimuth is
5\ensuremath{^\circ}. The solar azimuth angle is generally 0\ensuremath{^\circ}. The Rayleigh
depolarization factor is 0.03 and the surface albedo is 0 for all cases
apart from the case with ocean surface (Sec.~\ref{sec:b4_setup}).
\begin{figure}[h]
\centering
\includegraphics[width=1.\hsize]{./profiles.pdf}
\caption{Scattering, absorption, and extinction coefficient profiles for
multilayer test cases. The left plot
shows the molecular scattering coefficient, the middle plot
the molecular absorption coefficient, and the right plot the
aerosol extinction coefficient. Different lines correspond to
case B1 (450~nm, dash-dotted), case B2 (325~nm, solid), and case B3
(350~nm, dashed).}
\label{fig:profiles}
\end{figure}
\subsubsection{B1 -- Rayleigh scattering for a standard atmosphere}
\label{sec:b1_setup}
This test case checks whether the discretization of the atmosphere into
plane-parallel layers is correctly implemented in the models.
The radiance field is calculated at 450~nm taking into account
only Rayleigh scattering, i.e. molecular absorption is neglected.
The sun position is $(\theta_0=60\ensuremath{^\circ}, \phi_0=0\ensuremath{^\circ})$.
The scattering coefficient profile is shown in the left plot of
Fig.~\ref{fig:profiles}, the dash-dotted line corresponds to
450~nm. The Rayleigh depolarization factor is 0.03.
The upper two rows of Fig.~\ref{fig:B1_B2_results} show the radiance
field at top of atmosphere and surface and at 1~km altitude. As
expected the pattern is very similar to the simulation with one layer
(see Sec.~\ref{sec:a1_setup}).
\subsubsection{B2 -- Rayleigh scattering and absorption for a standard atmosphere}
\label{sec:b2_setup}
This case checks whether absorption is correctly taken into account.
We calculate the radiance field at 325~nm for the US-standard
atmosphere. The sun position is $(\theta_0=60\ensuremath{^\circ},
\phi_0=0\ensuremath{^\circ})$.
The scattering coefficient profile is shown in the left plot of
Fig.~\ref{fig:profiles} and the absorption coefficient profile
is shown in the middle plot; the solid lines correspond to 325~nm.
Besides the strong ozone absorption at this wavelength, Rayleigh scattering
is also much stronger than at 450~nm.
The third and fourth row in
Fig.~\ref{fig:B1_B2_results} show the radiance field at top of
atmosphere, at the surface and at 1~km altitude. All Stokes components
show the characteristic Rayleigh scattering
pattern as for case B1 (pure Rayleigh scattering). However the degree
of polarization is smaller than for case B1.
\begin{figure*}[t!]
\centering
\includegraphics[width=1.\hsize]{./mystic_B1.png}\\[0.3cm]
\includegraphics[width=1.\hsize]{./mystic_B2.png}
\caption{Radiation fields at surface (0km), top of atmosphere (30km),
and at 1~km altitude. The upper two rows correspond to the pure
Rayleigh scattering case (B1, see Sec.~\ref{sec:b1_setup}) and the lower 2
rows are for Rayleigh scattering and absorption (B2, see
Sec.~\ref{sec:b2_setup}).
The upper parts of the polar plots are for
downlooking directions and the lower parts for uplooking. The
Stokes components and the degree of polarization are shown.}
\label{fig:B1_B2_results}
\end{figure*}
\subsubsection{B3 -- Aerosol profile and standard atmosphere}
\label{sec:b3_setup}
Here we check whether the models can correctly handle different
atmospheric constituents with similar extinction coefficients in the
same layers.
We perform simulations at 350~nm for a standard
atmosphere including molecular absorption and scattering and
additionally an aerosol profile similar to \citet{shettle89} with a total optical
thickness of 0.2. Molecular absorption and scattering profiles and the
aerosol extinction profile are shown in Fig.~\ref{fig:profiles}
(dashed lines). We assume spheroidal aerosol particles with the same
optical properties as in Sec.~\ref{sec:a4_setup}. The sun position is $(\theta_0=30\ensuremath{^\circ},
\phi_0=0\ensuremath{^\circ})$. The upper two rows of Fig.~\ref{fig:B3_B4_results}
show the radiance field at top of atmosphere, at the surface and at 1~km
altitude. The total intensity $I$ for up-looking directions is dominated
by the forward scattering peak. The polarization pattern is dominated by
Rayleigh scattering and the features in the
radiation field for test case A4 (layer with aspherical aerosol
particles, see Fig.~\ref{fig:a_cases}), e.g. the two maxima in the
degree of polarization or the patterns in $U$ and $V$ in the forward
scattering region, are no longer visible. The main effect of aerosol
is a decrease in the degree of polarization which is most likely due to
the dilution of the strong Rayleigh polarization by the weaker aerosol
polarization.
\subsubsection{B4 -- Cloud above ocean surface}
\label{sec:b4_setup}
This most sophisticated test case includes a cloud layer embedded in a
standard atmosphere above an ocean surface.
The calculation is performed at a wavelength of
800~nm, molecular scattering is included, absorption is neglected.
The ocean surface is defined as in
Sec.~\ref{sec:a6_setup}; here we also assume a wind speed of 2~m/s
and we use 1.33+0$i$ as the refractive index for water.
Additionally, a cloud layer with an optical thickness of 5 is included
from 2~km to 3~km altitude. The cloud optical properties are the same
as in Sec.~\ref{sec:a5_setup}. The sun position for this case is $(\theta_0=60\ensuremath{^\circ},
\phi_0=0\ensuremath{^\circ})$.
The lower two rows of Fig.~\ref{fig:B3_B4_results}
show the radiance field at top of atmosphere, at the surface and at 1~km
altitude. The down-looking radiance field (reflectance) at the top of
the atmosphere shows
the cloudbow very clearly with high contrast in the degree of
polarization, whereas in the total intensity the feature is very
weak. The degree of polarization for down-looking directions at 1~km
altitude shows an interesting feature: it is $\sim$90\% for
a viewing angle of $\sim$53\ensuremath{^\circ}, corresponding to the
Brewster angle for the water surface ($\theta_B=\arctan(1.33)$). Due to multiple
scattering in the cloud layer, the water surface receives incident
radiation from all directions. Radiation hitting the surface
at the Brewster angle is fully polarized after reflection, and the
reflected direction is at the same angle, which explains the observed
pattern. The ``ring'' is smeared because the
ocean surface with waves is not an ideal mirror.
As for the aerosol case, the total intensity for the up-looking
directions at the surface and at 1~km altitude
is dominated by the sharp forward scattering peak. The patterns for $Q$
and $U$ for up-looking directions show a characteristic cloud
scattering pattern which is very different from the Rayleigh
scattering pattern.
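The Brewster-angle interpretation above can be checked with the Fresnel equations for a flat water surface (a simplified sketch that ignores waves and shadowing; the function name is illustrative):

```python
import numpy as np

def fresnel_dop(theta_i_deg, n=1.33):
    """Degree of polarization of initially unpolarized light after
    specular Fresnel reflection off a flat water surface."""
    ti = np.radians(theta_i_deg)
    tt = np.arcsin(np.sin(ti) / n)       # Snell's law
    r_s = (np.cos(ti) - n * np.cos(tt)) / (np.cos(ti) + n * np.cos(tt))
    r_p = (n * np.cos(ti) - np.cos(tt)) / (n * np.cos(ti) + np.cos(tt))
    Rs, Rp = r_s**2, r_p**2              # perpendicular / parallel reflectance
    return abs(Rs - Rp) / (Rs + Rp)

theta_B = np.degrees(np.arctan(1.33))    # Brewster angle, about 53 degrees
```

At $\theta_B$ the parallel reflection coefficient vanishes, so the specularly reflected radiation is fully polarized, while at normal incidence the reflected light stays unpolarized.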
\begin{figure*}[t!]
\centering
\includegraphics[width=1.\hsize]{./mystic_B3.png}\\[0.3cm]
\includegraphics[width=1.\hsize]{./mystic_B4.png}
\caption{Radiation fields at surface (0km), top of atmosphere (30km),
and at 1~km altitude. The upper two rows correspond to the case
with aerosol profile (B3, see Sec.~\ref{sec:b3_setup}) and the lower 2
rows are for a cloud layer above an ocean surface (B4, see
Sec.~\ref{sec:b4_setup}).
The upper parts of the polar plots are for
downlooking directions and the lower parts for uplooking. The
Stokes components and the degree of polarization are shown.}
\label{fig:B3_B4_results}
\end{figure*}
\section{Model intercomparison}
\label{sec:model_intercomparison}
This section presents results of all models for an exemplary selection
of viewing angles. For each test case, one or two plots show the
simulated Stokes vector and the absolute differences between MYSTIC
and each of IPOL, SPARTA, Pstar, SHDOM, and 3DMCPOL.
Out of range values are shown as arrows in the difference plots.
Comparison plots for all viewing directions are provided on the IPRT
website and as supplementary material.
In order to quantify the level of agreement between the models we
calculate the relative root mean square differences
$\Delta_m$ between MYSTIC and other codes for the full radiation
field including all up- and down-looking directions. This yields one
representative number for each test case.
We define $\Delta_m$ for $I$ as follows:
\begin{equation}
\label{eq:rel_diff}
\Delta_m=\frac{\sqrt{\sum_{i=1}^{N}{\left(I^{i}_{\rm MYSTIC}
- I^{i}_ {m}\right)^2}}}
{\sqrt{\sum_{i=1}^{N}{\left(I^{i}_{\rm MYSTIC}\right)^2}}}
\end{equation}
Here $m$ denotes the radiative transfer model and the summation is
done over all $N$ directions, for which the radiation field is
calculated. For the other Stokes components $\Delta_m$ is calculated
accordingly.
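Eq.~(\ref{eq:rel_diff}) corresponds to the following sketch (array names are illustrative):

```python
import numpy as np

def rel_rms_difference(ref, model):
    """Relative root mean square difference Delta_m between a reference
    radiation field (MYSTIC) and another model, summed over all
    directions for which the field is calculated."""
    ref, model = np.asarray(ref, float), np.asarray(model, float)
    return np.sqrt(np.sum((ref - model)**2)) / np.sqrt(np.sum(ref**2))
```
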
We look at relative root mean square differences because the
Stokes components $Q$, $U$ and $V$ are differences between intensities
(see Eq. \ref{eq:stokes}) and for some geometries they have zero or
extremely small values. Mean relative differences are therefore not
meaningful because relative differences
for radiance values very close to zero become very large.
A weakness of the definition of $\Delta_m$ is that it
might be dominated by a few very large differences at specific viewing
directions.
For cases with strongly peaked phase functions (i.e. cloud cases)
$\Delta_m$ is dominated by the forward scattering region. Therefore we
also calculated $\Delta_m$ without the solar aureole region,
i.e. directions up to 10\ensuremath{^\circ}\ from the sun direction are taken out
of the summation in Eq.~\ref{eq:rel_diff}.
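Excluding the solar aureole requires the angular distance between each viewing direction and the direct solar beam; a minimal sketch (the beam geometry shown for $\theta_0=50\ensuremath{^\circ}$, $\phi_0=0\ensuremath{^\circ}$ and the sign conventions are illustrative assumptions):

```python
import numpy as np

def angular_distance(theta1, phi1, theta2, phi2):
    """Great-circle angle (degrees) between two directions given by
    zenith and azimuth angles in degrees."""
    t1, p1, t2, p2 = np.radians([theta1, phi1, theta2, phi2])
    mu = np.cos(t1) * np.cos(t2) + np.sin(t1) * np.sin(t2) * np.cos(p1 - p2)
    return np.degrees(np.arccos(np.clip(mu, -1.0, 1.0)))

# Keep a viewing direction only if it is more than 10 degrees away from
# the direct beam; for a sun at (theta_0 = 50, phi_0 = 0) the beam
# propagates towards zenith angle 130 and azimuth 180 in this geometry.
keep = angular_distance(130.0, 180.0, 160.0, 180.0) > 10.0
```
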
\subsection{Test cases including a single layer}
The relative root mean square differences for the single layer test cases are
listed in Table~\ref{tab:a_results}; in most cases the level of agreement is
of the order of 0.1\%. For the cloud case A5, the MYSTIC reference results,
especially for circular polarization, are noisier; hence the
relative root mean square difference becomes larger, although most
models still agree within the expected accuracy range.
\begin{table*}[t!]
\centering
\begin{tabular}{|l l|c|c|c|c|c|c|c|c|c|}
\hline
model name & & A1 & A2 & A3 & A4 & A5$^{\rm pp}$ & A5$^{\rm pp}_{\rm part}$ & A5$^{\rm al}$ & A5$^{\rm al}_{\rm part}$ & A6 \\ \hline \hline
IPOL & I & 0.017 & 0.009 & 0.102 & 0.088 & 0.183 & 0.077 & 0.178 & 0.075& 0.064 \\
& Q & 0.024 & 0.036 & 0.287 & 0.028 & 0.794 & 0.771 & 0.816 & 0.803& 0.124 \\
& U & 0.029 & 0.029 & 0.295 & 0.034 & - & - & 1.058 & 1.039& 0.190 \\
& V & - & - & 0.800 & 1.277 & - & - & 28.313 & 26.735& - \\
\hline
3DMCPOL & I & 0.092 & 0.010 & 0.051 & 0.009 & 2.432 & 0.213 & 2.665 & 0.197& 0.499 \\
& Q & 1.681 & 0.354 & 0.117 & 0.041 & 1.221 & 0.986 & 1.094 & 1.070& 2.574 \\
& U & 2.129 & 0.275 & 0.108 & 0.061 & - & - & 1.264 & 1.198& 22.359 \\
& V & - & - & 0.519 & 1.840 & - & - & 36.313 & 34.160& - \\
\hline
SPARTA & I & 0.088 & 0.011 & 0.051 & 0.027 & 0.183 & 0.198 & 0.213 & 0.143& 0.146 \\
& Q & 0.367 & 0.055 & 0.120 & 0.041 & 2.256 & 1.725 & 2.050 & 2.011& 0.152 \\
& U & 0.275 & 0.042 & 0.084 & 0.060 & - & - & 2.928 & 2.881& 0.231 \\
& V & - & - & 0.639 & 2.607 & - & - & 67.027 & 64.972& - \\
\hline
SHDOM & I & 0.044 & 0.068 & 0.111 & 0.077 & 3.383 & 0.077 & 3.640 & 0.075& 0.089 \\
& Q & 0.034 & 0.233 & 0.274 & 0.051 & 8.735 & 9.311 & 0.841 & 0.805& 0.128 \\
& U & 0.038 & 0.112 & 0.270 & 0.055 & - & - & 1.084 & 1.065& 0.195 \\
& V & - & - & 0.567 & 1.818 & - & - & 28.745 & 27.062& - \\
\hline
Pstar & I & 0.017 & 0.009 & 0.100 & 0.025 & 0.336 & 0.081 & 0.159 & 0.075& 0.100 \\
& Q & 0.024 & 0.036 & 0.289 & 0.028 & 0.809 & 0.775 & 0.827 & 0.811& 0.165 \\
& U & 0.029 & 0.029 & 0.299 & 0.035 & - & - & 1.048 & 1.029& 0.273 \\
& V & - & - & 0.808 & 1.278 & - & - & 28.302 & 26.732& - \\
\hline
\end{tabular}
\caption{Relative root mean square differences $\Delta_m$ in per cent between MYSTIC and
IPOL, 3DMCPOL, SPARTA, SHDOM and Pstar for the single layer
intercomparison cases.
For the cloud cases the columns A5$^{\rm pp}_{\rm part}$ and
A5$^{\rm al}_{\rm part}$ are added which include $\Delta_m$
calculated without the solar aureole region,
i.e. viewing angles up to 10\ensuremath{^\circ}\ from the sun direction are taken
out of the summation in Eq.~\ref{eq:rel_diff}. }
\label{tab:a_results}
\end{table*}
\subsubsection{A1 -- Rayleigh scattering}
The left plots in Fig.~\ref{fig:a1_results_1} show the Stokes vector calculated at the
top of the layer for down-looking directions. The sun is located in
the zenith, which means that $U$ is 0.
The Rayleigh depolarization factor in this case is
0. The right plots show the absolute differences between the
models. The grey area corresponds to two standard deviations (2$\sigma$)
of the MYSTIC results; this means that with a probability of 95.4\%
the difference between the MYSTIC result and the true value lies in
the grey area.
\label{sec:a1_results}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseA1_depol0_altitude1_90.pdf}
\caption{Test case A1, Rayleigh scattering layer,
depolarization factor 0, sun in the zenith, $\phi$=90\ensuremath{^\circ}.
Left: Stokes vector at the top of layer. Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$ of the
Monte Carlo calculations.}
\label{fig:a1_results_1}
\end{figure}
Looking at the left plots, we do not see differences between
the models; all lines are on top of each other.
The right plots show that the differences between the models are three
orders of magnitude smaller than the radiance values. The level of
agreement between IPOL and Pstar is even better since the symbols of the
two models always lie on top of each other. This is not surprising
because the models use the same method. For IPOL, SPARTA and Pstar
the differences are centered about 0 and lie well within the 2$\sigma$
range; hence we may conclude that these models agree perfectly with
MYSTIC at a very high accuracy level. For $I$ the SHDOM results are
slightly smaller than MYSTIC whereas the 3DMCPOL results
are slightly larger. For $Q$, 3DMCPOL is slightly
smaller than MYSTIC. The difference plots show a similar progression for all
models; this is due to the statistical noise of the MYSTIC results. The Monte Carlo
models SPARTA and 3DMCPOL use a technique to sample all directions
based on the same photon paths. In this case the statistical error is
the same for all viewing directions whereas for MYSTIC each direction
is calculated separately with an independent statistical error.
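The contrast between correlated direction sampling (SPARTA, 3DMCPOL) and independent per-direction estimates (MYSTIC) can be caricatured as follows; the extreme case below, where the correlated estimator shares a single noise realisation across all directions, is a deliberate simplification of what those codes actually do.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dirs, n_batches = 90, 400

# MYSTIC-like: each viewing direction averages its own independent
# photon batches, so the noise scatters from angle to angle.
independent = rng.normal(0.0, 1.0, (n_dirs, n_batches)).mean(axis=1)

# SPARTA/3DMCPOL-like caricature: all directions score the very same
# photon paths, so here they share one noise realisation exactly.
correlated = np.full(n_dirs, rng.normal(0.0, 1.0, n_batches).mean())

# Per-direction standard errors match, but the angle-to-angle scatter
# differs: zero in the fully correlated caricature, finite otherwise.
scatter_indep = float(independent.std())
scatter_corr = float(correlated.std())
```

This is why the model-minus-MYSTIC curves of the correlated codes vary smoothly with viewing angle, while the MYSTIC noise itself imprints a similar progression on all difference plots.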
\begin{figure}[htbp]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseA1_depol10_altitude0_90.pdf}
\caption{ Test case A1, Rayleigh scattering layer,
depolarization factor 0.1, sun at $(\theta_0=30\ensuremath{^\circ},
\phi_0=65\ensuremath{^\circ})$, $\phi$=90\ensuremath{^\circ}.
Left: Stokes vector at the surface. Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.
Arrows indicate out-of-range values.}
\label{fig:a1_results_2}
\end{figure}
Fig.~\ref{fig:a1_results_2} shows results for a Rayleigh scattering
matrix with depolarization factor 0.1. Here the solar zenith angle is
30\ensuremath{^\circ}, hence $Q$ and $U$ are non-zero. In the left plots we see
that almost all model results are on top of each other, only the
3DMCPOL values are slightly below the results of the other models. The
maximum deviation is about 4\% for $Q$; the differences between MYSTIC
and 3DMCPOL are mostly out of range in the right plots. This indicates
that the depolarization factor in 3DMCPOL is not correctly implemented.
The depolarization factor bias
observed for 3DMCPOL was not corrected; consequently, it affects
the results in the following sections.
The models IPOL and Pstar again agree perfectly, as in
Fig.~\ref{fig:a1_results_2} the symbols showing the differences from
MYSTIC lie on top of each other. The SPARTA results are very close to
IPOL and Pstar. The differences between IPOL, Pstar and SPARTA are
centered about 0 and well within the 2$\sigma$ range; hence they agree
perfectly with MYSTIC. For SHDOM the same is true for $Q$ and $U$,
whereas for $I$ SHDOM is systematically slightly smaller than MYSTIC.
Tab.~\ref{tab:a_results} shows that for test case A1, $\Delta_m$ is smaller than
0.03\% for all Stokes components for the models IPOL and Pstar. Indeed, the
values of $\Delta_m$ are exactly the same for IPOL and Pstar, which
shows that the models use exactly the same method to solve the
radiative transfer equation for Rayleigh scattering. For SHDOM,
$\Delta_m$ is smaller than 0.05\%, for SPARTA smaller than 0.4\%, and for
3DMCPOL smaller than 2.1\%.
\subsubsection{A2 -- Rayleigh atmosphere above Lambertian surface}
\label{sec:a2_results}
\begin{figure}[t]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseA2_depol3_altitude1_45.pdf}
\caption{Test case A2, Rayleigh scattering layer (depolarization
factor 0.03) above Lambertian
surface with albedo 0.3, sun at $(\theta_0=50\ensuremath{^\circ},
\phi_0=0\ensuremath{^\circ})$, $\phi$=45\ensuremath{^\circ}.
Left: Stokes vector at top of atmosphere. Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$. Arrows
indicate out-of-range values.}
\label{fig:a2_results}
\end{figure}
Fig.~\ref{fig:a2_results} shows results for the Rayleigh atmosphere
above a Lambertian surface. In the radiance plots on the left side we
see that all lines are on top of each other. The difference plots show that,
again, the models IPOL and Pstar agree: their symbols lie exactly
on top of each other. The differences for IPOL, Pstar and SPARTA
scatter around 0 and mostly lie within the 2$\sigma$ range; hence
these models agree perfectly within the expected accuracy of the
MYSTIC results. For $I$ the same is true for 3DMCPOL, whereas for $Q$
and $U$ the 3DMCPOL results are systematically smaller than
MYSTIC. The reason for this small, but systematic difference could be,
as for the case without surface,
a slightly different implementation of the Rayleigh depolarization
factor, which is 0.03 in this case.
The SHDOM results are slightly below MYSTIC for $I$ and $Q$ and
slightly above for $U$.
Quantitatively, Tab.~\ref{tab:a_results} shows that the level of
agreement between MYSTIC and the models IPOL, Pstar and SPARTA is
$\Delta_m<$0.05\%, and for
SHDOM and 3DMCPOL $\Delta_m<$0.35\%.
\subsubsection{A3 -- Spherical aerosol particles}
\label{sec:a3_results}
\begin{figure}[t]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseA3_depol0_altitude1_135.pdf}
\caption{Test case A3, layer with spherical aerosol particles,
optical thickness 0.2, sun
at $(\theta_0=40\ensuremath{^\circ}, \phi_0=0\ensuremath{^\circ})$, $\phi$=135\ensuremath{^\circ}.
Left: Stokes vector at
top of atmosphere. Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.}
\label{fig:a3_results}
\end{figure}
Fig.~\ref{fig:a3_results} shows that all models agree very well for
the layer with the small spherical aerosol particles. The left side of
the plot shows that all models reproduce the full Stokes vector very
accurately, including the circular-polarization component $V$: all curves
lie on top of each other. The difference plots on the right show
that the differences for all models lie in the 2$\sigma$ range.
For $I$ the 3DMCPOL results seem systematically
larger than MYSTIC, although they are still in the 2$\sigma$ range.
The relative root mean square differences are $<$0.3\% for $I$,
$Q$ and $U$ and $<$0.8\% for $V$ (see Tab.~\ref{tab:a_results}).
\subsubsection{A4 -- Spheroidal aerosol particles}
\label{sec:a4_results}
\begin{figure}[t]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseA4_depol0_altitude1_45.pdf}
\caption{Test case A4, layer with spheroidal aerosol particles, optical thickness 0.2, sun
at $(\theta_0=40\ensuremath{^\circ}, \phi_0=0\ensuremath{^\circ})$, $\phi$=45\ensuremath{^\circ}.
Left: Stokes vector at the top of the atmosphere. Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.}
\label{fig:a4_results1}
\end{figure}
The left plots in Fig.~\ref{fig:a4_results1} show the Stokes vector
at the top of the atmosphere, again all lines are on top of each other.
The absolute
differences between MYSTIC and all other models are in the 2$\sigma$
range for all Stokes components.
The scattering phase function for the particles considered here shows
strong forward scattering.
\begin{figure}[t]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseA4_depol0_altitude0_0.pdf}
\caption{Test case A4, settings as in Fig.~\ref{fig:a4_results1} but
for $\phi$=0\ensuremath{^\circ}.
Left: Stokes vector at the surface.
Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.}
\label{fig:a4_results2}
\end{figure}
Fig.~\ref{fig:a4_results2} shows the Stokes
vector at the surface for a viewing azimuth of 0\ensuremath{^\circ}, for which
$U$ and $V$ are 0. This geometry
includes the sun direction and the forward scattering peak. The left
plots show that the forward scattering peak is also calculated
accurately by all models.
The models 3DMCPOL, Pstar and SPARTA agree with MYSTIC within the 2$\sigma$
range, even in the exact forward scattering direction,
where SHDOM and IPOL are slightly lower (0.1\%).
For $Q$ all models agree with MYSTIC in the expected
2$\sigma$ range.
Tab.~\ref{tab:a_results} shows that the relative root mean square
difference between MYSTIC and all models is
0.09\% or better for $I$, $Q$, and $U$.
For $V$ the differences are
of the order of 1--3\%. The absolute value of $V$ is of the order
of $10^{-7}$, and the statistical uncertainty
of the Monte Carlo results for such small radiances is about 1--3\%.
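The percent-level uncertainty quoted for the very small $V$ component follows from the usual $1/\sqrt{N}$ scaling of Monte Carlo noise. In the sketch below, the signal amplitude stands in for a $V$ value of order $10^{-7}$ and the per-photon noise amplitude is invented, so only the scaling with photon number is meaningful.

```python
import numpy as np

def rel_error(n_photons, signal=1e-7, per_photon_noise=2e-6):
    """Relative standard error of a Monte Carlo mean for a weak signal.
    'signal' stands in for a V component of order 1e-7; the noise
    amplitude is an illustrative assumption, not a simulated value."""
    return per_photon_noise / (np.sqrt(n_photons) * signal)

e1 = rel_error(1e8)
e2 = rel_error(4e8)   # quadrupling the photons halves the relative error
```

This is why halving the noise on $V$ costs a factor of four in photons, and why variance reduction is the more attractive route.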
\subsubsection{A5 -- Liquid water cloud}
\label{sec:a5_results}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.\hsize]{./diff_pp_transmittance.pdf}
\caption{Test case A5, cloud layer with optical thickness of 5,
effective radius of 10~$\mu$m, principal plane, sun at $(\theta_0=50\ensuremath{^\circ}, \phi_0=0\ensuremath{^\circ})$.
Left: Stokes vector at the surface. Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.
A marker is plotted only at every 5\ensuremath{^\circ}, although the
simulations were done in 1\ensuremath{^\circ}\ steps.
Arrows indicate out-of-range values.}
\label{fig:a5_results1}
\end{figure}
Fig.~\ref{fig:a5_results1} shows results in the principal plane. In the
region of the forward scattering peak the models IPOL and SPARTA agree
with MYSTIC within
the expected accuracy (2$\sigma$ of the MYSTIC calculation).
For 3DMCPOL the value of the forward
scattering peak in total intensity $I$ is about 3\% smaller, whereas
for Pstar and SHDOM it is larger (about 0.5\% and 6\%, respectively).
SHDOM has an artefact in the second
Stokes component $Q$ around 0$^{\circ}$ viewing zenith angle. Further
tests have shown that this artefact around 0$^{\circ}$ and 180$^{\circ}$
viewing zenith angles occurs only with highly peaked phase functions and
is largest for moderate optical depths (there is no artefact for single
scattering). The width of the artefact decreases with higher SHDOM
angular resolution. The artefact is believed to result from a
deficiency in the delta-M formulation for polarization.
\begin{figure}[t]
\centering
\includegraphics[width=1.\hsize]{./diff_al_reflectance.pdf}
\caption{Test case A5, settings as in Fig.~\ref{fig:a5_results1} but
for ``almucantar plane'', viewing zenith angle of 50\ensuremath{^\circ}.
Left: Stokes vector at the top of the layer. Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.
A marker is plotted only at every 5\ensuremath{^\circ}, although the
simulations were done in 1\ensuremath{^\circ}\ steps.}
\label{fig:a5_results2}
\end{figure}
Fig.~\ref{fig:a5_results2} shows the reflected radiance in the
``almucantar'' plane, i.e. the viewing zenith angle is constant and
corresponds to the solar zenith angle of 50\ensuremath{^\circ}.
Here we clearly see that the standard
deviation of 3DMCPOL and SPARTA is a little higher than that of MYSTIC;
therefore several points lie outside the 2$\sigma$ range.
For SPARTA this is not surprising because it does not use any variance
reduction techniques for highly asymmetric scattering phase
functions. For the very small
Stokes component $V$, all Monte Carlo models are quite noisy, which can
be seen in the radiance plot for $V$. Using more photons or a
better variance reduction method could decrease the noise.
Within the Monte Carlo noise the models
agree perfectly.
Tab.~\ref{tab:a_results} shows that the smallest relative root mean square
difference is found for IPOL and Pstar, with $\Delta_m <$0.4\% for
$I$ and values about 1\%
for $Q$ and $U$. For $V$, $\Delta_m$ is much larger, about 30\%. The
reason is the large noise in the MYSTIC calculations. Except for $V$, a
good agreement (within the range of 0.2\%--3\%) is found for the models
3DMCPOL and SPARTA. For SHDOM, $\Delta_m$ for $Q$ is dominated by the
artefact mentioned before. Without these specific directions (forward
scattering and 0\ensuremath{^\circ}\ viewing zenith angle),
SHDOM also agrees perfectly with all other
models (see column A5$^{al}_{part}$ in Tab.~\ref{tab:a_results}).
\subsubsection{A6 -- Rayleigh atmosphere above ocean surface}
\label{sec:a6_results}
\begin{figure}[t]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseA6_depol3_altitude1_10.pdf}
\caption{Test case A6, Rayleigh scattering layer above ocean surface
with windspeed of 2~m/s,
sun at $(\theta_0=45\ensuremath{^\circ}, \phi_0=0\ensuremath{^\circ})$, $\phi$=10\ensuremath{^\circ}.
Left: Stokes vector at top of atmosphere.
Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.
Arrows indicate out-of-range values.}
\label{fig:a6_results}
\end{figure}
Fig.~\ref{fig:a6_results} shows the reflected Stokes vector for the
Rayleigh scattering layer above the ocean surface. All models produce
the sunglint and the models MYSTIC, IPOL, SHDOM and SPARTA agree very well.
For Pstar, we see a small bias for all Stokes components in the shown
geometry. The 3DMCPOL results show larger deviations
for all Stokes components due to an error in the code which has not
yet been identified.
The numbers in Tab.~\ref{tab:a_results} show that the models IPOL,
SPARTA, Pstar and SHDOM agree very well with MYSTIC, with
$\Delta_m\lesssim$0.3\%. The 3DMCPOL results are inaccurate, especially for $U$.
\subsection{Test cases with realistic atmospheric profiles}
For the multi-layer test cases, the radiation fields have been
calculated at the surface, at the top of the atmosphere and inside the
atmosphere at an altitude of 1~km. However, at present only
MYSTIC, SHDOM and
Pstar are capable of calculating the radiance field inside the
atmosphere.
The relative root mean square differences for the multi-layer test cases are
listed in Table~\ref{tab:b_results}; the details are discussed in the
following sections.
\begin{table*}[t!]
\centering
\begin{tabular}{|l l|c|c|c|c|c|c|}
\hline
model name & & B1 & B2 & B3 & B3$_{\rm part}$ & B4 & B4$_{\rm part}$ \\ \hline \hline
IPOL & I & 0.016 & 0.012 & 0.060 & 0.014 & 0.111 & 0.091 \\
& Q & 0.024 & 0.023 & 0.049 & 0.034 & 0.575 & 0.472 \\
& U & 0.019 & 0.019 & 0.043 & 0.026 & 0.601 & 0.478 \\
& V & - & - & 1.489 & 1.073 & 19.377 & 15.091 \\
\hline
3DMCPOL & I & 0.033 & 1.387 & 0.040 & 0.064 & 1.152 & 0.727 \\
& Q & 0.558 & 0.975 & 1.296 & 1.261 & 27.614 & 14.805 \\
& U & 0.505 & 0.517 & 1.037 & 0.979 & 5.965 & 5.878 \\
& V & - & - & 4.347 & 3.833 & 94.928 & 62.132 \\
\hline
SPARTA & I & 0.020 & 0.013 & 0.055 & 0.026 & 0.344 & 0.326 \\
& Q & 0.030 & 0.029 & 0.071 & 0.045 & 3.710 & 2.699 \\
& U & 0.023 & 0.023 & 0.064 & 0.036 & 4.368 & 2.856 \\
& V & - & - & 1.982 & 1.439 & 182.181 & 88.770 \\
\hline
SHDOM & I & 0.052 & 0.054 & 0.426 & 0.069 & 1.059 & 0.109 \\
& Q & 0.068 & 0.071 & 0.148 & 0.085 & 2.377 & 1.654 \\
& U & 0.040 & 0.054 & 0.136 & 0.057 & 3.846 & 2.153 \\
& V & - & - & 2.567 & 1.548 & 23.672 & 19.700 \\
\hline
Pstar & I & 0.017 & 0.013 & 0.154 & 0.017 & 34.947 & 0.182 \\
& Q & 0.025 & 0.026 & 0.052 & 0.039 & 0.644 & 0.579 \\
& U & 0.020 & 0.021 & 0.047 & 0.031 & 2.583 & 2.661 \\
& V & - & - & 1.553 & 1.127 & 23.695 & 19.730 \\
\hline
\end{tabular}
\caption{Relative root mean square differences $\Delta_m$ in per cent between MYSTIC and
IPOL, 3DMCPOL, SPARTA, SHDOM and Pstar for the multi-layer
intercomparison cases. For B3$_{\rm part}$ and B4$_{\rm part}$ the
solar aureole region is taken out of the calculation of $\Delta_m$,
i.e. viewing angles up to 10\ensuremath{^\circ}\ from the sun direction are taken
out of the summation in Eq.~\ref{eq:rel_diff}.}
\label{tab:b_results}
\end{table*}
\subsubsection{B1 -- Rayleigh scattering for a standard atmosphere}
\label{sec:b1_results}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseB1_depol3_altitude0_90.pdf}
\caption{Test case B1, US-standard atmosphere, 450~nm, no absorption, sun at $(\theta_0=60\ensuremath{^\circ}, \phi_0=0\ensuremath{^\circ})$, $\phi$=90\ensuremath{^\circ}. Left: Stokes vector at the surface. Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.
Arrows indicate out-of-range values.}
\label{fig:b1_results}
\end{figure}
Fig.~\ref{fig:b1_results} shows the results for a multi-layer
atmosphere with pure Rayleigh scattering.
As for the 1-layer case, we find a very good agreement between all
models for pure Rayleigh scattering. The models IPOL, SPARTA and Pstar
agree with each other. SHDOM is slightly smaller
for $I$ and $Q$ and slightly larger for $U$. 3DMCPOL shows small but
systematic deviations, which might again be due to a different
implementation of the Rayleigh depolarization factor, which was set to
0.03 in this test case.
Tab.~\ref{tab:b_results} shows that the relative root mean square
deviations between MYSTIC and IPOL,
SPARTA, SHDOM, and Pstar are mostly smaller than 0.05\%. For 3DMCPOL, $\Delta_m$ is
0.03\% for $I$ and about 0.5\% for $Q$ and $U$.
\subsubsection{B2 -- Rayleigh scattering and absorption for a standard atmosphere}
\label{sec:b2_results}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseB2_depol3_altitude0_45.pdf}
\caption{Test case B2, US-standard atmosphere, scattering and
absorption, 325~nm, sun at $(\theta_0=60\ensuremath{^\circ},
\phi_0=0\ensuremath{^\circ})$, $\phi$=45\ensuremath{^\circ}.
Left: Stokes vector at the surface.
Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.
Arrows indicate out-of-range values.}
\label{fig:b2_results}
\end{figure}
Fig.~\ref{fig:b2_results} shows the results for the US-standard
atmosphere simulated at 325~nm, where absorption has been included.
The models IPOL, SPARTA and Pstar agree with MYSTIC within two standard
deviations. The SHDOM results show a small bias; for $I$ they are
slightly smaller than the MYSTIC results. The 3DMCPOL results differ by
more than 1\%.
In Tab.~\ref{tab:b_results} we see that the relative root mean square
deviations between MYSTIC and IPOL,
SPARTA, SHDOM, and Pstar are, as for test case B1,
mostly smaller than 0.05\%. For 3DMCPOL, $\Delta_m$ is
in the range of 0.5\%--1.5\% for all Stokes components.
\subsubsection{B3 -- Aerosol profile and standard atmosphere}
\label{sec:b3_results}
\begin{figure}[t]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseB3_depol3_altitude0_45.pdf}
\caption{Test case B3, US-standard atmosphere, 350~nm, aerosol profile,
spheroidal particles, sun at $(\theta_0=30\ensuremath{^\circ},
\phi_0=0\ensuremath{^\circ})$, $\phi$=45\ensuremath{^\circ}. Left: Stokes vector at the
surface. Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.
Arrows indicate out-of-range values.}
\label{fig:b3_results}
\end{figure}
For the standard atmosphere including a realistic aerosol profile,
all models again agree very well. Fig.~\ref{fig:b3_results} shows that
deviations between MYSTIC and
IPOL, SPARTA and Pstar are again within the expected
uncertainty for most angles. There are some tiny deviations for
SHDOM. 3DMCPOL again shows small systematic deviations, especially for $Q$
and $U$.
The relative root mean square deviations from MYSTIC
(Tab.~\ref{tab:b_results}) are smaller than
0.07\% for $I$, $Q$ and $U$ for the models IPOL and SPARTA. For Pstar and
SHDOM, the differences are slightly larger (up to 0.4\%).
$\Delta_m$ is about 1.5\% for $V$ for IPOL and
Pstar. This larger deviation is due to an increased relative standard
deviation of MYSTIC, because $V$ is four orders of magnitude smaller
than $Q$ and $U$.
For 3DMCPOL, $\Delta_m\sim$0.04\% for $I$, of the order of 1\% for
$Q$ and $U$ and 4\% for $V$.
\subsubsection{B4 -- Cloud above ocean surface}
\label{sec:b4_results}
\begin{figure}[t]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseB4_depol3_altitude0_0.pdf}
\caption{Test case B4, US-standard atmosphere, 800~nm, ocean
surface, cloud layer with optical thickness of 5,
sun at $(\theta_0=60\ensuremath{^\circ},
\phi_0=0\ensuremath{^\circ})$, $\phi$=0\ensuremath{^\circ}.
Left: Stokes vector at the surface.
Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.
Arrows indicate out-of-range values.}
\label{fig:b4_results1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseB4_depol3_altitude1_135_down-looking.pdf}
\caption{Test case B4, settings as Fig.~\ref{fig:b4_results1} but for $\phi$=135\ensuremath{^\circ}.
Left: Stokes vector at 1~km altitude for downward viewing directions.
Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.
Arrows indicate out-of-range values.}
\label{fig:b4_results2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.\hsize]{./IPRT_caseB4_depol3_altitude30_135.pdf}
\caption{Test case B4, settings as Fig.~\ref{fig:b4_results2}.
Left: Stokes vector at the top of the atmosphere.
Right: Absolute difference between individual
models and MYSTIC, the grey area corresponds to 2$\sigma$.
Arrows indicate out-of-range values.}
\label{fig:b4_results3}
\end{figure}
For the most demanding case of a cloud layer above an ocean surface
embedded in a Rayleigh atmosphere we find the largest
differences between the models.
Figs.~\ref{fig:b4_results1}--\ref{fig:b4_results3} show the radiance
field and the differences between the models.
Fig.~\ref{fig:b4_results1} shows the radiance field for a viewing azimuth
angle of 0\ensuremath{^\circ}\ at the surface. Here all models
agree quite well when the forward scattering direction is excluded;
only the 3DMCPOL result for $Q$ differs from the other models.
This difference is mainly due to the error in the implementation of
surface reflection in 3DMCPOL.
The models Pstar, IPOL and SHDOM mostly
agree with MYSTIC within the 2$\sigma$ range. There is one outlier in the SHDOM results for $Q$ at
a viewing zenith angle of 0\ensuremath{^\circ}. In the exact forward direction Pstar is
more than a factor of 2 larger than the other models. This is because
Pstar uses a relatively small number of streams, i.e. 60 for this
case. The result in the exact forward direction improves with an increasing number of
streams.
The value of the forward scattering peak agrees for MYSTIC, IPOL and
SPARTA.
Fig.~\ref{fig:b4_results2} shows the radiance field at a viewing azimuth
angle of 135\ensuremath{^\circ}\ for down-looking directions at 1~km altitude, below the cloud layer.
We find that the models MYSTIC, SHDOM and Pstar agree for $I$, $Q$ and $V$.
For $U$ SHDOM and MYSTIC are negative whereas Pstar is
slightly positive.
Fig.~\ref{fig:b4_results3} shows the radiance field at a viewing azimuth
angle of 135\ensuremath{^\circ}\ for down-looking directions at the top of the
atmosphere, where we see part of the cloudbow.
IPOL, SHDOM and Pstar mostly agree with MYSTIC within the 2$\sigma$
range. SHDOM again shows an outlier at 180\ensuremath{^\circ}\ viewing zenith angle.
The SPARTA results are noisier than MYSTIC, but there are no
obvious systematic differences, and within the
uncertainty the models agree. For 3DMCPOL the differences are somewhat
larger and systematic; e.g.\ for $I$ the 3DMCPOL results are
systematically larger than the other model results.
Tab.~\ref{tab:b_results} shows that the relative root mean square
difference between MYSTIC and all other models
is very small for $I$, the largest difference is 0.7\%.
For $Q$ and $U$, $\Delta_m$ is
smaller than 0.6\% for IPOL, 1--4\% for SHDOM, Pstar and SPARTA,
and 5--27\% for 3DMCPOL. For $V$, $\Delta_m$ is about 20\%
for IPOL, SHDOM and Pstar; this large value is due to the noisy MYSTIC
result. Since 3DMCPOL and SPARTA are noisier than MYSTIC, the values
are even larger.
\section{Conclusion and Outlook}
Overall, we found very good agreement between all models for
the test cases of this intercomparison project.
The achieved level of agreement is very high: for cases without
clouds the relative root mean square difference is mostly below 0.05\%
for total intensity and linear polarization.
However, some
significant deviations were found: for non-zero depolarization
factors, for ocean reflection, and for simulations including cloud
droplets. For these
settings some of the models need to be corrected or improved.
For all single-layer calculations we found agreement among
the models MYSTIC, IPOL, SPARTA and Pstar.
SHDOM also agrees for all cases and almost all viewing directions, but
for the cloud layer cases it shows artefacts at a few specific viewing
directions.
3DMCPOL agrees well for Rayleigh scattering with depolarization factor
set to 0, for the Lambertian surface, and for aerosol and cloud
cases. There are small differences when the depolarization
factor is non-zero and also for the case with the ocean reflectance
matrix; for these two cases we may conclude that 3DMCPOL is not
consistent with the other models and should be corrected.
For the multi-layer cases the radiance fields at the surface and at
the top of the atmosphere were provided by all participants.
We again find perfect agreement between MYSTIC, IPOL, SPARTA and,
with a few exceptions, SHDOM, which shows artefacts at a few specific
angles for the cloud case,
in particular at viewing angles of 0\ensuremath{^\circ}\ and 180\ensuremath{^\circ}.
Pstar agrees for all cases except the last one with ocean reflectance
matrix and cloud layer, where we find small deviations at specific
geometries for the $U$
component of the Stokes vector.
3DMCPOL shows small differences for all cases due to the
non-zero depolarization factor of 0.03 which was used for all
multi-layer cases. Larger differences appear when absorption is taken
into account; here the 3DMCPOL model should be improved.
Also for the cloud layer above the ocean surface we find larger
differences, as expected, because we have already seen these differences
in the single-layer case with the ocean reflectance matrix.
As a benchmark we provide the MYSTIC results, which agree with IPOL and
SPARTA for the delivered cases at the surface and the top of the
atmosphere. MYSTIC also agrees with SHDOM for most viewing directions;
the exceptions are obviously due to artefacts in SHDOM at specific
angles. It also agrees
with Pstar for cases without the ocean reflectance matrix.
Along with the radiance data we
provide the standard deviations which are helpful when developers want to
use the benchmark data for testing their models.
The detailed setup for all cases, the benchmark results, as well as
plots showing the model results for all cases, are publicly available at
the IPRT website
(\url{http://www.meteo.physik.uni-muenchen.de/~iprt}).
The next phase of the intercomparison project will start in spring
2015. We will then focus on three-dimensional scenarios including
clouds and aerosols.
\section*{Acknowledgement}
We thank Dr. Michael Mishchenko for providing the code to calculate the ocean reflectance matrix.
Furthermore we thank Dr. Josef Gasteiger for providing optical properties of
the aspherical aerosol particles.
\bibliographystyle{elsarticle-num-names}
\section{Introduction}
With the new generation of telescopes and instruments, such as multi-object spectrographs and large field-of-view cameras, it is now possible to perform large, deep redshift surveys. Using these data, the structure and geometry of the Universe can be studied through new channels, namely geometrical tests such as the angular diameter test (the angular size--redshift relation). To perform such geometrical tests, a population of objects that can be followed through redshift space needs to be identified. The angular diameter test requires a standard rod to be defined, the angular size of which is then measured at various epochs. The objects taken to serve that purpose could be as diverse as galaxies, clusters of galaxies, or dark matter halos. In this study, the relation between the physical size of a disk and its rotation velocity is used \cite{tf77}. By virtue of this relation, selecting a population of objects with a given rotation velocity is equivalent to selecting them by their physical size \cite{marinoni}. Spectroscopically selecting the photometric standard rods also has the advantage of minimizing selection effects due to the Malmquist bias.
Most ground-based studies of galaxies at high redshift rely on the [OII]$\lambda 3727$ \AA\ line, including the VIMOS-VLT Deep Survey (VVDS)\cite{lefevre} and the DEEP2 Redshift Survey \cite{davis01}, since H$\alpha$ used locally is redshifted into the infrared at about $z>0.4$. Specifically, the VVDS redshift survey will measure [OII] line widths up to $z \sim 1.4$. In order to compare data sets of local and distant galaxies, it is therefore necessary to understand how rotation velocities extracted from different lines relate, in this case H$\alpha$ and [OII]. The results are then used to calibrate the angular diameter test that is performed on data from the VVDS.
\section{Optical Spectroscopy of Spirals at $0.155<z<0.25$}
Using the Hale 200 inch telescope during the course of three observing runs between 2003 March and 2004 February (a total of 12 nights, about 50\% of which were lost to bad weather), long-slit spectra were obtained for a sample of 32 spiral galaxies. The galaxies were chosen in the area covered by the Data Release 1 of the {\it Sloan Digital Sky Survey} \cite{stoughton}, and selected by their [OII]$\lambda 3727$ equivalent width and apparent magnitude. The galaxies were also required to be in the redshift range $0.155<z<0.25$, so that using the Double Spectrograph \cite{okegunn}, both the H$\alpha \lambda$6563\AA\ and [OII]$\lambda 3727$\AA\ (hereafter H$\alpha$ and [OII]) emission lines could be observed simultaneously.
The spectral resolution on the red side of the spectrograph is 0.65 \AA\ pixel$^{-1}$ with 24$\mu$m pixels, and the spatial scale is 0\farcs468 pixel$^{-1}$. The CCD camera had a size of 1024$\times$1024 pixels, and a 1\arcsec$\times$128\arcsec\ slit was used (except for a few galaxies where the 2\arcsec$\times$128\arcsec\ slit was used due to very poor seeing conditions). The blue camera used a 2788$\times$512 chip (15$\mu$m pixels), giving a spatial resolution of 0\farcs390 pixel$^{-1}$, as well as a 0.55 \AA\ pixel$^{-1}$ spectral resolution. The exposure time was 3600 s for most galaxies, though it was extended to 4800 s for a few sources when the flux in the [OII] line was very low.
\section{Rotation Velocities}
The rotation curves were modeled in two steps. First, a Universal Rotation Curve\cite{urc} was fitted and used to determine the velocity and radial offsets required to get the best folding of the rotation curves. Secondly, a Polyex model \cite{polyex} was applied to the folded data.
Using these fits, the rotational velocity of each galaxy was determined, both for its H$\alpha$ and [OII] lines. The velocity adopted for a given rotation curve is the value of the Polyex fit at a radius corresponding to $r_{83}$, the radius containing 83\% of the light of the galaxy. There is good agreement between the two sets of measurements, especially for $V_{rot}<$220 km s$^{-1}$ (for more details and figures, see Saintonge et al. 2006 \cite{paperI}). This is a first clue that the [OII] derived velocities can be reliably compared to H$\alpha$ velocities.
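The step of reading the adopted velocity off the Polyex fit can be sketched as follows. The functional form $V(r)=V_0\,(1-e^{-r/r_{PE}})(1+\alpha\, r/r_{PE})$ is the one commonly associated with the Polyex model; the parameter values and the $r_{83}$ radius below are illustrative assumptions, not fits to the actual sample.

```python
import numpy as np

def polyex(r, v0, r_pe, alpha):
    """Polyex rotation-curve model,
    V(r) = V0 * (1 - exp(-r/rPE)) * (1 + alpha * r/rPE),
    in the form commonly quoted for folded rotation curves."""
    r = np.asarray(r, dtype=float)
    return v0 * (1.0 - np.exp(-r / r_pe)) * (1.0 + alpha * r / r_pe)

# Parameters as they would come out of a fit to a folded rotation curve
# (all values illustrative): amplitude in km/s, scale radius in kpc.
v0, r_pe, alpha = 200.0, 2.0, 0.01

# Adopted rotation velocity: the fitted model evaluated at r83, the
# radius containing 83% of the galaxy's light (value assumed here).
r83 = 10.0  # kpc
v_rot = float(polyex(r83, v0, r_pe, alpha))
```

Evaluating the smooth model at $r_{83}$, rather than taking the outermost data point, makes the adopted velocity insensitive to noise in the outer rotation curve.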
Since rotation curves cannot be traced out for the high redshift galaxies of the VVDS sample, a method based on velocity histograms is used. We apply the technique to the low redshift sample, for which rotation curves are also available, to establish its reliability. The velocity histograms are built by collapsing the two-dimensional images along the spatial direction in order to form one-dimensional spectra. A Gaussian was fitted to each of the H$\alpha$ emission lines, and the full width at half maximum (FWHM) of that Gaussian was converted into the rotation velocity of the galaxy. For the [OII] line, which is in fact a doublet, a two-Gaussian model is fitted. There is a good correlation between the techniques, such that the velocities derived from the H$\alpha$ rotation curves, which are the most accurate for our low redshift data, can be used to select objects for the angular diameter test in a way that is consistent with the selection of objects from the VVDS survey.
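The FWHM-to-velocity conversion underlying the histogram method can be sketched as below. The fitted line width, the observed wavelength, and the subsequent step from a velocity width to a rotation velocity (factor of 2, inclination, instrumental broadening) are placeholders, since the paper's exact calibration is not reproduced here.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def fwhm_velocity_width(sigma_aa, lam_obs_aa):
    """Full velocity width (km/s) from the Gaussian sigma of an emission
    line (Angstrom), using FWHM = 2 sqrt(2 ln 2) sigma and the
    small-shift Doppler relation dv = c * dlambda / lambda."""
    fwhm_aa = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_aa
    return C_KMS * fwhm_aa / lam_obs_aa

# H-alpha (6563 A rest frame) observed at z = 0.2; the fitted sigma of
# 4 A and the redshift are illustrative placeholders.
width = fwhm_velocity_width(sigma_aa=4.0, lam_obs_aa=6563.0 * 1.2)
```

For the [OII] doublet the same conversion applies to each component of the two-Gaussian fit, with the doublet separation held fixed by the fit.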
\section{The Angular Diameter Test}
The angular diameter test can discriminate between cosmological models by tracing the apparent angular diameter of galaxies through redshift space. In this case, the velocity-diameter relation is used to select standard rods, for which the effect of galaxy evolution needs to be untangled from that of geometric evolution. It is possible to infer cosmological information knowing a priori only an upper limit on disc evolution at the maximum redshift of the data set, $z_{max}$, no matter what the specific evolutionary scenario is (see Marinoni et al. 2006 for details). One can even construct a self-consistent cosmology-evolution plane in which any chosen evolutionary upper limit on diameters at $z_{max}$ corresponds to a specific region of the cosmological parameter space. Vice versa, given a cosmology, one may directly determine the evolution in magnitude and size of the selected sample of galaxies.
Therefore, this diagram may be used to detect, in a fully geometric way, the presence of dark energy in the universe, in a manner complementary to the use of supernovae. In particular, it is found that if discs were less than 30\% smaller at $z=1.5$ than at the present epoch, then an Einstein-de Sitter critical universe ($\Omega_m=1$) may be geometrically discriminated from a $\Lambda$CDM cosmology.
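The geometric leverage of the test can be quantified with a minimal angular-diameter-distance calculation for flat models (the value of $H_0$ and the trapezoidal integration below are illustrative choices, not taken from the paper):

```python
import numpy as np

def d_A(z, om, ol, h0=70.0, n=2001):
    """Angular diameter distance (Mpc) in a flat FRW model, via trapezoidal
    integration of 1/E(z); curvature is ignored (om + ol = 1 assumed)."""
    c = 299792.458                                       # speed of light, km/s
    zz = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(om * (1.0 + zz) ** 3 + ol)
    h = zz[1] - zz[0]
    dc = c / h0 * h * (f.sum() - 0.5 * (f[0] + f[-1]))   # comoving distance
    return dc / (1.0 + z)

z = 1.5
da_eds = d_A(z, 1.0, 0.0)    # Einstein-de Sitter
da_lcdm = d_A(z, 0.3, 0.7)   # concordance LambdaCDM
ratio = da_lcdm / da_eds     # >1: a standard rod subtends a smaller angle in LCDM
```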
\section{Introduction}
In this paper, we consider a nonlinear volume-surface reaction-diffusion system, which couples a non-negative volume-concentration $u(x,t)$ diffusing on a bounded domain $\Omega \subset \mathbb R^N$ ($N\geq 1$) with a non-negative surface-concentration $v(x,t)$ diffusing on the sufficiently smooth boundary $\Gamma:= \partial\Omega$ of $\Omega$ (e.g. $\partial\Omega\in C^{2+\epsilon}$ for some $\epsilon>0$).
The interface conditions connecting these two concentrations are a nonlinear Robin-type boundary condition for the volume-concentration $u(x,t)$ and a matching reversible reaction source term in the equation for the surface-concentration $v(x,t)$:
\begin{equation}
\begin{cases}
u_t - \delta_{u}\Delta u = 0, &x\in\Omega, t>0,\\
\delta_u\frac{\partial u}{\partial \nu} = -\alpha(k_u u^{\alpha} - k_v v^{\beta}),&x\in\Gamma, t>0,\\
v_t - \delta_{v}\Delta_{\Gamma}v= \beta(k_u u^{\alpha} - k_v v^{\beta}), &x\in\Gamma, t>0,\\
u(0,x) = u_0(x)\ge0, &x\in\Omega,\\
v(0,x) = v_0(x)\ge0, &x\in\Gamma.
\end{cases}
\label{e1}
\end{equation}
Here, we denote by $\Delta$ the Laplace operator on $\Omega$ with a positive diffusion coefficient $\delta_u>0$ and by $\Delta_{\Gamma}$ the Laplace-Beltrami operator on $\Gamma$ (see e.g. \cite{GT}) with a non-negative diffusion coefficient $\delta_v \geq 0$, while $\nu(x)$ denotes the unit outward normal vector of $\Gamma$ at the point $x$. Moreover, we shall consider non-negative initial concentrations $u_0(x)\geq 0$ on $\Omega$ and $v_0(x)\geq 0$ on $\Gamma$.
The stoichiometric coefficients $\alpha, \beta\in [1,\infty)$ together with the positive, bounded reaction rates $k_u(t,x), k_v(t,x) \in L_{+}^{\infty}([0,\infty)\times \Gamma)$ characterise the key feature of the model system \eqref{e1}, which is the nonlinear reversible reaction between the volume density $u(t,x)$ and
the surface density $v(t,x)$ located at the boundary $\Gamma$.
We emphasise that the reversible reaction between volume and boundary in system \eqref{e1} {\it preserves the total initial mass} $M$, which shall be assumed positive in the following:
\begin{align}
\label{cons}
M&=\beta\int_{\Omega}u(t,x)\,dx + \alpha\int_{\Gamma}v(t,x)\,dS,\qquad \forall t\ge 0\\
&=\beta\int_{\Omega}u_0(x)\,dx + \alpha\int_{\Gamma}v_0(x)\,dS>0.\nonumber
\end{align}
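As a quick numerical illustration (not part of the analysis that follows), the conservation law \eqref{cons} can be checked in the spatially homogeneous reduction of \eqref{e1}, where $u$ is constant on $\Omega$ and $v$ is constant on $\Gamma$, so that the system collapses to two ODEs; all parameter values in the sketch are invented:

```python
# Spatially homogeneous reduction of (e1): u constant on Omega, v constant on
# Gamma, so the PDE system collapses to two ODEs. All values are invented.
alpha, beta = 2.0, 3.0
ku, kv = 1.0, 0.5
vol_omega, area_gamma = 2.0, 1.5   # stand-ins for |Omega| and |Gamma|

def rhs(u, v):
    R = ku * u**alpha - kv * v**beta              # reversible reaction term
    return -alpha * area_gamma * R / vol_omega, beta * R

u, v = 1.0, 0.2
M0 = beta * vol_omega * u + alpha * area_gamma * v   # total mass, cf. (cons)
dt, steps = 1e-4, 20000
for _ in range(steps):                               # forward Euler in time
    du, dv = rhs(u, v)
    u, v = u + dt * du, v + dt * dv

M = beta * vol_omega * u + alpha * area_gamma * v    # mass after integration
```

Since the conserved quantity is linear in $(u,v)$, even the explicit Euler scheme preserves it up to round-off.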
The study of system \eqref{e1} is motivated by models of \emph{asymmetric stem cell division}. In stem cells undergoing asymmetric cell division, particular proteins (so-called cell-fate determinants) are localised in only one of the two daughter cells during mitosis. These cell-fate determinants subsequently trigger the differentiation of one daughter cell into specific tissue, while the other daughter cell remains a stem cell.
In Drosophila, SOP stem cells provide a well-studied biological example of asymmetric stem cell division,
see e.g. \cite{BMK,MEBWK,WNK} and the references therein. The mechanism of asymmetric cell division in SOP stem cells operates around a key protein called Lgl (Lethal giant larvae), which exists in two conformational states: a non-phosphorylated form which regulates the localisation of the cell-fate-determinants in the membrane of one daughter cell, and a phosphorylated form which is inactive.
First mathematical models describing the evolution and localisation of phosphorylated and non-phosphorylated Lgl in SOP stem cells were presented in \cite{BFR, Ros} under the assumption of linear phosphorylation and de-phosphorylation kinetics. However, it is known that Lgl offers three phosphorylation sites \cite{BMK}. Thus, if more than one site needs to be phosphorylated in order to effectively deactivate Lgl, a realistic model should rather consider nonlinear kinetics.
\medskip
The system \eqref{e1} formulates a nonlinear mathematical core model, which strongly simplifies the biological model
for SOP stem cells by focussing only on the concentration $u(x,t)$ of the phosphorylated Lgl in the cytoplasm (i.e. in the cell volume) and the concentration $v(x,t)$
of non-phosphorylated Lgl at the cortex/membrane of the cell.
The exchange of phosphorylated Lgl $u(x,t)$ and non-phosphorylated Lgl $v(x,t)$ is described by the above nonlinear reaction located at the boundary. The considered evolution process conserves the total mass of Lgl as quantified in the conservation law \eqref{cons}.
Models related to \eqref{e1} have recently gained rapidly increasing attention
as they occur naturally in many areas of cell-biology and also fluid-dynamics, see e.g. \cite{AJLRRW,ElRa,FrNeRa,KD,MS,NGCRSS} and references therein.
\medskip
The first aim of this paper is to prove the global existence of a unique weak solution to the model system \eqref{e1} (see Theorem \ref{theo:ExistenceAndUniqueness} below). The main difficulties arise from the arbitrary power-law nonlinearities located at the boundary $\Gamma$ and shall be overcome by applying an iteration method of converging upper and lower solutions, in the spirit of e.g. \cite{Pao}. This method is based on proving a comparison principle for upper and lower solutions (see e.g. \cite{Souplet}), which, to our knowledge, has not previously been established for volume-surface reaction-diffusion systems. Once the comparison principle is shown, the existence of weak solutions to \eqref{e1} follows from an iteration argument, which uses that the involved nonlinearities are quasi-monotone non-decreasing. The existence of solutions to related linear models was proven in \cite{ElRa,FrNeRa} by fixed-point methods. Our approach has the advantage of providing intrinsic a-priori bounds, which allows us to obtain global solutions of the superlinear problem \eqref{e1}.
\medskip
The second part of the manuscript proves \emph{explicit exponential convergence to equilibrium} for the system \eqref{e1} via the so-called \emph{entropy method}.
The basic idea of the entropy method consists of studying the large-time asymptotics of a dissipative PDE model by looking for a nonnegative Lyapunov
functional $E(f)$ and its nonnegative dissipation
$$
D(f)=-\frac{d}{dt} E(f(t))
$$
along the flow of the PDE model, which is
well-behaved in the following sense: firstly, all states with $D(f)=0$, which also satisfy all the involved conservation laws, identify a unique entropy-minimising equilibrium $f_{\infty}$, i.e.
$$
D(f) = 0\quad \text{and \quad conservation laws} \iff f=f_{\infty},
$$
and secondly, there exists an \emph{entropy entropy-dissipation
estimate} of the form
$$
D(f) \ge \Phi(E(f)-E(f_{\infty})), \qquad \Phi(x)\ge0, \qquad \Phi(x) = 0 \iff x=0,
$$
for some nonnegative function $\Phi$.
Generally, such an inequality can only hold when all the conserved quantities are taken into account.
If $\Phi'(0) \neq 0$, one usually gets exponential convergence toward
$f_{\infty}$ in relative entropy $E(f)-E(f_{\infty})$ with a rate, which can be explicitly estimated.
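For \eqref{e1} with constant rates $k_u$, $k_v$, the equilibrium identified by $D(f)=0$ together with the conservation law is expected to be a pair of spatially constant states balancing the reaction, $k_u u_{\infty}^{\alpha} = k_v v_{\infty}^{\beta}$, with the constants fixed by the conserved mass. Under that assumption, it can be computed numerically by bisection; all parameter values below are invented:

```python
# Equilibrium of (e1) for constant rates: spatially constant (u_inf, v_inf)
# with ku*u_inf**alpha == kv*v_inf**beta, fixed by the conserved mass M.
# All parameter values are invented.
alpha, beta = 2.0, 3.0
ku, kv = 1.0, 0.5
vol_omega, area_gamma, M = 2.0, 1.5, 6.6

def mass_of(v):
    u = (kv * v**beta / ku) ** (1.0 / alpha)   # u from the balance relation
    return beta * vol_omega * u + alpha * area_gamma * v

lo_v, hi_v = 0.0, M / (alpha * area_gamma)     # mass_of is increasing in v
for _ in range(100):                           # bisection on the mass constraint
    mid = 0.5 * (lo_v + hi_v)
    if mass_of(mid) < M:
        lo_v = mid
    else:
        hi_v = mid
v_inf = 0.5 * (lo_v + hi_v)
u_inf = (kv * v_inf**beta / ku) ** (1.0 / alpha)
```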
\medskip
The entropy method is a fully nonlinear alternative to arguments based on linearisation around the equilibrium and has the advantage of being quite robust with respect to variations and generalisations of the model system.
This is due to the fact that the entropy method
relies mainly on functional inequalities which have no direct link with the original PDE model.
Generalised models typically feature related entropy and entropy-dissipation functionals and previously established entropy entropy-dissipation estimates may very usefully be re-applied.
The entropy method has previously been used for scalar equations: nonlinear
diffusion equations (such as fast diffusions \cite{CV,dolbeault}, Landau equation \cite{DVlandauII}),
integral equations (such as the spatially homogeneous
Boltzmann equation \cite{toscani_villani1, toscani_villani2, villani_cerc}),
kinetic equations (see e.g. \cite{DVinhom1,DVinhom2, fell}), or coagulation-fragmentation equations (see e.g. \cite{CDF08,CDF08a}).
For certain systems of drift-diffusion-reaction equations in semiconductor physics, an entropy entropy-dissipation estimate has been shown indirectly via a compact\-ness-based contradiction argument in \cite{GGH96,GH97,Gro92}.
\medskip
A first proof of entropy entropy-dissipation estimates for systems with explicit rates and constants was established in \cite{DeFe06, DeFe07, DeFe08} in the case of reversible reaction-diffusion equations. Recently, a new idea of proving entropy entropy-dissipation estimates based on a convexification argument was presented in \cite{MHM}.
\medskip
In this paper, we shall prove a new entropy entropy-dissipation estimate for the model system \eqref{e1}, which entails exponential convergence to equilibrium with explicitly computable constants and rates (see Theorem \ref{theo:Convergence} below).
We point out two novelties: i) this is, to our knowledge, the first entropy entropy-dissipation estimate for a mixed volume-surface reaction-diffusion system, and ii) we introduce a new idea in the proof of entropy entropy-dissipation estimates for systems with general, superlinear, power-law nonlinearities, which we expect to be very useful when proving entropy entropy-dissipation estimates in more general settings.
Moreover, we remark that, although the existence of weak solutions is obtained for general rates, which can depend on time and space, we restrict for the sake of clarity the proof of explicit exponential convergence to equilibrium to the case of constant rates $k_u$ and $k_v$. The case of non-constant (in space and/or e.g. periodic in time) reactions rates leads to non-constant equilibria and requires a more involved formalism which shall be treated in future works.
\medskip
We emphasise that we distinguish two cases in the equilibration analysis of \eqref{e1}: the non-degenerate diffusion case $\delta_v >0$ and the degenerate diffusion case $\delta_v = 0$. If $\delta_v >0$, then the surface diffusion term $-\delta_v\Delta_{\Gamma}v$ enables us to obtain an entropy entropy-dissipation estimate by only using the natural a-priori estimates derived from mass conservation, entropy and entropy dissipation.
In the case of degenerate boundary diffusion $\delta_v = 0$, we derive an entropy entropy-dissipation estimate by using $L^{\infty}$ a-priori bounds of the solution. While such $L^{\infty}$-bounds can be shown to hold for the model \eqref{e1}, they are often out of reach for more general systems with more concentrations in higher space dimensions, see e.g. \cite{CDF13}.
However, we conjecture that in some (yet not all) cases of stoichiometric coefficients $\alpha, \beta$, the use of $L^{\infty}$-bounds should not be essential for the proof and could be avoided by more careful estimates. An example of such an estimate is presented in Proposition \ref{re:linear} when $\alpha = \beta = 1$.
\medskip
For future work, we hope that the robustness of the entropy method will enable us to study the
large time behaviour of more complicated and realistic models of asymmetric cell division by reusing
the entropy entropy-dissipation estimate derived in Lemma \ref{lem:E-EDEstimate} for the non-degenerate case $\delta_v >0$ and Lemma \ref{lem:degenerate_estimate} for the degenerate case $\delta_v =0$.
Thus, the considered mathematical core problem \eqref{e1} is also motivated by the goal of deriving
core entropy entropy-dissipation estimates, which encompass the nonlinear boundary dynamics featured by the system \eqref{e1}.
\medskip
The rest of the paper is organised as follows. In Section 2, we prove the global existence of a unique weak solution for system \eqref{e1}. Section 3 is devoted to the entropy method, establishing entropy entropy-dissipation estimates and proving explicit exponential convergence to equilibrium.
\section{Existence of a global solution}
In this section, we will prove global existence of a unique weak solution to system \eqref{e1} by the method of converging upper and lower solutions. We define first our notion of weak solutions:
\begin{definition}\label{def:weaksolution_1}
A pair of functions $(u,v)$ is called a {\it weak solution} to system \eqref{e1} on $(0,T)$ if
\begin{equation}
u\in C([0,T];L^2(\Omega)), \ \ \text{ and }\ \ u\in L^{\infty}(0,T;L^{\infty}(\Omega))\cap L^{2}(0,T;H^1(\Omega)),
\label{regular_u}
\end{equation}
\begin{equation}
v\in C([0,T]; L^2(\Gamma)),\ \ \text{ and }\ \ v\in L^{\infty}(0,T;L^{\infty}(\Gamma))\cap L^2(0,T;H^1(\Gamma)),
\label{regular_v}
\end{equation}
and the following weak formulation
holds for all test functions $\varphi\in C^1([0,T];L^2(\Omega))\cap L^2(0,T;H^1(\Omega))$ and $\psi\in C^1([0,T];L^2(\Gamma))\cap L^2(0,T;H^1(\Gamma))$ with $\varphi \geq 0$, $\psi\geq 0$ and $\varphi(T) = \psi(T) = 0$:
\begin{equation}\label{weakformulation}
\begin{cases}
\int_0^T\!\!\int_{\Omega}[-u\varphi_t + \delta_u\nabla u\nabla \varphi]dxdt
= \int_{\Omega}u_0\varphi(0)dx-\alpha\int_0^T\!\int_{\Gamma}(k_uu^{\alpha} - k_vv^{\beta})\varphi dSdt,\\[2mm]
\int_0^T\!\int_{\Gamma}[-v\psi_t + \delta_v\nabla_{\Gamma}v\nabla_{\Gamma}\psi]dSdt
= \int_{\Gamma}v_0\psi(0)dS + \beta\int_0^T\!\int_{\Gamma}(k_uu^{\alpha} - k_vv^{\beta})\psi dSdt,\end{cases}
\end{equation}
in which $\nabla_{\Gamma}$ is the tangential gradient on $\Gamma$.
\end{definition}
\begin{remark}\label{remark:Definition}
With the regularity of $u$ and $v$ as stated in \eqref{regular_u} and \eqref{regular_v}, all left hand terms in \eqref{weakformulation} are clearly well defined. For the nonlinear reaction terms $\int_{\Gamma}k_uu^{\alpha}\varphi dS$ on the right hand side of \eqref{weakformulation}, we proceed as follows: First, if $u\in H^1(\Omega)\cap L^{\infty}(\Omega)$, we have
\begin{equation*}
\begin{aligned}
\int_{\Gamma}|u|^{2\alpha}dS &= \|u^{\alpha}\|_{L^2(\Gamma)}^2\\
&\leq C(\|\nabla(u^\alpha)\|_{L^2(\Omega)}^2 + \|u^{\alpha}\|_{L^2(\Omega)}^2)\qquad (\text{by using the Trace Theorem})\\
&\leq C(\alpha^2\|u\|_{L^{\infty}(\Omega)}^{2\alpha-2}\|\nabla u\|_{L^2(\Omega)}^2 + |\Omega|\|u\|_{L^{\infty}(\Omega)}^{2\alpha}).
\end{aligned}
\end{equation*}
Hence, $u^{\alpha}|_{\Gamma}\in L^2(\Gamma)$. Therefore, with the help of the estimate
\begin{equation*}
\int_{\Gamma}k_u(t,x)u^{\alpha}\varphi dS\leq \|k_{u}\|_{\infty}\|u^{\alpha}\|_{L^{2}(\Gamma)}\|\varphi\|_{L^2(\Gamma)},
\end{equation*}
the weak formulation in Definition \ref{def:weaksolution_1} makes sense.
\end{remark}
\begin{definition}\label{aecomparision}
We shall use the following short notation
$$
(u_1, v_1)\geq (u_2, v_2)
$$
for two pairs of functions $(u_1, v_1)$ and $(u_2, v_2)$, where $u_i(t,x): I\times \Omega \rightarrow \mathbb R$ and $v_i(t,x): I\times \Gamma\rightarrow \mathbb R$ for $i = 1, 2$ and an interval $I\subset \mathbb R$, to mean that
\begin{align*}
u_1(t,x)\geq u_2(t,x)\qquad \text{for a.e.}\quad (t,x)\in I\times \Omega, \\
v_1(t,x)\geq v_2(t,x)\qquad \text{for a.e.}\quad(t,x)\in I\times \Gamma.
\end{align*}
\end{definition}
\medskip
Next, we define the notation
\begin{equation*}
F(t,x,u,v):= -\alpha\bigl(k_u(t,x)u^{\alpha} - k_v(t,x)v^{\beta}\bigr), \qquad (t,x)\in[0,\infty)\times\Gamma,
\end{equation*}
and
\begin{equation*}
G(t,x,u,v) := \beta\bigl(k_u(t,x)u^{\alpha} - k_v(t,x)v^{\beta}\bigr),\qquad (t,x)\in[0,\infty)\times\Gamma.
\end{equation*}
\begin{lemma}\label{Nonlinearities}
We have the following properties for the nonlinearities $F$ and $G$:
\begin{itemize}
\item[i)] $F(t,x,u,\cdot)$, $G(t,x,\cdot,v)$ are non-decreasing for all $(t,x)\in[0,\infty)\times\Gamma$,
\item[ii)]$F(t,x,\cdot,v)$, $G(t,x,u,\cdot)$ are non-increasing for all $(t,x)\in[0,\infty)\times\Gamma$,
\item[iii)] $F(t,x,\cdot,\cdot)$ and $G(t,x,\cdot,\cdot)$ are locally Lipschitz. In particular, given a pair of non-negative functions $(\overline{u}, \overline{v}) \ge (0,0)$,
there exist two non-negative bounded functions $L_u(t, x)$, $L_v(t, x)\in L^{\infty}([0,\infty)\times\Gamma)$ such that, for all $(\overline{u},\overline{v})\geq (u_1,v_1),(u_2,v_2)\geq (0,0)$, the following inequalities hold pointwise in $(t,x)\in[0,\infty)\times\Gamma$:
\begin{align}
F(t, x, u_1, v_1) - F(t, x, u_2, v_2) &\leq \alpha L_u(t,x)(u_2 - u_1)_{+}+\alpha L_v(t,x)(v_1 - v_2)_{+},
\label{FLipschitzupper} \\
F(t, x, u_1, v_1) - F(t, x, u_2, v_2) &\geq -\alpha L_u(t,x)(u_1 - u_2)_{+}-\alpha L_v(t,x)(v_2 - v_1)_{+},
\label{FLipschitzlower}
\end{align}
and
\begin{align}
G(t,x,u_1,v_1) - G(t,x,u_2,v_2) &\leq \beta L_u(t,x)(u_1 - u_2)_{+}+\beta L_v(t,x)(v_2 - v_1)_{+},\label{GLipschitzupper}\\
G(t,x,u_1,v_1) - G(t,x,u_2,v_2) &\geq -\beta L_u(t,x)(u_2 - u_1)_{+}-\beta L_v(t,x)(v_1 - v_2)_{+},\label{GLipschitzlower}
\end{align}
where $(\cdot)_{+}$ denotes the positive part, that is, $(w)_{+} = w$ if $w\geq 0$ and $(w)_{+} = 0$ otherwise.
\end{itemize}
\end{lemma}
\begin{proof}
The proof of (i) and (ii) follows trivially from the positivity of the reaction rates $k_u(t,x)$, $k_v(t,x)\in L_{+}^{\infty}([0,\infty)\times\Gamma)$. To prove (iii), we apply (after suppressing the pointwise dependency on $t$ and $x$) the mean-value theorem
\begin{equation*}
\begin{aligned}
F(u_1,v_1) - F(u_2,v_2)
&= -\alpha k_u (u_1^{\alpha} - u_2^{\alpha}) + \alpha k_v(v_1^{\beta} - v_2^{\beta}) \\
&= -\alpha^2 k_u (\theta_{u})^{\alpha - 1}(u_1-u_2)
+\alpha\beta k_v(\theta_{v})^{\beta - 1}(v_1-v_2),
\end{aligned}
\end{equation*}
with $\theta_{u}(t,x)=\theta u_1 +(1-\theta)u_2$ and $\theta_{v}=\theta v_1 +(1-\theta)v_2$ for some $\theta(t,x)\in(0,1)$ (in general different in each term) pointwise for all $(t,x)\in[0,\infty)\times\Gamma$.
Thus \eqref{FLipschitzupper} and \eqref{FLipschitzlower} follow with $L_u(t,x) = \alpha k_u(t,x)\overline{u}^{\alpha-1}$ and $L_v(t,x) = \beta k_v(t,x)\overline{v}^{\beta-1}$, since $0\leq\theta_{u}\leq\overline{u}$ and $0\leq\theta_{v}\leq\overline{v}$.
The proof of \eqref{GLipschitzupper} and \eqref{GLipschitzlower} follows analogously.
\end{proof}
In the following, we will prove the existence of a unique weak solution to the system \eqref{e1} by the method of converging upper and lower solutions, see e.g. \cite{Pao}. In order to apply this method, we need to prove the comparison principle for system \eqref{e1} for pairs of upper and lower solutions. Before we derive such a comparison principle, we recall the following Trace inequality:
\begin{lemma}\label{lem:TraceInequality}\cite[Theorem 1.5.1.10]{Grisvard}
For any $\varepsilon>0$, there exists $C_\varepsilon >0$ such that
\begin{equation*}
\int_{\Gamma}|u|^2dS \leq \varepsilon\int_{\Omega}|\nabla u|^2dx + C_{\varepsilon}\int_{\Omega}|u|^2dx
\end{equation*}
for all $u\in H^1(\Omega)$.
\end{lemma}
\begin{definition}\label{def:UpperLowerSolution}
A pair $(\overline{u}, \overline{v})$ is called an upper solution to the problem \eqref{e1} if $(\overline{u}, \overline{v})$ satisfies the regularity \eqref{regular_u} and \eqref{regular_v} and if, for all test functions $\varphi\in C^1(0,T;L^2(\Omega))\cap L^2(0,T;H^1(\Omega))$, $\psi\in C^1(0,T;L^2(\Gamma))\cap L^2(0,T;H^1(\Gamma))$ with $\varphi, \psi\geq 0$ and $\varphi(T) = \psi(T) = 0$, we have
\begin{equation}
\begin{cases}
\int_0^T\!\!\int_{\Omega}[-\overline{u}\varphi_t + \delta_u\nabla\overline{u}\nabla\varphi]dxdt - \int_0^T\!\int_{\Gamma} F(t,x,\overline{u},\overline{v})\varphi dSdt \geq \int_{\Omega}\overline{u}(0)\varphi(0) dx,\\[1mm]
\int_0^T\!\int_{\Gamma}[-\overline{v}\psi_t + \delta_v\nabla_{\Gamma}\overline{v}\nabla_{\Gamma}\psi]dSdt - \int_0^T\!\int_{\Gamma} G(t,x,\overline{u},\overline{v})\psi dSdt \geq \int_{\Gamma}\overline{v}(0)\psi(0) dS,\\[1mm]
\overline{u}(0,x)\geq u_0(x) \text{ a.e. } x\in\Omega,\\[1mm]
\overline{v}(0,x)\geq v_0(x) \text{ a.e. } x\in\Gamma.
\end{cases}
\label{a3}
\end{equation}
Similarly for a lower solution $(\underline{u}, \underline{v})$:
\begin{equation}
\begin{cases}
\int_0^T\!\!\int_{\Omega}[-\underline{u}\varphi_t + \delta_u\nabla\underline{u}\nabla\varphi]dxdt - \int_0^T\!\int_{\Gamma} F(t,x,\underline{u},\underline{v})\varphi dSdt \leq \int_{\Omega}\underline{u}(0)\varphi(0) dx,\\[1mm]
\int_0^T\!\int_{\Gamma}[-\underline{v}\psi_t + \delta_v\nabla_{\Gamma}\underline{v}\nabla_{\Gamma}\psi]dSdt - \int_0^T\!\int_{\Gamma} G(t,x,\underline{u},\underline{v})\psi dSdt \leq \int_{\Gamma}\underline{v}(0)\psi(0) dS,\\[1mm]
\underline{u}(0,x)\leq u_0(x) \text{ a.e. } x\in\Omega,\\[1mm]
\underline{v}(0,x)\leq v_0(x) \text{ a.e. } x\in\Gamma.
\end{cases}
\label{a4}
\end{equation}
\end{definition}
We are now going to prove a comparison principle for upper and lower solutions. The idea of the proof is motivated by \cite{Souplet}.
\begin{lemma}[Comparison Principle for Pairs of Upper and Lower Solutions]\label{lem:comparison}\hfill\\
Let $0<T<\infty$, $\underline{u}$, $\overline{u}$ satisfy \eqref{regular_u} and $\underline{v}$, $\overline{v}$ satisfy \eqref{regular_v}. Assume that for all test functions $\varphi\in C^1(0,T;L^2(\Omega))\cap L^2(0,T;H^1(\Omega))$, $\psi\in C^1(0,T;L^2(\Gamma))\cap L^2(0,T;H^1(\Gamma))$ with $\varphi, \psi\geq 0$ and $\varphi(T) = \psi(T) = 0$, we have
\begin{equation}\label{e2}
\begin{cases}
\int_0^T\!\!\int_{\Omega}[-(\underline{u} - \overline{u})\varphi_t + \delta_u\nabla(\underline{u} - \overline{u})\nabla \varphi]dxdt\\
\qquad-\int_0^T\!\int_{\Gamma} (F(t,x,\underline{u},\underline{v}) - F(t,x,\overline{u}, \overline{v}))\,\varphi \,dSdt \leq \int_{\Omega}(\underline{u}(0)-\overline{u}(0))\varphi(0) dx,\\[1mm]
\int_0^T\!\int_{\Gamma}[-(\underline{v} - \overline{v})\psi_t + \delta_v\nabla_{\Gamma}(\underline{v} - \overline{v})\nabla_{\Gamma}\psi]dSdt\\
\qquad- \int_0^T\!\int_{\Gamma}(G(t,x,\underline{u},\underline{v})-G(t,x,\overline{u},\overline{v}))\,\psi\, dSdt\leq \int_{\Gamma}(\underline{v}(0) - \overline{v}(0))\psi(0) dS,\\[1mm]
\underline{u}(0,x)\leq \overline{u}(0,x), \;x\in\Omega,\\[1mm]
\underline{v}(0,x)\leq \overline{v}(0,x), \;x\in\Gamma.
\end{cases}
\end{equation}
Then, $(\underline{u},\underline{v})\leq (\overline{u},\overline{v})$ in the sense of Definition \ref{aecomparision}.
\end{lemma}
\begin{proof}
We denote $w = \underline{u} - \overline{u}$ and $z = \underline{v} - \overline{v}$ and rewrite system \eqref{e2} as
\begin{equation}
\begin{cases}
\int_0^T\!\!\int_{\Omega}[-w\varphi_t+\delta_u\nabla w\nabla \varphi]dxdt \\
\qquad - \int_0^T\!\int_{\Gamma}(F(t,x,\underline{u},\underline{v}) - F(t,x,\overline{u},\overline{v}))\varphi dSdt \leq \int_{\Omega}w(0)\varphi(0) dx \leq 0,\\[1mm]
\int_0^T\!\int_{\Gamma}(-z\psi_t + \delta_v\nabla_{\Gamma}z\nabla_{\Gamma}\psi)dS\\
\qquad - \int_0^T\!\int_{\Gamma}(G(t,x,\underline{u},\underline{v}) - G(t,x,\overline{u},\overline{v}))\psi dSdt \leq \int_{\Gamma}z(0)\psi(0) dS \leq 0,\\[1mm]
w(0,x) \leq 0, \; z(0,x) \leq 0.
\end{cases}
\label{e3}
\end{equation}
Choosing (after a standard approximation argument) $\varphi$ as the positive part $w_+\in L^2((0,T);H^1(\Omega))$ in \eqref{e3}, we get for a.e. $t\in (0,T)$
\begin{equation}
\begin{aligned}
&\frac{1}{2}\frac{d}{dt}\int_{\Omega}|w_+|^2dx + \delta_u\int_{\Omega}|\nabla w_+|^2dx\leq \int_{\Gamma}(F(t,x,\underline{u},\underline{v}) - F(t,x,\overline{u},\overline{v}))w_+dS\\
&\leq \alpha \int_{\Gamma}L_u (-w)_{+}w_+dS+\alpha \|L_v\|_{\infty}\int_{\Gamma}z_+w_+dS \qquad (\text{by } \eqref{FLipschitzupper})\\
&\leq \frac{\alpha\|L_v\|_{\infty}}{2}\int_{\Gamma}|z_+|^2dS + \frac{\alpha\|L_v\|_{\infty}}{2}\int_{\Gamma}|w_+|^2dS \qquad (\text{by Young's inequality, noting } (-w)_{+}w_+ = 0 \text{ pointwise})\\
&\leq \frac{\alpha\|L_v\|_{\infty}}{2}\int_{\Gamma}|z_+|^2dS + {\delta_u}\int_{\Omega}|\nabla w_+|^2dx + C\int_{\Omega}|w_+|^2dx,
\end{aligned}
\label{e6_0}
\end{equation}
where we have applied the Trace inequality as in Lemma \ref{lem:TraceInequality} in the last step
and the constant $C=C(\alpha,\|L_v\|_{\infty},\delta_u,\Omega)$.
Hence,
\begin{equation}\label{e6}
\frac{d}{dt}\int_{\Omega}|w_+|^2dx\leq C\left(\int_{\Gamma}|z_+|^2dS + \int_{\Omega}|w_+|^2dx\right),
\end{equation}
for a constant $C$ and similarly, using \eqref{GLipschitzupper}
\begin{equation}\label{e7}
\frac{d}{dt}\int_{\Gamma}|z_+|^2dS \leq C\left(\int_{\Gamma}|z_+|^2dS + \int_{\Omega}|w_+|^2dx\right).
\end{equation}
Thus, combining \eqref{e6} and \eqref{e7} and using Gronwall's inequality
with $w_+(0) = 0$ and $z_+(0) = 0$ yields $w_+(t) = 0$ and $z_+(t) = 0$ for a.e. $t\in (0,T)$, which completes the proof.
\end{proof}
\medskip
By subtracting \eqref{a4} from \eqref{a3}, the comparison Lemma \ref{lem:comparison} yields the following.
\begin{lemma}\label{lem:compare}
If $(\overline{u}, \overline{v})$ is an upper solution and $(\underline{u}, \underline{v})$ is a lower solution to \eqref{e1}, then we have $(\overline{u},\overline{v})\geq (\underline{u},\underline{v})$ in the sense of Definition \ref{aecomparision}.
\end{lemma}
\begin{proposition}\label{UpperLower}
There exists an upper solution $(\overline{u},\overline{v})$ and a lower solution $(\underline{u},\underline{v})$ to the system \eqref{e1}.
\end{proposition}
\begin{proof}
Clearly $(\underline{u},\underline{v}) = (0,0)$ is a lower solution. To find an upper solution, we choose a function $B_0 \in L^{\infty}(\Gamma)$ which satisfies $B_0\geq v_0$ a.e. in $\Gamma$. Then we let $B(t,x)$ be the unique solution to the equation
\begin{equation*}
\begin{cases}
B_t(t,x) - \delta_v\Delta_{\Gamma}B(t,x) = 0, &0<t<T, x\in\Gamma,\\
B(0,x) = B_0(x), &x\in\Gamma.
\end{cases}
\end{equation*}
By the classical smoothing effect for this homogeneous linear heat equation, we get that $B$ is smooth in $(0,T)\times \Gamma$. Since $k_u(t,x), k_v(t,x)$ are uniformly bounded above and away from zero, we can define
\begin{equation*}
A_{bd}(t,x) = \left(\frac{k_v(t,x)}{k_u(t,x)}B^{\beta}(t,x)\right)^{1/\alpha} \text{ for } (t,x)\in (0,T)\times \Gamma.
\end{equation*}
We now choose $A_0\in L^{\infty}(\Omega)$ satisfying $A_0 \geq u_0$ a.e. in $\Omega$ and then let $A(t,x)$ be the unique solution to the following heat equation in $\Omega$ with non-homogeneous Dirichlet boundary condition:
\begin{equation*}
\begin{cases}
A_t(t,x) - \delta_u \Delta A(t,x) = 0,&0<t<T, x\in\Omega,\\
A(t,x) = A_{bd}(t,x), &0<t<T, x\in\Gamma,\\
A(0,x) = A_0(x), &x\in\Omega.
\end{cases}
\end{equation*}
It is now easy to verify that $(\overline{u},\overline{v}) = (A, B)$ is an upper solution to system \eqref{e1}.
\end{proof}
\begin{remark}\label{ConstantRates}
In the case where $k_u$ and $k_v$ are constants, we can define the upper solution by setting $\overline{u} = A$ and $\overline{v} = B$ where $A$ and $B$ are two positive constants satisfying
\begin{equation*}
A\geq \|u_0\|_{L^{\infty}(\Omega)},\ B \geq \|v_0\|_{L^{\infty}(\Gamma)} \text{ and } k_uA^{\alpha} = k_vB^{\beta}.
\end{equation*}
\end{remark}
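The constants of the preceding remark can be found by a simple search: fix $B$ at least as large as $\|v_0\|_{L^{\infty}(\Gamma)}$, couple $A$ to $B$ through $k_uA^{\alpha}=k_vB^{\beta}$, and enlarge $B$ until $A$ also dominates $\|u_0\|_{L^{\infty}(\Omega)}$. A sketch with invented stand-ins for the sup-norms and rates:

```python
# Constant upper solution (k_u, k_v constant): choose B >= ||v0||, couple A to
# B through ku*A**alpha = kv*B**beta, and enlarge B until also A >= ||u0||.
# The sup-norms and rates below are invented stand-ins.
alpha, beta = 2.0, 3.0
ku, kv = 1.0, 0.5
u0_sup, v0_sup = 1.0, 0.2

B = max(v0_sup, 1.0)
A = (kv * B**beta / ku) ** (1.0 / alpha)
while A < u0_sup:          # enlarging B enlarges A, so the loop terminates
    B *= 2.0
    A = (kv * B**beta / ku) ** (1.0 / alpha)
```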
\begin{remark}
The proof of the comparison principle of pairs of upper and lower solutions in Lemma \ref{lem:comparison} can readily be generalised to locally Lipschitz functions $F$, $G$, which satisfy
$F(\underline{u},\underline{v}) - F(\overline{u},\overline{v})\le L (|\underline{u}-\overline{u}|+(\underline{v}-\overline{v})_{+})$
and $G(\underline{u},\underline{v}) - G(\overline{u},\overline{v})\le L ((\underline{u}-\overline{u})_{+}+|\underline{v}-\overline{v}|)$ pointwise in $(t,x)$. However, for general $F$ and $G$, the existence of an upper solution is unclear and cannot be expected in general. Otherwise, the construction below of global solutions from converging upper and lower solutions would lead to global existence of solutions, where no such existence can be expected. Nevertheless, we refer to \cite[Section 1.8]{Pao} for finding upper and lower solutions for several specific problems.
\end{remark}
In order to prove our existence result, we introduce the following auxiliary functions, which will be useful for proving the monotonicity of the sequences of upper and lower solutions.
\begin{align}
f(t,x,u,v) &= F(t,x,u,v) + \alpha L_u(t,x)\,u \nonumber\\
g(t,x,u,v) &= G(t,x,u,v) + \beta L_v(t,x)\,v.
\label{fg}
\end{align}
\begin{lemma}\label{newnonlinearities}
Functions $f$ and $g$ inherit the following properties from functions $F$ and $G$:
\begin{itemize}
\item[(i)] The functions $f(t,x,u, \cdot)$ and $g(t,x,\cdot,v)$ are non-decreasing for any $(t,x)\in [0,\infty)\times \Gamma$ and any $u, v\geq 0$.
\item[(ii)] For all $(\overline{u},\overline{v})\geq (u_1,v_1)\geq (u_2,v_2)\geq (0,0)$ there holds:
\begin{multline*}
\qquad\qquad f(t, x, u_1, v) - f(t, x, u_2, v) \geq\\ -\alpha L_u(t,x)(u_1 - u_2)_{+}+ \alpha L_u(t,x)(u_1-u_2)\geq 0,
\end{multline*}
and
\begin{multline*}
\qquad\qquad g(t,x,u,v_1) - g(t,x,u,v_2) \geq \\-\beta L_v(t,x)(v_1 - v_2)_{+}+\beta L_v(t,x)(v_1-v_2)\geq 0.
\end{multline*}
Thus, the functions $f(t,x,\cdot,v)$ and $g(t,x,u,\cdot)$ are monotone non-de\-creasing for all $(\overline{u},\overline{v})\geq (u_1,v_1)\geq (u_2,v_2)\geq (0,0)$, in contrast to $F$ and $G$.
\end{itemize}
\end{lemma}
\begin{proof}
The statements of the above Lemma follow directly from Lemma \ref{Nonlinearities},
in particular from \eqref{FLipschitzlower} and \eqref{GLipschitzlower}.
\end{proof}
\medskip
With the notation \eqref{fg}, the system \eqref{e1} rewrites as
\begin{equation}
\begin{cases}
u_t - \delta_u\Delta u = 0, &x\in\Omega, t>0,\\
\delta_u\frac{\partial u}{\partial \nu} + \alpha L_u(t,x)\,u = f(t,x,u,v),&x\in\Gamma, t>0\\
v_t - \delta_v\Delta_{\Gamma}v + \beta L_v(t,x)\,v= g(t,x,u,v),&x\in\Gamma, t>0.
\end{cases}
\label{a2_2}
\end{equation}
Hereafter, we write $f(u,v)$ and $g(u,v)$ for $f(t,x,u,v)$ and $g(t,x,u,v)$, respectively, unless stated otherwise.
\medskip
Starting from the pair of lower and upper solutions $(\underline{u},\underline{v})\leq(\overline{u},\overline{v})$
as constructed in Proposition \ref{UpperLower}, we will construct a sequence of lower solutions $\{(\underline{u}^{(k)}, \underline{v}^{(k)})\}_{k\geq 0}$ as follows:
\begin{equation}\tag{I.0}\label{I0}
(\underline{u}^{(0)}, \underline{v}^{(0)}) = (\underline{u}, \underline{v})
\end{equation}
and for all $k \geq 1$, $\underline{u}^{(k)}$ and $\underline{v}^{(k)}$ are the solutions of the following heat equation with inhomogeneous Robin boundary condition:
\begin{equation}\tag{I.1}\label{I1}
\begin{cases}
\partial_t\underline{u}^{(k)} - \delta_u\Delta\underline{u}^{(k)} = 0, &x\in\Omega, t>0,\\
\delta_u\frac{\partial\underline{u}^{(k)}}{\partial\nu} + \alpha L_u\underline{u}^{(k)} = f(\underline{u}^{(k-1)}, \underline{v}^{(k-1)}), &x\in\Gamma, t>0,\\
\underline{u}^{(k)}(0,x) = u_0(x) \in L^{\infty}(\Omega),&x\in\Omega,
\end{cases}
\end{equation}
and the following linear inhomogeneous equation:
\begin{equation}\tag{I.2}\label{I2}
\begin{cases}
\partial_t\underline{v}^{(k)} - \delta_v\Delta_{\Gamma}\underline{v}^{(k)} + \beta L_v\underline{v}^{(k)} = g(\underline{u}^{(k-1)}, \underline{v}^{(k-1)}),&x\in\Gamma, t>0,\\
\underline{v}^{(k)}(0,x) = v_0(x)\in L^{\infty}(\Gamma), &x\in\Gamma.
\end{cases}
\end{equation}
Similarly, we construct a sequence of upper solutions $\{(\overline{u}^{(k)}, \overline{v}^{(k)})\}_{k\geq 0}$:
\begin{equation}\tag{II.0}\label{II0}
(\overline{u}^{(0)}, \overline{v}^{(0)}) = (\overline{u}, \overline{v})
\end{equation}
and for all $k \geq 1$, $\overline{u}^{(k)}$ and $\overline{v}^{(k)}$ are the solutions of the following heat equation with inhomogeneous Robin boundary condition:
\begin{equation}\tag{II.1}\label{II1}
\begin{cases}
\partial_t\overline{u}^{(k)} - \delta_{u}\Delta\overline{u}^{(k)} = 0,&x\in\Omega,t>0,\\
\delta_u\frac{\partial\overline{u}^{(k)}}{\partial\nu} + \alpha L_u\overline{u}^{(k)}= f(\overline{u}^{(k-1)}, \overline{v}^{(k-1)}), &x\in\Gamma, t>0,\\
\overline{u}^{(k)}(0,x) = u_0(x),&x\in\Omega,
\end{cases}
\end{equation}
and the following linear inhomogeneous equation:
\begin{equation}\tag{II.2}\label{II2}
\begin{cases}
\partial_t\overline{v}^{(k)} - \delta_v\Delta_{\Gamma}\overline{v}^{(k)} + \beta L_v\overline{v}^{(k)} = g(\overline{u}^{(k-1)}, \overline{v}^{(k-1)}), &x\in\Gamma, t>0,\\
\overline{v}^{(k)}(0,x) = v_0(x), &x\in\Gamma.
\end{cases}
\end{equation}
The existence and uniqueness of the sequences of lower and upper solutions $\underline{u}^{(k)}$ and $\underline{v}^{(k)}$ follow from classical arguments, applied iteratively starting from \eqref{I0} and \eqref{II0}. Given, for instance, $(\underline{u}^{(k-1)}, \underline{v}^{(k-1)})$, the system \eqref{I1} is a heat equation with inhomogeneous Robin boundary condition and bounded coefficient $L_u(t,x)\in L^{\infty}(\mathbb{R}_+\times\Gamma)$. Thus, the
existence of a unique weak solution in the sense of Definition \ref{def:weaksolution_1} follows e.g. from \cite{RMJ, RMJ1, NiPhD}. Moreover, equation \eqref{I2} is a linear heat equation on a manifold without boundary, and the existence of a unique weak solution follows e.g. from \cite[Chapter 6]{Taylorbook}.
Moreover, if $\underline{u}^{(k-1)}$ and $\underline{v}^{(k-1)}$ satisfy the regularity \eqref{regular_u} and \eqref{regular_v}, then by the locally Lipschitz properties of $f$ and $g$, we get $f(\underline{u}^{(k-1)}, \underline{v}^{(k-1)})$, $g(\underline{u}^{(k-1)}, \underline{v}^{(k-1)})\in L^{\infty}(\Gamma)$, which implies from \eqref{I1} that $\underline{u}^{(k)}$ satisfies \eqref{regular_u} and from \eqref{I2} that $\underline{v}^{(k)}$ satisfies \eqref{regular_v}.
An analogous argument applied to the equations \eqref{II1} and \eqref{II2} yields the existence, uniqueness and regularity of $(\overline{u}^{(k)}, \overline{v}^{(k)})$.
\begin{lemma}\label{lem:monotonicity}
The sequence $\{(\underline{u}^{(k)}, \underline{v}^{(k)})\}_{k\geq 0}$ is an increasing sequence of lower solutions while $\{(\overline{u}^{(k)}, \overline{v}^{(k)})\}_{k\geq 0}$ is a decreasing sequence of upper solutions. More precisely, for all $k\geq 0$,
\begin{equation*}
(\overline{u}^{(k)}, \overline{v}^{(k)}) \geq (\overline{u}^{(k+1)}, \overline{v}^{(k+1)}) \geq (\underline{u}^{(k+1)}, \underline{v}^{(k+1)}) \geq (\underline{u}^{(k)}, \underline{v}^{(k)})
\end{equation*}
in the sense of Definition \ref{aecomparision}.
\end{lemma}
\begin{proof}\hfill\break
\noindent\underline{\bf Claim 1:} $(\underline{u}^{(k+1)}, \underline{v}^{(k+1)}) \geq (\underline{u}^{(k)}, \underline{v}^{(k)})$ for all $k\geq 0$. We proceed by induction. Denote $\underline{w}^{(k)} = \underline{u}^{(k+1)} - \underline{u}^{(k)}$ and $\underline{z}^{(k)} = \underline{v}^{(k+1)} - \underline{v}^{(k)}$. From \eqref{I1} and \eqref{I2}, by noticing that $(\underline{u}^{(0)}, \underline{v}^{(0)}) = (0,0)$ and thus $f(\underline{u}^{(0)}, \underline{v}^{(0)}) = g(\underline{u}^{(0)}, \underline{v}^{(0)})= 0$, we have
\begin{equation}
\begin{cases}
\partial_t\underline{w}^{(0)} - \delta_u\Delta\underline{w}^{(0)} \geq 0,\\
\delta_{u}\frac{\partial\underline{w}^{(0)}}{\partial\nu} + \alpha L_u\underline{w}^{(0)} \geq 0,\\
\underline{w}^{(0)}(0) \ge 0,\\
\partial_t\underline{z}^{(0)} - \delta_{v}\Delta_{\Gamma}\underline{z}^{(0)} + \beta L_v\underline{z}^{(0)} \geq 0,\\
\underline{z}^{(0)}(0) \ge 0.
\end{cases}
\label{a8}
\end{equation}
and the maximum principle for weak solutions (see e.g. \cite[Theorem 11.9]{Mi00}) implies $(\underline{w}^{(0)}, \underline{z}^{(0)}) \geq 0$.
Now, assume that $(\underline{u}^{(i)}, \underline{v}^{(i)}) \geq (\underline{u}^{(i-1)}, \underline{v}^{(i-1)})$ for all $i = 1, 2, \ldots, k$. Then the pair $(\underline{w}^{(k)}, \underline{z}^{(k)})$ satisfies
\begin{equation}
\begin{cases}
\partial_t\underline{w}^{(k)} - \delta_u\Delta\underline{w}^{(k)} = 0,\\
\delta_{u}\frac{\partial\underline{w}^{(k)}}{\partial\nu} + \alpha L_u\underline{w}^{(k)} = f(\underline{u}^{(k)}, \underline{v}^{(k)}) - f(\underline{u}^{(k-1)}, \underline{v}^{(k-1)}),\\
\partial_t\underline{z}^{(k)} - \delta_{v}\Delta_{\Gamma}\underline{z}^{(k)} + \beta L_v\underline{z}^{(k)} = g(\underline{u}^{(k)}, \underline{v}^{(k)}) - g(\underline{u}^{(k-1)}, \underline{v}^{(k-1)}),\\
\underline{w}^{(k)}(0) = 0, \underline{z}^{(k)}(0) = 0.
\end{cases}
\label{a9}
\end{equation}
Using Lemma \ref{newnonlinearities} we have
\begin{multline}\label{newestimate}
f(\underline{u}^{(k)}, \underline{v}^{(k)}) - f(\underline{u}^{(k-1)}, \underline{v}^{(k-1)})=f(\underline{u}^{(k)}, \underline{v}^{(k)}) - f(\underline{u}^{(k)}, \underline{v}^{(k-1)})\\
+f(\underline{u}^{(k)}, \underline{v}^{(k-1)}) - f(\underline{u}^{(k-1)}, \underline{v}^{(k-1)}) \geq 0
\end{multline}
since $(\underline{u}^{(k)}, \underline{v}^{(k)}) \geq (\underline{u}^{(k-1)}, \underline{v}^{(k-1)})$. Analogously, we also have the estimate
\begin{equation*}
g(\underline{u}^{(k)}, \underline{v}^{(k)}) - g(\underline{u}^{(k-1)}, \underline{v}^{(k-1)}) \geq 0.
\end{equation*}
Thus, by using the maximum principle again, we have
\begin{equation*}
(\underline{w}^{(k)}, \underline{z}^{(k)}) \geq 0
\end{equation*}
or equivalently
\begin{equation*}
(\underline{u}^{(k+1)}, \underline{v}^{(k+1)}) \geq (\underline{u}^{(k)}, \underline{v}^{(k)}).
\end{equation*}
\medskip
\noindent\underline{\bf Claim 2:} $(\overline{u}^{(k+1)}, \overline{v}^{(k+1)}) \leq (\overline{u}^{(k)}, \overline{v}^{(k)})$ for all $k\geq 0$.
The proof is similar to Claim 1 and is omitted.
\medskip
\noindent\underline{\bf Claim 3:} For each $k\geq 0$, $(\overline{u}^{(k)}, \overline{v}^{(k)})$ is an upper solution and $(\underline{u}^{(k)}, \underline{v}^{(k)})$ is a lower solution. We will show the result for $(\underline{u}^{(k)}, \underline{v}^{(k)})$; the result for $(\overline{u}^{(k)}, \overline{v}^{(k)})$ follows similarly. We again prove this claim by induction: the case $k = 0$ follows directly from \eqref{I0}. Assume that $(\underline{u}^{(i)}, \underline{v}^{(i)})$ are lower solutions for all $i = 0, 1, \ldots, k - 1$. We will check that $(\underline{u}^{(k)}, \underline{v}^{(k)})$ is also a lower solution by using Definition \ref{def:UpperLowerSolution} and the monotonicity of $\{(\underline{u}^{(j)}, \underline{v}^{(j)})\}_{j\geq 0}$. Taking the weak formulation of \eqref{I1}, we have
\begin{multline*}
\int_0^T\!\!\int_{\Omega}[-\underline{u}^{(k)}\varphi_t + \delta_u\nabla\underline{u}^{(k)}\nabla\varphi]dxdt\\
- \int_0^T\!\int_{\Gamma}[-\alpha L_u\underline{u}^{(k)} + f(\underline{u}^{(k-1)}, \underline{v}^{(k-1)})]\varphi dSdt = \int_{\Omega}\underline{u}^{(k)}(0)\varphi dx.
\end{multline*}
Hence, we have by \eqref{fg}
\begin{multline*}
\int_0^T\!\!\int_{\Omega}[-\underline{u}^{(k)}\varphi_t + \delta_u\nabla\underline{u}^{(k)}\nabla\varphi]dxdt - \int_0^T\!\int_{\Gamma} F(\underline{u}^{(k)}, \underline{v}^{(k)})\varphi dSdt\\
= \int_{\Omega}\underline{u}^{(k)}(0)\varphi dx + \int_0^T\!\int_{\Gamma}[f(\underline{u}^{(k-1)}, \underline{v}^{(k-1)}) - f(\underline{u}^{(k)}, \underline{v}^{(k)})]\varphi dSdt\\
\leq \int_{\Omega}\underline{u}^{(k)}(0)\varphi dx,
\end{multline*}
where we have used Lemma \ref{newnonlinearities} as above in estimate \eqref{newestimate} and the non-decreasing monotonicity of the sequence $\{\underline{u}^{(j)}, \underline{v}^{(j)}\}_{j\geq 0}$. An analogous argument provides
\begin{multline*}
\int_0^T\!\!\int_{\Gamma}[-\underline{v}^{(k)}\psi_t + \delta_v\nabla_{\Gamma}\underline{v}^{(k)}\nabla_{\Gamma}\psi]dSdt - \int_0^T\!\int_{\Gamma} G(\underline{u}^{(k)}, \underline{v}^{(k)})\psi dSdt\\
= \int_{\Gamma}\underline{v}^{(k)}(0)\psi dS + \int_0^T\!\int_{\Gamma}[g(\underline{u}^{(k-1)}, \underline{v}^{(k-1)}) - g(\underline{u}^{(k)}, \underline{v}^{(k)})]\psi dSdt\\
\leq \int_{\Gamma}\underline{v}^{(k)}(0)\psi dS.
\end{multline*}
Taking into account that $\underline{u}^{(k)}(0) = u_0$ and $\underline{v}^{(k)}(0) = v_0$, we have that $(\underline{u}^{(k)}, \underline{v}^{(k)})$ is a lower solution.
\end{proof}
\medskip
From Lemma \ref{lem:monotonicity} with the help of the monotone convergence theorem, we have the following almost everywhere pointwise limits in $(0,T)\times \Omega$ and $(0,T)\times \Gamma$ respectively:
\begin{equation}
\lim\limits_{k\rightarrow \infty}(\underline{u}^{(k)}, \underline{v}^{(k)}) = (\underline{u}^{*}, \underline{v}^{*}) \text{ and } \lim\limits_{k\rightarrow \infty}(\overline{u}^{(k)}, \overline{v}^{(k)}) = (\overline{u}^{*}, \overline{v}^{*}).
\label{a11}
\end{equation}
The following {\it a priori estimates} are uniform in $k$ on any finite time interval $(0,T)$ and will allow us to pass to the limit $k\to\infty$:
\begin{lemma}\label{lem:APrioriEstimate}
The sequences $\{\overline{u}^{(k)}\}_{k\geq 0}$ and $\{\overline{v}^{(k)}\}_{k\geq 0}$ are bounded uniformly in $k$ in $L^{\infty}(0,T;L^{\infty}(\Omega))\cap L^2(0,T;H^1(\Omega))$ and $L^{\infty}(0,T;L^{\infty}(\Gamma))\cap L^2(0,T;H^1(\Gamma))$, respectively, for any given $T>0$. Moreover, the sequence $\{(\overline{u}^{(k)})^{\alpha}|_{\Gamma}\}_{k\geq 0}$ is bounded in $L^2(0,T;L^2(\Gamma))$. We also have analogous estimates for $\{\underline{u}^{(k)}\}_{k\geq 0}$ and $\{\underline{v}^{(k)}\}_{k\geq 0}$.
\end{lemma}
\begin{proof}
We prove the estimates only for $\{\overline{u}^{(k)}\}_{k\geq 0}$ and $\{\overline{v}^{(k)}\}_{k\geq 0}$, since the arguments for $\{\underline{u}^{(k)}\}_{k\geq 0}$ and $\{\underline{v}^{(k)}\}_{k\geq 0}$ are identical. The estimate
\begin{equation*}
(\overline{u}^{(k)}, \overline{v}^{(k)}) \leq (\overline{u}, \overline{v})
\end{equation*}
yields that $\{\overline{u}^{(k)}\}_{k\geq 0}$ is bounded in $L^{\infty}(0,T;L^{\infty}(\Omega))$ and $\{\overline{v}^{(k)}\}_{k\geq 0}$ is bounded in $L^{\infty}(0,T;L^{\infty}(\Gamma))$. More precisely, there exists $C_0 >0$ independent of $k$ such that
\begin{equation*}
\|\overline{u}^{(k)}\|_{L^{\infty}(0,T;L^{\infty}(\Omega))} \leq C_0 \text{ and }\|\overline{v}^{(k)}\|_{L^{\infty}(0,T;L^{\infty}(\Gamma))} \leq C_0 \text{ for all } k\geq 0, T>0.
\end{equation*}
We now rewrite the equation for $\overline{u}^{(k)}$ from \eqref{II1}
\begin{equation*}
\begin{cases}
\partial_t\overline{u}^{(k)} - \delta_u\Delta \overline{u}^{(k)} = 0,\\
\delta_u\frac{\partial\overline{u}^{(k)}}{\partial\nu} + \alpha L_u \overline{u}^{(k)} = -\alpha[k_u(\overline{u}^{(k-1)})^{\alpha} - k_v(\overline{v}^{(k-1)})^{\beta}] + \alpha L_u \overline{u}^{(k-1)}.
\end{cases}
\end{equation*}
By taking the inner product with $\overline{u}^{(k)}$ in $L^2(\Omega)$, we get
\begin{multline}\label{H1estimate}
\frac{1}{2}\frac{d}{dt}\|\overline{u}^{(k)}\|_{L^2(\Omega)}^2 + \delta_u\|\nabla\overline{u}^{(k)}\|_{L^2(\Omega)}^2\\
= \int_{\Gamma}\left(- \alpha L_u \overline{u}^{(k)} -\alpha[k_u(\overline{u}^{(k-1)})^{\alpha} - k_v(\overline{v}^{(k-1)})^{\beta}] + \alpha L_u \overline{u}^{(k-1)}\right)\overline{u}^{(k)}dS\\
\leq \alpha\int_{\Gamma}k_v(\overline{v}^{(k-1)})^{\beta}\overline{u}^{(k)}dS + \alpha\int_{\Gamma}L_u\overline{u}^{(k-1)}\overline{u}^{(k)}dS
\end{multline}
thanks to the nonnegativity of $\overline{u}^{(k)}$ and $\overline{u}^{(k-1)}$, together with $k_u(t,x)\geq 0$ and $L_u = \alpha k_u \overline{u}^{\alpha - 1} \geq 0$. In order to estimate the right-hand side of \eqref{H1estimate}, we first have
\begin{equation}
\begin{aligned}
\alpha\int_{\Gamma}&k_v(\overline{v}^{(k-1)})^{\beta}\overline{u}^{(k)}dS\\
&\leq 2\alpha \|k_v\|_{\infty}\left(\int_{\Gamma}|\overline{v}^{(k-1)}|^{2\beta}dS + \int_{\Gamma}|\overline{u}^{(k)}|^2dS\right)\\
&\qquad(\text{by using the modified Trace inequality in Lemma \ref{lem:TraceInequality}})\\
&\leq 2\alpha \|k_v\|_{\infty}|\Gamma|\|\overline{v}^{(k-1)}\|_{L^{\infty}(\Gamma)}^{2\beta} + \frac{\delta_u}{4}\|\nabla \overline{u}^{(k)}\|_{L^2(\Omega)}^2 + C\|\overline{u}^{(k)}\|_{L^\infty(\Omega)}^2.
\end{aligned}
\label{es1}
\end{equation}
Moreover, by using $L_u(t,x) = \alpha k_u \overline{u}^{\alpha-1}(t,x) \leq \alpha \|k_u\|_{\infty}\|\overline{u}\|_{L^{\infty}(0,T;L^{\infty}(\Omega))}^{\alpha - 1} =: C_1$, we get
\begin{equation}
\begin{aligned}
\alpha\int_{\Gamma}&L_u\overline{u}^{(k-1)}\overline{u}^{(k)}dS\\
&\leq 2\alpha C_1\left(\int_{\Gamma}|\overline{u}^{(k-1)}|^2dS + \int_{\Gamma}|\overline{u}^{(k)}|^2dS\right)\\
&\leq 2\alpha C_1\left(\frac{\delta_u}{8\alpha C_1}\|\nabla \overline{u}^{(k-1)}\|_{L^2(\Omega)}^2 + C\|\overline{u}^{(k-1)}\|_{L^2(\Omega)}^2 \right.\\
&\qquad\qquad\qquad \left.+ \frac{\delta_u}{8\alpha C_1}\|\nabla \overline{u}^{(k)}\|_{L^2(\Omega)}^2 + C\|\overline{u}^{(k)}\|_{L^2(\Omega)}^2\right)\\
&\leq \frac{\delta_u}{4}\|\nabla \overline{u}^{(k-1)}\|_{L^2(\Omega)}^2 + \frac{\delta_u}{4}\|\nabla \overline{u}^{(k)}\|_{L^2(\Omega)}^2 + C(\|\overline{u}^{(k-1)}\|_{L^{\infty}(\Omega)}^2 + \|\overline{u}^{(k)}\|_{L^{\infty}(\Omega)}^2),
\end{aligned}
\label{es2}
\end{equation}
with a constant $C=C(C_1,\delta_u,\delta_v,\alpha)$.
By applying \eqref{es1} and \eqref{es2} to \eqref{H1estimate}, we obtain
\begin{multline}
\frac{d}{dt}\|\overline{u}^{(k)}\|_{L^2(\Omega)}^2 + \frac{\delta_u}{2}\|\nabla \overline{u}^{(k)}\|_{L^2(\Omega)}^2\\
\leq \frac{\delta_u}{4}\|\nabla \overline{u}^{(k-1)}\|_{L^2(\Omega)}^2 + C\left(\|\overline{u}^{(k)}\|_{L^{\infty}(\Omega)}^2 + \|\overline{u}^{(k-1)}\|_{L^{\infty}(\Omega)}^2 + \|\overline{v}^{(k-1)}\|_{L^{\infty}(\Gamma)}^{2\beta}\right).
\label{es3}
\end{multline}
Integrating \eqref{es3} on $(0,T)$ yields
\begin{align}
\|\nabla &\overline{u}^{(k)}\|_{L^2(0,T;L^2(\Omega))}^2 \leq \frac{2}{\delta_u}\|\overline{u}^{(k)}(0)\|_{L^2(\Omega)}^2+\frac{1}{2}\|\nabla \overline{u}^{(k-1)}\|_{L^2(0,T;L^2(\Omega))}^2\nonumber\\
&\quad + C\left(\|\overline{u}^{(k)}\|_{L^{\infty}(0,T;L^{\infty}(\Omega))}^2 + \|\overline{u}^{(k-1)}\|_{L^{\infty}(0,T;L^{\infty}(\Omega))}^2 + \|\overline{v}^{(k-1)}\|_{L^{\infty}(0,T;L^{\infty}(\Gamma))}^{2\beta}\right)\nonumber\\
&\leq \frac{1}{2}\|\nabla \overline{u}^{(k-1)}\|_{L^2(0,T;L^2(\Omega))}^2 + \frac{2}{\delta_u}\|u_0\|_{L^2(\Omega)}^2+ C(2C_0^2 + C_0^{2\beta})\nonumber\\
&\leq \frac{1}{2}\|\nabla \overline{u}^{(k-1)}\|_{L^2(0,T;L^2(\Omega))}^2 + C.
\label{es4}
\end{align}
Iterating this inequality, we obtain
\begin{equation*}
\begin{aligned}
\|\nabla \overline{u}^{(k)}\|_{L^2(0,T;L^2(\Omega))}^2 &\leq \frac{1}{2}\|\nabla \overline{u}^{(k-1)}\|_{L^2(0,T;L^2(\Omega))}^2 + C\\
&\leq \frac{1}{4}\|\nabla \overline{u}^{(k-2)}\|_{L^2(0,T;L^2(\Omega))}^2 + C\left(1+\frac{1}{2}\right)\\
&\leq \ldots\\
&\leq \frac{1}{2^k}\|\nabla \overline{u}^{(0)}\|_{L^2(0,T;L^2(\Omega))}^2 + C\left(1 + \frac{1}{2} + \ldots + \frac{1}{2^{k-1}}\right)\\
&\leq \frac{1}{2^k}\|\nabla \overline{u}\|_{L^2(0,T;L^2(\Omega))}^2 + 2C.
\end{aligned}
\end{equation*}
Therefore, $\{|\nabla \overline{u}^{(k)}|\}_{k\geq 0}$ is bounded in $L^2(0,T;L^2(\Omega))$ uniformly in $k$. Taking into account that $\{\overline{u}^{(k)}\}_{k\geq 0}$ is bounded in $L^{\infty}(0,T;L^{\infty}(\Omega))\hookrightarrow L^2(0,T;L^2(\Omega))$, we conclude that $\{\overline{u}^{(k)}\}_{k\geq 0}$ is uniformly bounded in $L^2(0,T;H^1(\Omega))$.
We next prove that $\{(\overline{u}^{(k)})^{\alpha}|_{\Gamma}\}_{k\geq 0}$ is bounded in $L^2(0,T;L^{2}(\Gamma))$. Indeed, adapting the estimate in Remark \ref{remark:Definition}, we get
\begin{align*}
\|(\overline{u}^{(k)})^{\alpha}&\|_{L^2(0,T;L^2(\Gamma))}^2 = \int_0^T\!\int_{\Gamma}(\overline{u}^{(k)})^{2\alpha}dSdt\\
&\leq C\int_{0}^{T}\left(\|\overline{u}^{(k)}\|_{L^{\infty}(\Omega)}^{2\alpha - 2}\|\nabla \overline{u}^{(k)}\|_{L^2(\Omega)}^2 + \|\overline{u}^{(k)}\|_{L^{\infty}(\Omega)}^{2\alpha}\right)dt\\
&\leq C\|\overline{u}^{(k)}\|_{L^{\infty}(0,T;L^{\infty}(\Omega))}^{2\alpha - 2}\|\nabla \overline{u}^{(k)}\|_{L^{2}(0,T;L^2(\Omega))}^2 + C\|\overline{u}^{(k)}\|_{L^{\infty}(0,T;L^{\infty}(\Omega))}^{2\alpha}.
\end{align*}
Thus, the boundedness of $\{(\overline{u}^{(k)})^{\alpha}|_{\Gamma}\}_{k\geq 0}$ in $L^2(0,T;L^2(\Gamma))$ follows from the boundedness of $\{\overline{u}^{(k)}\}_{k\geq 0}$ in $L^{\infty}(0,T;L^{\infty}(\Omega))$ and $L^2(0,T;H^1(\Omega))$.
It remains to prove that $\{\overline{v}^{(k)}\}_{k\geq 0}$ is bounded in $L^2(0,T;H^1(\Gamma))$. Multiplying the equation of $\overline{v}^{(k)}$
\begin{equation*}
\partial_t\overline{v}^{(k)} - \delta_v\Delta_{\Gamma} \overline{v}^{(k)} + \beta L_v \overline{v}^{(k)} = \beta[k_u(\overline{u}^{(k-1)})^{\alpha} - k_v(\overline{v}^{(k-1)})^{\beta}] + \beta L_v \overline{v}^{(k-1)}
\end{equation*}
by $\overline{v}^{(k)}$ in $L^2(\Gamma)$, we have
\begin{multline}\label{es5}
\frac{1}{2}\frac{d}{dt}\|\overline{v}^{(k)}\|_{L^2(\Gamma)}^2 + \delta_v\|\nabla_{\Gamma}\overline{v}^{(k)}\|_{L^2(\Gamma)}^2 + \beta\int_{\Gamma}L_v|\overline{v}^{(k)}|^2dS\\
= \beta\int_{\Gamma}[k_u(\overline{u}^{(k-1)})^{\alpha} - k_v(\overline{v}^{(k-1)})^{\beta}]\overline{v}^{(k)}dS + \beta\int_{\Gamma}L_v\overline{v}^{(k-1)}\overline{v}^{(k)}dS\\
\leq \beta \|k_u\|_{\infty}\int_{\Gamma}(\overline{u}^{(k-1)})^{\alpha}\overline{v}^{(k)}dS + \beta C_2\int_{\Gamma}\overline{v}^{(k-1)}\overline{v}^{(k)}dS
\end{multline}
since $k_v(\overline{v}^{(k-1)})^{\beta}\overline{v}^{(k)} \geq 0$ and $L_v = \beta k_v \overline{v}^{\beta - 1} \leq \beta \|k_v\|_{\infty}\|\overline{v}\|_{L^{\infty}(0,T;L^{\infty}(\Gamma))}^{\beta - 1} =: C_2$. By Young's inequality, we obtain
\begin{equation*}
\int_{\Gamma}(\overline{u}^{(k-1)})^{\alpha}\overline{v}^{(k)}dS \leq \frac{1}{2}\|(\overline{u}^{(k-1)})^{\alpha}\|_{L^2(\Gamma)}^2 + \frac{1}{2}\|\overline{v}^{(k)}\|_{L^2(\Gamma)}^2,
\end{equation*}
and
\begin{equation*}
\int_{\Gamma}\overline{v}^{(k-1)}\overline{v}^{(k)}dS \leq \frac{1}{2}\|\overline{v}^{(k-1)}\|_{L^2(\Gamma)}^2 + \frac{1}{2}\|\overline{v}^{(k)}\|_{L^2(\Gamma)}^2.
\end{equation*}
Therefore, it follows from \eqref{es5} that
\begin{multline}
\frac{d}{dt}\|\overline{v}^{(k)}\|_{L^2(\Gamma)}^2 + 2\delta_v\|\nabla_{\Gamma}\overline{v}^{(k)}\|_{L^2(\Gamma)}^2\\
\leq \beta \|k_u\|_{\infty}\|(\overline{u}^{(k-1)})^{\alpha}\|_{L^2(\Gamma)}^2 + \beta( \|k_u\|_{\infty} + C_2)\|\overline{v}^{(k)}\|_{L^2(\Gamma)}^2 + \beta C_2\|\overline{v}^{(k-1)}\|_{L^2(\Gamma)}^2.
\label{es6}
\end{multline}
By integrating \eqref{es6} over $(0,T)$ and by using that $\{(\overline{u}^{(k)})^{\alpha}|_{\Gamma}\}_{k\geq 0}$ is uniformly bounded in $L^2(0,T;L^2(\Gamma))$ and $\{\overline{v}^{(k)}\}_{k\geq 0}$ is uniformly bounded in $L^{\infty}(0,T;L^{\infty}(\Gamma))$, we conclude that $\{\overline{v}^{(k)}\}_{k\geq 0}$ is uniformly bounded in $L^2(0,T;H^1(\Gamma))$. This completes the proof of the Lemma.
\end{proof}
\begin{proposition}\label{pro:Solutions}
Both a.e. pointwise limits $(\underline{u}^{*}, \underline{v}^{*})$ and $ (\overline{u}^{*}, \overline{v}^{*})$ of \eqref{a11} are solutions of \eqref{e1}.
\end{proposition}
\begin{proof}
We will only prove that $(\underline{u}^{*}, \underline{v}^{*})$ is a solution of \eqref{e1}, since the proof for $(\overline{u}^{*}, \overline{v}^{*})$ is analogous. Taking the weak formulation of \eqref{I1}, we have
\begin{multline}
\int_0^T\!\!\int_{\Omega}[-\underline{u}^{(k)}\varphi_t + \delta_u\nabla\underline{u}^{(k)}\nabla \varphi]dxdt\\
= \int_{\Omega}u_0\varphi dx + \int_0^T\!\int_{\Gamma}[-\alpha L_u\underline{u}^{(k)} + f(\underline{u}^{(k-1)}, \underline{v}^{(k-1)})]\varphi dSdt\\
= \int_{\Omega}u_0\varphi dx + \int_0^T\!\int_{\Gamma}[-\alpha L_u\underline{u}^{(k)} - \alpha(k_u(\underline{u}^{(k-1)})^{\alpha} - k_v(\underline{v}^{(k-1)})^{\beta}) + \alpha L_u\underline{u}^{(k-1)}]\varphi dSdt.
\label{sol1}
\end{multline}
Now using Lemma \ref{lem:APrioriEstimate}, we can apply the dominated convergence theorem to pass to the limit as $k\rightarrow +\infty$ in all terms of \eqref{sol1} and get
\begin{multline}
\int_0^T\!\!\int_{\Omega}[-\underline{u}^{*}\varphi_t + \delta_u\nabla\underline{u}^{*}\nabla \varphi]dxdt\\
=\int_{\Omega}u_0\varphi dx + \int_0^T\!\int_{\Gamma}[-\alpha L_u\underline{u}^{*} - \alpha(k_u(\underline{u}^{*})^{\alpha} - k_v(\underline{v}^{*})^{\beta}) + \alpha L_u\underline{u}^{*}]\varphi dSdt\\
= \int_{\Omega}u_0\varphi dx - \int_0^T\!\int_{\Gamma}\alpha[k_u(\underline{u}^{*})^{\alpha} - k_v(\underline{v}^{*})^{\beta}]\varphi dSdt.
\label{sol2}
\end{multline}
Similarly, we can pass to the limit as $k\rightarrow +\infty$ in
\begin{multline*}
\int_0^T\!\int_{\Gamma}[-\underline{v}^{(k)}\psi_t + \delta_v\nabla_{\Gamma}\underline{v}^{(k)}\nabla_{\Gamma}\psi + \beta L_v\underline{v}^{(k)}\psi]dSdt\\
= \int_{\Gamma}v_0\psi dS + \int_0^T\!\int_{\Gamma}[\beta(k_u(\underline{u}^{(k-1)})^{\alpha} - k_v(\underline{v}^{(k-1)})^{\beta}) + \beta L_v\underline{v}^{(k-1)}]\psi dSdt
\end{multline*}
to derive that
\begin{equation}
\int_0^T\!\int_{\Gamma}[-\underline{v}^{*}\psi_t + \delta_v\nabla_{\Gamma}\underline{v}^{*}\nabla_{\Gamma}\psi]dSdt
= \int_{\Gamma}v_0\psi dS + \int_0^T\!\int_{\Gamma}[\beta(k_u(\underline{u}^{*})^{\alpha} - k_v(\underline{v}^{*})^{\beta})]\psi dSdt.
\label{sol3}
\end{equation}
Equation \eqref{sol2} together with \eqref{sol3} means that $(\underline{u}^{*}, \underline{v}^{*})$ is a weak solution to the system \eqref{e1}.
\end{proof}
\begin{theorem}\label{theo:ExistenceAndUniqueness}
For all non-negative initial data $(u_0, v_0)\in L^{\infty}_+(\Omega)\times L^{\infty}_+(\Gamma)$, there exists a unique non-negative global weak solution $(u,v)$ for the system \eqref{e1}.
\end{theorem}
\begin{proof}
The existence of a solution follows from Proposition \ref{pro:Solutions}. The non-negativity of solutions follows from the comparison theorem, see Lemma \ref{lem:comparison}, since $(\underline{u},\underline{v}) = (0,0)$ is a lower solution. To prove uniqueness, it suffices to show that $(\underline{u}^{*},\underline{v}^{*}) = (\overline{u}^{*}, \overline{v}^{*})$. The technique is similar to the one used in the proof of Lemma \ref{lem:comparison}. Setting $w = \overline{u}^{*} - \underline{u}^{*}$ and $z = \overline{v}^{*} - \underline{v}^{*}$, we have $w \geq 0$, $z\geq 0$, $w(0) = z(0) = 0$ and
\begin{equation}
\begin{cases}
\int_0^T\!\!\int_{\Omega}[-w\varphi_t + \delta_u\nabla w\nabla \varphi]dxdt = \int_0^T\!\int_{\Gamma}(F(\overline{u}^{*}, \overline{v}^{*}) - F(\underline{u}^*, \underline{v}^*))\varphi dSdt,\\[1mm]
\int_0^T\!\int_{\Gamma}[-z\psi_t + \delta_v\nabla_{\Gamma}z\nabla_{\Gamma}\psi]dSdt = \int_0^T\!\int_{\Gamma}(G(\overline{u}^{*}, \overline{v}^{*}) - G(\underline{u}^*, \underline{v}^*))\psi dSdt.
\end{cases}
\label{tt1}
\end{equation}
Choosing $\varphi = w(t)$ in \eqref{tt1}, we obtain, for almost every $t\in (0,T)$,
\begin{equation}
\begin{aligned}
\frac{1}{2}&\frac{d}{dt}\int_{\Omega}|w|^2dx + \delta_u\int_{\Omega}|\nabla w|^2dx\\
&= \int_{\Gamma}(F(\overline{u}^*, \overline{v}^*) - F(\underline{u}^*, \underline{v}^*))wdS\\
&\leq C\int_{\Gamma}zwdS
\quad (\text{by using the locally Lipschitz property of $F$ \eqref{FLipschitzupper}})\\
&\leq \frac{\delta_u}{2}\int_{\Omega}|\nabla w|^2dx + C\int_{\Omega}|w|^2dx + C\int_{\Gamma}|z|^2dS \quad (\text{by Young and Lemma \ref{lem:TraceInequality}}).
\end{aligned}
\label{tt2}
\end{equation}
Similarly, by choosing $\psi=z$, we have
\begin{equation}
\frac{1}{2}\frac{d}{dt}\int_{\Gamma}|z|^2dS + \delta_v\int_{\Gamma}|\nabla_{\Gamma}z|^2dS \leq C\int_{\Gamma}|z|^2dS + C\int_{\Omega}|w|^2dx + \frac{\delta_u}{2}\int_{\Omega}|\nabla w|^2dx.
\label{tt3}
\end{equation}
Combining \eqref{tt2} and \eqref{tt3} implies
\begin{equation}
\frac{d}{dt}\left(\int_{\Omega}|w|^2dx + \int_{\Gamma}|z|^2dS\right) \leq C\left(\int_{\Omega}|w|^2dx + \int_{\Gamma}|z|^2dS\right).
\label{tt4}
\end{equation}
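For completeness, we make the final Gronwall step explicit: setting
\begin{equation*}
y(t) := \int_{\Omega}|w(t)|^2dx + \int_{\Gamma}|z(t)|^2dS,
\end{equation*}
inequality \eqref{tt4} reads $y'(t) \leq C\,y(t)$ with $y(0) = 0$, and Gronwall's lemma yields $0 \leq y(t) \leq y(0)e^{Ct} = 0$ for a.e. $t\in(0,T)$.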
Therefore, $w(t) = 0$ and $z(t) = 0$ for a.e. $t\in (0,T)$ since $w(0) = 0, z(0) = 0$. This completes the proof.
\end{proof}
\section{Convergence to equilibrium}
In this section, we assume that the reaction rates $k_u$ and $k_v$, and thus
the equilibrium state $(u_{\infty}, v_{\infty})$ (see \eqref{f1_1} below) are constant. Moreover, for the sake of readability of the arguments, we shall assume normalised rates $k_u = k_v = 1$ (w.l.o.g. thanks to a rescaling in the cases $\alpha\neq\beta$). In any case, the following proofs can be readily generalised to arbitrary constants $k_u>0$, $k_v>0$.
We shall apply the entropy method to prove that the unique solution to \eqref{e1} converges exponentially fast to the equilibrium $(u_{\infty}, v_{\infty})$ for any initial data $(u_0,v_0)\in L^{\infty}(\Omega)\times L^{\infty}(\Gamma)$.
While the entropy method is certainly expected to apply to general reaction rates, the case of non-constant equilibria requires a more complicated formalism (see e.g. \cite{DFM}),
which we omit here for the sake of clarity of the argument and leave it for further work.
In the following, we first consider the non-degenerate case $\delta_{v}>0$ and then the degenerate case $\delta_v= 0$. We remark that in the first case, with non-degenerate surface diffusion, our method relies only on natural a priori bounds entailed by well-defined entropy and entropy-dissipation functionals along the flow of the solution. In the case of degenerate diffusion, however, we require additional $L^{\infty}$-bounds on the solution. Since $L^{\infty}$-bounds of solutions to general systems are often unknown, the degenerate surface diffusion case poses more difficulties for generalisation than the non-degenerate case.
\medskip
The system \eqref{e1} satisfies the {\it mass conservation law} \eqref{cons}, that is,
\begin{equation*}
M=\beta\int_{\Omega}u(t,x)dx + \alpha\int_{\Gamma}v(t,x)dS = \beta\int_{\Omega}u_0(x)dx + \alpha\int_{\Gamma}v_0(x)dS>0,
\end{equation*}
where we assume that the initial mass is positive ($M>0$).
The equilibrium of non-negative solutions of the system \eqref{e1} is the unique pair of positive constants
$(u_{\infty}, v_{\infty})$ which balances the reaction rates, i.e.
\begin{equation}\label{f1}
u_\infty^{\alpha} = v_\infty^{\beta},
\end{equation}
and satisfy the mass conservation law
\begin{equation}
\beta|\Omega|u_{\infty} + \alpha|\Gamma|v_{\infty} = M.
\label{f1_0}
\end{equation}
We remark that the uniqueness of the equilibrium follows from the opposite strict monotonicities of the left- and right-hand sides of the equilibrium conditions
\begin{equation}
u_{\infty}^{\alpha} = \Bigl(\frac{1}{\alpha|\Gamma|}(M-\beta|\Omega|u_{\infty})\Bigr)^{\beta},\qquad
v_{\infty}^{\beta} = \Bigl(\frac{1}{\beta|\Omega|}(M-\alpha|\Gamma|v_{\infty})\Bigr)^{\alpha}
\label{f1_1}
\end{equation}
on the intervals of equilibrium values, which are admissible for non-negative solutions of systems \eqref{e1}, i.e. $0 < u_{\infty} < \frac{M}{\beta|\Omega|}$ and $0<v_{\infty}< \frac{M}{\alpha|\Gamma|}$.
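For the reader's convenience, this monotonicity argument can be made explicit: writing the first condition in \eqref{f1_1} as $\Phi(u_{\infty}) = 0$ with
\begin{equation*}
\Phi(s) := s^{\alpha} - \Bigl(\frac{1}{\alpha|\Gamma|}(M-\beta|\Omega|s)\Bigr)^{\beta},\qquad s\in\Bigl(0,\frac{M}{\beta|\Omega|}\Bigr),
\end{equation*}
we observe that $s\mapsto s^{\alpha}$ is strictly increasing while the subtracted term is strictly decreasing in $s$, so that $\Phi$ is strictly increasing. Since $\Phi(s)\to -\bigl(\frac{M}{\alpha|\Gamma|}\bigr)^{\beta}<0$ as $s\to 0$ and $\Phi(s)\to\bigl(\frac{M}{\beta|\Omega|}\bigr)^{\alpha}>0$ as $s\to\frac{M}{\beta|\Omega|}$, the root $u_{\infty}$ exists and is unique, and $v_{\infty}$ is then uniquely determined by \eqref{f1_0}.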
\medskip
As mentioned in the introduction, we prove the convergence to equilibrium by means of the entropy method. The method is based on the logarithmic entropy (free energy) functional
\begin{equation}
E(u, v) = \int_{\Omega}u(\log u - 1)dx + \int_{\Gamma}v(\log v - 1)dS
\label{f2}
\end{equation}
and its non-negative entropy dissipation
\begin{equation}
\begin{aligned}
D(u, v) &= -\frac{d}{dt}E(u,v)\\
&= \delta_u\int_{\Omega}\frac{|\nabla u|^2}{u}dx+ \delta_v\int_{\Gamma}\frac{|\nabla_{\Gamma}v|^2}{v}dS + \int_{\Gamma}(v^{\beta} - u^{\alpha})\log\frac{v^{\beta}}{u^{\alpha}}dS.
\end{aligned}
\label{f3}
\end{equation}
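Formally, the dissipation identity \eqref{f3} follows by differentiating the entropy \eqref{f2} along sufficiently smooth, positive solutions of \eqref{a2_2} with normalised rates $k_u = k_v = 1$:
\begin{align*}
\frac{d}{dt}E(u,v) &= \int_{\Omega}u_t\log u\,dx + \int_{\Gamma}v_t\log v\,dS\\
&= -\delta_u\int_{\Omega}\frac{|\nabla u|^2}{u}dx + \int_{\Gamma}\delta_u\frac{\partial u}{\partial\nu}\log u\,dS
- \delta_v\int_{\Gamma}\frac{|\nabla_{\Gamma}v|^2}{v}dS + \beta\int_{\Gamma}(u^{\alpha}-v^{\beta})\log v\,dS\\
&= -\delta_u\int_{\Omega}\frac{|\nabla u|^2}{u}dx - \delta_v\int_{\Gamma}\frac{|\nabla_{\Gamma}v|^2}{v}dS
- \int_{\Gamma}(u^{\alpha}-v^{\beta})\log\frac{u^{\alpha}}{v^{\beta}}dS = -D(u,v),
\end{align*}
where we integrated by parts and used $u_t = \delta_u\Delta u$ in $\Omega$, $v_t = \delta_v\Delta_{\Gamma}v + \beta(u^{\alpha}-v^{\beta})$ and $\delta_u\frac{\partial u}{\partial\nu} = -\alpha(u^{\alpha}-v^{\beta})$ on $\Gamma$.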
Our goal is to show that there exists a constant $C_0>0$ such that (see Lemma \ref{lem:E-EDEstimate} below)
\begin{equation*}
D(u,v) \geq C_0\left(E(u,v) - E(u_{\infty}, v_{\infty})\right)
\end{equation*}
for all non-negative $(u,v)$, which satisfy the mass conservation law \eqref{cons}.
Compared to previous related results on the entropy method for reaction-diffusion systems with quadratic nonlinearities (see \cite{DeFe06, DeFe07, DeFe08}), there are two main difficulties to overcome: the first is the treatment of the surface concentration $v$ and the associated boundary integrals and the second is the general nonlinear term $(v^{\beta} - u^{\alpha})\log\frac{v^{\beta}}{u^{\alpha}}$ for any $\alpha,\beta\ge1$. It is in particular the general nonlinearities, which necessitates a new proof compared to the quadratic nonlinearities considered in \cite{DeFe06, DeFe07, DeFe08}. We expect this new proof to constitute a more general approach. For a recent alternative approach for establishing entropy entropy-dissipation estimates based on a convexification argument we refer to \cite{MHM}.
\medskip
In the sequel, we will frequently use the following notations and inequalities:
\begin{description}
\item[Spatial averages and square-root abbreviation]
\begin{align*}
&\overline{u} = \frac{1}{|\Omega|}\int_{\Omega}u\,dx, \qquad U = \sqrt{u},\qquad U_{\infty} = \sqrt{u_{\infty}},
\qquad \overline{U} = \frac{1}{|\Omega|}\int_{\Omega}U\,dx,\\
&\overline{v} = \frac{1}{|\Gamma|}\int_{\Gamma}v\,dS,\qquad\, V = \sqrt{v}, \qquad\, V_{\infty} = \sqrt{v_{\infty}},\qquad
\overline{V} = \frac{1}{|\Gamma|}\int_{\Gamma}V\,dS.
\end{align*}
\smallskip
\item[Norms] $\|\cdot\|_{\Omega}$ and $\|\cdot\|_{\Gamma}$ are the norms in $L^2(\Omega)$ and $L^{2}(\Gamma)$, respectively. For a Banach space $X$, we denote by $\|\cdot\|_{X}$ its norm.
\smallskip
\item[Constants] A generic constant will be denoted by $C(M,\Omega,\dots)$ and may depend besides the arguments $M,\Omega,\dots$ also on $\alpha$ and $\beta$
without explicitly stating the dependence on $\alpha$ and $\beta$.
Moreover, the constants $C_i(\dots)$ and $K_i(\dots)$ for $i=0,1,2,\dots$ are specific constants, for which the same rules of dependency hold.
\smallskip
\item[Inequalities] \hfill
\begin{itemize}
\item Poincar\'e's inequality in $\Omega$
\begin{equation*}
P(\Omega)\int_{\Omega}|\nabla u|^2dx \geq \int_{\Omega}|u - \overline{u}|^2dx,
\end{equation*}
\item Poincar\'e's inequality on $\Gamma$
\begin{equation*}
P(\Gamma)\int_{\Gamma}|\nabla_{\Gamma} v|^2dS \geq \int_{\Gamma}|v - \overline{v}|^2dS,
\end{equation*}
\item Trace Theorem
\begin{equation}\label{Trace}
T(\Omega)\int_{\Omega}|\nabla u|^2dx \geq \int_{\Gamma}|u - \overline{u}|^2dS.
\end{equation}
\end{itemize}
\end{description}
\medskip
The mass conservation \eqref{cons} allows us to rewrite the relative entropy with respect to the equilibrium
as
\begin{multline}
E(u, v) - E(u_{\infty}, v_{\infty})
= \int_{\Omega}u\log \frac{u}{\overline{u}}dx + \int_{\Gamma}v\log\frac{v}{\overline{v}}dS \\
+\int_{\Omega}\Bigl(\overline{u}\log\frac{\overline{u}}{u_{\infty}} - (\overline{u} - u_{\infty})\Bigr)dx +
\int_{\Gamma}\Bigl(\overline{v}\log\frac{\overline{v}}{v_{\infty}} - (\overline{v} - v_{\infty})\Bigr)dS\\
=\ I_1 + I_2,\label{f4}
\end{multline}
where we define
\begin{equation*}
I_1 := \int_{\Omega}u\log \frac{u}{\overline{u}}\,dx + \int_{\Gamma}v\log\frac{v}{\overline{v}}\,dS,
\end{equation*}
and
\begin{equation*}
I_2 := \int_{\Omega}\Bigl(\overline{u}\log\frac{\overline{u}}{u_{\infty}} - (\overline{u} - u_{\infty})\Bigr)dx +
\int_{\Gamma}\Bigl(\overline{v}\log\frac{\overline{v}}{v_{\infty}} - (\overline{v} - v_{\infty})\Bigr)dS.
\end{equation*}
\medskip
The following lemma proves, similarly to \cite{DeFe06}, a Csisz\'ar-Kullback-Pinsker type inequality, which quantifies that the relative entropy to equilibrium controls an $L^1$-distance:
\begin{lemma}\label{lem:CP-Inequality}
For all measurable functions $u: \Omega\rightarrow \mathbb R_{+}$ and $v:\Gamma \rightarrow \mathbb R_{+}$ satisfying
\begin{equation*}
M=\beta\int_{\Omega}u\,dx + \alpha\int_{\Gamma}v\,dS >0,
\end{equation*}
we have
\begin{equation}\label{kk0}
E(u, v) - E(u_{\infty}, v_{\infty}) \geq C_{\text{CKP}}\left(\|u-u_{\infty}\|_{L^1(\Omega)}^2 + \|v-v_{\infty}\|_{L^1(\Gamma)}^2\right),
\end{equation}
where $C_{\text{CKP}}>0$ is the following (non-optimal) constant depending only on the mass $M>0$ and $\alpha,\beta\ge1$:
\begin{equation*}
C_{\text{CKP}} = \frac{\min\left\{\alpha, \beta \right\}}{8M}.
\end{equation*}
\end{lemma}
\begin{proof}
By \eqref{f4}, we have that
\begin{equation*}
E(u,v) - E(u_{\infty},v_{\infty}) = I_1 + I_2.
\end{equation*}
Considering the term $I_1$ first, we use the classical Csisz\'ar-Kullback-Pinsker inequality (see e.g. \cite{Csi}) and the mass constraints
$\overline{u}\le \frac{M}{\beta|\Omega|}$ and $\overline{v}\le \frac{M}{\alpha|\Gamma|}$ to estimate
\begin{equation*}
\int_{\Omega}u\log\frac{u}{\overline{u}}dx \geq \frac{1}{2|\Omega|\overline{u}}\|u - \overline{u}\|_{L^1(\Omega)}^2 \geq \frac{\beta}{2M}\|u - \overline{u}\|_{L^1(\Omega)}^2,
\end{equation*}
and
\begin{equation*}
\int_{\Gamma}v\log\frac{v}{\overline{v}}dS \geq \frac{1}{2|\Gamma|\overline{v}}\|v - \overline{v}\|_{L^1(\Gamma)}^2 \geq \frac{\alpha}{2M}\|v - \overline{v}\|_{L^1(\Gamma)}^2,
\end{equation*}
and, thus
\begin{equation}
I_1 \geq \frac{\beta}{2M}\|u - \overline{u}\|_{L^1(\Omega)}^2 + \frac{\alpha}{2M}\|v - \overline{v}\|_{L^1(\Gamma)}^2.
\label{kk1}
\end{equation}
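For the reader's convenience, we recall the form of the classical Csisz\'ar-Kullback-Pinsker inequality used in the two estimates above: for a non-negative $u\in L^1(\Omega)$ with $\|u\|_{L^1(\Omega)} = |\Omega|\,\overline{u}$,

```latex
\int_{\Omega} u\log\frac{u}{\overline{u}}\,dx
\;\geq\; \frac{1}{2\,\|u\|_{L^1(\Omega)}}\,\|u-\overline{u}\|_{L^1(\Omega)}^2
\;=\; \frac{1}{2\,|\Omega|\,\overline{u}}\,\|u-\overline{u}\|_{L^1(\Omega)}^2,
```

and the mass constraint $|\Omega|\overline{u}\le \frac{M}{\beta}$ then yields the prefactor $\frac{\beta}{2M}$; the estimate on $\Gamma$ follows identically.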
Next, we rewrite $I_2$ in \eqref{f4} by introducing $q(x) = x\log x - x$, i.e.
\begin{equation*}
I_2 = |\Omega|(q(\overline{u}) - q(u_{\infty})) + |\Gamma|(q(\overline{v}) - q(v_{\infty})),
\end{equation*}
where we have used that the mass conservation law \eqref{cons} implies
$$
\int_{\Omega} (\overline{u}-u_{\infty}) \log u_{\infty}\,dx + \int_{\Gamma}
(\overline{v}-v_{\infty}) \log v_{\infty}\,dS=0
$$
since $ \frac{\log u_{\infty}}{\beta} = \frac{\log u_{\infty}^{\alpha}}{\alpha\beta} = \frac{\log v_{\infty}^{\beta}}{\alpha\beta} = \frac{\log v_{\infty}}{\alpha}$. Then, using again the conservation law \eqref{cons}, we denote \begin{equation*}
Q(\overline{u}) = |\Omega|q(\overline{u}) + |\Gamma|\underbrace{q\biggl(\frac{M - \beta|\Omega|\overline{u}}{\alpha|\Gamma|}\biggr)}_{=q(\overline{v})}\quad\text{ and }\quad R(\overline{v}) = |\Gamma|q(\overline{v}) + |\Omega|\underbrace{q\biggl(\frac{M-\alpha|\Gamma|\overline{v}}{\beta|\Omega|}\biggr)}_{=q(\overline{u})}.
\end{equation*}
Thus, we have the following two equivalent ways of writing $I_2$:
\begin{align}
I_2 &= Q(\overline{u}) - Q(u_{\infty}) = R(\overline{v}) - R(v_{\infty}).
\label{kk2}
\end{align}
Moreover, direct computations give
\begin{equation*}
Q'(u_{\infty}) = |\Omega|q'(u_{\infty}) - \frac{\beta}{\alpha}|\Omega|q'\biggl(\frac{M-\beta|\Omega|u_{\infty}}{\alpha|\Gamma|}\biggr)
= |\Omega|\log u_{\infty} - \frac{\beta}{\alpha}|\Omega|\log v_{\infty}= 0
\end{equation*}
since $u_{\infty}^{\alpha} = v_{\infty}^{\beta}$. Moreover, for any
$\overline{u}_\theta$ satisfying the mass constraint
$0\le \overline{u}_\theta\le \frac{M}{\beta|\Omega|}$, we estimate
\begin{align*}
Q''(\overline{u}_\theta) &= |\Omega|q''(\overline{u}_\theta) + \frac{\beta^2}{\alpha^2}\frac{|\Omega|^2}{|\Gamma|}q''\biggl(\frac{M-\beta|\Omega|\overline{u}_\theta}{\alpha|\Gamma|}\biggr)
= |\Omega|\frac{1}{\overline{u}_\theta} + \frac{\beta^2}{\alpha^2}\frac{|\Omega|^2}{|\Gamma|}\frac{\alpha|\Gamma|}{M-\beta|\Omega|\overline{u}_\theta}\\
&\geq \frac{\beta |\Omega|^2}{M}+\frac{\beta^2}{\alpha}\frac{|\Omega|^2}{M}=\frac{\beta}{\alpha}\frac{|\Omega|^2}{M}(\alpha + \beta).
\end{align*}
In a similar way, for any $0\le\overline{v}_\theta\le \frac{M}{\alpha|\Gamma|}$, we estimate
\begin{equation*}
R'(v_{\infty}) = 0 \qquad\text{ and }\qquad R''(\overline{v}_\theta) \geq \frac{\alpha}{\beta}\frac{|\Gamma|^2}{M}(\alpha+\beta).
\end{equation*}
Thus, altogether, Taylor expansion in \eqref{kk2} with $\overline{u}_\theta=\theta\overline{u}+(1-\theta)u_{\infty}$ and $\overline{v}_\theta=\theta\overline{v}+(1-\theta)v_{\infty}$ for some $\theta\in(0,1)$ yields
\begin{multline}
I_2= \frac{1}{2}(Q(\overline{u}) - Q(u_{\infty})) + \frac{1}{2}(R(\overline{v}) - R(v_{\infty})) \\
\geq \frac{1}{4}\frac{\beta}{\alpha}\frac{|\Omega|^2}{M}(\alpha + \beta)(\overline{u} - u_{\infty})^2 + \frac{1}{4}\frac{\alpha}{\beta}\frac{|\Gamma|^2}{M}(\alpha+\beta)(\overline{v} - v_{\infty})^2\\
= \frac{1}{4}\frac{\alpha +\beta}{M}\left(\frac{\beta}{\alpha}\|\overline{u} - u_{\infty}\|_{L^1(\Omega)}^2 + \frac{\alpha}{\beta}\|\overline{v} - v_{\infty}\|_{L^1(\Gamma)}^2\right).
\label{kk3}
\end{multline}
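To spell out the Taylor step: since $Q'(u_{\infty})=0$, the second-order expansion around the equilibrium gives, for the intermediate value $\overline{u}_\theta$ above,

```latex
Q(\overline{u}) - Q(u_{\infty})
= \tfrac{1}{2}\,Q''(\overline{u}_\theta)\,(\overline{u}-u_{\infty})^2
\;\geq\; \tfrac{1}{2}\,\frac{\beta}{\alpha}\frac{|\Omega|^2}{M}(\alpha+\beta)\,(\overline{u}-u_{\infty})^2,
```

and since $\overline{u}-u_{\infty}$ is a constant, $|\Omega|^2(\overline{u}-u_{\infty})^2 = \|\overline{u}-u_{\infty}\|_{L^1(\Omega)}^2$; the terms in $\overline{v}$ are treated in the same way via $R$.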
Combining \eqref{kk1} and \eqref{kk3} with the estimate $\|u-\overline{u}\|_{L^1(\Omega)}^2 + \|\overline{u} - u_{\infty}\|_{L^1(\Omega)}^2 \ge \frac{1}{2}\|u - u_{\infty}\|_{L^1(\Omega)}^2$, which follows from the triangle inequality and the convexity of the square (and analogously on $\Gamma$), we get
\begin{equation*}
I_1+I_2 \ge \frac{\beta}{8M}\|u - u_{\infty}\|_{L^1(\Omega)}^2 + \frac{\alpha}{8M} \|v - v_{\infty}\|_{L^1(\Gamma)}^2,
\end{equation*}
which is \eqref{kk0} with
$C_{\text{CKP}} = \frac{\min\left\{\alpha, \beta \right\}}{8M}$.
\end{proof}
\medskip
We now state the main result of this section, which is the exponential convergence to equilibrium with explicit rates and constants obtained via the entropy method. The proof relies on an entropy entropy-dissipation estimate, which is proven in Lemma \ref{lem:E-EDEstimate} below.
\begin{theorem}[Explicit Exponential Convergence to Equilibrium]\label{theo:Convergence}\hfill\\
Assume that $\Omega\subset \mathbb R^n$ is a bounded domain with smooth boundary $\Gamma = \partial\Omega$ (e.g. $\partial\Omega\in C^{2+\epsilon}$ for some $\epsilon >0$). Then, the unique weak solution $(u,v)$ of system \eqref{e1} subject to non-negative initial data $(u_0, v_{0})\in L^{\infty}(\Omega)\times L^{\infty}(\Gamma)$ with positive initial mass $M>0$ satisfies the following exponential convergence to equilibrium
\begin{equation}\label{kk4_0}
\|u(t) - u_{\infty}\|_{L^{1}(\Omega)}^{2} + \|v(t) - v_{\infty}\|_{L^1(\Gamma)}^{2} \leq C_{\text{CKP}}^{-1}\,e^{-C_0t}\left(E(u_0, v_0) - E(u_{\infty}, v_{\infty})\right),
\end{equation}
where $C_0$ and $C_{\text{CKP}}$ are the positive constants defined in Lemma \ref{lem:E-EDEstimate} below and Lemma \ref{lem:CP-Inequality} above, which depend only on the stoichiometric coefficients $\alpha, \beta\ge1$, the positive diffusion rates $\delta_u,\delta_v>0$, the domain $\Omega$, the boundary $\Gamma$ and the positive initial mass $M >0$.
\end{theorem}
\begin{proof}
On the one hand, we have
\begin{equation}\label{kk4}
\frac{d}{dt}\left(E(u,v) - E(u_{\infty}, v_{\infty})\right) = \frac{d}{dt}E(u,v) = -D(u,v).
\end{equation}
On the other hand, by Lemma \ref{lem:E-EDEstimate}, there exists $C_0>0$ such that
\begin{equation}\label{kk5}
D(u,v) \geq C_0\left(E(u,v) - E(u_{\infty}, v_{\infty})\right).
\end{equation}
Then, from \eqref{kk4}, \eqref{kk5} and the classical Gronwall inequality, we obtain
\begin{equation}
E(u(t), v(t)) - E(u_{\infty}, v_{\infty}) \leq e^{-C_0t}\left(E(u_0, v_0) - E(u_{\infty}, v_{\infty})\right).
\label{kk6}
\end{equation}
Finally, the estimate \eqref{kk4_0} follows directly from \eqref{kk6} and Lemma \ref{lem:CP-Inequality}.
\end{proof}
\begin{remark}\label{remark:SystemInDomains}
The same techniques can be used to obtain explicit exponential convergence to equilibrium for systems of the form:
\begin{equation*}
\begin{cases}
u_t - d_u\Delta u = -\alpha(u^{\alpha} - v^{\beta}), &t>0,x\in\Omega,\\
v_t - d_v\Delta v = \beta (u^{\alpha} - v^{\beta}), &t>0,x\in\Omega,\\
\partial u/\partial \nu = \partial v/\partial \nu = 0, &t>0,x\in\partial\Omega,\\
u(0,x) = u_0(x), v(0,x) = v_0(x), &x\in\Omega,
\end{cases}
\end{equation*}
subject to non-negative initial data $u_0, v_0\in L^{\infty}_+(\Omega)$ and for all stoichiometric coefficients $\alpha, \beta \geq 1$ and positive diffusion coefficients $d_u, d_v$. By using Poincare's inequality $P(\Omega)\|\nabla v\|_{\Omega}^2 \geq \|v - \overline{v}\|_{\Omega}^2$ instead of the Trace inequality $T(\Omega)\|\nabla v\|_{\Omega}^2 \geq \|v - \overline{v}\|_{\Gamma}^2$, all the following arguments can be directly reproduced in the same way. Thus, the result of this paper, in a certain sense, completely solves the problem of trend to equilibrium for concentrations of the reversible chemical reaction of two species $\mathcal{U}$ and $\mathcal{V}$:
\begin{figure}[htp]
\begin{center}
\begin{tikzpicture}
\node (a) {$\alpha \, \mathcal{U}$}; \node (b) at (2,0) {$\beta\, \mathcal{V}$.};
\draw[arrows=<-] ([yshift=0.7mm]a.east) -- ([yshift=0.7mm]b.west);
\draw[arrows=<-] ([yshift=-0.7mm]b.west) -- ([yshift=-0.7mm]a.east);
\end{tikzpicture}
\end{center}
\end{figure}
\end{remark}
We shall now prove the key entropy entropy-dissipation estimate.
\begin{lemma}[Entropy Entropy-Dissipation Estimate]\label{lem:E-EDEstimate}\hfill\\
For all measurable, non-negative functions $u: \Omega \rightarrow \mathbb R_{+}$ with trace $u|_{\Gamma}\in L^2(\Gamma)$ and $v: \Gamma \rightarrow \mathbb R_{+}$, which satisfy the mass conservation law
\begin{equation}\label{EDDcons}
\beta\int_{\Omega}u\,dx + \alpha\int_{\Gamma}v\,dS = M,
\end{equation}
there exists a constant $C_0>0$ such that
\begin{equation*}
D(u,v) \geq C_0\left(E(u,v) - E(u_{\infty}, v_{\infty})\right),
\end{equation*}
where $C_0$ depends only on $M$, $|\Omega|$, $P(\Omega)$, $T(\Omega)$,
$|\Gamma|$, $P(\Gamma)$ as well as $\alpha$ and $\beta$.
\end{lemma}
\noindent{\bf Proof of Lemma \ref{lem:E-EDEstimate}.} \hfill\\
We divide the proof into two cases: $\delta_v>0$ in Section 3.1 and $\delta_v = 0$ in Section 3.2.
In the first case, we do not require any additional a-priori estimates on the solution besides well-defined entropy and entropy-dissipation functionals in order to obtain the entropy entropy-dissipation estimate.
In the second case, since the diffusion term in $v$ is missing, we shall require a-priori $L^{\infty}$-bounds on the solution. However, we strongly believe that one might be able to avoid the use of $L^{\infty}$-bounds in some cases of the exponents $\alpha$ and $\beta$.
\subsection{The non-degenerate case: $\delta_{v} > 0$}\hfill \\
We will show in the sequel that both $I_1$ and $I_2$ as defined in \eqref{f4} are bounded by the entropy dissipation. First, by using the logarithmic Sobolev inequalities
\begin{equation*}
C_L(\Omega)\int_{\Omega}\frac{|\nabla u|^2}{u}dx \geq \int_{\Omega}u\log \frac{u}{\overline{u}}\,dx, \quad\text{ and }\quad C_L(\Gamma)\int_{\Gamma}\frac{|\nabla_{\Gamma}v|^2}{v}dS \geq \int_{\Gamma}v\log\frac{v}{\overline{v}}\,dS,
\end{equation*}
we immediately get the following
\begin{lemma}\label{lem:bound_I1}
For all $t\geq 0$, we have
\begin{equation}
I_1 \leq C_2 \frac{D(u,v)}{2},
\label{f5}
\end{equation}
where
\begin{equation*}
C_2 = 2\max\left\{\frac{C_{L}(\Omega)}{\delta_u},\frac{C_{L}(\Gamma)}{\delta_v}\right\}.
\end{equation*}
\end{lemma}
\begin{remark}
The factor $2$ in the constant $C_2$ is chosen so that $\frac{1}{2} D(u,v)$ is still left to estimate the term $I_2$, which is done in the following Lemma \ref{bound_I2}.
\end{remark}
\begin{lemma}\label{bound_I2}
There exists $C_3>0$ such that, for all $t\geq 0$,
\begin{equation}
I_2 \leq C_3 \frac{D(u,v)}{2}.
\label{f6}
\end{equation}
\end{lemma}
\begin{proof}
In a preliminary step, we observe that the function $\Phi$ defined by
\begin{equation}\label{f5_0}
\Phi(x,y) = \frac{x\log \frac{x}{y} - (x - y)}{(\sqrt{x}-\sqrt{y})^2}=\Phi\Bigl(\frac{x}{y},1\Bigr)
\end{equation}
extends continuously to $(0,\infty)^2$. Moreover, for all $y\in(0,\infty)$, the function $\Phi(\cdot,y)$ is strictly increasing on $(0,\infty)$ and satisfies $\lim\limits_{x\rightarrow 0}\Phi(x,y) = 1$ and $\Phi(y,y) = 2$ (understood as a limit), see \cite{DeFe06}.
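These properties enter below in the following way: for constants $0<x\le X$ and $y>0$, the definition \eqref{f5_0} and the monotonicity of $\Phi(\cdot,y)$ give

```latex
x\log\frac{x}{y} - (x-y)
\;=\; \Phi(x,y)\,\bigl(\sqrt{x}-\sqrt{y}\bigr)^2
\;\leq\; \Phi(X,y)\,\bigl(\sqrt{x}-\sqrt{y}\bigr)^2,
```

which will be applied with $x=\overline{u}$, $X=\frac{M}{\beta|\Omega|}$, $y=u_{\infty}$, and analogously on $\Gamma$.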
\medskip
In a first step, we now use the mass conservation $\beta|\Omega|\overline{u} + \alpha|\Gamma|\overline{v} = M$
to obtain the following bounds for $I_2$:
\begin{equation}\label{f7}
\int_{\Omega}\Bigl(\overline{u}\log\frac{\overline{u}}{u_\infty} - (\overline{u}-u_{\infty})\Bigr)\,dx \leq |\Omega|\,\Phi\biggl(\frac{M}{\beta|\Omega|}, u_{\infty}\biggr)\left(\sqrt{\overline{u}}-\sqrt{u_{\infty}}\right)^2
\end{equation}
and
\begin{equation}
\int_{\Gamma}\Bigl(\overline{v}\log\frac{\overline{v}}{v_\infty} - (\overline{v}-v_{\infty})\Bigr)\,dS \leq |\Gamma|\,\Phi\biggl(\frac{M}{\alpha|\Gamma|}, v_{\infty}\biggr)\left(\sqrt{\overline{v}} - \sqrt{v_{\infty}}\right)^2.
\label{f8}
\end{equation}
Therefore, we have from \eqref{f7} and \eqref{f8} that
\begin{equation}
I_2 \leq K_0\left[\left(\sqrt{\overline{v}} - \sqrt{v_{\infty}}\right)^2+\left(\sqrt{\overline{u}}-\sqrt{u_{\infty}}\right)^2\right],
\label{f8_1}
\end{equation}
where
$$
K_0:=\max\left\{ |\Omega|\,\Phi\biggl(\frac{M}{\beta|\Omega|},u_{\infty}\biggr), |\Gamma|\,\Phi\biggl(\frac{M}{\alpha|\Gamma|}, v_{\infty}\biggr)\right\}.
$$
\smallskip
Next, considering the entropy dissipation $D(u,v)$, we observe first that
\begin{equation}
\delta_{u}\int_{\Omega}\frac{|\nabla u|^2}{u}dx = 4\delta_u\int_{\Omega}|\nabla \sqrt{u}|^2dx = 4\delta_u\|\nabla U\|_{\Omega}^2,
\label{f9}
\end{equation}
and
\begin{equation}
\delta_{v}\int_{\Gamma}\frac{|\nabla_{\Gamma}v|^2}{v}dS = 4\delta_v\int_{\Gamma}|\nabla_{\Gamma}\sqrt{v}|^2dS = 4\delta_v\|\nabla_\Gamma V\|_{\Gamma}^2 \geq 4\delta_v\,P^{-1}(\Gamma)\|V - \overline{V}\|_{\Gamma}^2.
\label{f10}
\end{equation}
Moreover, the elementary inequality $(a-b)\log\frac{a}{b} \geq 4(\sqrt{a}-\sqrt{b})^2$ yields
\begin{equation}
\int_{\Gamma}(v^{\beta} - u^{\alpha})\log\frac{v^{\beta}}{u^{\alpha}}dS \geq 4\|V^{\beta} - U^{\alpha}\|_{\Gamma}^2.
\label{f11}
\end{equation}
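The elementary inequality $(a-b)\log\frac{a}{b}\geq 4(\sqrt{a}-\sqrt{b})^2$ can be checked by substituting $a=x^2$, $b=y^2$ with $x,y>0$ and using $\log\frac{x}{y}\geq \frac{2(x-y)}{x+y}$ (both sides of the latter change sign together at $x=y$, so the product inequality below holds in both cases):

```latex
(a-b)\log\frac{a}{b}
= 2\,(x-y)(x+y)\log\frac{x}{y}
\;\geq\; 2\,(x-y)(x+y)\,\frac{2(x-y)}{x+y}
= 4\,(x-y)^2
= 4\,\bigl(\sqrt{a}-\sqrt{b}\bigr)^2.
```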
Hence,
\begin{equation}
\frac{D(u,v)}{2}\geq 2\delta_u\|\nabla U\|_{\Omega}^2 + 2\delta_vP^{-1}(\Gamma)\|V - \overline{V}\|_{\Gamma}^2 + 2\|V^\beta - U^\alpha\|_{\Gamma}^2.
\label{f11_1}
\end{equation}
Combining \eqref{f8_1} and \eqref{f11_1}, we see that in order to prove \eqref{f6} it is sufficient to find positive constants $K_1\le2$ and $K_2$ such that
\begin{multline}
2\delta_u\|\nabla U\|_{\Omega}^2+2\delta_vP^{-1}(\Gamma)\|V-\overline{V}\|_{\Gamma}^2 + K_1\|V^{\beta} - U^{\alpha}\|_{\Gamma}^2 \\\geq K_2 K_0\left[\Bigl(\sqrt{\overline{U^2}} - U_{\infty}\Bigr)^2 + \Bigl(\sqrt{\overline{V^{2}}} - V_{\infty}\Bigr)^2\right],
\label{f12}
\end{multline}
where we denote $\overline{U^2}=\frac{1}{|\Omega|}\int_{\Omega} U^2\,dx$ and $\overline{V^2}=\frac{1}{|\Gamma|}\int_{\Gamma} V^2\,dS$.
\medskip
In the following, we divide the proof of the key estimate \eqref{f12} into several steps.
As a preliminary remark, we recall
that the estimate \eqref{f12} can only hold because of the constraint imposed by the conservation law \eqref{EDDcons} on $U$ and $V$, i.e.
\begin{equation}\label{conssqrt}
\beta |\Omega| \overline{U^2} + \alpha |\Gamma| \overline{V^2} =M,
\end{equation}
since without \eqref{conssqrt}, the left hand side of \eqref{f12} vanishes for all constant states $U$, $V$ satisfying $V^{\beta}=U^{\alpha}$, while the right hand side of \eqref{f12} vanishes only at the equilibrium $U_{\infty}$, $V_{\infty}$.
Thus, the following steps are designed as a chain of estimates, which allows for the conservation law \eqref{EDDcons} rewritten as \eqref{conssqrt} to enter into the proof of estimate \eqref{f12}.
\medskip
\noindent\underline{\it Step 1:} The goal of this step is to show that there exists a constant $K_3>0$ such that
\begin{equation}
\|V^{\beta} - U^{\alpha}\|_{\Gamma}^2 \geq \frac12\|\overline{V}^{\beta}-\overline{U}^{\alpha}\|_{\Gamma}^2 - K_3(\|U - \overline{U}\|_{\Gamma}^2 + \|V-\overline{V}\|_{\Gamma}^2),
\label{f14}
\end{equation}
which establishes a lower bound of the \emph{reaction entropy dissipation term} in terms of a \emph{reaction entropy dissipation term for the space averaged concentrations} $\overline{U}$ and $\overline{V}$ at the cost of
two terms, which can ultimately be controlled by the \emph{diffusion entropy dissipation}.
At first, we remark that the averaged concentrations $\overline{U}$ and $\overline{V}$
are bounded by Jensen's inequality and the conservation law \eqref{conssqrt}
\begin{align}\label{boundU}
\overline{U}^2 \le \overline{U^2} \le \frac{M}{\beta|\Omega|}\le\max\left\{1,\frac{M}{\beta|\Omega|}\right\}=:M_{\Omega},\\
\overline{V}^2 \le \overline{V^2} \le \frac{M}{\alpha|\Gamma|}\le\max\left\{1,\frac{M}{\alpha|\Gamma|}\right\}=:M_{\Gamma}.\label{boundV}
\end{align}
Next, we consider the following deviations around the spatially averaged concentrations:
\begin{equation*}
\delta_1(x) := U -\overline{U}, \quad \forall x\in\Omega,
\end{equation*}
and
\begin{equation*}
\delta_2(x) := V -\overline{V}, \quad \forall x\in\Gamma
\end{equation*}
and divide the boundary $\Gamma$ into two disjoint sets:
\begin{equation*}
\Gamma = S \cup S^{\perp},
\end{equation*}
where
\begin{equation*}
S:= \{x\in \Gamma: \ -\overline{U}\leq \delta_1(x) \leq \sqrt{M_{\Omega}},\ -\overline{V} \leq \delta_2(x) \leq \sqrt{M_{\Gamma}}\}.
\end{equation*}
Note that $\delta_1\in L^2(\Gamma)$ is well-defined by \eqref{f9} and the Trace Theorem \eqref{Trace}.
Due to the boundedness of $\delta_1$ and $\delta_2$ in $S$, we estimate readily
using Taylor expansion and Young's inequality
\begin{multline}
\|V^{\beta} - U^{\alpha}\|_{L^2(S)}^2= \|(\overline{V}+\delta_2)^{\beta} - (\overline{U}+\delta_1)^{\alpha}\|_{L^2(S)}^2\\
\geq \frac12\|\overline{V}^{\beta} - \overline{U}^{\alpha}\|_{L^2(S)}^2
- \|\beta(\overline{V}+\theta_2)^{\beta-1}\delta_2 - \alpha(\overline{U}+\theta_1)^{\alpha-1}\delta_1\|_{L^2(S)}^2\\
\geq \frac12\|\overline{V}^{\beta} - \overline{U}^{\alpha}\|_{L^2(S)}^2
- C_3\bigl(M_{\Omega}^{\alpha-1},M_{\Gamma}^{\beta-1}\bigr)\left(\|\delta_1\|_{\Gamma}^2 +\|\delta_2\|_{\Gamma}^2\right),
\label{b1}
\end{multline}
where we have used that $\theta_1(x)\le\delta_1(x)\le\sqrt{M_{\Omega}}$ and $\theta_2(x)\le\delta_2(x)\le\sqrt{M_{\Gamma}}$ on $S$.
This proves \eqref{f14} on the set $S$.
It remains to consider the set
\begin{equation*}
S^{\perp} = \{x\in \Gamma: \ \delta_1(x) > \sqrt{M_{\Omega}} \quad \text{or} \quad \delta_2(x)>\sqrt{M_{\Gamma}}\}.
\end{equation*}
By using Chebyshev's inequality and by observing that for $\delta_1 > \sqrt{M_{\Omega}} \geq \overline{U}$, the set $\{x\in \Gamma:\delta_1^2 > M_{\Omega}\}$ coincides with the set $\{x\in \Gamma:\delta_1 > \sqrt{M_{\Omega}}\}$, and analogously for $\delta_2 > \sqrt{M_{\Gamma}} \geq \overline{V}$, we get
\begin{equation*}
|\{x\in\Gamma : \delta_1 > \sqrt{M_{\Omega}}\}|=|\{x\in\Gamma : \delta_1^2\geq M_{\Omega}\}| \leq \frac{\|\delta_1\|_{\Gamma}^2}{M_{\Omega}},\end{equation*}
and
\begin{equation*}
|\{x\in\Gamma : \delta_2 > \sqrt{M_{\Gamma}}\}|=|\{x\in\Gamma : \delta_2^2\geq M_{\Gamma}\}| \leq \frac{\|\delta_2\|_{\Gamma}^2}{M_{\Gamma}}.
\end{equation*}
Thus, it follows that
$$
|S^{\perp}| \leq \frac{\|\delta_1\|_{\Gamma}^2}{M_{\Omega}} +
\frac{\|\delta_2\|_{\Gamma}^2}{M_{\Gamma}}.
$$
By the bounds \eqref{boundU} and \eqref{boundV}, we have moreover that $|\overline{V}^{\beta} - \overline{U}^{\alpha}|\leq C(M_{\Omega}^{\frac{\alpha}{2}},M_{\Gamma}^{\frac{\beta}{2}})$. Hence, since $M_{\Omega}\ge1$ and
$M_{\Gamma}\ge1$,
\begin{equation*}
\|\overline{V}^{\beta} - \overline{U}^{\alpha}\|_{L^2(S^{\perp})}^2 \leq C(M_{\Omega}^{{\alpha}},M_{\Gamma}^{{\beta}})|S^{\perp}| \leq C(M_{\Omega}^{{\alpha}},M_{\Gamma}^{{\beta}})
\left(\|\delta_1\|_{\Gamma}^2 +
\|\delta_2\|_{\Gamma}^2\right),
\end{equation*}
and thus
\begin{equation}
\|V^{\beta} - U^{\alpha}\|_{L^2(S^{\perp})}^2 \geq 0\geq \frac12\|\overline{V}^{\beta} - \overline{U}^{\alpha}\|_{L^2(S^{\perp})}^2 - C_4(M_{\Omega}^{{\alpha}},M_{\Gamma}^{{\beta}})(\|\delta_1\|_{\Gamma}^2 + \|\delta_2\|_{\Gamma}^2).
\label{b2}
\end{equation}
Finally, the estimate \eqref{f14} is obtained from \eqref{b1} and \eqref{b2} for a constant $K_3(M_{\Omega}^{{\alpha}},M_{\Gamma}^{{\beta}})=\max\{C_3,C_4\}$.
\medskip
With estimate \eqref{f14}, we proceed in estimating the left hand side of \eqref{f12}
in the following way: We shall look for a positive constant $K_1\le2$ small enough, such that the following two conditions hold:
\begin{equation*}
\begin{cases}
\delta_u\,T^{-1}(\Omega) - K_1K_3 \geq 0,\\
\delta_vP^{-1}(\Gamma) - K_1K_3 \geq 0,
\end{cases}
\quad\Rightarrow\quad K_1\le\min\left\{\frac{\delta_u}{K_3 T(\Omega)},
\frac{\delta_v}{K_3 P(\Gamma)},2\right\}.
\end{equation*}
Here, $T(\Omega)$ denotes the constant of the Trace inequality $T(\Omega)\|\nabla U\|_{\Omega}^2\ge \|U - \overline{U}\|_{\Gamma}^2$. We can then estimate the left hand side of \eqref{f12} by using \eqref{f14}
\begin{align*}
2\delta_u\|\nabla &U\|_{\Omega}^2 + 2\delta_vP^{-1}(\Gamma)\|V - \overline{V}\|_{\Gamma}^2 + K_1\|V^{\beta}-U^{\alpha}\|_{\Gamma}^2\nonumber\\
&\geq \delta_u\|\nabla U\|_{\Omega}^2 + \delta_vP^{-1}(\Gamma)\|V - \overline{V}\|_{\Gamma}^2 + \frac{K_1}{2}\|\overline{V}^{\beta} - \overline{U}^{\alpha}\|_{\Gamma}^2\nonumber\\
&\quad + (\delta_u\,T^{-1}(\Omega) - K_1K_3)\|U - \overline{U}\|_{\Gamma}^2 + (\delta_vP^{-1}(\Gamma) - K_1K_3)\|V - \overline{V}\|_{\Gamma}^2\nonumber\\
&\geq \delta_u\|\nabla U\|_{\Omega}^2 + \delta_vP^{-1}(\Gamma)\|V - \overline{V}\|_{\Gamma}^2 + \frac{K_1}{2}\|\overline{V}^{\beta} - \overline{U}^{\alpha}\|_{\Gamma}^2.
\end{align*}
Therefore, in order to show \eqref{f12} it is sufficient in the following Step 2 to find suitable constants $K_4=\min\{\frac{2\delta_u}{K_1},\frac{2\delta_v}{K_1P(\Gamma)}\}$ and $K_5=\frac{2K_2 K_0}{K_1}$ such that:
\begin{equation}
\|\overline{V}^{\beta}-\overline{U}^{\alpha}\|_{\Gamma}^2 + K_4(\|\nabla U\|_{\Omega}^2 + \|V-\overline{V}\|_{\Gamma}^2) \geq K_5\left[(\sqrt{\overline{U^2}} - U_{\infty})^2 + (\sqrt{\overline{V^{2}}} - V_{\infty})^2\right].
\label{f15}
\end{equation}
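To see that \eqref{f15} indeed implies \eqref{f12}, multiply \eqref{f15} by $\frac{K_1}{2}$:

```latex
\frac{K_1}{2}\,\|\overline{V}^{\beta}-\overline{U}^{\alpha}\|_{\Gamma}^2
+ \frac{K_1 K_4}{2}\bigl(\|\nabla U\|_{\Omega}^2 + \|V-\overline{V}\|_{\Gamma}^2\bigr)
\;\geq\; \frac{K_1 K_5}{2}\Bigl[\bigl(\sqrt{\overline{U^2}}-U_{\infty}\bigr)^2+\bigl(\sqrt{\overline{V^2}}-V_{\infty}\bigr)^2\Bigr],
```

where $\frac{K_1K_4}{2}\le\min\{\delta_u,\delta_vP^{-1}(\Gamma)\}$ by the definition of $K_4$ and $\frac{K_1K_5}{2}=K_2K_0$, so that the lower bound of the previous display dominates the right hand side of \eqref{f12}.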
\medskip
\noindent\underline{\it Step 2:} To prove \eqref{f15}, we use the following change of variables with respect to the equilibrium
\begin{equation}
\overline{U^2} = U_{\infty}^2(1+\mu_1)^2 \qquad\text{ and }\qquad \overline{V^2} = V_{\infty}^{2}(1+\mu_2)^2,
\label{c1}
\end{equation}
which is well-adapted to the mass conservation law \eqref{conssqrt} in the sense that
\begin{equation}
\beta|\Omega|U_{\infty}^{2}(1+\mu_1)^2 + \alpha|\Gamma|V_{\infty}^2(1+\mu_2)^2 = \beta|\Omega|U_{\infty}^2 + \alpha|\Gamma|V_{\infty}^2.
\label{c2}
\end{equation}
From \eqref{c2}, it follows that the new variables $\mu_1$ and $\mu_2$ vary only in a bounded range of admissible values, i.e. $\mu_1\in [-1,+\mu_{1,m})$ and $\mu_2\in [-1,+\mu_{2,m})$, where a straightforward estimate shows $0<\mu_{1,m}<\frac{\alpha|\Gamma|V_{\infty}^2}{\beta|\Omega|U_{\infty}^2}$ and $0<\mu_{2,m}<\frac{\beta|\Omega|U_{\infty}^2}{\alpha|\Gamma|V_{\infty}^2}$.
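The bound on $\mu_{1,m}$ follows from \eqref{c2} by inserting the extreme value $\mu_2=-1$ (i.e. $\overline{V^2}=0$) and using $\sqrt{1+s}-1<s$ for $s>0$:

```latex
(1+\mu_{1,m})^2 = 1 + \frac{\alpha|\Gamma|V_{\infty}^2}{\beta|\Omega|U_{\infty}^2}
\quad\Longrightarrow\quad
\mu_{1,m} = \sqrt{1+\frac{\alpha|\Gamma|V_{\infty}^2}{\beta|\Omega|U_{\infty}^2}}-1
\;<\; \frac{\alpha|\Gamma|V_{\infty}^2}{\beta|\Omega|U_{\infty}^2},
```

and the bound on $\mu_{2,m}$ is obtained symmetrically by inserting $\mu_1=-1$.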
Moreover, equation \eqref{c2} implies that $\mu_1$ can be expressed
as a continuous, bounded function of $\mu_2$ (or the other way round), i.e.
\begin{equation}
\mu_1(\mu_2) = -1 + \sqrt{1 - \frac{\alpha |\Gamma| V_{\infty}^2}{\beta |\Omega|U_{\infty}^2}(2\mu_2 + \mu_2^2)}
= - R(\mu_2) \mu_2,
\label{c3}
\end{equation}
where
\begin{equation*}
R(\mu_2) := \frac{\frac{\alpha|\Gamma|V_{\infty}^2}{\beta|\Omega|U_{\infty}^2}(\mu_2+2)}{1 + \sqrt{1 - \frac{\alpha V_{\infty}^2|\Gamma|}{\beta U_{\infty}^2|\Omega|}(2\mu_2 + \mu_2^2)}}.
\end{equation*}
We obviously have that $\mu_1(\mu_2=0)=0$, which represents the case $\overline{U^2} = U_{\infty}^2$ and $\overline{V^2} = V_{\infty}^2$.
Moreover, $R(\mu_2)$ is a positive, monotone increasing function with
$$
0<R(-1)=\frac{\frac{\alpha|\Gamma|V_{\infty}^2}{\beta|\Omega|U_{\infty}^2}}{1 + \sqrt{1 + \frac{\alpha |\Gamma|V_{\infty}^2}{\beta |\Omega|U_{\infty}^2}}}
\le R(\mu_2) \le R(\mu_{2,m})< 2 \frac{\alpha|\Gamma|V_{\infty}^2}{\beta|\Omega|U_{\infty}^2} +1.
$$
Hence $R(\mu_2)$ for $\mu_2\in [-1,+\mu_{2,m})$ is uniformly bounded below and above by positive constants.
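Indeed, abbreviating $r:=\frac{\alpha|\Gamma|V_{\infty}^2}{\beta|\Omega|U_{\infty}^2}$ (a shorthand not used elsewhere in the text), the square root in the denominator of $R$ vanishes at $\mu_2=\mu_{2,m}$ (where $\mu_1=-1$), so that

```latex
R(\mu_{2,m}) = r\,(\mu_{2,m}+2)
\;<\; r\Bigl(\frac{1}{r}+2\Bigr)
= 2r+1 = 2\,\frac{\alpha|\Gamma|V_{\infty}^2}{\beta|\Omega|U_{\infty}^2}+1,
```

using $\mu_{2,m}<\frac{1}{r}$ from the admissible range above.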
\medskip
Next, we notice that
\begin{equation*}
\frac{1}{|\Omega|}\|\delta_1\|_{\Omega}^2 = \frac{1}{|\Omega|}\|U - \overline{U}\|_{\Omega}^2 = \overline{U^2} - \overline{U}^2,
\end{equation*}
and thus
\begin{equation}
\overline{U} = \sqrt{\overline{U^2}} - \frac{1}{|\Omega|\bigl(\sqrt{\overline{U^2}}+\overline{U}\bigr)}\|\delta_1\|_{\Omega}^2 = U_{\infty}(1+\mu_1) - \frac{1}{|\Omega|\bigl(\sqrt{\overline{U^2}}+\overline{U}\bigr)}\|\delta_1\|_{\Omega}^2.
\label{c6}
\end{equation}
Similarly,
\begin{equation}
\overline{V} = \sqrt{\overline{V^2}} - \frac{1}{|\Gamma|\bigl(\sqrt{\overline{V^2}}+\overline{V}\bigr)}\|\delta_2\|_{\Gamma}^2 = V_{\infty}(1+\mu_2) - \frac{1}{|\Gamma|\bigl(\sqrt{\overline{V^2}}+\overline{V}\bigr)}\|\delta_2\|_{\Gamma}^2.
\label{c7}
\end{equation}
We denote
\begin{equation*}
R_1(U) := \frac{1}{|\Omega|\bigl(\sqrt{\overline{U^2}} + \overline{U}\bigr)}\quad \text{ and } \quad R_1(V) := \frac{1}{|\Gamma|\bigl(\sqrt{\overline{V^2}} + \overline{V}\bigr)}
\end{equation*}
and remark that, due to the lack of lower bounds for $\overline{U^2}\ge\overline{U}^2\ge0$ or $\overline{V^2}\ge\overline{V}^2\ge0$, there are no a-priori bounds preventing $R_1(U)$ or $R_1(V)$ from being arbitrarily large. Thus, we have to distinguish two cases, where the
first assumes a lower bound $\varepsilon>0$:
\noindent\underline{{\bf Case 1)} $\overline{U^2}\geq \varepsilon^2, \overline{V^2}\geq \varepsilon^2$:}\\[2mm]
By \eqref{c6} and \eqref{c7}, the left hand side of \eqref{f15} is estimated as follows:
\begin{align}
\|\overline{V}^{\beta} &- \overline{U}^{\alpha}\|_{\Gamma}^2 + K_4(\|\nabla U\|_{\Omega}^2 + \|V - \overline{V}\|_{\Gamma}^2)\nonumber\\
&=\left\|(V_{\infty}(1+\mu_2) - R_1(V)\|\delta_2\|_{\Gamma}^2)^{\beta} - (U_{\infty}(1+\mu_1)- R_1(U)\|\delta_1\|_{\Omega}^2)^{\alpha}\right\|_{\Gamma}^2\nonumber\\
&\quad + K_4(\|\nabla U\|_{\Omega}^2 + \|\delta_2\|_{\Gamma}^2)\nonumber\\
&\geq |\Gamma|\left(V_{\infty}^{\beta}(1+\mu_2)^{\beta} - U_{\infty}^{\alpha}(1+\mu_1)^{\alpha}\right)^2 - C(\varepsilon^2, M)(\|\delta_2\|_{\Gamma}^2 + \frac{1}{P(\Omega)}\|\delta_1\|_{\Omega}^2)\nonumber\\
&\quad+ K_4(\|\nabla U\|_{\Omega}^2 + \|\delta_2\|_{\Gamma}^2)\nonumber\\
&\geq |\Gamma|\left(V_{\infty}^{\beta}(1+\mu_2)^{\beta} - U_{\infty}^{\alpha}(1+\mu_1)^{\alpha}\right)^2 - C(\varepsilon^2, M,\Omega)(\|\delta_2\|_{\Gamma}^2 + \|\nabla U\|_{\Omega}^2)\nonumber\\
&\quad+ K_4(\|\nabla U\|_{\Omega}^2 + \|\delta_2\|_{\Gamma}^2)
\label{c9}
\end{align}
by using the boundedness of $U_{\infty}$, $V_{\infty}$, $\mu_1$, $\mu_2$, $R_1(U)$ and $R_1(V)$ and by using Poincare's inequality. Choosing $K_4 \geq C(\varepsilon^2, M,\Omega)$ in \eqref{c9} (recalling that $K_4 = \min\{\frac{2\delta_u}{K_1}, \frac{2\delta_v}{K_1P(\Gamma)}\}$, this imposes an additional constraint to choose $K_1$ small enough), we have
\begin{equation}
\|\overline{V}^{\beta} - \overline{U}^{\alpha}\|_{\Gamma}^2 + K_4(\|\nabla U\|_{\Omega}^2 + \|V - \overline{V}\|_{\Gamma}^2) \geq |\Gamma|(V_{\infty}^{\beta}(1+\mu_2)^{\beta} - U_{\infty}^{\alpha}(1+\mu_1)^{\alpha})^2.
\label{c9_1}
\end{equation}
Therefore, in order to prove \eqref{f15}, it is enough to find $K_5$ such that
\begin{equation*}
|\Gamma|(V_{\infty}^{\beta}(1+\mu_2)^{\beta} - U_{\infty}^{\alpha}(1+\mu_1)^{\alpha})^2 \geq K_5\left(U_{\infty}^2\mu_1^2 + V_{\infty}^2\mu_2^2\right)
\end{equation*}
or equivalently,
\begin{equation}
\frac{U_{\infty}^2\mu_1^2 + V_{\infty}^2\mu_2^2}{V_{\infty}^{2\beta}\left((1+\mu_2)^{\beta} - (1+\mu_1)^{\alpha}\right)^2} \leq \frac{|\Gamma|}{K_5}.
\label{d1}
\end{equation}
In order to estimate the denominator of \eqref{d1}, we consider the following two cases:\\
\noindent{In the first case}, we assume that $-1\leq \mu_2 < 0$, so that \eqref{c3} yields $\mu_1 > 0$. Then
\begin{equation*}
(1+\mu_2)^{\beta} \leq 1 + \mu_2< 1\quad \text{ and }\quad (1+\mu_1)^{\alpha} \geq 1 + \mu_1>1.
\end{equation*}
Hence,
\begin{equation}
|(1+\mu_2)^{\beta} - (1+\mu_1)^{\alpha}| \geq (1 + \mu_1) - (1 + \mu_2) = \mu_1- \mu_2=(1+R(\mu_2))|\mu_2|.
\label{d3}
\end{equation}
\noindent{In the second case}, we consider $\mu_2 \geq 0$ and thus $\mu_1 \leq 0$ by \eqref{c3}. We estimate
\begin{equation*}
(1+\mu_2)^{\beta} \geq (1+\mu_2) \text{ and } (1+\mu_1)^{\alpha} \leq 1 + \mu_1,
\end{equation*}
and obtain therefore,
\begin{equation}
|(1+\mu_2)^{\beta} - (1+\mu_1)^{\alpha}| \geq (1 + \mu_2) - (1+\mu_1) = \mu_2-\mu_1=(1 + R(\mu_2))|\mu_2|.
\label{d4}
\end{equation}
Altogether, \eqref{d3} and \eqref{d4} yield
\begin{equation}
V_{\infty}^{2\beta}\left((1+\mu_2)^{\beta} - (1+\mu_1)^{\alpha}\right)^2 \geq V_{\infty}^{2\beta}(1 + R(\mu_2))^2\mu_2^2.
\label{d5}
\end{equation}
For the numerator of \eqref{d1}, we use the expression \eqref{c3} to get
\begin{equation}
U_{\infty}^2\mu_1^2 + V_{\infty}^2\mu_2^2 = \left(V_{\infty}^2+ U_{\infty}^2\,R(\mu_2)^2\right)\mu_2^2, \label{d2}
\end{equation}
and combining \eqref{d2} and \eqref{d5} completes the proof of \eqref{d1}
with a constant
$$
\frac{|\Gamma|}{K_5} \ge \sup_{\mu_2\in[-1,\mu_{2,m})}\frac{V_{\infty}^2+ U_{\infty}^2\,R(\mu_2)^2}{(1 + R(\mu_2))^2\,V_{\infty}^{2\beta}},
$$
which is finite since $R(\mu_2)$ is uniformly bounded below and above by positive constants.
Finally, by recalling that $K_5=\frac{2K_2 K_0}{K_1}$ and that $K_1$ was chosen small enough in the previous step, we conclude the first part of the proof of the Lemma by choosing
$K_2\le\frac{K_1 K_5}{2 K_0}$.
\medskip
\noindent\underline{{\bf Case 2)} $\overline{U^2}\leq \varepsilon^2 \text{ or } \overline{V^{2}}\leq \varepsilon^2$:}\\[2mm]
For the second case, which for sufficiently small $\varepsilon$ considers states away from the equilibrium $U_{\infty}$, $V_{\infty}$, we expect to be able to derive a positive lower bound for the entropy dissipation in terms of $\varepsilon$. At first, we observe that the right hand side of \eqref{f15} is bounded by
\begin{equation}
K_5\left[\bigl(\sqrt{\overline{U^2}} - U_{\infty}\bigr)^2 + \bigl(\sqrt{\overline{V^2}} - V_{\infty}\bigr)^2\right] \leq 2K_5(\overline{u} + \overline{v} + u_{\infty} + v_{\infty}) \leq K_5C(M).
\label{c14}
\end{equation}
In the following, we consider two subcases of lower bounds of the entropy dissipation. The first subcase
considers the situation where there is a lower bound of the diffusion entropy dissipation since $U$ and $V$
are not close to their spatial averages $\overline{U}$ and $\overline{V}$: \\
\noindent\underline{Subcase 2.1) $\|\delta_1\|_{\Omega}^2 \geq \eta$ or $\|\delta_2\|_{\Gamma}^2 \geq \eta$:}\\
By using Poincare's inequality $P(\Omega)\|\nabla U\|_{\Omega}^2 \geq \|\delta_1\|_{\Omega}^2$, we see that the left hand side of \eqref{f15} is bounded below by
\begin{equation}
\begin{cases}
K_4P^{-1}(\Omega)\eta &\text{ in the case } \|\delta_1\|_{\Omega}^2 \geq \eta,\\
K_4\eta &\text{ in the case } \|\delta_2\|_{\Gamma}^2 \geq \eta.
\end{cases}
\label{c15}
\end{equation}
Thus, from \eqref{c14} and \eqref{c15}, we can obtain \eqref{f15} by choosing
\begin{equation*}
K_4 \geq K_5\,C(M)\max\left\{\frac{P(\Omega)}{\eta}, \frac{1}{\eta}\right\}.
\end{equation*}
\noindent\underline{Subcase 2.2) $\|\delta_1\|_{\Omega}^2 \leq \eta$ and $\|\delta_2\|_{\Gamma}^2 \leq \eta$:}\\
This subcase concerns the situation where $U$ and $V$ are close to their spatial averages $\overline{U}$ and $\overline{V}$.
Thus, since $U$ and $V$ are not close to the equilibrium $U_{\infty}$ and $V_{\infty}$ for sufficiently small $\varepsilon$ in {\bf Case 2)}, there has to be a lower bound for the reaction entropy dissipation.
Let us assume first $\overline{V^2} \leq \varepsilon^2$, thus $\overline{V}^{2}\leq \overline{V^2}\leq \varepsilon^2$. From
\begin{equation*}
\beta|\Omega|\overline{U^2} + \alpha|\Gamma|\overline{V^2} = M,\quad\text{and}\quad
\overline{U^2} = \frac{1}{|\Omega|}\|\delta_1\|_{\Omega}^2 +\overline{U}^2,
\end{equation*}
we estimate
\begin{equation*}
\overline{U}^2 = \frac{1}{\beta|\Omega|}(M - \alpha|\Gamma|\overline{V^2}) - \frac{1}{|\Omega|}\|\delta_1\|_{\Omega}^2
\geq \frac{M}{\beta|\Omega|} - \frac{\alpha|\Gamma|}{\beta|\Omega|}\varepsilon^2 - \frac{\eta}{|\Omega|}.
\end{equation*}
Hence, we can expand the reaction term as follows
\begin{align}
\|\overline{U}^{\alpha}-\overline{V}^{\beta}\|_{\Gamma}^2 &\geq |\Gamma|\left(\frac{1}{2}\overline{U}^{2\alpha} - \overline{V}^{2\beta}\right)
\geq |\Gamma|\left(\frac{1}{2}\left(\frac{M}{\beta|\Omega|} - \frac{\alpha|\Gamma|}{\beta|\Omega|}\varepsilon^2 - \frac{\eta}{|\Omega|}\right)^{\alpha} - \varepsilon^{2\beta}\right)\nonumber\\
&\geq \frac{|\Gamma|}{2^{\alpha+2}}\left(\frac{M}{\beta|\Omega|}\right)^{\alpha}
\label{c17}
\end{align}
for sufficiently small $\varepsilon$ and $\eta$.
The case $\overline{U^2}\leq \varepsilon^2$ can be treated similarly and yields
\begin{equation}
\|\overline{U}^{\alpha}-\overline{V}^{\beta}\|_{\Gamma}^2 \geq \frac{|\Gamma|}{2^{\beta+2}}\left(\frac{M}{\alpha|\Gamma|}\right)^{\beta}.
\label{c17_1}
\end{equation}
From \eqref{c15}, \eqref{c17} and \eqref{c17_1}, we have in both cases $\overline{U^2}\leq \varepsilon^2$ or $\overline{V^2}\leq \varepsilon^2$ that the left hand side of \eqref{f15} is bounded from below as
\begin{multline}
\|\overline{V}^{\beta} - \overline{U}^{\alpha}\|_{\Gamma}^2 + K_4(\|\nabla U\|_{\Omega}^2 + \|V - \overline{V}\|_{\Gamma}^2)\\
\geq K_6= \min\left\{K_4P^{-1}(\Omega)\eta, K_4\eta, \frac{|\Gamma|}{2^{\alpha+2}}\left(\frac{M}{\beta|\Omega|}\right)^{\alpha}, \frac{|\Gamma|}{2^{\beta+2}}\left(\frac{M}{\alpha|\Gamma|}\right)^{\beta}\right\}.
\label{c18_1}
\end{multline}
Then, \eqref{f15} follows from \eqref{c14}, \eqref{c18_1} by choosing $K_5 \leq \frac{K_6}{C(M)}$, which means to choose $K_2\le\frac{K_1 K_5}{2 K_0}$ small enough.
\end{proof}
\begin{remark}
Step 2 in the proof of Lemma \ref{bound_I2} can be significantly shortened if the stoichiometric coefficients satisfy $\alpha\geq 2$ and $\beta\geq 2$, since we can then prove \eqref{c9_1} without case distinction as follows.
By noting that $\|\delta_1\|_{\Omega}^2 = \|U-\overline{U}\|_{\Omega}^2=|\Omega|(\overline{U^2}-\overline{U}^2)$, we derive the expressions
\begin{equation}
\overline{U}
= \sqrt{\overline{U^2} - \|\delta_1\|_{\Omega}^2/|\Omega|}, \qquad
\overline{V}
= \sqrt{\overline{V^2} - \|\delta_2\|_{\Gamma}^2/|\Gamma|}.
\label{c77}
\end{equation}
Thus, by \eqref{c77}, we again apply a Taylor expansion to estimate the first term on the left hand side of \eqref{f15} from below by
\begin{multline}\label{newtrick}
\|\overline{V}^{\beta}-\overline{U}^{\alpha}\|_{\Gamma}^2=
\left\|\left(\overline{V^2} - \|\delta_2\|_{\Gamma}^2/|\Gamma|\right)^{\frac{\beta}{2}} - \left(\overline{U^2}- \|\delta_1\|_{\Omega}^2/|\Omega|\right)^{\frac{\alpha}{2}}\right\|_{\Gamma}^2\\
\geq \Bigl\|\overline{V^2}^{\frac{\beta}{2}} - \overline{U^2}^{\frac{\alpha}{2}}\Bigr\|_{\Gamma}^2
- 2\int_{\Gamma} \left(\overline{V^2}^{\frac{\beta}{2}} - \overline{U^2}^{\frac{\alpha}{2}}\right) \left(\frac{\beta}{2} \Bigl(\overline{V^2}-\frac{\theta_2}{|\Gamma|}\Bigr)^{\!\frac{\beta}{2}-1}\frac{\|\delta_2\|_{\Gamma}^2}{|\Gamma|}\qquad\quad\right.\\
\left.-\frac{\alpha}{2}\Bigl(\overline{U^2}-\frac{\theta_1}{|\Omega|}\Bigr)^{\!\frac{\alpha}{2}-1} \frac{\|\delta_1\|_{\Omega}^2}{|\Omega|}\right)
\end{multline}
for some $\theta_1/|\Omega|\le \|\delta_1\|_{\Omega}^2/|\Omega|\le\overline{U^2}\le M_{\Omega}$ and $\theta_2/|\Gamma|\le \|\delta_2\|_{\Gamma}^2/|\Gamma|\le\overline{V^2}\le M_{\Gamma}$. Since $\frac{\beta}{2} - 1\geq 0$ and $\frac{\alpha}{2} - 1\geq 0$, the last term on the right hand side of \eqref{newtrick} can be estimated below by
\begin{equation*}
C(M_{\Omega}^{\alpha},M_{\Gamma}^{\beta},\Omega)\left(\frac{\|\delta_1\|_{\Omega}^2}{P(\Omega)}+\|\delta_2\|_{\Gamma}^2\right).
\end{equation*}
Thus, from \eqref{c1} and \eqref{newtrick}, we have
\begin{multline*}
\|\overline{V}^{\beta}-\overline{U}^{\alpha}\|_{\Gamma}^2\\
\geq |\Gamma|\left(V_{\infty}^{\beta}(1+\mu_2)^{\beta} - U_{\infty}^{\alpha}(1+\mu_1)^{\alpha}\right)^2
- C(M_{\Omega}^{\alpha},M_{\Gamma}^{\beta},\Omega)\left(\frac{\|\delta_1\|_{\Omega}^2}{P(\Omega)}+\|\delta_2\|_{\Gamma}^2\right)\\
\geq |\Gamma|\left(V_{\infty}^{\beta}(1+\mu_2)^{\beta} - U_{\infty}^{\alpha}(1+\mu_1)^{\alpha}\right)^2 - C(M_{\Omega}^{\alpha},M_{\Gamma}^{\beta},\Omega)\left(\|\nabla U\|_{\Omega}^2+\|\delta_2\|_{\Gamma}^2\right).
\end{multline*}
Therefore, by choosing $K_4 \geq C(M_{\Omega}^{\alpha},M_{\Gamma}^{\beta})$, we have proved \eqref{c9_1}:
\begin{equation*}
\|\overline{V}^{\beta}-\overline{U}^{\alpha}\|_{\Gamma}^2 + K_4(\|\nabla U\|_{\Omega}^2 + \|V - \overline{V}\|_{\Gamma}^2) \geq |\Gamma|\left(V_{\infty}^{\beta}(1+\mu_2)^{\beta} - U_{\infty}^{\alpha}(1+\mu_1)^{\alpha}\right)^2.
\end{equation*}
The rest of the proof follows exactly as the end of {\bf Case 1)} in Lemma \ref{bound_I2}.
\end{remark}
\subsection{The degenerate case: $\delta_{v} = 0$}
By Remark \ref{ConstantRates}, we know that $(A, B)$ is an upper solution to \eqref{e1}; thus, by the comparison principle, we have for all $t\geq 0$ that
\begin{equation*}
\|u(t)\|_{L^{\infty}(\Omega)} \leq A, \quad\text{ and }\quad \|v(t)\|_{L^{\infty}(\Gamma)} \leq B.
\end{equation*}
Then, by using the same function $\Phi$ as \eqref{f5_0}, we have
\begin{align}
E(u, v) &- E(u_{\infty}, v_{\infty})\nonumber\\
&= \int_{\Omega}\left(u\log\frac{u}{u_{\infty}} - (u-u_{\infty})\right)dx + \int_{\Gamma}\left(v\log\frac{v}{v_{\infty}} - (v - v_{\infty})\right)dS\nonumber\\
&\leq \Phi(A, u_{\infty})\int_{\Omega}(\sqrt{u} - \sqrt{u_{\infty}})^2dx + \Phi(B, v_{\infty})\int_{\Gamma}(\sqrt{v}-\sqrt{v_{\infty}})^2dS\nonumber\\
&\leq \max\{\Phi(A, u_{\infty}), \Phi(B, v_{\infty})\}\left(\|U - U_{\infty}\|_{\Omega}^2 + \|V - V_{\infty}\|_{\Gamma}^2\right).
\label{new0}
\end{align}
The following lemma, roughly speaking, shows that the diffusion of $u$ in $\Omega$ and the reversible reaction of $u$ and $v$ on $\Gamma$ lead to a diffusion-effect of $v$ on $\Gamma$:
\begin{lemma}\label{lem:degenerate_estimate}
There exist constants $C_1, C_2, C_3>0$ such that
\begin{equation}
C_1\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2 + C_2\left(\|\nabla U\|_{\Omega}^2 + \|U - \overline{U}\|_{\Gamma}^2\right) \geq C_3\|V-\overline{V}\|_{\Gamma}^2.
\label{z1}
\end{equation}
\end{lemma}
\begin{proof}
Note that, by the Trace Theorem $T(\Omega)\|\nabla U\|_{\Omega}^2 \geq \|U - \overline{U}\|_{\Gamma}^2$, we could neglect the term $\|U - \overline{U}\|_{\Gamma}^2$ in \eqref{z1}. We write it here for the sake of readability.
We will prove the inequality \eqref{z1} by distinguishing cases:
\noindent\underline{\it Case 1: $\overline{U}\geq \varepsilon$.} Applying the ansatz
\begin{equation*}
V(x) = \overline{U}^{\frac{\alpha}{\beta}}(1+\delta(x)), \qquad \delta(x) \in [-1,+\infty)\qquad \forall x\in\Gamma,
\end{equation*}
we get
\begin{multline}
C_1\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2 = C_1\|U^{\alpha}-\overline{U}^{\alpha}\|_{\Gamma}^2 - 2C_1\int_{\Gamma}(U^{\alpha}-\overline{U}^{\alpha})\overline{U}^{\alpha}[(1+\delta)^{\beta}-1]dS \\ + C_1\overline{U}^{2\alpha}\|(1+\delta)^{\beta}-1\|_{\Gamma}^2.
\label{z2}
\end{multline}
Since $\|U\|_{L^{\infty}(\Gamma)}\leq A$, we have
\begin{equation}
\|U^{\alpha} - \overline{U}^{\alpha}\|_{\Gamma}^2 \leq C(A)\|U-\overline{U}\|_{\Gamma}^2.
\label{z3}
\end{equation}
From \eqref{z2} and \eqref{z3}, we can estimate the left hand side of \eqref{z1} as follows
\begin{multline}
C_1\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2+C_2\|U - \overline{U}\|_{\Gamma}^2
\geq \left(C_1+\frac{C_2}{C(A)}\right)\|U^{\alpha}-\overline{U}^{\alpha}\|_{\Gamma}^2 \\- 2C_1\int_{\Gamma}(U^{\alpha}-\overline{U}^{\alpha})\overline{U}^{\alpha}[(1+\delta)^{\beta}-1]dS + C_1\overline{U}^{2\alpha}\|(1+\delta)^{\beta}-1\|_{\Gamma}^2\\
\geq \frac{C_1C_2}{C_1C(A)+C_2}\overline{U}^{2\alpha}\|(1+\delta)^{\beta}-1\|_{\Gamma}^2,
\label{z4}
\end{multline}
where we have used Young's inequality
\begin{multline}
2C_1\int_{\Gamma}(U^{\alpha}-\overline{U}^{\alpha})\overline{U}^{\alpha}[(1+\delta)^{\beta}-1]dS\\
\leq \left(C_1+\frac{C_2}{C(A)}\right)\|U^{\alpha}-\overline{U}^{\alpha}\|_{\Gamma}^2 + \frac{C_1^2C(A)}{C_1C(A) + C_2}\overline{U}^{2\alpha}\|(1+\delta)^{\beta}-1\|_{\Gamma}^2.
\label{z5}
\end{multline}
Next, we observe that the function $R(\delta):=\frac{(1+\delta)^{\beta}-1}{\delta}$
is continuous on $\delta\in[-1,\infty)$ with $R(0)=\beta\ge1$ and bounded below by $R(\delta)\ge R(-1)=1$ for $\delta\in[-1,\infty)$. Thus,
\begin{equation}\label{z6}
\|(1+\delta)^{\beta}-1\|_{\Gamma}^2 = \int_{\Gamma} R(\delta)^2 \delta^2\,dS \ge
\int_{\Gamma} \delta^2\,dS.
\end{equation}
On the other hand, we have
\begin{align}
\|V - \overline{V}\|_{\Gamma}^2&= |\Gamma|\left(\overline{V^2}- \overline{V}^2\right)
= |\Gamma|\overline{U}^{\frac{2\alpha}{\beta}}\left(\overline{(1+\delta)^2}- \overline{1+\delta}^2\right)\nonumber\\
&= |\Gamma|\overline{U}^{\frac{2\alpha}{\beta}} \left(1+2\overline{\delta}+\overline{\delta^2}-(1+\overline{\delta})^2\right)
\le |\Gamma|\overline{U}^{\frac{2\alpha}{\beta}}\overline{\delta^2} \nonumber\\
&\le \overline{U}^{\frac{2\alpha}{\beta}} \int_{\Gamma} \delta^2\,dS.\label{z7}
\end{align}
Now, keeping in mind that $\overline{U}\geq \varepsilon$, we obtain \eqref{z1} from \eqref{z4}, \eqref{z6} and \eqref{z7}, by choosing
\begin{equation*}
C_3 \leq \frac{C_1C_2}{C_1C(M)+C_2}\min\{1; \varepsilon^{2\alpha(1-1/\beta)}\}.
\end{equation*}
\noindent\underline{\it Case 2: $\overline{U}\leq \varepsilon$.}
We begin by considering $\overline{U}\le\varepsilon$, for which the contribution of $\|U-\overline{U}\|_{\Gamma}^2$ in \eqref{z1} can be arbitrarily small when $U$ is close to $\overline{U}$. However, for $\varepsilon$ sufficiently small, we shall show that the estimate \eqref{z1} still holds because the reaction term
$\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2$ can only be ``small'' if $\|V-\overline{V}\|_{\Gamma}^2$ is of the ``same order of smallness''.
We will treat two subcases: a) $\overline{U^2}$ is ``small'' and b) $\overline{U^2}$ is ``big''.\\[2mm]
\noindent\underline{\it Case 2a): $\overline{U^2}\leq \frac{M}{2\beta|\Omega|}$.}
A direct consequence of the conservation law \eqref{conssqrt} yields
\begin{equation}\label{z10_0}
\overline{V^2} = \frac{1}{\alpha|\Gamma|}\left(M-\beta|\Omega|\overline{U^2}\right)\geq \frac{M}{2\alpha|\Gamma|}.
\end{equation}
Next, we estimate similarly to \eqref{z2}--\eqref{z5} the left hand side of \eqref{z1} as
\begin{equation*}
C_1\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2 + C_2\|U-\overline{U}\|_{\Gamma}^2 \geq C_4\|V^{\beta}-\overline{U}^{\alpha}\|_{\Gamma}^2,
\end{equation*}
where $C_4 = \frac{C_1C_2}{C_1C(M)+C_2}$.
Then, since $\overline{U}\le\varepsilon$,
\begin{equation*}
C_4\|V^{\beta} - \overline{U}^{\alpha}\|_{\Gamma}^2
\geq C_4\int_{\Gamma}V^{2\beta}dS - 2C_4\varepsilon^{\alpha}\int_{\Gamma}V^{\beta}dS.
\end{equation*}
On the other hand, the right hand side of \eqref{z1} is bounded by
\begin{equation}
C_3\|V - \overline{V}\|_{\Gamma}^2 = C_3|\Gamma|\left(\overline{V^2} - \overline{V}^2\right) \leq C_3|\Gamma|\overline{V^2}.
\label{z11_1}
\end{equation}
Therefore, in order to obtain \eqref{z1}, it is sufficient to prove that
\begin{equation}
C_4\int_{\Gamma}V^{2\beta}dS - 2C_4\varepsilon^{\alpha}\int_{\Gamma}V^{\beta}dS \geq C_3|\Gamma|\overline{V^2}.
\label{z11_2}
\end{equation}
\noindent\underline{If $\beta \in [1,2]$}, by using Jensen's inequality (and noting that the function $f(x) = x^{\frac{\beta}{2}}$ is concave), we can estimate the left hand side of \eqref{z11_2} as
\begin{multline}
C_4\int_{\Gamma}V^{2\beta}dS - 2C_4\varepsilon^{\alpha}\int_{\Gamma}V^{\beta}dS\\
\geq C_4\left(\int_{\Gamma}V^2dS\right)^{\beta} - 2C_4\varepsilon^{\alpha}\left(\int_{\Gamma}V^2dS\right)^{\beta/2}\\
= C_4|\Gamma|^{\beta}\overline{V^{2}}^{\beta} - 2C_4\varepsilon^{\alpha}|\Gamma|^{\beta/2}\overline{V^2}^{\beta/2}.
\label{z12}
\end{multline}
Since $\overline{V^2}\geq \frac{M}{2\alpha|\Gamma|}$, we can choose $\varepsilon$ and $C_3$ small enough such that
\begin{equation*}
C_4|\Gamma|^{\beta}\overline{V^2}^{\beta-1} \geq 2C_4\varepsilon^{\alpha}|\Gamma|^{\beta/2}\overline{V^2}^{\beta/2-1} + C_3|\Gamma|.
\end{equation*}
After choosing $\varepsilon\leq \frac{1}{4(2\alpha)^{\beta/2\alpha}}M^{\frac{\beta}{2\alpha}}$
and $C_3\leq \frac{1}{2}\frac{C_4M^{\beta -1}}{(2\alpha)^{\beta - 1}}$, this gives together with \eqref{z12} the inequality \eqref{z11_2}.
\noindent\underline{If $\beta \geq 2$}, by using Jensen's inequality, we have
\begin{equation}
C_3|\Gamma|\overline{V^{2}} \leq C_3|\Gamma|\overline{V^{\beta}}^{2/\beta} \quad\text{ and } \quad C_4\int_{\Gamma}V^{2\beta}dS \geq C_4|\Gamma|^2\overline{V^{\beta}}^2.
\label{z14}
\end{equation}
Making use of \eqref{z14}, relation \eqref{z11_2} can be proven since
\begin{equation*}
C_4|\Gamma|^2\overline{V^{\beta}}^2 - 2C_4\varepsilon^{\alpha}|\Gamma|\overline{V^{\beta}} \geq C_3|\Gamma|\overline{V^{\beta}}^{2/\beta}
\end{equation*}
or equivalently
\begin{equation*}
C_4|\Gamma|\overline{V^{\beta}} \geq 2C_4\varepsilon^{\alpha} + C_3\overline{V^{\beta}}^{(2-\beta)/\beta}.
\end{equation*}
This can be satisfied if we choose, for instance,
$\varepsilon \le \frac{|\Gamma|^{1/\alpha}}{4^{1/\alpha}}\bigl(\frac{M}{2\alpha|\Gamma|}\bigr)^{\beta/2\alpha}$
and $C_3\leq \frac{1}{2}C_4|\Gamma|\bigl(\frac{M}{2\alpha|\Gamma|}\bigr)^{\beta -1}$ and keeping in mind that $\overline{V^2}\ge\frac{M}{2\alpha|\Gamma|}$ and $\beta \geq 2$.
\medskip
\noindent\underline{\it Case 2b): $\overline{U^2}\geq \frac{M}{2\beta|\Omega|}$.}
Similarly to \eqref{z10_0}, we deduce from the conservation of mass that $\overline{V^2} \leq M/(2\alpha|\Gamma|)$. We estimate
$$
C_2\|\nabla U\|_{\Omega}^2\ge \frac{C_2}{P(\Omega)} \|U-\overline{U}\|_{\Omega}^2\ge \frac{C_2|\Omega|}{P(\Omega)}\left(\overline{U^2}-\overline{U}^2\right) \ge
\frac{C_2}{P(\Omega)}\left(\frac{M}{2\beta}-|\Omega|\varepsilon^2\right)
\ge \frac{C_2}{P(\Omega)}\frac{M}{4\beta},
$$
if we choose $|\Omega|\varepsilon^2<\frac{M}{4\beta}$.
Next, recalling \eqref{z11_1}, we estimate
$$
\frac{C_2}{P(\Omega)}\frac{M}{4\beta}
\ge \frac{C_2}{P(\Omega)}\frac{M}{4\beta} \frac{\overline{V^2}}{M_{\Gamma}}\ge C_3\|V-\overline{V}\|^2_{\Gamma},
$$
if we choose $C_3\le \frac{C_2\alpha|\Gamma|}{2\beta P(\Omega)}$.
Altogether, the proof of \eqref{z1} is complete by choosing $\varepsilon$ and $C_3$ small enough in order to satisfy
the various constraints from the above cases.
\end{proof}
We are now ready to prove Lemma \ref{lem:E-EDEstimate} in the degenerate case $\delta_v=0$, that is, that there exists $C_0>0$ such that
\begin{equation*}
D(u,v) \geq C_0(E(u,v) - E(u_{\infty}, v_{\infty})).
\end{equation*}
\begin{proof}
We begin by estimating $D(u,v)$ below, that is
\begin{align*}
D(u,v) &= \delta_{u}\int_{\Omega}\frac{|\nabla u|^2}{u}dx + \int_{\Gamma}(u^{\alpha}-v^{\beta})\log\frac{u^{\alpha}}{v^{\beta}}dS\\
&\geq 4\delta_{u}\|\nabla U\|_{\Omega}^2 + 4\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2,
\end{align*}
where we have used the elementary inequality $(a-b)\log(a/b)\geq 4(\sqrt{a} - \sqrt{b})^2$.
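As a quick numerical sanity check (not part of the proof), one can verify the elementary inequality $(a-b)\log(a/b)\geq 4(\sqrt{a}-\sqrt{b})^2$ on randomly sampled positive pairs:

```python
import math
import random

def dissipation_term(a, b):
    # (a - b) * log(a / b): the integrand appearing in D(u, v)
    return (a - b) * math.log(a / b)

def lower_bound(a, b):
    # 4 * (sqrt(a) - sqrt(b))^2: the lower bound used above
    return 4.0 * (math.sqrt(a) - math.sqrt(b)) ** 2

random.seed(0)
for _ in range(10**5):
    a = random.uniform(1e-6, 100.0)
    b = random.uniform(1e-6, 100.0)
    assert dissipation_term(a, b) >= lower_bound(a, b) - 1e-9
```

Equality is approached as $a \to b$, consistent with the quadratic expansion of both sides around $a=b$.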
Then, by applying the Trace inequality
$\|U - \overline{U}\|_{\Gamma}^2 \leq \|\nabla U\|_\Omega^2T(\Omega)$
and Lemma \ref{lem:degenerate_estimate}, we get
\begin{align}
D(u,v)&\geq 4\delta_u\|\nabla U\|_{\Omega}^2 + 4\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2\nonumber\\
&\geq \theta\left[C_1\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2 + C_2(\|\nabla U\|_{\Omega}^2 + \|U-\overline{U}\|_{\Gamma}^2)\right]\nonumber\\
&\quad+ \left[4\delta_{u}-\theta C_2(1+T(\Omega))\right]\|\nabla U\|_{\Omega}^2 + (4-\theta C_1)\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2\nonumber\\
&\geq \theta C_3\|V - \overline{V}\|_{\Gamma}^2 + [4\delta_{u}-\theta C_2(1+T(\Omega))]\|\nabla U\|_{\Omega}^2
+ (4-\theta C_1)\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2\nonumber\\
&\geq C_4\|\nabla U\|_{\Omega}^2 + C_5\|V-\overline{V}\|_{\Gamma}^2 + C_6\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2,
\label{new2}
\end{align}
where we denote $C_4 = 4\delta_u - \theta C_2(1+T(\Omega))$, $C_5 = \theta C_3$ and $C_6 = 4-\theta C_1$, and
$\theta>0$ is chosen such that the constants $C_4$ and $C_6$ are positive.
In the following, we estimate the relative entropy $E(u,v)-E(u_{\infty}, v_{\infty})$ above by using \eqref{new0}
\begin{multline}
E(u,v)-E(u_{\infty},v_{\infty})
\leq \max\{\Phi(A,u_{\infty}), \Phi(B,v_{\infty})\}(\|U-U_{\infty}\|_{\Omega}^2 + \|V-V_{\infty}\|_{\Gamma}^2)\\
\le C_8(\|U-\overline{U}\|_{\Omega}^2 + \|V-\overline{V}\|_{\Gamma}^2 + \|\overline{U}-U_{\infty}\|_{\Omega}^2 + \|\overline{V}-V_{\infty}\|_{\Gamma}^2)
\label{new3}
\end{multline}
with $C_8 =2 \max\{\Phi(A,u_{\infty}), \Phi(B,v_{\infty})\}$. By using \eqref{new3}, Poincar\'e's inequality and the Trace Theorem, we continue to estimate \eqref{new2} from below and obtain, for $0<\varepsilon<1$ to be chosen,
\begin{align}
D(u,v)
&\geq \frac{C_4}{2P(\Omega)}\|U-\overline{U}\|_{\Omega}^2 + \frac{C_4}{2T(\Omega)}\|U-\overline{U}\|_{\Gamma}^2 + C_5\|V-\overline{V}\|_{\Gamma}^2 + C_6\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2\nonumber\\
&\geq \varepsilon\min\left\{\frac{C_4}{2P(\Omega)}, C_5\right\}(\|U-\overline{U}\|_{\Omega}^2+ \|V-\overline{V}\|_{\Gamma}^2)\nonumber\\
&\quad+\frac{C_4}{2T(\Omega)}\|U-\overline{U}\|_{\Gamma}^2 +C_6\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2 + C_5(1-\varepsilon)\|V-\overline{V}\|_{\Gamma}^2\nonumber\\
&\geq \varepsilon\min\left\{\frac{C_4}{2P(\Omega)}, C_5\right\}\!\biggl(\frac{E(u,v)\!-\!E(u_{\infty},v_{\infty})}{C_8} - \|\overline{U}-U_{\infty}\|_{\Omega}^2 - \|\overline{V}-V_{\infty}\|_{\Gamma}^2\!\biggr)\nonumber\\
&\quad+\frac{C_4}{2T(\Omega)}\|U-\overline{U}\|_{\Gamma}^2 +C_6\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2+ C_5(1-\varepsilon)\|V-\overline{V}\|_{\Gamma}^2\nonumber\\
&\geq \frac{\varepsilon}{C_8}\min\left\{\frac{C_4}{2P(\Omega)}, C_5\right\}(E(u,v)-E(u_{\infty},v_{\infty}))\nonumber\\
&\quad+\frac{C_4}{2T(\Omega)}\|U-\overline{U}\|_{\Gamma}^2 +C_6\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2+ C_5(1-\varepsilon)\|V-\overline{V}\|_{\Gamma}^2\nonumber\\
&\quad - \varepsilon\min\left\{\frac{C_4}{2P(\Omega)}, C_5\right\}\left(\|\overline{U}-U_{\infty}\|_{\Omega}^2 + \|\overline{V}-V_{\infty}\|_{\Gamma}^2\right).
\label{new4}
\end{align}
Now, by applying \eqref{f12} with $4\delta_v\,P^{-1}(\Gamma)=C_5(1-\varepsilon)$, we can find a positive constant $\varepsilon>0$ small enough such that
\begin{multline}
\frac{C_4}{2T(\Omega)}\|U-\overline{U}\|_{\Gamma}^2 +C_6\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2+ C_5(1-\varepsilon)\|V-\overline{V}\|_{\Gamma}^2\\
\geq \varepsilon\min\left\{\frac{C_4}{2P(\Omega)}, C_5\right\}\left(\|\overline{U}-U_{\infty}\|_{\Omega}^2 + \|\overline{V}-V_{\infty}\|_{\Gamma}^2\right)
\label{new5}
\end{multline}
holds and we conclude from \eqref{new4} and \eqref{new5} that
\begin{equation*}
D(u,v) \geq \frac{\varepsilon}{C_8}\min\left\{\frac{C_4}{2P(\Omega)}, C_5\right\}(E(u,v)-E(u_{\infty},v_{\infty})),
\end{equation*}
which finishes the proof of the Lemma in the case of degenerate diffusion $\delta_v=0$.
\end{proof}
As we can see, the proof of the degenerate case used $L^{\infty}$-bounds of the solution, which are usually unavailable for more general systems. However, we believe that for some choices of the stoichiometric coefficients $\alpha$ and $\beta$ the exponential convergence to equilibrium can be shown without using the $L^{\infty}$-bounds. As an example, we show that this is possible in the linear case, that is, $\alpha = \beta = 1$.
\begin{proposition}\label{re:linear}
Assume that $\alpha = \beta = 1$ and $\delta_v = 0$. The solution to the system \eqref{e1}, which in this case reads
\begin{equation}
\begin{cases}
u_t - \delta_u \Delta u = 0, &x\in\Omega,\\
\delta_u\frac{\partial u}{\partial \nu} = - u + v, &x\in\Gamma,\\
v_t = u - v, &x\in \Gamma,\\
u(0,x) = u_0(x), &x\in\Omega,\\
v(0,x) = v_0(x), &x\in \Gamma,
\end{cases}
\label{h1}
\end{equation}
converges exponentially to the equilibrium in $L^2(\Omega)\times L^2(\Gamma)$.
\end{proposition}
\begin{remark}
Due to the lack of the surface diffusion $\delta_v\Delta_{\Gamma}v$, when establishing an entropy-entropy dissipation estimate, we need to prove an inequality analogous to \eqref{z1}, that is
\begin{equation}
C_1\|U^{\alpha}-V^{\beta}\|_{\Gamma}^2 + C_2\left(\|\nabla U\|_{\Omega}^2 + \|U - \overline{U}\|_{\Gamma}^2\right) \geq C_3\|V-\overline{V}\|_{\Gamma}^2.\label{zzz}
\end{equation}
The main point of this proposition is that, thanks to the linearity of the system, we can use the quadratic structure of the entropy to prove such an estimate without using the $L^{\infty}$-bounds of the solution (see \eqref{1star} below). For general $\alpha$ and $\beta$ it is highly unclear whether an estimate like \eqref{zzz} holds: consider for instance a state $V=U^{\frac{\alpha}{\beta}}$. Then, $\|U^\alpha-V^\beta\|_{\Gamma}=0$,
and the two remaining terms $\|\nabla U\|_{\Omega}^2$ and $\|U - \overline{U}\|_{\Gamma}^2$ on the left hand side of
\eqref{zzz} seem not strong enough to ensure
the integrability of $V$ for $\alpha \gg \beta$. Such cases remain open problems to be treated in future work.
\end{remark}
\begin{proof}
The unique equilibrium $(u_{\infty}, v_{\infty})$ satisfies
\begin{equation}\label{h1_1}
\begin{cases}
u_{\infty} = v_{\infty},\\
|\Omega|u_{\infty} + |\Gamma|v_{\infty} = M
\end{cases}
\end{equation}
where
\begin{equation*}
M = \int_{\Omega}u_0(x)dx + \int_{\Gamma}v_0(x)dS
\end{equation*}
is the initial mass.
For the sake of simplicity, we consider the quadratic entropy (which is an admissible entropy functional only because \eqref{h1} is linear)
\begin{equation}
E(u,v) = \|u\|_{\Omega}^2 + \|v\|_{\Gamma}^2,
\label{h2}
\end{equation}
its entropy dissipation
\begin{equation}
D(u,v) = -\frac{d}{dt}E(u,v) = 2\delta_u\|\nabla u\|_{\Omega}^2 + 2\|u - v\|_{\Gamma}^2,
\label{h3}
\end{equation}
and the relative entropy
\begin{equation}
\begin{aligned}
E(u,v) - E(u_{\infty},v_{\infty}) &= \|u\|_{\Omega}^2 + \|v\|_{\Gamma}^2 - \|u_{\infty}\|_{\Omega}^2 - \|v_{\infty}\|_{\Gamma}^2.
\end{aligned}
\label{h4}
\end{equation}
Similarly to \eqref{f4}, we decompose the relative entropy as follows:
\begin{equation*}
\begin{aligned}
E(u,v) - E(u_{\infty}, v_{\infty}) &= [E(u,v)-E(\overline{u}, \overline{v})] + [E(\overline{u},\overline{v}) - E(u_{\infty}, v_{\infty})]\\
&= [\|u - \overline{u}\|_{\Omega}^2 + \|v-\overline{v}\|_{\Gamma}^2] + [\|\overline{u} - u_{\infty}\|_{\Omega}^2 + \|\overline{v} - v_{\infty}\|_{\Gamma}^2].
\end{aligned}
\end{equation*}
In the spirit of Lemma \ref{lem:degenerate_estimate}, we want to have an estimate similar to \eqref{z1}:
\begin{equation}
C_1\|\nabla u\|_{\Omega}^2 + C_2\|u - v\|_{\Gamma}^2 \ge C_3\|v - \overline{v}\|_{\Gamma}^2.
\label{0star}
\end{equation}
This can be done by estimating
\begin{align}
C_1\|\nabla u\|_{\Omega}^2 + C_2\|u - v\|_{\Gamma}^2&\geq \frac{C_1}{T(\Omega)}\|u - \overline{u}\|_{\Gamma}^2 + C_2\|u - v\|_{\Gamma}^2
\geq \frac{C_1C_2/T(\Omega)}{C_2 + C_1/T(\Omega)}\|\overline{u} - v\|_{\Gamma}^2\nonumber\\
&= \frac{C_1C_2/T(\Omega)}{C_2 + C_1/T(\Omega)}(\|v - \overline{v}\|_{\Gamma}^2 + \|\overline{u} - \overline{v}\|_{\Gamma}^2),
\label{1star}
\end{align}
where we have used that $\int_{\Gamma} (\overline{u}-\overline{v})(v-\overline{v})\,dS=0$.
Now, we can proceed similarly to the degenerate case to get the exponential convergence to equilibrium. For completeness, we sketch the entropy method as follows:
\begin{equation}
\begin{aligned}
D(u,v)&= 2\delta_u\|\nabla u\|_{\Omega}^2 + 2\|u - v\|_{\Gamma}^2\\
&\geq \frac{\delta_u}{P(\Omega)}\|u - \overline{u}\|_{\Omega}^2 + \theta\left(C_1\|\nabla u\|_{\Omega}^2 + C_2\|u - v\|_{\Gamma}^2\right)\\
&\geq \frac{\delta_u}{P(\Omega)}\|u - \overline{u}\|_{\Omega}^2 + \theta C_3\|v - \overline{v}\|_{\Gamma}^2 + \theta C_3\|\overline{u} - \overline{v}\|_{\Gamma}^2.
\end{aligned}
\label{2star}
\end{equation}
By the mass conservation and the definition of the equilibrium \eqref{h1_1}, we get
\begin{equation}
\|\overline{u} - \overline{v}\|_{\Gamma}^2 = \frac{|\Omega|+|\Gamma|}{|\Omega|}\left(\|\overline{u} - u_{\infty}\|_{\Omega}^2 + \|\overline{v} - v_{\infty}\|_{\Gamma}^2\right).
\label{3star}
\end{equation}
Combining \eqref{2star} and \eqref{3star} yields
\begin{align*}
D(u,v) &\geq \frac{\delta_u}{P(\Omega)}\|u - \overline{u}\|_{\Omega}^2 + \theta C_3\|v - \overline{v}\|_{\Gamma}^2 \\
&\quad+ \theta C_3\frac{|\Omega|+|\Gamma|}{|\Omega|}(\|\overline{u} - u_{\infty}\|_{\Omega}^2 + \|\overline{v} - v_{\infty}\|_{\Gamma}^2)\\
&\geq C_4(E(u,v) - E(u_{\infty}, v_{\infty})).
\end{align*}
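The constant in \eqref{3star} comes from the mass conservation $|\Omega|\overline{u}+|\Gamma|\overline{v}=M=|\Omega|u_{\infty}+|\Gamma|v_{\infty}$, which forces the linear relation $\overline{u}-u_{\infty}=-\frac{|\Gamma|}{|\Omega|}(\overline{v}-v_{\infty})$. The following exact-arithmetic sketch (with made-up rational data) double-checks this relation:

```python
from fractions import Fraction

def relation_holds(omega, gamma, mass, vbar):
    # equilibrium: u_inf = v_inf and |Omega| u_inf + |Gamma| v_inf = M
    u_inf = Fraction(mass, omega + gamma)
    v_inf = u_inf
    # mass conservation determines ubar from vbar
    ubar = Fraction(mass - gamma * vbar, omega)
    # the linear relation forced by conservation of mass
    return ubar - u_inf == -Fraction(gamma, omega) * (vbar - v_inf)

assert all(relation_holds(omega, gamma, mass, vbar)
           for omega in (1, 2, 7)
           for gamma in (1, 3, 5)
           for mass in (1, 10)
           for vbar in (0, 1, 2))
```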
Hence, the solution satisfies the exponential convergence to equilibrium:
\begin{equation*}
\|u - u_{\infty}\|_{\Omega}^2 + \|v-v_{\infty}\|_{\Gamma}^2 \leq e^{-C_4t}(\|u_0 - u_{\infty}\|_{\Omega}^2 + \|v_0 - v_{\infty}\|_{\Gamma}^2).
\end{equation*}
\end{proof}
\vskip 0.5cm
\noindent{\bf Acknowledgements.} The first author is supported by International Research Training Group IGDK 1754. This work has partially been supported by NAWI Graz.
In many interesting physical, biological and social phenomena, whenever no intrinsic scale for the relevant variables is present, the emergence of ``scaling laws'' is phenomenologically observed~\cite{Newman}.
However, strictly speaking, a power law is not a proper way of fitting empirical data, since no choice of the exponent can keep the higher moments of a power law distribution from diverging, while every phenomenological distribution leads to finite values for all moments.
This is not just a technicality: it is rather a reflection of the fact that the long tail of a power law distribution is in practice cut off by the existence of some ``hidden'' scale, irrelevant in the scaling region, but eventually forcing some upper limit on the variables describing the system.
It would therefore be convenient to parametrize the data by means of more regular distribution functions, sufficiently damped for very large values of the variables, but admitting power law distributions as regular limits when the control parameter implementing the cutoff is sent to its limiting value.
A related issue concerns the effects of sampling, which may be nontrivial even when we restrict our attention to the expectation values of the sampled variables.
On average, sampling does not affect the distributions of individual objects belonging to different kinds. However, when we consider frequency distributions (that is, the number of kinds that are represented $k$ times in a given population), we cannot in general expect the frequency distribution in the samples to be the same as in the original population, even after averaging over many different samples, basically because the cutoff induced by sampling acts differently (and in general nontrivially) at different scales.
It is therefore quite important to be able to extract from the frequency distribution of the samples some information reflecting directly some intrinsic property of the underlying distribution.
Our purpose is therefore threefold. First, we want to discuss the general relationship existing between an arbitrary frequency distribution and the expectation value of the frequency distributions of its samples, and construct observables whose expectation values turn out to be independent of the sample size, and therefore coincide with the value taken by the same observables in the full distribution.
Moreover we want to study classes of distributions whose samples preserve the functional dependence on the parameters present in the original distribution, establishing the connection between the (a priori unknown) values of the parameters of the distribution and the (empirically measured) parameters of the sample distributions.
Finally we want to study the scaling limit of these distributions (when it exists), in order to explore the possibility of their use for the phenomenological description of systems that are theoretically expected to show scaling in the limit when all empirical cutoffs (including those induced by sampling) are going to disappear.
In Section \ref{framework} we establish the notation and the general framework of our analysis.
In Section \ref{moments} we construct a wide set of combinations of expectation values that do not depend on the sample size.
In Section \ref{ewens} we apply our approach to the popular Ewens sampling formula, showing that its features are consistent with the general pattern and computing its invariant expectation values.
In Section \ref{small} we consider the limiting case of a small sampling applied to a large population.
In Section \ref{large} we focus on the case when the original population and its samples are sufficiently large in comparison with typical frequency values, finding a useful mathematical relationship between the generating function of the expectation values of the sample distributions and the generating function of the original distribution.
In Section \ref{distributions} we analyze a class of distributions (the so-called negative binomial distributions) admitting a scaling limit and enjoying the property that the distribution of expectation values of the samples has the same mathematical form as the original distribution. We also compute in a closed form the values of the basic invariant expectation values for these distributions.
Finally in Section \ref{scaling} we analyze the scaling limit itself and discuss the conditions under which one may expect this limit to be a sensible description of the original system.
Appendices are devoted to the proofs of some mathematical results and to discussing the issue of correlation between random samples.
\section{The general framework}
\label{framework}
We are considering a set of $N$ objects (``individuals'') belonging to $S$ different kinds (``species''), and we assume that the set contains $\hat N_a$ objects of the $a$-th kind, subject to the constraint $\sum_a \hat N_a = N$.
A sample is a set of $n$ objects, containing $\hat n_a$ objects of the $a$-th kind, subject to the constraint $\sum_a \hat n_a = n$.
The probability $P_{\{\hat n_a\}}$ of extracting a specific sample $\{\hat n_a\}$ from a given set $\{\hat N_a\}$ is obtained from the multivariate hypergeometric distribution
$$P_{\{\hat n_a\}} = {N \choose n}^{-1}\prod_{a=1}^S {\hat N_a \choose \hat n_a}.$$
We can easily compute the relevant expectation values, obtaining in particular
$$\langle \hat n_a\rangle = \hat N_a {n \over N} = n\,\hat p_a, \qquad \qquad \langle \hat n_a^2\rangle -\langle \hat n_a\rangle ^2 = {N-n \over N-1} n \hat p_a (1-\hat p_a)$$
where $\hat p_a \equiv {\hat N_a/N}$ is the probability of extracting an object of the $a$-th kind in a single extraction.
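These expectation values can be cross-checked by computing the moments of the (univariate) hypergeometric distribution directly; the snippet below is a numerical illustration with arbitrarily chosen population and sample sizes:

```python
from math import comb

def hypergeom_moments(N, Na, n):
    """Mean and variance of n_a: draw n of N objects, Na of them of kind a."""
    lo, hi = max(0, n - (N - Na)), min(n, Na)
    probs = {k: comb(Na, k) * comb(N - Na, n - k) / comb(N, n)
             for k in range(lo, hi + 1)}
    mean = sum(k * p for k, p in probs.items())
    var = sum(k * k * p for k, p in probs.items()) - mean ** 2
    return mean, var

N, Na, n = 50, 20, 10          # made-up population and sample sizes
p = Na / N
mean, var = hypergeom_moments(N, Na, n)
assert abs(mean - n * p) < 1e-12                               # <n_a> = n p_a
assert abs(var - (N - n) / (N - 1) * n * p * (1 - p)) < 1e-12  # variance formula
```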
It may be useful to consider also the limit of small samples $\hat n_a \ll \hat N_a$. In this limit the probability of a specific sample is well approximated by the multinomial distribution
$$P_{\{\hat n_a\}} = n! \prod_{a=1}^S {1 \over \hat n_a!} (\hat p_a)^{\hat n_a}.$$
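The accuracy of this small-sample approximation for $N \gg n$ can be illustrated numerically (the population numbers below are made up):

```python
from math import comb, factorial

def multivariate_hypergeom(Ns, ns):
    # exact probability of the sample {ns} drawn from the set {Ns}
    num = 1
    for Na, na in zip(Ns, ns):
        num *= comb(Na, na)
    return num / comb(sum(Ns), sum(ns))

def multinomial(ps, ns):
    # small-sample (multinomial) approximation with p_a = N_a / N
    prob = factorial(sum(ns))
    for pa, na in zip(ps, ns):
        prob *= pa ** na / factorial(na)
    return prob

Ns = [30000, 50000, 20000]     # large population
ns = [2, 3, 1]                 # small sample
ps = [Na / sum(Ns) for Na in Ns]
exact = multivariate_hypergeom(Ns, ns)
approx = multinomial(ps, ns)
assert abs(exact - approx) / exact < 1e-3   # relative error is O(n^2 / N)
```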
A frequency distribution is a set of values $\{N_k\}$, where $N_k$ is the number of kinds such that for each of them there are $k$ objects in the original set. According to the definition, the following conditions must be satisfied:
$$\sum_{k=1}^N N_k = S, \qquad \qquad \qquad \sum_{k=1}^N k\,N_k = N.$$
The frequency distribution of a sample is a set of values $\{n_l\}$, satisfying the conditions
$$\sum_{l=0}^n n_l = S, \qquad \qquad \qquad \sum_{l=1}^n l\,n_l = n.$$
Notice that the frequency distribution of a sample formally includes the (unobservable) value $n_0$, corresponding to the number of kinds present in the original set which are not represented in the sample.
It is in principle possible to compute the probability of any sample distribution $\{n_l\}$ as a function of a given set $\{N_k\}$. To this purpose it is convenient to define the intermediate variables $N_{kl}$, representing the (random) number of kinds characterized by $k$ objects in the original set and by $l$ ($l \leq k$) objects in the sample. The variables $N_{kl}$ are strongly constrained, since they must satisfy all the conditions:
$$\sum_{l=0}^n N_{kl} = N_k, \qquad \qquad \qquad \sum_{k=1}^N N_{kl} = n_l.$$
\smallskip
The probability $P_{\{N_{kl}\}}$ of a specific configuration ${\{N_{kl}\}}$ follows from the general probability formula~\cite{Zelterman}:
$$P_{\{N_{kl}\}} ={N \choose n}^{-1} \prod_{k=1}^N \biggl[N_k! \prod_{l=0}^k {1 \over N_{kl}!} {k \choose l}^{N_{kl}} \biggr],$$
subject to the constraint $\sum_{l=0}^n N_{kl} = N_k$.
The probability $P_{\{n_l\}}$ of finding a frequency distribution $\{n_l\}$ in a sample is obtained by summing the probabilities $P_{\{N_{kl}\}}$ over all configurations satisfying the constraint $\sum_{k=1}^N N_{kl} = n_l$. The corresponding multivariate generating function can be defined as
$$\varepsilon^{(n)}(\{t_l\}) \equiv \sum_{\{n_l\}} P_{\{n_l\}} \prod_{l=0}^n t_l^{n_l} =\sum_{\{N_{kl}\}} P_{\{N_{kl}\}} \prod_{k=1}^N \prod_{l=0}^k t_l^{N_{kl}}.$$
It is also possible (and it will be quite convenient) to define a cumulative generating function for the probability of finding the frequency distributions $P_{\{n_l\}}$ for samples of all possible sizes:
$$E(x;\{t_l\}) \equiv \sum_{n=0}^N {N \choose n}\varepsilon^{(n)}(\{t_l\}) x^n = \sum_{\{N_{kl}\}} \prod_{k=1}^N \Bigl(N_k! \prod_{l=0}^k {1 \over N_{kl}!}\Bigl[ {k \choose l} x^l t_l \Bigr]^{N_{kl}} \Bigr) = \prod_{k=1}^N \Bigl[ \sum_{l=0}^k {k \choose l}x^l t_l \Bigr]^{N_k},$$
where we used the explicit expression of $P_{\{N_{kl}\}}$ and all the relevant constraints.
The expectation values $\langle n_l\rangle $ can be computed starting from the above expressions and from the relationship
$$\langle n_l\rangle = \sum _{k=1}^N \langle N_{kl}\rangle =\sum_{k=1}^N \sum_{\{N_{jm}\}} N_{kl} P_{\{N_{jm}\}}.$$
Straightforward manipulations lead to the results~\cite{Zelterman}
$$\langle N_{kl}\rangle = N_k {{k \choose l}{N-k \choose n-l} \over {N \choose n}}, \qquad \qquad \langle n_l\rangle = {\sum_{k=1}^N N_k {k \choose l}{N-k \choose n-l} \over {N \choose n}}.$$
It is easy to check that the following relationships are satisfied:
$$\sum_{l=0}^n \langle n_l\rangle = \sum_{k=1}^N N_k = S, \qquad \qquad
\sum_{l=0}^n l\, \langle n_l\rangle = \Bigl(\sum_{k=1}^N k\, N_k \Bigr){n \over N} = n.$$
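These hypergeometric expressions are easy to evaluate directly. The following Python sketch (an illustration, not part of the paper; the toy population is an arbitrary choice) computes $\langle n_l\rangle$ for a small population and checks both sum rules:

```python
from math import comb

def expected_nl(N_k, n, l):
    """<n_l> for a sample of n objects drawn without replacement from a
    population with frequency counts N_k = {k: number of kinds
    represented by k objects}."""
    N = sum(k * Nk for k, Nk in N_k.items())          # total objects
    return sum(Nk * comb(k, l) * comb(N - k, n - l)
               for k, Nk in N_k.items()) / comb(N, n)

# Toy population: three kinds of size 2 and one kind of size 4 (N = 10).
N_k = {2: 3, 4: 1}
S = sum(N_k.values())                                 # number of kinds, S = 4
n = 5
exp_nl = [expected_nl(N_k, n, l) for l in range(n + 1)]
print(sum(exp_nl))                                    # sum rule: equals S
print(sum(l * e for l, e in enumerate(exp_nl)))       # sum rule: equals n
```

Note that `math.comb` returns zero whenever the lower index exceeds the upper one, which automatically enforces the vanishing of impossible configurations.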
In order to fully appreciate the relevance of considerations based on the expectation values we must evaluate the weight of the fluctuations. Taking second derivatives of the generating function $E(x; \{t_l\})$ one obtains:
$$\langle n_l^2\rangle -\langle n_l\rangle ^2 = \sum_{k,k'} N_k N_{k'} {k \choose l} {k' \choose l} \Biggl[{ {N-k-k' \choose n-2 l} \over {N \choose n}}-{ {N-k \choose n-l} \over {N \choose n}}{ {N-k' \choose n-l} \over {N \choose n}} \Biggr] +\sum_k N_k {k \choose l} \Biggl[{ {N-k \choose n- l} \over {N \choose n}}- {k \choose l} { {N-2 k \choose n-2 l} \over {N \choose n}} \Biggr].$$
Notice that in the large $N$ limit the term quadratic in $N_k$ is depressed by a power of $1/N$. This observation suggests that very important limits of the above results may be obtained when considering large populations.
\section{Invariant expectation values}
\label{moments}
It is very important to be able to define a set of expectation values that are independent of the size of the sample, and therefore may reflect very directly the properties of the original frequency distribution.
Let's consider the following combinations of expectation values:
$$\langle m_{\{p_i\}}^{(n)}\rangle \equiv {n \choose P}^{-1}\sum_{\{q_i\}} \prod_{i=1}^I \Bigl[{q_i \choose p_i} {\partial \over \partial t_{q_i}}\Bigr] \varepsilon^{(n)}(\{t_l\})\Big|_{\{t_l=1\}},$$
where $p_i$ are $I$ arbitrary positive integer numbers, subject only to the constraint that $P \equiv \sum_i p_i \leq n$.
The definition of the quantities appearing in the r.h.s. implies that the derivatives with respect to $t_{q_i}$ are the joint factorial moments of the distribution; therefore we are dealing with weighted combinations of joint factorial moments. When some of the indices $p_i$ are equal to one, the expectation values may be expressed in terms of a combination of lower rank moments ($I' < I$).
It is possible to recognize that the quantities $\langle m_{\{p_i\}}^{(P)}\rangle $ are related (up to a trivial combinatorial factor taking into account the existence of $n_p$ coincident values of the indices $p_i$) to the probability of finding the configurations $\{p_i\}$ in the sample containing $P$ elements, and are therefore strictly connected with the probabilities $P_{\{n_p\}}$.
Exploiting the properties of the generating function $E(x; \{t_l\})$ we prove in Appendix \ref{Expectation} that
$$\langle m_{\{p_i\}}^{(n)}\rangle =\langle m_{\{p_i\}}^{(N)}\rangle \equiv M_{\{p_i\}}$$
for all sets $\{p_i\}$ such that $P \leq n$. Hence the expectation values of the nontrivial invariant moments $m_{\{p_i\}}$, evaluated for samples of arbitrary size $n \geq P$, coincide with the moments $M_{\{p_i\}}$ of the original frequency distribution. If the original set was itself generated by a random process, the $M_{\{p_i\}}$ will also be expectation values.
Recalling that $\varepsilon^{(N)}(\{t_l\}) \equiv \prod_k t_k^{N_k}$ we may now generate a representation of all $P_{\{n_p\}}$ in terms of $N_k$, without making use of the coefficients $N_{kl}$.
The properties of the binomial coefficients make it possible to invert the relationship between invariant moments and joint factorial moments, thus finding that
$$\Bigl[\prod_{i=1}^I {\partial \over \partial t_{q_i}}\Bigr] \varepsilon^{(n)}(\{t_l\})\Big|_{\{t_l=1\}} = \sum_{\{p_i\}}\prod_{i=1}^I (-1)^{p_i-q_i}{p_i \choose q_i}\Bigl\langle {n \choose P} m_{\{p_i\}}^{(n)}\Bigr\rangle = \sum_{\{p_i\}}\prod_{i=1}^I (-1)^{p_i-q_i}{p_i \choose q_i}{n \choose P} M_{\{p_i\}}.$$
The basic invariant moments are
$$m_p^{(n)} = {n \choose p}^{-1} \sum_{q=p}^n { q \choose p}n_q.$$
According to the inversion formula
$$\langle n_l\rangle = {\partial \varepsilon^{(n)} \over \partial t_l}|_{\{t_m =1\}} = \sum_{p=l}^n (-1)^{p-l}{p \choose l}{n \choose p}\langle m_p^{(n)}\rangle =\sum_{p=l}^n (-1)^{p-l}{p \choose l}{n \choose p} M_p.$$
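The inversion formula can be tested exactly on a toy population: reconstructing $\langle n_l\rangle$ from the population moments $M_p = m_p^{(N)}$ must reproduce the hypergeometric expectation values. A Python sketch (illustrative; the population is an arbitrary choice, not from the paper):

```python
from math import comb

def M(N_k, p):
    """Population invariant moment M_p = m_p^{(N)}."""
    N = sum(k * Nk for k, Nk in N_k.items())
    return sum(comb(k, p) * Nk for k, Nk in N_k.items()) / comb(N, p)

def nl_from_moments(N_k, n, l):
    # inversion formula: <n_l> = sum_p (-1)^(p-l) C(p,l) C(n,p) M_p
    return sum((-1) ** (p - l) * comb(p, l) * comb(n, p) * M(N_k, p)
               for p in range(l, n + 1))

def nl_hypergeom(N_k, n, l):
    N = sum(k * Nk for k, Nk in N_k.items())
    return sum(Nk * comb(k, l) * comb(N - k, n - l)
               for k, Nk in N_k.items()) / comb(N, n)

N_k = {2: 3, 4: 1}                 # toy population, N = 10
for l in range(6):
    print(l, nl_from_moments(N_k, 5, l), nl_hypergeom(N_k, 5, l))
```

The two columns agree to machine precision for every $l$, since both the invariance theorem and the binomial inversion are exact identities.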
One may define generating functions for the expectation values of $n_l$ and of the basic invariant moments:
$$f^{(n)}(t) \equiv \sum_{l=0}^n \langle n_l\rangle t^l, \qquad \qquad g^{(n)}(z) \equiv f^{(n)} \bigl(1+{z \over n}\bigr) = \sum_{p=0}^n {n \choose p} M_p \Bigl({z \over n}\Bigr)^p.$$
Notice that a special case of the above formula is
$$F(t) \equiv \sum_{k=0}^N N_k t^k, \qquad \qquad G(z) \equiv F\bigl(1+{z \over N}\bigr)= \sum_{p=0}^N {N \choose p} M_p \Bigl({z \over N}\Bigr)^p.$$
It is immediate to recognize that $g^{(n)}(0) =G(0)= M_0 \equiv S$, and ${dg^{(n)} \over dz}(0) = {dG \over dz}(0) = M_1 \equiv 1$.
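As a concrete check of the size-invariance, one can compute $\langle m_p^{(n)}\rangle$ exactly from the hypergeometric values of $\langle n_l\rangle$ and compare it with the population moment $M_p = m_p^{(N)}$. The Python sketch below (an illustration, not from the paper; the population is arbitrary) does this for $p = 2$:

```python
from math import comb

def moment_from_freqs(freqs, p):
    """Basic invariant moment m_p = C(n,p)^(-1) sum_q C(q,p) n_q for a
    frequency distribution freqs = {q: n_q}."""
    n = sum(q * nq for q, nq in freqs.items())
    return sum(comb(q, p) * nq for q, nq in freqs.items()) / comb(n, p)

def expected_moment(N_k, n, p):
    """Exact <m_p^{(n)}> built from the hypergeometric <n_l>."""
    N = sum(k * Nk for k, Nk in N_k.items())
    acc = 0.0
    for l in range(p, n + 1):
        exp_nl = sum(Nk * comb(k, l) * comb(N - k, n - l)
                     for k, Nk in N_k.items()) / comb(N, n)
        acc += comb(l, p) * exp_nl
    return acc / comb(n, p)

N_k = {2: 3, 4: 1}                     # toy population, N = 10
M2 = moment_from_freqs(N_k, 2)         # population moment M_2 = 0.2
print(M2, expected_moment(N_k, 5, 2))  # identical for any sample size n >= 2
```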
\section{Application to Ewens sampling formula}
\label{ewens}
The multivariate Ewens distribution~\cite{Ewens,Karlin}, called in genetics the Ewens sampling formula, describes a specific probability for the partition of $n$ into parts, and found its main applications in the context of the neutral theory of evolution and in the unified neutral theory of biodiversity~\cite{Hubbell,Rosindell}. Since the Ewens formula and its possible generalizations have been the subject of a wide and still growing literature~\cite{Griffiths, Etienne, Lessard, Lambert}, it may be interesting to apply the results presented in Section \ref{moments} to this specific instance. In our notation the Ewens probability distribution takes the form
$$P_{\{n_l\}} ={1 \over \aleph_n} \prod_{l=1}^n {1 \over n_l!}\Bigl({\theta \over l}\Bigr)^{n_l},\qquad \qquad \qquad \aleph_n \equiv {\Gamma(\theta+n) \over n! \,\Gamma(\theta)},$$
where $0< \theta < \infty$ and $\sum l\,n_l = n$.
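As a sanity check of this expression, the probabilities can be summed over all partitions of a small $n$. The Python sketch below (illustrative only; $\theta$ and $n$ are arbitrary choices) enumerates partitions as multiplicity dictionaries $\{l: n_l\}$ and verifies that the total probability is one:

```python
from math import gamma, factorial

def partitions(n, max_part=None):
    """Yield all partitions of n as multiplicity dicts {l: n_l}."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield {}
        return
    for l in range(min(n, max_part), 0, -1):
        for rest in partitions(n - l, l):
            p = dict(rest)
            p[l] = p.get(l, 0) + 1
            yield p

def ewens_prob(nl, theta):
    """Ewens probability of the frequency distribution nl = {l: n_l}."""
    n = sum(l * m for l, m in nl.items())
    aleph = gamma(theta + n) / (factorial(n) * gamma(theta))
    p = 1.0 / aleph
    for l, m in nl.items():
        p *= (theta / l) ** m / factorial(m)
    return p

theta, n = 1.7, 8
total = sum(ewens_prob(nl, theta) for nl in partitions(n))
print(total)        # normalization: equals 1 up to rounding
```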
Joint factorial moments of the Ewens distribution are easily computed~\cite{Johnson} and one can show that
$$\prod_{i=1}^I \Bigl[ {\partial \over \partial t_{q_i}}\Bigr] \sum_{\{n_l\}} P_{\{n_l\}} \prod_{l=0}^n t_l^{n_l}= {\aleph_{n-Q} \over \aleph_n}\prod_i \Bigl({\theta \over q_i}\Bigr) ,$$
where $Q \equiv \sum_i q_i \leq n$.
We are then left with the task of computing the summations appearing in the equation
$$\langle m_{\{p_i\}}^{(n)}\rangle ={n \choose P}^{-1} \sum_{\{q_i \}} \prod_{i=1}^I { q_i \choose p_i}{\aleph_{n-Q} \over \aleph_n}\prod_i \Bigl({\theta \over q_i}\Bigr) = {P! \,(n-P)!\,\Gamma(\theta) \over \Gamma(\theta + n)} \prod_{i=1}^I \Bigl({\theta \over p_i}\Bigr) \sum_{\{q_i \}} \prod_{i=1}^I { q_i-1 \choose p_i-1} {\Gamma(\theta+n-Q) \over \Gamma(\theta)\,(n-Q)!}.$$
We prove in Appendix \ref{Combinatorics} that
$$\sum_{\{q_i \geq p_i \}} \prod_{i=1}^I { q_i-1 \choose p_i-1} {\Gamma(\theta+n-Q) \over \Gamma(\theta)\,(n-Q)!} = {\Gamma(\theta+n) \over \Gamma(\theta+P)\,(n-P)!},$$
hence
$$\langle m_{\{p_i\}}^{(n)}\rangle = {P! \,\Gamma(\theta) \over \Gamma(\theta+P)}\prod_{i=1}^I \Bigl({\theta \over p_i}\Bigr) \equiv {1 \over \aleph_P}\prod_{i=1}^I \Bigl({\theta \over p_i}\Bigr),$$
showing explicitly that the expectation values of the invariant moments of the Ewens distribution are independent of the sample size and related to the probability of the configuration $\{p_i\}$ in the sampling of $P$ elements.
We stress that invariant moments, because of their independence from the size of the sample, may become a highly valuable tool in testing the applicability of Ewens distribution (and of the conceptual assumptions underlying its derivation) to the interpretation of actual empirical data.
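This size-independence can also be observed directly in simulation. A standard way to draw from the Ewens distribution is the Chinese restaurant process; the sketch below (illustrative, with arbitrary parameter choices, not part of the paper) checks that the sample average of $m_2^{(n)}$ stays close to ${1 \over \aleph_2}{\theta \over 2} = 1/(1+\theta)$, whatever the sample size:

```python
import random
from math import comb

def crp_partition(n, theta, rng):
    """Kind sizes for n objects drawn from the Ewens distribution,
    generated by the Chinese restaurant process."""
    sizes = []
    for m in range(n):
        r = rng.random() * (m + theta)
        for i, s in enumerate(sizes):
            r -= s
            if r < 0:
                sizes[i] += 1            # join an existing kind
                break
        else:
            sizes.append(1)              # new kind, prob. theta/(m+theta)
    return sizes

def m_p(sizes, p):
    """Basic invariant moment of the sampled frequency distribution."""
    n = sum(sizes)
    return sum(comb(q, p) for q in sizes) / comb(n, p)

rng = random.Random(7)
theta, n, trials = 2.0, 40, 10000
avg = sum(m_p(crp_partition(n, theta, rng), 2) for _ in range(trials)) / trials
print(avg)     # close to 1/(1 + theta) = 1/3, independently of n
```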
\section{Large population and small samples}
\label{small}
A significant simplification occurs when $N \rightarrow \infty$ while all other variables are kept finite.
Setting $\tilde x \equiv Nx$ and $t_0 = 1$ in the cumulative generating function and taking the large $N$ limit we obtain
$$\tilde E(\tilde x;\{t_l\}) \equiv 1+\sum_{n=1}^\infty \tilde \varepsilon^{(n)} (\{t_l\}) {\tilde x^n \over n!} =\prod_{k=1}^\infty \Bigl[1+\sum_{l=1}^k {k \choose l} \Bigl({\tilde x \over N}\Bigr)^l t_l\Bigr]^{N_k} \rightarrow \prod_{k=1}^\infty \Bigl[1+\sum_{l=1}^\infty \Bigl( {k\,\tilde x \over N}\Bigr)^l {t_l \over l!}\Bigr]^{N_k}.$$
Let's now define (for $n,l$ different from zero) the following set of coefficients:
$$c^{(n)}(\{n_l\}) \equiv (-1)^{s-1} (s-1)!\prod_{l=1}^n {1 \over n_l!}\Bigl({1 \over l!}\Bigr)^{n_l},$$
where $s= \sum_l n_l$ and $n = \sum_l l\,n_l$, and notice that the definition of $\tilde E$ implies that
$$\ln \tilde E(\tilde x;\{t_l\}) \equiv \sum_{n=1}^\infty \Bigl[\sum_{\{n_l\}} c^{(n)}(\{n_l\}) \prod_{l=1}^n \Bigl(\tilde \varepsilon^{(l)}(\{t_l\}) \Bigr)^{n_l}\Bigr] \tilde x^n.$$
It is also possible to recognize that, under the same assumptions,
$$\ln \tilde E(\tilde x;\{t_l\}) = \sum_{n=1}^\infty \Bigl[\sum_{\{n_l\}} c^{(n)}(\{n_l\}) \prod_{l=1}^n t_l^{n_l} \Bigr] \tilde m_n^{(N)}\, \tilde x^n,$$
where we introduced the large $N$ limit of the basic invariant moments: $\tilde m_p^{(N)} \equiv \sum_q N_q (q/ N)^p$.
Comparing the two results we conclude that, for each value of $n > 0$,
$$\sum_{\{n_l\}} c^{(n)}(\{n_l\}) \prod_{l=1}^n \Bigl(\tilde \varepsilon^{(l)}(\{t_l\}) \Bigr)^{n_l}=\Bigl[\sum_{\{n_l\}} c^{(n)}(\{n_l\}) \prod_{l=1}^n t_l^{n_l} \Bigr] \tilde m_n^{(N)}.$$
These equations allow in principle for the recursive determination of all $\tilde \varepsilon^{(n)}(\{t_l\})$ in terms of $\{\tilde m_p^{(N)}\}$ (with $p\leq n$), starting from the initial condition $\tilde \varepsilon^{(1)} = t_1$. Higher rank invariant moments ($I > 1$) in the large $N$ limit become polynomials in the basic moments. However one must keep in mind that, when the set $\{N_k\}$ is not fixed, but generated by a probability distribution (as in the case of the Ewens formula), the expectation values of the products of basic moments appearing in the l.h.s. do not coincide with the products of the expectation values.
\section{Large populations and large samples}
\label{large}
When $k,l \ll N,n$ one may systematically exploit the property that, for small $a$ and $b$,
$$ {N-a \choose n-b } \rightarrow { \rho^b (1-\rho)^{a-b} \over \rho^n (1-\rho)^{N-n}} \qquad \qquad \qquad \rho \equiv {n \over N}.$$
Expressing $N$ and $n$ in terms of $N_{kl}$ one may then obtain
$$P_{\{N_{kl}\}} \rightarrow \prod_{k=1}^N \biggl[N_k! \prod_{l=0}^k {1 \over N_{kl}!} P_{kl}^{N_{kl}}\biggr] \qquad \qquad \qquad P_{kl} \equiv {k \choose l} \rho^l (1-\rho)^{k-l} .$$
As shown in Appendix \ref{Fluctuations}, when computing expectation values with the above probability distribution, the constraint $\sum l\,n_l = n$ becomes irrelevant in the large $N$ limit, and expectation values of products of $N_{kl}$ with different values of the index $k$ factorize into products of expectation values computed for each separate value of $k$.
We can therefore compute directly the generating function for a fixed sample size, generalizing the multivariate multinomial distribution:
$$\varepsilon^{(n)}(\{t_l\}) = \prod_{k=1}^N \sum_{\{N_{kl}\}} \biggl[N_k! \prod_{l=0}^k {1 \over N_{kl}!}(P_{kl} \,t_l)^{N_{kl}}\biggr] =\prod_{k=1}^N \Bigl[ \sum_{l=0}^k P_{kl} \,t_l\Bigr]^{N_k}.$$
A consistency check is easily obtained by observing that $\varepsilon^{(n)}(\{1\}) = 1$, because of the property that $\sum_{l=0}^k P_{kl} = 1$.
The expectation value of $n_l$ turns out to be:
$$\langle n_l\rangle = \sum_{k=1}^N N_k P_{kl},$$
and one may check that the conditions on $\sum_l \langle n_l\rangle $ and on $\sum_l l\, \langle n_l\rangle $ are still satisfied.
We can also estimate the behavior of fluctuations when $k,l \ll N,n$:
$$\langle n_l^2\rangle - \langle n_l\rangle ^2 = \sum_{k=1}^N N_k P_{kl}(1 -P_{kl}).$$
The above expression is always smaller than $\langle n_l\rangle $ and as a consequence fluctuations become unimportant for sufficiently large values of $\langle n_l\rangle $.
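The quality of the approximation $\langle n_l\rangle \simeq \sum_k N_k P_{kl}$ is easy to probe numerically. The sketch below (an illustration with an arbitrary population, not from the paper) compares the exact hypergeometric expectation with the binomial ("thinning") form when $k,l \ll N,n$:

```python
from math import comb

def exact_nl(N_k, n, l):
    """Exact hypergeometric expectation <n_l>."""
    N = sum(k * Nk for k, Nk in N_k.items())
    return sum(Nk * comb(k, l) * comb(N - k, n - l)
               for k, Nk in N_k.items()) / comb(N, n)

def thinned_nl(N_k, rho, l):
    # large-N limit: <n_l> = sum_k N_k C(k,l) rho^l (1-rho)^(k-l)
    return sum(Nk * comb(k, l) * rho ** l * (1 - rho) ** (k - l)
               for k, Nk in N_k.items())

# Population with N = 10^4: 2000 kinds of size 2, 600 kinds of size 10.
N_k = {2: 2000, 10: 600}
N = sum(k * Nk for k, Nk in N_k.items())
n = N // 10                              # sampling fraction rho = 0.1
for l in range(4):
    print(l, exact_nl(N_k, n, l), thinned_nl(N_k, n / N, l))
```

The relative discrepancy is of order $k^2/N$, i.e. about a percent here, consistently with the expansion used in the text.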
In the same limit we may derive a notable relationship between the generating function of the original frequency distribution and the generating function of the expectation values of its samples. In fact we may recognize that for sufficiently large $N$ and $n$
$$g^{(n)}(z) = \sum_{p=0}^\infty M_p {z^p \over p!}= G(z).$$
Since in general $f^{(n)}(t) = g^{(n)}\bigl(n(t-1)\bigr)$ and $F(t) = G\bigl(N(t-1)\bigr)$, it is then easy to check that
in the limit under consideration
$$f^{(n)}(t) = F(1-\rho+\rho\,t).$$
As a direct consequence of these results, whenever the (size-independent) function $\gamma(z) \equiv G(z)-G(0)$ can be cast into a form exhibiting no explicit parametric dependence on $N$, the expectation values $\langle n_l\rangle $ can be obtained from $N_k$ by the replacement $N \rightarrow n$.
Notice that in the limit $k,l \ll N,n$ the definition of the basic invariant moments $m_p^{(n)}$ simplifies to
$$m_p^{(n)} \rightarrow {p! \over n^p}\sum_{l=p}^n n_l {l \choose p}.$$
It is worth analyzing in this limit the explicit expressions of the second basic invariant moment:
$$M_2 \rightarrow {1 \over N^2} \sum_{k=1}^N k(k-1) N_k = \sum_{a=1}^S \bigl({\hat N_a \over N}\bigr)^2 -{1 \over N} = \sum_{a=1}^S \langle \bigl({\hat n_a \over n}\bigr)^2\rangle -{1 \over n} \equiv {1 \over \alpha}.$$
As shown in Appendix \ref{Correlation} the above results may be used also in order to parametrize the expected correlation between samples under the assumption of independent random sampling.
\section{A class of distributions and its properties}
\label{distributions}
As mentioned in the Introduction, distributions found in samples may often correspond to systems whose asymptotic ($N \rightarrow \infty$) distribution is expected to obey a scaling law. However the exponent of the scaling law will in general be nontrivial, in contrast with the prediction offered by the simplest neutral models. An example of empirical and theoretical evidence for nontriviality is offered by surname frequency distributions (see Ref.~\cite{Rossi} for a recent review), recalling that surnames are expected to mimic the behavior of selectively neutral alleles. It is therefore especially interesting to consider parametrizations that may reflect nontriviality of exponents, and in particular the class of negative binomial distributions~\cite{Hilbe}, which can be obtained starting from the generating function
$$ F_c (t) = {N \over x}{(1-x)^{1-c} \over c} \bigl[1-(1-x t)^c \bigr] = \sum_{k=1}^\infty {N \over x}{(1-x)^{1-c} \over \Gamma(1-c)}{\Gamma(k-c) \over k!} (xt)^k,$$
where $0< x< 1$ and the parameter $c$ is assumed to vary in the range $0 \leq c < 1$.
The asymptotic behaviour of the distribution for large $k$ is easily obtained by observing that in this limit
$${\Gamma (k-c) \over k!} \rightarrow {1 \over k^{1+c}}, \qquad \qquad
N_k \rightarrow {N \over x}{(1-x)^{1-c} \over \Gamma(1-c)}{x^k \over k^{1+c}}.$$
We can now compute the generating function for the expectation values of the samples according to the general rule previously discussed, and obtain
$$f_c(t) =f_c(0)+{n \over y}{(1-y)^{1-c} \over c} \bigl[1-(1-y t)^c \bigr],$$
where we have defined $y = {\rho x \over 1-x+\rho x}$.
The distribution of the samples has the same form as the original distribution, once the replacements $N \rightarrow n$ and $x \rightarrow y$ have been performed, and therefore we obtain the asymptotic behaviour
$$n_l \rightarrow {n \over y}{(1-y)^{1-c} \over \Gamma(1-c)}{y^l \over l^{1+c}}.$$
It is possible to define a combination of parameters independent of the dimension of the sample:
$$\beta = N{1-x \over x} = n{1-y \over y}.$$
It is useful to represent $x$ and $y$ in a form showing explicitly their dependence on the dimension of the sample and on the invariant parameter $\beta$:
$$x = {N \over \beta+N}, \qquad \qquad \qquad y = {n \over \beta+n}.$$
It is now possible to evaluate the expectation value of the invariant moments from the expression
$$\gamma_c(z) \equiv G_c(z)-G_c(0) = {\beta \over c}\Bigl[ 1-\Bigl(1-{ z \over \beta}\Bigr)^c\Bigr],$$
showing no explicit parametric dependence on $N$; we therefore obtain (for $p \neq 0$)
$$M_p = \beta^{1-p} {\Gamma(p-c) \over \Gamma(1-c)} \rightarrow {\beta^{1-p} \over \Gamma(1-c)} {p! \over p^{1+c}}, \qquad \qquad \alpha \equiv {1 \over M_2} = {\beta \over 1-c}.$$
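These expressions can be checked against the explicit $N_k$ of the negative binomial class. The sketch below (illustrative; $\beta$, $c$, $N$ and the cutoff are arbitrary choices) builds $N_k$ from the ratio $N_{k+1}/N_k = x\,(k-c)/(k+1)$ implied by the gamma functions in the series, and compares $m_2^{(N)}$ with $1/\alpha = (1-c)/\beta$:

```python
from math import gamma

def negbin_counts(N, beta, c, kmax):
    """N_k of the negative binomial class, generated via the ratio
    N_{k+1}/N_k = x (k - c)/(k + 1), with N_1 = N (1 - x)^(1 - c)."""
    x = N / (beta + N)
    counts = {}
    Nk = N * (1 - x) ** (1 - c)          # k = 1 term of the series
    for k in range(1, kmax + 1):
        counts[k] = Nk
        Nk *= x * (k - c) / (k + 1)
    return counts

beta, c, N = 5.0, 0.5, 10_000
counts = negbin_counts(N, beta, c, kmax=60_000)
total = sum(k * v for k, v in counts.items())         # ~ N, i.e. M_1 = 1
m2 = sum(k * (k - 1) * v for k, v in counts.items()) / (N * (N - 1))
print(total, m2, gamma(2 - c) / gamma(1 - c) / beta)  # m2 ~ (1-c)/beta
```

The cutoff `kmax` must be large enough that $x^{k_{\max}}$ is negligible; with the values above the truncated tail is far below the quoted precision.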
The limit of the above results when $c \rightarrow 0$ is smooth, and it corresponds to Fisher distribution~\cite{Fisher}, such that
$$F_0(t) = - \beta \ln (1-x t), \qquad \qquad \qquad N_k = \beta {x^k \over k},$$
and
$$f_0(t) = \beta \ln \bigl({\beta+N \over \beta+ n}\bigr) -\beta \ln (1-y t), \qquad \qquad n_l = \beta {y^l \over l}.$$
The generating function of the invariant moments is obtained from $\gamma_0(z) = - \beta \ln (1- z/\beta)$,
and as a consequence the expected values of the invariant moments ($p \neq 0$) are exactly $M_p =(p-1)!\, \beta^{1-p}$.
Notice in particular the relationship $\beta = \alpha$, peculiar to Fisher distribution. By comparing these results with the large $\theta$ limit of the invariant moments of Ewens distribution we may easily check Watterson's~\cite{Watterson} and Hubbell's~\cite{Hubbell} observation that in this limit $\theta$ is strictly connected to Fisher's $\alpha$.
\section{The scaling limit}
\label{scaling}
Let's now consider very large systems, and assume that we can gather information only through the sampling of
$n$ objects belonging to the system, with $n$ large but not necessarily comparable to $N$.
The analysis of the invariant moments may then allow us to check the applicability of a phenomenological description of the samples based on some distribution falling into the classes discussed in the previous Sections.
In the case of a positive response to the check it is then possible to find numerical estimates of the parameter $\beta$ and of the exponent $c$.
Such estimates are clearly meaningful only if $\beta$ does not turn out to be significantly greater than $n$.
Under these assumptions, we can infer a description of the original system, and in the case $N \gg n$ such a description will correspond to computing the limit $x \rightarrow 1$ of the previous results.
As a consequence, at least for observable (i.e. not too large) values of $k$, the original distribution is expected to be well described by the scaling form
$$N_k \rightarrow {N^c \beta^{1-c} \over \Gamma(1-c)}{1 \over k^{1+c}}.$$
\section{I. Introduction}
The study of quantum phase transition is a central issue in modern
condensed matter physics. It is widely believed that the
Ginzburg-Landau-Wilson theory for classical phase transition may
fail to describe the quantum phase transition as a result of the
quantum interference effect between classical paths(the Berry phase
effect). Recently, the concepts of quantum order and deconfined
quantum criticality have been put forward theoretically. On the
experimental side, the study of quantum phase transitions plays an
important role in areas ranging from high-Tc cuprates and heavy Fermion
systems to cold atom systems\cite{Sachdev1}.
The bilayer Heisenberg model(BHM) on square lattice is a standard
model for the study of quantum phase transition. With the increase
of the interlayer coupling($J_{\perp}$) over the intralayer
coupling($J_{\parallel}$), the ground state of the system evolves
from a state with antiferromagnetic long range order to a quantum
disordered state through a continuous phase transition. Much
theoretical and numerical efforts have been devoted to the study of
this quantum phase transition.
On theoretical side, perturbative calculations starting from both
the ordered side(spin wave expansion)\cite{spinwave} and the
disordered side(the bond operator expansion)\cite{bond} have been
applied to the system. However, due to the biased nature of
perturbative methods, none of them can give an accurate description
of the system in the near vicinity of the quantum phase transition.
The problem is also treated with the Schwinger Boson mean field
theory\cite{schwinger,Millis}. Although the theory does predict a
phase transition between the antiferromagnetic ordered state and the
quantum disordered state, the nature of the transition is incorrect.
The mean field theory predicts a discontinuous dimerization
transition around $J_{\perp}/J_{\parallel}=4.62$ into a state
composed of independent interlayer dimers, while in the real system,
the intralayer correlation is nonzero for any finite
$J_{\perp}/J_{\parallel}$.
On numerical side, the model is thoroughly studied by a variety of
methods including the high temperature series expansion\cite{series}
and the quantum Monte Carlo simulation(Stochastic series
expansion)\cite{qmc,Wang}. These numerical works confirm the
existence of the quantum critical point around
$\alpha=J_{\perp}/J_{\parallel}\approx 2.52$. The critical exponents are found to
be consistent with those of the classical 3D Heisenberg universality
class, indicating the irrelevance of the Berry phase effect in this
phase transition.
As the quantum phase transition occurs at zero temperature, it is
natural to find a description of it in terms of an explicit ground
state wave function. The variational approach to quantum phase
transition has the virtue that it focuses directly on the zero
temperature behavior of the system and provides much more detailed
information on the quantum critical behavior. In this regard, a
RVB-type variational wave function\cite{RVB,loop} had been applied
to the study of the quantum phase transition in the BHM. The wave
function is derived from Gutzwiller projection of Schwinger Boson
mean field ground state. It is well known that such a RVB wave
function can describe both the magnetic ordered and the quantum
disordered state. Thus, it has the potential to provide an unbiased
description of the quantum phase transition in BHM. The same type of
variational wave function has been successfully applied to the study
of the single layer two-dimensional Heisenberg
model\cite{RVB,schwinger1}. However, for the BHM, the variational
calculation in \cite{Yoshioka} using such a wave function predicts a
critical coupling $\alpha_{c}=3.51$, which is a very bad estimate as
compared to the result of numerical simulation. A central issue to
be addressed in this paper is to understand why the Bosonic RVB
state, which works so well on square lattice, fails for the BHM and
how to improve it.
In this paper, we propose a RVB-type variational wave function with
two variational parameters for the BHM. Similar to \cite{Yoshioka},
our wave function is derived from Gutzwiller projected Schwinger
Boson mean field state. However, in our theory the intralayer RVB
pairing and interlayer RVB pairing are treated as two independent
variational parameters, rather than being determined by mean field
self-consistent equations. We find our variational wave function
provides an accurate description of the quantum phase transition of
the BHM. We find that the transition is continuous. By analyzing the
spin structure factor, ground state fidelity susceptibility and
Binder moment ratio $Q_{2}$, the critical coupling strength is
estimated to be $\alpha_{c}\approx 2.62$, in good agreement with
those determined from the numerical simulation. The critical
exponent estimated from the scaling analysis of the $Q_{2}$ data is
also consistent with that of the classical 3D Heisenberg universality
class. Our result indicates that the Bosonic RVB wave function
derived from Gutzwiller projection of the Schwinger Boson mean field
state can provide an accurate description of the quantum phase
transition in quantum antiferromagnets. We also find that the
failure of the Schwinger Boson mean field theory originates from the
overestimation of the tendency to form interlayer dimers, which is
again caused by the relaxation of the no double occupancy
constraint.
The paper is organized as follows. In section II, we introduce the
BHM and the Bosonic RVB wave function. In section III, we present
the numerical method to do calculation on such wave functions. In
section IV, we present the numerical results and determine the
critical point of the phase transition by analyzing the results of
fidelity susceptibility and Binder moment ratio. In section V, we
present a discussion on related issues and conclude this paper.
\section{II. The bilayer Heisenberg model and the RVB-type variational wave function}
The model(BHM) we study in this paper is given by
\begin{equation}
\mathrm{H}=J_{\parallel}\sum_{<i,j>,\mu}\vec{\mathrm{S}}_{i}^{\mu}\cdot\vec{\mathrm{S}}_{j}^{\mu}
+J_{\perp}\sum_{i}\vec{\mathrm{S}}_{i}^{1}\cdot\vec{\mathrm{S}}_{i}^{2},
\end{equation}
where $\vec{\mathrm{S}}_{i}^{\mu}$ denotes the spin operator at site
$i$ of layer $\mu(=1,2)$. $\sum_{<i,j>}$ means the summation over
nearest-neighboring sites on the square lattice of each layer.
$\alpha=J_{\perp}/J_{\parallel}$ is the only dimensionless parameter
of the model. When $\alpha=0$, the model describes two decoupled
two-dimensional Heisenberg models, each of which is
antiferromagnetically ordered at zero temperature. When $\alpha
\rightarrow \infty $, the system reduces to $N$ decoupled interlayer
dimers and is in a trivial quantum disordered state. A
continuous quantum phase transition connects these two limits.
Earlier numerical simulation shows that the phase transition occurs
around $\alpha_{c}=2.52$\cite{qmc,Wang}.
The Bosonic RVB wave function we will adopt in this study is made of
coherent superposition of spin singlet configurations on the lattice
and can be written as
\begin{equation}
|\mathrm{RVB}\rangle=\sum_{\{i_{k},j_{k}
\}}A(\{i_{k},j_{k}\})\prod_{k=1}^{N/2}S(i_{k},j_{k}),
\end{equation}
in which
$S(i_{k},j_{k})=\frac{1}{\sqrt{2}}(\uparrow_{i_{k}}\downarrow_{j_{k}}-\downarrow_{i_{k}}\uparrow_{j_{k}})$
denotes the spin singlet pair between site $i_{k}$ and $j_{k}$.
$A(\{i_{k},j_{k}\})$ are the coefficients of the coherent
superposition. In our case, $A(\{i_{k},j_{k}\})$ can be written in a
factorizable
form $A(\{i_{k},j_{k}\})=\prod_{k=1}^{N/2}a_{i_{k},j_{k}}$.
The wave function Eq.(2) can be used directly as a variational state
for quantum antiferromagnets. A more efficient and intuitively more
attractive way to generate the RVB wave function is by Gutzwiller
projection of the Schwinger Boson mean field state. This approach has
been used to study the two-dimensional Heisenberg model and has proved
very successful. However, direct application of the approach to the
BHM leads to unsatisfactory results.
Here, we will adopt the form of the Gutzwiller projected Schwinger
Boson mean field state, but regard the mean field order
parameters(intralayer and interlayer RVB pairing amplitudes) as free
variational parameters, rather than being determined from the mean
field self-consistent equations. The reason for such a choice is as
follows. In the mean field treatment, the no double occupancy
constraint is relaxed. As a result, the quantitative prediction of
the mean field theory is not reliable. For example, the mean field
equation predicts an unphysical dimerization transition for the BHM at
$\alpha\approx 4.62$, whose origin can be traced back to the
overestimation of the tendency to form interlayer dimers, which is
again related to the relaxation of the local constraint.
In the Schwinger Boson representation\cite{schwinger}, the spin
operator is written as
\begin{equation}
\vec{\mathrm{S}}=\frac{1}{2}\sum_{\alpha,\beta=1,2}b^{\dagger}_{\alpha}\vec{\sigma}_{\alpha,\beta}b_{\beta},
\end{equation}
in which $b_{\alpha}$ is a Boson operator, $\vec{\sigma}$ is the
Pauli matrix. Eq.(3) is a faithful representation of the spin
algebra provided that the Bosonic particles satisfy the no double
occupancy constraint
\begin{equation}
\sum_{\alpha}b^{\dagger}_{\alpha}b_{\alpha}=1.
\end{equation}
The BHM written in terms of the Schwinger Boson operators reads
\begin{eqnarray}
\mathrm{H}=&-&J_{\parallel}\sum_{<i,j>,\mu}\hat{\Delta}_{i,j}^{\mu
\dagger}\hat{\Delta}_{i,j}^{\mu}-J_{\perp}\sum_{i}\hat{\Delta}_{i}^{\dagger}\hat{\Delta}_{i}\nonumber \\
&-&\sum_{i,\mu}\lambda_{i,\mu}(n_{i,\mu}-1)
\end{eqnarray}
in which
\begin{eqnarray}
\hat{\Delta}_{i,j}^{\mu}&=&\frac{1}{\sqrt{2}}(b_{i,\mu,\uparrow}b_{j,\mu,\downarrow}-b_{i,\mu,\downarrow}b_{j,\mu,\uparrow}) \nonumber\\
\hat{\Delta}_{i}&=&\frac{1}{\sqrt{2}}(b_{i,1,\uparrow}b_{i,2,\downarrow}-b_{i,1,\downarrow}b_{i,2,\uparrow})
\end{eqnarray}
denote the intralayer and interlayer RVB pairing operator,
$n_{i,\mu}=\sum_{\alpha=\uparrow,\downarrow}b_{i,\mu,\alpha}^{\dagger}b_{i,\mu,\alpha}$.
The Lagrange multiplier $\lambda_{i,\mu}$ is introduced to keep
track of the local constraint.
In the mean field theory, we treat $\lambda_{i,\mu}=\lambda$ as a
constant and decouple the interaction term using the following mean
field order parameters
$\Delta_{\parallel}=\langle\hat{\Delta}_{i,j}^{1}\rangle=\langle\hat{\Delta}_{i,j}^{2}\rangle
$ and $ \Delta_{\perp}=\langle\hat{\Delta}_{i}\rangle$. The mean
field Hamiltonian reads(up to a constant)
\begin{eqnarray}
\mathrm{H}_{\mathrm{MF}}&=&-J_{\parallel}\Delta_{\parallel}\sum_{<i,j>,\mu}(\hat{\Delta}_{i,j}^{\mu
\dagger}+\hat{\Delta}_{i,j}^{\mu})\nonumber \\
&&-J_{\perp}\Delta_{\perp}\sum_{i}(\hat{\Delta}_{i}^{\dagger}+\hat{\Delta}_{i})-\lambda\sum_{i,\mu}n_{i,\mu}.
\end{eqnarray}
The mean field ground state of Eq.(7) reads
\begin{eqnarray}
|G\rangle =
\exp\Bigl[\sum_{i,j;\mu,\nu}&a_{i\mu,j\nu}&(b_{i\mu,\uparrow}^{\dagger}b_{j\nu,\downarrow}^{\dagger}-b_{i\mu,\downarrow}^{\dagger}b_{j\nu,\uparrow}^{\dagger})
\Bigr] \Bigl|0\Bigr\rangle,
\end{eqnarray}
in which $|0\rangle$ denotes the vacuum of the Schwinger Boson.
$a_{i\mu,j\nu}$ represents the RVB amplitude between site $i$ in
$\mu$ layer and site $j$ in $\nu$ layer. As a result of the
bipartite nature of the system, the RVB amplitude is nonzero only
for sites belonging to different sublattices. Thus for $\mu=\nu$,
$a_{i\mu,j\nu}$ is nonzero only when $i,j$ have different parity,
while for $\mu\neq\nu$ the reverse is true. The intralayer and
interlayer RVB amplitudes are given
by($a_{i1,j1}=a_{i2,j2},a_{i1,j2}=a_{i2,j1}$ by symmetry)
\begin{eqnarray}
a_{i1,j1}=\frac{1}{N}\sum_{\vec{k}}[\xi(k)+\eta(k)]\exp(i\vec{k}\cdot\vec{r}_{i,j})\nonumber\\
a_{i1,j2}=\frac{1}{N}\sum_{\vec{k}}[\xi(k)-\eta(k)]\exp(i\vec{k}\cdot\vec{r}_{i,j}),
\end{eqnarray}
in which
\begin{eqnarray}
\xi(k)=\frac{c_{1}\gamma(k)+c_{2}}{1+\sqrt{1-(c_{1}\gamma(k)+c_{2})^{2}}}\nonumber\\
\eta(k)=\frac{c_{1}\gamma(k)-c_{2}}{1+\sqrt{1-(c_{1}\gamma(k)-c_{2})^{2}}}\nonumber,
\end{eqnarray}
here $c_{1}=4J_{\parallel}\Delta_{\parallel}/\sqrt{2}\lambda$,
$c_{2}=J_{\perp}\Delta_{\perp}/\sqrt{2}\lambda$,
$\gamma(k)=(\cos(k_{x})+\cos(k_{y}))/2$.
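The amplitudes of Eq.(9) are straightforward to evaluate on a finite lattice, since the momentum sums are inverse discrete Fourier transforms. The sketch below (an illustration, not part of the paper; $L$, $c_1$, $c_2$ are arbitrary values on the disordered side $c_1+c_2<1$, where the square roots are real) also makes the sublattice structure of the amplitudes explicit:

```python
import numpy as np

def rvb_amplitudes(L, c1, c2):
    """RVB amplitudes of Eq.(9) on an L x L lattice (disordered side,
    c1 + c2 < 1), obtained as inverse FFTs of xi(k) +/- eta(k)."""
    k = 2 * np.pi * np.arange(L) / L
    KX, KY = np.meshgrid(k, k, indexing="ij")
    gam = 0.5 * (np.cos(KX) + np.cos(KY))
    xi = (c1 * gam + c2) / (1 + np.sqrt(1 - (c1 * gam + c2) ** 2))
    eta = (c1 * gam - c2) / (1 + np.sqrt(1 - (c1 * gam - c2) ** 2))
    # a(r) = (1/N) sum_k f(k) exp(i k.r) is exactly np.fft.ifft2(f)
    a_intra = np.fft.ifft2(xi + eta).real
    a_inter = np.fft.ifft2(xi - eta).real
    return a_intra, a_inter

a_intra, a_inter = rvb_amplitudes(L=16, c1=0.8, c2=0.15)
# bipartite structure: intralayer bonds connect only opposite sublattices,
# interlayer bonds connect only sites of equal parity
print(a_intra[1, 0], a_intra[1, 1], a_inter[0, 0], a_inter[1, 0])
```

Since $\xi+\eta$ is odd and $\xi-\eta$ is even under $\gamma\rightarrow-\gamma$ (i.e. $\vec{k}\rightarrow\vec{k}+(\pi,\pi)$), the intralayer amplitude vanishes for even-parity separations and the interlayer amplitude for odd-parity ones, as stated in the text.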
The Bosonic RVB wave function adopted in this study is given by
Gutzwiller projection of the mean field ground state into the
physical subspace satisfying the local constraint,
\begin{eqnarray}
|G\rangle =
P_{G}\Bigl[\sum_{i,j;\mu,\nu}&a_{i\mu,j\nu}&(b_{i\mu,\uparrow}^{\dagger}b_{j\nu,\downarrow}^{\dagger}-b_{i\mu,\downarrow}^{\dagger}b_{j\nu,\uparrow}^{\dagger})
\Bigr]^{N/2}\Bigl|0\Bigr\rangle.
\end{eqnarray}
Here $P_{G}$ denotes the Gutzwiller projection and $N$ is the number
of lattice sites. The mean field ground state contains two
dimensionless parameters, namely $c_{1}$ and $c_{2}$. In the mean
field theory, both of them are determined by the mean field
self-consistent equations. Here we regard them as two independent
variational parameters. This is the key difference between our
theory and that of \cite{Yoshioka}.
The proposed wave function Eq.(10) can describe both the magnetically
ordered and the quantum disordered state. As can be seen from
Eq.(9), as $c_{1}+c_{2}\rightarrow 1$, both $a_{i1,j1}$ and
$a_{i1,j2}$ become long ranged and the wave function describes a
state with antiferromagnetic long range order. On the other hand,
when $c_{1}+c_{2}$ deviates from 1, the RVB amplitudes $a_{i1,j1}$
and $a_{i1,j2}$ become short ranged and the corresponding wave
function describes a quantum disordered state. In fact,
$c_{1}+c_{2}=1$ is nothing but the Bose condensation condition in
the mean field theory.
On general grounds, we expect the interlayer pairing $c_{2}$ to
increase with $\alpha$ and the intralayer pairing $c_{1}$ to
decrease with $\alpha$. The transition between the antiferromagnetic
ordered state and the quantum disordered state is signaled by the
deviation of $c_{1}+c_{2}$ from 1. These expectations are confirmed
in the numerical calculation.
\section{III. The numerical techniques}
The Bosonic RVB wave function Eq.(10) can be studied with the standard
loop gas Monte Carlo algorithm\cite{loop}. In this algorithm, the
calculation of the expectation value of a physical quantity
$\hat{A}$ (for example, the energy) is done as follows
\begin{eqnarray}
\frac{\langle G| \hat{A}|G\rangle}{\langle G|G\rangle}
=\frac{\sum_{\gamma,\gamma'}\psi^{*}_{\gamma}\psi_{\gamma'}\langle\gamma|\gamma'\rangle
\frac{\langle\gamma|\hat{A}|\gamma'\rangle}{\langle\gamma|\gamma'\rangle}
}{\sum_{\gamma,\gamma'}\psi^{*}_{\gamma}\psi_{\gamma'}\langle\gamma|\gamma'\rangle}.
\end{eqnarray}
Here $|\gamma\rangle$ denotes the valence bond basis vector and is
given by $|\gamma\rangle=\prod_{(i,j)\in\gamma}S(i,j)$.
$\psi_{\gamma}$ is the corresponding amplitude and is given by
$\psi_{\gamma}=\prod_{(i,j)\in\gamma}a_{i,j}$. The overlap between
two valence bond basis vectors $|\gamma\rangle$ and
$|\gamma'\rangle$ can be graphically interpreted as a loop gas on
the lattice by fusing the valence bonds in the two basis vectors. It
is easy to show that $\langle\gamma|\gamma'\rangle=2^{N_{L}}$, where
$N_{L}$ is the number of loops in the transition graph between
$|\gamma\rangle$ and $|\gamma'\rangle$.
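The identity $\langle\gamma|\gamma'\rangle=2^{N_{L}}$ rests on counting the loops formed when the two dimer coverings are fused. A minimal sketch of this counting, representing each valence bond configuration as a dictionary mapping every site to its partner (the data layout is an assumption of this illustration):

```python
def count_loops(match1, match2):
    """Number of loops in the transition graph of two dimer coverings.

    match1, match2: dicts mapping each site to its valence-bond partner
    (perfect matchings of the same site set).  Fusing the two coverings
    decomposes the sites into closed loops; the overlap of the two
    valence-bond basis states is then 2**n_loops (up to normalization).
    """
    visited = set()
    n_loops = 0
    for start in match1:
        if start in visited:
            continue
        n_loops += 1
        site = start
        while site not in visited:
            visited.add(site)
            partner = match1[site]   # follow a bond of the first covering...
            visited.add(partner)
            site = match2[partner]   # ...then a bond of the second covering
    return n_loops
```

For identical coverings every dimer closes its own two-site loop, so $N_L$ equals the number of dimers, while maximally different coverings fuse into fewer, longer loops.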
As the system is bipartite, the RVB amplitudes $a_{i1,j1}$ and
$a_{i1,j2}$ are in fact positive definite and the wave function
Eq.(10) satisfies the Marshall sign rule\cite{loop}. For this reason,
we can interpret
$W(\gamma,\gamma')=\frac{\psi^{*}_{\gamma}\psi_{\gamma'}\langle\gamma|\gamma'\rangle}{\sum_{\gamma,\gamma'}\psi^{*}_{\gamma}\psi_{\gamma'}\langle\gamma|\gamma'\rangle}$
as a normalized probability in the space of loop gas and can draw
samples on it with the standard Monte Carlo method. The calculation
of
$\frac{\langle\gamma|\hat{A}|\gamma'\rangle}{\langle\gamma|\gamma'\rangle}$
is easy for $\hat{A}=\vec{\mathrm{S}}_{i}\cdot\vec{\mathrm{S}}_{j}$
and the result reads
\begin{eqnarray}
\langle \vec{\mathrm{S}}_{i}\cdot\vec{\mathrm{S}}_{j} \rangle=
\left\{
\begin{array}{ll}
-\frac{3}{4}, & \hbox{$i,j \in$ same loop, different sublattices;} \\
\frac{3}{4}, & \hbox{$i,j \in$ same loop, same sublattice;} \\
0, & \hbox{otherwise.}
\end{array}
\right.
\end{eqnarray}
Thus both the energy and spin structure factor can be easily
calculated with the standard Monte Carlo procedure in the loop gas
space.
To determine the optimal values of the variational parameters $c_{1}$
and $c_{2}$, we calculate the expectation value of the energy and of
its gradients in the parameter space $(c_{1},c_{2})$ on a finite
lattice. It is useful to note that the gradients of the energy can
also be simulated directly with the loop gas Monte Carlo method. The
expression is given by
\begin{eqnarray}
\frac{\partial E(c_{1},c_{2})}{\partial
c_{1,2}}&=&\left\langle
\Bigl(\sum_{(i,j)\in\gamma,\gamma'}\frac{\partial \ln
a_{i,j}}{\partial
c_{1,2}}\Bigr)\frac{\langle\gamma|\hat{H}|\gamma'\rangle}{\langle\gamma|\gamma'\rangle}
\right\rangle_{L}\nonumber\\
&-&E(c_{1},c_{2})\left\langle \sum_{(i,j)\in\gamma,\gamma'}\frac{\partial \ln
a_{i,j}}{\partial c_{1,2}}\right\rangle_{L},
\end{eqnarray}
where $\langle \rangle_{L}$ denotes average over the loop gas
configurations with the weight $W(\gamma,\gamma')$.
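In practice this gradient estimator is a covariance between the local energy and the logarithmic derivative of the sampled weight. A minimal sketch, assuming the per-sample quantities have already been accumulated into arrays (the array names are assumptions of this illustration):

```python
import numpy as np

def energy_gradient(dlnpsi, e_loc):
    """Monte Carlo estimate of dE/dc from loop-gas samples (sketch).

    dlnpsi : per-sample sum of d(ln a_ij)/dc over the bonds in (gamma, gamma')
    e_loc  : per-sample local energy <gamma|H|gamma'>/<gamma|gamma'>
    Implements  dE/dc = <dlnpsi * e_loc>_L - E * <dlnpsi>_L,
    with E estimated as <e_loc>_L over the same samples.
    """
    dlnpsi = np.asarray(dlnpsi, dtype=float)
    e_loc = np.asarray(e_loc, dtype=float)
    return np.mean(dlnpsi * e_loc) - np.mean(e_loc) * np.mean(dlnpsi)
```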
We have used $10^{8}$ samples to calculate the energy and its
gradients to determine the optimized values for $c_{1}$ and $c_{2}$.
The boundary condition of the finite lattice is set to be periodic
in both directions. The calculation is done on a lattice with size
up to $20\times20\times2$, at which we find the critical coupling
converges to $\alpha_{c}\approx 2.62$.
\section{IV. The numerical results}
The optimized values of the parameters $c_{1}$ and $c_{2}$ as
functions of the coupling constant $\alpha$ are shown in Fig.1. As
$\alpha$ increases, the interlayer RVB pairing strength $c_{2}$
increases at the expense of the intralayer RVB pairing strength
$c_{1}$. The result is obtained on a $20\times20\times2$ lattice. It
is found that the optimized values deviate significantly from the
mean field predictions, especially for large values of $\alpha$. For
example, the mean field theory predicts that $\Delta_{\perp}$ would
reach twice the value of $\Delta_{\parallel}$ around $\alpha=4$.
However, the variational theory predicts that $\Delta_{\perp}$ is
slightly smaller than $\Delta_{\parallel}$ around $\alpha=4$. Thus,
the mean field theory greatly overestimates the tendency to form
interlayer dimers at large $\alpha$.
\begin{figure}[h!]
\includegraphics[width=8cm,angle=0]{graph1.eps}
\caption{The optimized value for the intralayer and interlayer RVB
pairing strength $c_{1}$ and $c_{2}$ as functions of the coupling
constant $\alpha$.} \label{fig1}
\end{figure}
To better understand the evolution of the variational parameters as
functions of $\alpha$, we plot the value of $a=c_{1}+c_{2}$ as a
function of $\alpha$ in Fig.2. As we have shown above, the quantum
phase transition between the magnetically ordered state and the
quantum disordered state in our variational theory is controlled
solely by the value of $a$. The value of $a$ is seen to deviate from
unity around $\alpha\approx2.6$, at which point the Bose condensate
of the spinons disappears.
\begin{figure}[h!]
\includegraphics[width=8cm,angle=0]{graph2.eps}
\caption{The optimized value for the variational parameter
$a=c_{1}+c_{2}$, which controls the order-disorder transition of
BHM. } \label{fig2}
\end{figure}
To further characterize the quantum phase transition and determine
the value of the critical coupling $\alpha_{c}$, we study the
following three kinds of quantities: the spin structure factor at
the ordering wave vector, the fidelity susceptibility of the ground
state and the Binder moment ratio $Q_{2}$.
\subsection{A. Spin Structure Factor}
For a finite system, the spontaneous magnetization can be defined in
a spin rotational invariant way as the square root of the spin
structure factor at the magnetic Bragg vector. For BHM, the Bragg
vector is $\vec{Q}=(\pi,\pi,\pi)$. The spin structure factor is
defined as
\begin{eqnarray}
S(\vec{q})=\frac{1}{N}\sum_{i,j}\langle\vec{\mathrm{S}}_{i}\cdot\vec{\mathrm{S}}_{j}\rangle\exp(i\vec{q}\cdot \vec{r}_{i,j}).
\end{eqnarray}
For $\vec{q}=\vec{Q}$, we have
\begin{eqnarray}
M^{2}=NS(\vec{Q})=\sum_{i,j}(-1)^{i-j}\langle\vec{\mathrm{S}}_{i}\cdot\vec{\mathrm{S}}_{j}\rangle.
\end{eqnarray}
In the quantum disordered state, as the spin correlation length is
finite, $S(\vec{Q})$ is of order one. However, in the magnetically
ordered state, $S(\vec{Q})$ should scale like $N$, and thus $M$ is an
extensive quantity.
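Given the loop-estimated correlations $\langle\vec{\mathrm{S}}_{i}\cdot\vec{\mathrm{S}}_{j}\rangle$, the structure factor follows by a double lattice sum. A small sketch (the array layout is an assumption of this illustration):

```python
import numpy as np

def structure_factor(corr, coords, q):
    """S(q) = (1/N) sum_{ij} <S_i.S_j> exp(i q.(r_j - r_i))  (sketch).

    corr   : (N, N) matrix of measured spin-spin correlations
    coords : (N, d) site coordinates
    q      : wave vector of length d
    The double sum is written as a quadratic form in the phase vector.
    """
    phase = np.exp(1j * coords @ q)
    return (phase.conj() @ corr @ phase).real / len(coords)
```

For perfect Neel correlations the quadratic form saturates, so $S(\vec{Q})$ scales with $N$, matching the ordered-state behavior described above.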
The result of the spin structure factor for a $20\times20\times2$
system is shown in Fig.3. An order-disorder transition can be seen
around $2.5$. However, the signature of phase transition in the spin
structure factor is not sharp enough for an accurate determination
of the critical coupling strength. The transition is rounded into a
crossover as a result of the finite size effect. For this reason, we
need some other quantities that are more sensitive to the transition
to determine the critical coupling.
\begin{figure}[h!]
\includegraphics[width=8cm,angle=0]{graph3.eps}
\caption{The spin structure factor at the antiferromagnetic ordering
wave vector as a function of $\alpha$ for a system with $L=20$.}
\label{fig3}
\end{figure}
\subsection{B. Fidelity Susceptibility}
The concept of fidelity susceptibility is introduced to describe the
sensitivity of the ground state to the variation of the parameters
in Hamiltonian\cite{fidelity} and is expected to reach its maximum
at the critical coupling of a quantum phase transition, where the
ground state is the most susceptible to the variation of the
controlling parameters of the phase transition. The fidelity
susceptibility is defined in the following manner for a system with
only one parameter $\alpha$,
\begin{equation}
\chi_{f}=-2\lim_{\delta\alpha\rightarrow 0}\frac{\ln
|O(\alpha,\delta\alpha)|}{(\delta\alpha)^{2}},
\end{equation}
in which
$O(\alpha,\delta\alpha)=\langle\Psi_{\alpha}|\Psi_{\alpha+\delta\alpha}\rangle$
denotes the overlap between the normalized ground state vector for
parameter value $\alpha$ and $\alpha+\delta\alpha$.
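A minimal sketch of the finite-difference evaluation of $\chi_{f}$, assuming the normalized overlap between optimized states is available as a function of the couplings (the function name is an assumption of this illustration):

```python
import numpy as np

def fidelity_susceptibility(overlap, alpha, dalpha=0.01):
    """chi_f = -2 ln|O(alpha, dalpha)| / dalpha^2 at finite dalpha (sketch).

    overlap : callable (alpha1, alpha2) -> <Psi(alpha1)|Psi(alpha2)>
              for normalized ground states
    """
    return -2.0 * np.log(abs(overlap(alpha, alpha + dalpha))) / dalpha ** 2
```

For a Gaussian overlap $O=\exp(-\chi\,\delta\alpha^{2}/2)$ the estimator returns $\chi$ exactly, independent of the step size.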
In our variational theory, the fidelity susceptibility can be
calculated directly. We first fit the optimized variational
parameters as functions of $\alpha$ and then calculate the overlap
between variational ground states for nearby values of $\alpha$. The
overlap between the Bosonic RVB states is calculated in the
following way:
\begin{equation}
\frac{\langle\Psi|\Psi'\rangle}{\langle\Psi|\Psi\rangle}=\frac{\sum_{\gamma,\gamma'}\psi^{*}_{\gamma}\psi_{\gamma'}\langle\gamma|\gamma'\rangle
\frac{\psi'_{\gamma'}}{\psi_{\gamma'}}
}{\sum_{\gamma,\gamma'}\psi^{*}_{\gamma}\psi_{\gamma'}\langle\gamma|\gamma'\rangle}.
\end{equation}
In our calculation, we have set $\delta\alpha=0.01$. The results for
the fidelity susceptibility for systems of several sizes are shown
in Fig.4. A pronounced peak appears around $\alpha=2.6$. Fig.5 shows
the peak position extracted from Fig.4 as a function of the system
size $L$. It is found that the peak position converges rapidly to
its thermodynamic limit value $\alpha_{c}\approx2.62$ when $L>10$.
\begin{figure}[h!]
\includegraphics[width=8cm,angle=0]{graph4.eps}
\caption{The fidelity susceptibility for systems of size
$L=6,8,10,14,16,18$ and $20$ as functions of $\alpha$. The finite
difference used to calculate the fidelity susceptibility is
$\delta\alpha=0.01$.} \label{fig4}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=8cm,angle=0]{graph5.eps}
\caption{The critical coupling strength $\alpha_{c}$ determined from
the peak position of the fidelity susceptibility. The peak position
is seen to converge rapidly to its thermodynamic limit when $L>10$.
The error bars are smaller than the size of the symbols.}
\label{fig5}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=8cm,angle=0]{graph6.eps}
\caption{The Binder moment ratio $Q_{2}$ for $L=14,16,18$ and $20$
as functions of $\alpha$.} \label{fig6}
\end{figure}
\subsection{C. Binder Moment Ratio $Q_{2}$}
To confirm the result derived from the fidelity susceptibility, we
calculate the Binder moment ratio $Q_{2}$\cite{Wang,Binder}. The
Binder moment ratio $Q_{2}$ is a dimensionless quantity defined in
the following manner,
\begin{equation}
Q_{2}=\frac{\langle \hat{S}_{Q}^{2} \rangle}{\langle \hat{S}_{Q}
\rangle^{2}},
\end{equation}
in which $\hat{S}_{Q}=\sum_{i,j}(-1)^{i-j}\vec{S}_{i}\cdot
\vec{S}_{j}$. Note our definition of $Q_{2}$ is slightly different
from the standard one in that it is defined in a spin rotational
invariant way, while in the standard definition only the
$z$-component of the moment is used. The Binder moment ratio is very
useful in the analysis of the critical properties as it is universal
near the critical point. More specifically, it can be expressed as a
universal scaling function of $tL^{1/\nu}$, where
$t=(\alpha-\alpha_{c})$ and $\nu$ is the critical exponent for
correlation length.
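The quality of the $Q_{2}=M(tL^{1/\nu})$ collapse can be quantified by mapping every curve to the scaling variable and measuring its spread about a reference curve. A sketch of such a residual (the interpolation scheme and data layout are assumptions of this illustration, not the fitting procedure used here):

```python
import numpy as np

def collapse_residual(datasets, alpha_c, inv_nu):
    """Quality of a Q2 = M(t L^{1/nu}) data collapse (smaller is better).

    datasets: list of (L, alpha_array, q2_array).  Every curve is mapped to
    x = (alpha - alpha_c) * L**(1/nu); the curve for the largest L serves as
    the master curve, and the summed squared deviation of all curves from its
    linear interpolant (within the overlapping x range) is returned.
    """
    scaled = [((np.asarray(a) - alpha_c) * L ** inv_nu, np.asarray(q))
              for L, a, q in datasets]
    ref_x, ref_q = scaled[int(np.argmax([d[0] for d in datasets]))]
    order = np.argsort(ref_x)
    ref_x, ref_q = ref_x[order], ref_q[order]
    res = 0.0
    for x, q in scaled:
        inside = (x >= ref_x[0]) & (x <= ref_x[-1])
        res += np.sum((q[inside] - np.interp(x[inside], ref_x, ref_q)) ** 2)
    return res
```

Minimizing this residual over $(\alpha_c, 1/\nu)$ is one simple way to automate the visual collapse shown in Fig.7.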
The results of $Q_{2}$ for systems with $L=14,16,18$ and $20$ are
shown in Fig.6. It is found that all curves cross with each other at
approximately the same value of $\alpha$, in accordance with the
scaling hypothesis. The estimated value of the critical coupling
strength is $2.62$, in good agreement with that estimated from the
fidelity susceptibility data. The $Q_{2}$ value at the crossing
point is found to be approximately $1.23$, close to but smaller than
the value ($1.29$) estimated from the quantum Monte Carlo simulation
with the standard definition of $Q_{2}$. Such a difference may be
caused by the difference in the definitions of $Q_{2}$.
\begin{figure}[h!]
\includegraphics[width=8cm,angle=0]{graph7.eps}
\caption{Scaling of the Binder moment ratio $Q_{2}$ data for
$L=14,16,18$ and $20$. Here $t=\alpha-\alpha_{c}$.} \label{fig7}
\end{figure}
Fig.7 shows the scaling of the $Q_{2}$ data with the scaling form
$Q_{2}=M(tL^{1/\nu})$, where $t=\alpha-\alpha_{c}$ and $\nu$ is the
exponent for the correlation length. The best fit is obtained with
$\alpha_{c} \approx 2.62$ and $1/\nu \approx 1.4$. The critical
exponent thus obtained, $\nu\approx 0.714$, is quite close to the
result of the quantum Monte Carlo simulation.
\section{V. Conclusion}
In this work, we proposed a Bosonic RVB wave function with the form
of the Gutzwiller projected Schwinger Boson mean field ground state
for the BHM. We find that the proposed wave function predicts a
continuous phase transition between the antiferromagnetically
ordered state and the quantum disordered state. To determine the critical
coupling strength, we have calculated the spin structure factor, the
fidelity susceptibility and the Binder moment ratio $Q_{2}$. Through
finite size scaling analysis of the latter two quantities, we find
the critical coupling to be given by $\alpha_{c}\approx2.62$, in
good agreement with the quantum Monte Carlo simulation results. The
scaling analysis of $Q_{2}$ also provides an estimate of the
correlation length critical exponent($1/\nu\approx1.4$), which is
also in good agreement with the result of quantum Monte Carlo
simulation. We find that the intralayer correlation is quite large
at the phase transition point and that it dominates over the
interlayer correlation even for $\alpha$ twice as large as the
critical coupling strength. Thus, the phase transition has nothing
to do with a dimerization instability.
Our work indicates that the Bosonic RVB wave function derived from
Gutzwiller projection of the Schwinger Boson mean field ground state
has the potential to capture the physics of quantum phase transition
with high accuracy. Its failure in the previous variational study
\cite{Yoshioka} can be attributed to the weakness of the mean field
theory, which overestimates the tendency of the system to form
interlayer dimers. Such an overestimation is closely related to the
relaxation, in the mean field treatment, of the local constraint,
which prohibits multiple occupation of a given bond by dimers, even
if the mean field theory points to a tendency of Bose condensation
of such interlayer dimers. The same instability also causes the
failure of the mean field theory itself for large $\alpha$. Hence,
the form of the ground state predicted by the mean field theory is
correct; however, the quantitative relation between the mean field
order parameters is less meaningful. The local constraint is thus
indispensable for a correct description of the quantum
antiferromagnet with the Bosonic RVB state.
In this work, we have proved the usefulness of the variational
approach to the quantum phase transition in BHM. However, a more
detailed study of the critical behavior and the excitation spectrum
around the critical point is obviously needed to further
characterize the quantum critical point in this system. We will
leave this task to future investigations.
This work is supported by NSFC Grant No. 10774187 and the National Basic
Research Program of China Nos. 2007CB925001 and 2010CB923004.
The authors acknowledge the discussion with Yizhuang You on fidelity
susceptibility.
\bigskip
\section{Introduction}
The U.S. Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO) are now recommending face masks be used by the general public and it is believed that, in conjunction with other mitigating behaviors, the wide-spread use of face masks can reduce community spread of Covid-19 \cite{cheng2020wearing}.
Given limited personal protective equipment (PPE), and the needs of healthcare workers on the front line of the battle against this pandemic, cloth face-masks have been widely recommended for the general public (e.g. \cite{howard2020face}). Mukerji \emph{et al.} use the term `mask' to include a class of standard surgical masks which are not specially designed to protect the wearer from aerosol transmission \cite{mukerji2015infectious}. Further, they refer to `respirators' as personal protective equipment that is designed to filter and fit such that it prevents the transmission of the aerosol droplets. Included in the category of respirators are the N95, high-efficiency particulate air, powered air purifying respirator, dust-mist, and the dust-mist-fume types of respirators. Given the need and demand for the respirators by healthcare workers and first responders and the limited supply thereof, public good suggests that those items ought to be prioritized for these front-line workers. Additionally, the cost difference between masks and respirators is such that it may be prohibitive for some to acquire the latter. Prior to wearing a respirator, it is recommended that the user undergo fit testing \cite{chughtai2013availability}, whereas fit testing for masks is not necessary. Given the pressure put on the supply and the challenges that may exist for training how to fit respirators, the use of masks for the general public is more feasible.\\
With the shortage and expense of respirators, the widespread adoption of single-use masks was an initial response of some within the general public. However, the disposal of single-use masks poses a potential environmental catastrophe \cite{fadare2020covid, roberts2020coronavirus}. To this end, the use of cloth face-masks is more environmentally responsible than that of single-use face masks \cite{klemes2020energy, roberts2020coronavirus}, and cloth masks have been shown to be effective in reducing the transmission of other respiratory diseases \cite{spooner1967history}. However, controversy remains concerning US public opinion on the efficacy of face-masks, neck gaiters and face shields for mitigating the spread of Covid-19 \cite{wong2020covid}. The controversy over the efficacy lies in the doubt over the proper use of the PPE, lack of understanding of the transmission, and/or beliefs concerning the ability of the PPE to stop the movement of the droplets.\\
Respiratory droplets are known to contain potential pathogens \cite{hare1964transmission, wells1934air, corbett2003growing}, and are a major contributor to annual influenza epidemics \cite{solomon2020influenza}. SARS-CoV-2 is also believed to be spread through respiratory droplets \cite{Lai2020, huang2020}. The droplets are emitted from the humid respiratory tract of an infected individual, travel outside of the body, before potentially landing in the respiratory tract of a susceptible individual.\\
Limiting this mechanism of spreading is particularly important for the current COVID-19 pandemic. In particular, asymptomatic individuals \cite{asadi2019aerosol} could account for approximately 40\% to 45\% of SARS-CoV-2 infections \cite{oran2020prevalence}, which is why the CDC recommends that everyone wear a mask regardless of symptoms \cite{howard2020face}. One study has estimated that the economic benefit of each additional cloth mask worn ranges from \$3,000 to \$6,000 \cite{abaluck2020case} stemming, in part, from the recovery of the estimated \$60,000 per capita mortality risk \cite{greenstone2020social}.\\
The movement of respiratory droplets from one individual to another, however, is a complex phenomenon, affected by the evaporation, the dispersion, and the deposition of the droplets \cite{wells1934air}. The size of the droplets plays a critical role. As droplets of various sizes are emitted they immediately start to evaporate towards smaller droplet nuclei. This is important because larger particles are more likely to deposit on various surfaces, turning them into potentially infectious fomites, while smaller droplets might linger in the air and follow air currents more closely. Therefore, airborne transmission is likely to depend on the background air flow \cite{lowen2006guinea, ai2018airborne} and ventilation \cite{giovanni2020transmission}. Protected in the droplet nuclei \cite{huang2020}, a variety of pathogens are known to survive for prolonged periods in the air \cite{harper1961airborne}, including SARS-CoV-2 \cite{van2020aerosol}.\\
In terms of the use of personal protective equipment, the role that airborne transmission plays in the spread of COVID-19 has been controversial, as this might necessitate stricter protocols, such as the use of airborne infection isolation rooms and fit-tested N95 face-piece respirators \cite{Leung2019}. The relative importance of airborne transmission versus larger droplet deposition is also expected to play a large role in the efficacy of different masks and face coverings.\\
Larger droplets are unlikely to travel far before gravity deposits them on nearby surfaces. This is one of the reasons that social distancing, especially when individuals are facing one another \cite{ai2018airborne}, is an important measure for decreasing the risk of transmission. However, SARS-CoV-2 has been found to persist for days on surfaces \cite{van2020aerosol} (more so on smooth surfaces than desiccating porous surfaces \cite{huang2020}) meaning that fomites are also a large contributing factor in the transmission of COVID-19. While frequent hand-washing will indubitably reduce the risk of transmission from a fomite to a susceptible individual \cite{jefferson2020physical}, the use of masks is also expected to reduce the formation of fomites by limiting the emission of large droplets from infected individuals \cite{huang2020}.\\
The location within the respiratory tract from which the respiratory droplets originate will dictate the concentration of pathogens contained within the respiratory droplets \cite{wei2015enhanced}. In particular, different pathogenic organisms can be more concentrated at different locations along the respiratory tract. For instance, the instability and fluid film rupture of mucus layers along the respiratory bronchioles are thought to be responsible for smaller respiratory droplet formation \cite{johnson2009mechanism}. Large concentrations of influenza pathogens have been found in smaller droplets \cite{yan2018infectious, Lindsley2010}, and this may be exacerbated with the lower respiratory tract infections associated with SARS-CoV-2 \cite{asadi2019aerosol}. Alternatively, larger respiratory droplets are more likely to be produced within the large airways of the upper respiratory tract and the oral cavity \cite{wei2015enhanced}. These droplets are emitted at velocities of between 6 and 22 m/s during coughing and between 1 and 5 m/s during regular breathing \cite{wei2015enhanced, chao2009characterization, kwon2012study, wei2016airborne}.\\
Significant inconsistencies have been reported in regard to the size distribution of respiratory droplets \cite{duguid1946size, chao2009characterization, gupta2010characterizing, zhang2015documentary, Liu2017, asadi2019aerosol}, but the radius of respiratory droplets is typically found to range from 1 $\mu$m to 100 $\mu$m, with a greater number of droplets at a radius of between 5 $\mu$m and 10 $\mu$m \cite{duguid1946size}. This is important not only for how it pertains to the efficacy of face masks, but because smaller respiratory droplets are more likely to penetrate further into the respiratory tract \cite{licht1972movement, stahlhofen1983deposition}, where a smaller number of pathogens is believed to be required to cause infection \cite{Thomas2013}. For this reason, advocating for the widespread use of masks, which may stop the emission of respiratory droplets at the source, is considered an important mitigating strategy during this pandemic.\\
While some political and social resistance to wearing masks exists, it is widely agreed that wearing masks will reduce community spread without significant social or economic impacts \cite{feng2020rational}. A large number of materials have been considered for use in constructing homemade masks, and the filtration efficiency of these materials has exhibited considerable variability \cite{davies2013testing, bagherifiltration, lindsley2020efficacy, konda2020aerosol, teesing2020there, kahler2020fundamental, aydin2020performance, greenhalgh2020face}. Furthermore, neck gaiters (elastic fabric tube that one can wear around the face) have emerged as a popular face covering, with limited studies concerning their effectiveness \cite{lindsley2020efficacy}. In particular, the role of masks and coverings at limiting the release of respiratory droplets \cite{wei2016airborne, dbouk2020respiratory}, or diverting smaller droplets and reducing their forward momentum \cite{huang2020}, is widely accepted \cite{leung2020respiratory}. However, the role of masks and coverings in protecting susceptible individuals from the respiratory droplets of others (already in the air) is more controversial \cite{davies2013testing, wei2016airborne, kahler2020fundamental}.\\
The filtration efficiency depends on the size of the droplets: large droplets, which are more likely to deposit on nearby surfaces, are more effectively blocked by fabrics \cite{drewnick2020aerosol, aydin2020performance, leung2020respiratory}. Therefore, masks are expected to be effective in reducing the formation of fomites. However, multiple layers of fabric have been found to substantially increase the filtration efficiency of mask materials \cite{drewnick2020aerosol, zangmeister2020filtration}, with the first layer reducing the velocity of the droplets and increasing the ability of the second layer to filter the droplets \cite{aydin2020performance}. In addition, triboelectrically charged fabrics have been found to significantly enhance filtration efficiencies \cite{zhao2020household}, and this is expected to favor artificial materials. Neck gaiters, commonly made from polyester fabric, are high in the triboelectric series \cite{zou2019quantifying}, and may benefit from triboelectrification. However, the reason neck gaiters are popular (especially among runners) is their breathability, and it is common for materials with high breathability to have reduced filtration efficiency \cite{aydin2020performance}. The reduction in velocity through a more impermeable mask would be expected to be larger \cite{tang2009schlieren}, but the velocity through any fabric would be expected to be significantly reduced \cite{verma2020visualizing, aydin2020performance}. Furthermore, the effect of leakage due to poor mask-to-face fitting will degrade the collection of respiratory droplets by the face mask, with an area of leakage of only 2\% of the area of the mask causing a mask efficiency reduction of 66\% \cite{dbouk2020respiratory}. The air flow will preferentially follow the path of least resistance \cite{drewnick2020aerosol}, and smaller droplets will likely follow this path around the mask \cite{dbouk2020respiratory}, regardless of filtration efficiencies.\\
Recent computer simulations have been used to elucidate the role of fluid mechanics on the efficacy of masks. In particular, Dbouk and Drikakis \cite{dbouk2020respiratory} investigated the role of leakage on the efficiency of masks. They predicted that most large droplets would become trapped at the mask surface, while smaller droplets are more likely to follow the air flow. Kumar \emph{et al.} simulated a cloth mask as an isotropic porous medium with an imperfect fit, and captured the flow through and around a mask, along with the spatial spread of droplets \cite{kumar2020utility}. The effect of the mask was found to reduce velocities, redirect air flow around the mask, and retain the majority of the ejected droplets. Here, we contribute to these emerging studies by simulating both the air flow through and around a face mask, a neck gaiter and a face shield, and the spatial evolution of emitted respiratory droplets within these flow fields. The next section details the methodology used in this study. Following the methodology, we present our findings and then discuss the implications.
\section{Methodology}
To capture the ability of face masks, neck gaiters and face shields to contain respiratory droplets, we combine a number of simulation techniques. First, a Lattice Spring Model (LSM) of elastic mechanics is used to simulate neck gaiter configurations. Second, the Lattice Boltzmann (LB) method is used to simulate the fluid mechanics through and around porous face coverings (and around an impermeable face shield). Lastly, the spatial evolution of evaporating respiratory droplets is simulated in the flow fields and the collection efficiencies of the different face coverings are contrasted.
\subsection{Lattice Spring Model}
The Lattice Spring Model (LSM) is a coarse-grained simulation of elasticity and fracture mechanics.
In particular, the coarse-grained components of the model (elastic springs) mirror atomistic processes (interatomic attractions) such that the correct continuum behavior (linear elasticity) emerges.
The mechanics of the neck gaiter are captured using this coarse-grained spring
model (with spring constants $k_{i,j}$ connecting nodes $i$ and $j$) as a representation of a two-dimensional surface. The energy associated with the system is of the form
\begin{equation}
A = \sum \frac{k_{i,j}}{2} (r_{i,j} - r^o_{i,j})^2
\end{equation}
where $r_{i,j}$ is the distance between nodes $i$ and $j$, and $r^o_{i,j}$ is the equilibrium distance.
A square lattice is implemented with nodes connected to their nearest and next-nearest neighbors.
While more complicated Hamiltonians can be implemented, this Hookean Spring Model can capture the deformation of a fabric mask with an elastic modulus of $8 k/3$ and a Poisson's ratio of $1/3$ in the square lattice considered here.
No bending motions are considered, and the fabric is free to fold or ``bunch up'' without any energy penalty.
The equilibrium position of the LSM nodes is found by minimizing the energy of the system.\\
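A minimal gradient-descent relaxation of such a Hookean network can be sketched as follows (the data structures are assumptions of this illustration, and the facial-polygon collision constraints are omitted):

```python
import numpy as np

def relax_lattice(pos, bonds, rest, fixed, k=1.0, step=0.1, n_iter=2000):
    """Relax a Hookean spring network toward its energy minimum (sketch).

    pos   : (N, d) initial node positions
    bonds : list of (i, j) index pairs connected by springs
    rest  : list of rest lengths r0_ij, one per bond
    fixed : boolean mask of nodes held in place (boundary nodes)
    Gradient descent on  E = sum_bonds (k/2) (|r_i - r_j| - r0)^2.
    """
    pos = np.array(pos, dtype=float)
    for _ in range(n_iter):
        force = np.zeros_like(pos)
        for (i, j), r0 in zip(bonds, rest):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            f = k * (dist - r0) * d / dist  # pulls i toward j if stretched
            force[i] += f
            force[j] -= f
        force[fixed] = 0.0  # boundary nodes do not move
        pos += step * force
    return pos
```

For a three-node chain with both ends pinned, the middle node relaxes to the point where the two spring forces balance, as expected for the energy minimum.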
The facial dimensions were obtained from the National Institute for Occupational Safety and Health (NIOSH) anthropometric survey \cite{zhuang2010digital} in the form of a series of polygons that define averaged facial characteristics.
The displacements of the nodes are constrained at the boundaries (either side of the head) and the nodes are updated iteratively to reduce the elastic energy, whilst ensuring the nodes do not pass through the polygons that define the face.
In this manner, the Lattice Spring Model captures the deformation of a neck gaiter as it stretches around the face of a wearer.
\subsection{Lattice Boltzmann Method}
The Lattice Boltzmann (LB) method is another coarse-grained simulation technique whose emergent behavior comes from coarse-grained components that mirror microscopic phenomena \cite{chen1998lattice}.
In particular, the Lattice Boltzmann method generally consists of two steps. First, a streaming step representing the advection of the particles in the fluid.
\begin{equation}
f^*_i(\mathbf{x} + \mathbf{c}_i \Delta t, t) = f_i(\mathbf{x}, t)
\end{equation}
Second, a collision step that captures the particle collisions that take place during the
movement of the particles in the fluid.
\begin{equation}
f^{**}_i(\mathbf{x}, t + \Delta t) = f^*_i(\mathbf{x}, t) - \frac{1}{\tau} \left(f^*_i(\mathbf{x}, t) - f_i^{eq}(\mathbf{x}, t) \right)
\end{equation}
The collision step relaxes the distribution functions towards an equilibrium distribution function of the form
\begin{equation}
f_i^{eq}(\mathbf{x}, t) = \rho w_i \left( 1 + 3 \frac{\mathbf{u} \cdot \mathbf{c}_i}{c} + \frac{9}{2} \frac{\left(\mathbf{u} \cdot \mathbf{c}_i\right)^2}{c^2} - \frac{3 u^2}{2c^2}\right)
\end{equation}
The density is $\rho = \sum_i f_i$ and velocity is found from $\rho \mathbf{u} = \sum_i f_i \mathbf{c}_i$.
The lattice weight factors for the D3Q19 model are $w_0 = 1/3$, $w_{1 \rightarrow 6} = 1/18$, and $w_{7 \rightarrow 18} = 1/36$.
The lattice sound speed is $c = \Delta x/\Delta t$ and the kinematic viscosity of the fluid is $\nu = (2 \tau - 1) c^2 \Delta t / 6$.
Here $\Delta t = \Delta x = 1$, and the density is initially set to unity \cite{he1997theory, thurey2009stable, gaedtke2018application, buxton2005newtonian, buxton2006computational}.\\
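The stream-and-collide cycle above can be condensed into a short NumPy sketch. This is our own illustration (variable names are ours, not the authors'), assuming a small periodic box in lattice units $\Delta x = \Delta t = 1$, with the D3Q19 velocity set and the weights quoted above:

```python
import numpy as np
from itertools import product

# D3Q19 velocity set: the 19 vectors in {-1,0,1}^3 with squared norm <= 2
c = np.array([v for v in product((-1, 0, 1), repeat=3)
              if sum(abs(x) for x in v) <= 2])
# weights by squared norm: 1/3 (rest), 1/18 (axis), 1/36 (diagonal)
w = np.array([{0: 1/3, 1: 1/18, 2: 1/36}[int((v * v).sum())] for v in c])

def equilibrium(rho, u):
    """f_i^eq = rho w_i (1 + 3 u.c_i + 9/2 (u.c_i)^2 - 3/2 u^2), with c = 1."""
    cu = np.einsum('id,xyzd->ixyz', c, u)
    usq = np.einsum('xyzd,xyzd->xyz', u, u)
    return rho[None] * w[:, None, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def stream(f):
    """Advect each population along its lattice vector (periodic box)."""
    return np.stack([np.roll(f[i], tuple(c[i]), axis=(0, 1, 2))
                     for i in range(19)])

def collide(f, tau=0.8):
    """BGK relaxation towards the local equilibrium distribution."""
    rho = f.sum(axis=0)
    u = np.einsum('ixyz,id->xyzd', f, c) / rho[..., None]
    return f - (f - equilibrium(rho, u)) / tau

# a uniform fluid at rest is a fixed point of stream + collide
N = 4
rho0 = np.ones((N, N, N))
u0 = np.zeros((N, N, N, 3))
f = equilibrium(rho0, u0)
f_next = collide(stream(f))
```

Two properties worth checking by hand: a uniform fluid at rest is invariant under the update, and both the streaming and collision steps conserve mass exactly.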
To capture the effects of turbulence on the fluid flow the Smagorinsky subgrid
turbulence model is implemented, which essentially modifies the viscosity according to the Reynolds stress tensor
to account for subgrid scale vortices. This modifies the relaxation time in the relaxation process of the lattice Boltzmann equations such that each node of the lattice
relaxes at different rates \cite{si2015study}.
The modified relaxation time $\tau_s$
is computed as
\begin{equation}
\tau_s = 3 (\nu + C^2 S) + \frac{1}{2}
\end{equation}
where $C$ is the Smagorinsky constant, for which a value of 0.04 is chosen, and $S$ is the magnitude of the local strain rate, given by
\begin{equation}
S = \frac{1}{6C^2} \left( \sqrt{\nu^2 + 18C^2 \sqrt{\Pi_{\alpha,\beta}\Pi_{\alpha,\beta}}} - \nu\right)
\end{equation}
where the tensor $\Pi_{\alpha, \beta}$ is obtained for each cell from the second
moment of the non-equilibrium parts of the distribution functions.
\begin{equation}
\Pi_{\alpha, \beta} = \sum_{i=1}^{19} \mathbf{c}_{i\alpha} \mathbf{c}_{i\beta} (f_i - f_i^{eq})
\end{equation}
$\alpha$ and $\beta$
each run over the three spatial dimensions, while $i$ is the
index of the particle distribution functions for the D3Q19 model \cite{thurey2009stable}.\\
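The three expressions above combine into a small local computation per lattice node. The Python sketch below is our own illustrative version (names are ours), taking a precomputed non-equilibrium part of the distribution functions at a single node and returning the modified relaxation time:

```python
import numpy as np

def momentum_flux(c, f_neq):
    """Pi_ab = sum_i c_ia c_ib (f_i - f_i^eq) at a single node."""
    return np.einsum('ia,ib,i->ab', c, c, f_neq)

def smagorinsky_tau(Pi, nu, C=0.04):
    """tau_s = 3 (nu + C^2 S) + 1/2 with
    S = (sqrt(nu^2 + 18 C^2 sqrt(Pi:Pi)) - nu) / (6 C^2)."""
    Pi_mag = np.sqrt(np.einsum('ab,ab->', Pi, Pi))
    S = (np.sqrt(nu**2 + 18 * C**2 * Pi_mag) - nu) / (6 * C**2)
    return 3 * (nu + C**2 * S) + 0.5

tau = 0.8                       # baseline BGK relaxation time (our example value)
nu = (2 * tau - 1) / 6          # lattice kinematic viscosity, dt = dx = 1
tau_rest = smagorinsky_tau(np.zeros((3, 3)), nu)      # no subgrid stress
tau_sheared = smagorinsky_tau(0.01 * np.eye(3), nu)   # some local stress
```

A useful consistency check: when $\Pi = 0$ the local strain vanishes and $\tau_s$ reduces to the baseline $\tau = 3\nu + 1/2$, while any nonzero stress only increases the effective dissipation.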
No-slip boundary conditions are implemented at the solid nodes, and partially at porous nodes that represent the neck gaiter and mask, with the following third step for the particle distribution function \cite{li2014lattice}.
\begin{equation}
f_i(\mathbf{x}, t+\Delta t) = f^{**}_i(\mathbf{x}, t+\Delta t) + n_s \left( f^{**}_{\acute{i}}(\mathbf{x}, t) - f^{**}_i(\mathbf{x}, t+\Delta t)\right)
\end{equation}
where $n_s$ is the fraction of solid at a given node. When $n_s = 1$, the above recovers the bounce-back boundary condition, in which incoming fluid particles are simply reflected in the opposite direction during the collision step; $\acute{i}$ denotes the direction opposite to $i$. Von Neumann boundary conditions are implemented at the domain boundaries \cite{junk2008outflow}.
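This third, boundary step is a one-line blend at each node once the opposite-direction lookup table has been built. A minimal sketch of our own (illustrative names; populations at a single node only):

```python
import numpy as np
from itertools import product

c = np.array([v for v in product((-1, 0, 1), repeat=3)
              if sum(abs(x) for x in v) <= 2])            # D3Q19 directions
# index of the opposite direction, so that c[opp[i]] == -c[i]
opp = np.array([int(np.where((c == -ci).all(axis=1))[0][0]) for ci in c])

def partial_bounce_back(f_post, f_pre, ns):
    """f_i <- f_i** + n_s (f_ibar** - f_i**) at one node: ns = 0 leaves the
    fluid untouched, ns = 1 recovers the full bounce-back reflection."""
    return f_post + ns * (f_pre[opp] - f_post)

f_pre = np.linspace(1.0, 2.0, 19)      # populations before the update
f_post = np.linspace(2.0, 3.0, 19)     # populations after streaming/collision
```

Since `opp` is a permutation of the 19 directions, full bounce-back ($n_s = 1$) simply rearranges the populations and therefore conserves mass at the node.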
\subsection{Droplet Evaporation and Transport}
In the current model we simulate individual droplets as they emerge from the mouth, shrink as the fluid evaporates, are subject to gravity and buffered by the surrounding air flow \cite{buxton2020spreadsheet}.
The diameter of a droplet changes as the water evaporates up until a critical time, $t_{crit}$, when the droplet diameter has reached its minimum, $d_{min}$ \cite{Wallace, Shaman2009, Halloran2012}.
\begin{equation}
d = \begin{cases} d_0 \sqrt{1 - \beta t} &\mbox{if } t \leq t_{crit} \\
d_{min} & \mbox{if } t > t_{crit} \end{cases}
\end{equation}
where $t$ is time after the droplet is emitted, and $\beta$ is the evaporation rate given by
\vspace{10pt}
\begin{equation}
\beta = \frac{8 D (P_{sat} - P_{\infty})}{d_0^2 \rho R_v T}
\end{equation}
where $D$ is the molecular diffusivity of water vapor, $P_{sat}$ and $P_{\infty}$ are the saturation and ambient water vapor pressures, $\rho$ is the density of the droplet, $R_v$ is the specific gas constant for water, and $T$ is the temperature.\\
The minimum diameter is considered to be 44\% of the original diameter, or $d_{min}/d_0 = 0.44$ \cite{Nicas2005, Halloran2012}.
However, others have argued for smaller values, or that the minimum diameter might depend on the ambient humidity \cite{Liu2017}.
$R_v = 461.52\,\text{J/(kg}\, \text{K})$ is the specific gas constant for water and the saturation water vapor pressure in Pascals can be obtained from the Buck equation \cite{Buck1981}.
\vspace{10pt}
\begin{equation}
P_{sat} = 611.21 \exp\left( \left(19.843 - \frac{T}{234.5}\right)\left(\frac{T - 273.15}{T - 16.01}\right)\right)
\end{equation}
The ambient water vapor pressure in Pascals is given by
\vspace{10pt}
\begin{equation}
P_{\infty} = P_{sat}\, \frac{RH}{100\%}
\end{equation}
where $RH$ is the relative humidity and the molecular diffusivity (in m$^2$/s) of water vapor in air is given by
\vspace{10pt}
\begin{equation}
D = 2.16\times10^{-5} \left(\frac{T}{273.15}\right)^{1.8}
\end{equation}
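The expressions above combine into a closed-form droplet diameter history. The Python sketch below is our own assembly of these formulas (the symbol names are illustrative, the droplet density of 1000 kg/m$^3$ is our assumption, and $\beta$ is evaluated with the initial diameter $d_0$ so that it stays constant during evaporation):

```python
import numpy as np

R_V = 461.52        # J/(kg K), specific gas constant of water vapour
RHO_W = 1000.0      # kg/m^3, droplet (water) density -- our assumption

def p_sat(T):
    """Buck equation with T in kelvin (Pa)."""
    return 611.21 * np.exp((19.843 - T / 234.5) * (T - 273.15) / (T - 16.01))

def diffusivity(T):
    """Molecular diffusivity of water vapour in air (m^2/s)."""
    return 2.16e-5 * (T / 273.15) ** 1.8

def beta(d0, T, RH):
    """Evaporation rate (1/s) for initial diameter d0."""
    dp = p_sat(T) * (1 - RH / 100.0)      # P_sat - P_inf
    return 8 * diffusivity(T) * dp / (d0**2 * RHO_W * R_V * T)

def diameter(t, d0, T, RH, shrink=0.44):
    """d(t) = d0 sqrt(1 - beta t) until d reaches d_min = shrink * d0."""
    b = beta(d0, T, RH)
    t_crit = (1 - shrink**2) / b
    return d0 * np.sqrt(1 - b * min(t, t_crit))

d0 = 50e-6                                   # a 50 micron droplet
t_crit = (1 - 0.44**2) / beta(d0, 293.15, 50.0)
```

As a sanity check, the Buck equation gives 611.21 Pa at 0 $^\circ$C and roughly one atmosphere at the boiling point, and a 50 $\mu$m droplet at room conditions reaches its minimum diameter in on the order of a second.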
The drag coefficient depends on the Reynolds number
\begin{equation}
C_D = 24 (1 + 0.15 Re_p^{0.687})/Re_p
\end{equation}
where the particle Reynolds number is given by
\begin{equation}
Re_p = \frac{\rho_a v d}{\mu}
\end{equation}
where $\mu$ is the viscosity of air. \\
The particle acceleration is given by
\begin{equation}
\frac{\partial \vec{u}_p}{\partial t} = \frac{18 \mu}{\rho_p d^2} \frac{C_D Re_p}{24} \left(\vec{u} - \vec{u}_p \right) + \frac{\vec{g} (\rho_p - \rho)}{\rho_p}
\end{equation}
The first term on the right hand side is the drag force per unit mass, where $\vec{u}$ is the velocity of the air, $\vec{u}_p$ is the droplet velocity, and $\mu$ is the viscosity of the air. A Discrete Random Walk model is used
to account for the effect of turbulent dispersion on the particle
trajectories. This includes the fluctuating component of the velocity
due to turbulence.
The following velocity component is added to the local air velocity
\begin{equation}
\vec{u}_{k} = \xi \sqrt{2k/3}
\end{equation}
where $\xi$ is a normally distributed random number that accounts for the randomness of turbulence about a mean value, and $k$ is the turbulent kinetic energy of the fluid \cite{redrow2011modeling, hathway2011cfd, li2018modelling}.
The droplet positions are updated using the velocity verlet method, taking the fluid velocity from the linear interpolation of the lattice Boltzmann velocity field \cite{swope1982computer}.
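The droplet update can be sketched compactly. The Python below is our own simplified illustration: it uses a plain explicit Euler step rather than the velocity Verlet scheme of the paper, folds $C_D Re_p/24$ into the Schiller--Naumann factor $1 + 0.15\,Re_p^{0.687}$ to avoid dividing by $Re_p$, and adds the Discrete Random Walk term only when the turbulent kinetic energy $k$ is nonzero; the air properties are illustrative values.

```python
import numpy as np

MU_A = 1.8e-5     # Pa s, air viscosity (illustrative value)
RHO_A = 1.2       # kg/m^3, air density (illustrative value)
G = np.array([0.0, -9.81, 0.0])

def droplet_accel(u_air, u_p, d, rho_p=1000.0):
    """Drag per unit mass (with C_D Re/24 = 1 + 0.15 Re^0.687) plus
    buoyancy-corrected gravity."""
    rel = u_air - u_p
    Re = RHO_A * np.linalg.norm(rel) * d / MU_A
    drag = 18 * MU_A / (rho_p * d**2) * (1 + 0.15 * Re**0.687) * rel
    return drag + G * (rho_p - RHO_A) / rho_p

def step(x, u_p, u_air, d, dt, k=0.0, rng=None):
    """One explicit time step; the DRW term xi*sqrt(2k/3) perturbs the
    local air velocity when the turbulent kinetic energy k is nonzero."""
    if k > 0.0:
        u_air = u_air + rng.standard_normal(3) * np.sqrt(2 * k / 3)
    u_p = u_p + droplet_accel(u_air, u_p, d) * dt
    return x + u_p * dt, u_p

# settle a 10 micron droplet in still air to its terminal velocity
d = 10e-6
x, u_p = np.zeros(3), np.zeros(3)
for _ in range(2000):
    x, u_p = step(x, u_p, np.zeros(3), d, dt=1e-5)
v_terminal = -u_p[1]
v_stokes = 9.81 * d**2 * (1000.0 - RHO_A) / (18 * MU_A)
```

For a 10 $\mu$m droplet the particle Reynolds number is tiny, so the settled velocity should agree with the Stokes terminal velocity $g d^2 (\rho_p - \rho)/(18\mu)$ of a few mm/s to within a fraction of a percent.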
\begin{figure}
\begin{center}\includegraphics[width=0.5\linewidth]{Figure1.png}\end{center}
\caption{Streamlines of fluid flow through a neck gaiter and a face mask. The velocity of the fluid, relative to the maximum velocity of 5 m/s, is depicted by the color of the streamlines.}
\end{figure}
\newpage
\section{Results}
We simulate the efficacy of a standard face mask, a neck gaiter and a face shield. The neck gaiter is captured using the Lattice Spring Model. The face mask geometry is designed to be a relatively good fit, with a small leakage on either side of the nose. The face shield is considered as an impermeable barrier that encircles the face.\\
The velocity at the mouth is set to 5 m/s, and the neck gaiter has a porosity of 0.1 and a thickness of 0.1 mm.
The mask is less porous and thicker than the neck gaiter: its porosity is taken to be 0.05 and its thickness is 0.25 mm.
The porosity of a node in the Lattice Boltzmann method is taken to be
\begin{equation}
p = \frac{L}{\sum (L_i/p_i)}
\end{equation}
where $L$ is the grid size, $L_i$ is the thickness of a layer (of either the mask or the air) and $p_i$ is the porosity of the layer (1 for just air).
This assumes the layers of the mask are always perpendicular to the direction of flow, in terms of how the sub-grid porosity is handled, but allows us to calculate the flow of fluid through the permeable mask.\\
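The node porosity above is a thickness-weighted harmonic mean over the layers in a cell. A one-function Python version of our own (the cell size and layer values below are the ones quoted in the text):

```python
def node_porosity(L, layers):
    """p = L / sum_i (L_i / p_i); 'layers' lists (thickness, porosity) of the
    covering layers inside the cell, the remaining thickness is air (p = 1)."""
    solid = sum(Li for Li, _ in layers)
    return L / (sum(Li / pi for Li, pi in layers) + (L - solid))

# a 1 mm cell crossed by the 0.25 mm mask layer of porosity 0.05
p_cell = node_porosity(1.0, [(0.25, 0.05)])
```

For these numbers the cell porosity is $1/(0.25/0.05 + 0.75) = 1/5.75 \approx 0.174$, i.e. the thin low-porosity layer dominates the cell-averaged resistance.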
In Figure 1, the velocity of the particles is illustrated as they flow through and around the gaiter (Figure 1.A) and the mask (Figure 1.B). The face shield is not included, as the flow simply moves up and down along the face shield. The colors of the flow indicate the velocity of the particles, with red being those particles moving the fastest and violet the slowest.
The fluid simulations are run for 25,000 time steps (a real time of 0.5 s) to establish a typical flow regime through and around the face coverings.
The neck gaiter allows some of the fluid to flow through the material, although the velocity is significantly reduced and spread out over the area of the neck gaiter rather than being focused at the mouth opening. In addition, a large amount of fluid takes the path of least resistance and leaks through a gap between the nose and the cheek. As the neck gaiter wraps around the head, this is the only source of leakage.
The face mask, on the other hand, allows less of the fluid to permeate through the mask. The leakage is confined to a similar area to that of the neck gaiter, although in reality the areas of leakage might be expected to extend all the way around the mask, with leakage out of the sides of face masks common \cite{dbouk2020respiratory}.
Interestingly, the fluid flow around the face mask could cause infection to occur on the outside of the mask \cite{bae2020effectiveness}.\\
The emission of 10,000 respiratory droplets (with a uniform distribution between 1 $\mu$m and 100 $\mu$m) at the same time is simulated given the flow profiles from the Lattice Boltzmann method. In Figure 2, the distribution of respiratory droplets after 0.2 s is illustrated for both the gaiter (Figure 2.A) and the mask (Figure 2.B). The larger droplets are represented in brighter colors (red and yellow), with smaller droplets illustrated in darker colors (violet).
The filtration efficiency of the mask is assumed to be linear in droplet radius, with a value of 0.5 at a radius of 0 and a filtration efficiency of 1 for droplets of radius 10 $\mu$m. The gaiters are considered to be less efficient at filtering the respiratory droplets: the filtration efficiency is 0 for a droplet radius of 0, and 1 for a droplet of radius 20 $\mu$m \cite{zangmeister2020filtration, aydin2020performance, bagherifiltration}.
Note that the larger droplets (reds and yellows) are less likely to be emitted as a person speaks, but they would also be expected to have a significantly larger volume and carry more pathogens than small droplets.
Both fabrics are assumed to stop larger particles, but the mask has a higher filtration efficiency, both in terms of stopping smaller droplets (whereas the neck gaiter is assumed to offer no filtration of very small droplets) and the fact that the mask stops all droplets larger than 10 $\mu$m (as opposed to the neck gaiter, which only stops droplets larger than 20 $\mu$m). This can be seen in the projection of smaller droplets through the neck gaiter. However, the mask redirects more of the fluid flow through leakages between the mask and the wearer's face. As smaller droplets are more likely to follow this fluid flow, there are a significant number of smaller droplets emitted around the face mask.\\
\begin{figure}
\begin{center}\includegraphics[width=0.5\linewidth]{Figure2.png}\end{center}
\caption{The distribution of respiratory droplets after 0.2 s of being released from the mouth. The size of the droplets is depicted by the color of the droplets.}
\end{figure}
The fraction of droplets that leak around the face coverings is plotted as a function of droplet size in Figure 3a.
This is the fraction of droplets that are neither collected nor pass through the face coverings, but rather move around the face coverings.
For comparison, the face shield is included, which shows that the smaller droplets move around the face shield and do not interact with it at all. Larger droplets, which are much less likely to be emitted anyway, are less deflected by the surrounding air flow and impact the face shield (the leakage fraction goes to zero).
The mask is closer to the face than the face shield and does stop some droplets of all sizes; however, for small droplets, around 75\% of the respiratory droplets are emitted around the mask. This is because the mask is less permeable to air flow, and more air is directed through the leakage, taking the smaller droplets with it.
The neck gaiter, however, is more permeable, and an appreciable amount of flow passes through the neck gaiter material. As a consequence, less than 20\% of the droplets (even among the smaller droplets) leak around the covering without interacting with the neck gaiter in some form.\\
\begin{figure}
\begin{center}\includegraphics[width=0.5\linewidth]{Figure3.png}\end{center}
\caption{a) The fraction of respiratory droplets that are transported around the mask, gaiter or face shield, as the air flow leaks out around the coverings, as a function of droplet size. b) The collection efficiency (fraction collected times probability of droplet expulsion) of the gaiter, mask and face shield as a function of respiratory droplet size.}
\end{figure}
The efficacy of the face coverings is depicted in Figure 3b. In particular, the collection efficiency is plotted as a function of the droplet radius. Note that the collection efficiency is typically presented as the fraction of respiratory droplets collected by the face covering. Here, however, we present the fraction of respiratory droplets collected by the face covering relative to the distribution of droplets emitted. In other words, the fraction of droplets emitted at around 8 $\mu$m is taken to be 1, and the fraction at other sizes is taken from the distribution of Duguid \cite{duguid1946size}. Therefore, the collection efficiency goes to zero for very small or very large droplets, as the probability of droplets being emitted at these sizes is much smaller.
It can be seen that the collection efficiency for the face shield is very small, because all but the largest droplets follow the air flow around the face shield.
The neck gaiter and the face mask interestingly have similar collection efficiencies.
For smaller droplets the face mask is more likely to allow these droplets to leak around the face mask, while the neck gaiter is more likely to allow these droplets to pass through the fabric.
The main difference is that at smaller droplets (around 1 $\mu$m) the neck gaiter offers very little filtration efficiency, while at medium sized droplets (around 10 $\mu$m) the mask will allow these to easily leak around the mask.
Therefore, the neck gaiter might be expected to produce smaller droplets in a mist around the face covering, while the face mask will produce jets of air with slightly larger droplets directed out around the mask.\\
This is consistent with Drewnick \emph{et al.} who recently showed how leakage can have a dramatic effect on the efficacy of face coverings \cite{drewnick2020aerosol}.
In addition, Lindsley \emph{et al.} observed the same effects, finding that cloth face masks blocked 51\%, polyester neck gaiters blocked 47\%, and face shields blocked 2\% of respiratory droplets.
Similar proportions are reported here, although it is worth noting that while the qualitative mechanisms discussed are likely to be the same, the fabric filtration efficiencies, the porosity, and the amount of leakage around the face coverings will play a large role.\\
\section{Summary and Conclusions}
At the outset of this manuscript, we presented a brief summary of the current state of personal protective equipment and of attempts to mitigate the spread of COVID-19. The importance of facial coverings cannot be overstated in mitigating the spread of COVID-19 in particular and airborne viruses in general. As mentioned, one study suggests that the economic benefit to society of each additional mask worn is upwards of \$6,000. However, in order to get broad compliance on the wearing of masks, they need to be economically accessible to all and comfortable, without increasing our impact on the environment. Cloth gaiters fit all of the aforementioned criteria: they are reusable, comfortable, and economically accessible.\\
We have demonstrated that neck gaiters may be at least as effective as more commonly accepted cotton face masks. In particular, we captured the fluid dynamics, and the transport of respiratory droplets, through and around various face coverings. We elucidated the role of droplet size on the collection efficiency and leakage fraction. We found that the neck gaiter was at least as effective as the cloth mask, and both were far superior to a face shield. The confirmation that the gaiter is as efficient as regular cloth masks may provide decision-makers with information to encourage individuals to use this intervention to help mitigate the spread of COVID-19. \\
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
Nucleon electromagnetic form factors (FFs) play a key role in
understanding hadronic dynamics. They describe the coupling
between a photon and a nucleon pair, and represent the only
measurable quantities connected to the nucleon quark structure.
Physical FFs are defined as limit values for real $q^2$ ($q$
is 4-momentum transferred by the photon) of Lorentz scalar functions
that are analytic in the $q^2$ complex plane with a cut along the
real axis (time-like region), from the theoretical threshold
$\qth\equiv (2M_\pi)^2$ up to infinity. In the space-like
region ($q^2<0$), where they are real functions, FF values can
be extracted from the differential cross section of the elastic
scattering $eN\to eN$ ($N$ stands for nucleon). In the time-like
region, starting from the theoretical threshold, FFs become complex
functions, their moduli can be extracted above the physical
threshold $\qph\equiv (2M_N)^2$, studying the
angular distribution of the $\ee\to\nn$ cross
section.
The FF asymptotic behavior predicted by the perturbative
QCD is a power law~\cite{asy} that can be obtained either in terms of
dimensional considerations, or as a consequence of a minimal
gluon exchange among the constituent quarks, needed to
share the photon transfer momentum.
More in detail, disregarding logarithmic corrections of the strong coupling
constant, the QCD power laws in the space-like limit $q^2\to-\infty$
for the nucleon FFs are
\bea
\begin{array}{l}
\left .\begin{array}{l}
F_1\sim (-q^2)^{-2}\,, \hspace{5mm} F_2(q^2)\sim (-q^2)^{-3}
\end{array}\right.\\
\\
\left. \begin{array}{l}
G_E(q^2)=F_1(q^2)+\displaystyle\frac{q^2}{4M_N^2}F_2(q^2)\\
G_M(q^2)=F_1(q^2)+F_2(q^2)\end{array}\right\}\sim (-q^2)^{-2}\,,\\
\end{array}
\label{eq:ffs}
\eea
where $F_1$ and $F_2$ are the Dirac and Pauli FFs, which account
for the non-spin flip and the spin flip part (further suppression
factor $1/q^2$) of the nucleon electromagnetic current.
$G_E$ and $G_M$ are, instead, the so-called electric and magnetic
Sachs FFs; in the Breit frame (for small space-like transfer momenta)
they represent the Fourier transforms of
the charge and magnetization distributions in the nucleon.
\section{Dispersion Relations}
\label{sec:dr}
Dispersion relations (DRs) allow one to connect values of an analytic function
in different regions of its domain. Taking advantage of the analyticity
and vanishing asymptotic behavior of FFs (see Sec.~\ref{sec:intro}),
we may define the integral relation
\bea
G(q^2)=\frac{1}{\pi}\int_{\qth}^\infty\frac{\im\,G(s)}{s-q^2}ds
\label{eq:dr-im}
\eea
with $q^2\le\qth$, for a generic FF $G(q^2)$. This DR states that
real values of $G(q^2)$ can be obtained at any $q^2<\qth$ by integrating
the imaginary part over the upper edge of the time-like cut
$(\qth,\infty)$. However, to use eq.~(\ref{eq:dr-im}) in the case
of nucleons, we have to face two serious issues:
\bi
\item the imaginary part of FFs is not measurable; one
must rely on phenomenological and non-rigorous techniques to
extract it from cross section data;
\item even if the imaginary part were obtained from the data,
its values could cover only a portion of the integration interval,
starting from the physical threshold \qph. The so-called
``unphysical region'', $(\qth,\qph)$, where we expect the main
contributions from intermediate light-meson resonances,
is not experimentally accessible.
\ei
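Before turning to these issues, it is instructive to verify eq.~(\ref{eq:dr-im}) numerically on a toy FF with a known cut. The Python sketch below is our own illustration (not related to the proton data): it takes $G(z)=1/\sqrt{\qth-z}$, whose imaginary part on the upper edge of the cut is $1/\sqrt{s-\qth}$, and checks that the dispersive integral reproduces the space-like values.

```python
import numpy as np
from scipy.integrate import quad

s_th = 4 * 0.1396**2                 # toy threshold, (2 m_pi)^2 in GeV^2

def G_toy(q2):
    """Analytic toy FF with a cut on (s_th, inf): G = 1/sqrt(s_th - q2)."""
    return 1.0 / np.sqrt(s_th - q2)

def G_from_DR(q2):
    """Dispersive reconstruction (1/pi) Int Im G(s)/(s - q2) ds with
    Im G(s) = 1/sqrt(s - s_th); the substitution s = s_th + u^2 removes
    the integrable square-root singularity at threshold."""
    val, _ = quad(lambda u: 2.0 / (u**2 + (s_th - q2)), 0.0, np.inf)
    return val / np.pi
```

For this toy case the integral can also be done by hand, giving back $1/\sqrt{\qth-q^2}$ exactly, so the numerical check is tight.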
To get around these problems we use an idea proposed for the first
time in 1974 in Ref.~\cite{ioffe}.
\section{Sum rule}
\label{sec:sumrule}
The idea~\cite{ioffe} consists in using the DR of
eq.~(\ref{eq:dr-im}) for the function:
\bea
\phi(z)=A_L(z,s)\frac{\ln G(z)}{\sqrt{\qth-z}},
\label{eq:phi}
\eea
where $A_L(z,s)$, with $s$ real and positive, is an
analytic function used to suppress the FF contribution
in the time-like region $(0,s)$; it can be chosen
by requiring
\bea
\int_0^{s}A_L(z,s)^2dz\ll 1\,.
\no
\eea
Following the suggestion given in Ref.~\cite{ioffe} we
use the definition
\bea
A_L(z,s)=\sum_{l=0}^L\frac{2l+1}{(L+1)^2}P_l\left(1-
2\frac{\sqrt{s}-\sqrt{z}}{\sqrt{s}+\sqrt{z}}\right)\,,
\label{eq:al}
\eea
where $P_l$ is the Legendre polynomial of degree $l$,
while the upper limit $L$ represents an
``attenuation-power indicator''. Following the definition
of eq.~(\ref{eq:al}), $A_L(z,s)$ is analytic in $z$
with a cut along the whole negative real axis.
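The suppression property of $A_L$ is easy to check numerically. The short Python sketch below is our own illustration: it builds $P_l$ from the three-term recurrence, evaluates $A_L(z,s)$ on $(0,s)$, and confirms that $\int_0^s A_L^2\,dz$ shrinks as $L$ grows while the normalization $A_L(s,s)=1$ is preserved.

```python
import math

def legendre(l, x):
    """P_l(x) from the recurrence (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def A_L(z, s, L):
    """sum_l (2l+1)/(L+1)^2 P_l(1 - 2(sqrt s - sqrt z)/(sqrt s + sqrt z))."""
    t = 1.0 - 2.0 * (math.sqrt(s) - math.sqrt(z)) / (math.sqrt(s) + math.sqrt(z))
    return sum((2 * l + 1) * legendre(l, t) for l in range(L + 1)) / (L + 1) ** 2

def sq_norm(s, L, n=2000):
    """Midpoint-rule estimate of the suppression integral int_0^s A_L^2 dz."""
    return sum(A_L((i + 0.5) * s / n, s, L) ** 2 for i in range(n)) * s / n
```

At $z=s$ the Legendre argument equals 1 and every $P_l$ equals 1, so $A_L(s,s)=\sum_l (2l+1)/(L+1)^2 = 1$ for any $L$, while the integral over $(0,s)$ decreases as $L$ increases, which is exactly the attenuation exploited in the sum rule.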
The imaginary part of $\phi(x)$ is then ($x$ is real)
\bea
\im\,\phi(x)=\left\{
\begin{array}{ll}
\displaystyle\frac{\im\,A_L(x,s)\ln G(x)}{\sqrt{\qth-x}}
& x\le 0\\
&\\
0 & 0<x<\qth\\
&\\
\displaystyle\frac{A_L(x,s)\ln |G(x)|}{\sqrt{x-\qth}}
& x\ge \qth\,.\\
\end{array}\right.
\label{eq:im-part}
\eea
We consider the proton magnetic FF normalized to its magnetic moment
$\mu_p$, i.e., $G(q^2)=G_M^p(q^2)/\mu_p$. Since this function has
no poles (from analyticity) and no zeros (our assumption)
in the $q^2$ complex plane, the function $\phi(z)$
(see eq.~(\ref{eq:phi})) is still analytic, and the DR of
eq.~(\ref{eq:dr-im}) now reads
\bea
\phi(q^2)=\frac{1}{\pi}\int^{0}_{-\infty}\frac{\im\,\phi(t)}{t-q^2}dt
+\frac{1}{\pi}\int_{\qth}^\infty\frac{\im\,\phi(s)}{s-q^2}ds\,,
\no
\eea
for $0<q^2<\qth$.
In particular at $q^2=0$, having the normalizations $G(0)=1$ and
$\phi(0)=0$, the previous relation becomes the identity
\bea
\int^{0}_{-\infty}\frac{\im\,\phi(t)}{t}dt=
-\int_{\qth}^\infty\frac{\im\,\phi(s)}{s}ds\,,
\eea
in terms of $A_L$ and the FF, i.e., using
the definition of eq.~(\ref{eq:im-part}), we have
\bea
\int_{-\infty}^0\!\!\!\!\!\!\frac{\im A_L(t,s)\ln G(t)}{t\sqrt{\qth-t}}dt
&\!\!\!\!\!\!=\!\!\!\!\!\!&
\!\!\!-\!\!\!\int_{\qth}^{\infty}\!\!\!\!\!\!\frac{A_L(s',s)\ln|G(s')|}{s'\sqrt{s'-\qth}}ds'
\no\\
&&\label{eq:eq}\\
&\!\!\!\!\!\!\simeq\!\!\!\!\!\!&
\!\!\!-\!\!\!\int_{s}^{\infty}\!\!\!\!
\frac{A_L(s',s)\ln|G(s')|}{s'\sqrt{s'-\qth}}ds'\,.
\no
\eea
The approximate identity of eq.~(\ref{eq:eq}) holds thanks to the attenuation
in the region $(0,s)$ provided by the function $A_L(z,s)$.
Finally the sum rule we want to use is obtained from eq.~(\ref{eq:eq})
in the special case with $s=\qph$
\bea
\int_{-\infty}^0\!\!\!\!\!\!\frac{\im A_L(t,\qph)\ln G(t)}{t\sqrt{\qth-t}}dt
\simeq&&\no\\
&&\hspace{-25mm}\!\!\!-\!\!\!\int_{\qph}^{\infty}\!\!\!\!\!\!
\frac{A_L(s',\qph)\ln|G(s')|}{s'\sqrt{s'-\qth}}ds'\,.
\label{eq:sumrule}
\eea
This identity involves only measurable quantities, i.e., real values
of the proton magnetic FF in the space-like region (left-hand side),
and its modulus, above the physical
threshold, in the time-like region (right-hand side).
\section{Check for the asymptotic behavior}
\label{sec:check}
We verify the compatibility of space and time-like data
on $G_M^p(q^2)$ with the QCD power law behavior given
in eq.~(\ref{eq:ffs}), using the ``space-time connection''
provided by the sum rule of eq.~(\ref{eq:sumrule}).
More in detail, in the space-like region we define
the real FF as a combination of a fit to several data
sets~\cite{sl-data} (247 data points) and a power law, i.e.
\bea
G_{\rm SL}(q^2)=\left\{
\begin{array}{ll}
G^{\rm fit}_{\rm SL}(q^2) & q^2_{\rm min} \le q^2\le 0\\
&\\
G^{\rm fit}_{\rm SL}(q^2_{\rm min})(q^2_{\rm min}/q^2)^n
& q^2 \le q^2_{\rm min}\,,\\
\end{array}\right.\label{eq:sl}
\eea
where $q_{\rm min}^2\sim -30\,{\rm GeV}^2$ is the energy of the lowest data point.
In the time-like region the situation is more troublesome:
indeed,
all data are on the modulus of an effective FF,
which corresponds to $|G_M^p(q^2)|$ only when $|G_M^p(q^2)|=|G_E^p(q^2)|$,
and that happens, by definition (see eq.~(\ref{eq:ffs})), solely at $q^2=\qph$.
Hence, to extract genuine values of $|G_M^p(q^2)|$ we used
the BaBar data on the $\ee\to\ppp$ total cross section and angular
distribution~\cite{tl-data} together with
a dispersive technique to disentangle $|G_E^p(q^2)|$ and
$|G_M^p(q^2)|$~\cite{noi}.
Similarly to what we did in the space-like region,
the modulus of the FF in the time-like region is defined as
\bea
G_{\rm TL}(q^2)=\left\{
\begin{array}{ll}
G^{\rm fit}_{\rm TL}(q^2) & 0 \le q^2\le q^2_{\rm max}\\
&\\
G^{\rm fit}_{\rm TL}(q^2_{\rm max})(q^2_{\rm max}/q^2)^n
& q^2 \ge q^2_{\rm max}\,,\\
\end{array}\right.
\label{eq:tl}
\eea
where $q_{\rm max}^2\sim 20\,{\rm GeV}^2$ is the energy of the highest BaBar data point.
In both definitions, eq.~(\ref{eq:sl}) and eq.~(\ref{eq:tl}), we used the
same power law as a consequence of the Phragm\'en-Lindel\"of theorem~\cite{pl}.
This theorem states that not only the power $n$, which rules the asymptotic
behavior, but also the limit must be the same, i.e.
\bea
\lim_{q^2\to -\infty}G_{\rm SL}(q^2)=\lim_{q^2\to +\infty}G_{\rm TL}(q^2)\,.
\no
\eea
It follows that the sum rule of eq.~(\ref{eq:sumrule}), once all the
theoretical and experimental information has been included, becomes an
equation with only one unknown, the power $n$, and the result we obtained
is
\bea
n=2.27\pm 0.36\,.
\no
\eea
Figure~\ref{fig:ris} summarizes this result. The shaded bands represent
the fit functions in the data regions and the power laws at high space
and time-like energies ($q^2< -30\,{\rm GeV}^2$ and $q^2> 20\,{\rm GeV}^2$).
The lined central area is the unphysical region, which does not contribute
to the sum rule, being suppressed by the function $A_L(z,\qph)$.
\begin{center}
\includegraphics[width=80mm]{ferrra.eps}
\figcaption{\label{fig:ris} Modulus of the normalized proton magnetic FF in space-like
(left) and time-like (right) region. The bands represent the obtained
description (see text), the points are the data~\cite{sl-data,tl-data,noi}.
The lined central area is the unphysical region.}
\end{center}
In conclusion, using the sum rule of eq.~(\ref{eq:sumrule}), based on the
analyticity properties of FFs, we have shown that the experimental data
in the space and time-like regions are consistent with the QCD asymptotic
behavior. In particular, we found a power law for $G_M^p(q^2)$ which is
in good agreement with the perturbative QCD expectation.
\end{multicols}
\vspace{-2mm}
\centerline{\rule{80mm}{0.1pt}}
\vspace{2mm}
\begin{multicols}{2}
\section{Introduction}
The study of dynamics of rational functions in one complex variable enjoys a long and celebrated history, which stretches back to classical work of Fatou, Julia and others \cite{fat1, fat2, jul}. In this setting, one has a field $K$ and a rational function $f\in K(x)$, which one regards as a regular self-map from the projective line over $K$ to itself, and one then asks questions about the orbits of points under iteration of the map $f$.
A key tool in understanding the dynamics of rational maps on $\mathbb{P}^1$ is the study of their periodic points (i.e., fixed points of given iterates) and their respective basins of attraction. Given a fixed point of a map, an important method for analyzing the behaviour of nearby points comes from analytic uniformization techniques. In the case of $p$-adic maps there is an appealing trichotomy due to Rivera-Letelier \cite{RL}, building upon earlier work of Herman and Yoccoz \cite{HY}: if $p$ is prime, $\mathbb{C}_p$ is the completion of an algebraic closure of $\mathbb{Q}_p$, and $f(z)=\lambda z+ \sum_{i\ge 2} a_i z^i \in \mathbb{C}_p[[z]]$ is a nonzero power series with $|\lambda|_p, |a_i|_p\le 1$ for all $i$, then $f$ has three possible types of analytic uniformization, dictated by whether the map $f$ is indifferent, attracting, or superattracting near the fixed point $z=0$.
More precisely, Rivera-Letelier \cite{RL} shows the following trichotomy holds.
\begin{enumerate}
\item[(a)] (\emph{Indifferent case}) If $|\lambda|_p=1$, then for $c\in \mathbb{C}_p$ sufficiently close to zero there exist an integer $a>0$ and power series $u_0(z),\ldots ,u_{a-1}(z)$ that converge on the unit disc such that $f^{an+i}(c)=u_i(n)$.
\item[(b)] (\emph{Attracting case}) If $0<|\lambda|_p<1$, then there is $r\in (0,1)$ and a power series $u(z)\in \mathbb{C}_p[[z]]$ that maps the closed disc $\overline{B(0,r)}$ bijectively to itself such that $f^n(z)=u(\lambda^n u^{-1}(z))$;
\item[(c)] (\emph{Superattracting case}) If $\lambda=0$, then there is some $m\ge 2$ and some $u(z)$ that bijectively maps a disc $\overline{B(0,r_1)}$ to another disc $\overline{B(0,r_2)}$ for some $r_1,r_2>0$ such that $f^n(z)=u((u^{-1}(z))^{m^n})$.
\end{enumerate}
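In the simplest indifferent example, $f(z)=\lambda z$ with $|\lambda|_p=1$ and $\lambda\equiv 1 \pmod p$, case (a) is completely explicit: $\lambda^n=(1+p)^n=\sum_k \binom{n}{k}p^k$, and since the terms with $k\ge K$ are divisible by $p^K$, the truncation of this Mahler-type series is a polynomial in $n$ that interpolates the orbit modulo $p^K$. The Python check below is our own illustration of this, for $p=5$ (it is not part of the results discussed here):

```python
from math import comb

p, K = 5, 8                  # prime and working p-adic precision p^K
lam = 1 + p                  # |lam|_p = 1, lam = 1 (mod p): indifferent case

def orbit_value(n, c=1):
    """n-th point of the orbit of c under f(z) = lam * z, reduced mod p^K."""
    return (c * pow(lam, n, p**K)) % p**K

def interpolated(n, c=1):
    """Truncated series lam^n = sum_k binom(n, k) p^k: the terms with k >= K
    are divisible by p^K, so the truncation is exact modulo p^K."""
    return (c * sum(comb(n, k) * p**k for k in range(K))) % p**K
```

Here the interpolating "power series in $n$" is a polynomial of degree $<K$ in $n$, which converges $p$-adically on the closed unit disc as $K\to\infty$; this is the phenomenon that case (a) generalizes to arbitrary indifferent fixed points.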
Rivera-Letelier's characterization of possible analytic uniformizations of $p$-adic analytic maps has played an important role within arithmetic dynamics over the past fifteen years. It should be noted, however, that for many number theoretic questions concerning orbits of self-maps, it is much more desirable to work with maps that fall into case (a) of Rivera-Letelier's trichotomy. The reason for this is that in these cases one obtains a natural $p$-adic interpolation of orbits of points near the fixed point, and so one can use tools from $p$-adic analysis to answer questions about points in the orbit. On the other hand, case (c) only really gives information about the rate of convergence of points in the orbit to the fixed point, and it can be more difficult to glean number theoretic information from this in practice.
If one restricts one's focus to a fixed prime $p$ and looks at $p$-adic dynamics then cases (b) and (c) are at times unavoidable, but if one is only interested in the orbit of a point $c$ in a characteristic zero field $K$ under a rational map $f\in K(x)$, then in practice one can work instead with a finitely generated extension $K_0$ of $\mathbb{Q}$ inside $K$ over which $f$ and $c$ are both defined and one can then try to find a favourable prime $p$ and an embedding of $K_0$ into $\mathbb{C}_p$ so that case (a) of Rivera-Letelier's trichotomy applies to the orbit of the image of the point $c$. In fact, we are able to show that there are infinitely many such primes for which one can embed $K_0$ into $\mathbb{C}_p$ such that after replacing $f$ by a suitable iterate we can $p$-adically interpolate the orbit of the image of $c$ with a $p$-adic analytic map.
\begin{thm} Let $L$ be a field of characteristic zero and let $h:\mathbb{P}^1\to \mathbb{P}^1$ be a rational map defined over $L$ and let $c\in \mathbb{P}^1(L)$. Then there exists a finitely generated extension $K$ of $\mathbb{Q}$ over which both $c$ and $h$ are defined along with an infinite set of inequivalent non-archimedean completions $K_{\mathfrak{p}}$ such that
there exists a positive integer $a=a(\mathfrak{p})$ with the property that for $i\in \{0,\ldots ,a-1\}$ there exists a power series $h_i(t)\in K_{\mathfrak{p}}[[t]]$ that converges on the closed unit disc of $K_{\mathfrak{p}}$ such that $h^{an+i}(c)=h_i(n)$ for all sufficiently large $n$.
\label{thm:main}
\end{thm}
We in fact prove a stronger result than the one given in the statement of Theorem \ref{thm:main} (see Remark \ref{rmk33}). We also mention that related interpolation results appear in \cite[\S4]{BGHKST}, which deals with the case of split self-maps of $(\mathbb{P}^1)^m$. Due to the more general setting considered by the authors in \cite{BGHKST}, stronger conditions on the maps involving critical points being preperiodic are necessarily imposed and it does not seem possible to obtain Theorem \ref{thm:main} in its full generality from these related results.
In general, we cannot expect to do better than interpolating tails of orbits along arithmetic progressions, since there are points whose orbits are preperiodic under certain rational maps, and an analytic map that is constant on an infinite subset of the closed $p$-adic ball is necessarily constant by Strassman's theorem. The progressions arise in the proof of Theorem \ref{thm:main} since, in order to apply interpolation results, we must first replace
$h$ by a suitable iterate $h^a$ and replace the starting point $c$ with $h^m(c)$ for some $m\ge 0$. For this reason, one cannot eliminate the dependence of the integer $a$ on $\mathfrak{p}$ in the statement of Theorem \ref{thm:main}.
Once one can interpolate the orbit $\{h^{an}(c_m)\}_{n\ge 0}$ with an analytic map as in the statement of Theorem \ref{thm:main}, one obtains an interpolation for $h^{an+b}(c_m)$ for $b\in \{0,\ldots ,a-1\}$ by applying the rational map $h^b$ to the analytic map interpolating $\{h^{an}(c_m)\}_{n\ge 0}$.
While Rivera-Letelier's results give a strong and very useful trichotomy for studying the dynamics of rational maps over a $p$-adic field, for many arithmetical applications it is often more desirable to have a map whose dynamics fall into the first two cases of Rivera-Letelier's trichotomy, since one can use the parametrization of the orbit by an analytic map to draw conclusions about the map (an idea apparently first applied by Skolem \cite{Sko}, which has since been used in many other works \cite{Am2, Am, BGT10, BGT2, BSS, BGHKST, BGKT, CX, GX, LL}). If one works over a fixed $p$-adic field, however, then one cannot guarantee that the orbit avoids the third case of Rivera-Letelier's trichotomy. We show that after working over a suitable finitely generated extension $K$ of $\mathbb{Q}$, over which our point and self-map are both defined, we can find many non-archimedean completions of $K$ for which we obtain an analytic interpolation of our orbit as in the statement of Theorem \ref{thm:main}.
In recent years, one of the most important applications of $p$-adic interpolation techniques has been to settle cases of the so-called dynamical Mordell-Lang conjecture, which can be viewed as a natural dynamical analogue of the cyclic case of the classical Mordell-Lang conjecture and was first formulated in \cite{GT}. The classical Mordell-Lang conjecture was settled in a series of works by Faltings \cite{Fal}, Vojta \cite{Voj}, and McQuillan \cite{McQ}.
\begin{conj}{(The dynamical Mordell-Lang Conjecture)}
Let $X$ be a complex quasiprojective variety and let $\Phi$ be a rational self-map of $X$. Given $c\in X$ with the property that the forward orbit of $c$ under $\Phi$ avoids the indeterminacy locus of $\Phi$ and a Zariski closed subset $Y\subseteq X$, the set of $n \in \mathbb{N}$ such that $\Phi^n(c) \in Y$ is a union of finitely many infinite arithmetic progressions augmented by a finite set.
\label{conj:DML}
\end{conj}
We note that it is possible to have an empty union of infinite arithmetic progressions in Conjecture \ref{conj:DML}, in which case the orbit of $c$ has finite intersection with $Y$; it is also possible for the finite set to be empty.
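For a toy illustration of the shapes the return set in Conjecture \ref{conj:DML} can take (a sketch of our own, not taken from the paper), consider the involution $\Phi(x)=1/x$ on $\mathbb{P}^1$, under which every orbit away from $\{0,\pm 1\}$ is $2$-periodic:

```python
from fractions import Fraction

def phi(x):
    # The involution x -> 1/x on P^1; orbits off {0, 1, -1} are 2-periodic.
    return 1 / x

def return_times(c, Y, N):
    """Indices n < N with phi^n(c) in Y, for Y a finite (Zariski closed) set."""
    times, x = [], c
    for n in range(N):
        if x in Y:
            times.append(n)
        x = phi(x)
    return times

c = Fraction(2)
print(return_times(c, {Fraction(2)}, 12))   # the even n: an arithmetic progression
print(return_times(c, {Fraction(3)}, 12))   # []: an empty union of progressions
```

For $Y=\{2\}$ the return set is the progression of even $n$; for $Y=\{3\}$ it is the empty union of progressions allowed by the conjecture.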
Conjecture \ref{conj:DML} is known in several cases, including when $\Phi$ is \'etale, when $X=\mathbb{A}^2$ and $\Phi$ is respectively an endomorphism \cite{Xie1} and a birational self-map \cite{Xie2}, and in several other cases \cite{BGKT, GX}. In the case when $\Phi$ is \'etale, the orbit of a point has many $p$-adic parametrizations along progressions, and this fact, combined with Theorem \ref{thm:main}, allows us to deduce that Conjecture \ref{conj:DML} holds for split maps $(h,g)$ of $\mathbb{P}^1\times X$ with $g$ \'etale.
\begin{cor} Let $X$ be a complex quasiprojective variety and let $g:X\to X$ be an \'etale self-map of $X$ and let $h\in \mathbb{C}(x)$. If $\Phi=(h,g): \mathbb{P}^1\times X\to \mathbb{P}^1\times X$, $c\in \mathbb{P}^1\times X$, and $Y\subseteq \mathbb{P}^1\times X$ is a Zariski closed subset, then $\{n\colon \Phi^n(c)\in Y\}$ is a finite union of infinite arithmetic progressions along with a finite set.
\label{cor:main}
\end{cor}
The outline of this paper is as follows. In \S\ref{inter}, we give general interpolation results, which apply to rational maps having at least four distinct non-superattracting fixed points. In \S\ref{thm}, we prove Theorem \ref{thm:main} by reducing to the case considered in \S\ref{inter}. In \S\ref{DML} we prove Corollary \ref{cor:main} and in \S\ref{conc} we make some concluding remarks, which show the difficulties with trying to extend our interpolation results to higher dimensions.
\section{$p$-Adic interpolation}\label{inter}
In this section we prove a special case of Theorem \ref{thm:main}. We let $L$ be a field of characteristic zero and we let $h(x)\in L(x)$, and we let $c\in L$. We wish to understand the orbit of $c$ under the self-map $h$.
As it turns out, the difficult case is when $h(x)$ has degree at least two. Then by a result of Fatou \cite{fat1,fat2,jul} we know that $h$ has at most $2{\rm deg}(h)-2$ periodic superattracting cycles (i.e., orbits of points $a\in \bar{L}$ with the property that $h^m(a)=a$ and $(h^m)'(a)=0$) and since $h$ has a Zariski dense set of periodic points, some iterate of $h$ will have at least four non-superattracting fixed points. Then, as the remarks after the statement of Theorem \ref{thm:main} show, we may replace $h$ by an iterate and assume that it has at least four non-superattracting fixed points. Moreover, we can conjugate $h$ by a suitable fractional linear transformation and assume that $\infty$ is fixed by $h$ and is not superattracting, which means that the degree of the numerator of $h$ is one greater than the degree of the denominator.
Throughout this section, we will thus assume that the above remarks apply to $h(x)$ and we write
\be
h(x) = p(x)/q(x), \gcd(p(x),q(x))=1,~{\rm deg}(p)=1+{\rm deg}(q).
\label{eq:1}
\ee
By factoring over the algebraic closure of $L$ we then have
\be
p(x) = C(x- \alpha_1)^{a_1}\cdots (x - \alpha_s)^{a_s},
\label{eq:3}
\ee
\be
q(x) = (x - \beta_1)^{b_1}\cdots (x - \beta_{t})^{b_{t}},
\label{eq:4}
\ee
\be
p(x) - xq(x) = C' (x - \gamma_1)^{c_1} \cdots (x - \gamma_u)^{c_u},
\label{eq:5}
\ee
\be
p'(x)q(x) - q'(x)p(x) = C'' (x - \delta_1)^{d_1} \cdots (x - \delta_v)^{d_v},
\label{eq:6}
\ee
where
\begin{equation}
\mathcal{T}:=\{ \alpha_1,\ldots, \alpha_s, \beta_1,\ldots, \beta_{t}, \gamma_1,\ldots, \gamma_u,\delta_1,\ldots, \delta_v\}\subseteq \overline{L},\end{equation}
and $\alpha_1,\ldots ,\alpha_s,\beta_1,\ldots ,\beta_{t}$ are pairwise distinct, $C,C',C''$ are nonzero, $\gamma_1,\ldots ,\gamma_u$ are pairwise distinct, and $\delta_1,\ldots ,\delta_v$ are pairwise distinct.
Since $h(x)$ has at least three non-superattracting fixed points in $\mathbb{P}^1\setminus \{\infty\}$, the number of distinct roots of $p(x)-x q(x)$ is at least $3$, and we may assume that $\gamma_1,\gamma_2,\gamma_3$ do not lie in $\{\delta_1,\ldots ,\delta_v\}$.
Since $h(x)$ is non-constant we also have $v>0$.
Consider the ring \be
\label{eq:7}R := \mathbb{Z}[c, 6^{-1}, C^{\pm 1}, (C')^{\pm 1}, (C'')^{\pm 1}, \mathcal{T}][\mathcal{S}^{-1}],\ee
where $\mathcal{S}$ is the union of the nonzero elements of $\mathcal{T}$ along with the set of elements that can be expressed as a difference of two distinct elements from $\mathcal{T}$. Inverting $6$ is not technically necessary, but we do this for convenience as the $p$-adic arguments that we will use are slightly cleaner for primes larger than $3$.
We now give a description of the orbit of the point $c$ under the map $h$ as a fraction of elements of $R$. We let
\be A_0=c,~ B_0=1.\ee Then for each $n$ we will give coprime elements $A_n,B_n$ of $R$ with the property that
$A_n/B_n = h^n(c)$.
To do this, for $n\ge 0$, we define
\begin{equation}
A_{n+1} = C(A_n-\alpha_1 B_n)^{a_1}\cdots (A_n-\alpha_s B_n)^{a_s} = p(A_n/B_n) \cdot B_n^{{\rm deg}(p(x))}
\label{eq:8}
\end{equation}
and
\begin{equation}
\label{eq:9}
B_{n+1} = B_n (A_n-\beta_1 B_n)^{b_1}\cdots (A_n-\beta_t B_n)^{b_t} = q(A_n/B_n) \cdot B_n^{{\rm deg}(p(x))}.
\end{equation}
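The homogenized recurrences (\ref{eq:8})--(\ref{eq:9}) are easy to check numerically. The following Python sketch (illustrative; the map $h(x)=(x^2+1)/x$ and starting point $c=2$ are our own choices, not from the paper) verifies that $A_n/B_n=h^n(c)$, where here $A_{n+1}=A_n^2+B_n^2$ and $B_{n+1}=A_nB_n$:

```python
from fractions import Fraction
from math import gcd

# Homogenized recurrence of Equations (8)-(9) for the illustrative map
# h(x) = (x^2 + 1)/x:  p(x) = x^2 + 1, q(x) = x, so deg p = 1 + deg q.
def step(A, B):
    # A_{n+1} = p(A/B) B^{deg p} = A^2 + B^2,  B_{n+1} = q(A/B) B^{deg p} = A B
    return A * A + B * B, A * B

def h(x):
    return (x * x + 1) / x

A, B = 2, 1                 # c = A_0 / B_0 = 2
x = Fraction(2)
for n in range(6):
    assert Fraction(A, B) == x      # A_n / B_n = h^n(c)
    A, B = step(A, B)
    x = h(x)
print(A, B, gcd(A, B))      # the pairs stay coprime in this example
```

Here the integer pairs $(A_n,B_n)$ happen to remain coprime over $\mathbb{Z}$; over the ring $R$, coprimality is exactly what the lemma below guarantees.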
Observe that
\begin{align} B_n A_{n+1} - A_n B_{n+1} &=
B_n B_{n+1} (h(A_n/B_n) - A_n/B_n)\nonumber \\
&= B_n B_{n+1} ( p(A_n/B_n) - q(A_n/B_n) A_n/B_n) q(A_n/B_n)^{-1}\nonumber \\
&= C' B_n B_{n+1} (A_n - \gamma_1 B_n)^{c_1} \cdots (A_n - \gamma_u B_n)^{c_u} B_n^{-c_1-\cdots -c_u} B_{n+1}^{-1} B_n^{{\rm deg}(p)}.\nonumber
\end{align}
Since ${\rm deg}(p(x)-xq(x))=c_1+\cdots + c_u \le {\rm deg}(p(x))$, there is some $\ell_n\ge 1$ such that
\be \label{eq:10}
B_n A_{n+1} - A_n B_{n+1} = C' B_n^{\ell_n} (A_n - \gamma_1 B_n)^{c_1} \cdots (A_n - \gamma_u B_n)^{c_u}.\ee
\begin{lem} Adopt the notation from Equations (\ref{eq:1})--(\ref{eq:9}).
Then for each $n\ge 0$, the elements $A_n$ and $B_n$ generate the unit ideal in $R$.
\label{lem:unit}
\end{lem}
\begin{proof}
We prove this by induction on $n$. When $n=0$ it is immediate, since $B_0=1$. Now suppose that the result holds whenever $n\le m$ with $m\ge 0$ and consider the case when $n=m+1$.
If $A_{m+1}R+B_{m+1}R\neq R$, there is a maximal ideal $\mathfrak{p}$ that contains both $A_{m+1}$ and $B_{m+1}$.
Then since $C$ is a unit in $R$, we must have $A_m-\alpha_i B_m\in \mathfrak{p}$ for some $i$ and either $B_m\in \mathfrak{p}$ or $A_m-\beta_j B_m\in\mathfrak{p}$ for some $j$.
If $B_m\in \mathfrak{p}$ then we see $A_m = (A_m-\alpha_i B_m)+\alpha_i B_m\in \mathfrak{p}$, which is impossible by our induction hypothesis. Thus we may assume that $B_m\not\in \mathfrak{p}$ and $A_m-\beta_j B_m\in\mathfrak{p}$ for some $j$.
Since $p(x)$ and $q(x)$ are coprime, $\{\alpha_1,\ldots , \alpha_s\}\cap \{\beta_1,\ldots ,\beta_t\}$ is empty, and so if
$A_m-\alpha_i B_m\in\mathfrak{p}$ and $A_m-\beta_j B_m\in\mathfrak{p}$, then $(\alpha_i-\beta_j)B_m\in \mathfrak{p}$, which is again impossible since $\alpha_i-\beta_j$ is a unit in $R$ and $B_m\not\in \mathfrak{p}$. Thus we obtain the desired result.
\end{proof}
We now prove a useful lemma, in which we make use of the $S$-unit theorem (see \cite[Theorem 6.1.3]{EG}). We recall that if $K$ is a field of characteristic zero and $G\le K^*$ is a finitely generated subgroup of the multiplicative group, then the $S$-unit theorem concerns solutions $(X_1,\ldots ,X_n)\in G^n$ to
the equation $$\sum_{i=1}^n \rho_i X_i = 0$$ with $\rho_1,\ldots ,\rho_n$ fixed nonzero elements of $K$. We say that $(X_1,\ldots ,X_n)$ is a \emph{non-degenerate} solution if no proper non-trivial subsum of $\sum \rho_i X_i$ vanishes. Then the $S$-unit theorem says that up to scaling by elements of $G$ there are only finitely many non-degenerate solutions $(X_1,\ldots ,X_n)\in G^n$ to the above equation. We recall that a subset $\mathcal{P}$ of the prime spectrum of a commutative ring is Zariski dense if the intersection of the prime ideals in $\mathcal{P}$ is contained in the nil radical of the ring.
\begin{lem} \label{l2} Adopt the notation from Equations (\ref{eq:1})--(\ref{eq:9}), and suppose that $u\ge 3$ and that $\gamma_1,\gamma_2, \gamma_3\not \in \{\delta_1,\ldots ,\delta_v\}$.
If the sequence $\{A_n/B_n\}$ is not eventually periodic then there is a Zariski dense set of maximal ideals $\mathcal{P}$ of $R$ such that for each $\mathfrak{p}\in \mathcal{P}$ there is some natural number $n$ such that the following hold:
\begin{enumerate}
\item $A_n B_{n+1}-B_n A_{n+1}\in \mathfrak{p}$;
\item $B_n\cdot B_{n+1}\not\in \mathfrak{p}$; and
\item $(p'(A_n/B_n) q(A_n/B_n) - q'(A_n/B_n) p(A_n/B_n))B_n^{{\rm deg}(p(x))+{\rm deg}(q(x))-1}$ is not in $\mathfrak{p}$.
\end{enumerate}
\end{lem}
\begin{proof}
By hypothesis, the sequence $\{A_n/B_n\}$ is not eventually periodic.
Let $\mathcal{P}$ denote the set of maximal ideals $\mathfrak{p}$ for which conditions (1)--(3) hold. If $\mathcal{P}$ is not Zariski dense, then there is some nonzero $f\in R$ such that $f$ is in every prime in $\mathcal{P}$. Then after replacing $R$ by $R[1/f]$, we may assume that $\mathcal{P}$ is empty. Let $U$ denote the group of units of $R$, which is a finitely generated abelian group by Roquette's theorem \cite{Roq}.
By assumption $\gamma_1,\gamma_2, \gamma_3\not \in \{\delta_1,\ldots ,\delta_v\}$.
Suppose that $A_n-\gamma_1 B_n$, $A_n-\gamma_2 B_n$, $A_n-\gamma_3 B_n$ are all in $U$ for every $n$. Then we pick $\rho_1, \rho_2, \rho_3 \in R$, not all zero, such that
$\rho_1+\rho_2+\rho_3=\gamma_1 \rho_1+\gamma_2 \rho_2+\gamma_3 \rho_3=0$.
Since $\gamma_1, \gamma_2, \gamma_3$ are pairwise distinct, we in fact have $\rho_1,\rho_2,\rho_3$ are all nonzero.
Then by construction
$$\rho_1 (A_n-\gamma_1 B_n) + \rho_2 (A_n-\gamma_2 B_n) +\rho_3 (A_n-\gamma_3 B_n) = 0$$ for every $n\ge 0$. Since every solution to
$\rho_1 x_1+\rho_2 x_2+\rho_3 x_3 = 0$ with $x_1x_2x_3\neq 0$ is necessarily non-degenerate, by the $S$-unit theorem there are only finitely many solutions $(x_1,x_2,x_3)$ in $U^3$ up to scaling.
It follows that there must exist $n$ and $m$ with $n<m$ such that
$(A_n-\gamma_1 B_n)/(A_n-\gamma_2 B_n) = (A_m-\gamma_1 B_m)/(A_m-\gamma_2 B_m)$. In other words, $\phi(A_n/B_n)=\phi(A_m/B_m)$, where $\phi(x)=(x-\gamma_1)/(x-\gamma_2)$. Since $\phi$ is an automorphism of $\mathbb{P}^1$, we then conclude that $A_n/B_n=A_m/B_m$ and so the sequence $\{A_i/B_i\}$ is eventually periodic, which is a contradiction.
It follows that there is some $n$ such that $A_n-\gamma_i B_n\not\in U$ for some $i\in \{1,2,3\}$. Moreover, $A_n-\gamma_i B_n\neq 0$, since otherwise we would have $A_{n+1}/B_{n+1}= A_n/B_n$ and the sequence would be eventually periodic. We conclude that there is some maximal ideal $\mathfrak{p}$ of $R$ such that $A_n-\gamma_i B_n\in \mathfrak{p}$. In particular, $B_n A_{n+1}-A_n B_{n+1}\in \mathfrak{p}$.
If $B_n\in \mathfrak{p}$ then $A_n\in \mathfrak{p}$ since $A_n-\gamma_i B_n\in \mathfrak{p}$, and this is impossible by Lemma \ref{lem:unit}.
Now if $B_{n+1}\in \mathfrak{p}$ then since $$B_n A_{n+1}-A_n B_{n+1}\in\mathfrak{p}$$ and $B_n\not\in \mathfrak{p}$, we see that $A_{n+1}\in\mathfrak{p}$, which contradicts the conclusion of Lemma \ref{lem:unit}.
Finally, if $$(p'(A_n/B_n) q(A_n/B_n) - q'(A_n/B_n) p(A_n/B_n))B_n^{{\rm deg}(p(x))+{\rm deg}(q(x))-1}$$ is in $\mathfrak{p}$ then, since $B_n$ is not in $\mathfrak{p}$ and since $C''$ is a unit, by Equation (\ref{eq:6}) we must have
$A_n -\delta_j B_n\in \mathfrak{p}$ for some $j$, and so $(\delta_j-\gamma_i)B_n$ is in $\mathfrak{p}$. But $\gamma_i-\delta_j$ is a unit in $R$ by construction, since $\gamma_i\neq \delta_j$, and $B_n$ is not in $\mathfrak{p}$, so this cannot hold.
The result follows.
\end{proof}
We will now obtain our interpolation of the orbit of $c$ under $h$ by applying a result of Poonen \cite{poonen}. We need a simple lemma, which will give us the hypotheses needed to apply Poonen's result. We recall that if $K$ is a field with a non-archimedean absolute value $|~|$ and $R$ is a subring of $K$ then the \emph{Tate algebra} $R\langle x_1,\ldots ,x_d\rangle$ is the subset of $R[[x_1,\ldots ,x_d]]$ consisting of power series that converge on the unit polydisc of $K^d$.
\begin{lem}\label{lem:o}
Let $\mathfrak{o}$ be a complete discrete valuation ring, suppose that there is a prime $\pi\in \mathbb{Z}$ with $\pi\ge 5$ such that $|\pi|<1$ in $\mathfrak{o}$, and suppose that $h(x)=p(x)/q(x)$ with $p(x),q(x)\in \mathfrak{o}[x]$. Suppose further that $c\in \mathfrak{o}$ is such that:
\begin{enumerate}
\item $|q(c)|=1$;
\item $h(c)\equiv c~(\bmod \pi^2\mathfrak{o})$; and
\item $h'(c)\equiv 1~(\bmod ~\pi\mathfrak{o})$.
\end{enumerate}
Then the map
$f(x):=\pi^{-1}(h(c+\pi x)-c)$ is in $\mathfrak{o}\langle x\rangle$ and satisfies $f(x)\equiv x~(\bmod ~\pi\mathfrak{o})$ for all $x\in \mathfrak{o}$.
\end{lem}
\begin{proof} We have $q(c+\pi x) \equiv q(c)~(\bmod~\pi)$ and so $|q(c+\pi x)|=1$ for all $x\in \mathfrak{o}$. It follows that $h^{(r)}(c+\pi x)\in \mathfrak{o}$ for all $r\ge 0$ and all $x\in \mathfrak{o}$.
Then
\begin{align*}
f(x) &= \pi^{-1}\left( h(c+\pi x)-c\right)\\
&= \pi^{-1}\left( \sum_{r\ge 0} h^{(r)}(c) \pi^{r} x^r/r! - c\right)\\
&= (h(c)-c)\pi^{-1} + h'(c)x + \sum_{r\ge 2} h^{(r)}(c) \frac{\pi^{r-1}}{r!} x^r.
\end{align*}
Since $\pi\neq 2,3$ and since $|r!|_{\pi} > \pi^{-r/(\pi-1)}$, we see that $\pi^{r-1}/r! \in \pi \mathfrak{o}$ for all $r\ge 2$, and that $|h^{(r)}(c) \frac{\pi^{r-1}}{r!} |\to 0$ as $r\to\infty$ and hence $f(x)\in \mathfrak{o}\langle x\rangle$. Next, for $x\in \mathfrak{o}$, $$f(x)- x = (h(c)-c)\pi^{-1} + (h'(c)-1)x+ \sum_{r\ge 2} h^{(r)}(c) \frac{\pi^{r-1}}{r!} x^r \equiv 0~(\bmod~\pi\mathfrak{o}),$$ by our assumptions and the remarks above. The result follows.
\end{proof}
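The hypotheses of Lemma \ref{lem:o} are easy to realize and test numerically. In the following sketch (our own illustrative data, not from the paper) we take $\pi=5$, $h(x)=x+25+25x^2$ (so $q=1$) and $c=0$, for which $h(c)-c=25\equiv 0\ (\bmod\ \pi^2)$ and $h'(c)=1$; the rescaled map is then $f(x)=x+5+125x^2$:

```python
from fractions import Fraction

# Illustrative data (not from the paper) satisfying the hypotheses of the
# lemma with pi = 5:  h(x) = x + 25 + 25x^2, so q(x) = 1, and c = 0 gives
# |q(c)| = 1,  h(c) - c = 25,  h'(c) = 1.
h = lambda x: x + 25 + 25 * x * x
c, pi = 0, 5

# The rescaled map of the lemma: f(x) = pi^{-1}(h(c + pi x) - c) = x + 5 + 125x^2.
f = lambda x: Fraction(h(c + pi * x) - c, pi)

for x in range(-10, 11):
    assert (f(x) - x) % pi == 0    # f(x) is congruent to x mod pi on the disc
print(f(0), f(1), f(2))            # prints: 5 131 507
```

Since $f(x)-x=5+125x^2$, the congruence $f(x)\equiv x\ (\bmod\ \pi\mathfrak{o})$ is visible directly from the coefficients, exactly as in the proof above.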
\begin{prop}\label{prop:main}
Adopt the notation of Equations (\ref{eq:1})--(\ref{eq:10}). Then there is a Zariski dense set of maximal ideals $\mathcal{P}$ of $R$ with the following properties:
\begin{enumerate}
\item[(i)] each $\mathfrak{p}\in \mathcal{P}$ induces a non-archimedean absolute value $|~|_{\mathfrak{p}}$ on the field of fractions, $K_{\mathfrak{p}}$, of the completion of the local ring $R_{\mathfrak{p}}$;
\item[(ii)] for each $\mathfrak{p}\in \mathcal{P}$, there is a natural number $a$ and elements $$g_0(z),\ldots ,g_{a-1}(z)\in \mathfrak{o}\langle z\rangle$$ such that for each $i\in \{0,\ldots ,a-1\}$, $h^{an+i}(c)=g_i(n)$ for all $n$ sufficiently large, where $\mathfrak{o}$ is the valuation subring of $K_{\mathfrak{p}}$ consisting of elements $r$ with $|r|_{\mathfrak{p}}\le 1$.
\end{enumerate}
\end{prop}
\begin{proof}
By Lemma \ref{l2} there is a Zariski dense set $\mathcal{P}$ of maximal ideals $\mathfrak{p}$ such that items (1)--(3) from the statement of Lemma \ref{l2} hold.
Since the regular locus of ${\rm Spec}(R)$ is a dense open set, we can replace $\mathcal{P}$ by a dense subset with the additional property that $R_{\mathfrak{p}}$ is a regular local noetherian ring for $\mathfrak{p}\in \mathcal{P}$.
Since $R_{\mathfrak{p}}$ is regular, its associated graded ring $$\bigoplus_{n\ge 0} \mathfrak{p}^nR_{\mathfrak{p}}/\mathfrak{p}^{n+1}R_{\mathfrak{p}}$$ is a polynomial ring over the residue field $\mathbb{F}:=R_{\mathfrak{p}}/\mathfrak{p}R_{\mathfrak{p}}$; moreover, $\mathbb{F}$ is a finite field by the Nullstellensatz \cite[Theorem 4.19]{Eis}.
Since the associated graded ring of the local ring $R_{\mathfrak{p}}$ is an integral domain, we have an absolute value $|~|=|~|_{\mathfrak{p}}$ on $R_{\mathfrak{p}}$ given by $|0|=0$ and for $a$ nonzero, $|a|=|\mathbb{F}|^{-\nu(a)}$, where $\nu(a)$ is the largest nonnegative integer $r$ such that $a\in \mathfrak{p}^r R_{\mathfrak{p}}$. Such an $r$ necessarily exists by the Krull intersection theorem.
Then this absolute value extends to an absolute value on $K_{\mathfrak{p}}$, where $K_{\mathfrak{p}}$ is the field of fractions of the completion of $R_{\mathfrak{p}}$. We now let $\mathfrak{o}$ denote the valuation subring of $K_{\mathfrak{p}}$ consisting of elements of absolute value at most $1$. Then $\mathfrak{o}$ is a discrete valuation ring and there is a unique prime $\pi \in \mathbb{Z}$ such that $|\pi | < 1$, since $\mathbb{F}$ is finite.
Let $c_i = A_i/B_i\in R_{\mathfrak{p}}$ for $i\ge 0$, and let $m$ be a natural number, furnished by Lemma \ref{l2}, such that $$h(c_m)\equiv
c_m~(\bmod~\mathfrak{p}R_{\mathfrak{p}})\qquad {\rm and}\qquad h'(c_m)\not\equiv 0~(\bmod~\mathfrak{p}
R_{\mathfrak{p}}).$$
Moreover, by our choice of $\mathfrak{p}$ we have that $|B_n|=1$ for
all $n\ge m$ and so $c_n\equiv c_m~(\bmod ~ \mathfrak{p}R_{\mathfrak{p}})$ for $n\ge
m$.
It follows that there exist $m',m''\ge m$ with $m'>m''$ such that $c_{m'}\equiv c_{m''}
~(\bmod ~\pi^2\mathfrak{o})$ and $(h^{m'-m''})'(c_{m''})\equiv 1~(\bmod ~\pi \mathfrak{o})$.
Thus we see that after replacing $h$ by $h^{m'-m''}$, we may assume that the conditions in Lemma \ref{lem:o} are satisfied and so the map $$f(x) := \pi^{-1}(h(c_{m''}+\pi x)-c_{m''})$$
satisfies $f(x)\equiv x~(\bmod~\pi \mathfrak{o})$. Hence by a result of Poonen \cite{poonen} there is a map $g(x,n)\in \mathfrak{o}\langle x,n\rangle$ such that
$f^n(x)=g(x,n)$. In particular, $h^n(c_{m''}) = \pi f^n(0)+c_{m''} = \pi g(0,n) + c_{m''}$, and so we have obtained an interpolation of the orbit of $c_{m''}$ under $h$. Since we did this at the expense of replacing $h$ by an iterate and replacing our starting point $c_0$ with a different point in the orbit, we see that this gives the desired result, as explained in the remarks following the statement of Theorem \ref{thm:main}.
\end{proof}
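The congruence mechanism underlying this interpolation can be seen in a small computation (an illustration of our own, not taken from the paper). For a map with $f(x)\equiv x~(\bmod~\pi\mathfrak{o})$, such as $f(x)=x+5+125x^2$ produced as in Lemma \ref{lem:o}, shifting $n$ by $5^k$ moves $f^n(0)$ by a multiple of $5^{k+1}$; this is exactly the $5$-adic continuity in $n$ that Poonen's theorem upgrades to analyticity:

```python
# Working modulo 5^10 keeps the integers small while preserving congruences
# modulo the smaller powers of 5 that we test below.
M = 5 ** 10
f = lambda x: (x + 5 + 125 * x * x) % M   # f(x) is congruent to x mod 5

orbit = [0]
for _ in range(60):
    orbit.append(f(orbit[-1]))

# 5-adic continuity of n |-> f^n(0): shifting n by 5^k moves the value
# by a multiple of 5^(k+1), consistent with an analytic interpolation in n.
assert all((orbit[n + 5] - orbit[n]) % 5 ** 2 == 0 for n in range(30))
assert all((orbit[n + 25] - orbit[n]) % 5 ** 3 == 0 for n in range(30))
print("congruences verified")
```

The point of the check: writing $f(x)=x+5u(x)$ with $u(x)=1+25x^2$, the telescoping sum $f^{5^k}(x)-x=\sum_j 5u(x_j)$ is divisible by $5^{k+1}$, so $n\mapsto f^n(0)$ is uniformly continuous for the $5$-adic metric.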
\section{Proof of Theorem \ref{thm:main}}\label{thm}
We now use the results of the preceding section to prove our main interpolation result.
\begin{proof}[Proof of Theorem \ref{thm:main}]
If $h$ is of degree one, then $h$ is \'etale and the result follows from \cite{BGT10}. Similarly, if the orbit of $c$ under iteration of $h$ is preperiodic then the result holds trivially. Thus we may assume that the orbit is infinite and that $h$ has degree at least two. By a result of Fatou \cite{fat1,fat2,jul}, there are at most $2{\rm deg}(h)-2$ attracting periodic cycles, and since the set of periodic points of $h$ is infinite, after replacing $h$ by an iterate, we may assume that $h$ has at least four non-superattracting fixed points; by enlarging $K$ and conjugating $h$ by a fractional linear transformation, we may further assume that $\infty$ is fixed by $h$. (We note that if we can interpolate the orbit of a point $\phi(c)$ under a conjugate $\phi\circ h\circ\phi^{-1}$ of $h$, then we can interpolate the orbit of $c$ under $h$, by applying $\phi^{-1}$ to our interpolating power series.) Then by Proposition \ref{prop:main} we obtain the desired result in this case.
\end{proof}
We now make several remarks that are potentially useful for applications of Theorem \ref{thm:main}.
\begin{rmk}
We observe that an analogous conclusion to that of Theorem \ref{thm:main} can be obtained for self-maps of curves. The reason for this is that after replacing the map by a suitable iterate, it suffices to consider the case of a geometrically irreducible curve. We can also pass to the normalization and assume that our curve is smooth, and Theorem \ref{thm:main} then handles the genus $0$ case. The genus one case follows from \cite{BGT10} and for curves of genus $\ge 2$, every endomorphism is an automorphism by the Riemann-Hurwitz formula and has finite order \cite[Ex. IV 2.5, IV 5.2, V 1.11]{Hartshorne}, and so the result holds trivially in this last case.
\end{rmk}
\begin{rmk} In some cases it is useful to keep track of additional geometric data and thus to enlarge the ring $R$ in Equation (\ref{eq:7}). We note that a finite set of additional generators from the ambient field $L$ can be added to the ring $R$ without affecting the arguments.
\label{rmk32}
\end{rmk}
\begin{rmk}\label{rmk33}
We in fact show something strictly stronger than merely having an infinite set of pairwise distinct completions of $K$ in Theorem \ref{thm:main}. The proof shows that there is a finitely generated subring $R$ of $K$ whose field of fractions is $K$, which is contained in the valuation ring of each of our absolute values $|~|$, and such that the only element $r\in R$ with $|r|<1$ for every absolute value we construct is $r=0$.
\end{rmk}
\section{An instance of the dynamical Mordell-Lang Conjecture}\label{DML}
In this section, we apply Theorem \ref{thm:main} to obtain an instance of the dynamical Mordell-Lang conjecture for split endomorphisms of a certain form. We make use of the results of \cite{BGT10}, which is a precursor to the work of Poonen, and uses results about embedding finitely generated rings into $p$-adic rings rather than completions. Nevertheless it is straightforward to translate these results to the framework we work with.
\begin{proof}[Proof of Corollary \ref{cor:main}]
We write $c=(c',c'')\in \mathbb{P}^1\times X$.
As in the proof of Theorem \ref{thm:main}, if $h$ has degree one, then $h$ is \'etale and we can infer the result directly from \cite{BGT10}. Thus we may assume that $h$ has degree at least $2$ and by replacing $\Phi$ (and hence $h$) by an iterate and possibly conjugating $h$ by an automorphism of $\mathbb{P}^1$ (i.e., making a change of variables), we may assume that $h$ satisfies the hypotheses from \S\ref{inter} and in particular we now adopt the notation from Equations (\ref{eq:1})--(\ref{eq:10}). As the remarks following the statement of Theorem \ref{thm:main} show, we can replace $\Phi$ by a suitable iterate and still obtain the desired result for the original map $\Phi$.
It is sufficient to consider the case when $X$ is smooth and geometrically irreducible by the argument given in \cite[Theorem 1.3]{BGT10}. Thus we assume that $X$ is an irreducible, smooth quasiprojective variety. Let $\rho: X \longrightarrow \mathbb{P}^M(\mathbb{C})$ be an embedding of $X$ into projective space. Then \cite[Theorem 4.1]{BGT10} shows there is a finitely generated $\mathbb{Z}$-subalgebra $S$ of $\mathbb{C}$ for which we obtain a model $\mathcal{X}\subseteq \mathbb{P}^M_{{\rm Spec}(S)}$ of $X$ over ${\rm Spec}(S)$. By adding a finite set of additional elements to $S$, we may assume that $S$ is a finitely generated $R$-algebra, where $R$ is as in Equation (\ref{eq:7}).
Then \cite[Proposition 4.3]{BGT10} shows that there is a dense open subset $U$ of ${\rm Spec}(S)$ and a scheme $\mathcal{X}_U$, smooth and quasiprojective over $U$ with generic fibre $X$, such that the endomorphism $g$ of $X$ extends to an unramified endomorphism $g_U$ of $\mathcal{X}_U$. Since $U$ is a dense open subset of ${\rm Spec}(S)$, by Proposition \ref{prop:main} combined with Remark \ref{rmk32} there is some maximal ideal $\mathfrak{p}$ lying in $U$ for which (i) and (ii) from Proposition \ref{prop:main} hold.
In particular, if we let $K_{\mathfrak{p}}$ be the field of fractions of the completion of the local ring $S_{\mathfrak{p}}$ and let $\mathbb{F}$ denote the residue field $S_{\mathfrak{p}}/\mathfrak{p}S_{\mathfrak{p}}$, then there is a natural number $a$ and elements $$h_0(z),\ldots ,h_{a-1}(z)\in \mathfrak{o}\langle z\rangle$$ such that for each $i\in \{0,\ldots ,a-1\}$, $h^{an+i}(c')=h_i(n)$ for all $n$ sufficiently large, where $\mathfrak{o}$ is the valuation subring of $K_{\mathfrak{p}}$.
Since $K_{\mathfrak{p}}$ is the field of fractions of the completion of a finitely generated $\mathbb{Z}$-algebra with respect to a maximal ideal $\mathfrak{p}$, there is a unique prime $\ell$ such that $K_{\mathfrak{p}}$ is isomorphic to a topological subfield of $\mathbb{C}_{\ell}$.
Then the arguments of \cite{BGT10}\footnote{While the arguments are done over $\mathbb{Z}_p$ they work with any complete rank one discrete valuation ring of mixed characteristic with finite residue field.} show that if we regard $X(\mathbb{C}_{\ell})$ as a $d$-dimensional $\mathbb{C}_{\ell}$-manifold, then after replacing $c''$ by $g^m(c'')$ for some $m$, there is some integer $b\ge 1$ and analytic open neighbourhood $V$ of $g^m(c'')$ inside $X(K_{\mathfrak{p}})$ that is invariant under $g^b$ and an analytic bijection $\iota: V\to {\mathfrak{o}}^d$ such that
$\iota \circ g^{bn}(g^m(c'')) = (g_1(n),\ldots ,g_d(n))$, where $g_1(z),\ldots ,g_d(z)\in\mathfrak{o}\langle z\rangle$.
In particular, the arguments of \cite[Theorem 4.1]{BGT10} show that if we take $e$ to be the least common multiple of $a$ and $b$ then, for each $i\in \{0,\ldots ,e-1\}$, the set of sufficiently large positive integers in $\{n\colon \Phi^{en+i}(c)\in Y\}$ can be realized as the set of common positive integer zeros of a finite set of maps in $\mathfrak{o}\langle z\rangle$. In particular, by Strassman's theorem we get that $$\{n\colon \Phi^{en+i}(c)\in Y\}$$ either contains all sufficiently large natural numbers $n$ or is a finite set. The result follows.
\end{proof}
Occasionally one can use data from canonical heights or other methods to prove the dynamical Mordell-Lang conjecture for certain classes of endomorphisms. Such methods do not seem applicable to the general case we consider. As an example, consider $\mathbb{P}^1\times E$, where $E$ is an elliptic curve, and let $\Phi=(g,[2])$, where $[2]$ is multiplication by $2$ and $g$ is a rational map of degree four. Then the heights of $g^n(a)$ and $[2^n]\cdot b$, for a point $(a,b)\in \mathbb{P}^1\times E$ with $a$ not preperiodic under $g$ and $b$ not a torsion point of $E$, are both asymptotic to a nonzero constant times $4^n$ as $n\to\infty$, and so there are non-periodic curves $C\subseteq \mathbb{P}^1\times E$ for which one cannot naively use information from heights to rule out the orbit having infinite intersection with $C$.
\section{Concluding remarks} \label{conc}
We have shown that one can interpolate self-maps of curves, and it is natural to ask whether one can obtain similar interpolation results for endomorphisms of higher dimensional varieties. Unfortunately, this fails even for surfaces. As a simple example, consider the map $f:\mathbb{A}^2\to \mathbb{A}^2$ given by $f(x,y)=(x+1,y(x+1))$. Then
$f^n(0,1) = (n,n!)$ and so for each prime $p$ we have $|n!|_p \to 0$ and so there is no way to interpolate the orbit of $(0,1)$ under $f$.
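The obstruction is quantitative: by Legendre's formula $v_p(n!)=\sum_{i\ge 1}\lfloor n/p^i\rfloor\to\infty$, so $|n!|_p\to 0$ for every prime $p$, and no $p$-adic analytic function of $n$ can match the second coordinate. A quick check (our own illustration, not from the paper):

```python
import math

def vp_factorial(n, p):
    """v_p(n!) by Legendre's formula: sum of floor(n / p^i) over i >= 1."""
    v, q = 0, p
    while q <= n:
        v += n // q
        q *= p
    return v

def vp(m, p):
    """p-adic valuation of a nonzero integer, by direct division."""
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

# Cross-check Legendre's formula against direct factorization of n!.
for n in (5, 25, 100):
    assert vp_factorial(n, 5) == vp(math.factorial(n), 5)
print([vp_factorial(n, 5) for n in (5, 25, 100)])   # prints: [1, 6, 24]
```

The valuations grow without bound, confirming that the orbit of $(0,1)$ under $f$ leaves every $p$-adic disc.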
A more reasonable goal is to consider simultaneous interpolation for several rational maps $h_1,\ldots, h_m\in \mathbb{C}(x)$ to obtain the case of split rational maps from $\left(\mathbb{P}^1\right)^m$ to itself. In this case, we do not know whether this can be done even when $m=2$. Heuristics (see \cite[\S5]{BGHKST}) suggest, however, that one should be able to obtain the split case for rational maps under general conditions \cite[Conjecture 8.4.0.19]{BGT16}.
\section*{Acknowledgments}
We thank Dragos Ghioca and Tom Tucker for many helpful comments and suggestions. In particular, we are grateful to Dragos Ghioca for mentioning the example following the proof of Corollary \ref{cor:main}.
\section{Introduction: Deformations and Singleton Physics}
Physical theories have their domain of applicability, depending mainly on
the velocities and distances concerned. But the passage from one domain
(of velocities and distances) to another does not occur in an
uncontrolled way. Rather, a new fundamental constant enters the modified
formalism and the attached structures (symmetries, observables, states,
etc.) {\it deform} \cite{Fl82,Fl98} the initial structure; namely, we have
a new structure which in the limit when the new parameter goes to zero
coincides with the old formalism. In other words, to {\it detect} new
formalisms we have to study deformations of the algebraic structures
attached to a given formalism.
The only question is in which category we perform this search for
deformations. Usually physics is rather conservative and if we start e.g.
with the category of associative or Lie algebras, we tend to deform in
this category. This is the case of traditional quantization
\cite{BFFLS78,St98} (deforming classical mechanics to quantum mechanics by
introducing a new parameter $\hbar$, keeping the same algebra of observables
but deforming their composition law). The same is true of the passage from
Galilean physics to special relativity (new parameter $c^{-1}$, where $c$ is
the speed of light) and thence to physics in De Sitter space-time
(the new parameter being the curvature). It is this last aspect which we
shall present here.
In this paper we touch recent developments in field theories
based on supergravity, conformal field theories, compactification of
higher dimensional field theories, string theory, M-theory, $p$-branes,
etc. for which people rediscovered the efficiency and advantages of
anti De Sitter theories (which are stable deformations of Poincar\'e field
theories in the category of Lie groups; see however \cite{FHT93} for
quantum groups, at roots of unity in that case). There are many reasons
for the advantages of anti De Sitter (often abbreviated as AdS) theories
among which we can mention that AdS field theory admits an invariant
natural infrared regularization of the theories in question and that the
kinematical spectra (angular momentum and energy) are naturally discrete.
But in addition AdS theories have a great bonus: the existence of
{\it singleton} representations discovered by Dirac \cite{Di63} for
${\rm{SO}}(2,3)$, corresponding to a ``square root" of AdS massless
representations. We discovered that fact around 20 years ago \cite{FF78,FF80}
and developed rather extensively its physical consequences in the following
years \cite{AF78,AFFS81,BFFH85,BFFS82,BFH83,FaF80,FF81,FF84,FF86,FF86f,%
FF87,FF88,FF89,FF91,FF98,FFG86,FFS88,FHT93,Fr79,Fr82,Fr88,FH87,HFF92}.
Singleton theories are topological in the sense that the corresponding
singleton field theories live naturally on the boundary at infinity of the
De Sitter bulk (boundary which has one dimension less than the bulk).
They are new types of gauge theories which in addition permit one to consider
massless particles, e.g. the photon, as {\it dynamically} composite
AdS particles \cite{FF88,FF98,FFS88}. Some of the beautiful properties of
singleton theories can be extended to higher dimensions, and this is the
main point of the recent huge interest in these AdS theories, which touched
a large variety of aspects of AdS physics. More explicitly, in several of
the recent articles among which we can mention
\cite{Ma97,Wi98,FFr98,FFZ98,FZ98,FKPZ,FMMR,SS98},
the new picture permits the study of duality between CFT on the boundary at
infinity and the corresponding AdS theory in the bulk. That duality, which
has also interesting dynamical aspects in it, utilizes among other things
the great notational simplifications permitted by singleton physics.
\section{Kinematics: one massless particle $=$ two Dirac singletons}
In order to give a flavor of the basic features of the theory of singletons
in the (2+3) anti-De~Sitter space-time AdS$_4$ and their relation to
massless particles, we shall in this section and in the following
indicate some of these features, referring to the quoted literature for
a more detailed presentation. The theory can be extended to other dimensions
(higher or lower). In AdS$_3$ one gets essentially the same features
\cite{HFF92}; the main difference being (as is well known) that the ($2+2$)
De Sitter group ${\rm{SO}}(2,2)$ is not the full (infinite-dimensional)
conformal group of $1+1$ space-time (one has then to study Witt and
Virasoro algebras; cf. e.g. \cite{BH86,Mi99,It98}). In space-times of
dimension $\geq 5$, some care is needed \cite{AL98,La98} as to the definition
of masslessness and of singletons (the very nice properties of dimension~4
are not all preserved, a fact sometimes overlooked in the recent
literature); we shall not enter here into this discussion.
The maximal compact subalgebra of $\mathfrak{so}(2,3)$ is
$\mathfrak{so}(2)\oplus \mathfrak{so}(3)$. We then have minimal weight
(positive energy, which is one of the reasons for choosing AdS) unitary
irreducible representations (UIRs) of a corresponding Lie group. In the
following we consider mainly the twofold covering $SO_{(2,3)}$ of the connected
component of the identity of ${\rm{SO}}(2,3)$, and denote by $D(E_0,s)$
these minimal weight
representations. Here $E_0$ is the minimal ${\rm{SO}}(2)$ eigenvalue and the
half-integer $s$ is the spin. These irreducible representations are
unitary (belonging to the discrete series above the limit of unitarity)
provided $E_0\geq s+1$ for $s\geq 1$ and
$E_0\geq s+\frac{1}{2}$ for $s=0$ and $s=\frac{1}{2}$.
At the limit of unitarity (i.e. when $2E_0$, which is an integer
for $SO_{(2,3)}$ but can take any value for the universal covering,
tends to the limit from above), the Harish Chandra module $D(E_0,s)$
becomes indecomposable and the physical UIR appears as a quotient,
a hallmark of gauge theories.
For instance, for $s\geq 1$, we get in the limit an indecomposable
representation denoted here by $ID(s)$ or more explicitly by
$D(s+1,s)\rightarrow D(s+2,s-1)$, a shorthand notation \cite{FFS88}
for what mathematicians would write as a short exact sequence of
modules $0\rightarrow D(s+1,s)\rightarrow ID(s)\rightarrow
D(s+2,s-1)\rightarrow 0$.
Now in gauge theories one needs extensions involving more than two
UIRs. A typical situation is the case of flat space electromagnetism
where one has the classical Gupta-Bleuler triplet which, in our
shorthand notations, can be written $Sc\rightarrow Ph \rightarrow Ga$.
Here $Sc$ (scalar modes) and $Ga$ (gauge modes) are massless
zero-helicity UIRs $h(0,0)$ of the Poincar\'e (inhomogeneous Lorentz)
group $\mathcal{P}_{1+3}= SO_{(1,3)}\cdot\mathbb{R}^4$ while $Ph$ is the
module of physical modes, transforming under $h(0,1)\oplus h(0,-1)$,
where $h(0,s)$ is the UIR of $\mathcal{P}_{1+3}$ with mass 0 and
helicity $s\in \mathbb{Z}$. The scalar modes can be suppressed by a
gauge fixing condition (e.g. the Lorentz condition) but then one is left
with a nontrivial extension $Ph \rightarrow Ga$ on the vector space
$Ph\dot{+} Ga$ which has no invariant nondegenerate metric and cannot
be quantized covariantly. However the above Gupta-Bleuler triplet,
a nontrivial successive extension $Sc\rightarrow (Ph\rightarrow Ga)$,
is an indecomposable representation on a space which admits an invariant
nondegenerate (but indefinite) Hermitian form and it must be used in order
to obtain a covariant quantization of this gauge theory. We shall meet
here a similar situation, which in fact cannot be avoided. Indeed a
general result \cite{Ar85} says in particular that if an extension
$U^1\rightarrow U^2$, with $U^2$ a UIR, has a nondegenerate Hermitian
form, then $U^1$ is equivalent to an extension $U^3\rightarrow U^2$ for
some representation $U^3$ and the original extension is in fact a triplet
$U^2\rightarrow U^3\rightarrow U^2$.
The {\it massless representations} of ${\rm{SO}}(2,3)$
are defined (for $s\geq \frac{1}{2}$) as $D(s+1,s)$ and (for helicity
zero) $D(1,0)\oplus D(2,0)$. There are many justifications to this
definition, among which we can mention \cite{AFFS81}:
\begin{itemize}
\item[a)] The representations $D(s+1,s)$ contract smoothly, in a precise
mathematical sense, to either one of the two massless representations
$h(0,\pm s)$ of $\mathcal{P}_{1+3}$.
Each of the latter has an operationally unique extension to a UIR of
$SO_{(2,4)}$ (a 4-fold covering of the conformal group), the restriction
of which to the $SO_{(2,3)}$ subgroup is exactly the representation we
started with. Moreover each $D(s+1,s)$ can be extended (also uniquely
once the sign is fixed) to either of the two UIRs of the conformal
group which have $h(0,\pm s)$ for restriction to the Poincar\'e group
$\mathcal{P}_{1+3}$. The same properties
are true (for helicity zero) of $D(1,0)\oplus D(2,0)$.
\item[b)] The representations $D(E_0,s)$ can be realized as field
theories on AdS$_4$ but, at the limit of unitarity $D(s+1,s)$ with
$s\geq 1$, they are accompanied by extensions. As a consequence we get
a gauge theory, quantizable only by use of an indefinite metric and
a Gupta-Bleuler triplet.
For $D(s+1,s)$ with $s\geq 0$, the physical signals propagate on
the AdS$_4$ light cone \cite{FFG86}.
\end{itemize}
For $s=0$ and $s=\frac{1}{2}$, the above mentioned gauge theory
appears not at the level of the massless representations
$D(1,0)\oplus D(2,0)$ and $D(\frac{3}{2},\frac{1}{2})$ but at the
limit of unitarity, the singletons Rac$=D(\frac{1}{2},0)$ and
Di$=D(1,\frac{1}{2})$. These UIRs remain irreducible on the Lorentz
subgroup ${\rm{SO}}(1,3)$ and on the (1+2) dimensional Poincar\'e group
$\mathcal{P}_{1+2}$, of which $SO_{(2,3)}$ is the conformal group.
On $\mathcal{P}_{1+2}$ they give \cite{Bi82} the only
massless (discrete helicity) representations.
The singleton representations have a fundamental property:
\begin{equation}\label{di+rac}
({\mathrm{Di}}\oplus{\mathrm{Rac}})\otimes({\mathrm{Di}}
\oplus{\mathrm{Rac}})=(D(1,0)\oplus D(2,0))\oplus
2 \bigoplus_{s=\frac{1}{2}}^\infty D(s+1,s).
\end{equation}
Note that all the representations that appear in the decomposition are
massless representations.
Thus, in contradistinction with flat space, in AdS$_4$, massless states
are ``composed'' of two singletons. The flat space limit of a singleton
is a vacuum (a representation of $\mathcal{P}_{1+3}$ which is trivial on
the translations) and, even in AdS$_4$, the singletons are very poor in
states: their $(E,j)$ diagram has only a single trajectory (hence their
name). The $(E,j)$ spectra of the massless and singleton representations
are:
\smallskip
$D(s+1,s)$, $s>0$: $E-j=1,2,\ldots$; $j-s=0,1,\ldots$\ .
$D(1,0)$: $E-j=1,3,\ldots$; $j=0,1,\ldots$\ ; \
$D(2,0)$: $E-j=2,4,\ldots$; $j=0,1,\ldots$\ .
Rac$=D(\frac{1}{2},0)$: $E-j=\frac{1}{2}$; $j=0,1,2,\ldots$\ ;\quad
Di$=D(1,\frac{1}{2})$: $E-j=\frac{1}{2}$; $j=\frac{1}{2},
\frac{3}{2}, \ldots$\ .
\smallskip
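As a quick consistency check on the counting behind (\ref{di+rac}), one can compare $(E,j)$ multiplicities level by level. The sketch below verifies numerically, up to a truncation energy (\texttt{E\_MAX}, our purely illustrative choice), the Rac part of the decomposition, ${\mathrm{Rac}}\otimes{\mathrm{Rac}} = D(1,0)\oplus\bigoplus_{s=1}^{\infty}D(s+1,s)$ \cite{FF78}, assuming the multiplicity-free $(E,j)$ spectra listed above:

```python
from collections import Counter

E_MAX = 12  # truncation energy, in units of the AdS_4 curvature (our choice)

# Left side: (E, j) content of Rac (x) Rac.  Rac states sit on the single
# trajectory (E, j) = (j + 1/2, j), j = 0, 1, 2, ...; energies add and the
# SO(3) spins couple by the usual Clebsch-Gordan rule.
lhs = Counter()
for j1 in range(E_MAX):
    for j2 in range(E_MAX):
        E = j1 + j2 + 1  # (j1 + 1/2) + (j2 + 1/2)
        if E > E_MAX:
            continue
        for j in range(abs(j1 - j2), j1 + j2 + 1):
            lhs[(E, j)] += 1

# Right side: D(1,0) + sum over s >= 1 of D(s+1, s), each multiplicity-free
# with the (E, j) spectra listed in the text.
rhs = Counter()
for E in range(1, E_MAX + 1):
    for j in range(0, E):          # E - j >= 1 in all massless modules
        if (E - j) % 2 == 1:
            rhs[(E, j)] += 1       # D(1,0): E - j = 1, 3, 5, ...
        rhs[(E, j)] += j           # one state from each D(s+1, s), 1 <= s <= j

assert all(lhs[k] == rhs[k] for k in set(lhs) | set(rhs))
```

The same bookkeeping, with the constituent trajectory shifted to the Di spectrum, checks the Di$\,\otimes\,$Di and Di$\,\otimes\,$Rac parts of (\ref{di+rac}).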
\noindent [In AdS$_3$, where $\mathfrak{so}(2,3)$ is the conformal
Lie algebra and the anti-De~Sitter Lie algebra is
$\mathfrak{so}(2,2)\equiv\mathfrak{so}(1,2)\oplus\mathfrak{so}(1,2)$,
the ``physical'' massless representations are Di and Rac and the
analogue of singletons are the metaplectic representations
$D(\frac{1}{4})$ and $D(\frac{3}{4})$ of $\mathfrak{so}(1,2)$.
The sum of the two latter is the harmonic oscillator representation
and its tensor square is Di$\oplus$Rac, so we have an exact analogue
of (\ref{di+rac}) \cite{FF80}.
There is however a potentially important difference \cite{HFF92}:
the AdS$_3$ algebra $\mathfrak{so}(2,2)$ is no more the whole
conformal algebra of the $1+1$ space time, since that algebra is
well-known to be infinite dimensional. We shall not elaborate on
this point here.]
In normal units a singleton with angular momentum $j$ has energy
$E=(j+\frac{1}{2})\rho$, where $\rho$ is the curvature of the
AdS$_4$ universe. This means that only a laboratory of cosmic
dimensions can detect a $j$ large enough for $E$ to be measurable
since the cosmological constant (of the order of $\rho$) is very
small. At the flat space limit, the singletons become
vacua (representations of $\mathcal{P}_{1+3}$ with vanishing
energy and momentum) so that they carry no energy at all.
Furthermore local observation of a free singleton field is prevented by
gauge invariance (we shall come back briefly to this point below).
We thus have what can be called ``kinematical confinement'' of singletons
\cite{FF80}, which suggests that they can be a viable alternative
for quarks as fundamental constituents of matter. Elementary particles
would then be composed of two, three or more singletons and/or
anti-singletons (the latter being associated with the contragredient
representations). As with quarks, several (three so far) flavors of
singletons (and anti-singletons) should eventually be introduced to
account for all elementary particles.
In order to pursue this point further we need to develop a field
theory of singletons and of particles composed of singletons.
\section{Field Theory}
In this section we shall give a very short overview of the many
developments already achieved with singleton field theory and
interactions of singletons.
A first attempt to quantize the singleton field was based on the De Sitter
covariant Klein-Gordon equation $(\square -\frac{5}{4}\rho)\phi=0$,
where $\rho = 3\Lambda$ and $\Lambda$ is the cosmological constant.
An appropriate choice of boundary conditions,
$\lim r^{\frac{1}{2}}\phi <\infty$ as $r\rightarrow\infty$,
leads to a space of solutions that
carries the singleton representation $D(\frac{1}{2},0)$ but not as an
invariant subspace. Instead, $D(\frac{1}{2},0)$ is induced on a
quotient space of solutions, where the ignorable invariant
subspace consists of the solutions that satisfy
$r^{\frac{1}{2}}\phi\rightarrow 0$ as $r\rightarrow\infty$. This is a
difficulty even in the context of
classical field theory, for it means that there is no invariant propagator
that includes the contributions from the
singleton modes. An invariant propagator does exist, but an examination of
its asymptotic properties reveals that all its
Fourier modes fall off as $1/r^{5\over 2}$ at infinity; these modes
constitute an invariant subspace on which the space time symmetry group
acts by $D(\frac{5}{2},0)$.
It is very significant that this representation on the ignorable ``gauge''
modes is of the ordinary massive type, while the singleton representation
is highly degenerate. The energy levels of the former are degenerate
and the spectrum of angular momentum is limited from above by the energy
(in units of $\rho$). The energy levels of $D(\frac {1}{2},0)$ are
much more degenerate: $l=E-\frac{1}{2}$. This suggests that the physical
singleton modes are swamped by the gauge
modes and that any interaction designed to detect singletons will fail to
be gauge invariant and hence non-unitary.
In the idiom of representation theory, the space of solutions of the equation
$(\square -\frac{5}{4}\rho)\phi = 0$ satisfying the boundary condition
$\lim r^{\frac{1}{2}}\phi <\infty$ as $r\rightarrow\infty$, carries the
non-decomposable representation
$D(\frac {1}{2}, 0)\rightarrow\ D( \frac {5}{2},0)$.
Quantization needs a non-degenerate, invariant symplectic structure. This
requires the introduction of additional modes, canonically conjugate to
the gauge modes (compare the situation in electrodynamics where Maxwell
theory has no momentum conjugate to gauge modes), to give to the total
space the symmetric form
\begin{equation}
D(\frac{5}{2}, 0)\rightarrow D(\frac{1}{2}, 0)\rightarrow
D(\frac{5}{2}, 0)
\end{equation}
or ``scalar $\rightarrow$ transverse $\rightarrow$ gauge''. Initially,
this was done by admitting logarithmic solutions of the Klein-Gordon
equation above. Afterwards it was discovered that the dipole equation
$(\square -\frac{5}{4}\rho)^{2}\phi = 0$
with the same boundary conditions, provides a much more interesting solution
to the problem.
It is remarkable that this particular instance of the dipole equation, in
marked contrast with what is the case in flat space, and also in
anti De Sitter space with any other value of the mass
parameter, actually contains physical propagating
modes. (In all the other cases the representation takes the form
``scalar$\rightarrow$gauge'', with no physical section in between.)
What is even more remarkable is that this theory is a {\it topological
field theory}; that is, the physical solutions manifest themselves only
by their boundary values at $r\rightarrow\infty$: $\lim r^{\frac{1}{2}}\phi$
defines a field on the 3-dimensional boundary at infinity. There, on the
boundary, gauge invariant interactions are possible and make a 3-dimensional
conformal field theory. This is a 4-dimensional analogue of the
5-dimensional anti De Sitter/4-dimensional conformal field theory duality
discovered recently by Maldacena \cite{Ma97}.
However, if massless fields (in 4 dimensions) are singleton composites,
then singletons must come to life as four-dimensional objects, and this
requires the introduction of unconventional statistics. The requirement
that the bilinears have the properties of ordinary (massless) bosons also
tells us that the statistics of singletons must be of another sort.
The basic idea is \cite{FF88,FFS88} that we can decompose the
singleton field operator as
$\phi(x)=\sum_{-\infty}^\infty \phi^j(x)a_j$ in terms of positive
energy creation operators $a^{*j}=a_{-j}$ and annihilation
operators $a_j$ (with $j>0$) without so far making any assumptions
about their commutation relations. The choice of commutation relations comes
later, when requiring that photons, considered as two-Rac
fields (using the full tensor product of the two singleton
triplets) be Bose-Einstein quanta. The singletons are then subject
to unconventional statistics \cite{FF89} (which is perfectly
admissible since they are naturally confined) and an appropriate
Fock space can be constructed. Based on these principles, a
(conformally covariant) composite QED theory was constructed
\cite{FF88}, with all the good features of the usual theory.
In addition one can show \cite{FF87} that the BRST structure of singleton
gauge theory induces the BRST structure of the electromagnetic
potential.
Conformal covariance is based \cite{BFH83} on the
indecomposable $\mathfrak{so}(2,4)$ Gupta-Bleuler triplet
$D(1,\frac{1}{2},\frac{1}{2})\rightarrow
[D(2,1,0)\oplus D(2,0,1)\oplus \mathrm{Id}]\rightarrow
D(1,\frac{1}{2},\frac{1}{2})$
which gives by restriction two inequivalent De Sitter triplets
$D(3,0)\rightarrow D(2,1)\rightarrow D(3,0)$ and
$D(1,1)\rightarrow [D(2,1)\oplus\mathrm{Id}]\rightarrow D(1,1)$,
both of which appear in the direct product of $D(\frac {5}{2}, 0)
\rightarrow\ D(\frac {1}{2}, 0) \rightarrow\
D(\frac {5}{2}, 0)$ by itself.
This procedure can be (and has in great part been) extended to
the spinor singleton (the Di) and both Di and Rac can be
combined to give a superfield formulation covariant under
the superalgebra $\mathfrak{osp}(4\vert 1)$ \cite{Fr88,FF98}.
This will permit the inclusion of Yang-Mills fields, quantum gravity,
supergravity and models of QCD, all based on singletons as
fundamental constituents.
The latest contribution \cite{FF98} to this interpretation of massless
fields as singleton composites deals with gravitons, giving an explicit
expression for the weak gravitational potential in terms of singleton
bilinears.
If this idea is introduced in the context of bulk/boundary duality, then it
is natural to relate massless fields on the bulk to conserved currents on
the boundary. But we are interested in the composite nature of massless
fields on space time (the bulk), and a direct current-field identity is
then inappropriate, since currents are conserved by virtue of
the field equations while massless fields are divergenceless only on the
physical subspace defined by gauge fixing.
In the paper \cite{FF98} it was shown that the dipole formulation
provides a natural construction of all massless fields in terms of
bilinears that are conserved only by virtue of the gauge fixing
condition on constituent singleton fields.
Now remember that the ``massless'' De Sitter representation
$D(\frac{3}{2},\frac{1}{2})$ (in contrast with the other $D(s+1,s)$
representations) lies above the limit of unitarity, a fact that
singles it out among massless AdS$_4$ particles. It can be obtained
as one of the two, $\gamma_5$-related, irreducible representations that
constitute the space of solutions of the corresponding Dirac equation.
By developing a field theory of composite neutrinos along the lines
explained above (neutrinos composed of singleton pairs, with three flavors
of singletons) it might be possible to {\it correlate} the recently observed
oscillations between the two or three kinds of neutrinos (which
suggest that they have a mass and give some estimates of it) with
the {\it AdS$_4$ description} of these ``massless'' particles. To avoid
misunderstandings we want to stress that we are fully aware of the fact that
any reasonable estimate of \underbar{the value} of the cosmological constant
rules out a direct connection to the value of experimental parameters like
PC violation coupling constants or neutrino masses. (PC violation is a
feature of composite QED, though no estimate of its strength has been
made.) What we are saying is that \underbar{the structure} of
Anti De Sitter field theory, and more especially the structure of
singleton field theory, may provide a natural framework for a
description of neutrino oscillations.
\section{Introduction}
Recall that a random variable $Z$ has a {\em symmetric $\alpha$-stable} (S$\alpha$S) distribution with $0<\alpha\le 2$,
if ${\mathbb E}\exp(itZ) = \exp(-\sigma^\alpha|t|^\alpha)$ for all $t\in\mathbb R$ with some constant $\sigma>0$. A process
$X=\indt X$ is said to be S$\alpha$S if all its finite linear combinations follow S$\alpha$S distributions.
In this paper, we investigate the general decomposability problem for S$\alpha$S processes with $0<\alpha<2$. Namely, let $X = \indt X$ be an S$\alpha$S process indexed by an arbitrary set $T$. Suppose that
\begin{equation}\label{eq:decomposition}
\{X_t\}_{t\in T} \stackrel{\rm d}{=} \bccbb{ X_t^{(1)} + \cdots + X_t^{(n)}}_{t\in T},
\end{equation}
where `$\stackrel{\rm d}{=}$' means equality in finite-dimensional distributions, and $X\topp k = \indt{X\topp k}, k=1,\dots,n$
are {\em independent} S$\alpha$S processes. We will write $X\stackrel{\rm d}{=} X\topp1+\cdots+X\topp n$
in short, and each $X^{(k)}$ will be referred to as a {\em component} of $X$. The stability property readily
implies that~\eqref{eq:decomposition} holds with $X\topp k\stackrel{\rm d}{=} n^{-1/\alpha} X \equiv \{n^{-1/\alpha}X_t\}_{t\in T}$.
The components equal in finite-dimensional distributions to a constant multiple of $X$ will be referred to as
{\em trivial}. We are interested in the general structure of all possible {\it non-trivial} S$\alpha$S components of $X$.
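Before proceeding, a minimal numerical illustration of the trivial decomposition may be helpful. The sketch below is purely illustrative: the sampler is the standard Chambers--Mallows--Stuck recipe (not part of the present development), and it checks that the empirical characteristic function of $n^{-1/\alpha}(Z_1+\cdots+Z_n)$, the sum of $n$ independent trivial components, agrees with $\exp(-|t|^\alpha)$:

```python
import math
import random

def rsas(alpha, rng):
    """One draw from the standard SaS(alpha) law E exp(itZ) = exp(-|t|^alpha),
    via the Chambers-Mallows-Stuck formula (symmetric case)."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

def ecf(sample, t):
    """Empirical characteristic function; real-valued for a symmetric law."""
    return sum(math.cos(t * z) for z in sample) / len(sample)

rng = random.Random(7)
alpha, n, N = 1.5, 4, 200_000

z = [rsas(alpha, rng) for _ in range(N)]                     # X itself
zn = [sum(rsas(alpha, rng) for _ in range(n)) / n ** (1.0 / alpha)
      for _ in range(N)]                                     # n trivial components

for t in (0.5, 1.0, 2.0):
    target = math.exp(-abs(t) ** alpha)
    assert abs(ecf(z, t) - target) < 0.01
    assert abs(ecf(zn, t) - target) < 0.01
```

In the symmetric case the real part of the characteristic function suffices; the tolerance $0.01$ is a few standard errors at this sample size.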
Many important decompositions \eqref{eq:decomposition} of S$\alpha$S processes are already available
in the literature: see for example Cambanis et al.~\cite{cambanis92characterization},
Rosi\'nski~\cite{rosinski95structure}, Rosi\'nski and Samorodnitsky~\cite{rosinski:samorodnitsky:1996}, Surgailis et al.~\cite{surgailis98mixing},
Pipiras and Taqqu~\cite{pipiras02structure, pipiras:taqqu:2004cy}, and Samorodnitsky~\cite{samorodnitsky05null}, to name a few.
These results were motivated by studies of various probabilistic and structural aspects of the underlying
S$\alpha$S processes such as ergodicity, mixing, stationarity, self-similarity, etc.
Notably, Rosi\'nski~\cite{rosinski95structure} established a fundamental connection between
stationary S$\alpha$S processes and non-singular flows. He developed important tools based on minimal
representations of S$\alpha$S processes and inspired multiple decomposition results motivated by connections
to ergodic theory.
In this paper, we adopt a different perspective.
Our main goal is to characterize
{\it all} possible S$\alpha$S decompositions \eqref{eq:decomposition}.
Our results show how the dependence structure of an S$\alpha$S process determines the structure of its components.
Consider S$\alpha$S processes $\indt X$ indexed by a complete separable metric space $T$ with
an integral representation
\begin{equation}\label{rep:integral}
\{X_t\}_{t\in T} \stackrel{\rm d}{=} {\Big\{} \int_{S} f_t(s) M_\alpha({\rm d} s) {\Big\}}_{t\in T},
\end{equation}
where real-valued functions $\{f_t\}_{t\in T} \subset L^\alpha(S,{\cal B}_S,\mu)$ are referred to as the {\it spectral functions} of $\indt X$. By default, $M_\alpha$ is a real-valued S$\alpha$S random measure on the
standard Lebesgue space $(S,{\cal B}_S,\mu)$, with a $\sigma$-finite control measure $\mu$.
The spectral functions determine the finite-dimensional distributions of the process: for all $n\in\mathbb N, t_j\in T, a_j\in\mathbb R$,
\begin{equation}\label{eq:fdd}
{\mathbb E}\exp\bpp{-i\summ j1na_jX_{t_j}} = \exp\bpp{-\int_S\babs{\summ j1na_jf_{t_j}}^\alpha{\rm d}\mu}\,.
\end{equation}
Every S$\alpha$S process $X$ that is separable in
probability can be shown to have such a representation; see, for example, the excellent book by Samorodnitsky and Taqqu~\cite{samorodnitsky94stable} for detailed discussions of S$\alpha$S distributions and processes. Without loss of generality, we always assume that the spectral functions $\indt f\subset L^\alpha(S,{\cal B}_S,\mu)$ have {\em full support}, i.e., $S = {\rm supp}\{ f_t,\ t\in T\}$.
We first state the main result of this paper. To this end, we recall that the {\it ratio $\sigma$-algebra} of a spectral representation $F= \indt f$ (of $\{X_t\}$) is defined as
\begin{equation}\label{e:rho}
\rho(F) \equiv\rho\{ f_t,\ t\in T\} := \sigma\{ f_{t_1}/f_{t_2},\ t_1,t_2\in T\}.
\end{equation}
The following result characterizes the structure of all S$\alpha$S decompositions.
\begin{theorem}\label{thm:1}
Suppose $\indt X$ is an S$\alpha$S process ($0<\alpha<2$) with spectral representation
\[
\indt X \stackrel{\rm d}{=} \bccbb{\int_Sf_t(s)M_\alpha({\rm d} s)}_{t\in T}\,,
\]
with
$\indt f\subset L^\alpha(S,{\cal B}_S,\mu)$. Let $\{X_t^{(k)}\}_{t\in T},\ k=1,\cdots,n$ be independent S$\alpha$S processes.
\begin{itemize}
\item [(i)] The
decomposition
\begin{equation}\label{eq:decomposition1}
\indt X \stackrel{\rm d}{=} \ccbb{X\topp 1_t+\cdots + X\topp n_t}_{t\in T}
\end{equation}
holds, if and only if there exist measurable functions
$r_k:S\to[-1,1]$, $k=1,\cdots,n$, such that
\begin{equation}\label{eq:Xk}
\indt{X\topp k}\stackrel{\rm d}{=}\bccbb{\int_Sr_k(s)f_t(s)M_\alpha({\rm d} s)}_{t\in T},\ \ k=1,\cdots,n.
\end{equation}
In this case, necessarily $\summ k1n|r_k(s)|^\alpha = 1$, $\mu$-almost everywhere on $S$.
\item[(ii)] If \eqref{eq:decomposition1} holds, then the $r_k$'s in \eqref{eq:Xk} can be chosen
to be non-negative and $\rho(F)$-measurable. Such $r_k$'s are unique modulo $\mu$.
\end{itemize}
\end{theorem}
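The necessity of $\summ k1n|r_k(s)|^\alpha = 1$ in part (i) can be read off directly from \eqref{eq:fdd}: by independence, the characteristic-function exponents of the components add, and pointwise $\summ k1n\babs{r_k\summ j1na_jf_{t_j}}^\alpha = \bpp{\summ k1n|r_k|^\alpha}\babs{\summ j1na_jf_{t_j}}^\alpha$. The toy computation below checks this additivity on a small discrete spectral space; all numerical choices (the space, the masses, the functions) are ours and purely illustrative.

```python
import math
import random

rng = random.Random(1)
alpha, m, n = 1.2, 10, 3   # stability index, |S|, number of components

# a mock discrete spectral space S = {0, ..., m-1} with masses mu[s]
mu = [rng.uniform(0.5, 2.0) for _ in range(m)]
# two spectral functions f_{t_1}, f_{t_2} and coefficients a_1, a_2
f1 = [rng.gauss(0.0, 1.0) for _ in range(m)]
f2 = [rng.gauss(0.0, 1.0) for _ in range(m)]
a1, a2 = 0.7, -1.3
g = [a1 * f1[s] + a2 * f2[s] for s in range(m)]   # the combination sum_j a_j f_{t_j}

def exponent(h):
    """Characteristic-function exponent: integral of |h|^alpha dmu."""
    return sum(abs(h[s]) ** alpha * mu[s] for s in range(m))

# random r_k >= 0, normalized so that sum_k r_k(s)^alpha = 1 for every s
u = [[rng.uniform(0.1, 1.0) for _ in range(m)] for _ in range(n)]
tot = [sum(u[k][s] ** alpha for k in range(n)) for s in range(m)]
r = [[u[k][s] / tot[s] ** (1.0 / alpha) for s in range(m)] for k in range(n)]

# the component exponents recombine into the exponent of X itself
comp = sum(exponent([r[k][s] * g[s] for s in range(m)]) for k in range(n))
assert math.isclose(comp, exponent(g), rel_tol=1e-12)
```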
As an application, we study the structure of the {\em stationary} S$\alpha$S components of a stationary S$\alpha$S
process.
We obtain a characterization for all possible stationary components of stationary S$\alpha$S processes
in Theorem~\ref{thm:stationary} below.
As a simple example, consider the moving average process $\{X_t\}_{t\in\mathbb R^d}$ with spectral representation
\[
\{X_t\}_{t\in\mathbb R^d}\stackrel{\rm d}{=} \bccbb{\int_{\mathbb R^d}f(t+s)M_\alpha({\rm d} s)}_{t\in \mathbb R^d},
\]
where $d\in\mathbb N$, $M_\alpha$ is an S$\alpha$S random measure on $\mathbb R^d$ with the Lebesgue control measure $\lambda$, and $f\in L^\alpha(\mathbb R^d,{\cal B}_{\mathbb R^d},\lambda)$ (see, e.g.,~\cite{samorodnitsky94stable}). We show that such a process has only trivial stationary S$\alpha$S components, i.e.\
all its stationary components are rescaled versions of the original process (Corollary~\ref{coro:MMA1}). Such stationary
S$\alpha$S processes will be called {\em indecomposable}.
More examples are provided in Sections~\ref{sec:characterization} and~\ref{sec:stationary}.
We also develop parallel decomposability theory for max-stable
processes. Recently, Kabluchko~\cite{kabluchko09spectral} and Wang and Stoev~\cite{wang10association, wang10structure}
have established intrinsic connections between sum- and
max-stable processes. In particular, the tools in \cite{wang10association} readily imply that the developed decomposition
theory for S$\alpha$S processes applies {\em mutatis mutandis} to max-stable processes.
The rest of the paper is structured as follows. In Section \ref{sec:characterization}, we provide some consequences of Theorem~\ref{thm:1} for general S$\alpha$S processes. The stationary case is discussed in Section \ref{sec:stationary}. Parallel results on max-stable processes are presented in Section \ref{sec:maxstable}. The proof of Theorem~\ref{thm:1} is given in Section~\ref{sec:proofs}.
\section{S$\alpha$S Components} \label{sec:characterization}
In this section, we provide a few examples to illustrate the consequences of our main result Theorem~\ref{thm:1}. The first one is about S$\alpha$S processes with independent increments. Recall that we always assume $0<\alpha<2$.
\begin{corollary}\label{c:ind-incr} Let $X=\{X_t\}_{t\in\mathbb R_+}$ be an arbitrary S$\alpha$S process with
independent increments and $X_0 = 0$. Then all S$\alpha$S components of $X$ also have independent increments.
\end{corollary}
\begin{proof} Write $m(t) = \nn{X_t}_\alpha^\alpha$, where $\|X_t\|_\alpha$ denotes the scale coefficient of
the S$\alpha$S random variable $X_t$. By the independence of the increments of $X$, it follows that $m$ is a
non-decreasing function with $m(0) = 0$. First, we consider the simple case when $m(t)$ is right-continuous.
Consider the Borel measure $\mu$ on $[0,\infty)$ determined by $\mu([0,t]):= m(t)$. The independence of the
increments of $X$ readily implies that $X$ has the representation:
\begin{equation}\label{eq:ind}
\displaystyle{\{X_t\}_{t\in\mathbb R_+} \stackrel{\rm d}{=} {\Big\{} \int_0^\infty{\bf 1}_{[0,t]}(s) M_\alpha({\rm d} s) {\Big\}}_{t\in\mathbb R_+}},
\end{equation}
where $M_\alpha$ is an S$\alpha$S random measure with control measure $\mu$.
Now, for any S$\alpha$S component $Y (\equiv X^{(k)})$ of $X$, we have that \eqref{eq:Xk} holds
with $f_t(s) = {\bf 1}_{[0,t]}(s)$ and some function $r(s) (\equiv r_k(s))$. This implies that the increments of $Y$ are also
independent since, for example, for any $0\le t_1<t_2 $, the spectral functions $r(s) f_{t_1}(s) =
r(s) {\bf 1}_{[0,t_1]}(s)$ and $r(s) f_{t_2}(s) - r(s) f_{t_1}(s) = r(s) {\bf 1}_{(t_1,t_2]}(s)$ have disjoint supports.
It remains to prove the general case. The difficulty is that $m(t)$ may have (at most countably many) discontinuities, and a representation such as~\eqref{eq:ind} is not always possible. Nevertheless, introduce the right-continuous functions $t\mapsto m_i(t), i=0,1$,
\[
m_0(t) \mathrel{\mathop:}= m(t+) - \sum_{\tau\leq t}(m(\tau)-m(\tau-))\ \ \mbox{ and }\ \
m_1(t) \mathrel{\mathop:}= \sum_{\tau\leq t}(m(\tau)-m(\tau-))
\]
and let $\wt M_\alpha$ be an S$\alpha$S random measure on $\mathbb R_+\times\{0,1\}$ with control
measure $\mu([0,t]\times\{i\}) := m_i(t),\ i=0,1,\ t\in{\mathbb R}_+$. In this way, as in \eqref{eq:ind} one can show that
\[
\indt X \stackrel{\rm d}{=} \bccbb{\int_{\mathbb R_+\times\{0,1\}}{\bf 1}_{[0,t)\times\{0\}}(s,v) + {\bf 1}_{[0,t]\times\{1\}}(s,v) \wt M_\alpha({\rm d} s,{\rm d} v)}_{t\in T}\,.
\]
The rest of the proof remains similar and is omitted.
\end{proof}
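The disjoint-support step in the proof can be made concrete: if two spectral functions vanish on complementary parts of $S$, then pointwise $|u+v|^\alpha = |u|^\alpha + |v|^\alpha$, so the joint exponent in \eqref{eq:fdd} splits into a sum and the corresponding S$\alpha$S variables are independent. The sketch below illustrates this with the indicator kernels of \eqref{eq:ind} on a small discrete stand-in for the control measure (our choices, purely illustrative):

```python
import math

alpha = 0.8
m, mesh = 20, 0.1            # grid of m cells approximating S = [0, 2]
mu = [mesh] * m              # discrete stand-in for the control measure

def ind(t):
    """Grid version of the indicator spectral function 1_{[0, t]}."""
    return [1.0 if (s + 1) * mesh <= t else 0.0 for s in range(m)]

def expo(h):
    """Characteristic-function exponent: integral of |h|^alpha dmu."""
    return sum(abs(h[s]) ** alpha * mu[s] for s in range(m))

t1, t2 = 0.5, 1.5
r = [math.cos(s) for s in range(m)]   # an arbitrary component function r
y1 = [r[s] * ind(t1)[s] for s in range(m)]                  # Y_{t1}
dy = [r[s] * (ind(t2)[s] - ind(t1)[s]) for s in range(m)]   # Y_{t2} - Y_{t1}

# disjoint supports => the joint exponent splits, i.e. independence
for a, b in [(1.0, 2.0), (-0.3, 0.7)]:
    joint = expo([a * y1[s] + b * dy[s] for s in range(m)])
    split = expo([a * v for v in y1]) + expo([b * v for v in dy])
    assert math.isclose(joint, split, rel_tol=1e-12)
```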
\begin{remark}
Theorem~\ref{thm:1} and Corollary~\ref{c:ind-incr} do not apply to the Gaussian case ($\alpha = 2$). For the sake of simplicity, take $T = \{1,2\}$ and $n=2$ (2 S$\alpha$S components) in~\eqref{eq:decomposition}. In this case, all the (in)dependence information of the mean-zero Gaussian process $\indt X$ is characterized by the covariance matrix $\Sigma$ of the Gaussian vector $(X_1\topp1, X_1\topp2, X_2\topp1,X_2\topp2)$. A counterexample can be easily constructed by choosing appropriately $\Sigma$. This reflects the drastic difference of the geometries of $L^\alpha$ spaces for $\alpha<2$ and $\alpha = 2$.
\end{remark}
The next natural question to ask is whether two S$\alpha$S processes have {\it common components}. Namely, the S$\alpha$S process $Z$ is a common component of the S$\alpha$S processes $X$ and $Y$, if $X\stackrel{\rm d}{=} Z+X\topp1$ and $Y\stackrel{\rm d}{=} Z+Y\topp1$, where ${X\topp1}$ and ${Y\topp1}$ are both S$\alpha$S processes independent of $Z$.
To study the common components, the {\it co-spectral} point of view introduced in Wang and Stoev~\cite{wang10structure} is helpful.
Consider a {\it measurable} S$\alpha$S process $\indt X$ with spectral representation~\eqref{rep:integral}, where the index set $T$ is equipped with a measure $\lambda$ defined on the $\sigma$-algebra ${\cal B}_T$. Without loss of generality, we take $f(\cdot,\cdot):(S\times T, {\cal B}_S\times{\cal B}_T)\to (\mathbb R,{\cal B}_\mathbb R)$ to be jointly measurable (see Theorems 9.4.2 and 11.1.1 in~\cite{samorodnitsky94stable}). The {\it co-spectral functions}, $f_\cdot(s)\equiv f(s,\cdot)$, are elements of $L^0(T)\equiv L^0(T,{\cal B}_T,\lambda)$, the space of ${\cal B}_T$-measurable functions modulo $\lambda$-null sets. The co-spectral functions are indexed by $s\in S$, in contrast to the spectral functions $f_t(\cdot)$ indexed by $t\in T$.
Recall also that a set ${\cal P}\subset L^0(T)$ is a {\it cone}, if $c{\cal P} = {\cal P}$ for all $c\in\mathbb R\setminus\{0\}$ and $\vv 0\in{\cal P}$. We write $\{f_\cdot(s)\}_{s\in S}\subset{\cal P}$ modulo $\mu$, if for $\mu$-almost all $s\in S$, $f_\cdot(s)\in{\cal P}$.
\begin{proposition}\label{prop:common}
Let $X^{(i)}=\{X_t^{(i)}\}_{t\in T}$ be S$\alpha$S processes with measurable representations
$\{f_t^{(i)}\}_{t\in T}\subset L^\alpha(S_i,{\cal B}_{S_i},\mu_i),\ i=1,2$. If there exist two cones ${\cal P}_i\subset L^0(T),i=1,2$, such that $\{f\topp i_\cdot(s)\}_{s\in S_i}\subset{\cal P}_i$ modulo $\mu_i$, for $i = 1,2$, and ${\cal P}_1\cap{\cal P}_2 = \{\vv 0\}$, then the two processes have no common component.
\end{proposition}
\begin{proof}
Suppose $Z$ is a component of $X\topp 1$. Then, by Theorem~\ref{thm:1}, $Z$ has a spectral representation $\indt{r\topp1f\topp1}$, for some ${\cal B}_{S_1}$-measurable function $r\topp1$.
By the definition of cones, the co-spectral functions of $Z$ are included in ${\cal P}_1$, i.e., $\{r\topp1(s)f_\cdot\topp1(s)\}_{s\in S_1}\subset{\cal P}_1$ modulo $\mu_1$. If $Z$ is also a component of $X\topp 2$, then by the same argument, $\{r\topp2(s)f_\cdot\topp2(s)\}_{s\in S_2}\subset{\cal P}_2$ modulo $\mu_2$, for some ${\cal B}_{S_2}$-measurable function $r\topp2(s)$. Since ${\cal P}_1\cap{\cal P}_2 = \{\vv 0\}$, it then follows that $\mu_i({\rm{supp}}(r\topp i)) = 0, i=1,2$, or equivalently $Z = 0$, the degenerate case.
\end{proof}
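To illustrate Proposition~\ref{prop:common}, consider the following simple example, with cones chosen only for the purpose of illustration. Let $T=\mathbb R$ be equipped with the Lebesgue measure $\lambda$, and set
\[
{\cal P}_1 = \ccbb{c\,{\bf 1}_{[a,\infty)}:c\in\mathbb R\setminus\{0\},\ a\in\mathbb R}\cup\{\vv 0\}\,,\qquad
{\cal P}_2 = \ccbb{c\,e^{-(\cdot-a)^2}:c\in\mathbb R\setminus\{0\},\ a\in\mathbb R}\cup\{\vv 0\}\,.
\]
Both sets are cones in $L^0(T)$, and ${\cal P}_1\cap{\cal P}_2=\{\vv 0\}$, since no non-zero multiple of an indicator function coincides with a Gaussian-shaped function modulo $\lambda$. Hence, by Proposition~\ref{prop:common}, any two S$\alpha$S processes whose co-spectral functions lie in ${\cal P}_1$ and ${\cal P}_2$, respectively, have no common component.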
We conclude this section with an application to S$\alpha$S moving averages.
\begin{corollary}\label{coro:MAequal}
Let $X\topp1$ and $X\topp2$ be two S$\alpha$S moving averages
\[
\{X\topp i_t\}_{t\in{\mathbb R}^d}\stackrel{\rm d}{=} \bccbb{\int_{{\mathbb R}^d}f\topp i(t+s)M_\alpha\topp i({\rm d} s)}_{t\in {\mathbb R}^d}
\]
with kernel functions $f\topp i\in L^\alpha(\mathbb R^d,{\cal B}_{\mathbb R^d},\lambda), i=1,2$. Then, either
\begin{equation}\label{eq:MAequal}
X\topp1\stackrel{\rm d}{=} cX\topp2\mbox{ for some } c>0\,,
\end{equation}
or $X\topp1$ and $X \topp2$ have no common component. Moreover,~\eqref{eq:MAequal} holds, if and only if for some $\tau\in\mathbb R^d$ and $\epsilon\in\{\pm 1\}$,
\begin{equation}\label{eq:MAequal1}
f\topp1(s) = \epsilon cf\topp2(s+\tau)\,,\quad \lambda\mbox{-almost all } s\in \mathbb R^d.
\end{equation}
\end{corollary}
\begin{proof}
Clearly~\eqref{eq:MAequal1} implies~\eqref{eq:MAequal}. Conversely, if~\eqref{eq:MAequal} holds, then~\eqref{eq:MAequal1} follows as in the proof of Corollary 4.2 in~\cite{wang10structure}, with a slight modification (the proof therein was for {\it positive} cones). When~\eqref{eq:MAequal} (or equivalently~\eqref{eq:MAequal1}) does not hold, consider the smallest cones containing $\{f\topp i(s+\cdot)\}_{s\in\mathbb R^d}, i =1,2$, respectively. Since these two cones have trivial intersection $\{\vv 0\}$, Proposition~\ref{prop:common} implies that $X\topp1$ and $X\topp 2$ have no common component.
\end{proof}
\section{Stationary S$\alpha$S Components and Flows}\label{sec:stationary}
Let $X =\{X_t\}_{t\in T}$ be a stationary S$\alpha$S process with representation \eqref{rep:integral},
where now $T ={\mathbb R}^d$ or $T={\Bbb Z}^d$, $d\in\mathbb N$.
The seminal work of Ros\'nski~\cite{rosinski95structure} established an important connection between stationary S$\alpha$S processes and {\it flows}.
A family of functions $\indt\phi$ is said to be a flow on $(S,{\cal B}_S,\mu)$, if for all $t_1,t_2\in T$, $\phi_{t_1+t_2}(s) = \phi_{t_1}(\phi_{t_2}(s))$ for all $s\in S$, and $\phi_0(s) = s$ for all $s\in S$. We say that a flow is {\it non-singular}, if $\mu(\phi_t(A)) = 0$ is equivalent to
$\mu(A) = 0$, for all $A\in{\cal B}_S, t\in T$. Given a flow $\indt\phi$, $\indt c$ is said to be a {\it cocycle} if $c_{t+\tau}(s) = c_t(s)c_\tau\circ\phi_t(s)$ $\mu$-almost surely for all $t,\tau\in T$ and $c_t\in\{\pm1\}$ for all $t\in T$.
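A canonical example, stated here for concreteness, is the translation flow on $(S,{\cal B}_S,\mu) = (\mathbb R^d,{\cal B}_{\mathbb R^d},\lambda)$ with $T = \mathbb R^d$:
\[
\phi_t(s) = s+t\,,\qquad \ddfrac\mu{\phi_t}\mu \equiv 1\,,\qquad c_t\equiv 1\,.
\]
This flow preserves the Lebesgue measure, and is in particular non-singular, while the constant family $c_t\equiv 1$ trivially satisfies the cocycle identity. For this flow, the representation~\eqref{eq:flow} below reduces to $f_t(s) = f_0(t+s)$, i.e., the moving averages of Corollary~\ref{coro:MAequal}.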
To understand the relation between the structure of stationary S$\alpha$S processes and flows, it is necessary to work with {\em minimal}
representations of S$\alpha$S processes, introduced by Hardin~\cite{hardin81isometries,hardin82spectral}.
The minimality assumption is crucial in many results on the structure of
S$\alpha$S processes, although it is in general difficult to check (see e.g.~Rosi\'nski~\cite{rosinski06minimal} and
Pipiras~\cite{pipiras07nonminimal}).
\begin{definition} \label{d:minimal} The spectral functions $F \equiv \indt f$ (and the corresponding spectral representation~\eqref{rep:integral}) are
said to be minimal, if the ratio $\sigma$-algebra $\rho(F)$ in \eqref{e:rho}
is equivalent to ${\cal B}_S$, i.e., for all $A\in {\cal B}_S$, there exists $B \in \rho(F)$ such that
$\mu(A\Delta B) = 0,$ where $A\Delta B = (A\setminus B) \cup (B\setminus A)$.
\end{definition}
Rosi\'nski (\cite{rosinski95structure}, Theorem 3.1) proved that if $\indt f$ is minimal, then there exist a unique (modulo $\mu$) non-singular flow $\indt\phi$ and a corresponding cocycle $\indt c$, such that for all $t\in T$,
\begin{equation}\label{eq:flow}
f_t(s) = c_t(s)\bpp{\ddfrac\mu{\phi_t}\mu(s)}^{1/\alpha}f_0\circ\phi_t(s)\,, \mu\mbox{-almost everywhere.}
\end{equation}
Conversely, suppose that \eqref{eq:flow} holds for some non-singular flow $\indt \phi$, a corresponding cocycle $\indt c$, and a function
$f_0\in L^\alpha(S,\mu)$ ($\indt f$ not necessarily minimal). Then, clearly the S$\alpha$S process $X$ in \eqref{rep:integral} is stationary. In this case, we
shall say that $X$ is generated by the flow $\indt \phi$.
Consider now an S$\alpha$S decomposition \eqref{eq:decomposition} of $X$, where the independent components $\indt {X\topp k}$'s
are {\em stationary}. This will be referred to as a {\em stationary S$\alpha$S decomposition}, and the $\indt {X\topp k}$'s
as {\em stationary components} of $X$. Our goal in this section is to characterize the structure of all possible stationary components.
This characterization involves the invariant $\sigma$-algebra with respect to the flow $\indt\phi$:
\begin{equation}\label{e:F-phi}
{\cal F}_\phi = \{A\in{\cal B}_S:\mu(\phi_\tau(A) \Delta A) =0\,,\mbox{ for all } \tau\in T\}\,.
\end{equation}
Given a function $g$ and a $\sigma$-algebra ${\cal G}$, we write $g\in{\cal G}$, if $g$ is measurable with respect to ${\cal G}$.
\begin{theorem}\label{thm:stationary} Let $\indt X$ be a stationary and measurable
S$\alpha$S process with spectral functions $\indt f$ given by
\[
f_t(s) = c_t(s)\bpp{\ddfrac\mu{\phi_t}\mu(s)}^{1/\alpha}f_0\circ\phi_t(s)\,,\quad t\in T\,.
\]
{\it (i)} Suppose that $\indt X$ has a stationary S$\alpha$S decomposition
\begin{equation}\label{eq:decomposition_stat}
\indt X \stackrel{\rm d}{=} \ccbb{X\topp 1_t+\cdots + X\topp n_t}_{t\in T}.
\end{equation}
Then, each component $\indt {X\topp k}$ has a representation
\begin{equation}\label{eq:Xk_stat}
\indt{X\topp k}\stackrel{\rm d}{=}\bccbb{\int_Sr_k(s)f_t(s)M_\alpha({\rm d} s)}_{t\in T},\ \ k=1,\cdots,n,
\end{equation}
where the
$r_k$'s can be chosen to be non-negative and $\rho(F)$-measurable. This choice is unique modulo
$\mu$ and these $r_k$'s are $\phi$-invariant, i.e.\ $r_k\in{\cal F}_\phi$.
\noindent{\it (ii)} Conversely, for any $\phi$-invariant $r_k$'s such that $\summ k1n|r_k(s)|^\alpha = 1$, $\mu$-almost everywhere on $S$, decomposition \eqref{eq:decomposition_stat} holds with $X^{(k)}$'s as in
\eqref{eq:Xk_stat}.
\end{theorem}
\begin{proof} By using \eqref{eq:flow}, a change of variables, and the $\phi$-invariance of
the functions $r_k$'s, one can show that the $X^{(k)}$'s in \eqref{eq:Xk_stat} are stationary. This fact and
Theorem \ref{thm:1} yield part {\em (ii)}.
We now show {\em (i)}.
Suppose that $X^{(k)}$ is a stationary (S$\alpha$S) component of $X$. Theorem \ref{thm:1} implies that there
exists a unique (modulo $\mu$) non-negative and $\rho(F)$-measurable function $r_k$ for which \eqref{eq:Xk_stat} holds.
By the stationarity of $X^{(k)}$, it follows that for all $\tau \in T$, $\{ r_k(s) f_{t+\tau}(s)\}_{t\in T}$ is also a
spectral representation of $X^{(k)}$. By the flow representation \eqref{eq:flow}, it follows that for all $t,\tau\in T$,
\begin{equation}\label{eq:flow1}
f_{t+\tau}(s) = c_\tau(s)f_t\circ\phi_\tau(s)\bpp{\ddfrac\mu{\phi_\tau}\mu}^{1/\alpha}(s)\,,\mbox{ $\mu$-almost everywhere,}
\end{equation}
and we obtain that for all $\tau, t_j\in T, a_j\in \mathbb R,\ j=1,\cdots,n$:
\[
\int_S\babs{\summ j1na_jr_k(s)f_{t_j+\tau}(s)}^\alpha\mu({\rm d} s)
= \int_S {\Big|}\sum_{j=1}^n a_j r_k\circ\phi_{-\tau}(s) f_{t_j}(s) {\Big|} ^\alpha \mu({\rm d} s),\nonumber
\]
which shows that $\{r_k \circ\phi_{-\tau}(s) f_t(s)\}_{t\in T}$ is also a representation for $X^{(k)}$, for all $\tau \in T$.
Observe that from~\eqref{eq:flow1}, for all $t_1,t_2,\tau\in T$ and $\lambda\in\mathbb R$,
\[
\bccbb{\frac{f_{t_1+\tau}}{f_{t_2+\tau}}\leq \lambda} = \phi_\tau^{-1}\bccbb{\frac{f_{t_1}}{f_{t_2}}\leq \lambda}\mbox{ modulo }\mu.
\]
It then follows that for all $\tau\in T$, the $\sigma$-algebra
$\phi_{-\tau} (\rho(F))\equiv (\phi_\tau)^{-1} (\rho(F))$ is equivalent to $\rho(F)$. This, by the uniqueness of $r_k\in\rho(F)$ (Theorem \ref{thm:1}),
implies that $r_k\circ \phi_\tau = r_k$ modulo $\mu$, for all $\tau\in T$. Then, $r_k\in {\cal F}_\phi$ follows from a standard measure-theoretic argument. The proof is complete.
\end{proof}
\begin{remark}
The structure of the {\em stationary} S$\alpha$S components of stationary S$\alpha$S processes (including
random fields) has attracted much interest since the seminal work of
Rosi\'nski~\cite{rosinski95structure, rosinski00decomposition}. See, for example,
Pipiras and Taqqu~\cite{pipiras:taqqu:2004cy}, Samorodnitsky~\cite{samorodnitsky05null},
Roy~\cite{roy07ergodic,roy09poisson}, Roy and Samorodnitsky~\cite{roy08stationary},
Roy~\cite{roy10ergodic,roy10nonsingular}, and Wang et al.~\cite{wang09ergodic}.
In view of Theorem~\ref{thm:stationary}, the components considered in these works correspond to indicator functions
$r_k(s) = {\bf 1}_{A_k}(s)$ of certain disjoint flow-invariant sets $A_k$'s arising from ergodic theory
(see e.g.~Krengel~\cite{krengel85ergodic} and Aaronson~\cite{aaronson97introduction}).
\end{remark}
\comment{\begin{remark} Suppose the representation in \eqref{rep:integral} is minimal and consider the
decomposition \eqref{eq:decomposition}, where $X^{(k)}_t := \int_{A_k} f_t(s) M_\alpha({\rm d} s)$ for disjoint
measurable $A_k$'s with $S = \cup_{k=1}^n A_k$. As in the proof of Corollary \ref{coro:stationaryMinimal}
one can show that the components $X^{(k)} = \{X_t^{(k)}\}_{t\in T},\ k=1,\cdots,n$ are all essentially
different and non-trivial.
\end{remark}
}
Theorem \ref{thm:stationary} can be applied to check {\it indecomposability} of stationary S$\alpha$S processes.
Recall that a stationary S$\alpha$S process is said to be {\em indecomposable}, if all its stationary S$\alpha$S
components are trivial (i.e.\ constant multiples of the original process).
\begin{corollary}\label{coro:indecomposable}
Consider $\indt X$ as in Theorem~\ref{thm:stationary}. If ${\cal F}_\phi$ is trivial, then $\indt X$ is indecomposable. The converse is true when, in addition, $\indt f$ is minimal.
\end{corollary}
\begin{proof}
If ${\cal F}_\phi$ is trivial, the result follows from Theorem~\ref{thm:stationary}.
Conversely, let $\indt f$ be minimal and $X$ indecomposable, and suppose, towards a contradiction, that ${\cal F}_\phi$ is non-trivial. Then, one can choose $A\in{\cal F}_\phi$, such that $\mu(A)>0$ and $\mu(S\setminus A)>0$, and consider
\[
\indt {X^A}\stackrel{\rm d}{=} \bccbb{\int_S{\bf 1}_A(s)f_t(s)M_\alpha({\rm d} s)}_{t\in T}.
\]
By Theorem~\ref{thm:stationary}, $X^A$ is a stationary component of $X$. It suffices to show that $X^A$ is a non-trivial component of $X$, which would contradict the indecomposability.
Suppose that $X^A$ is trivial; then $ cX^A \stackrel{\rm d}{=} X$, for some $c>0$. Thus, by
Theorem \ref{thm:stationary}, $cX^{A}$ has a representation as in \eqref{eq:Xk_stat}, with $r_k:= c{\bf 1}_A$. On the other hand,
since $c X^A \stackrel{\rm d}{=} X$, we also have the trivial representation with $r_k:= 1$. Since $A \in \rho(F)$,
the uniqueness of $r_k$ implies that $1 = c{\bf 1}_A$ modulo $\mu$, which contradicts $\mu(A^c)>0$. Therefore, $X^A$ is non-trivial.
\end{proof}
The indecomposable stationary S$\alpha$S
processes can be seen as the elementary building blocks for the construction of
general stationary S$\alpha$S processes.
We conclude this section with two examples.
\begin{Example}[Mixed moving averages] \label{example:MMA}
Consider a {\em mixed moving average} in the sense of \cite{surgailis93stable}:
\begin{equation}\label{eq:MMA}
\{X_t\}_{t\in {\mathbb R}^d}\stackrel{\rm d}{=} {\Big\{} \int_{{\mathbb R}^d \times V} f(t+s,v) M_\alpha({\rm d} s, {\rm d} v) {\Big\}}_{t\in{\mathbb R}^d}.
\end{equation}
Here, $M_\alpha$ is an S$\alpha$S random measure on ${\mathbb R}^d\times V$ with the control measure $\lambda\times\nu$, where $\lambda$ is the Lebesgue measure on $(\mathbb R^d,{\cal B}_{\mathbb R^d})$ and $\nu$ is a probability measure on $(V,{\cal B}_V)$, and $f(s,v) \in L^\alpha ({\mathbb R}^d\times V,{\cal B}_{{\mathbb R}^d\times V},\lambda\times\nu)$.
Given a disjoint union $V = \bigcup_{j=1}^nA_j$, where $A_j$'s are measurable subsets of $V$, the mixed moving averages can clearly be decomposed as in~\eqref{eq:decomposition_stat} with
\[
\{X\topp k_t\}_{t\in \mathbb R^d}\stackrel{\rm d}{=} \bccbb{\int_{\mathbb R^d\times A_k}f(t+s,v)M_\alpha({\rm d} s,{\rm d} v)}_{t\in\mathbb R^d}\,, \mbox{ for all } k = 1,\dots,n\,.
\]
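For instance, in the simplest non-trivial case $V=\{v_1,v_2\}$ with $\nu(\{v_1\})=\nu(\{v_2\})=1/2$ (a two-point mixing space, chosen here only for illustration), the mixed moving average is, in distribution, the sum of two independent moving averages:
\[
\{X_t\}_{t\in\mathbb R^d}\stackrel{\rm d}{=}\bccbb{\int_{\mathbb R^d}f(t+s,v_1)M\topp1_\alpha({\rm d} s)+\int_{\mathbb R^d}f(t+s,v_2)M\topp2_\alpha({\rm d} s)}_{t\in\mathbb R^d}\,,
\]
where $M\topp1_\alpha$ and $M\topp2_\alpha$ are independent S$\alpha$S random measures on $\mathbb R^d$, each with control measure $\lambda/2$.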
\end{Example}
Any moving average process
\begin{equation}\label{eq:MA}
\{X_t\}_{t\in{\mathbb R}^d} \stackrel{\rm d}{=} \bccbb{\int_{{\mathbb R}^d}f(t+s)M_\alpha({\rm d} s)}_{t\in{\mathbb R}^d}
\end{equation}
trivially has a mixed moving average representation.
The next result shows when the converse is true.
\begin{corollary}\label{coro:MMA1}
The mixed moving average $X$ in~\eqref{eq:MMA} is indecomposable, if and only if it has a moving
average representation as in~\eqref{eq:MA}.
\end{corollary}
\begin{proof} By Corollary~\ref{coro:indecomposable}, the moving average process~\eqref{eq:MA} is indecomposable, since in this case $\phi_t(s) = t+s, t,s\in{\mathbb R}^d$ and therefore ${\cal F}_\phi$ is trivial. This proves the `if' part.
Suppose now that $X$ in~\eqref{eq:MMA} is indecomposable.
In Section 5 of Pipiras \cite{pipiras07nonminimal}
it was shown that S$\alpha$S processes with mixed moving average representations
and {\em stationary increments} also have minimal representations of the mixed moving average type.
By using similar arguments, one can show that this is also true for the class of {\em stationary}
mixed moving average processes.
Thus, without loss of generality, we assume that the representation in \eqref{eq:MMA} is minimal.
Suppose now that there exists a set $A \in {\cal B}_V$ with
$\nu(A) >0$ and $\nu(A^c)>0$. Since ${\mathbb R}^d\times A$ and ${\mathbb R}^d\times A^c$ are flow-invariant, we have
the stationary decomposition
$
\{X_t\}_{t\in {\mathbb R}^d} \stackrel{\rm d}{=} \{ X_t^A + X_t^{A^c}\}_{t\in {\mathbb R}^d},
$
where
$$
X^B_t := \int_{{\mathbb R}^d\times V} {\bf 1}_{B}(v) f(t+s,v)M_\alpha({\rm d} s,{\rm d} v),\ \ B\in \{A, A^c\}.
$$
Note that both components $X^A =\{X_t^{A}\}_{t\in{\mathbb R}^d}$ and
$X^{A^c} =\{X_t^{A^c}\}_{t\in{\mathbb R}^d}$ are non-zero because the
representation of $X$ has full support.
Now, since $X$ is indecomposable, there exist positive constants $c_1$ and $c_2$, such that
$X\stackrel{\rm d}{=} c_1 X^{A} \stackrel{\rm d}{=} c_2 X^{A^c}$. The minimality of the representation and
Theorem \ref{thm:stationary} imply that $c_1 {\bf 1}_A = c_2 {\bf 1}_{A^c}$ modulo $\nu$, which is impossible.
This contradiction shows that the set $V$ cannot be partitioned into two
disjoint sets of positive $\nu$-measure. That is, modulo $\nu$, the space $V$ reduces to a single
atom, and the mixed moving average is in fact a moving average.
\end{proof}
\begin{Example}[Doubly stationary processes]
Consider a stationary process $\xi=\{\xi_t\}_{t\in T}$ ($T = {\Bbb Z}^d$) supported on the probability space $(E,{\cal E},\mu)$ with $\xi_t\in L^\alpha(E,{\cal E},\mu)$.
Without loss of generality, we may suppose that $\xi_t (u)= \xi_0\circ \phi_t(u)$, where
$\{\phi_t\}_{t\in T}$ is a $\mu$-measure-preserving flow.
Let $M_\alpha$ be an S$\alpha$S random measure on $(E,{\cal E},\mu)$ with control measure $\mu$. The stationary S$\alpha$S
process $X = \{X_t\}_{t\in T}$
\begin{equation}\label{eq:doublyStochastic}
X_t := \int_{E}\xi_t(u)M_\alpha({\rm d} u), t\in T
\end{equation}
is said to be {\em doubly stationary} (see~Cambanis et al.~\cite{cambanis87ergodic}).
By Corollary~\ref{coro:indecomposable}, if $\xi$ is ergodic, then $X$ is indecomposable. \end{Example}
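For a concrete instance of the above, chosen only for illustration, take $T={\Bbb Z}$, $E=[0,1)$ equipped with the Lebesgue measure $\mu$, and the irrational rotation $\phi_n(u) = u+n\theta\ ({\rm mod}\ 1)$, $\theta\notin\mathbb Q$. This flow is $\mu$-preserving and ergodic, so that ${\cal F}_\phi$ is trivial, and for any $\xi_0\in L^\alpha(E,{\cal E},\mu)$ the doubly stationary process
\[
X_n = \int_{[0,1)}\xi_0(u+n\theta\ {\rm mod}\ 1)\,M_\alpha({\rm d} u)\,,\quad n\in{\Bbb Z}\,,
\]
is indecomposable by Corollary~\ref{coro:indecomposable}.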
A natural and interesting question raised by a referee is: what happens when $X$ is decomposable and hence $\xi$ is non-ergodic?
Can we have a direct integral decomposition of the process $X$ into indecomposable components? The following remark partly addresses this question.
\begin{remark} The doubly stationary S$\alpha$S processes are a special case of stationary S$\alpha$S processes generated by {\em positively recurrent flows (actions)}.
As shown in Samorodnitsky~\cite{samorodnitsky05null}, Remark 2.6, each such stationary S$\alpha$S process $X = \{X_t\}_{t\in T}$
can be expressed through a measure-preserving flow (action) on a {\em finite} measure space. Namely,
\begin{equation}\label{e:pos-recurrent}
\{X_t\}_{t\in T} \stackrel{\rm d}{=} {\Big\{} \int_{E} f_t(u) M_\alpha^{(\mu)}({\rm d} u) {\Big\}}_{t\in T}, \ \ \mbox{ with } \ f_t(u):= c_t(u) f_0\circ \phi_t(u),
\end{equation}
where $M_\alpha^{(\mu)}$ is an S$\alpha$S random measure with a {\em finite} control measure $\mu$ on $(E,{\cal E})$, $\phi=\{\phi_t\}_{t\in T}$ is a $\mu$-preserving
flow (action), and $\{c_t\}_{t\in T}$ is a cocycle with respect to $\phi$. In the case when the cocycle is trivial ($c_t\equiv 1$) and $\mu(E) =1$, the process $X$ is {\em doubly stationary}.
For simplicity, suppose that $T = {\Bbb Z}^d$ and without loss of generality let $(E,{\cal E},\mu)$ be a standard Lebesgue space with $\mu(E) =1$.
The ergodic decomposition theorem (see e.g.~Keller~\cite{keller98equilibrium}, Theorem 2.3.3) implies that there exist conditional probability distributions $\{\mu_u\}_{u\in E}$ with respect to the invariant $\sigma$-algebra ${\cal F}_\phi$ such that $\phi$ is measure-preserving and ergodic with respect to the measure $\mu_u$, for $\mu$-almost all $u\in E$.
Let $\nu$ be another $\phi$-invariant measure on $(E,{\cal E})$ dominating the conditional probabilities $\mu_u$ so that the Radon--Nikodym derivatives
$p(x,u) = ({\rm d} \mu_u/{\rm d} \nu)(x)$ are jointly measurable on $(E\times E, {\cal E}\otimes {\cal E}, \nu\times \mu)$. Consider
\[
g_t(x,u) = f_t(x) p(\phi_t(x),u)^{1/\alpha}.
\]
Recall that $\nu$ and $\mu_u$ are $\phi$-invariant, whence
$$
p(\phi_t(x),u) = \frac{{\rm d} \mu_{u}}{{\rm d} \nu}(\phi_t(x)) = \frac{{\rm d} \mu_u}{{\rm d} \nu} (x)= p(x,u),\ \ \mbox{ modulo $\nu\times \mu$.}
$$
Thus, $g_t(x,u) = f_t(x) ({\rm d} \mu_u /{\rm d} \nu)^{1/\alpha} (x)$, and for all $a_j\in{\mathbb R},\ t_j \in T,\ j=1,\cdots,n$, we have
\begin{eqnarray*}
\int_{E^2} {\Big|} \sum_{j=1}^n a_j g_{t_j}(x,u){\Big|}^\alpha \nu({\rm d} x)\mu({\rm d} u)\! &=& \!\int_{E^2} {\Big|} \sum_{j=1}^n a_j f_{t_j}(x){\Big|}^\alpha \frac{{\rm d}\mu_u}{{\rm d} \nu} (x)\nu({\rm d} x) \mu({\rm d} u)\\
& \! = \!& \int_{E^2} {\Big|} \sum_{j=1}^n a_j f_{t_j}(x){\Big|}^\alpha \mu_u ({\rm d} x) \mu({\rm d} u) \\&\! =\! & \int_{E} {\Big|} \sum_{j=1}^n a_j f_{t_j}(x){\Big|}^\alpha \mu({\rm d} x),
\end{eqnarray*}
where the last equality follows from the identity that $\int_{E} h(x) \mu({\rm d} x) = \int_{E^2} h(x) \mu_u({\rm d} x) \mu({\rm d} u)$, for all $h\in L^1(E,{\cal E},\mu)$.
We have thus shown that $\indt X$ defined by~\eqref{e:pos-recurrent} has another spectral representation
\begin{equation}\label{eq:ergoDecomp}
\indt X\stackrel{\rm d}{=}\bccbb{\int_{E \times E} g_t(x,u) M_\alpha^{(\nu\times \mu)}({\rm d} x,{\rm d} u)}_{t\in T}\,,
\end{equation}
where $M_\alpha^{(\nu\times \mu)}$ is an S$\alpha$S random measure on $E\times E$ with control measure $\nu\times \mu$.
It also follows that for $\mu$-almost all $u\in E$, the process defined by
\[
{X^{(u)}_t}\mathrel{\mathop:}={\int_{E}g_t(x,u)M_\alpha^{(\nu)}({\rm d} x)},\ t\in T,
\]
is indecomposable, where $M_\alpha^{(\nu)}$ has control measure $\nu$. Indeed, as above, one can show that
$$
\{X^{(u)}_t\}_{t\in T} \stackrel{\rm d}{=}
{\Big\{}\int_E f_t(x) M_\alpha^{(\mu_u)}({\rm d} x){\Big\}}_{t\in T},
$$
where $M_\alpha^{(\mu_u)}$ has control measure $\mu_u$. The ergodic decomposition theorem implies that the flow (action) $\phi$ is ergodic
with respect to $\mu_u$, which by Corollary \ref{coro:indecomposable} implies the indecomposability of $X^{(u)} = \{X^{(u)}_t\}_{t\in T}$. In this way,~\eqref{eq:ergoDecomp} parallels the mixed moving average representation for stationary S$\alpha$S processes generated by {\it dissipative flows} (see e.g.~Rosi\'nski~\cite{rosinski95structure}).
\end{remark}
\begin{remark} The above construction of the decomposition~\eqref{eq:ergoDecomp} assumes the existence of a $\phi$-invariant measure
$\nu$ dominating all conditional probabilities $\mu_u,\ u\in E$. If the measure $\mu$, restricted to the invariant $\sigma$-algebra ${\cal F}_\phi$, is discrete, i.e.\ ${\cal F}_\phi$
consists of countably many atoms under $\mu$, then one can take $\nu\equiv \mu$. In this case, the process $X$ is decomposed into a sum (possibly infinite) of its indecomposable
components:
$$
X_t = \sum_{k} \int_{E_k} f_t(x) M_\alpha^{(\mu)}({\rm d} x),\
$$
where the $E_k$'s are disjoint $\phi$-invariant measurable sets, such that $E = \cup_k E_k$ and $\phi\vert_{E_k}$ is ergodic, for each $k$. In this case, the $E_k$'s are the
atoms of ${\cal F}_\phi$.
In general, when $\mu\vert_{{\cal F}_\phi}$ is not discrete, the dominating measure $\nu$, if it exists, may not be $\sigma$-finite. Indeed, since the $\phi_t$'s are
ergodic for $\mu_u$, it follows that either $\mu_{u'} = \mu_{u''}$ or $\mu_{u'}$ and $\mu_{u''}$ are
singular, for $\mu$-almost all $u', u''\in E$. Thus, if ${\cal F}_\phi$ is ``too rich", this singularity feature implies that the measure $\nu$ may not be chosen to be
$\sigma$-finite.
\end{remark}
\section{Decomposability of Max-stable Processes}\label{sec:maxstable}
Max-stable processes are central objects in extreme value theory. They arise in the limit
of independent maxima and thus provide canonical models for the dependence of the extremes (see e.g.~\cite{dehaan06extreme} and the references therein). Without loss of generality we focus here on $\alpha$-Fr\'echet processes.
Recall that a random variable $Z$ has an $\alpha$-Fr\'echet distribution, if
${\mathbb P}(Z\le x) = \exp( -\sigma^\alpha x^{-\alpha})$ for all $x>0$ with some constant $\sigma>0$.
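This distribution is max-stable; for the reader's convenience, we record the one-line verification: if $Z_1,\dots,Z_n$ are independent copies of $Z$, then for all $x>0$,
\[
{\mathbb P}\bpp{n^{-1/\alpha}\bigvee_{i=1}^nZ_i\le x} = \exp\bpp{-n\sigma^\alpha (n^{1/\alpha}x)^{-\alpha}} = \exp\bpp{-\sigma^\alpha x^{-\alpha}} = {\mathbb P}(Z\le x)\,.
\]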
A process $Y=\{Y_t\}_{t\in T}$ is said to be $\alpha$-Fr\'echet if for all $n\in\mathbb N$, $a_i\ge 0,\ t_i\in T, i =1,\cdots,n,$ the
max-linear combinations $\max\{ a_i Y_{t_i},\ i=1,\cdots,n\} \equiv \bigvee_{i=1}^n a_iY_{t_i}$ are
$\alpha$-Fr\'echet. It is well known that a max-stable process is $\alpha$-Fr\'echet, if and only if it has $\alpha$-Fr\'echet marginals (de Haan~\cite{dehaan78characterization}). In the seminal paper~\cite{dehaan84spectral}, de Haan developed convenient
spectral representations of these processes. An extremal integral representation, which parallels the
integral representations of S$\alpha$S processes, was developed by Stoev and Taqqu~\cite{stoev06extremal}.
Let $Y=\indt Y$ be an $\alpha$-Fr\'echet ($\alpha>0$) process. As in the S$\alpha$S case, if $Y$ is separable
in probability, it has the extremal representation
\begin{equation}\label{rep:extremal}
\{Y_t\}_{t\in T} \stackrel{\rm d}{=} {\Big\{} {\int^{\!\!\!\!\!\!\!{\rm e}}}_S f_t(s) M^\vee_\alpha({\rm d} s) {\Big\}}_{t\in T},
\end{equation}
where $\{f_t\}_{t\in T} \subset L_+^\alpha(S,{\cal B}_S,\mu) = \{f\in L^\alpha(S,{\cal B}_S,\mu): f\geq 0\}$ are non-negative deterministic functions, and where
$M_\alpha^\vee$ is an {\it $\alpha$-Fr\'echet random sup-measure with control measure $\mu$} (see~\cite{stoev06extremal} for more details). The finite-dimensional distributions of $Y$ are characterized in terms of the spectral functions $f_t$'s as follows:
\begin{equation}\label{eq:cdf}
\mathbb P(Y_{t_i} \le y_i,\ i=1,\cdots,n) = \exp {\Big\{} - \int_{S} {\Big(} \max_{1\le i\le n} \frac{f_{t_i}(s) }{y_i}
{\Big)}^{\alpha} \mu({\rm d} s) {\Big\}},
\end{equation}
for all $y_i>0, t_i \in T,\ i=1,\cdots,n$.
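For example, with the two-point spectral space $S=\{1,2\}$, the counting measure $\mu$, and spectral functions $f_{t_1}=(1,0)$, $f_{t_2}=(0,1)$ (chosen here only for illustration), Relation~\eqref{eq:cdf} yields
\[
{\mathbb P}(Y_{t_1}\le y_1,\ Y_{t_2}\le y_2) = \exp\bpp{-y_1^{-\alpha}-y_2^{-\alpha}} = {\mathbb P}(Y_{t_1}\le y_1)\,{\mathbb P}(Y_{t_2}\le y_2)\,,
\]
that is, spectral functions with disjoint supports yield independent values, just as in the S$\alpha$S case.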
The above representations of max-stable processes mimic those of S$\alpha$S processes~\eqref{rep:integral} and~\eqref{eq:fdd}.
The cumulative distribution functions and max-linear combinations of spectral functions, in the max-stable setting, play the role of characteristic functions and linear combinations in the sum-stable setting, respectively. In fact, the deep connection between the two classes of processes has been
clarified via the notion of {\em association} by Kabluchko~\cite{kabluchko09spectral} and Wang and Stoev~\cite{wang10association}, independently through different perspectives.
In the sequel, assume $0<\alpha<2$. An S$\alpha$S process $X$ and an $\alpha$-Fr\'echet process $Y$
are said to be {\it associated} if they have a common spectral representation. That is, if for {\em some}
non-negative $\{f_t\}_{t\in T} \subset L_+^\alpha(S,{\cal B}_S,\mu)$, Relations
\eqref{rep:integral} and \eqref{rep:extremal} hold. The association is well defined in the following sense:
any other set of functions $\indt g\subset L^\alpha_+(S,{\cal B}_S,\mu)$ is a spectral representation of $X$, if and only if, it is
a spectral representation of $Y$ (see \cite{wang10association}, Theorem 4.1).
\begin{remark} It is well known that $\wt Y = \indt{Y^\alpha}$ is a 1-Fr\'echet process (see e.g.~\cite{stoev06extremal}, Proposition 2.9). Moreover,
if~\eqref{rep:extremal} holds, then $\wt Y$ has spectral functions $\indt{f^\alpha}\subset L_+^1(S,{\cal B}_S,\mu)$. Thus, the exponent
$\alpha>0$ plays no essential role in the dependence structure of $\alpha$-Fr\'echet processes.
Consequently, the notion of association (defined for $\alpha\in(0,2)$) can be used to study $\alpha$-Fr\'echet processes with
arbitrary positive $\alpha$'s.
\end{remark}
The association method can be readily applied to transfer decomposability results for S$\alpha$S processes to
the max-stable setting, where now sums are replaced by maxima. Namely, let $Y=\{Y_t\}_{t\in T}$ be an
$\alpha$-Fr\'echet process. If
\begin{equation}\label{e:max-decomp}
\{Y_t\}_{t\in T} \stackrel{\rm d}{=} {\Big\{} Y_t^{(1)} \vee \cdots \vee Y_t^{(n)}{\Big\}}_{t\in T},
\end{equation}
for some independent $\alpha$-Fr\'echet processes $Y^{(k)} = \{Y_t^{(k)}\}_{t\in T},\ k=1,\cdots,n$, then we say that
the $Y^{(k)}$'s are {\em components} of $Y$. By the max-stability of $Y$,
\eqref{e:max-decomp} trivially holds if the $Y^{(k)}$'s are independent copies of
$\{ n^{-1/\alpha} Y_t\}_{t\in T}$. The constant multiples of $Y$ are referred to as trivial components of $Y$ and
as in the S$\alpha$S case, we are interested in the structure of the non-trivial ones.
To illustrate the association method, we prove the max-stable counterpart of our main result Theorem~\ref{thm:1}. From the proof, we can see that the other results in the sum-stable setting have their natural max-stable
counterparts by association.
We briefly state some of these results at the end of this section.
\begin{theorem}\label{thm:2}
Suppose $\indt Y$ is an $\alpha$-Fr\'echet process with spectral representation \eqref{rep:extremal}, where
$F\equiv \indt f\subset L^\alpha_+(S,{\cal B}_S,\mu)$.
Let $\{Y_t^{(k)}\}_{t\in T},\ k=1,\cdots,n$, be independent $\alpha$-Fr\'echet processes. Then the
decomposition~\eqref{e:max-decomp} holds, if and only if there exist measurable functions
$r_k:S\to[0,1]$, $k=1,\cdots,n$, such that
\begin{equation}\label{eq:Yk}
\indt{Y\topp k}\stackrel{\rm d}{=}\bccbb{{\int^{\!\!\!\!\!\!\!{\rm e}}}_Sr_k(s)f_t(s)M_\alpha^\vee({\rm d} s)}_{t\in T},\ \ k=1,\cdots,n.
\end{equation}
In this case, $\summ k1n r_k(s)^\alpha = 1$, $\mu$-almost everywhere on $S$ and the $r_k$'s in \eqref{eq:Yk} can be chosen
to be $\rho(F)$-measurable, uniquely modulo $\mu$.
\end{theorem}
\begin{proof}
The `if' part follows from a straightforward calculation of the cumulative distribution functions~\eqref{eq:cdf}. To show the `only if' part, suppose~\eqref{e:max-decomp} holds and $Y\topp k$ has spectral functions $\indt{g\topp k}\subset L^\alpha_+(V_k,{\cal B}_{V_k},\nu_k)$, $k = 1,\dots,n$. Without loss of generality, assume the $\{V_k\}_{k=1,\dots,n}$ to be mutually disjoint and define $g_t(v) = \summ k1n g_t\topp k(v){\bf 1}_{V_k}(v)\in L^\alpha_+(V,{\cal B}_V,\nu)$ for an appropriately defined $(V,{\cal B}_V,\nu)$ (see the proof of Theorem~\ref{thm:1}).
Now, consider the S$\alpha$S process $X$ associated to $Y$. It has spectral functions $\indt f$ and $\indt g$. Consider the S$\alpha$S processes $X\topp k$ associated to $Y\topp k$ via spectral functions $\indt {g\topp k}$ for $k = 1,\dots, n$. By checking the characteristic functions, one can show that $\{X\topp k\}_{k=1,\dots,n}$ form a decomposition of $X$ as in~\eqref{eq:decomposition}. Then, by Theorem~\ref{thm:1}, each S$\alpha$S component ${X\topp k}$ has a spectral representation~\eqref{eq:Xk} with spectral functions $\indt{r_kf}$. But we introduced $X\topp k$ as the S$\alpha$S process associated to $Y\topp k$ via spectral representation $\indt{g\topp k}$. Hence, $X\topp k$ has spectral functions $\indt{g\topp k}$ and $\indt{r_kf}$, and so does $Y\topp k$ by the association (\cite{wang10association}, Theorem 4.1). Therefore,~\eqref{eq:Yk} holds and the rest of the desired results follow.
\end{proof}
Further parallel results can be established by the association method. Consider a stationary $\alpha$-Fr\'echet process $Y$.
If $Y\topp k, k=1,\dots,n$ are independent stationary $\alpha$-Fr\'echet processes such that~\eqref{e:max-decomp} holds, then we
say each $Y\topp k$ is a {\it stationary $\alpha$-Fr\'echet component} of $Y$. The process $Y$ is said to be {\it indecomposable}, if it has no non-trivial stationary component. The following results on
{\em (mixed) moving maxima} (see e.g.~\cite{stoev06extremal} and~\cite{kabluchko09spectral} for more details) follow from
Theorem~\ref{thm:2} and the association method, in parallel to Corollary~\ref{coro:MMA1} on
(mixed) moving averages in the sum-stable setting.
\begin{corollary}\label{coro:max}
The mixed moving maxima process
\[
{\{Y_t\}_{t\in\mathbb R^d}\stackrel{\rm d}{=}\bccbb{{\int^{\!\!\!\!\!\!\!{\rm e}}}_{\mathbb R^d\times V}f(t+s,v)M_\alpha^\vee({\rm d} s,{\rm d} v)}_{t\in\mathbb R^d}}
\]
is indecomposable, if and only if it has a moving maxima representation
\[
{\{Y_t\}_{t\in\mathbb R^d}\stackrel{\rm d}{=} \bccbb{{\int^{\!\!\!\!\!\!\!{\rm e}}}_{\mathbb R^d}f(t+s)M_\alpha^\vee({\rm d} s)}_{t\in\mathbb R^d}}\,.
\]
\end{corollary}
\section{Proof of Theorem~\ref{thm:1}}\label{sec:proofs}
We will first show that Theorem~\ref{thm:1} is true when $\indt f$ is minimal (Proposition~\ref{prop:minimal}), and then we complete the proof by relating a general spectral representation to a minimal one. This technique is standard in the literature on representations of S$\alpha$S processes (see e.g.~Rosi\'nski~\cite{rosinski95structure}, Remark 2.3).
We start with a useful lemma.
\begin{lemma}\label{lem:unique}
Let $\indt f\subset L^\alpha(S,{\cal B}_S,\mu)$ be a minimal representation of an S$\alpha$S process.
For any two bounded ${\cal B}_S$-measurable functions $r\topp1$ and $r\topp2$, we have
\[
\bccbb{\int_Sr\topp1f_t{\rm d} M_\alpha}_{t\in T} \stackrel{\rm d}{=}\bccbb{\int_Sr\topp2f_t{\rm d} M_\alpha}_{t\in T}\,,
\]
if and only if $|r\topp1| = |r\topp2|\ \mbox{ modulo }\mu$.
\end{lemma}
\begin{proof} The `if' part is trivial. We now prove the `only if' part.
Let $S\topp k:={\rm supp}(r\topp k),\ k=1,2$, and note that since $\indt f$ is minimal,
the representations $\indt {r\topp k f}$, restricted to $S\topp k$, $k=1,2$, respectively, are also minimal.
Since the latter two representations correspond to the same process, by Theorem 2.2 in \cite{rosinski95structure},
there exist a bi-measurable, one-to-one and onto point mapping $\Psi :S\topp 1\to S\topp 2$ and
a function $h:S\topp1\to\mathbb R\setminus\{0\}$, such that, for all $t\in T$,
\begin{equation}\label{e:f_12}
r\topp1(s) f_t(s) = r\topp2\circ\Psi(s) f_t\circ \Psi(s)h(s)\,,\mbox{ for $\mu$-almost all } s\in S\topp1,
\end{equation}
and
\begin{equation}\label{eq:h}
\ddfrac\mu\Psi\mu = |h|^\alpha\,, \mu\mbox{-almost everywhere.}
\end{equation}
It then follows that, for almost all $s\in S\topp 1$,
\begin{equation}\label{e:ratio}
\frac{f_{t_1}(s)}{f_{t_2}(s)} = \frac{r\topp 1(s) f_{t_1}(s)}{r\topp1(s) f_{t_2}(s)} = \frac{f_{t_1}\circ\Psi(s)}{f_{t_2}\circ\Psi(s)}.
\end{equation}
Define $ R_\lambda(t_1,t_2) = \{s\, :\, f_{t_1}(s)/f_{t_2}(s)\le \lambda\}$
and note that by \eqref{e:ratio},
for all $A\equiv R_\lambda(t_1,t_2)$,
\begin{equation}\label{eq:Delta}
\mu\spp{\Psi(A\cap S\topp 1)\Delta(A\cap S\topp2)} = 0\,.
\end{equation}
In fact, one can show that Relation \eqref{eq:Delta} is also valid for all $A \in \rho(F)\equiv\sigma(R_\lambda(t_1,t_2):\lambda\in\mathbb R, t_1,t_2\in T)$.
Then, by minimality, \eqref{eq:Delta} holds for all $A\in{\cal B}_S$. In particular, taking $A$ equal to $S\topp1$ and
$S\topp2$, respectively, it follows that $\mu(S\topp1\Delta S\topp2) = 0$. Therefore, writing $\wt S \mathrel{\mathop:}= S\topp1\cap S\topp2$, we have
\begin{equation}\label{eq:wtS}
\mu(\Psi(A\cap \widetilde S) \Delta (A \cap \widetilde S)) = 0,\ \ \mbox{ for all }A\in {\cal B}_S\,.
\end{equation}
This implies that $\Psi(s) = s$, for $\mu$-almost all $s\in \wt S$. To see this, let ${\cal B}_{\wt S} = {\cal B}_S\cap \wt S$ denote the $\sigma$-algebra ${\cal B}_S$ restricted to $\wt S$. Observe that for all
$A\in{\cal B}_{\wt S}$, we have ${\bf 1}_A = {\bf 1}_A\circ\Psi$, for $\mu$-almost all $s\in\wt S$, and trivially $\sigma({\bf 1}_A:A\in{\cal B}_{\wt S}) = {\cal B}_{\wt S}$. Thus, by the second
part of Proposition 5.1 in~\cite{rosinski06minimal}, it follows that $\Psi(s) = s$ modulo $\mu$ on $\wt S$. This and \eqref{eq:h} imply
that $h(s) \in \{\pm1\}$, almost everywhere. Plugging $\Psi$ and $h$ into~\eqref{e:f_12} yields the desired result.
\end{proof}
\begin{proposition}\label{prop:minimal} Theorem~\ref{thm:1} is true when $\indt f$ is minimal.
\end{proposition}
\begin{proof} {\em We first prove the `if' part}. The result follows readily
by using characteristic functions. Indeed, suppose that the $X^{(k)} = \{X_t^{(k)}\}_{t\in T},$
$k=1,\dots,n$ are independent and have representations as in \eqref{eq:Xk}. Then, for all
$a_j\in{\mathbb R},\ t_j\in T,\ j=1,\cdots, m,$ we have
\begin{multline}\label{e:lemma-1}
{\mathbb E}\exp\bpp{i\summ j1ma_jX_{t_j}} = \exp\bpp{-\int_S\babs{\summ j1ma_jf_{t_j}}^\alpha{\rm d}\mu}\\
= \prod_{k=1}^n\exp\bpp{-\int_S\babs{\summ j1ma_jr_kf_{t_j}}^\alpha{\rm d}\mu} = \prod_{k=1}^n{\mathbb E}\exp\bpp{i\summ j1ma_jX_{t_j}\topp k}\,,
\end{multline}
where the second equality follows from the fact that $\summ k1n|r_k(s)|^\alpha = 1,$ for $\mu$-almost all
$s\in S.$ Relation \eqref{e:lemma-1} implies the decomposition \eqref{eq:decomposition}.
{\em We now prove the `only if' part}. Suppose that \eqref{eq:decomposition} holds and
let $\indt{f\topp k}\subset L^\alpha(V_k,{\cal B}_{V_k},\nu_k),\ k=1,\dots,n$ be representations for the independent components
$\indt {X\topp k}, k=1,\dots,n$, respectively, and without loss of generality, assume that $\{V_k\}_{k=1,\dots,n}$ are mutually disjoint.
Introduce the measure space $(V,{\cal B}_V,\nu)$, where $V:= \bigcup_{k=1}^nV_k$,
${\cal B}_V:= \{\bigcup_{k=1}^n A_k,\ A_k\in {\cal B}_{V_k},\ k=1,\dots,n\}$ and $\nu(A):= \summ k1n\nu_k(A\cap V_k)$ for
all $A\in {\cal B}_V$.
By \eqref{eq:decomposition}, it follows that
$\{X_t\}_{t\in T} \stackrel{\rm d}{=}\{\int_V g_t {\rm d} \wb M_\alpha\}_{t\in T}$, with $g_t(u)\mathrel{\mathop:}=\summ k1n f_t\topp k(u){\bf 1}_{V_k}(u)$ and $\wb M_\alpha$ an S$\alpha$S random
measure on $(V,{\cal B}_V)$ with control measure $\nu$.
Thus, $\{f_t\}_{t\in T} \subset L^\alpha(S,{\cal B}_S,\mu)$ and $\{g_t\}_{t\in T} \subset L^\alpha(V,{\cal B}_V,\nu)$ are two representations of the
same process $X$, and by assumption the former is {\em minimal}. Therefore, by Remark 2.5 in~\cite{rosinski95structure},
there exist modulo $\nu$ unique functions $\Phi:V\to S$ and $h:V\to\mathbb R\setminus{\{0\}}$, such that, for all $t\in T$,
\begin{equation}\label{eq:ftk}
g_t(u) = h(u)f_t\circ\Phi(u)\,,\mbox{ almost all }u\in V\,,
\end{equation}
where moreover $\mu = \nu_h\circ\Phi^{-1}$ with ${\rm d}\nu_h = |h|^\alpha{\rm d}\nu$.
Recall that $V$ is the union of mutually disjoint sets $\{V_k\}_{k=1,\dots,n}$. For each $k=1,\dots,n$, let $\Phi_k:V_k \to S_k := \Phi(V_k)$ be the restriction
of $\Phi$ to $V_k$, and define the measure $\mu_k(\cdot) \mathrel{\mathop:}=\nu_{h,k}\circ\Phi_k^{-1}(\,\cdot\,\cap S_k)$ on $(S,{\cal B}_S)$ with
${\rm d}\nu_{h,k}\mathrel{\mathop:}=|h|^\alpha{\rm d}\nu_k$. Note that $\mu_k$ has support $S_k$, and the Radon--Nikodym derivative ${\rm d}\mu_k/{\rm d}\mu$ exists.
We claim that~\eqref{eq:Xk} holds with $r_k\mathrel{\mathop:}=({\rm d}\mu_k/{\rm d}\mu)^{1/\alpha}$. To see this, observe that
for all $ m\in\mathbb N, a_1, \ldots, a_m \in\mathbb R, t_1, \ldots, t_m \in T$,
\[
\int_S\babs{\summ j1m a_jr_kf_{t_j}}^\alpha{\rm d}\mu = \int_{S_k}\babs{\summ j1m a_jf_{t_j}}^\alpha{\rm d}\mu_k = \int_{V_k}\babs{\summ j1ma_jhf_{t_j}\circ\Phi_k}^\alpha{\rm d}\nu_k\,,
\]
which, combined with~\eqref{eq:ftk}, yields \eqref{eq:Xk} because $g_t\vert_{V_k} = f_t^{(k)}$.
Note also that $\summ k1n\mu_k = \mu$ and thus $\summ k1nr_k^\alpha=1$.
This completes the proof of part {\it (i)} of Theorem \ref{thm:1} in the case when $\{f_t\}_{t\in T}$ is minimal.
To prove part {\it (ii)}, note that the $r_k$'s above are in fact non-negative and ${\cal B}_S$-measurable. Note also that by minimality,
the $r_k$'s have versions $\wt r_k$'s that are $\rho(F)$-measurable, i.e.\ $r_k = \wt r_k$ modulo $\mu$.
Their uniqueness follows from Lemma \ref{lem:unique}.
\end{proof}
\begin{proof}[{\it Proof of Theorem \ref{thm:1}}]
\noindent {\it (i)} The `if' part follows by using characteristic functions as in the proof of Proposition~\ref{prop:minimal} above.
\noindent{Now, we prove the `only if' part.} Let $\{\wt f_t\}_{t\in T} \subset L^\alpha (\wt S,{\cal B}_{\wt S}, \wt \mu)$ be a minimal representation of $X$. As in the proof of Proposition~\ref{prop:minimal}, by Remark 2.5 in~\cite{rosinski95structure},
there exist modulo $\mu$ unique functions $\Phi:S\to \wt S$ and $h:S\to\mathbb R\setminus{\{0\}}$, such that,
for all $t\in T$,
\begin{equation}\label{eq:ftk-thm-1}
f_t(s) = h(s)\wt f_t\circ\Phi(s)\,,\mbox{ almost all }s\in S,
\end{equation}
where $\wt\mu = \mu_h\circ\Phi^{-1}$ with ${\rm d} \mu_h = |h|^\alpha{\rm d}\mu$.
Now, by Proposition~\ref{prop:minimal}, if the decomposition \eqref{eq:decomposition} holds, then there exist unique non-negative functions $\wt r_k,\ k=1,\cdots,n$, such that
\begin{equation}\label{e:thm-1.1}
\{ X^{(k)}_t\}_{t\in T} \stackrel{d}{=} {\Big\{} \int_{\wt S} \wt r_k \wt f_t {\rm d} \wt M_\alpha {\Big\}}_{t\in T},\ \ k=1,\cdots,n,
\end{equation}
and $\sum_{k=1}^n \wt r_k^\alpha = 1$ modulo $\wt \mu$. Here $\wt M_\alpha$ is an S$\alpha$S measure on
$(\wt S,{\cal B}_{\wt S})$ with control measure $\wt \mu$. Let $r_k(s) := \wt r_k \circ \Phi(s)$ and note that
by using \eqref{eq:ftk-thm-1} and a change of variables, for all $a_j \in {\mathbb R}, t_j\in T,\ j=1,\cdots,m$, we obtain
\begin{equation}\label{e:thm-1.2}
\int_{S} {\Big|}\sum_{j=1}^m a_j r_k(s) f_{t_j}(s) {\Big|} \mu({\rm d} s) = \int_{\wt S} {\Big|}\sum_{j=1}^m a_j
\wt r_k(s) \wt f_{t_j}(s) {\Big|} \wt \mu({\rm d} s).
\end{equation}
This, in view of Relation \eqref{e:thm-1.1}, implies \eqref{eq:Xk}. Further, the fact that
$\sum_{k=1}^n\wt r_k^\alpha =1$ implies $\sum_{k=1}^nr_k^\alpha =1$, modulo $\mu$, because
the mapping $\Phi$ is non-singular, i.e.\ $\wt \mu\circ \Phi^{-1} \sim \mu$.
This completes the proof of part {\it (i)}.
\medskip We now focus on proving part {\it (ii)}. Suppose that \eqref{eq:Xk} holds for two choices of $r_k$, namely
$r_k'$ and $r_k''$. Let also $r_k'$ and $r_k''$ be non-negative and measurable with respect to $\rho(F)$. We claim that
\begin{equation}\label{eq:rhoF}
\rho(F)\sim \Phi^{-1}(\rho(\wt F))
\end{equation}
and defer the proof to the end. Then, since minimality implies that ${\cal B}_{\wt S}\sim\rho(\wt F)$,
$r_k'$ and $r_k''$ are measurable with respect
to $\rho(F) \sim \Phi^{-1}({\cal B}_{\wt S})$. Now, Doob--Dynkin's lemma (see e.g.~Rao~\cite{rao05conditional}, p.~30) implies that
\begin{equation}\label{e:thm-1.3}
r_k'(s) = \wt r_k'\circ \Phi(s)\ \ \mbox{ and } \ \ r_k''(s) = \wt r_k'' \circ \Phi(s),\ \ \mbox{ for $\mu$ almost all $s$},
\end{equation}
where $\wt r_k'$ and $\wt r_k''$ are two ${\cal B}_{\wt S}$-measurable functions.
By using the last relation and a change of variables, we obtain that \eqref{e:thm-1.2} holds with $(r_k,\wt r_k)$ replaced by $(r_k',\wt r_k')$ and $(r_k'',\wt r_k'')$, respectively. Thus both $\indt{\wt r_k'\wt f}$ and
$\indt{\wt r_k''\wt f}$ are representations of the $k$-th component of $X$. Since $\{\wt f_t\}_{t\in T}$ is a minimal representation of $X$,
Lemma~\ref{lem:unique} implies that $\wt r_k' = \wt r_k''$ modulo $\wt \mu$. This, by \eqref{e:thm-1.3} and the
non-singularity of $\Phi$ yields $r_k' = r_k''$ modulo $\mu$.
It remains to prove~\eqref{eq:rhoF}.
Relation \eqref{eq:ftk-thm-1} and the fact that $h(s)\not=0$ imply that
for all $\lambda$ and $t_1,t_2\in T$,
$
\{ f_{t_1}/f_{t_2} \le \lambda \} = \Phi^{-1} (\{ \wt f_{t_1}/\wt f_{t_2} \le \lambda \} )\mbox{ modulo }\mu.
$
Thus the classes of sets ${\cal C}:= \sccbb{ \{f_{t_1}/f_{t_2} \le \lambda \},\ t_1,t_2\in T,\ \lambda\in {\mathbb R}}$
and $\wt {\cal C}:=\sccbb{ \Phi^{-1} (\{ \wt f_{t_1}/\wt f_{t_2} \le \lambda \}),\ t_1,t_2\in T,\ \lambda\in {\mathbb R}} $ are equivalent. That is, for all $A \in {\cal C}$, there exists $\wt A\in \wt{\cal C}$, with $\mu(A\Delta \wt A) = 0$ and vice versa.
Define
$$
\wt {\cal G} = \bccbb{\Phi^{-1}(A): A\in \rho(\wt F)\mbox{ such that\,} \mu(\Phi^{-1}(A) \Delta B) = 0 \mbox{ for some } B\in \sigma({\cal C})}.
$$
Notice that $\wt {\cal G}$ is a $\sigma$-algebra and since $\wt {\cal C} \subset \wt {\cal G} \subset
\Phi^{-1}(\rho(\wt F))$, we obtain that $\sigma(\wt {\cal C}) = \Phi^{-1}(\rho(\wt F)) \equiv
\wt {\cal G}$. This, in view of the definition of $\wt {\cal G}$, shows that for all $\wt A \in \sigma(\wt {\cal C})$, there exists
$A\in \sigma({\cal C})$ with $\mu(A\Delta \wt A) = 0$. In a similar way one can show that each element of
$\sigma({\cal C})$ is equivalent to an element in $\sigma(\wt {\cal C})$, which completes the proof of the
desired equivalence of the $\sigma$-algebras.
\end{proof}
\noindent{\bf Acknowledgment}
Yizao Wang and Stilian Stoev's research were partially supported by the NSF grant DMS--0806094 at the University of Michigan, Ann Arbor. The authors are grateful to Zakhar Kabluchko for pointing out two mistakes in a previous version and for many helpful discussions. They also thank two anonymous referees for helpful comments and suggestions.
An order received by a commercial warehouse operator may comprise several order-lines, each specifying a required item and a quantity. \textit{Order-picking} is the process of retrieving these items from the warehouse and delivering them to a target location within the warehouse for further handling \citep{petersen1999evaluation}. In its most basic form, a human worker receives an order and travels around the warehouse with a cart, picking the required items manually. The general objective is to minimise the time required for order completion. This process typically represents a significant proportion of a warehouse's operational costs, with figures in the region of 55\% commonly quoted \citep{drury1988towards}. As such, order-picking has attracted significant automation efforts in pursuit of reducing operational costs and thereby improving commercial competitiveness.
Automation efforts have generally focused on the \textit{pick-to-picker} paradigm, in which large-scale autonomous systems move items to pickers for dispatch to customers. There are numerous examples of these types of systems, including the Dematic Multishuttle\footnote{\label{dematicwebsitefootnote}\url{https://www.dematic.com/en-au/products/storage}}, Autostore\footnote{\label{dematicautostorefootnote}\url{https://www.dematic.com/en-au/products/storage/autostore}}, and KIVA \citep{Wurman_DAndrea_Mountz_2008}. Typically, these approaches require significant capital investment and can be costly to adjust to varying warehouse capacity and consumer demand. For these reasons, adoption is generally limited to larger operations.
An alternative and more common approach is the \textit{picker-to-pick} paradigm, in which pickers move to item locations within the warehouse and directly retrieve them for dispatch. This paradigm accounts for the majority of warehouse operations, where \citet{de2007design} estimated that 80\% of warehouses in Western Europe followed this paradigm. Despite its prevalence, automation is comparatively less mature in this domain.
In this work, we consider the augmentation of the picker-to-pick paradigm with robotic vehicles such as automated guided vehicles (AGVs) and autonomous mobile robots (AMRs). We believe their increasing affordability and capability provides an opportunity to develop an incremental pathway towards higher productivity and full warehouse automation. We refer to AGVs and AMRs interchangeably, as our solution is not strictly dependent on the type of vehicle being used. The general idea of AGV-assisted order-picking has begun to receive attention in the academic literature \citep{azadeh2020dynamic, loffler2022picker, vzulj2022order}. Typically, this involves the decoupling of a traditional picker's role into order transportation and item picking, where transportation is handled by the AGV and picking is handled by a human worker or robotic picker. This approach has multiple benefits: (1) pickers do not need to travel back to the depot to complete an order, thereby minimising unproductive walking time, (2) the integration of AGV technologies into existing warehouses requires minimal modification to existing infrastructure, and (3) the system can support scaling with varying demand by changing the number of AGVs and pickers.
Simulation experts at Dematic\footnote{Dematic is a multinational company specialising in materials handling systems and logistics automation.} have practical experience in developing various order-picking strategies in the \textit{picker-to-pick} paradigm through heuristic methods.
They find that applying and optimising these strategies for innately variable warehouse configurations and optimisation targets requires significant engineering effort. Ideally, the derivation of optimal methods for worker control should be an automatic process.
Reinforcement learning (RL) offers this capability, having achieved notable successes in a number of complex real-world domains \citep{bellemare2020autonomous, degrave2022magnetic}. Order-picking by its very nature is a multi-agent problem, thus we leverage multi-agent RL (MARL) which extends RL to multi-agent systems \citep{papoudakis2019dealing}. An important benefit of MARL is its flexibility to operate with diverse warehouse and worker specifications as well as optimisation objectives (e.g. order throughput, battery usage, travel distance, traffic and congestion, pallet stability, labour cost), where existing heuristic approaches would require significant engineering effort to fit different specifications.
This paper details the current status of the R\&D effort initiated by Dematic and the University of Edinburgh towards a general-purpose and scalable MARL solution for the order-picking problem. We developed a high-performance simulator which is capable of representing real-world customer operations and is optimised for efficient RL training and testing. We further developed a MARL approach, Hierarchical Shared Network Actor-Critic (HSNAC), which improves sample efficiency over Shared Network Actor-Critic \citep{christianos2020shared} through enabling decomposition of the large action space via a multi-layer hierarchy. HSNAC outperforms a well-established industry heuristic, achieving a $23.2$\% improvement in order-lines per hour while being generally applicable to different warehouse specifications. We outline a path towards the deployment of our solution, identifying a number of limitations and promising methodologies to further improve the performance and realism of our solution, and consideration for its integration into a comprehensive machine learning pipeline.
\section{Related Literature}
\label{sec:lit}
\paragraph{AGV-assisted order-picking}
\citet{azadeh2020dynamic} model the order-picking problem as a queuing network and explore the impact of different zoning strategies (no zoning and progressive zoning). They then further extend their method by representing it as a Markov decision process and consider dynamic switching based on the order-profile using dynamic programming. \citet{loffler2022picker} consider an AGV-assisted picker and provide an exact polynomial time routing algorithm for single-block parallel-aisle warehouses. \citet{vzulj2022order} consider a warehouse partitioned into disjoint picking zones, where AGVs meet pickers at handover zones to transport the orders back to the depot. They propose a heuristic for effective order-batching to reduce tardiness. We additionally acknowledge pick-to-conveyor solutions\footnote{\url{www.dematic.com/en-gb/products/case-and-piece-picking}} which we consider as a special case of AGV-assisted order-picking. Our work differs from existing work in this area as it does not place any restrictions on how workers may cooperate. To the best of our knowledge, our work is the first application of MARL to order-picking in the picker-to-pick context.
\paragraph{Multi-Agent Pickup and Delivery Problem}
MAPD problems \citep{mapd} consider a set of agents that are sequentially assigned tasks. Each task requires the agent to visit a pickup location and a delivery location. The locations of agents and tasks are typically presented as nodes in an undirected connected graph. The agents move between locations via the graph's edges, where the edge capacity is restricted so that only one agent may move along an edge. The objective is to minimise the time duration required for task completion, which may be further broken down into task assignment and the planning of collision free paths. This formulation has been applied to several problems in warehouse logistics \citep{mapd,ma2019lifelong,XuIROS22}. The decoupling of workers within our approach introduces complex interdependencies between the paths of different worker types which significantly complicates the problem. \citet{greshler2021comapf} have introduced the Cooperative Multi-Agent Path Finding problem which is applicable within this domain but requires explicit specification of the workers required to cooperate and does not allow for optimisation over extended temporal periods. We consider MARL as an alternative due to its generality and flexibility.
\paragraph{Multi-agent reinforcement learning}
MARL algorithms are designed to train coordinated agent policies for multiple autonomous agents, and have received much attention in recent years with the introduction of deep learning techniques into MARL \citep{papoudakis2019dealing}.
MARL has previously seen application to various warehousing problems, including Shared Experience Actor-Critic to pick-to-picker systems \citep{christianos2020shared, papoudakis2021benchmarking}, and a deep Q-network variant for sortation control \citep{s20123401}. For the specific complexities of the order-picking problem, we consider methods at the intersection of MARL and hierarchical RL (HRL) to enable action space decomposition and temporal abstraction. This combination has been studied by \citet{xiao2020macromarl} who derive MARL algorithms for macro-actions under partial observability, and \citet{ahilan2019feudal} who propose Feudal Multi-Agent Hierarchies (FMH) which extends Feudal RL \citep{dayan1992feudal} to the cooperative MARL domain. We apply a 3-layer adaptation of FMH to a partially observable stochastic game with individual agent reward functions.
\section{Background}
\label{sec:prob_stat}
We consider a scenario in which a warehouse operator (customer) seeks to improve the efficiency of their warehouse, $\mathcal{W}$, through automation of order-picking. The task requires optimal utilisation of the customer's resources to maximally improve warehouse operations, measured by key performance indicators such as pick rate (order-lines/hour), order lead time (seconds), and distance travelled (metres).
\subsection{Warehouse Definition}
\label{sec:warehouse}
We consider a customer warehouse defined by the 3-tuple $\mathcal{W} = \{L, Z, W\}$.
\begin{itemize}
\item $L$ refers to the set of spatially distributed locations within the warehouse and can be further broken down into $L = L_{i} \cup L_{t}$, where $L_i$ and $L_t$ refer to the set of locations with stored items and other locations (such as idle locations and order delivery stations), respectively.
\item $Z$ defines the order distribution which is dependent on the warehouse's supplier and customer behaviour and is assumed to be known. An order $z = \{(p_{0}, q_{0}), \dots, (p_{n}, q_{n})\}$ is sampled from $Z$. Each pair $(p, q)$ represents an order-line, where $p$ represents the item and $q$ specifies the required quantity.
\item $W$ is the set of workers and is comprised of AGVs, $V$, and pickers, $P$, where the workers in each set are homogeneous. AGVs are assigned orders sampled from $Z$ with $z^v$ denoting the current order of AGV $v \in V$. Successful picking of an order-line requires coordination between AGVs and pickers.
\end{itemize}
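To make the order model concrete, the toy sketch below samples an order $z = \{(p_{0}, q_{0}), \dots, (p_{n}, q_{n})\}$ from a uniform item pool. The names (\texttt{OrderLine}, \texttt{sample\_order}) and the uniform distribution are illustrative assumptions; real order distributions $Z$ are customer-specific and typically heavily skewed.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderLine:
    item: str      # p: the required item
    quantity: int  # q: the required quantity

def sample_order(item_pool, rng, max_lines=4):
    """Sample a toy order z = {(p_0, q_0), ..., (p_n, q_n)}.
    Uniform sampling is an assumption; real profiles from Z are skewed."""
    n = rng.randint(1, max_lines)
    items = rng.sample(item_pool, n)  # distinct items per order
    return [OrderLine(item=p, quantity=rng.randint(1, 3)) for p in items]

rng = random.Random(0)
item_pool = [f"item_{i}" for i in range(20)]
order = sample_order(item_pool, rng)
```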
For a given warehouse, we seek to derive a joint policy $\pi$ which defines the behaviour of all workers in $W$ such that we maximise the average pick rate $K$, formally denoted with $\pi = \argmax_{\pi} K(W,\pi)$.
A key desideratum of our solution is to automatically learn optimal policies for any given warehouse configuration and order profile.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{media/SystemsArchitecture.pdf}
\caption{Systems architecture for our solution.}
\label{fig:systemsarchitecture}
\end{figure}
\subsection{Heuristic Solutions}
\label{sec:heuristics}
Two heuristics which Dematic use within this setting are \textit{Follow Me} (FM), and \textit{Pick, Don't Move} (PDM).
\paragraph{Follow Me}
A number of AGVs are assigned to each picker and follow them through the warehouse. The orders of these AGVs are concatenated and a travelling salesman problem (TSP) solution is generated to determine the sequence in which the items will be picked. This improves efficiency over the traditional picker-with-cart approach, as each AGV can leave the picker and deliver its completed order to the packing area.
FM minimises idle time for pickers, as it ensures that they are always travelling or picking, but it also causes pickers to travel more than necessary.
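The FM route construction can be sketched as follows. Since the exact TSP solver is not specified above, we substitute a simple nearest-neighbour approximation over Manhattan distances; the grid coordinates standing in for item locations are hypothetical.

```python
def nearest_neighbour_route(start, stops):
    """Greedy TSP approximation: repeatedly visit the closest remaining
    item location. A stand-in for whatever TSP solver production uses."""
    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    route, current, remaining = [], start, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda loc: manhattan(current, loc))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

# Concatenate the item locations of all AGVs assigned to one picker,
# then sequence the combined pick list from the picker's position.
agv_orders = [[(0, 4), (2, 1)], [(5, 0)]]
stops = [loc for order in agv_orders for loc in order]
route = nearest_neighbour_route(start=(0, 0), stops=stops)
```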
\paragraph{Pick, Don't Move}
Pickers are allocated to zones in the warehouse (such as one picker per aisle) for which they are responsible, while AGVs may travel throughout the entire warehouse. Each AGV travels through the list of locations it must visit to complete its order, following a TSP solution. Once an AGV enters a picker's zone, the picker is responsible for meeting the AGV at the item location and picking the relevant items onto the AGV. Pickers prioritise service of AGVs by the relative proximity of the AGV and picker to their target locations.
PDM minimises travel distance for pickers, allowing them to spend more time picking and less time moving. However, it may result in under-utilisation of pickers if current orders contain few items within their operating zones.
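The proximity-based prioritisation in PDM can be sketched as below; the dictionary-based AGV records and the Manhattan metric are illustrative assumptions rather than the production rule.

```python
def choose_agv(picker_pos, waiting_agvs):
    """Serve the waiting AGV whose target item location is closest to
    the picker (Manhattan distance): one plausible reading of the
    proximity-based prioritisation used by PDM."""
    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    return min(waiting_agvs,
               key=lambda agv: manhattan(picker_pos, agv["target"]))

# Hypothetical AGVs waiting for service within a picker's zone.
waiting = [
    {"id": "agv_1", "target": (3, 3)},
    {"id": "agv_2", "target": (1, 0)},
]
served = choose_agv(picker_pos=(0, 0), waiting_agvs=waiting)
```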
\subsection{Key Challenges}
\label{sec:challenges}
The efficacy of optimised heuristic methods is context and customer-dependent, requiring consideration of many factors such as the warehouse item clustering strategy, order profile, order prioritisation mechanism, changes in demand and supply, changes in labour and workforce conditions, and regulatory factors to name but a few. A heuristic strategy needs to be selected and repeatedly tuned with this ever-changing context.
The required iterative process necessitates a regular engineering effort in consultation with a customer to review, analyse and tune performance of heuristics and parameters.
This engineering burden motivates the automation of worker policies for the order-picking problem under the consideration of the following challenges:
\paragraph{Scalability} We desire a general-purpose solution which can handle variations in multiple dimensions, including the number of total item locations $|L|$, the order distribution $Z$ and the number of workers $|V| + |P|$. Controlling all workers with a single decision-making entity quickly becomes infeasible due to the joint action space growing exponentially with the number of workers. Hence, we consider MARL approaches in which pickers and AGVs are modelled as individual agents.
\paragraph{Development environment}
MARL algorithms require significant numbers of exploratory interactions within the environment \citep{papoudakis2021benchmarking}, involving prolonged periods of sub-optimal behaviours within the target warehouse configuration. This is unacceptable for warehouse operators and, therefore, mandates the utilisation of a high-performance simulation platform as an essential part of the development pipeline.
\paragraph{Productionisation}
Any derived solution does not exist in isolation, but as part of a pipeline and a larger warehouse system. As such, we propose a minimal viable system architecture in \Cref{fig:systemsarchitecture}. The diagram identifies how we envisage our systems interacting. MARL algorithms will be trained on a training cluster. Algorithm inference runs on an AI controller, which must be deployed on-premise, as warehouse environments cannot tolerate operational downtime. Generated commands are transmitted to workers through an on-premise wireless communications network. For a robotic vehicle, this command is given via the vehicle management system and executed by the vehicle, and for a human worker, this command is provided on a mobile device or headset. Feedback is provided to the systems by the picker upon successful pick via voice or scanned barcode.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{media/warehouse.png}
\caption{3-D warehouse simulator. Snapshot showing human and AGV workers moving along aisles.}
\label{fig:warehouse}
\end{figure}
\section{Warehouse Simulator}
\label{sec:sim}
To facilitate efficient MARL training, experts at Dematic developed a high-performance warehouse simulator which is capable of representing real-world customer warehouses and includes implementations of the baselines described in \Cref{sec:heuristics}. \Cref{fig:warehouse} shows an example snapshot of a simulated warehouse used in our experiments.
\subsection{Implementation}
The simulator was developed in Python $3.9$ using the Panda3D game engine to enable visualisation and implements the OpenAI gym interface \citep{openai2016gym}. We introduce several assumptions to reduce the simulator's computational requirements. These are listed below, and will be relaxed in future work.
\begin{itemize}
\item \textbf{Warehouse scaling} We model the warehouse at a 1:3 scale and all workers travel at a speed of $1.66$ metres per second. By scaling the distances, we enable faster transit and consequently faster training times.
\item \textbf{Worker commitment} When a worker chooses to travel to a location, it commits to its execution until arrival.
\item \textbf{Collisions} We do not model worker collisions, assuming that they are able to move past each other without any delay or penalty. We give the vehicle management and execution system the responsibility of collision avoidance.
\item \textbf{Automatic loading} Picking of items from a picker onto an AGV is done automatically and instantaneously once both workers are at the respective location. The impact of the quantity in an order-line is considered to be negligible and as such we set it to $1$.
\item \textbf{Fixed workers} The number of workers within the system is fixed and unchanging. Workers do not experience failures, nor does their productivity decrease over time.
\item \textbf{FIFO order assignment} Orders are assigned to AGVs following a first-in-first-out queue system. Optimisation of order assignment is left for future work.
\end{itemize}
\subsection{Simulator Optimisation}
Major effort was directed towards improving the execution speed of the simulator. Through optimisations, we achieved a speed-up of $13021$x (from $0.003643$ to $47.51$ steps per second, measured over 300 samples) for a warehouse consisting of 1276 item locations, 24 agents, and 4 environments running in parallel. The most impactful modifications are detailed below.
\paragraph{Time progression in game engine} We generate frames at 0.2 FPS (frames per second), such that 5 seconds in game time passes on each game engine step (and equivalently, each algorithm action step). The relatively low FPS rate impacts our worker action frequency, resulting in some minor performance degradation, but was chosen to make RL training within a reasonable time-frame tractable.
\paragraph{Precalculation and caching shortest paths} As we shall describe within \Cref{sec:algo}, agents select their target locations as actions. Following such a movement decision, we calculate the shortest path between the agent's current and target location.
To facilitate this computation, a warehouse is represented as a graph and shortest paths computed using the Dijkstra's algorithm~\citep{dijkstra1959note} are cached in a table.
This table uses significant memory for realistically sized warehouses (on the order of 500MB for 1200 locations), but path determination becomes an $O(1)$ hash-table lookup.
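The caching scheme can be sketched as follows: run Dijkstra's algorithm once from every node and store the resulting paths in a nested dictionary, after which any path query is a pure lookup. The toy adjacency-list graph below is, of course, a stand-in for the warehouse graph.

```python
import heapq

def dijkstra_paths(graph, source):
    """Single-source Dijkstra; returns the shortest path from `source`
    to every reachable node as a list of nodes."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    paths = {}
    for node in dist:  # reconstruct each path by walking predecessors
        path, cur = [], node
        while cur != source:
            path.append(cur)
            cur = prev[cur]
        paths[node] = [source] + path[::-1]
    return paths

# Precompute once at start-up; a path query is then an O(1) lookup.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
cache = {src: dijkstra_paths(graph, src) for src in graph}
```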
\paragraph{Boosting execution time for distance calculations}
Calculating distances from $(x, y)$ coordinates to graph nodes is a frequent computation which scales with $O(|W||L|)$. To improve this operation, we use the following elements: (1) a graph coordinate hash-table for exact matches, (2) a compiled vectorised function which translates the operation to machine code, and (3) a KD-tree \citep{bentley1975kdtree} of the graph coordinates. We see a significant improvement in execution time at a negligible cost in memory usage.
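The two-tier lookup (exact hash-table hit, then nearest-node search) can be sketched as below; for self-containment we replace the KD-tree fallback with a brute-force minimum, which is functionally equivalent but slower.

```python
def make_locator(node_coords):
    """Two-tier coordinate -> node lookup: an exact hash-table hit when
    the query sits on a node, otherwise a nearest-node search (brute
    force here; the simulator uses a KD-tree for this fallback)."""
    exact = {xy: node for node, xy in node_coords.items()}

    def locate(x, y):
        hit = exact.get((x, y))
        if hit is not None:
            return hit
        # Squared Euclidean distance; argmin over all graph nodes.
        return min(node_coords,
                   key=lambda n: (node_coords[n][0] - x) ** 2
                               + (node_coords[n][1] - y) ** 2)
    return locate

# Hypothetical graph-node coordinates.
coords = {"n0": (0.0, 0.0), "n1": (3.0, 0.0), "n2": (0.0, 4.0)}
locate = make_locator(coords)
```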
\section{Multi-Agent Reinforcement Learning}
\label{sec:marl}
We use MARL to train coordinated agent policies in simulation. This section details our model and algorithm design.
\subsection{Problem Modelling}
\subsubsection{Partially observable stochastic game}
We model the multi-agent interaction as a partially observable stochastic game (POSG) for $N$ agents \citep{hansen2004dynamic}. A POSG is defined by the tuple $(\mathcal{I}, \mathcal{S}, \{\mathcal{O}^i\}_{i\in \mathcal{I}}, \{\mathcal{A}^i\}_{i\in \mathcal{I}}, \mathcal{P}, \Omega, \{\mathcal{R}^i\}_{i\in \mathcal{I}})$, with agents $i\in\mathcal{I} = \{1,\ldots,N\}$, state space $\mathcal{S}$, and joint action space $\mathcal{A} = \mathcal{A}^1\times\ldots\times \mathcal{A}^N$.
Each agent $i$ only perceives its local observations $o^i \in \mathcal{O}^i$ given by the observation function $\Omega: \mathcal{S} \times \mathcal{A} \mapsto \Delta(\mathcal{O})$ with joint observation space $\mathcal{O} = \mathcal{O}^1\times\ldots\times\mathcal{O}^N$. The transition function $\mathcal{P}: \mathcal{S} \times \mathcal{A} \mapsto \Delta(\mathcal{S})$ returns a distribution over successor states given a state and a joint action. Agent $i$ receives a reward $r^i_t$ at each step $t$ defined by its individual reward function $\mathcal{R}^i: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \mapsto \mathbb{R}$. The goal is to learn a joint policy $\pi = (\pi_1,\ldots,\pi_N)$ to maximise the discounted return $G^i=\sum^T_{t=1}{\gamma^{t-1}r_t^i}$ of each agent $i$ with respect to the policies of other agents; formally, $\forall i\in\mathcal{I}: \pi_i \in \argmax_{\pi_i^\prime} \mathbb{E}\left[G^i \mid \pi_i^\prime, \pi_{-i}\right]$ where $\pi_{-i} = \pi \setminus \{\pi_i\}$, and $\gamma$ and $T$ denoting the discount factor and episode length, respectively.
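The discounted return $G^i=\sum^T_{t=1}{\gamma^{t-1}r_t^i}$ above can be computed with the standard backward recursion $G_t = r_t + \gamma G_{t+1}$, sketched below on a hypothetical reward sequence.

```python
def discounted_return(rewards, gamma):
    """G = sum_{t=1}^T gamma^(t-1) * r_t, accumulated backwards so each
    reward is discounted exactly by its delay from the first step."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Example: rewards (1, 0, 1) with gamma = 0.5 give 1 + 0 + 0.25.
g = discounted_return([1.0, 0.0, 1.0], gamma=0.5)
```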
\subsubsection{Action space}
The completion of orders requires pickers to be able to visit all item locations $l \in L_{i}$ and for AGVs to be able to visit all locations $l \in L$.
We enable this by defining the action space of pickers and AGVs as $\mathcal{A}_{p} = L_{i}$ and $\mathcal{A}_{v} = L$, respectively.
This results in a large action space and an action duration that is dependent on the distance between the agents' current and target locations. These issues are addressed in \Cref{sec:algo}.
\subsubsection{Observation space}
The availability of communication links between workers and the central servers affords a high degree of flexibility in modelling the information observed by agents.
We aim to provide both classes of agents with sufficient information to make optimal decisions whilst pruning out unnecessary variables. Picker and AGV observations are defined in \Cref{eq:pick_obs} and \Cref{eq:agv_obs} with $\oplus$ denoting the concatenation operator. Pickers and AGVs observe the current and target locations of all agents, denoted $l_{c}^i \in L$ and $l_t^i \in L$ for agent $i$. Additionally, AGVs only observe their own order while pickers observe all orders.
\begin{equation}
\label{eq:pick_obs}
O_p = \{(l_c^i, l_t^i) \mid i \in \mathcal{I}\} \oplus \{z^v \mid v \in V\}
\end{equation}
\begin{equation}
\label{eq:agv_obs}
O_v = \{(l_c^i, l_t^i) \mid i \in \mathcal{I}\} \oplus z^v
\end{equation}
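The observation vectors $O_p$ and $O_v$ above can be sketched as flat lists; encoding locations and order items as integers is an illustrative assumption rather than the paper's actual feature encoding:

```python
def picker_obs(agent_locs, agent_targets, orders):
    """Picker observation: (current, target) of every agent plus all AGV orders."""
    obs = []
    for loc, tgt in zip(agent_locs, agent_targets):
        obs += [loc, tgt]
    for order in orders:          # pickers observe every AGV's order
        obs += list(order)
    return obs

def agv_obs(agent_locs, agent_targets, own_order):
    """AGV observation: (current, target) of every agent plus only its own order."""
    obs = []
    for loc, tgt in zip(agent_locs, agent_targets):
        obs += [loc, tgt]
    return obs + list(own_order)
```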
\subsubsection{Reward function}
Agents are rewarded for behaviour which is aligned with the objective stated in \Cref{sec:warehouse}. The reward function for agent $i$ as a picker and AGV, respectively, are given in \Cref{eq:pick_rew,eq:agv_rew}.
\begin{equation}
\label{eq:pick_rew}
r^i_t =
\begin{cases}
0.1, & \text{if $i$ completed pick at step } t\\
-0.05,& \text{otherwise}
\end{cases}
\end{equation}
\begin{equation}
\label{eq:agv_rew}
r^i_t =
\begin{cases}
0.1, & \text{if $i$ received picked item at step } t\\
0.1, & \text{if $i$ completed order at step } t\\
-0.05, & \text{otherwise }
\end{cases}
\end{equation}
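The reward cases in \Cref{eq:pick_rew,eq:agv_rew} can be read directly as branching functions; this sketch assumes at most one case applies per step, matching the case structure above:

```python
def picker_reward(completed_pick):
    """Reward for a picker at one step (Eq. pick_rew)."""
    return 0.1 if completed_pick else -0.05

def agv_reward(received_item, completed_order):
    """Reward for an AGV at one step (Eq. agv_rew), cases checked in order."""
    if received_item:
        return 0.1
    if completed_order:
        return 0.1
    return -0.05
```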
\begin{figure}
\centering
\includegraphics[width=\linewidth]{media/hmarl_diagram.pdf}
\caption{Diagram of 3-layer Feudal Hierarchy.}
\label{fig:hier_net}
\end{figure}
\subsection{Hierarchical MARL for Order-Picking}
\label{sec:algo}
In the current formulation, each agent's action space is very large with $|\mathcal{A}^i| \approx |L|$ and the actions may take different durations to terminate. To address these challenges and simplify the training of agents, we utilise an adaptation of FMH \citep{ahilan2019feudal}, which involves the introduction of a manager that produces goals for subordinates to satisfy, shown in \Cref{fig:hier_net}.
We apply this concept to partition the locations within the warehouse into a set of disjoint sectors $Y$, formally $L = \bigcup_{y\in Y}y$. The manager observes the current and target locations of all agents as well as the current orders assigned to each AGV, and determines a sector $y^i$ for each agent $i\in\mathcal{I}$ to move to. Given the assigned sector, agent $i$'s policy $\pi_i$ selects its new target location $l_t^i \in y^i$ within its assigned sector. This decomposition greatly reduces the effective action space of each agent's policy which is now given by $\max_{y\in Y} |y| \ll |L|$. Once the target location of each agent is determined, a lower-level controller will then calculate a path from its current location and execute the necessary sequence of primitive actions.
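The disjoint partition $L = \bigcup_{y\in Y}y$ and the resulting reduction of the effective action space can be sketched as follows. The equal-slice scheme and the location count of $1276$ are illustrative assumptions; the paper's actual partition geometry may differ:

```python
def partition_locations(locations, num_sectors):
    """Split a location list into num_sectors disjoint, roughly equal sectors."""
    size = -(-len(locations) // num_sectors)   # ceiling division
    return [locations[i:i + size] for i in range(0, len(locations), size)]

sectors = partition_locations(list(range(1276)), 22)
# Each agent's policy now chooses within one sector, so the effective
# action space is max sector size rather than |L|.
max_sector = max(len(y) for y in sectors)
```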
We further reduce the size of the effective action space of agents through action-masking, based on the observation that each AGV $v \in V$ only needs to collect the items within its current order $z^v$. Therefore, AGVs should only move to locations within the warehouse which contain these items.
This is achieved by masking out actions which refer to locations without items in the current order of an AGV $v$. Given that, in expectation, $|z^v| \ll |L|$, this significantly reduces the number of actions for each AGV. For pickers, a reasonable action-masking approach is less clear-cut. Empirically, we found that considering only picker actions which refer to current and target locations of all AGVs was effective.
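The AGV mask can be sketched as follows, assuming a hypothetical mapping `item_locations` from items to their shelf locations (not defined in the paper):

```python
def agv_action_mask(sector, order_items, item_locations):
    """Boolean mask over a sector: True for locations holding an item
    still present in the AGV's current order."""
    wanted = {item_locations[item] for item in order_items}
    return [loc in wanted for loc in sector]
```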
The policy and value network of the manager are given by a multi-headed neural network comprising three fully-connected layers of 128 neurons each with ReLU activations, and a policy and value head for each agent.
Each agent is parameterised by a policy and value network represented by two fully-connected layers of 64 neurons with ReLU activations. This hierarchical model is trained using the Shared Network Actor-Critic (SNAC) algorithm \citep{christianos2020shared}, in which we share networks across pickers and AGVs, respectively, to improve the efficiency of the training process. The discount factor, $\gamma$, is set to $0.99$ for all agents.
We found that we can further improve performance by applying generalised advantage estimation (GAE) \citep{Schulman2016gae} and standardising the advantage estimate in each training batch to have zero mean and unit variance.
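A minimal sketch of GAE \citep{Schulman2016gae} and batch advantage standardisation, assuming a single rollout with bootstrap value `last_value`; the $\lambda$ value and the interface are illustrative assumptions, not the paper's exact settings:

```python
def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Generalised advantage estimation over one rollout, computed backwards."""
    adv, gae = [0.0] * len(rewards), 0.0
    next_v = last_value
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * next_v - values[t]   # TD error
        gae = delta + gamma * lam * gae
        adv[t] = gae
        next_v = values[t]
    return adv

def standardise(adv, eps=1e-8):
    """Zero-mean, unit-variance advantages within a training batch."""
    mean = sum(adv) / len(adv)
    var = sum((a - mean) ** 2 for a in adv) / len(adv)
    return [(a - mean) / (var ** 0.5 + eps) for a in adv]
```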
\begin{figure}
\centering
\includegraphics[width=\linewidth]{media/orderlines_plot.pdf}
\caption{Average order-lines per hour of HSNAC, SNAC, and heuristics. Shaded area shows 95\% confidence interval.}
\label{fig:main_results}
\end{figure}
\section{Empirical Evaluation}
\label{sec:exp}
\begin{table*}
\includegraphics[width=\linewidth]{media/MetricComparisonTable.pdf}
\caption{Performance comparisons between HSNAC (ours), SNAC, PDM and FM. Results show mean $\pm$ 95\% confidence interval for distance travelled, idle time, and pick rate for both AGVs and pickers.}
\label{tab:algorithm_comparison}
\end{table*}
We test our approach in a warehouse comprising $1276$ item locations, with $16$ AGVs and $8$ pickers, shown in \Cref{fig:warehouse}. The workers are presented with an episodic task consisting of $80$ orders with an average length of $5$ order-lines which are randomly distributed within $L_{i}$. We implement HSNAC in PyTorch \citep{paszke2019pytorch} and train it for $8000$ episodes across $8$ random seeds. The partitioning, $Y$, is achieved through division of the warehouse into $22$ equal-sized sectors. We compare HSNAC against the PDM and FM heuristics (\Cref{sec:heuristics}) and SNAC~\citep{christianos2020shared}, which uses the same neural network architecture as the HSNAC worker agents.
We also tried shared experience actor critic~\citep{christianos2020shared} and independent actor-critic but omit them from our figures as they underperformed SNAC by $12.4\%$ and $89.0\%$ respectively.
We use pick rate in order-lines per hour as our primary performance measure, indicating the average frequency of picks in the episode.
Experimentation was performed on three Ubuntu servers, two with 128-core CPUs and 256GB RAM, and one with a 32-core CPU and 128GB RAM,
and experimental tracking was performed using WandB \citep{wandb}. Full evaluation details with additional results and episode video recordings are available in an extended report\footnote{\label{foot:wandb}\url{https://sites.google.com/view/scalablemarlwarehouse/}}.
\Cref{fig:main_results} shows the pick rate in order-lines per hour of HSNAC and all baselines across training. We observe that HSNAC achieves noticeably higher order-lines per hour compared to PDM, but is still under-performing compared to FM. Comparing HSNAC to SNAC demonstrates the advantage of the hierarchical architecture, with HSNAC converging significantly faster.
In \Cref{tab:algorithm_comparison}, we show selected aggregated characteristics for $50$ episodes for PDM and FM, and across all trained HSNAC and SNAC seeds.
We find that HSNAC's improvement in pick rate over PDM is largely due to a reduction in idle time of 27.55\% for AGVs and 44.68\% for pickers; however, it also incurs an increased travel distance of 20.96\% for AGVs and 70.23\% for pickers over the course of an episode.
Upon closer inspection of agent behaviours, we observed situations in which two or more pickers raced towards AGVs, which appears to demonstrate the emergence of competitive behaviours. Through comparison of the pickers' order-line completion locations (shown in the extended report), we observed correlations between pickers which indicate that a variety of zone allocations may be emerging. In contrast to PDM, allocated zones are not served by a single picker. HSNAC appears to allow a greater degree of flexibility, where two or more agents may serve the same zone of the warehouse, but may also move to other areas.
\section{Path to Deployment}
\label{sec:discuss}
Our results support our hypothesis that MARL agents can derive effective solutions to the order-picking problem, plausibly providing a mechanism to avoid costly heuristic optimisation in the future. Among the methodologies we employed, the hierarchical decomposition of the action space had the largest impact, which we attribute to its ability to solve the order-picking problem at a lower spatial resolution than SNAC. Through HSNAC, we were able to outperform PDM, a well-established industry heuristic. However, the manner in which HSNAC achieves this is not without fault. The increase in travel distance for pickers shown in \Cref{tab:algorithm_comparison} likely represents a notable increase in maintenance costs for robotic pickers and raises possible concerns for human picker welfare; it is likely attributable to the competitive behaviours we observed. We believe that these observations, combined with the current supremacy of FM, demonstrate that whilst we are already in a region of commercial viability, since we greatly exceed heuristic flexibility, there is more work to be done to achieve performance parity with the best-performing heuristic.
The simulator (\Cref{sec:sim}) forms the foundation of our approach and provides a necessary trade-off between accuracy and execution speed in order to make training in a reasonable time feasible. In real-world settings, many of our assumptions may be challenged. For example, (1) loading may take a variable amount of time depending on the required quantity or item type, (2) the number of workers may vary with shift patterns and (3) agent policy decisions may need to happen more quickly than the decision interval allows. Even with all of our assumptions relaxed, transferring policies trained in simulation to the real world brings additional challenges; this is the well-known \textit{sim-to-real} problem \citep{zhao2020simtoreal}. In addition, a warehouse does not exist in a vacuum and needs to adapt to pressures placed on it by its supply chain. This is an issue of \textit{concept drift} \citep{lu2018learning}, and being able to handle its implications is essential.
Moving our solution towards production requires comprehensive analysis and mitigation of the limitations we have identified. We plan to investigate methods to reduce picker competition, where we hypothesise that additional factors such as an energy-usage penalty and agent-to-agent communication may inhibit the emergence of these behaviours. We have begun limited investigations into curriculum learning \citep{JMLR:v21:20-212} as a method to allow for tightening of assumptions such as our decision interval, and plan to continue this work and expand its scope to consider more variables. We have also begun to consider challenges associated with deployment. For example, if the warehouse's order profile or other parameters drift, we envisage an opportunity for re-training within simulation; with suitable demand prediction models, we may even be able to undertake this proactively. A real-world proof-of-concept is within our project roadmap, and addressing hardware interoperability and sim-to-real challenges is critical for commercial viability.
\section{Conclusion}
\label{sec:conc}
Within this work, we evaluated the feasibility of utilising MARL as a method to avoid time and resource-intensive manual heuristic tuning for an AGV-assisted warehouse. Through employing a range of innovations, we were able to demonstrate that this is possible. Our algorithm, HSNAC, achieved a notable performance gain over an established industry heuristic in a realistic simulated warehouse. We have identified a range of limitations related to our current assumptions and considerations for deployment.
\section{Introduction}\label{sec:intro}
Chance-constrained programming is an important paradigm in optimization under uncertainty. It acknowledges that it may not be possible to satisfy all constraints of a system due to the inherent uncertainty in the model parameters; instead, it aims to satisfy the system constraints with high probability. The generic form of a chance-constrained program (CCP) is given by
\begin{equation}\label{eq:ccp}
\min_{\vx} \left\{\mathbf{c}^\top \vx:~ \vx \in \mathcal{X}, ~ \mathbb{P}^*[\bm{\xi} \not\in \mathcal{S}(\vx)] \leq \epsilon \right\}. \tag{CCP}
\end{equation}
Here, $\mathcal{X} \subset \mathbb{R}^L$ is a domain for the vector of decision variables $\vx$, $\bm{\xi} \in \mathbb{R}^K$ is a random vector distributed according to $\mathbb{P}^*$, and $\epsilon \in (0,1)$ is the risk tolerance for the random variable $\bm{\xi}$ falling outside a decision-dependent safety set given by $\mathcal{S}(\vx) \subseteq \mathbb{R}^K$.
A main challenge in formulating and solving~\eqref{eq:ccp}~problems is that the distribution $\mathbb{P}^*$ is typically unknown or else is not efficiently computable in high dimensions. Often, in practice, this issue is addressed by approximating $\mathbb{P}^*$ with an \emph{empirical distribution} $\mathbb{P}_N$ obtained by sampling $N$ independent and identically distributed (i.i.d.) samples $\{ \bm{\xi}_i \}_{i \in [N]}$ from $\mathbb{P}^*$, where $[N]:=\{1,\ldots,N\}$. This leads to the natural and popular \emph{Sample Average Approximation} (SAA) formulation obtained by replacing $\mathbb{P}^*$ with $\mathbb{P}_N$ in~\eqref{eq:ccp} (see \cref{sec:nominal}).
This procedure has been shown to be statistically consistent \citep{Cal1,campi,sample}, but is also known to be quite sensitive to the samples drawn unless $N$ is very large, in which case the resulting formulation is computationally intractable.
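For a fixed decision $\vx$, the SAA version of the chance constraint reduces to an empirical frequency check over the sample. A minimal sketch, where `is_safe` is a hypothetical indicator of $\bm{\xi} \in \mathcal{S}(\vx)$ supplied by the modeller:

```python
def saa_feasible(samples, is_safe, eps):
    """SAA check for fixed x: empirical probability of falling outside
    the safety set is at most eps."""
    violations = sum(1 for xi in samples if not is_safe(xi))
    return violations / len(samples) <= eps
```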
Consequently, finding a solution to~\eqref{eq:ccp} that is robust to errors in approximating $\mathbb{P}^*$ with $\mathbb{P}_N$ is of interest. To address this issue, a recent growing stream of research studies the following \emph{distributionally robust chance-constrained program}~\eqref{eq:dr-ccp}:
\begin{equation}\label{eq:dr-ccp}
\min_{\vx} \left\{\mathbf{c}^\top \vx:~ \vx \in \mathcal{X}, ~ \sup_{\mathbb{P} \in \mathcal{F}} \mathbb{P}[\bm{\xi} \not\in \mathcal{S}(\vx)]\leq \epsilon \right\}, \tag{DR-CCP}
\end{equation}
where $\mathcal{F}$ is an ambiguity set of distributions on $\mathbb{R}^K$.
In~\eqref{eq:dr-ccp}, the set $\mathcal{F}$ plays a critical role. Usually, $\mathcal{F}:=\mathcal{F}_N(\theta)$ with a parameter $\theta>0$ is selected such that the empirical distribution $\mathbb{P}_N$ is contained in it, and $\theta$ governs the size of the ambiguity set (and consequently the degree of conservatism of~\eqref{eq:dr-ccp}). See \citet{Rahimian2019DistributionallyRO} and references therein for a survey on distributionally robust optimization, in particular, the properties of~\eqref{eq:dr-ccp} and existing solution methods.
One of the most commonly studied sets $\mathcal{F}_N(\theta)$ is the so-called \emph{Wasserstein ambiguity set}, which
is defined as the Wasserstein-distance ball of radius $\theta$ around the empirical distribution $\mathbb{P}_N$. The Wasserstein ambiguity set gained popularity due to its desirable statistical properties and advantages over other ambiguity sets based on moments, $\phi$-divergences, unimodality, or support; see e.g., \cite{EOO03,JG16,calafiore:jota06,Hanasusanto2015,ZhangJiangShen2018,Li2019,Xie18}. Furthermore, the Wasserstein ambiguity set is attractive because the dual representation of the worst-case probability $\mathbb{P}[\bm{\xi} \not\in \mathcal{S}(\vx)]$ over the ambiguity set $\mathbb{P} \in \mathcal{F}_N(\theta)$ \cite{gao2016distributionally,BlanchetMurthy2019,MohajerinEsfahaniKuhn2018} can be used to derive deterministic non-convex reformulations of \eqref{eq:dr-ccp}~\cite{HotaEtAl2019,xie2018distributionally,chen2018data}. For example, for certain linear forms of safety sets $\mathcal{S}(\cdot)$, \citet{chen2018data} and \citet{xie2018distributionally} show that \eqref{eq:dr-ccp} can be represented as a mixed-integer program (MIP) with big-$M$ coefficients, which enables, in theory, modeling and solving these problems with black-box solvers. In practice, however, the resulting MIPs cannot be solved in reasonable time with commercial MIP solvers, even for moderate sample sizes (e.g., $N=100$).
In the literature, the scalability challenge of~\eqref{eq:dr-ccp} with Wasserstein ambiguity is addressed by exploiting further problem structures. %
\citet{xie2018distributionally} considers the case where all the decision variables in~\eqref{eq:ccp} are binary, for which he derives a big-$M$-free formulation that leads to notable computational benefits. \citet{WangLiMehrotra2020} impose the assumption that the support of $\bm{\xi}$ is finite and all the decision variables are binary when formulating distributionally robust assignment problems with the so-called left-hand side (LHS) uncertainty, in which the uncertain parameters $\bm{\xi}$ affect the coefficients of the decision variables in the safety set $\mathcal{S}(\vx)$. They further assume that chance constraints are given \emph{individually}, i.e., each chance constraint takes a single inequality.
\citet{ZhangDong2020} also use individual chance constraints to model
an uncertain renewable load control problem with binary variables and the right-hand side (RHS) uncertainty structure, where the uncertain parameters do not interact with the coefficients of the variables in the safety set, and propose some enhancements to the MIP reformulation. \citet{JiLejeune2019} give MIP formulations of \eqref{eq:dr-ccp} under %
other structural assumptions on the support of $\bm{\xi}$. In contrast to these works, we do not assume or exploit binary problem structure, place no assumptions on the support of $\bm{\xi}$, and consider \emph{joint} chance constraints under LHS uncertainty, i.e., a chance constraint may comprise a system of inequalities.
In our previous work~\cite{rhs2020}, we observe that the SAA formulation can be cast as
\cref{eq:dr-ccp} with radius $\theta=0$, since $\mathcal{F}_N(0)=\{\mathbb{P}_N\}$ under Wasserstein ambiguity, and that for $\theta>0$, the SAA formulation is a relaxation of \cref{eq:dr-ccp}. We then
exploit this connection between \cref{eq:dr-ccp} and SAA to address the (easier) case of RHS uncertainty. Our proposed approach in~\cite{rhs2020} provides stronger formulations and valid inequalities for \cref{eq:dr-ccp}, which are instrumental in solving both instances that are an order of magnitude larger than those in the literature, from 100s of scenarios to 1000s of scenarios, and instances that are difficult even for small number of scenarios (100s) due to the number of original decision variables and their problem structure. In this paper, we further explore this connection between \cref{eq:dr-ccp} and SAA to address the more difficult case of LHS uncertainty. As a result of the more complex structure of LHS uncertainty, our developments here differ from the RHS uncertainty case in \cite{rhs2020}, and we highlight these wherever relevant.
\subsection*{Contributions and outline}
In~\cref{sec:problem} we formally describe our problem and the safety sets of interest. Our main contributions are summarized as follows.
\begin{itemize}
\item We delineate the relationship between SAA and~\eqref{eq:dr-ccp} under Wasserstein ambiguity with LHS uncertainty in \cref{sec:nominal}. We note that while recognizing that SAA is a relaxation of DR-CCP is not novel in itself, the main innovation of our work is to build a \emph{precise link} between the CCP formulation and the DR-CCP formulation in terms of the binary variables used in the CCP formulation. Using this connection, we obtain %
a stronger formulation for~\eqref{eq:dr-ccp} by adding linearly many valid inequalities to the standard MIP formulation from the literature~\cite{chen2018data,xie2018distributionally}, given in~\eqref{eq:joint}, \emph{without} increasing the number of variables. %
Our formulation~\cref{eq:joint-knapsack} is an exact reformulation of~\eqref{eq:dr-ccp} %
for the closed safety sets %
and gives a tighter relaxation %
for the open safety sets; %
see \cref{thm:knapsack-valid-joint}.
We then exploit this link to further strengthen the DR-CCP formulation (\cref{sec:quantile}).
Our previous paper \citep{rhs2020} also provides a link between CCP and DR-CCP formulations in the \emph{right-hand side uncertainty} case, yet the left-hand side uncertainty case that we consider in the present paper is considerably more complicated. The main difference can be seen through \cref{lemma:joint-valid} that states that the formulation of \eqref{eq:dr-ccp} in equation~\eqref{eq:joint} from previous literature is not exact in the case of LHS uncertainty. %
However, in the case of RHS uncertainty the associated MIP formulations are exact; see \cref{rem:joint-valid-RHS-compare}. \cref{sec:nominal} aims to clarify this distinction. %
\citet[Theorem 3]{xie2018distributionally} provides a better relaxation of~\eqref{eq:dr-ccp} than the ordinary CCP by using the CVaR interpretation of the DR-CCP formulation and its value-at-risk relaxation. The author does not use this relaxation to derive an exact reformulation of DR-CCP---doing so would require a new set of binary variables and associated big-M constraints. In contrast, it bears repeating that our aim is \emph{not} only to provide a stronger relaxation, but also to link the CCP and DR-CCP formulations without introducing new variables, and exploit this link to solve the \emph{exact} DR-CCP formulation effectively.
In \cref{sec:other_relaxations}, we present an argument that the relaxation in \citep[Theorem 3]{xie2018distributionally} cannot be linked with the DR-CCP formulation in the same manner as what we do in this paper. Similarly, \citet{ChenXie2020} also use the observation that SAA is a relaxation of DR-CCP, but they do not reveal the connection regarding the binary variables.
\item In \cref{sec:quantile}, we exploit this relationship between SAA and~\eqref{eq:dr-ccp} to suggest a further quantile-based strengthening of the formulation. %
In particular, the connection with SAA exposes a \emph{mixing substructure} in the MIP reformulation of \eqref{eq:dr-ccp}. For the RHS uncertainty case, exploiting this mixing substructure to reduce the big-$M$ coefficients entails simply the sorting of the nominal right-hand side parameters. Due to the interaction of the random parameters with the decision variables, this procedure cannot be immediately applied to the case of LHS uncertainty (see \cref{rem:mixing-RHS-comparison} for a discussion of the differences in this procedure against our previous paper~\citep{rhs2020}).
Hence, for \eqref{eq:dr-ccp} with LHS uncertainty we suggest a more involved quantile-based strengthening framework in~\cref{sec:quantile}, which lets us derive a further improved formulation~\eqref{eq:joint-k-reduced}.
As opposed to the previous literature on quantile-based strengthening, our results exploit a unique structure stemming from the MIP formulation of \eqref{eq:dr-ccp}.
Through these developments, we are able to perform significant coefficient strengthening and enhance our MIP formulations with an additional exponential class of valid inequalities; see \cref{thm:improved-formulation} and inequalities~\eqref{eq:mixing}.
We note that the mixing procedure has been applied in the context of basic CCP before; see e.g., \citep{luedtke2010integer,luedtke2014branch-and-cut}. In this paper, we do utilize it again, but in the new context of strengthening~\eqref{eq:dr-ccp}.
\item In certain cases, the most powerful version of quantile strengthening procedure to generate the coefficients of the mixing set may require us to solve a number of subproblems, which are sometimes large optimization problems themselves. In the most general case, we would need $N^2$ calls to an LP solver to compute these coefficients. In \cref{sec:cover-pack}, we consider the special case of covering and packing constraints and show that these special structures enable us to develop a more efficient coefficient strengthening procedure which does not require us to use an LP solver. This expedites the process of deriving formulation~\eqref{eq:joint-k-reduced} for the case of covering and packing constraints. We note that these classes of problems are sufficiently prevalent \citep{bicriteria,binary-packing,ZhangJiangShen2018} in this literature as well.
\item Finally, in \cref{sec:experiments}, we assess the computational impact of our theoretical developments on stochastic portfolio optimization and resource planning problems. We have conducted extensive numerical experiments to demonstrate the effectiveness of our proposed approach. Our numerical results show that our formulation significantly improves upon the existing formulations.
For the portfolio optimization problem, we test instances with $N\in\{100,300,500,1000\}$ samples. We observe that our framework reduces the overall solution time remarkably compared to the existing formulations, regardless of sample size. In particular, when $N\in\{500,1000\}$, we see that while none of the instances can be solved to optimality within one hour with the existing formulations, our proposed approach attain an optimal solution for most of the instances within a couple of minutes. The instances with $N\in\{100,300\}$ are easier, but we still observe that our formulation performs much better.
The resource planning problem instances are much harder than the portfolio optimization instances, but we still obtain similar results showing the efficacy of our proposed approach against the existing formulations. For the resource planning problem, even with sample sizes as small as $N=100$, 75 out of the 100 instances are not solved to optimality, and the basic formulation terminates with 45--77\% optimality gap after an hour. Furthermore, none of the instances with $N=300$ can be solved to optimality with the basic formulation, and the algorithm terminates with over 90\% optimality gap after an hour of computing with the existing formulations. For this harder problem class, with our formulation 98 (instead of 25) of the 200 total instances are solved to optimality and the largest optimality gap is 14\%.
\end{itemize}
\section{Other Relaxations of \eqref{eq:dr-ccp}}\label{sec:other_relaxations}
\citet[Theorem 3]{xie2018distributionally} provides a better relaxation of~\eqref{eq:dr-ccp} than the ordinary \eqref{eq:ccp}. %
However, we present an argument that the relaxation in \citep[Theorem 3]{xie2018distributionally} cannot be linked with \eqref{eq:dr-ccp} in the same manner as what we do in this paper. To illustrate this, let us consider an individual chance constraint with closed safety set
\[ \mathcal{S}^c(x) = \left\{ \xi : (b-A^\top x)^\top \xi + d - a^\top x \geq 0 \right\}. \]
The binary variables $\{z^i\}_{i \in [N]}$ appearing in \eqref{eq:joint}, the formulation for \eqref{eq:dr-ccp}, essentially model the disjunction
\[\underbrace{(b-A^\top x)^\top \xi^i + d - a^\top x \geq 0}_{z^i=0} \text{ or } \underbrace{(b-A^\top x)^\top \xi^i + d - a^\top x < 0}_{z^i=1}.\]
The intuitive reason why we can link \eqref{eq:saa-reformulation} and \eqref{eq:joint} is because the binary variables in \eqref{eq:saa-reformulation} model the exact same disjunction. On the other hand, \citep[Theorem 3]{xie2018distributionally} states that a valid inequality for $\inf_{\mathbb{P} \in \mathcal{F}_N(\theta)} \mathbb{P}[(b-A^\top x)^\top \xi + d - a^\top x \geq 0] \geq 1-\epsilon$ is the modified nominal chance constraint
\[ \mathbb{P}_N\left[ (b-A^\top x)^\top \xi + d - a^\top x \geq \frac{\theta}{\epsilon} \|b-A^\top x\|_* \right] \geq 1-\epsilon. \]
As stated in \citep[Corollary 4]{xie2018distributionally}, this can also be formulated as a MIP, with binary variables $u^i$ that model the disjunction
\[\underbrace{(b-A^\top x)^\top \xi^i + d - a^\top x \geq \frac{\theta}{\epsilon} \|b-A^\top x\|_*}_{u^i=0} \text{ or } \underbrace{(b-A^\top x)^\top \xi^i + d - a^\top x < \frac{\theta}{\epsilon} \|b-A^\top x\|_*}_{u^i=1}.\]
This is a fundamentally different disjunction than the one in \eqref{eq:joint}. Indeed, if we have a scenario $i$ for which
\[ 0 < (b-A^\top x)^\top \xi^i + d - a^\top x < \frac{\theta}{\epsilon} \|b-A^\top x\|_*, \]
then $z^i = 0$ but $u^i=1$. Therefore, we cannot link the binary variables $z^i$ and $u^i$ in the same manner as we did for the nominal CCP formulation, since there is no a priori reason to prevent such a scenario from occurring. Thus, the strengthening procedure from our paper cannot be adapted to work with the relaxation from \citep{xie2018distributionally}.
\section{Problem formulation}\label{sec:problem}
We consider Wasserstein ambiguity sets $\mathcal{F}_N(\theta)$ defined as the $\theta$-radius Wasserstein ball of distributions on $\mathbb{R}^K$ around the empirical distribution $\mathbb{P}_N$. We use the \emph{1-Wasserstein distance}, based on a norm $\|\cdot\|$, between two distributions $\mathbb{P}$ and $\mathbb{P}'$. This is defined as follows:
\begin{equation}\label{eq:wasserstein-dist}
d_W(\mathbb{P},\mathbb{P}') := \inf_{\Pi} \left\{ \mathbb{E}_{(\bm{\xi},\bm{\xi}') \sim \Pi}[\|\bm{\xi} - \bm{\xi}'\|] : \Pi \text{ has marginal distributions } \mathbb{P}, \mathbb{P}' \right\}.
\end{equation}
Then, the Wasserstein ambiguity set is
\[\mathcal{F}_N(\theta) := \left\{ \mathbb{P} : d_W(\mathbb{P}_N,\mathbb{P}) \leq \theta\right\}. \]
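As a concrete special case (not used in the paper's general development), the 1-Wasserstein distance between two $N$-atom, equally weighted empirical distributions on $\mathbb{R}$ admits a closed form: the optimal coupling in~\eqref{eq:wasserstein-dist} matches sorted atoms pairwise. A sketch:

```python
def wasserstein_1d(xs, ys):
    """1-Wasserstein distance between two N-atom empirical distributions on R.
    In one dimension the optimal coupling pairs the sorted atoms."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```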
Given a decision $\vx \in \mathcal{X}$ and random realization $\bm{\xi} \in \mathbb{R}^K$, the \emph{distance from $\bm{\xi}$ to the unsafe set, $\mathbb{R}^K\setminus \mathcal{S}(\vx)$,} is defined as
\begin{equation}\label{eq:distance}
\dist(\bm{\xi},\mathcal{S}(\vx)) := \inf_{\bm{\xi}' \in \mathbb{R}^K} \left\{ \|\bm{\xi} - \bm{\xi}'\| : \bm{\xi}' \not\in \mathcal{S}(\vx) \right\}.
\end{equation}
It is important to highlight that here $\dist(\bm{\xi},\mathcal{S}(\vx))$ computes the distance from $\bm{\xi}$ to the unsafe set, \emph{not to the set $\mathcal{S}(\vx)$}. Despite this, we chose to use this notation to emphasize that the distance function depends on the realization of the random parameter $\bm{\xi}$ and the decision-dependent safety set $\mathcal{S}(\vx)$.
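As a worked special case, for a closed halfspace safety set $\{\xi : a^\top \xi \geq b\}$ under the Euclidean norm, the distance~\eqref{eq:distance} to the unsafe set $\{\xi : a^\top \xi < b\}$ is $\max(0, (a^\top \bm{\xi} - b)/\|a\|_2)$, i.e., zero if $\bm{\xi}$ is already unsafe and otherwise the point-to-hyperplane distance. A sketch of this standard projection formula (illustrative, not the paper's general treatment):

```python
from math import sqrt

def dist_to_unsafe_halfspace(xi, a, b):
    """dist(xi, complement of {z : a.z >= b}) under the Euclidean norm."""
    slack = sum(ai * zi for ai, zi in zip(a, xi)) - b   # a.xi - b
    norm_a = sqrt(sum(ai * ai for ai in a))
    return max(0.0, slack / norm_a)
```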
In the remainder of the paper, %
we assume that the sample $\{\bm{\xi}^i\}_{i \in [N]}$, the risk tolerance $\epsilon \in (0,1)$ and the radius $\theta > 0$ are fixed.
We denote the feasible region of \eqref{eq:dr-ccp} as follows:
\begin{equation}
\mathcal{X}_{\DR}(\mathcal{S}) := \left\{ \vx \in \mathcal{X} :~ \sup_{\mathbb{P} \in \mathcal{F}_N(\theta)} \mathbb{P}[\bm{\xi} \not\in \mathcal{S}(\vx)] \leq \epsilon \right\}.\label{eq:dr-ccp-region}
\end{equation}
In this notation we make the dependence on the safety set function $\mathcal{S}$ explicit because the valid inequalities we will introduce have an explicit dependence on the safety set.
It was recently shown that the distributionally robust chance constraint in \eqref{eq:dr-ccp}, and therefore~\eqref{eq:dr-ccp-region}, can be reformulated in a computationally tractable form~\cite{chen2018data,xie2018distributionally}. These reformulation results are obtained based on earlier developments on duality theory for Wasserstein distributional robustness~\cite{BlanchetMurthy2019,gao2016distributionally}. More precisely, whenever $\mathcal{S}(\vx) \subseteq \mathbb{R}^K$ is an arbitrary open set for each $\vx \in \mathcal{X}$ and $\theta > 0$, \citet[Theorem 3]{chen2018data} show that $\mathcal{X}_{\DR}(\mathcal{S})$ can be formulated as \begin{align}\label{eq:cc-distance-formulation}
\mathcal{X}_{\DR}(\mathcal{S}) = \left\{ \vx \in \mathcal{X} : \begin{aligned}
&\quad \exists\ t \geq 0, \ \mathbf{r} \geq \bm{0},\\
&\quad \dist(\bm{\xi}^i,\mathcal{S}(\vx)) \geq t - r^i, \ i \in [N],\\
&\quad \epsilon\, t \geq \theta + \frac{1}{N} \sum_{i \in [N]} r^i
\end{aligned} \right\} .
\end{align}
A similar formulation, with the additional restriction $t > 0$, also holds; \citet[Proposition 1]{xie2018distributionally} derives it for the case of \emph{closed linear safety sets}, which we will define later in this section. In fact, \eqref{eq:cc-distance-formulation} holds regardless of whether $\mathcal{S}(\vx)$ is open or closed because \citet[Proposition 3]{gao2016distributionally} show that for given $\mathcal{S}(\vx)$,
\[
\sup_{\mathbb{P} \in \mathcal{F}_N(\theta)} \mathbb{P}[\bm{\xi} \not\in \intt \mathcal{S}(\vx)] = \sup_{\mathbb{P} \in \mathcal{F}_N(\theta)} \mathbb{P}[\bm{\xi} \not\in \mathcal{S}(\vx)] = \sup_{\mathbb{P} \in \mathcal{F}_N(\theta)} \mathbb{P}[\bm{\xi} \not\in \cl \mathcal{S}(\vx)]
\] and
\begin{equation}\label{eq:dist-invariant}
\dist(\bm{\xi}, \intt \mathcal{S}(\vx)) = \dist(\bm{\xi}, \mathcal{S}(\vx)) = \dist(\bm{\xi}, \cl \mathcal{S}(\vx)),
\end{equation}
where $\intt \mathcal{S}(\vx)$ and $\cl \mathcal{S}(\vx)$ denote the interior and closure of $\mathcal{S}(\vx)$, respectively. To implement formulation~\eqref{eq:cc-distance-formulation} for~\eqref{eq:dr-ccp}, it is crucial to represent the constraints $\dist(\bm{\xi}^i,\mathcal{S}(\vx)) \geq t - r^i$ in a computationally tractable form. To do so, it is important to understand the distance function $\dist(\bm{\xi},\mathcal{S}(\vx))$, which depends not only on the random parameter $\bm{\xi}$ and the decision vector $\vx$ but also on the structure of the safety set.
Of particular interest are \emph{linear safety sets}, defined by a finite set of linear inequalities.
In this case, the decision-dependent safety set $\mathcal{S}(\vx)$ takes the form of either $\mathcal{S}^o(\vx)$ for \emph{open} safety sets or $\mathcal{S}^c(\vx)$ for \emph{closed} safety sets, respectively, where
\begin{subequations}\label{eq:safety}
\begin{align}
\mathcal{S}^o(\vx) &:= \left\{ \bm{\xi} :~ (\vb-\mathbf{A}^\top \vx)^\top \bm{\xi}_p + d_p - \mathbf{a}_p^\top \vx > 0, \ p \in [P] \right\},\label{eq:safety-open}\\
\mathcal{S}^c(\vx) &:= \left\{ \bm{\xi} :~ (\vb-\mathbf{A}^\top \vx)^\top \bm{\xi}_p + d_p - \mathbf{a}_p^\top \vx \geq 0, \ p \in [P] \right\}.\label{eq:safety-closed}
\end{align}
\end{subequations}
Both open and closed safety sets have attracted attention in the recent literature: \citet{chen2018data} consider open safety sets, while \citet{xie2018distributionally} considers closed safety sets. We will examine both types of safety sets in this paper as well. As we will see in \cref{sec:nominal}, the addition of the CCP constraints to the~\eqref{eq:dr-ccp} formulation has a different impact on the exactness of the resulting formulation depending on whether the set is open or closed. In~\eqref{eq:safety}, $P$ determines the number of inequalities defining the linear safety set. When $P=1$, we refer to the chance constraint $\mathbb{P}^*[\bm{\xi} \not\in \mathcal{S}(\vx)] \leq \epsilon$ as an \emph{individual chance constraint}, and when $P>1$, we refer to it as a \emph{joint chance constraint}. The random vector $\bm{\xi}$ in~\eqref{eq:safety} consists of $P$ subvectors $\bm{\xi}_1,\ldots,\bm{\xi}_P$, each of which is associated with an inequality in the safety set description. The classical literature on CCP typically considers different settings depending on whether or not the linear inequalities have uncertainty in the coefficients of the decision variables. When $\mathbf{A}\neq\bm{0}$, we say that the chance constraint has \emph{left-hand side (LHS) uncertainty}. When $\mathbf{A}=\bm{0}$, we have $\vb\neq\bm{0}$, so the inequalities still have random data; in this case, we say that the chance constraint has \emph{right-hand side (RHS) uncertainty}.
When both $\mathbf{A}\neq\bm{0}$ and $\vb\neq\bm{0}$, this model covers the most general case of simultaneous LHS and RHS uncertainty. Following the standard terminology, however, we will refer to this general model as the LHS uncertainty case.
In this paper, we focus on linear safety sets given by~\eqref{eq:safety} with LHS uncertainty. In the case of linear safety sets of form~\eqref{eq:safety}, formulation~\eqref{eq:cc-distance-formulation} admits a tractable reformulation of~\eqref{eq:dr-ccp}. \citet{chen2018data} focus on the open safety set $\mathcal{S}^o(\vx)$ given by~\eqref{eq:safety-open}, for which they provide a MIP reformulation. Independently, \citet{xie2018distributionally} considers the closed safety set $\mathcal{S}^c(\vx)$ given by~\eqref{eq:safety-closed} and arrives at a MIP reformulation that is almost identical to that of \citet{chen2018data}. When $\mathcal{S}(\vx)=\mathcal{S}^o(\vx)$ and $\vb\neq \mathbf{A}^\top \vx$, \citet[Lemma 2]{chen2018data} prove that the associated distance function is given by
\begin{equation}
\dist(\bm{\xi},\mathcal{S}(\vx)) = \max\left\{ 0,\ \min_{p \in [P]} \frac{(\vb-\mathbf{A}^\top \vx)^\top \bm{\xi}_p + d_p - \mathbf{a}_p^\top \vx}{\|\vb-\mathbf{A}^\top \vx \|_*} \right\},\label{eq:distance-linear}
\end{equation}
where $\|\cdot\|_*$ represents the norm dual to $\|\cdot\|$.
\citet[Theorem 1]{xie2018distributionally} argues that~\eqref{eq:distance-linear} holds even when $\mathcal{S}(\vx)=\mathcal{S}^c(\vx)$ and $\vb\neq \mathbf{A}^\top \vx$; note that this can also be deduced from~\eqref{eq:dist-invariant}. On the other hand, if $\vb = \mathbf{A}^\top \vx$, we must compute the distance function through the original definition given by~\eqref{eq:distance}, and its characterization differs depending on whether we consider $\mathcal{S}^o(\vx)$ or $\mathcal{S}^c(\vx)$. We note that this issue is not present in the case of RHS uncertainty because $\vb\neq\bm{0}$ and $\mathbf{A}=\bm{0}$ automatically imply that $\vb \neq \mathbf{A}^\top \vx$ for all $\vx$, and so the distance function is precisely~\eqref{eq:distance-linear} for all $\vx$.
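To make the closed form~\eqref{eq:distance-linear} concrete, the following Python sketch (an illustration of ours; the function name and interface are not from the cited works) evaluates it for the Euclidean norm, which is self-dual, so $\|\cdot\|_* = \|\cdot\|_2$. It assumes $\vb \neq \mathbf{A}^\top\vx$ and takes the shared vector $w = \vb - \mathbf{A}^\top \vx$ and the offsets $c_p = d_p - \mathbf{a}_p^\top \vx$ as precomputed inputs.

```python
import numpy as np

def dist_to_unsafe(xi_blocks, w, offsets):
    """Distance from xi = (xi_1, ..., xi_P) to the unsafe set, i.e. the
    complement of S(x) = {xi : w^T xi_p + c_p > 0 for all p in [P]},
    via the closed form max{0, min_p (w^T xi_p + c_p) / ||w||}.
    Here w = b - A^T x (shared across inequalities) and c_p = d_p - a_p^T x.
    Valid only when w != 0."""
    norm_w = np.linalg.norm(w)  # Euclidean norm, which is self-dual
    vals = [(w @ xi_p + c_p) / norm_w for xi_p, c_p in zip(xi_blocks, offsets)]
    return max(0.0, min(vals))
```

For $P=1$ this reduces to the usual point-to-hyperplane distance when $\bm{\xi}$ lies in the safety set, and to zero when $\bm{\xi}$ is already unsafe.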
Assuming that $\vb \neq \mathbf{A}^\top \vx$ for any $\vx \in \mathcal{X}$, we can substitute the formula~\eqref{eq:distance-linear} for the distance function in the reformulation~\eqref{eq:cc-distance-formulation} of~\eqref{eq:dr-ccp} with joint chance constraints. Due to the max-terms in~\eqref{eq:distance-linear}, the resulting formulation is non-convex. Nevertheless, under the assumption that $\vb \neq \mathbf{A}^\top \vx$ for any $\vx \in \mathcal{X}$, by introducing binary variables and big-$M$ constraints to model the distances and the max-terms therein, \citet[Theorem 2]{xie2018distributionally} obtains the following equivalent MIP reformulation of~\eqref{eq:dr-ccp}.
\begin{subequations}\label{eq:joint}
\begin{align}
\min\limits_{\vz, \mathbf{r}, t, \vx} \quad & \mathbf{c}^\top \vx\label{joint:obj}\\
\text{s.t.}\quad & \vz \in \{0,1\}^N,\ t \geq 0, \ \mathbf{r} \geq \bm{0},\ \vx \in \mathcal{X},\label{joint:vars}\\
& \epsilon\, t \geq \theta \|\vb -\mathbf{A}^\top \vx\|_* + \frac{1}{N} \sum_{i \in [N]} r^i,\label{joint:conic}\\
& M^i (1-z^i) \geq t-r^i, \quad i \in [N],\label{joint:bigM1}\\
& (-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p) + M^i z^i \geq t-r^i, \quad i \in [N],\ p\in[P],\label{joint:bigM2}
\end{align}
\end{subequations}
where $M^1,\ldots,M^N$ are sufficiently large positive constants. Here, the term $\|\vb - \mathbf{A}^\top \vx\|_*$ appears in the denominator of~\eqref{eq:distance-linear} but is moved into~\eqref{joint:conic} by multiplying $t$ and $\mathbf{r}$ in \eqref{eq:cc-distance-formulation} by $\|\vb - \mathbf{A}^\top \vx\|_*$ and relabeling the variables accordingly. In fact,~\citet[Proposition 1]{chen2018data} focus on individual chance constraints, but their proof can be extended to provide formulation~\eqref{eq:joint} for joint chance constraints.
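To make the roles of $(\vz,\mathbf{r},t)$ in~\eqref{eq:joint} concrete, the following Python sketch (our own illustration, not taken from the cited works) checks the constraints \eqref{joint:conic}--\eqref{joint:bigM2} for a scalar toy instance with $P=1$ and $K=1$, where every norm reduces to an absolute value.

```python
import numpy as np

def joint_feasible(x, z, r, t, xis, A, a, b, d, eps, theta, M, tol=1e-12):
    """Check the big-M constraints of the joint reformulation for a scalar
    instance (P = 1, K = 1):
      eps*t >= theta*|b - A*x| + mean(r),
      M*(1 - z^i) >= t - r^i,
      (-A*xi^i - a)*x + (b*xi^i + d) + M*z^i >= t - r^i."""
    w = b - A * x
    if eps * t < theta * abs(w) + np.mean(r) - tol:
        return False
    for i, xi in enumerate(xis):
        if M * (1 - z[i]) < t - r[i] - tol:
            return False
        if (-A * xi - a) * x + (b * xi + d) + M * z[i] < t - r[i] - tol:
            return False
    return True
```

For example, with $A=0$, $a=b=1$, $d=0$ (so that $\mathcal{S}(x)=\{\xi : \xi \geq x\}$), samples $\{1,\ldots,5\}$, $\epsilon=0.4$, and $\theta=0.1$, the point $x=0.5$ is feasible with $t=0.25$, $\mathbf{r}=\bm{0}$, and $\vz=\bm{0}$.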
On the other hand, when there exists some $\vx \in \mathcal{X}\setminus \mathcal{X}_{\DR}(\mathcal{S})$ such that $\vb = \mathbf{A}^\top \vx$, the constraints \eqref{joint:vars}--\eqref{joint:bigM2} correspond to a \emph{relaxation} of $\mathcal{X}_{\DR}(\mathcal{S})$ for both $\mathcal{S}\in\{\mathcal{S}^o,\mathcal{S}^c\}$. In fact, if $\vx \in \mathcal{X}\setminus \mathcal{X}_{\DR}(\mathcal{S})$ satisfies $\vb = \mathbf{A}^\top \vx$, then $\vx$ always satisfies~\eqref{joint:vars}--\eqref{joint:bigM2} together with $t=r^i=0$ and $z^i=1$ for all $i\in[N]$. We next describe the precise relationship between $\mathcal{X}_{\DR}(\mathcal{S})$ for $\mathcal{S}\in \{\mathcal{S}^o,\mathcal{S}^c\}$ and formulation \eqref{eq:joint}.
\begin{lemma}\label{lemma:joint-valid}
We have
\begin{subequations}\label{eq:valid}
\begin{align}
\left\{ \vx : \text{\eqref{joint:vars}--\eqref{joint:bigM2}} \right\} &= \mathcal{X}_{\DR}(\mathcal{S}^o) \cup \left\{ \vx \in \mathcal{X} :~ \begin{aligned}
&\vb = \mathbf{A}^\top \vx,\\
&d_p \leq \mathbf{a}_p^\top \vx\ \ \text{for some} \ p\in[P]
\end{aligned} \right\}\label{eq:valid-open}\\
&= \mathcal{X}_{\DR}(\mathcal{S}^c) \cup \left\{ \vx \in \mathcal{X} :~ \begin{aligned}
&\vb = \mathbf{A}^\top \vx,\\
&d_p < \mathbf{a}_p^\top \vx\ \ \text{for some} \ p\in[P]
\end{aligned} \right\}\label{eq:valid-closed}.
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
Let $\vx \in \mathcal{X}$ be such that $\vb\neq \mathbf{A}^\top\vx$. Then, by~\citet[Proposition 1]{chen2018data} and \citet[Proposition 1]{xie2018distributionally}, we deduce that $\vx \in \left\{ \vx : \text{\eqref{joint:vars}--\eqref{joint:bigM2}} \right\}$ if and only if $\vx\in \mathcal{X}_{\DR}(\mathcal{S}^o)$, and likewise if and only if $\vx\in \mathcal{X}_{\DR}(\mathcal{S}^c)$.
Now take $\vx \in \mathcal{X}$ such that $\vb = \mathbf{A}^\top \vx$. We have already argued that $\vx$ together with $t=r^i=0$ and $z^i=1$ for all $i\in[N]$ satisfies~\eqref{joint:vars}--\eqref{joint:bigM2}. Therefore, to prove that~\eqref{eq:valid-open} and~\eqref{eq:valid-closed} hold, we need to characterize when $\vx$ is contained in $\mathcal{X}_{\DR}(\mathcal{S}^o)$ and $\mathcal{X}_{\DR}(\mathcal{S}^c)$.
If $d_p -\mathbf{a}_p^\top \vx > 0$ for all $p\in[P]$, then $\mathcal{S}^o(\vx)=\mathbb{R}^K$ and thus the worst-case probability $\sup_{\mathbb{P} \in \mathcal{F}_N(\theta)} \mathbb{P}[\bm{\xi} \not\in \mathcal{S}^o(\vx)]$ is $0$, in which case, $\vx\in \mathcal{X}_{\DR}(\mathcal{S}^o)$. On the other hand, if $d_p -\mathbf{a}_p^\top \vx \leq 0$ for some $p\in[P]$, then $\mathcal{S}^o(\vx)$ is empty, which means that $\sup_{\mathbb{P} \in \mathcal{F}_N(\theta)} \mathbb{P}[\bm{\xi} \not\in \mathcal{S}^o(\vx)]=1$. In this case, $\vx\not\in \mathcal{X}_{\DR}(\mathcal{S}^o)$. Therefore, the equality in~\eqref{eq:valid-open} holds.
If $d_p -\mathbf{a}_p^\top \vx \geq 0$ for all $p\in[P]$, then $\mathcal{S}^c(\vx) = \mathbb{R}^K$, so as before, $\vx\in \mathcal{X}_{\DR}(\mathcal{S}^c)$. If $d_p -\mathbf{a}_p^\top \vx < 0$ for some $p\in[P]$, then $\mathcal{S}^c(\vx) = \emptyset$ and thus we can similarly argue that $\vx\not\in \mathcal{X}_{\DR}(\mathcal{S}^c)$. Hence, the equality in~\eqref{eq:valid-closed} holds, as required.
\ifx\flagJournal1 \qed \fi
\end{proof}
\begin{remark}\label{rem:distance}
\cref{lemma:joint-valid} indicates that the sets $\left\{ \vx : \text{\eqref{joint:vars}--\eqref{joint:bigM2}} \right\}\setminus \mathcal{X}_{\DR}(\mathcal{S}^o)$ and $\left\{ \vx : \text{\eqref{joint:vars}--\eqref{joint:bigM2}} \right\}\setminus \mathcal{X}_{\DR}(\mathcal{S}^c)$ may potentially be non-empty, in which case, the optimal solution returned by solving~\eqref{eq:joint} may fall into these extraneous sets. \citet[Remark 2]{chen2018data} suggest how to handle this case separately by solving a series of MIPs with strict inequalities. That is, if the optimal solution $\vx^*$ is in the set $\left\{ \vx : \text{\eqref{joint:vars}--\eqref{joint:bigM2}} \right\}\setminus \mathcal{X}_{\DR}(\mathcal{S}^o)$, one can solve $2E+1$ variants of~\eqref{eq:joint}, where $E$ is the number of rows in the system $\vb=\mathbf{A}^\top\vx$, that include exactly one of $2E$ strict inequalities $b_e<(\mathbf{A})_e^\top \vx$, $b_e>(\mathbf{A})_e^\top \vx$ for $e\in [E]$ and one system of strict inequalities $d_p>\mathbf{a}_p^\top\vx$ for $p\in [P]$. The case when $\vx^*\in \left\{ \vx : \text{\eqref{joint:vars}--\eqref{joint:bigM2}} \right\}\setminus \mathcal{X}_{\DR}(\mathcal{S}^c)$ can be similarly dealt with.
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
\begin{remark}\label{rem:joint-valid-RHS-compare}
Note that \cref{lemma:joint-valid} states that the formulation~\eqref{eq:joint} of \eqref{eq:dr-ccp} from the previous literature is not exact: there are extraneous parts when $\vx \in \mathcal{X}$ is a solution to $\mathbf{A}^\top \vx = \vb$. This is an artifact of LHS uncertainty. Indeed, in the case of RHS uncertainty only, i.e., when $\mathbf{A}=\bm{0}$, the system $\mathbf{A}^\top \vx = \vb$ is solvable only when $\vb=\bm{0}$ as well, but this is the trivial case in which the random part of the constraint has been zeroed out, so we need not consider it. Therefore, in contrast to the RHS uncertainty case discussed in our previous paper \cite{rhs2020}, more effort and care are required when relating the formulations for CCP and \eqref{eq:dr-ccp} in the LHS case. In particular, we need to understand precisely how the extraneous sets are affected, and this is the focus of \cref{sec:nominal}.
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
\subsection*{Remarks on safety sets}
The most notable structural assumption of~\eqref{eq:safety} is that all $P$ inequalities share the same coefficient matrix $\mathbf{A}$ and vector $\vb$, as opposed to a more general form that allows a different $\mathbf{A}_p$ and $\vb_p$ for each inequality, as follows:
\begin{equation}\label{eq:safety-general}
\mathcal{S}(\vx) := \left\{ \bm{\xi} :~ (\vb_p-\mathbf{A}_p^\top \vx)^\top \bm{\xi}_p + d_p - \mathbf{a}_p^\top \vx \geq 0, \ p \in [P] \right\}.
\end{equation}
In the RHS uncertainty case, the inequalities may have different vectors $\vb_1,\ldots,\vb_P$ instead of the same $\vb$, while $\mathbf{A}_p=\bm{0}$ for all $p\in[P]$; for this setting,~\citet[Proposition~2]{chen2018data} and~\citet[Corollary~3]{xie2018distributionally} provide almost identical MIP reformulations of $\mathcal{X}_{\DR}(\mathcal{S})$. On the other hand, to the best of our knowledge, there is no tractable reformulation proposed in the literature for non-identical coefficient matrices $\mathbf{A}_p$, $p\in[P]$, in the LHS uncertainty case.
\begin{remark}\label{distinct-matrices}
The derivation of formulation~\eqref{eq:joint} does not immediately generalize to the case when
$\mathcal{S}(\vx)$ is given by~\eqref{eq:safety-general}. Although the distance function in~\eqref{eq:distance-linear} can be simply modified with $\mathbf{A}_p$ and $\vb_p$ for $p\in[P]$, the step of replacing $t\|\vb_p - \mathbf{A}_p^\top \vx\|_*$ and $r^i\|\vb_p - \mathbf{A}_p^\top \vx\|_*$ by $t$ and $r^i$ does not go through as before.
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
The form of~\eqref{eq:safety} dictates that we have a fixed matrix $\mathbf{A}$ for all constraints $p \in [P]$. Despite this restrictive structural form, we next show how the corresponding results can be applied to more general safety sets of the form~\eqref{eq:safety-general}.
\begin{remark}\label{rem:same-A}
Given non-identical coefficient matrices $\mathbf{A}_p$ and vectors $\vb_p$, define the following coefficient matrix and the vector in a lifted space
\[ \tilde{\mathbf{A}} := \begin{bmatrix}
\mathbf{A}_1 & \cdots & \mathbf{A}_P
\end{bmatrix}, \quad \tilde{\vb} := \begin{bmatrix}
\vb_1^\top & \cdots &\vb_P^\top
\end{bmatrix}^\top. \]
We also define new random variables $\tilde{\bm{\xi}}_p := (\bm{0},\ldots,\bm{0},\bm{\xi}_p,\bm{0},\ldots,\bm{0})$ and $\tilde{\bm{\xi}} := (\tilde{\bm{\xi}}_1,\ldots,\tilde{\bm{\xi}}_P)$, i.e., $\tilde{\bm{\xi}}_p$ lives in the same space as the original $\bm{\xi} = (\bm{\xi}_1,\ldots,\bm{\xi}_P)$, but all components are set to the zero vector except for $\bm{\xi}_p$.
Then, letting $\Proj_p$ be the projection operation $\bm{\xi} \mapsto \bm{\xi}_p$, we can equivalently write $\mathcal{S}(\vx)$ as
\[
\mathcal{S}(\vx) = \left\{ \bm{\xi} = (\bm{\xi}_1,\ldots,\bm{\xi}_P) :~ \begin{aligned}
&\exists\, \tilde{\bm{\xi}} = (\tilde{\bm{\xi}}_1,\ldots,\tilde{\bm{\xi}}_P) \text{ s.t. } \bm{\xi}_p = \Proj_p(\tilde{\bm{\xi}}_p), \ p \in [P],\\
&\Proj_q(\tilde{\bm{\xi}}_p) = \bm{0},\ p \neq q, \ p,q \in [P],\\
&(\tilde{\vb}-\tilde{\mathbf{A}}^\top \vx)^\top \tilde{\bm{\xi}}_p + d_p - \mathbf{a}_p^\top \vx \geq 0, \ p \in [P]
\end{aligned} \right\} .
\]
The following is an approximation of $\mathcal{S}(\vx)$ obtained after removing the structural assumption on $\tilde{\bm{\xi}}_p$ for $p\in[P]$ by dropping the first two projection constraints in $\mathcal{S}(\vx)$.
\[
\tilde{\mathcal{S}}(\vx) = \left\{ \tilde{\bm{\xi}} = (\tilde{\bm{\xi}}_1,\ldots,\tilde{\bm{\xi}}_P) :~ (\tilde{\vb}-\tilde{\mathbf{A}}^\top \vx)^\top \tilde{\bm{\xi}}_p + d_p - \mathbf{a}_p^\top \vx \geq 0, \ p \in [P] \right\}.
\]
Observe that $\tilde{\mathcal{S}}(\vx)$ is similar to $\mathcal{S}(\vx)$, except that it lives in the space of the lifted variables $\tilde{\bm{\xi}}$. Importantly, it is of the same form as \eqref{eq:safety}. Note that the ambiguity set $\mathcal{F}_N(\theta)$ must now consist of distributions over the lifted random variable $\tilde{\bm{\xi}}$, rather than $\bm{\xi}$. However, since in $\tilde{\mathcal{S}}(\vx)$ we do not impose that $\Proj_q(\tilde{\bm{\xi}}_p) = \bm{0}$ for $p \neq q$ on the support of this random variable $\tilde{\bm{\xi}}$, this ambiguity set will be larger than what we originally wish to consider. Therefore, using this lifting given in $\tilde{\mathcal{S}}(\vx)$ results in a more conservative solution compared to the optimal solution to~\eqref{eq:dr-ccp}.
We use this technique in the resource planning application of \cref{sec:resource-planning}.
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
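As a sanity check on the lifting above, the following Python snippet (an illustration of ours, with arbitrary small dimensions and random data) verifies block by block that $(\tilde{\vb}-\tilde{\mathbf{A}}^\top \vx)^\top \tilde{\bm{\xi}}_p + d_p - \mathbf{a}_p^\top \vx$ coincides with the original value $(\vb_p-\mathbf{A}_p^\top \vx)^\top \bm{\xi}_p + d_p - \mathbf{a}_p^\top \vx$ whenever $\tilde{\bm{\xi}}_p$ zeroes out every block except the $p$-th.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, P = 3, 2, 2                     # decision dim, block dim, # inequalities
A = [rng.standard_normal((n, K)) for _ in range(P)]   # distinct A_p
b = [rng.standard_normal(K) for _ in range(P)]        # distinct b_p
a = [rng.standard_normal(n) for _ in range(P)]
d = rng.standard_normal(P)
x = rng.standard_normal(n)
xi = [rng.standard_normal(K) for _ in range(P)]

A_tilde = np.hstack(A)                # n x (K*P), as in the remark
b_tilde = np.concatenate(b)           # stacked b_p's

def lifted_value(p):
    # xi_tilde_p keeps block p and zeroes out all other blocks
    xi_tilde_p = np.zeros(K * P)
    xi_tilde_p[p * K:(p + 1) * K] = xi[p]
    return (b_tilde - A_tilde.T @ x) @ xi_tilde_p + d[p] - a[p] @ x

def original_value(p):
    return (b[p] - A[p].T @ x) @ xi[p] + d[p] - a[p] @ x
```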
\section{Connection with the nominal chance constraint}\label{sec:nominal}
There is a direct relation between~\eqref{eq:dr-ccp} and the traditional sample average approximation formulation:
\begin{equation}\label{eq:saa-ccp}
\min_{\vx} \left\{\mathbf{c}^\top \vx:~ \vx \in \mathcal{X}, ~ \frac{1}{N} \sum_{i \in [N]} \bm{1}(\bm{\xi}^i \not\in \mathcal{S}(\vx)) \leq \epsilon \right\}. \tag{SAA}
\end{equation}
We formalize this next.
\begin{remark}\label{rem:SAAconnection}
When the radius $\theta$ of the Wasserstein ambiguity set $\mathcal{F}_N(\theta)$ is 0, only the empirical distribution $\mathbb{P}_N$ belongs to the ambiguity set, in which case~\eqref{eq:dr-ccp} coincides with~\eqref{eq:saa-ccp}. In general, \eqref{eq:saa-ccp} is a relaxation of \eqref{eq:dr-ccp} since $\mathcal{F}_N(0)\subseteq \mathcal{F}_N(\theta)$ for any $\theta\geq0$, i.e.,
\begin{equation}\label{eq:relaxation}
\mathcal{X}_{\DR}(\mathcal{S}) \subseteq \mathcal{X}_{\SAA}(\mathcal{S}),
\end{equation}
where $ \mathcal{X}_{\SAA}(\mathcal{S})$ denotes the feasible region of~\eqref{eq:saa-ccp}.
Hence, \eqref{eq:saa-ccp} provides a lower bound (for minimization) on the optimum value of~\eqref{eq:dr-ccp}.
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
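Concretely, checking membership in $\mathcal{X}_{\SAA}(\mathcal{S}^c)$ for the linear safety set~\eqref{eq:safety-closed} amounts to counting violated scenarios. The following Python sketch (our own illustration) does exactly this, assuming the samples are packed into an array of shape $(N, P, K)$:

```python
import numpy as np

def saa_feasible(x, samples, A, a, b, d, eps):
    """Empirical (theta = 0) check of the chance constraint for the closed
    linear safety set: the fraction of samples xi^i with
    min_p { (b - A^T x)^T xi_p^i + d_p - a_p^T x } < 0 must not exceed eps.
    Shapes: samples (N, P, K), A (n, K), a (P, n), b (K,), d (P,)."""
    w = b - A.T @ x                      # shared across the P inequalities
    slacks = samples @ w + (d - a @ x)   # (N, P): slack of each inequality
    violated = slacks.min(axis=1) < 0    # scenario i unsafe if any p violated
    return violated.mean() <= eps
```

By~\eqref{eq:relaxation}, any $\vx$ failing this empirical check is also infeasible for~\eqref{eq:dr-ccp} with $\theta > 0$.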
In the case of RHS uncertainty, this connection between \cref{eq:saa-ccp} and \cref{eq:dr-ccp} was first observed and explored in our previous work~\citep{rhs2020}. It turns out that this relation is instrumental in improving the MIP formulation~\eqref{eq:joint} of~\eqref{eq:dr-ccp} with LHS uncertainty as well, as we discuss in this section. Moreover, this connection
allows us to reduce the extraneous set
in the feasible region of~\eqref{eq:joint} discussed in \cref{rem:distance}
for the open safety set to $\{ \vx \in \mathcal{X} : \vb = \mathbf{A}^\top \vx, \ d_p = \mathbf{a}_p^\top \vx\ \ \text{for some}\ p\in[P]\}$, and remove it completely for the closed safety set.
The relation between~\eqref{eq:dr-ccp} and~\eqref{eq:saa-ccp} described in \cref{rem:SAAconnection} immediately gives rise to an improved formulation. In fact, it turns out that there is a more \emph{direct} correspondence between the MIP reformulation \eqref{eq:saa-reformulation} of~\eqref{eq:saa-ccp} below, often referred to as the \emph{big-$M$ formulation}, and the MIP reformulation~\eqref{eq:joint} of~\eqref{eq:dr-ccp}. As a consequence of \cref{rem:SAAconnection}, we can strengthen the MIP reformulation of~\eqref{eq:dr-ccp}, and further apply existing tools that were developed originally for~\eqref{eq:saa-ccp}. We will elaborate on this further in the remainder of this section.
Suppose that the safety set is given by $\mathcal{S}(\vx) = \left\{ \bm{\xi} :~ s(\vx,\bm{\xi}) \geq 0\right\}$ for some continuous function $s(\cdot)$. In this case, \citet{luedtke2010integer,ruspp:02} show that~\eqref{eq:saa-ccp} can be reformulated as the following MIP, known as the big-$M$ formulation:
\begin{subequations}\label{eq:saa-reformulation}
\begin{align}
\min\limits_{\mathbf{u}, \vx} \quad & \mathbf{c}^\top \vx\label{saa:obj}\\
\text{s.t.}\quad & \mathbf{u} \in \{0,1\}^N,\ \vx \in \mathcal{X},\label{saa:vars}\\
& \sum_{i\in [N]} u^i \leq \lfloor\epsilon N\rfloor, \label{saa:knapsack}\\
& s(\vx,\bm{\xi}^i) + M^i u^i \geq 0, \quad i \in [N],\label{saa:bigM}
\end{align}
\end{subequations}
where $M^1,\ldots,M^N$ are sufficiently large constants and $u^i$, $i \in [N]$, are indicator variables, with $u^i$ equal to one if $s(\vx,\bm{\xi}^i)<0$, i.e., if scenario $i$ is unsafe. Constraints~\eqref{saa:knapsack} and~\eqref{saa:bigM} are often referred to as the {\it knapsack (or cardinality) constraint} and the {\it big-$M$ constraints}, respectively. Thus, formulation~\eqref{eq:saa-reformulation} provides a relaxation of~\eqref{eq:dr-ccp} when the safety set $\mathcal{S}(\vx)$ is given by $\mathcal{S}(\vx) = \left\{ \bm{\xi} :~ s(\vx,\bm{\xi}) > 0\right\}$. Further strengthenings of the MIP formulation~\eqref{eq:saa-reformulation} via other classes of valid inequalities have been suggested in~\cite{abdi2016mixing-knapsack,Kilinc-Karzan2019joint-sumod,kucukyavuz2012mixing,LKL16,liu2018intersection,luedtke2010integer,luedtke2014branch-and-cut,xie2018quantile,zhao2017joint-knapsack}.
Note that for the closed safety set \eqref{eq:safety-closed}, we define $s(\cdot)$ as
\begin{equation}\label{eq:saa-safety}
s(\vx,\bm{\xi}^i) := \min\limits_{p\in[P]}\left\{(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)\right\},
\end{equation}
and thus $s(\vx,\bm{\xi}^i)$ in~\eqref{saa:bigM} can be replaced with~\eqref{eq:saa-safety}. Another way of representing~\eqref{saa:bigM} in this case is to expand the minimum term in~\eqref{eq:saa-safety}, thereby obtaining
\begin{equation}\label{eq:saa-safety'}
s_p(\vx,\bm{\xi}^i) + M^i u^i \geq 0,\quad \text{where}\quad s_p(\vx,\bm{\xi}^i) := (-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p),
\end{equation}
for $i\in[N]$, $p\in[P]$. As discussed in \cref{rem:SAAconnection},~\eqref{eq:saa-ccp} is a relaxation of~\eqref{eq:dr-ccp}, so inequalities of the form~\eqref{saa:knapsack}--\eqref{saa:bigM} can be added to the MIP formulation \eqref{eq:joint} of~\eqref{eq:dr-ccp}. Including these inequalities in a na\"{i}ve way would introduce a \emph{new} binary variable for each sample $\bm{\xi}^i$ and result in two different sets of $N$ binary variables in the formulation.
Our key observation is that these inequalities can be added \emph{without} introducing additional binary variables: we show in \cref{thm:knapsack-valid-joint} that~\eqref{saa:knapsack}--\eqref{saa:bigM} can simply be added to \eqref{eq:joint}, with \emph{$\vz$ replacing $\mathbf{u}$} and \emph{the same big-$M$ constants}, without compromising the validity of formulation~\eqref{eq:joint}.
This provides us with the possibility of applying and adapting techniques developed to improve the formulation (and thereby computational tractability) of \eqref{eq:saa-reformulation} to \eqref{eq:joint}. The same observation for strengthening the~\eqref{eq:dr-ccp} formulation in the case of the RHS uncertainty was made in our recent paper~\cite[Theorem 1]{rhs2020}.
Our main result concerns the following MIP formulation for the joint chance constraint of \eqref{eq:dr-ccp} where $\mathcal{S}(\vx)$ can be either the open or the closed safety set from~\eqref{eq:safety}:
\begin{subequations}\label{eq:joint-knapsack}
\begin{align}
\min\limits_{\vz, \mathbf{r}, t, \vx} \quad & \mathbf{c}^\top \vx\label{joint-k:obj}\\
\text{s.t.}\quad & (\vz,\mathbf{r},t,\vx)\ \text{satisfies} \ \eqref{joint:vars}\text{--}\eqref{joint:bigM2},\label{joint-k:basic}\\
& \sum_{i\in [N]} z^i \leq \lfloor \epsilon N \rfloor,\label{joint-k:knapsack}\\
& (-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p) + M^i z^i \geq 0, \quad i \in [N],\ p\in[P], \label{joint-k:bigM3}
\end{align}
\end{subequations}
where $M^i, i\in[N]$ are sufficiently large positive constants.
The only difference between formulation~\eqref{eq:joint-knapsack} and formulation~\eqref{eq:joint} is the additional constraints~\eqref{joint-k:knapsack} and~\eqref{joint-k:bigM3}.
\cref{thm:knapsack-valid-joint} will show that while~\eqref{joint-k:knapsack} and~\eqref{joint-k:bigM3} do cut off points in the feasible region of~\eqref{eq:joint}, they are nevertheless valid for both $\mathcal{X}_{\DR}(\mathcal{S}^o)$ and $\mathcal{X}_{\DR}(\mathcal{S}^c)$. In fact, \eqref{eq:joint-knapsack} is an exact reformulation of~\eqref{eq:dr-ccp} for the closed safety set $\mathcal{S}^c$ given in~\eqref{eq:safety}, and gives a tighter relaxation than \eqref{eq:joint} for the open safety set $\mathcal{S}^o$.
\begin{theorem}\label{thm:knapsack-valid-joint}
The feasible region of~\eqref{eq:joint-knapsack} is characterized as follows:
\begin{align}
&\left\{ \vx \in \mathcal{X} : \text{\eqref{joint-k:basic}--\eqref{joint-k:bigM3}} \right\} \label{eq:prop1-1-l}\\
&\qquad= \left\{ \vx \in \mathcal{X} : \text{\eqref{joint:vars}--\eqref{joint:bigM2}} \right\} \setminus \left\{ \vx \in \mathcal{X} : \begin{aligned}
\ &\vb = \mathbf{A}^\top \vx,\\
&d_p < \mathbf{a}_p^\top \vx\ \ \text{for some}\ p\in[P]
\end{aligned} \right\}\label{eq:prop1-1}\\
&\qquad= \mathcal{X}_{\DR}(\mathcal{S}^o) \cup \left\{ \vx \in \mathcal{X} : \begin{aligned}
\ &\vb = \mathbf{A}^\top \vx,\\
&d_p =\mathbf{a}_p^\top \vx\ \ \text{for some}\ p\in[P]
\end{aligned} \right\}\\
&\qquad= \mathcal{X}_{\DR}(\mathcal{S}^c).
\end{align}
\end{theorem}
\begin{proof}
We will prove the equality in~\eqref{eq:prop1-1}. Then the rest will follow from~\eqref{eq:valid}. %
We first show that the set in~\eqref{eq:prop1-1-l} is contained in the set in~\eqref{eq:prop1-1}. To this end, take a vector $\vx\in\mathcal{X}$ satisfying~\eqref{joint-k:basic}--\eqref{joint-k:bigM3} with some $\vz,\mathbf{r},t$. Then $\vx,\vz,\mathbf{r},t$ automatically satisfy \eqref{joint:vars}--\eqref{joint:bigM2}, so it suffices to argue that $\vb \neq \mathbf{A}^\top \vx$ or $d_p \geq \mathbf{a}_p^\top \vx$ for all $p\in[P]$. Suppose for a contradiction that $\vb = \mathbf{A}^\top \vx$ and $d_p < \mathbf{a}_p^\top \vx$ for some $p\in[P]$. As $z^i\in\{0,1\}$, it follows from~\eqref{joint:bigM1} and~\eqref{joint:bigM2} that $r^i\geq t$ for all $i\in [N]$. Then we obtain $\epsilon\, t\geq \sum_{i\in[N]}r^i/N\geq t$ from~\eqref{joint:conic}. Since $\epsilon<1$, we must have $t=r^i=0$ for all $i\in[N]$, and then constraint~\eqref{joint:bigM2} becomes $d_p -\mathbf{a}_p^\top \vx + M^i z^i\geq 0$. This in turn implies that $z^i=1$ for all $i\in[N]$ because $d_p -\mathbf{a}_p^\top \vx<0$, and in particular, $\sum_{i\in[N]}z^i=N$. However, as $\epsilon<1$, $\vz$ violates~\eqref{joint-k:knapsack}, a contradiction. Therefore, $\vb \neq \mathbf{A}^\top \vx$ or $d_p \geq \mathbf{a}_p^\top \vx$ for all $p\in[P]$, as required.
Next we show that the set in~\eqref{eq:prop1-1} is contained in the set in~\eqref{eq:prop1-1-l}. Let $\vx\in\mathcal{X}$ satisfy~\eqref{joint:vars}--\eqref{joint:bigM2} with some $\vz,\mathbf{r},t$. It suffices to argue that if $\vb \neq \mathbf{A}^\top \vx$ or $d_p\geq \mathbf{a}_p^\top \vx$ for all $p\in[P]$, then $\vx\in\mathcal{X}$ satisfies~\eqref{joint-k:basic}--\eqref{joint-k:bigM3} with some $\bar\vz,\bar\mathbf{r},\bar t$ (not necessarily the same $\vz,\mathbf{r},t$). First, assume that $\vb \neq \mathbf{A}^\top \vx$. We claim that $\vx,\bar\vz,\mathbf{r},t$ satisfy~\eqref{joint-k:basic}--\eqref{joint-k:bigM3}, where $\bar\vz\in\{0,1\}^N$ is the vector such that, for each $i\in[N]$, $\bar z^i=1$ if and only if $(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)<0$ for some $p\in[P]$. Since $M^i$ is sufficiently large so that $(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)+M^i\geq0$ for all $p\in[P]$, the constraints~\eqref{joint-k:bigM3} are satisfied with $\bar\vz$. Moreover, by the choice of $\bar\vz$, $\min\left\{(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)+M^i\bar z^i,~M^i(1-\bar z^i)\right\}$ is greater than or equal to $\min\left\{(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)+M^iz^i,~M^i(1-z^i)\right\}$ for any $z^i\in\{0,1\}$. That means $\vx,\bar\vz$ satisfy~\eqref{joint:bigM1} and~\eqref{joint:bigM2} because they are already satisfied by $\vx,\vz$. Hence, it remains to argue that $\bar\vz$ satisfies~\eqref{joint-k:knapsack}. Since $\vb \neq \mathbf{A}^\top \vx$, we have $\|\vb - \mathbf{A}^\top \vx \|_*>0$ and thus $t>0$ by~\eqref{joint:conic}. We claim that ${r^i}/{t}\geq \bar z^i$ for all $i\in[N]$. When $\bar z^i=1$, ${r^i}/{t}\geq 1=\bar z^i$ holds by~\eqref{joint:bigM1} and~\eqref{joint:bigM2}. We also know that ${r^i}/{t}\geq 0$ as $r^i\geq0$, and in particular, ${r^i}/{t}\geq\bar z^i$ holds when $\bar z^i=0$. 
As~\eqref{joint:conic} implies that $\epsilon N\geq \sum_{i\in[N]}{r^i}/{t}$, it follows that $\epsilon N\geq\sum_{i\in[N]}\bar z^i$. Since $\sum_{i\in[N]}\bar z^i$ takes an integer value, $\bar\vz$ indeed satisfies~\eqref{joint-k:knapsack}. Therefore, $\vx,\bar\vz,\mathbf{r},t$ satisfy~\eqref{joint-k:basic}--\eqref{joint-k:bigM3}. Thus, we may assume that $\vb = \mathbf{A}^\top \vx$ and $d_p\geq\mathbf{a}_p^\top\vx$ for all $p\in[P]$. Then, it is clear that $\vx$ together with $\bar t=\bar r^i=\bar z^i=0$ for $i\in[N]$ satisfies~\eqref{joint-k:basic}--\eqref{joint-k:bigM3}, as required.
\ifx\flagJournal1 \qed \fi
\end{proof}
\begin{remark}
By \cref{thm:knapsack-valid-joint},~\eqref{eq:joint-knapsack} is an exact reformulation of~\eqref{eq:dr-ccp} when the safety set $\mathcal{S}(\vx)$ is closed. When $\mathcal{S}(\vx)$ is open, if~\eqref{eq:joint-knapsack} returns an optimal solution $\vx$ such that $\vb \neq \mathbf{A}^\top \vx$ or $d_p \neq\mathbf{a}_p^\top \vx$ for all $p\in[P]$, then $\vx$ is an optimal solution to~\eqref{eq:dr-ccp}. However, if~\eqref{eq:joint-knapsack} returns an optimal solution $\vx$ such that $\vb = \mathbf{A}^\top \vx$ and $d_p =\mathbf{a}_p^\top \vx$ for some $p\in[P]$, then $\vx\not\in \mathcal{X}_{\DR}(\mathcal{S}^o)$. Nevertheless, we can deal with this case separately by solving linear programs with strict inequalities as in \citet[Remark 2]{chen2018data} (see also \cref{rem:distance}).
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
\begin{remark}\label{rem:bigM}
\citet[Theorem~2]{xie2018distributionally} shows that the following choice of $M^i$ for $i\in[N]$ is sufficient for the validity of formulations~\eqref{eq:joint} and~\eqref{eq:joint-knapsack}:
\begin{equation}\label{eq:bigMvalue-joint}
M^i=\max_{x\in\mathcal{X},p\in[P]}\left\{\left|(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)\right|\right\},\quad i\in[N].
\end{equation}
However, when the domain $\mathcal{X}$ is unbounded, $M^i$ is not necessarily finite; our applications in \cref{sec:experiments} fall into this category. In such cases, instead of \cref{eq:bigMvalue-joint}, we can simply ensure that
\begin{equation}\label{eq:bigMvalue-joint-opt}
M^i \geq \max_{p\in[P]} \left\{\left|(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)\right|\right\},\quad i\in[N],
\end{equation}
for at least one optimal solution $\vx$ of \cref{eq:joint-knapsack}, which maintains the validity of the formulation.
That said, in order to be able to use~\eqref{eq:bigMvalue-joint-opt}, we must understand the structure of the optimal solutions, which can be a nontrivial task on its own. In \cref{sec:experiments}, we will explain how to choose $M^i$ for $i\in[N]$ based on~\eqref{eq:bigMvalue-joint-opt} for the specific applications we consider.
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
\section{Quantile Strengthening}\label{sec:quantile}
Formulation~\eqref{eq:joint-knapsack} is already stronger than~\eqref{eq:joint}. Moreover, we can improve formulation~\eqref{eq:joint-knapsack} even further by exploiting the so-called \emph{mixing substructure} residing in~\crefrange{joint-k:knapsack}{joint-k:bigM3}. In the case of nominal chance-constrained programs as in~\eqref{eq:saa-reformulation}, analyzing and exploiting the mixing substructure originating from the big-$M$ and the knapsack constraints~\crefrange{saa:knapsack}{saa:bigM} is already a common practice. In particular, the big-$M$ coefficients in front of the binary variables in~\eqref{saa:bigM} can be significantly reduced based on the assumption that solutions satisfy the knapsack constraint~\eqref{saa:knapsack}. We will explain this procedure in \cref{sec:mixing} in detail and refer to it as \emph{quantile strengthening}. \citet{luedtke2010integer} developed this quantile strengthening technique for solving nominal chance-constrained programs with random RHS, and~\citet{luedtke2014branch-and-cut} later applied it to CCPs with random LHS. We can reduce the big-$M$ coefficients in~\eqref{joint-k:bigM3} by applying the same method to~\crefrange{joint-k:knapsack}{joint-k:bigM3}. What is surprising is that the big-$M$ coefficients in~\eqref{joint:bigM2} can also be reduced using the quantile information, thereby further strengthening formulation~\eqref{eq:joint-knapsack}. %
For distributionally robust chance constraints with random RHS, our previous work \citep{rhs2020} demonstrated how to adapt quantile strengthening to improve the big-$M$ coefficients in~\eqref{joint:bigM2} and provided strong numerical evidence that this coefficient strengthening step has an overwhelmingly positive impact on the overall computation time. In this section, we extend this framework to the random-LHS setting of~\eqref{eq:dr-ccp} and discuss how the big-$M$ coefficients in~\eqref{joint:bigM2} can be reduced accordingly. See \cref{rem:mixing-RHS-comparison} for a discussion of how our quantile strengthening procedure for the LHS uncertainty case differs from that of our previous paper~\citep{rhs2020}.
The main distinction in the random LHS case, compared to the RHS uncertainty case, is that the coefficients $(-\mathbf{A}\bm{\xi}_p^i-\mathbf{a}_p)$ of the decision variables~$\vx$ in~\eqref{joint:bigM2} change over different scenarios, because $\mathbf{A}\neq\bf{0}$. When $\mathbf{A}=\bf{0}$,~\crefrange{joint-k:knapsack}{joint-k:bigM3} naturally give rise to a mixing set with a fixed linear function $(-\mathbf{a}_p)^\top\vx$ for each $p\in [P]$. In contrast, when $\mathbf{A}\neq\bf{0}$, we construct a mixing set corresponding to $(-\mathbf{A}\bm{\xi}_p^i-\mathbf{a}_p)^\top\vx$ for every pair of $i\in[N]$ and $p\in[P]$. For this, we rely on an idea of~\citet{luedtke2014branch-and-cut} used for quantile strengthening to solve nominal CCPs with random LHS.
The distinct feature of our framework is that we consider particular structures stemming from $(-\mathbf{A}\bm{\xi}_p^i-\mathbf{a}_p)^\top\vx$ for $i\in[N]$ and $p\in[P]$ in~\eqref{joint:bigM2} for the sake of reducing the big-$M$ coefficients in~\eqref{joint:bigM2}.
In~\cref{sec:mixing} we describe the construction of the mixing inequalities as in~\cite{luedtke2014branch-and-cut}, and in~\cref{sec:bigM_reduction} we describe our quantile strengthening procedure for \cref{eq:dr-ccp} with LHS uncertainty.
\subsection{Mixing inequalities}\label{sec:mixing}
Let us consider the following mixing substructure arising from the constraints~\crefrange{joint-k:knapsack}{joint-k:bigM3}:
\begin{equation}
Q:=\left\{ (\vx,\vz)\in \mathcal{X} \times \{0,1\}^N : \begin{aligned}
\ &s(\vx,\bm{\xi}^i) + M^i z^i \geq 0, \quad i \in [N],\\
& \sum_{i\in [N]} z^i \leq \lfloor\epsilon N\rfloor
\end{aligned} \right\}.\label{eq:mixingset}
\end{equation}
We can set $s(\vx,\bm{\xi}) := (-\mathbf{A}\bm{\xi}_p- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p+d_p)$ for a fixed $p\in[P]$ so that individual constraints are separately considered, or the set $Q$ can capture the joint constraints by taking $s(\vx,\bm{\xi}):=\min_{p\in[P]}\left\{(-\mathbf{A}\bm{\xi}_p- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p+d_p)\right\}$.%
We will utilize the following procedure to find inequalities of the form $\vmu^\top \vx + \vpi^\top \vz\geq\beta$ that are valid for the mixed-integer set $Q$ in~\eqref{eq:mixingset}. Given a fixed linear function $\vmu^\top\vx$ and a set $\bar{\mathcal{X}} \supseteq \mathcal{X}$, we solve the following single scenario subproblem for each scenario $i\in[N]$:
\begin{equation}
\bar h^i(\vmu):=\min\left\{\vmu^\top\vx:\ s(\vx,\bm{\xi}^i)\geq0, \ \vx\in \bar{\mathcal{X}}\right\}.\label{eq:single-subproblem}
\end{equation}
Then, $\vmu^\top\vx\geq \bar h^i(\vmu)$ holds for $(\vx,\vz)\in Q$ with $z^i=0$. Next, we sort the values $\bar h^i(\vmu)$ for $i\in[N]$ in non-decreasing order. Without loss of generality by a re-indexing if needed, we may assume that $\bar h^N(\vmu) \geq \bar h^{N-1}(\vmu) \geq \cdots \geq \bar h^1(\vmu)$. For ease of notation, we let
$k:=\lfloor\epsilon N\rfloor.$
Furthermore, note that there must exist $i \in \{N-k,N-k+1,\ldots,N\}$ with $z^i=0$ since $\sum_{i\in [N]} z^i \leq k$ is also enforced in $Q$ and thus the pigeonhole principle applies. So, we deduce that $\vmu^\top\vx \geq \bar h^{N-k}(\vmu)$ because $\bar h^i(\vmu)\geq \bar h^{N-k}(\vmu)$ for all $i\geq N-k$. To summarize, this reasoning shows that $\vmu^\top\vx\geq \bar h^i(\vmu)$ holds if $z^i=0$ and that $\vmu^\top\vx\geq \bar h^{N-k}(\vmu)$ is satisfied always, in particular, when $z^i=1$ for $i\in [N]$. Hence,
\begin{equation}
\vmu^\top\vx + \left(\bar h^i(\vmu)-\bar h^{N-k}(\vmu)\right)z^i\geq\bar h^i(\vmu)\label{eq:mixing-base}
\end{equation}
is valid.
Note that the inequalities~\eqref{eq:mixing-base} for $i\leq N-k$ are redundant because $\vmu^\top\vx\geq \bar h^i(\vmu)$ is implied by $\vmu^\top\vx\geq \bar h^{N-k}(\vmu)$ if $i\leq N-k$.
Following this procedure we now have a set of inequalities~\eqref{eq:mixing-base} that share a common linear function $\vmu^\top\vx$ and each one has a distinct integer variable. Therefore, we can apply the {\it mixing procedure} of~\citet{gunluk2001mixing} (see, also, {\it star inequalities} by~\citet{atamturk2000mixed}) to the set of inequalities~\eqref{eq:mixing-base} to obtain stronger inequalities. For any $J=\{j_1,\ldots,j_\ell\}$ with $N\geq j_1\geq\cdots\geq j_\ell\geq N-k+1$, the {\it mixing inequality} derived from $J$ and~\eqref{eq:mixing-base} is
\begin{equation}
\vmu^\top\vx + \sum_{i\in[\ell]}\left(\bar h^{j_i}(\vmu)-\bar h^{j_{i+1}}(\vmu)\right)z^{j_i}\geq \bar h^{j_1}(\vmu),\label{eq:mixing}
\end{equation}
where $j_{\ell+1}:=N-k$.
Inequalities \eqref{eq:mixing} are sufficient to describe the convex hull of the points $(\vx,\vz)\in\mathbb{R}^L\times\{0,1\}^N$ satisfying~\eqref{eq:mixing-base}~\cite{atamturk2000mixed,gunluk2001mixing,Kilinc-Karzan2019joint-sumod}. Furthermore, while exponentially many, inequalities~\eqref{eq:mixing} can be separated in $O(N\log N)$ time~\cite{gunluk2001mixing,Kilinc-Karzan2019joint-sumod}.
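To make the derivation of~\eqref{eq:mixing} concrete, the following Python sketch (with hypothetical, already-sorted values $\bar h^1(\vmu)\leq\cdots\leq\bar h^N(\vmu)$) enumerates every binary vector $\vz$ obeying the knapsack constraint and verifies that the mixing inequality built from a set $J$ is never violated.

```python
from itertools import product

# Toy check (hypothetical data) that the mixing inequality (eq:mixing) is
# valid given the base facts: mu^T x >= hbar^i whenever z^i = 0, and
# mu^T x >= hbar^{N-k} always.
h = [1.0, 2.0, 4.0, 7.0, 9.0]   # hbar^1 <= ... <= hbar^N (already sorted)
N, k = len(h), 2                 # k = floor(eps * N) scenarios may be violated

def min_feasible_value(z):
    """Smallest mu^T x consistent with the base inequalities for a given z."""
    bound = h[N - k - 1]                        # hbar^{N-k}, always enforced
    active = [h[i] for i in range(N) if z[i] == 0]
    return max([bound] + active)

def mixing_lhs(v, z, J):
    """Left-hand side of (eq:mixing) for J = {j_1 > ... > j_l} (1-based)."""
    Jext = list(J) + [N - k]                    # append j_{l+1} := N - k
    return v + sum((h[Jext[t] - 1] - h[Jext[t + 1] - 1]) * z[Jext[t] - 1]
                   for t in range(len(J)))

J = (5, 4)                                      # indices in {N-k+1, ..., N}
violated = [z for z in product((0, 1), repeat=N)
            if sum(z) <= k
            and mixing_lhs(min_feasible_value(z), z, J) < h[J[0] - 1] - 1e-9]
print(violated)   # empty list: the inequality holds for every feasible z
```

Here the minimum feasible value of $\vmu^\top\vx$ for a given $\vz$ is $\max\{\bar h^{N-k}(\vmu),\max_{i:z^i=0}\bar h^i(\vmu)\}$, which encodes exactly the two facts used in the derivation above.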
\begin{remark}\label{remark:relaxed-h}
Inequalities \eqref{eq:mixing} are valid for the set $\bar{\mathcal{X}}$. If we choose $\bar{\mathcal{X}} = \mathcal{X}$, depending on the structure of our original domain and the choice of $s(\cdot)$, computing the value of $\bar h^i(\vmu)$ exactly can be expensive. However, if we take $\bar{\mathcal{X}} \supseteq \mathcal{X}$, then inequalities \eqref{eq:mixing} are also valid for $\mathcal{X}$. Similar to \cref{rem:bigM}, we can also take $\bar{\mathcal{X}}$ to be a set containing at least one optimal solution to \eqref{eq:joint-knapsack} to also derive valid inequalities for that formulation. In \cref{remark:rsrcp-yield-lower-bounds}, we will follow this computationally more attractive approach for the probabilistic resource planning application.
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
\subsection{Quantile strengthening via mixing inequalities}\label{sec:bigM_reduction}
We return our attention to formulation~\eqref{eq:joint-knapsack}, which contains a mixing substructure $Q$ of the form \eqref{eq:mixingset} with $s(\vx,\bm{\xi}):=\min_{g\in[P]}\left\{(-\mathbf{A}\bm{\xi}_g- \mathbf{a}_g)^\top\vx+ (\vb^\top\bm{\xi}_g+d_g)\right\}$. To obtain mixing inequalities \eqref{eq:mixing-base} valid for \eqref{eq:joint-knapsack}, we must choose the linear function $\vmu^\top \vx$. A natural set of candidates for the starting linear function $\vmu^\top \vx$ includes $(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top \vx$ for $i\in[N]$ and $p\in[P]$. Define $k:=\lfloor \epsilon N\rfloor$ as before. For fixed $i\in[N]$ and $p\in[P]$, let
\begin{equation}\label{eq:quantiles}
q_p^i:=\text{the} \ (k+1)\text{-th largest value in}\ \left\{\bar h^N(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p),\ldots,\bar h^1(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)\right\},
\end{equation}
where for $j\in[N]$
\[ \bar{h}^j(-\mathbf{A} \bm{\xi}_p^i - \mathbf{a}_p) \!=\! \min\! \left\{\! (-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top \vx :\begin{aligned}
&\vx \in \bar{\mathcal{X}},\\
&(-\mathbf{A}\bm{\xi}_g^j- \mathbf{a}_g)^\top\vx+ (\vb^\top\bm{\xi}_g^j+d_g) \geq 0, ~g \in [P]
\end{aligned} \!\right\} \]
as in \eqref{eq:single-subproblem} and $\bar{\mathcal{X}}$ is a set containing at least one optimal solution to \eqref{eq:joint-knapsack} as in \cref{remark:relaxed-h}. Then, we arrive at the following \emph{basic} mixing inequalities \eqref{eq:mixing-base} that are valid for~\eqref{eq:joint-knapsack}:
\begin{align}
& (-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx\geq q_p^i,\label{eq:mixing-base-quantile}\\
&(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\bar h^i(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)-q_p^i)z^i\geq \bar h^i(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p).\label{eq:mixing-base-lhs}
\end{align}
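For concreteness, the quantile $q_p^i$ in~\eqref{eq:quantiles} is simply an order statistic of the $N$ subproblem values; a minimal sketch with hypothetical $\bar h^j$ values follows.

```python
import math

# Hypothetical subproblem values hbar^j(-A xi_p^i - a_p), j = 1, ..., N,
# for one fixed pair (i, p); q_p^i is their (k+1)-th largest entry.
h_values = [3.0, 8.0, 1.0, 6.0, 5.0, 9.0, 2.0, 4.0, 7.0, 0.0]
N, eps = len(h_values), 0.25
k = math.floor(eps * N)                # k = floor(eps * N) = 2 here

q = sorted(h_values, reverse=True)[k]  # (k+1)-th largest value
print(q)   # 7.0
```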
\begin{lemma}\label{lemma:quantile-strengthening}
For any $i\in[N]$ and $p\in[P]$, the following inequality is valid for \eqref{eq:joint-knapsack}:
\begin{equation}\label{eq:mixing-base-relaxed}
(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)+ (-\vb^\top\bm{\xi}_p^i-d_p-q_p^i)z^i\geq 0.
\end{equation}
\end{lemma}
\begin{proof}
From the definition of $\bar h^i(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)$ above we deduce that
$$\bar h^i(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)\geq \min_{\vx}\left\{(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx:\ (-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx\geq -\vb^\top{\bm{\xi}_p^i}-d_p\right\}= -(\vb^\top\bm{\xi}_p^i+d_p).$$
Then, since $\bar h^i(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)\geq -(\vb^\top\bm{\xi}_p^i+d_p)$ and $(z^i-1)\leq 0$ for $z^i\in\{0,1\}$, it follows that $\bar h^i(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)(z^i-1)\leq -(\vb^\top\bm{\xi}_p^i+d_p)(z^i-1)$. So,~\eqref{eq:mixing-base-relaxed} follows from~\eqref{eq:mixing-base-lhs}.
\ifx\flagJournal1 \qed \fi
\end{proof}
Note that~\eqref{eq:mixing-base-relaxed} is identical to \eqref{joint-k:bigM3} except for a different coefficient in front of the binary variable $z^i$, and \eqref{joint-k:bigM3} itself is quite similar to \eqref{joint:bigM2}. By exploiting this similarity, we can improve formulation~\eqref{eq:joint-knapsack} by reducing the coefficient of $z^i$ in~\eqref{joint:bigM2} to that of \eqref{eq:mixing-base-relaxed}. Thus, our \emph{improved formulation} is:
\begin{subequations}\label{eq:joint-k-reduced}
\begin{align}
\min\limits_{\vz, \mathbf{r}, t, \vx} & \mathbf{c}^\top \vx\\
\text{s.t.}~ & (\vz,\mathbf{r},t,\vx)\ \text{satisfies \crefrange{joint:vars}{joint:bigM1} and \cref{joint-k:knapsack}}\label{joint-k-reduced:basic}\\
& (-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx\geq q_p^i, ~i \in [N], p \in [P]\label{joint-k-reduced:quantile}\\
& (-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)+ (-\vb^\top\bm{\xi}_p^i-d_p-q_p^i)z^i\geq t-r^i, \label{joint-k-reduced:bigM2}\\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad i \in [N], p \in [P].\notag
\end{align}
\end{subequations}
The validity of the updated inequalities \eqref{joint-k-reduced:bigM2} hinges on the following simple result.
\begin{lemma}\label{lemma:new-bigM}
Suppose that $x,y \in \mathbb{R}$, $C,C_1,C_2 \in \mathbb{R}_+$, and $z \in \{0,1\}$ satisfy
$C(1-z)\geq y$, %
$x + C_1 z \geq 0$, and %
$x + C_2 z \geq y$.
Then we also have $x + C_1 z \geq y$.
\end{lemma}
\begin{proof}
If $z = 1$, we have $y \leq C(1-z) = 0 \leq x + C_1 z$, and if $z = 0$, we have $x = x + C_1 z = x + C_2 z \geq y$. Thus, in either case, we have $x + C_1 z \geq y$.
\ifx\flagJournal1 \qed \fi
\end{proof}
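Because \cref{lemma:new-bigM} is a purely logical statement, it can also be checked numerically; the sketch below samples random data, and whenever the three premises hold it confirms the conclusion $x + C_1 z \geq y$.

```python
import random

random.seed(0)

# Randomized check of the coefficient-swap lemma (hypothetical data): if
# C(1-z) >= y, x + C1*z >= 0, and x + C2*z >= y, then x + C1*z >= y.
def lemma_holds(x, y, C, C1, C2, z):
    if C * (1 - z) >= y and x + C1 * z >= 0 and x + C2 * z >= y:
        return x + C1 * z >= y
    return True   # premises fail: nothing to check

trials = [(random.uniform(-5, 5), random.uniform(-5, 5),
           random.uniform(0, 5), random.uniform(0, 5),
           random.uniform(0, 5), random.choice((0, 1)))
          for _ in range(100000)]
print(all(lemma_holds(*t) for t in trials))   # True
```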
\begin{theorem}\label{thm:improved-formulation}
Formulation~\eqref{eq:joint-k-reduced} is a valid reformulation of~\eqref{eq:dr-ccp} where the safety set is given by~\eqref{eq:safety}.
\end{theorem}
\begin{proof}
By \cref{thm:knapsack-valid-joint}, formulation \eqref{eq:joint-knapsack} is a valid reformulation of~\eqref{eq:dr-ccp}.
Thus, it suffices to show that $\mathcal{X}_1=\mathcal{X}_2$, where \[\mathcal{X}_1 := \left\{ (\vz,\mathbf{r},t,\vx) : \text{\crefrange{joint-k:basic}{joint-k:bigM3}} \right\},\quad \mathcal{X}_2 := \left\{ (\vz,\mathbf{r},t,\vx) : \text{\crefrange{joint-k-reduced:basic}{joint-k-reduced:bigM2}} \right\}.\]
Note that the constraints~\eqref{joint:bigM2} are not explicitly included in $\mathcal{X}_2$. However, since $M^i \geq -\vb^\top\bm{\xi}_p^i - d_p - q_p^i$ for all $p\in[P]$ (which we may assume since $M^i$ is a big-$M$ constant), \eqref{joint:bigM2} is implied by \eqref{joint-k-reduced:bigM2}. Hence, we trivially have $\mathcal{X}_2 \subseteq \mathcal{X}_1$.
In order to prove $\mathcal{X}_1 \subseteq \mathcal{X}_2$, we first observe that \crefrange{eq:mixing-base-quantile}{eq:mixing-base-lhs} are simply inequalities \eqref{eq:mixing-base} derived from the mixing substructure \crefrange{joint-k:knapsack}{joint-k:bigM3} using the function $s(\vx,\bm{\xi})=\min_{p\in[P]}\left\{(-\mathbf{A}\bm{\xi}_p- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p+d_p)\right\}$, thus \eqref{joint-k-reduced:quantile} are valid for $\mathcal{X}_1$. Finally, we argue that \eqref{joint-k-reduced:bigM2} is valid for $\mathcal{X}_1$. For every $i \in [N]$ and $p \in [P]$, we obtain from \cref{lemma:quantile-strengthening} and \crefrange{joint:bigM1}{joint:bigM2} that
\begin{subequations}\label{eq:mixing-bigMreduction}
\begin{align}
&M^i (1-z^i) \geq t - r^i, \label{eq:mixing-bigMreduction0}\\
&(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)+ (-\vb^\top\bm{\xi}_p^i-d_p-q_p^i) z^i\geq 0, \label{eq:mixing-bigMreductionmix}\\
&(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)+ M^i z^i\geq t-r^i. \label{eq:mixing-bigMreductionnon0}
\end{align}
\end{subequations}
We then apply \cref{lemma:new-bigM} with $x = (-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)$, $y = t-r^i$, $C = C_2 = M^i$, $C_1 = -\vb^\top\bm{\xi}_p^i-d_p-q_p^i$ to get that \eqref{joint-k-reduced:bigM2} is valid for $\mathcal{X}_1$.
\ifx\flagJournal1 \qed \fi
\end{proof}
\begin{remark}
We highlight that, different from the traditional quantile-based strengthening for nominal chance constraints, the coefficient strengthening proposed in \cref{thm:improved-formulation} is derived from the distinct structure of~\eqref{eq:dr-ccp}, namely the complementary upper bounding constraints \eqref{eq:mixing-bigMreduction0} and \eqref{eq:mixing-bigMreductionnon0} imposed on $t-r^i$ based on the value of $z^i$, combined with the basic mixing inequality \eqref{eq:mixing-bigMreductionmix} that has the same coefficients $-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p$ and the same binary variable $z^i$. %
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
\begin{remark}\label{remark:relaxed-quantiles}
The coefficient of $z^i$ in~\eqref{joint-k-reduced:bigM2} is $-\vb^\top\bm{\xi}_p^i-d_p-q_p^i$, whereas it is $M^i$ in~\eqref{joint:bigM2}. Furthermore, $q_p^i$ can be replaced by any lower bound $\beta_p^i$ on $q_p^i$ and the resulting formulation still gives a valid reformulation of~\eqref{eq:dr-ccp}. As long as $M^i\geq -\vb^\top\bm{\xi}_p^i-d_p-\beta_p^i$, the inequality
\[(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^i+d_p)+ (-\vb^\top\bm{\xi}_p^i-d_p-\beta_p^i)z^i\geq t-r^i\]
dominates~\eqref{joint:bigM2}. In practice, the $M^i$ computed na\"{i}vely from \cref{rem:bigM} is much higher than $-\vb^\top \bm{\xi}_p^i - d_p - q_p^i$.
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
\begin{remark}\label{remark:warning}
To compute $q_p^i$, we need to evaluate $\bar h^j(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)$ for $j\in[N]$, which is the optimal value of the single scenario subproblem given in~\eqref{eq:single-subproblem}. Note that we must solve $N^2$ such subproblems for each $p\in[P]$. For $s(\vx,\bm{\xi}^j)=(-\mathbf{A}\bm{\xi}_p^j- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^j+d_p)$ in~\eqref{eq:single-subproblem}, this computation becomes
\begin{equation}\label{eq:lower-bounds-indiv}
\bar h^j(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)=\min\limits_{\vx\in \mathcal{X}}\left\{(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx:\ (-\mathbf{A}\bm{\xi}_p^j- \mathbf{a}_p)^\top\vx\geq -\vb^\top\bm{\xi}_p^j-d_p\right\}.
\end{equation}
In the optimization problem~\eqref{eq:lower-bounds-indiv}, we can take $\mathbb{R}^L$ or $\mathbb{R}^L_+$ as a relaxation $\bar{\mathcal{X}}$ of $\mathcal{X}$. %
But then the problem in~\eqref{eq:lower-bounds-indiv} becomes trivial and its optimal value is not necessarily finite. In such cases, instead of an individual constraint, we can set $s(\vx,\bm{\xi}^j)=\min_{p\in[P]}\left\{(-\mathbf{A}\bm{\xi}_p^j- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p^j+d_p)\right\}$ in~\eqref{eq:lower-bounds-indiv} so that
\begin{align}
&\bar h^j(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)\label{eq:lower-bounds-joint}\\
&=\min\limits_{\vx\in \mathcal{X}}\left\{(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx: (-\mathbf{A}\bm{\xi}_g^j- \mathbf{a}_g)^\top\vx\geq -\vb^\top\bm{\xi}_g^j-d_g,\ g\in[P]\right\}.\nonumber
\end{align}
Although~\eqref{eq:lower-bounds-joint} provides a stronger value than~\eqref{eq:lower-bounds-indiv}, it requires solving a linear program with many constraints even when $\mathcal{X}=\mathbb{R}^L$ or $\mathcal{X}=\mathbb{R}_+^L$. In~\cref{sec:cover-pack}, we study \emph{packing} and \emph{covering} constraints as a special case, where the problems~\eqref{eq:lower-bounds-indiv} are easily solvable. We find that the time taken to compute $q_p^i$ is negligible for the applications considered in \cref{sec:experiments} even without such covering or packing structure.
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
\begin{remark}\label{rem:mixing-RHS-comparison}
One of the key differences between our previous paper \citep{rhs2020} and the current one lies in the derivation of the strengthenings, i.e., the reduction of the big-$M$ coefficients for the DR-CCP formulation in \cref{sec:bigM_reduction}, which requires applying the mixing procedure described in \cref{sec:mixing} in a specific manner. This is more involved than the procedure used in the right-hand side setting. To be specific, a linear constraint with random right-hand side under $N$ scenarios gives rise to inequalities with the same left-hand side but $N$ different right-hand sides. As they share the same left-hand side, these inequalities can be grouped, and we can apply the coefficient strengthening developed in~\citep{rhs2020}. However, this grouping is specific to the right-hand side uncertainty case, because a constraint with random left-hand side under $N$ scenarios gives rise to $N$ inequalities with $N$ different left-hand side terms. In this case, we cannot group the inequalities, and each one must be dealt with separately. This complicating factor was also addressed in \cite{luedtke2014branch-and-cut} for SAA-based chance-constrained programs. Our strengthening procedure for the left-hand side uncertainty case of DR-CCP is adapted from \cite{luedtke2014branch-and-cut}.
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
\section{Covering and packing constraints}\label{sec:cover-pack}
Covering and packing problems have attracted special attention in the literature \citep{bicriteria,binary-packing,ZhangJiangShen2018}---their special structures can often be exploited for efficiency. Accordingly, we now focus on this special case.
Consider constraints of the form
\begin{equation}\label{eq:constraint}
(-\mathbf{A}\bm{\xi}_p- \mathbf{a}_p)^\top\vx>-\vb^\top\bm{\xi}_p-d_p \quad\text{and}\quad (-\mathbf{A}\bm{\xi}_p- \mathbf{a}_p)^\top\vx\geq -\vb^\top\bm{\xi}_p-d_p,
\end{equation}
where the coefficients $-\mathbf{A}\bm{\xi}_p- \mathbf{a}_p$ of the decision vector $\vx$ and the right-hand side $-\vb^\top\bm{\xi}_p-d_p$ have the same sign. In~\eqref{eq:constraint} the strict inequality follows from considering open safety sets. We say that~\eqref{eq:constraint} are \emph{covering} type if $-\mathbf{A}\bm{\xi}_p- \mathbf{a}_p\geq\bf{0}$ and $-\vb^\top\bm{\xi}_p-d_p\geq0$, and we say that constraints~\eqref{eq:constraint} are \emph{packing} type if $-\mathbf{A}\bm{\xi}_p- \mathbf{a}_p\leq\bf{0}$ and $-\vb^\top\bm{\xi}_p-d_p\leq0$. For example, a \emph{probabilistic portfolio optimization problem} can be defined by a covering constraint; its distributionally robust chance-constrained program formulation is given as follows:
\begin{align}
\min\limits_{\vx} \quad & \mathbf{c}^\top \vx\notag\\
\text{s.t.}\quad & \mathbb{P}[\bm{\xi}^\top\vx >w ] \geq 1-\epsilon,\quad \forall \mathbb{P} \in \mathcal{F}_N(\theta),\tag{Portfolio}\label{eq:portfolio}\\
& \vx\geq\bm{0},\notag
\end{align}
where $\bm{\xi}$ captures the random yields of financial assets; each component encodes the ratio of the end price to the initial price of a financial product (a ratio greater than~1 implies profit, whereas a ratio less than~1 indicates loss). Here, $\mathbf{c}$ is the cost vector and $w$ denotes a prescribed target return. We may assume that no price ever drops to 0. Then every realization of $\bm{\xi}$ satisfies $\bm{\xi}>\bm{0}$, and thus~$\bm{\xi}^\top\vx >w$ is a covering constraint.
In \cref{sec:bigM_reduction}, we presented a way to improve the value of $M^i$ in~\eqref{joint:bigM2}, which allows us to replace~\eqref{joint:bigM2} by $(-\mathbf{A}\bm{\xi}_p- \mathbf{a}_p)^\top\vx+ (\vb^\top\bm{\xi}_p+d_p) + (-\vb^\top\bm{\xi}_p-d_p-q_p^i)z^i \geq t-r_i$ where $q_p^i$ is given by~\eqref{eq:quantiles}. Moreover, in \cref{remark:relaxed-quantiles} we argue that $q_p^i$ can be relaxed by any lower bound $\beta_p^i$ on $q_p^i$, especially when the exact evaluation of $q_p^i$ is computationally expensive. Next, we establish that for covering and packing constraints, we can efficiently compute a strong lower bound on $q_p^i$ under a mild assumption.
\begin{lemma}\label{lemma:cov-pack-quantiles}
Suppose that constraints~\eqref{eq:constraint} are in the form of covering or packing. Further, assume that all realizations of $-\mathbf{A}\bm{\xi}_p- \mathbf{a}_p$ have the same support and that $\mathcal{X}\subseteq \mathbb{R}_+^L$. Then for $i,j\in[N]$ and $p\in[P]$,
\begin{equation}\label{eq:h_j}
\bar h^j_p(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)\geq\min\left\{(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)_\ell\frac{(-\vb^\top\bm{\xi}_p^j-d_p)}{(-\mathbf{A}\bm{\xi}_p^j- \mathbf{a}_p)_\ell}:\ \ell\in\text{supp}(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)\right\}
\end{equation}
where $\text{supp}(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)$ denotes the support of $-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p$, i.e., the set of indices of its nonzero entries.
\end{lemma}
\begin{proof}
We consider the case when~\eqref{eq:constraint} are covering type; the packing case can be proved similarly. Since $\mathcal{X}\subseteq \mathbb{R}_+^L$, it follows from~\eqref{eq:lower-bounds-indiv} that for $i,j\in[N]$ and $p\in[P]$,
\begin{equation}\label{eq:h_j'}
\bar h^j_p(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)\geq\min\limits_{\vx\geq\bm{0}}\left\{(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)^\top\vx:(-\mathbf{A}\bm{\xi}_p^j- \mathbf{a}_p)^\top\vx\geq -\vb^\top\bm{\xi}_p^j-d_p\right\}.
\end{equation}
Since $-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p$ and $-\mathbf{A}\bm{\xi}_p^j- \mathbf{a}_p$ have the same support, after possibly projecting out some variables in $\vx$, we may assume that $-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p>\bm{0}$ and $-\mathbf{A}\bm{\xi}_p^j- \mathbf{a}_p>\bm{0}$. Then, the minimum of the linear program in~\eqref{eq:h_j'} is attained at a vertex of the simplex $\{\vx\in\mathbb{R}_+^L:(-\mathbf{A}\bm{\xi}_p^j- \mathbf{a}_p)^\top\vx=-\vb^\top\bm{\xi}_p^j-d_p\}$, thus~\eqref{eq:h_j} follows.
\ifx\flagJournal1 \qed \fi
\end{proof}
Given the lower bounds on $\bar h^j_p(-\mathbf{A}\bm{\xi}_p^i- \mathbf{a}_p)$ for $j\in[N]$ obtained from the closed form in~\eqref{eq:h_j}, their $(\lfloor \epsilon N\rfloor+1)$-th largest value is a lower bound on $q_p^i$, due to~\eqref{eq:quantiles}.
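The bound~\eqref{eq:h_j} is a simple closed form, since the minimum of the linear program in~\eqref{eq:h_j'} is attained at a vertex of the simplex supported on a single coordinate. The following sketch evaluates it on hypothetical coefficient vectors with a common support.

```python
# Closed-form lower bound (eq:h_j) for a covering constraint, with
# hypothetical data: c_i = -A xi_p^i - a_p >= 0, c_j = -A xi_p^j - a_p >= 0
# (same support), and rhs_j = -b^T xi_p^j - d_p >= 0.
def h_lower_bound(c_i, c_j, rhs_j):
    support = [l for l, v in enumerate(c_i) if v != 0.0]
    # The LP minimum is attained at a vertex of the simplex
    # {x >= 0 : c_j^T x = rhs_j}, i.e., at (rhs_j / c_j[l]) * e_l.
    return min(c_i[l] * rhs_j / c_j[l] for l in support)

c_i = [2.0, 0.0, 1.0]
c_j = [4.0, 0.0, 1.0]
print(h_lower_bound(c_i, c_j, 8.0))   # min(2*8/4, 1*8/1) = 4.0
```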
\section{Computational Study}\label{sec:experiments}
We test the effectiveness of our developments on portfolio optimization and probabilistic resource planning problems. We detail the explicit form of these problems, along with instance generation and numerical conclusions in~\cref{sec:portfolio,sec:resource-planning}, respectively. For both problems, we use the $\ell_2$-norm for $\|\cdot\|$ to define the Wasserstein distance~\eqref{eq:wasserstein-dist}.
We conducted all of the experiments on an Intel Core i5 3GHz processor with 6 cores and 32GB memory. Each experiment was run in single-core mode, and five experiments were run in parallel. We enforced a time limit of 3600 seconds on each model. All solution times are reported in seconds and were measured externally from CPLEX in our C++ implementation.
We used CPLEX 12.9 as the MIP solver and its user-cut callback feature to separate and add cuts from an exponential family. It is well known that using a user-cut callback function affects various internal CPLEX dynamics (such as dynamic search and the aggressiveness of the CPLEX presolve and cut generation procedures). Thus, to make a fair comparison,
we included an empty user-cut callback function, which does not separate any user cuts, in the implementation of the basic formulation given by~\citet{chen2018data} and~\citet{xie2018distributionally}. We opted to separate our inequalities only at the root node because our preliminary tests showed that separating a large number of inequalities throughout the branch-and-cut tree usually slows down the search process.
\newcommand{\texttt{Basic}\xspace}{\texttt{Basic}\xspace}
\newcommand{\texttt{Improved}\xspace}{\texttt{Improved}\xspace}
\newcommand{\texttt{Mixing}\xspace}{\texttt{Mixing}\xspace}
\newcommand{\texttt{Path}\xspace}{\texttt{Path}\xspace}
\newcommand{\texttt{Mixing+Path}\xspace}{\texttt{Mixing+Path}\xspace}
We compare the following three formulations:
\begin{description}
\item[\texttt{Basic}\xspace:] the basic formulation~\eqref{eq:joint} given by~\citet{chen2018data} and~\citet{xie2018distributionally} where we discuss the big-$M$ computations based on the corresponding problem classes separately below,
\item[\texttt{Improved}\xspace:] the improved formulation~\eqref{eq:joint-k-reduced},
\item[\texttt{Mixing}\xspace:] the improved formulation~\eqref{eq:joint-k-reduced} with the mixing inequalities~\eqref{eq:mixing}.
\end{description}
For each formulation, we recorded the following statistics:
\begin{description}
\item[Slv(Fnd):] the number of instances solved to optimality within the CPLEX time limit and, in parentheses, the number of instances for which a feasible solution was found, and hence, an upper bound is available.
\item[Time(Gap):] the average solution time, in seconds, of the instances that were solved to optimality, and, in parentheses, the average final optimality gap of the instances that were not solved to optimality within the CPLEX time limit. The optimality gap is computed as $100\times(UB-LB)/LB$, where $UB$ and $LB$ are, respectively, the objective value of the best feasible solution and the best lower bound at termination. A `*' in a Time or Gap entry indicates that either no instance or every instance was solved to optimality within the CPLEX time limit, so that there were no instances for measuring the corresponding statistic.
\item[R.time:] the average time spent at the root node of the branch-and-cut tree over all instances, in seconds.
\item[R.gap(Fnd):] the final optimality gap at the root node of the branch-and-cut tree and, in parentheses, the number of instances for which a feasible solution was found at the root node, and hence, an upper bound is available. A `*' entry for the gap indicates that no feasible solution was found at the root node in any of the 10 instances within the CPLEX time limit.
\end{description}
\subsection{Portfolio optimization}\label{sec:portfolio}
We consider the distributionally robust chance-constrained programming formulation of a portfolio optimization problem from~\citet{chen2018data} given by~\eqref{eq:portfolio}. The problem is to find a minimum cost portfolio investment $\vx$ into $K$ assets with random returns $\bm{\xi}=(\xi_1,\ldots,\xi_K)^\top\in\mathbb{R}_+^K$ while achieving a prescribed target return with probability at least $1-\epsilon$. Problem~\eqref{eq:portfolio} admits the following MIP reformulation:
\begin{subequations}\label{eq:portfolio-re}
\begin{align}
\min\limits_{\vz, \mathbf{r}, t, \vx} \quad & \mathbf{c}^\top \vx\label{portfolio:obj}\\
\text{s.t.}\quad & \vz \in \{0,1\}^N,\ t \geq 0, \ \mathbf{r} \geq \bm{0},\ \vx \geq \bm{0},\label{portfolio:vars}\\
& \epsilon\, t \geq \theta \|\vx\|_* + \frac{1}{N} \sum_{i \in [N]} r^i,\label{portfolio:conic}\\
& M^i (1-z^i) \geq t-r^i, \quad i \in [N],\label{portfolio:bigM1}\\
& \vx^\top\bm{\xi}^i-w + M^i z^i \geq t-r^i, \quad i \in [N],\label{portfolio:bigM2}\\
&\sum_{i\in[N]}z^i\leq \lfloor \epsilon N\rfloor.\label{portfolio:knapsack}
\end{align}
\end{subequations}
In fact, $\vx =\bm{0}$ with $(\vz,\mathbf{r},t)=(\bm{1},\bm{0},0)$ satisfies~\eqref{portfolio:vars}--\eqref{portfolio:bigM2}, and thus it would be an optimal solution if~\eqref{portfolio:knapsack} were not present. Hence,~\eqref{portfolio:knapsack} is necessary, and by \cref{thm:knapsack-valid-joint},~\eqref{eq:portfolio-re} is an exact reformulation of~\eqref{eq:portfolio}.
Adapting our formulation~\eqref{eq:joint-k-reduced} to model~\eqref{eq:portfolio}, we obtain another formulation that is the same as~\eqref{eq:portfolio-re} except that~\eqref{portfolio:bigM2} is replaced with
\begin{equation}\label{portfolio:bigM2'}
\vx^\top\bm{\xi}^i-w + (w-q^i)z^i \geq t-r^i,\quad i\in[N],
\end{equation}
where $q^i$ is defined as in~\eqref{eq:quantiles}. As $\bm{\xi}^\top\vx >w$ is a covering constraint, we can compute a lower bound $q^i$ based on~\eqref{eq:h_j} in \cref{lemma:cov-pack-quantiles}. We next discuss how to select valid big-$M$ values in~\eqref{eq:portfolio-re}. %
\begin{remark}\label{remark:portfolio-M}
For~\eqref{eq:portfolio-re}, the domain of $\vx$ is not bounded, and hence $M^i$ given by~\eqref{eq:bigMvalue-joint} is not bounded. Then, as discussed in \cref{rem:bigM}, for some optimal $\vx$ to \cref{eq:portfolio-re}, we can choose $M^i\geq |\vx^\top \bm{\xi}^i-w|$ for each $i \in [N]$. Let $\vx$ be an optimal solution to~\eqref{eq:portfolio-re}.
First, since $\vx^\top \bm{\xi}^i\geq0$, it follows that $(\vx^\top \bm{\xi}^i-w)+w=\vx^\top \bm{\xi}^i\geq0$, so $-(\vx^\top \bm{\xi}^i-w)\leq w$ for all $i\in[N]$. Let $J\subseteq [N]$ denote the set of scenarios $j$ such that $\vx^\top \bm{\xi}^j-w\geq 0$. Then, $J$ is nonempty because $\vx$ satisfies the nominal chance constraint with nonzero probability. If $\vx^\top \bm{\xi}^j-w>0$ for all $j\in J$, one can scale down $\vx$ by a factor $\delta\in(0,1)$ such that $\delta\vx^\top \bm{\xi}^j-w\geq0$ for all $j\in J$, thereby satisfying the same set of scenarios while obtaining a better solution. So, we may assume that $\vx^\top \bm{\xi}^j=w$ for some $j\in J$. Let $\xi_{\max}$ and $\xi_{\min}$ be the maximum and the minimum coordinate values over the scenario vectors $\bm{\xi}^1,\ldots,\bm{\xi}^N$. Then, for all $i\in[N]$, $\vx^\top \bm{\xi}^i\leq \xi_{\max} \vx^\top\bm{1}\leq {\xi_{\max}}\vx^\top \bm{\xi}^j/{\xi_{\min}}=w\cdot{\xi_{\max}}/{\xi_{\min}}$,
implying that $(\xi_{\max}/\xi_{\min}-1)w \geq \vx^\top \bm{\xi}^i-w$ holds for all $i\in[N]$. Thus, it is sufficient to set
\begin{equation}\label{portfolio:M}
M^i=\max\{w,\ \left({\xi_{\max}}/{\xi_{\min}}-1\right)w \},\quad i\in[N].
\end{equation}
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
\subsubsection{Instance Generation}
We follow the same instance generation scheme of~\citet{chen2018data} (and hence that of~\citet{Xie18}). We set $K=50$, $w=1$, and the cost coefficients $c_i$, for $i\in[50]$, %
are chosen uniformly at random from $\{1,\ldots,100\}$. As mentioned in \cref{sec:cover-pack}, each $\xi_i$ indicates the ratio of the end price and the initial price so that $\bm{\xi}$ always remains positive. For our experiments, we generate each coordinate of $\bm{\xi}$ uniformly at random from $[0.8,1.5]$. Based on \cref{remark:portfolio-M} and~\eqref{portfolio:M}, we set $M^i=1$ for all $i \in [N]$. As we use the $\ell_2$-norm for Wasserstein ambiguity sets, reformulations~\eqref{eq:joint} and~\eqref{eq:joint-k-reduced} become mixed-integer second-order cone programs. We test a set of values for the Wasserstein radius $\theta$ and risk tolerance $\epsilon$; we choose $\theta\in\{0.05,0.1,0.2\}$ and $\epsilon\in\{0.05,0.1\}$. For each problem parameter combination, we generate 10 random instances and report the average statistics.
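The instance scheme and the big-$M$ bound from \cref{remark:portfolio-M} can be reproduced in a few lines; this is only a sketch (the seed and the random generator are our own choices, not those of the original scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, w = 50, 500, 1.0
c = rng.integers(1, 101, size=K)           # costs uniform on {1, ..., 100}
xi = rng.uniform(0.8, 1.5, size=(N, K))    # price ratios, strictly positive

# Big-M value from (portfolio:M): max{w, (xi_max / xi_min - 1) w}.
M = max(w, (xi.max() / xi.min() - 1.0) * w)
```

Since $\xi_{\max}/\xi_{\min}\leq 1.5/0.8<2$, the second term is below $w=1$, so $M=1$, matching the value used in the experiments.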
\subsubsection{Performance Analysis} \label{subsec:portfolio-results}
Our experiments with $N\in\{500,1000\}$ scenarios are summarized in \cref{tab:port-full}.
Note that these correspond to much larger number of scenarios than $N\in\{100,110,\ldots,200\}$ considered previously in~\cite{chen2018data}. For completeness, we present experiments on $N\in\{100,300\}$ scenarios and a brief discussion in Appendix~\ref{sec:portfolio_small}.
\input{portresults}
For both $N\in\{500, 1000\}$, mixing inequalities are very rarely separated when $\theta=0.02$, and they are never separated for $\theta>0.02$. Since mixing inequalities are never separated when $\theta>0.02$, the performances of \texttt{Mixing}\xspace and \texttt{Improved}\xspace are almost identical in terms of all of the statistics, including the root node statistics. The non-separation of mixing inequalities for large $\theta$ follows from the fact that the nominal region $\mathcal{X}_{\SAA}(\mathcal{S})$ (and consequently the resulting mixing inequalities) is a worse approximation of the distributionally robust region $\mathcal{X}_{\DR}(\mathcal{S})$ as $\theta$ gets larger. The same phenomenon was also observed in \cite{rhs2020} for DR-CCP with RHS uncertainty. Thus, we report the relevant statistics for \texttt{Mixing}\xspace only for $\theta \leq 0.02$ in \cref{tab:mixing} in \cref{app:mixing}.
We observe that when the radius $\theta$ is small, the resulting problems are much harder to solve. Such difficulty of the problems for small $\theta$ was also reported by~\citet{rhs2020} for DR-CCP with RHS uncertainty. For example, for $\theta=0.001$, none of the models is able to solve any one of the ten instances for $N=500$ or $N=1000$ within the time limit of 3600 seconds. Despite this, we observe that in terms of the average final optimality gap for $\theta=0.001$ and $N=500$ ($N=1000$), \texttt{Mixing}\xspace is the best with 6.10\% gap (8.79\%), followed by \texttt{Improved}\xspace with 14.29\% (16.37\%), and finally \texttt{Basic}\xspace with 18.85\% (23.87\%).
In the case of $\theta=0.001$, it is noteworthy that the average number of mixing inequalities separated is still relatively small: 89.6 in the case of $N=500$ and 271.5 for $N=1000$.
However, for $\theta=0.001$ and $N=1000$, comparing \texttt{Improved}\xspace and \texttt{Mixing}\xspace, we note that the mixing inequalities improve the average root gap from 17.89\% to 16.43\%. This may appear to be a modest reduction, but surprisingly, it results in a reduction in the final optimality gap from 16.37\% to 8.79\% on average. %
Overall, these results highlight the positive computational impact of our developments in \texttt{Improved}\xspace and \texttt{Mixing}\xspace for small $\theta$.
As for the other $\theta$ values, we observe that \texttt{Improved}\xspace consistently outperforms \texttt{Basic}\xspace in terms of the number of instances solved for all $N$ and $\theta$ values. This is particularly striking for $N=1000$. In this case, \texttt{Basic}\xspace is unable to solve (with the exception of one instance out of ten for $\theta=0.04$) any of the ten randomly generated problem instances for any of the $\theta$ values within the time limit of 3600 seconds. In contrast, for all of the $\theta$ values greater than or equal to $0.1$, \texttt{Improved}\xspace solves at least 7 out of 10 random instances within an average of less than 300 seconds. For the instances that were unsolved for $N=1000$ and $\theta\ge0.04$, the reported average final gaps for \texttt{Basic}\xspace range between 12\% to 24.8\%, whereas the same range for \texttt{Improved}\xspace is 3.5\% to 14\%.
It may appear that for $N=500$ and $\theta\in\{0.04,0.06\}$, the overall solution time of \texttt{Improved}\xspace is longer than that of \texttt{Basic}\xspace, but this is because more instances are solved by \texttt{Improved}\xspace within the time limit (5 and 7 versus 2 and 1, respectively). For some instances, even finding a feasible solution within the time limit is a challenge for both formulations, in particular for $N=1000$ and $\theta\in\{0.04,0.06\}$.
Finally, observe that the solution times at the root node for \texttt{Basic}\xspace and \texttt{Improved}\xspace are very similar, but the root node gap of \texttt{Improved}\xspace is better than that of \texttt{Basic}\xspace in most cases. This difference is more pronounced for the instances with $N=500$ and large $\theta$. The large improvement in the root gap for these instances translates into much faster overall solution times.
\subsection{Probabilistic resource planning}\label{sec:resource-planning}
We consider a probabilistic resource planning problem studied by~\citet{luedtke2014branch-and-cut} in the context of solving~\eqref{eq:saa-ccp}. Given a set of resources %
and a set of customer groups, %
the problem is to decide the quantity of each resource with the minimum cost to satisfy customer demands, i.e.,
\begin{equation}\label{eq:resource-panning}
\min_{\vx\in\mathbb{R}_+^D,\ \vy\in\mathbb{R}_+^{DP}}\left\{\mathbf{c}^\top\vx:\ \begin{array}{ll}
\sum_{p\in [P]}y_{dp}\leq \rho_dx_d,&d\in[D]\\
\sum_{d\in [D]}\mu_{dp}y_{dp}\geq\lambda_p,& p\in[P]
\end{array}\right\},\tag{RSRC-Plan}
\end{equation}
where $D$ is the number of resources and $P$ is the number of customer groups, $c_d$ is the unit production cost of resource $d\in[D]$, $\rho_d\in(0,1]$ represents the random yield of resource $d$ (e.g., the fraction of planned production that is available), $\lambda_p$ denotes the random demand of customer group $p\in[P]$, and $\mu_{dp}$ represents the random service rate of resource $d$ for customer group $p$.
Here, $x_d$ is the variable for the quantity of resource $d$ to be produced and $y_{dp}$ is the variable for the amount of resource $d$ allocated to customer group $p$. The constraints $\sum_{p\in [P]}y_{dp}\leq \rho_dx_d$ for $d\in[D]$ in~\eqref{eq:resource-panning} are \emph{resource assignment constraints}, and $\sum_{d\in [D]}\mu_{dp}y_{dp}\geq\lambda_p$ for $p\in[P]$ are \emph{demand satisfaction constraints}. Let $(\bm{\rho}^i,\vmu^i,\vlambda^i)\in\mathbb{R}_+^D\times\mathbb{R}_+^{DP}\times\mathbb{R}_+^P$ be the realization of the random parameters under scenario $i\in[N]$. Then the DR-CCP formulation of~\eqref{eq:resource-panning} is given by %
\begin{subequations}\label{eq:rsrcp-re}
\begin{align}
\min\limits_{\vz, \mathbf{r}, t, \vx,\vy} \quad & \mathbf{c}^\top \vx\label{rsrcp:obj}\\
\text{s.t.}\quad & \vz \in \{0,1\}^N,\ t \geq 0, \ \mathbf{r} \geq \bm{0},\ \vx \geq \bm{0},\ \vy \geq \bm{0},\label{rsrcp:vars}\\
& \epsilon\, t \geq \theta \left\|(\vx,\vy,\bm{1})^\top\right\|_* + \frac{1}{N} \sum_{i \in [N]} r^i,\label{rsrcp:conic}\\
& M^i(1-z^i) \geq t-r^i, \quad i \in [N],\label{rsrcp:bigM1}\\
& \rho_d^ix_d- \sum_{p\in [P]}y_{dp} + M^iz^i \geq t-r^i, \quad i \in [N],\ d\in[D],\label{rsrcp:bigM2-d}\\
& \sum_{d\in [D]}\mu_{dp}^iy_{dp}-\lambda_p^i + M^iz^i \geq t-r^i, \quad i \in [N],\ p\in[P]. \label{rsrcp:bigM2-p}%
\end{align}
\end{subequations}
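Before the DR-CCP layer, it is worth keeping in mind that for a single fixed realization $(\bm{\rho},\vmu,\vlambda)$, problem~\eqref{eq:resource-panning} is a plain linear program. The following toy sketch solves a tiny hand-made instance with \texttt{scipy}; all numerical values are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

D, P = 2, 2
c = np.array([1.0, 1.0])          # unit production costs
rho = np.array([0.9, 0.8])        # fixed yields for this sketch
mu = np.ones((D, P))              # unit service rates
lam = np.array([1.0, 1.0])        # demands

# Variables: x (D entries), then y flattened so y[d, p] sits at D + d*P + p.
n = D + D * P
A_ub = np.zeros((D + P, n))
b_ub = np.zeros(D + P)
for d in range(D):                # resource assignment: sum_p y_dp - rho_d x_d <= 0
    A_ub[d, d] = -rho[d]
    A_ub[d, D + d * P: D + (d + 1) * P] = 1.0
for p in range(P):                # demand satisfaction: -sum_d mu_dp y_dp <= -lam_p
    for d in range(D):
        A_ub[D + p, D + d * P + p] = -mu[d, p]
    b_ub[D + p] = -lam[p]

obj = np.concatenate([c, np.zeros(D * P)])
res = linprog(obj, A_ub=A_ub, b_ub=b_ub)  # default bounds give x, y >= 0
```

With equal costs and unit service rates, all demand is served by the higher-yield resource, so the optimal cost is $2/0.9\approx 2.22$.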
\subsubsection{Instance generation and big-$M$ computation}
We test instances with $D=10$, $P=20$, and $\epsilon=0.1$. For the cost vector $\mathbf{c}$ and the random parameters $(\bm{\rho},\vmu,\vlambda)$, we use the same setting of~\citet[Section 3.3]{luedtke2014branch-and-cut} (further details of instance generation can be found in~\citet{luedtke2014branch-and-cut-supplement}). This instance generation scheme ensures that each sample data $(\bm{\rho}^i,\vmu^i,\vlambda^i)$ is nonnegative almost surely. We empirically found that the problem becomes infeasible when $\theta$ gets above 0.01, so we test 10 different values $\{0.0001,0.001,0.002,\ldots,0.009\}$ for $\theta$.
Since the domain $\mathcal{X}$ of~\eqref{eq:rsrcp-re} is not bounded, we need to choose a value for $M^i$ based on~\eqref{eq:bigMvalue-joint-opt}, i.e., for some optimal $(\vx,\vy)$ to \cref{eq:rsrcp-re}, set $M^i$ to be greater than or equal to
\begin{equation}\label{rsrcp:M-lb}
\max_{d\in[D]}\bigg\{\bigg|\rho_d^ix_d- \sum_{p\in [P]}y_{dp}\bigg|\bigg\}\ \text{and} \ \max_{p\in[P]}\bigg\{\bigg|\sum_{d\in [D]}\mu_{dp}^iy_{dp}-\lambda_p^i\bigg|\bigg\}.
\end{equation}
Using the nonnegativity of data $(\bm{\rho}^i,\vmu^i,\vlambda^i)$, in \cref{remark:rsrcp-M} in~\cref{app:big-M-resource-planning}, we provide an upper bound on~\eqref{rsrcp:M-lb}, thereby providing a value for $M^i$.
Notice that the demand constraints in~\eqref{eq:resource-panning} are covering type, so we can improve~\eqref{rsrcp:bigM2-p} by reducing $M^i$ based on \cref{lemma:cov-pack-quantiles}. However, the resource assignment constraints are neither covering nor packing type, hence we cannot apply \cref{lemma:cov-pack-quantiles} to compute the reduced coefficient for~\eqref{rsrcp:bigM2-d}. So, %
in~\cref{remark:rsrcp-yield-lower-bounds} we describe our reduced coefficient computation for~\eqref{rsrcp:bigM2-d} based on~\eqref{eq:lower-bounds-joint}. %
\subsubsection{Performance Analysis} \label{subsec:resource-results}
We summarize our experiments with $N\in\{100,300\}$ scenarios in \cref{tab:RP-full}.
Note that the resource planning problems with LHS uncertainty are significantly more difficult than the portfolio optimization problems; thus, the number of scenarios $N$ we could scale to was much smaller than in \cref{sec:portfolio}.
\input{resourceplanningresults}
We continue to see that when the radius $\theta$ is small, the resulting problems are much harder to solve.
For example, for $\theta\in\{0.001,0.002\}$, \texttt{Basic}\xspace is not able to solve any one of the ten instances for $N=100$ within the time limit. That said, for the really small radius of $\theta=0.0001$, the instances are slightly easier with more instances solved to optimality than $\theta=0.001$ for all models.
For $N=100$, as $\theta$ increases, more instances are solved by \texttt{Basic}\xspace; however, even for the largest $\theta$, i.e., $\theta=0.009$, there are three instances for which \texttt{Basic}\xspace is not able to find an optimal solution. In contrast, \texttt{Improved}\xspace is able to solve all instances to optimality for $\theta\in\{0.007,0.008,0.009\}$.
Furthermore, for $N=300$, \texttt{Basic}\xspace is not able to solve any of the instances for \emph{any} $\theta$. These instances are simply intractable for \texttt{Basic}\xspace, which terminates with over 90\% optimality gap in all test cases. In contrast, for the largest $\theta$ tested, \texttt{Improved}\xspace finds an optimal solution to all ten instances well within the time limit.
Comparing the quality of the solutions at termination, we observe that the optimality gaps for \texttt{Basic}\xspace are extremely large in these instances, ranging from 45\% for $N=100, \theta=0.009$ to 97.08\% for $N=300, \theta=0.009$. In contrast, the optimality gaps for \texttt{Improved}\xspace range from 0\% for various settings including $N=100, \theta\in\{0.007,0.008,0.009\}$ and $N=300, \theta=0.009$ to at most 14.51\% for $N=100, \theta=0.005$.
It is interesting to note that in most cases, an integer feasible solution is not found at the root node by either \texttt{Basic}\xspace or \texttt{Improved}\xspace, so the root gap information is not available; the exception is that \texttt{Basic}\xspace is able to find a feasible solution for four instances for $N=100$, $\theta=0.0001$, albeit with 100\% optimality gap.
This observation is reversed for $N=300$, $\theta=0.0001$, where \texttt{Basic}\xspace is unable to report a root gap for any instance, whereas \texttt{Improved}\xspace is able to report an average gap of 11.25\% for six instances.
A few comments are in order for the performance of \texttt{Mixing}\xspace. Once again, we only report these results for $\theta \leq 0.001$ in \cref{app:mixing}, since we observed that no mixing inequalities are separated for $\theta > 0.001$ for any $N\in\{100,300\}$. That said, \texttt{Mixing}\xspace is quite effective for $\theta=0.0001$. For $N=100$, $\theta=0.0001$, an average of 687.3 mixing inequalities are separated, and \texttt{Mixing}\xspace is able to solve nine instances to optimality (three more than \texttt{Improved}\xspace), and in a smaller average solution time (800 seconds versus 979 seconds). The effectiveness of \texttt{Mixing}\xspace decreases when $\theta=0.001$: in this case, only an average of 12.5 mixing inequalities are separated when $N=100$. Indeed, in this case, \texttt{Mixing}\xspace solves one fewer instance to optimality than \texttt{Improved}\xspace, and there is only a moderate decrease in the optimality gap (7.99\% for \texttt{Mixing}\xspace versus 9.03\% for \texttt{Improved}\xspace). More mixing cuts are separated on average for $N=300$: 4337.9 and 223.5, respectively, for $\theta=0.0001$ and $\theta=0.001$. Despite this, these instances are still unsolvable within the time limit. Nevertheless, there is a moderate decrease in the final gap from 7.68\% to 7.55\% for $\theta=0.0001$ and from 13.22\% to 11.88\% for $\theta=0.001$. Finally, with respect to root gaps, in contrast to \texttt{Improved}\xspace, the only interesting statistic for \texttt{Mixing}\xspace is that for $N=300$, $\theta=0.0001$, it achieves a smaller average root gap of 10.28\%, albeit over fewer instances for which a root gap is available (five instead of six) than \texttt{Improved}\xspace. %
In summary, we observe that our proposed \texttt{Improved}\xspace formulation drastically increases our ability to obtain high-quality solutions to \eqref{eq:dr-ccp}. \texttt{Mixing}\xspace provides additional improvement for cases when $\theta$ is small.
\section{Additional Results for Portfolio Optimization}\label{sec:portfolio_small}
In this section, we present additional results for the portfolio optimization problems from Section~\ref{subsec:portfolio-results} for $N\in\{100, 300\}$. The results in Tables \ref{tab:port-full-N100} and \ref{tab:port-full-N300} demonstrate once again the improved performance of our approach for \emph{all} parameter regimes, including small $N$ and large $\theta$. Note that while the basic formulation does not have trouble solving portfolio instances with $N=100$ (and also with $N=300$ and $\theta \geq 0.04$), we still observe that the improved formulation solves noticeably more quickly in these parameter regimes.
\begin{table}[h!t]
\centering
\caption{Results for portfolio for $N=100$}
\label{tab:port-full-N100}
\begin{tabular}{l|rrrr|rrrr}
\toprule
{} & \multicolumn{4}{c}{\texttt{Basic}} & \multicolumn{4}{c}{\texttt{Improved}} \\
{$\theta$} & Slv(Fnd) & Time(Gap) & R.time & (Fnd)R.gap & Slv(Fnd) & Time(Gap) & R.time & (Fnd)R.gap \\
\midrule
0.001 & 10(10) & 0.75(*) & 0.15 & (10)18.60 & 10(10) & 0.29(*) & 0.04 & (10)16.25 \\
0.020 & 10(10) & 0.61(*) & 0.40 & (10)18.92 & 10(10) & 0.59(*) & 0.11 & (10)29.14 \\
0.040 & 10(10) & 0.94(*) & 0.74 & (10)19.72 & 10(10) & 0.56(*) & 0.30 & (10)14.92 \\
0.060 & 10(10) & 1.18(*) & 1.00 & (10)15.85 & 10(10) & 0.46(*) & 0.30 & (10)7.96 \\
0.080 & 10(10) & 1.57(*) & 1.35 & (10)17.63 & 10(10) & 0.48(*) & 0.42 & (10)4.31 \\
0.100 & 10(10) & 1.21(*) & 0.98 & (10)19.58 & 10(10) & 0.43(*) & 0.38 & (10)1.99 \\
0.120 & 10(10) & 1.15(*) & 0.90 & (10)21.61 & 10(10) & 0.48(*) & 0.48 & (10)0.10 \\
0.140 & 10(10) & 1.08(*) & 0.81 & (10)21.53 & 10(10) & 0.37(*) & 0.37 & (10)0.00 \\
0.160 & 10(10) & 0.83(*) & 0.56 & (10)19.89 & 10(10) & 0.34(*) & 0.34 & (10)0.00 \\
0.180 & 10(10) & 0.81(*) & 0.57 & (10)19.41 & 10(10) & 0.32(*) & 0.32 & (10)0.08 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[h!t]
\centering
\caption{Results for portfolio for $N=300$}
\label{tab:port-full-N300}
\begin{tabular}{l|rrrr|rrrr}
\toprule
{} & \multicolumn{4}{c}{\texttt{Basic}} & \multicolumn{4}{c}{\texttt{Improved}} \\
{$\theta$} & Slv(Fnd) & Time(Gap) & R.time & (Fnd)R.gap & Slv(Fnd) & Time(Gap) & R.time & (Fnd)R.gap \\
\midrule
0.001 & 0(10) & *(13.51) & 0.36 & (10)23.09 & 0(10) & *(8.61) & 0.14 & (10)17.80 \\
0.020 & 5(10) & 479.47(9.38) & 0.57 & (10)23.48 & 7(10) & 556.67(4.22) & 1.00 & (10)32.42 \\
0.040 & 7(10) & 685.71(4.64) & 0.50 & (10)26.73 & 10(10) & 192.80(*) & 1.44 & (10)20.88 \\
0.060 & 10(10) & 569.36(*) & 1.58 & (10)30.69 & 10(10) & 30.27(*) & 1.84 & (10)16.04 \\
0.080 & 10(10) & 214.67(*) & 2.37 & (10)29.59 & 10(10) & 10.54(*) & 1.97 & (10)11.48 \\
0.100 & 10(10) & 128.98(*) & 2.71 & (10)27.37 & 10(10) & 5.78(*) & 1.94 & (10)7.16 \\
0.120 & 10(10) & 51.87(*) & 2.41 & (10)26.34 & 10(10) & 4.53(*) & 1.87 & (10)5.89 \\
0.140 & 10(10) & 16.90(*) & 2.48 & (10)24.88 & 10(10) & 3.07(*) & 1.69 & (10)4.58 \\
0.160 & 10(10) & 24.67(*) & 2.52 & (10)23.75 & 10(10) & 2.79(*) & 1.92 & (10)4.51 \\
0.180 & 10(10) & 21.71(*) & 2.67 & (10)22.22 & 10(10) & 2.29(*) & 1.67 & (10)3.46 \\
\bottomrule
\end{tabular}
\end{table}
\section{Big-$M$ Computation for Resource Planning}\label{app:big-M-resource-planning}
In this appendix we describe our big-$M$ calculation used in~\eqref{eq:rsrcp-re} of the probabilistic resource planning problem studied in~\cref{sec:resource-planning}.
Recall that each sample data $(\bm{\rho}^i,\vmu^i,\vlambda^i)$ is nonnegative almost surely. Recall also that the domain $\mathcal{X}$ of~\eqref{eq:rsrcp-re} is not bounded, so we need to choose a value for $M^i$ based on~\eqref{eq:bigMvalue-joint-opt}, i.e., for some optimal $(\vx,\vy)$ to \cref{eq:rsrcp-re}, $M^i$ must be selected to be greater than or equal to the quantity in~\eqref{rsrcp:M-lb}.
Next, we provide an upper bound on~\eqref{rsrcp:M-lb}, thereby providing a value for $M^i$.
\begin{remark}\label{remark:rsrcp-M}
Let $(\vx,\vy)$ be an optimal solution to~\eqref{eq:rsrcp-re}. Note that if $\mu_{dp}^i=0$ for all $i\in[N]$, we may assume that $y_{dp}=0$, for otherwise, setting $y_{dp}=0$ does not affect~\eqref{rsrcp:bigM2-p} and is less restrictive for~\eqref{rsrcp:bigM2-d}. Consider a pair of $d\in[D]$ and $p\in[P]$ such that $y_{dp}>0$. Since $(\vx,\vy)$ satisfies the nominal chance constraint with nonzero probability, there exists $i\in [N]$ such that $\sum_{d\in [D]}\mu_{dp}^iy_{dp}-\lambda_p^i\geq0$. In fact, we may assume that there exists $j\in [N]$ such that $\mu_{dp}^{j}>0$ and equality $\sum_{d\in [D]}\mu_{dp}^{j}y_{dp}-\lambda_p^{j}=0$ holds, for otherwise, one can slightly reduce $y_{dp}$ without affecting the validity of $(\vx,\vy)$. Hence, it follows that $y_{dp}\leq \lambda_p^{j}/\mu_{dp}^{j}$ if $y_{dp}>0$. Let $U_{dp}$ be defined as
\[
U_{dp}=\begin{cases}
\max\left\{{\lambda_p^{i}}/{\mu_{dp}^{i}}:\ \mu_{dp}^{i}>0,\ i\in[N]\right\},&\text{if}\ \mu_{dp}^{i}>0 \ \text{for some} \ i\in[N],\\
0,&\text{otherwise}.
\end{cases}
\]
Then, for every $d\in[D]$ and $p\in[P]$, we have $0\leq y_{dp}\leq U_{dp}$. This implies that
\begin{equation}\label{rsrcp:M-lb1}
\max_{p\in[P]}\bigg\{\bigg|\sum_{d\in [D]}\mu_{dp}^iy_{dp}-\lambda_p^i\bigg|\bigg\}\leq\max\limits_{p\in[P]}\bigg\{\max\bigg\{\lambda_p^i,\ \sum_{d\in [D]}\mu_{dp}^iU_{dp}-\lambda_p^i\bigg\}\bigg\}.
\end{equation}
Now let us consider the other term inside~\eqref{rsrcp:M-lb}. Since $(\vx,\vy)$ satisfies the nominal chance constraint with nonzero probability, there exists $i\in [N]$ such that $\rho_d^ix_d- \sum_{p\in [P]}y_{dp}\geq0$, and as before, we may assume that equality $\rho_d^jx_d- \sum_{p\in [P]}y_{dp}=0$ holds for some $j\in[N]$. Hence, it follows that for $i\in[N]$, $
\rho_d^ix_d- \sum_{p\in [P]}y_{dp}=({\rho_d^i}/{\rho_d^j}-1)\sum_{p\in [P]}y_{dp}.$
Let $\rho_d^{\max}:=\max\{\rho_d^j:\ j\in[N]\}$ and $\rho_d^{\min}:=\min\{\rho_d^j:\ j\in[N]\}$. Then
\begin{equation}\label{rsrcp:M-lb2}
\max\limits_{d\in[D]}\bigg\{\bigg|\rho_d^ix_d- \sum_{p\in [P]}y_{dp}\bigg|\bigg\}\leq\max\limits_{d\in[D]}\bigg\{\max\bigg\{1-\frac{\rho_d^i}{\rho_d^{\max}},\ \frac{\rho_d^i}{\rho_d^{\min}}-1\bigg\}\cdot\sum_{p\in[P]}U_{dp}\bigg\},
\end{equation}
and $M^i$ can be set to the maximum of the two values given in the right-hand sides of~\eqref{rsrcp:M-lb1} and~\eqref{rsrcp:M-lb2}.
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
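The bound of \cref{remark:rsrcp-M} is straightforward to compute from sample data. The sketch below (with hypothetical array shapes and toy data of our own choosing) returns the upper bounds $U_{dp}$ and one $M^i$ per scenario:

```python
import numpy as np

def resource_big_M(rho, mu, lam):
    # rho: (N, D) yields, mu: (N, D, P) service rates, lam: (N, P) demands.
    # Returns U (D, P) upper bounds on y and M (N,) big-M values.
    N, D, P = mu.shape
    U = np.zeros((D, P))
    for d in range(D):
        for p in range(P):
            pos = mu[:, d, p] > 0
            if pos.any():
                U[d, p] = np.max(lam[pos, p] / mu[pos, d, p])
    rho_max, rho_min = rho.max(axis=0), rho.min(axis=0)
    M = np.zeros(N)
    for i in range(N):
        # right-hand side of (rsrcp:M-lb1): demand-side bound
        demand = max(max(lam[i, p], mu[i, :, p] @ U[:, p] - lam[i, p])
                     for p in range(P))
        # right-hand side of (rsrcp:M-lb2): yield-side bound
        res = max(max(1.0 - rho[i, d] / rho_max[d],
                      rho[i, d] / rho_min[d] - 1.0) * U[d].sum()
                  for d in range(D))
        M[i] = max(demand, res)
    return U, M

# Tiny illustrative data: N = 2 scenarios, D = P = 1.
rho = np.array([[0.5], [1.0]])
mu = np.array([[[1.0]], [[2.0]]])
lam = np.array([[2.0], [2.0]])
U, M = resource_big_M(rho, mu, lam)
```

On these data, $U_{11}=\max\{2/1,\,2/2\}=2$ and both scenarios receive $M^i=2$, with the binding term being the demand-side bound for the first scenario and both bounds for the second.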
While the demand constraints in~\eqref{eq:resource-panning} are covering type and so we can use~\cref{lemma:cov-pack-quantiles} to improve~\eqref{rsrcp:bigM2-p} by reducing $M^i$, the resource assignment constraints in~\eqref{eq:resource-panning} are neither covering nor packing type, hence we cannot apply~\cref{lemma:cov-pack-quantiles} to compute the reduced coefficient for~\eqref{rsrcp:bigM2-d}. Instead, we compute the reduced coefficient for~\eqref{rsrcp:bigM2-d} based on~\eqref{eq:lower-bounds-joint} as follows.
\begin{remark}\label{remark:rsrcp-yield-lower-bounds}
By \cref{remark:rsrcp-M}, at optimality we have $0 \leq y_{d'p} \leq U_{d'p}$ for all $d' \in [D]$, $p \in [P]$. Then, for $i,j\in[N]$,
\begin{align}
&\bar h_d^j(-\mathbf{A}\bm{\xi}_d^i- \mathbf{a}_d)\label{eq:rsrcp-quantiles-bound}\\
&\geq\min\limits_{\vx\geq\bm{0}, \vy\geq\bm{0}}\left\{\rho_d^ix_d- \sum_{p\in [P]}y_{dp}: \begin{array}{l}
\sum_{d'\in[D]}\mu_{d'p}^jy_{d'p}\geq\lambda_p^j, \ p\in[P],\\ \rho_d^jx_d\geq \sum_{p\in [P]}y_{dp},\\
y_{d'p} \leq U_{d'p}, \ d' \in [D], \ p \in [P]
\end{array}
\right\}\notag\\
&\geq\min\limits_{\vx\geq\bm{0}, \vy\geq\bm{0}}\left\{\left(\frac{\rho_d^i}{\rho_d^j}-1\right)\sum_{p\in [P]}y_{dp}: \begin{array}{l}
\sum_{d'\in[D]}\mu_{d'p}^jy_{d'p}\geq\lambda_p^j, \ p\in[P],\\
y_{d'p} \leq U_{d'p}, \ d' \in [D], \ p \in [P]
\end{array}\right\}.\notag
\end{align}
When $({\rho_d^i}/{\rho_d^j}-1)\geq 0$, we set $y_{d'p} = U_{d'p}$ for $d' \neq d$. Then for each $p \in [P]$, we set
\[ y_{dp} = L_{dp}^j := \begin{cases}
\max\left\{0,\lambda_p^j - \sum_{d'\in[D], d' \neq d} U_{d'p}\right\}/{\mu_{dp}^j}, &\mu_{dp}^j > 0\\
0, & \mu_{dp}^j = 0.
\end{cases} \]
It follows from~\eqref{eq:rsrcp-quantiles-bound} that $\bar h_d^j(-\mathbf{A}\bm{\xi}_d^i- \mathbf{a}_d)\geq({\rho_d^i}/{\rho_d^j}-1)\sum_{p\in[P]}L_{dp}^j$ when $({\rho_d^i}/{\rho_d^j}-1)\geq 0$. When $({\rho_d^i}/{\rho_d^j}-1)<0$, since $0 \leq y_{dp}\leq U_{dp}$ at optimality, we obtain $\bar h_d^j(-\mathbf{A}\bm{\xi}_d^i- \mathbf{a}_d)\geq({\rho_d^i}/{\rho_d^j}-1)\sum_{p\in[P]}U_{dp}$. Based on these lower bounds on $\bar h_d^j(-\mathbf{A}\bm{\xi}_d^i- \mathbf{a}_d)$, we can compute a lower bound on $q_p^i$. Note that in the definition of $L_{dp}^j$, if $\lambda_p^j - \sum_{d'\in[D], d' \neq d} U_{d'p} > 0$ but $\mu_{dp}^j = 0$, then scenario $j$ is infeasible, so we can set $z^j = 1$.
\ifx\flagJournal1 \hfill\hbox{\hskip 4pt \vrule width 5pt height 6pt depth 1.5pt}\vspace{0.0cm}\par \fi
\end{remark}
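The lower bound of \cref{remark:rsrcp-yield-lower-bounds} can be sketched as follows; this is a self-contained toy with illustrative data, where the upper bounds $U_{dp}$ are taken as given:

```python
import numpy as np

def yield_lower_bound(i, j, d, rho, mu, lam, U):
    # Lower bound on bar h_d^j(-A xi_d^i - a_d) from the remark.
    D, P = U.shape
    ratio = rho[i, d] / rho[j, d] - 1.0
    if ratio < 0:
        return ratio * U[d].sum()        # use y_dp <= U_dp
    total = 0.0
    for p in range(P):
        others = U[:, p].sum() - U[d, p]  # capacity of resources d' != d
        slack = lam[j, p] - others
        if mu[j, d, p] > 0:
            total += max(0.0, slack) / mu[j, d, p]   # L_dp^j
        # if slack > 0 and mu[j, d, p] == 0, scenario j is infeasible
    return ratio * total

rho = np.array([[0.5], [1.0]])            # N = 2 scenarios, D = 1
mu = np.array([[[1.0]], [[2.0]]])         # shape (N, D, P), P = 1
lam = np.array([[2.0], [2.0]])
U = np.array([[2.0]])                     # given upper bounds on y
```

On these data, the ratio $\rho_1^2/\rho_1^1-1=1$ is nonnegative, so the bound equals $1\cdot L_{11}^1=2$; swapping the scenarios gives a negative ratio of $-0.5$ and hence the bound $-0.5\cdot U_{11}\cdot 1=-1$.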
\section{Computational Results for Mixing Inequalities}\label{app:mixing}
\cref{tab:mixing} presents computational results for \texttt{Mixing} described in \cref{sec:experiments}.
\begin{table}[h!]
\centering
\caption{Results for \texttt{Mixing}.}
\label{tab:mixing}
\begin{tabular}{cll|rrrrr}
\toprule
& $N$ & $\theta$ & Slv(Fnd) & Time(Gap) & R.time & R.gap(Fnd) & Cuts \\
\midrule
\multirow{4}{*}{\rotatebox[origin=c]{90}{Portfolio}} & \multirow{2}{*}{500} & 0.001 & 0(10) & *(6.10) & 0.62 & 16.99(10) & 89.6 \\
& & 0.020 & 0(10) & *(12.74) & 2.54 & 34.15(10) & 0.0 \\
\cline{2-8}
& \multirow{2}{*}{1000} & 0.001 & 0(10) & *(8.79) & 2.28 & 16.43(10) & 271.5 \\
& & 0.020 & 0(9) & *(17.52) & 3.86 & 23.46(9) & 0.2 \\
\cline{1-8}
\multirow{4}{*}{\rotatebox[origin=c]{90}{Res. plan.}} & \multirow{2}{*}{100} & 0.0001 & 9(10) & 800.84(6.53) & 8.11 & *(0) & 687.3 \\
& & 0.0010 & 1(10) & 2307.23(7.99) & 11.66 & *(0) & 12.5 \\
\cline{2-8}
& \multirow{2}{*}{300} & 0.0001 & 0(10) & *(7.55) & 39.78 & 10.28(5) & 4337.9 \\
& & 0.0010 & 0(10) & *(11.88) & 50.85 & *(0) & 223.5 \\
\bottomrule
\end{tabular}
\end{table}
\section{Conic disjunctions}
By \eqref{eq:cc-distance-formulation}, the feasible region is
\begin{align*}
\mathcal{X}_{\DR}(\mathcal{S}) = \left\{ \vx \in \mathcal{X} : \begin{aligned}
&\quad \exists\ t \geq 0, \ \mathbf{r} \geq \bm{0},\\
&\quad \dist(\bm{\xi}^i,\mathcal{S}(\vx)) \geq t - r^i, \ i \in [N],\\
&\quad \epsilon\, t \geq \theta + \frac{1}{N} \sum_{i \in [N]} r^i
\end{aligned} \right\}.
\end{align*}
By \eqref{eq:safety} and \eqref{eq:distance-linear}, the distance function is
\[ \dist(\bm{\xi},\mathcal{S}(\vx)) = \max\left\{ 0,\ \min_{p \in [P]} \frac{(\vb-\mathbf{A}^\top \vx)^\top \bm{\xi}_p + d_p - \mathbf{a}_p^\top \vx}{\|\vb-\mathbf{A}^\top \vx \|_*} \right\}. \]
This can be reformulated as
\[(\vx,\mathbf{r},t,u) \in \mathcal{X} \times \mathbb{R}_+^N \times \mathbb{R}_+ \times \mathbb{R}_+ \quad \text{s.t.} \quad
\begin{aligned}
&\epsilon t \geq \theta u + \frac{1}{N} \sum_{i \in [N]} r^i\\
&(\vx,r^i,t,u) \in S(\bm{\xi}^i)\\
&\|\vb - \mathbf{A}^\top \vx\|_* \leq u
\end{aligned}
\]
where
\[ S(\bm{\xi}) = \left\{ (\vx,r,t,u) : \begin{aligned}
&(\vb - \mathbf{A}^\top \vx)^\top \bm{\xi}_p + d_p - \mathbf{a}_p^\top \vx \geq t - r, \ \forall p \in [P]\\
&\|\vb - \mathbf{A}^\top \vx\|_* \leq u
\end{aligned} \right\} \cup \left\{ (\vx,r,t,u) : \begin{aligned}
&0 \geq t - r\\
&\|\vb - \mathbf{A}^\top \vx\|_* \leq u
\end{aligned} \right\}. \]
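A membership test for the disjunctive set $S(\bm{\xi})$ can be sketched for the special case $P=1$ with the $\ell_2$ norm, which is self-dual, so $\|\cdot\|_*=\|\cdot\|_2$; the data values below are illustrative only.

```python
import numpy as np

def in_S(x, r, t, u, xi, A, a, b, d):
    # Test (x, r, t, u) in S(xi) for P = 1 under the l2 norm.
    v = b - A.T @ x                       # normal of the safe half-space
    norm_ok = np.linalg.norm(v) <= u
    safe = (v @ xi + d - a @ x >= t - r) and norm_ok
    trivial = (0.0 >= t - r) and norm_ok
    return safe or trivial

A = np.zeros((2, 2))
a = np.zeros(2)
b = np.array([3.0, 4.0])
x = np.zeros(2)
xi = np.array([1.0, 1.0])                 # here v @ xi = 7 and ||v|| = 5
```

The first branch of the union is active when the affine term dominates $t-r$, the second when $t-r\leq0$; both require the norm cap $\|\vb-\mathbf{A}^\top\vx\|_*\leq u$.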
{\color{red} Nam: previously, I had explored disjunctions for the following set
\[ \left\{ (\vx,s,u) : \begin{aligned}
&\vx^\top \bm{\xi} - \alpha u \geq s\\
&\|\vx\|_* \leq u
\end{aligned} \right\} \cup \left\{ (\vx,s,u) : \begin{aligned}
&0 \geq s\\
&\|\vx\|_* \leq u
\end{aligned} \right\} \]
where $\alpha \geq 0$, and saw that it was related to an eigenvalue problem. I used conic disjunction results from Fatma's paper with Sam Burer.
The difference between this and $S(\bm{\xi})$ is that $P=1$, $\vb - \mathbf{A}^\top \vx = \vx$, $d_p - \mathbf{a}_p^\top \vx = 0$, and there is an additional $-\alpha u$ term in the linear part. I don't remember if $\alpha > 0$ was necessary for this, or whether it could work with $\alpha = 0$.
}
\section*{Acknowledgments}
This research is supported, in part, by ONR grant N00014-19-1-2321, by the Institute for Basic Science (IBS-R029-C1, IBS-R029-Y2), Award N660011824020 from the DARPA Lagrange Program and NSF Award 1740707.
\bibliographystyle{abbrvnat}
The Ising model, proposed by Lenz~\cite{Lenz}, and solved in one dimension by his student Ising~\cite{Ising}, is one of the most studied models of statistical mechanics.
It was introduced as a model for ferromagnetism with the intention to explain spontaneous magnetization. Ising proved that the one dimensional
case does not account for the existence of this phenomenon and concluded that the same should hold in higher dimensions. This was
later disproved by Peierls~\cite{Peierls}, whose, now classical, argument established that in dimensions higher than one the model does exhibit a phase transition in the magnetic behavior.
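Ising's conclusion for one dimension can be checked directly: for a zero-field chain of $N$ spins with free boundary conditions the partition function factorizes as $Z_N = 2(2\cosh\beta J)^{N-1}$, which is analytic in $\beta$, so the free energy has no singularity at any positive temperature. A brute-force sketch (the values of $\beta J$ and $N$ are arbitrary choices):

```python
import itertools
import math

K = 0.7   # beta * J, an arbitrary positive value
N = 8     # chain length, free boundary conditions

# Brute-force partition function over all 2^N spin configurations.
Z = sum(math.exp(K * sum(s[i] * s[i + 1] for i in range(N - 1)))
        for s in itertools.product((-1, 1), repeat=N))

closed_form = 2.0 * (2.0 * math.cosh(K)) ** (N - 1)
```

Summing out the last spin contributes a factor $2\cosh K$ independently of its neighbour, which gives the closed form by induction.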
The critical point, i.e.,\ the value of the temperature parameter where the phase transition occurs, for the model defined on the two-dimensional square lattice
was first identified by Kramers and Wannier~\cite{KraWan1} as the fixed point of a certain duality transformation. The first rigorous proof of
criticality of the self-dual point came together with the exact solution of the two-dimensional model done by Onsager~\cite{Onsager},
who explicitly computed the free energy density and showed that it is not analytic only at this particular value of the temperature.
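Numerically, the Kramers--Wannier fixed point is the solution of $\sinh(2\beta_{\mathrm c}J)=1$, i.e.\ $\beta_{\mathrm c}J=\tfrac12\ln(1+\sqrt2)\approx0.4407$, since the duality transformation maps $\sinh(2\beta J)$ to its reciprocal. A one-line check:

```python
import math

beta_c = 0.5 * math.log(1.0 + math.sqrt(2.0))   # critical beta * J
residual = math.sinh(2.0 * beta_c) - 1.0        # should vanish at the fixed point
```

Indeed $\sinh(\ln(1+\sqrt2)) = \bigl((1+\sqrt2) - (\sqrt2-1)\bigr)/2 = 1$ exactly.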
Since then, several different methods have been developed to study the two-dimensional Ising model. One of them is the approach of Kac and Ward~\cite{KacWard},
who expressed the partition function of the model in terms of the determinant of what is now called the Kac--Ward operator. This combinatorial in nature idea
has been so far a source of numerous results about the planar Ising model. The most classical are the (alternative to the solution of
Onsager and Yang~\cite{Yang}) analytic derivations of the free energy density and magnetization performed by Vdovichenko~\cites{Vdovichenko1,Vdovichenko2},
who built on earlier works of Sherman~\cite{Sherman} and Burgoyne~\cite{Burgoyne}.
However, most of the articles concerning the Kac--Ward formula left many details of the
method unexplained and even contained errors. The first completely rigorous account of this approach seems to be given much later by Dolbilin et al.\
\cites{DolEtAl}. A more recent treatment, presented by Kager, Meester and the author~\cite{KLM}, concentrates on loop expansions
of the Kac--Ward determinants. As a result, the authors not only obtain rigorous proofs of the combinatorial foundations of the approach,
but also rederive the critical temperature of the Ising model on the square lattice.
The Kac--Ward determinants also turned out to be the right tool for the computation of the critical point of Ising models defined on planar doubly periodic
graphs (Cimasoni and Duminil-Copin~\cite{CimDum}). Moreover, Cimasoni~\cite{Cimasoni} showed that the Kac--Ward formula can be
generalized to Ising models defined on surfaces of higher genus. Finally, as pointed out by the author in~\cite{Lis},
the Kac--Ward method is intrinsically connected with the discrete holomorphic approach to the Ising model introduced by Smirnov~\cite{Smirnov}.
In this paper, we continue in the spirit of~\cite{KLM}, where the spectral radius and operator norm of the Kac--Ward transition matrices were first considered.
We explicitly compute the operator norm of what we call the conjugated transition matrix defined for a general graph in the plane,
and hence we provide an upper bound on the spectral radius of the standard Kac--Ward transition matrix.
Combining this result with the Kac--Ward formula for the high and
low-temperature expansion of the partition function yields domains of parameters of the model where there is no phase transition.
We will focus only on the analytic properties of the free energy, but our bounds, together with the methods from~\cite{KLM},
also make it possible to identify regions where there is
spontaneous magnetization or exponential decay of the two-point functions.
The advantage of our approach is that it does not require any form of periodicity of the underlying graph.
Moreover, our results are optimal for the Ising model defined on isoradial graphs with uniformly bounded rhombus angles
(see condition~\eqref{eq:isoradialcond}), i.e.\ we can conclude that the self-dual
Z-invariant coupling constants, first considered by Baxter~\cite{Baxter1}, are indeed critical in the classical sense.
To be more precise, after introducing the inverse temperature parameter~$\beta$ to the corresponding Ising model,
we show that the thermodynamic limits of the free energy density can have singularities only at $\beta=1$.
The isoradial graphs, or equivalently rhombic lattices, were introduced by Duffin~\cite{Duffin} as potentially the largest family of graphs
where one can do discrete complex analysis. As mentioned in~\cite{ChelkSmir},
this class of graphs seems to be the most general family of graphs where the critical Ising model can be nicely defined, and
it also seems to be the one where our bounds for the spectral radius and operator norm of the Kac--Ward transition matrix yield the critical point of the Ising model.
The self-dual Z-invariant Ising model has been extensively studied in the mathematics literature.
Chelkak and Smirnov~\cite{ChelkSmir} proved that the associated discrete holomorphic fermion has a universal, conformally invariant
scaling limit. Boutillier and de Tili{\`e}re~\cites{BoutTil1, BoutTil2} gave a complete description of the corresponding
dimer model, yielding also an alternative proof of Baxter's formula for the critical free energy density. Mercat~\cite{Mercat} defined a
notion of criticality for discrete Riemann surfaces and investigated its connection with criticality in the Ising model.
The self-dual Z-invariant Ising model is commonly referred to as critical.
However, criticality in the statistical mechanics sense has been established only in the case of doubly periodic isoradial graphs
(see Example 1.6 of~\cite{CimDum} and the references therein).
As already mentioned, we extend this result to a wide class of aperiodic isoradial graphs.
This paper is organized as follows: in Section~\ref{sec:Isingresults}, we introduce the Ising model and the notion of phase transition, and we state our main theorem. Section~\ref{sec:KWresults}
defines the Kac--Ward operator and presents its connection to the Ising model. It also contains our results for the Kac--Ward transition matrix.
The proof of the main theorem is postponed until Section~\ref{sec:freeenergy}.
\section{Results for the Ising model} \label{sec:Isingresults}
\subsection{The Ising model}
Let $\Gamma$ be an infinite, planar, simple graph embedded in the complex plane and let $\Gamma^*$ be its planar dual.
We assume that both $\Gamma$ and $\Gamma^*$ have uniformly bounded vertex degrees. One should think of~$\Gamma$ as any kind of tiling or discretization of the plane.
In particular, $\Gamma$ can be a regular lattice, or an instance of an isoradial graph (see Section~\ref{sec:isoradialIsing}).
We call a subgraph~$\mathcal{G}$ of $\Gamma$ a \emph{subtiling} if
there is a collection of faces of~$\Gamma$, such that $\mathcal{G}$ is the subgraph induced by all edges forming boundaries of these faces.
We define the \emph{boundary} $\partial \mathcal{G} $ of $\mathcal{G}$ to be the set of vertices of~$\mathcal{G}$
which lie on the boundary of at least one face which is not in the defining collection of faces.
For a simple graph~$\mathcal{G}$ embedded in the complex plane, we will write $V(\mathcal{G})$ for the set
of vertices of $\mathcal{G}$, which we identify with the corresponding complex numbers. By $E(\mathcal{G})$ we will denote the set of edges which are represented by unordered pairs of vertices.
Let $J=(J_{e})_{e \in E(\Gamma)}$ be a system of \emph{ferromagnetic}, i.e.\ positive, \emph{coupling constants} on the edges of $\Gamma$.
For each finite subtiling $\mathcal{G}$, we will consider an \emph{Ising model} on $\mathcal{G}$ defined by $J$ and the \emph{inverse temperature} parameter $\beta$. Borrowing the notation from \cite{KLM}, let
\[
\Omega_{\mathcal{G}}^{\text{free}}=\{-1,+1\}^{V(\mathcal{G})} \quad \text{and} \quad \Omega_{\mathcal{G}}^+=\{ \sigma \in \Omega_{\mathcal{G}}^{\text{free}} : \sigma_z =+1 \text{ if } z \in\partial \mathcal{G} \}
\]
be the spaces of \emph{spin configurations} with \emph{free} and \emph{positive boundary conditions}. The Ising model with $\Box$ boundary conditions ($\Box \in \{\text{free}, +\}$)
is defined by a probability measure on $\Omega^{\Box}_{\mathcal{G}}$ given by
\begin{align*}
\mathbf{P}^{\Box}_{\mathcal{G},\beta}(\sigma) = \frac{1}{\mathcal{Z}^{\Box}_{\mathcal{G}}(\beta) }\prod_{ \{z,w\} \in E(\mathcal{G})} \exp \big(\beta J_{\{z,w\}} \sigma_z \sigma_w \big),\qquad \sigma \in \Omega^{\Box}_{\mathcal{G}},
\end{align*}
where the normalizing factor
\begin{align*}
\mathcal{Z}^{\Box}_{\mathcal{G}}(\beta) = \sum_{\sigma \in \Omega^{\Box}_{\mathcal{G}}} \prod_{ \{z,w\} \in E(\mathcal{G})} \exp \big(\beta J_{\{z,w\}} \sigma_z \sigma_w \big)
\end{align*}
is called the \emph{partition function}.
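As an informal numerical illustration (not part of the formal development), the following Python snippet evaluates both partition functions by brute force for a single square face with a uniform coupling; the graph and the parameter values are chosen arbitrarily.

```python
import itertools
import math

# Toy example: a single square face, the smallest nontrivial "subtiling".
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]

def partition_function(J, beta, boundary=()):
    """Brute-force partition function; 'boundary' lists the vertices
    pinned to +1 (the empty tuple gives free boundary conditions)."""
    Z = 0.0
    for sigma in itertools.product([-1, 1], repeat=len(V)):
        if any(sigma[z] != 1 for z in boundary):
            continue
        Z += math.exp(beta * sum(J * sigma[z] * sigma[w] for z, w in E))
    return Z

Z_free = partition_function(J=1.0, beta=0.5)
# All four vertices lie on the boundary here, so Z^+ keeps one configuration.
Z_plus = partition_function(J=1.0, beta=0.5, boundary=(0, 1, 2, 3))
```

For the square, the only even subgraphs are the empty one and the full cycle, so the high-temperature expansion recalled later in the paper gives $\mathcal{Z}^{\text{free}} = 2^4 \cosh^4(\beta J)\,(1+\tanh^4(\beta J))$, in agreement with the brute-force sum.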
Throughout the paper, we will make a natural assumption on the coupling constants, namely we will require
that there exist numbers $m$ and $M$, such that for all $e \in E(\Gamma)$,
\begin{align} \label{eq:regularcoupling}
0 < m \leq J_{e} \leq M < \infty.
\end{align}
\subsection{Phase transition}
An object of interest in statistical physics is the \emph{free energy density} (or \emph{free energy per site}) defined by
\begin{align*}
f^{\Box}_{\mathcal{G}}(\beta) = - \frac{\ln \mathcal{Z}^{\Box}_{\mathcal{G}}(\beta)}{\beta|V(\mathcal{G})|} .
\end{align*}
It is clear that the free energy density is an analytic function of the inverse temperature $\beta \in (0,\infty)$ for every finite subtiling $\mathcal{G}$.
However, when $\mathcal{G}$ approaches
$\Gamma$, or more generally, some infinite subgraph of $\Gamma$ (this is called taking a \emph{thermodynamic limit}), the
limiting function can have a \emph{critical point}, i.e.\ a particular value of $\beta$ where it is not analytic. The existence of such a point indicates
that the system undergoes a phase transition when one varies $\beta$ through the critical value. This is a universal way of looking at the phenomenon of phase transition since
it can be applied to any model of statistical mechanics.
Another approach, and perhaps a more natural one in the setting of the Ising model, is to investigate the magnetic behavior of the system.
To this end, one defines the \emph{spin correlation functions}, i.e.\ the expectations of products of the \emph{spin variables} taken with
respect to the Ising probability measure. The simplest cases are the \emph{one} and \emph{two-point functions}
\begin{align*}
\langle \sigma_{z} \rangle^{\Box}_{\mathcal{G},\beta} = \sum_{\sigma \in \Omega^{\Box}_{\mathcal{G}}} \sigma_{z} \mathbf{P}^{\Box}_{\mathcal{G},\beta}(\sigma),
\quad \langle \sigma_{z}\sigma_{w} \rangle^{\Box}_{\mathcal{G},\beta} = \sum_{\sigma \in \Omega^{\Box}_{\mathcal{G}}} \sigma_{z}\sigma_{w} \mathbf{P}^{\Box}_{\mathcal{G},\beta}(\sigma), \quad z,w \in V(\mathcal{G}).
\end{align*}
Since the model is ferromagnetic, and due to the effect of positive boundary conditions,
the corresponding one-point function $\langle \sigma_{z} \rangle^{+}_{\mathcal{G},\beta}$ is strictly positive for all finite subtilings $\mathcal{G}$ and for all $\beta$.
In other words, in finite volume, the spins prefer the~$+1$ state at all temperatures.
However, when $\mathcal{G}$ approaches~$\Gamma$, the boundary moves further and further away
and, at sufficiently high temperatures, its influence on a particular spin vanishes. As a result, the limiting one-point function equals zero and the spin occupies the~$+1$ and~$-1$ states with equal probability.
On the other hand, this does not happen at low temperatures, i.e.\ if $\beta$ is sufficiently large, then $\langle \sigma_{z} \rangle^{+}_{\mathcal{G},\beta}$
stays bounded away from zero uniformly in~$\mathcal{G}$.
This means that the effect of positive boundary conditions is carried through all length scales and
there is \emph{spontaneous magnetization}. In this approach, the critical point is the value of~$\beta$, which separates the regions with and without spontaneous magnetization.
In some cases, it is more convenient to investigate the behavior of the two-point functions.
Here, one also discerns two different non-critical cases: either the system is \emph{disordered}, i.e.\
the thermodynamic limits of the two-point functions decay exponentially to zero as the graph distance between
$z$ and $w$ goes to infinity, or the system is \emph{ordered}, which means that the limiting two-point functions stay bounded away from zero uniformly in $z$ and $w$.
For periodic Ising models, the critical point defined as the value of $\beta$ which separates these two regimes is the same as the critical point defined via spontaneous magnetization
(see Theorem 1 in \cite{ABF} and the references therein). In particular, the system exhibits long-range ferromagnetic order if and only if there is spontaneous magnetization.
Property~\eqref{eq:regularcoupling},
together with the conditions we imposed on~$\Gamma$ and~$\Gamma^*$, is enough for the existence of a phase transition in terms of spontaneous magnetization and the behavior of the two-point functions.
This is a consequence of the classical arguments of Peierls~\cite{Peierls} and Fisher~\cite{Fisher}.
In this paper, we will only focus on the phase transition in the analytic behavior of the free energy density limits, but our results for the Kac--Ward operator
can be also used in the setting of the magnetic phase transition (see Section~\ref{sec:magnetic}).
\subsection{The main result}
Let $\vec{E}(\mathcal{G})$ be the set of directed edges of~$\mathcal{G}$ which are the ordered pairs of vertices. For a directed edge $\vec{e}=(z,w)$, we define its \emph{reversion} by $-\vec{e}=(w,z)$ and we
obtain the undirected version by dropping the arrow from the notation, i.e.\ $e=\{z,w\}$. If $z$ is a vertex, then we write $\ensuremath{\text{Out}}_{\mathcal{G}}(z)=\{ (z',w')\in \vec{E}(\mathcal{G}): z'=z\}$
for the set of edges emanating from~$z$.
Let $\vec{x}$ and $x$ be systems of nonzero complex weights on the directed and undirected edges of $\mathcal{G}$ respectively.
We call $\vec{x}$ \emph{(Kac--Ward) contractive} if
\begin{align} \label{eq:contractive}
\sum_{\vec{e} \in \ensuremath{\text{Out}}_{\mathcal{G}}(z)} \arctan |\vec{x}_{\vec{e}}|^2 \leq \frac{\pi}{2} \qquad \text{for all} \ z \in V(\mathcal{G}),
\end{align}
and we say that $x$ \emph{factorizes} to~$\vec{x}$ if
\begin{align} \label{eq:factorizes}
x_{e} = \vec{x}_{\vec{e}} \vec{x}_{-\vec{e}} \qquad \text{for all} \ \vec{e} \in \vec{E}(\mathcal{G}).
\end{align}
For the origin of condition \eqref{eq:contractive}, see Corollary \ref{cor:contraction}.
In the context of the Ising model, two particular systems of edge weights will be important, namely the so called \emph{high} and \emph{low-temperature} weights given by
\begin{align*}
\tanh \beta J = \big(\tanh \beta J_{e}\big)_{e \in E(\Gamma)} \quad \text{and} \quad \exp(-2 \beta J) = \big(\exp(-2 \beta J_{e})\big)_{e^* \in E(\Gamma^*)}.
\end{align*}
\begin{definition}
We say that the coupling constants satisfy the \emph{high-temperature} condition if $\tanh J$ factorizes to a contractive system of weights on the directed edges of $\Gamma$, and we say that they satisfy the
\emph{low-temperature} condition if $\exp(-2 J)$ factorizes to a contractive system of weights on the directed edges of $\Gamma^*$.
\end{definition}
Let
\begin{align*}
\Upsilon_{\Box} = \big\{ f^{\Box}_{\mathcal{G}}: \text{$\mathcal{G}$ is a finite subtiling of $\Gamma$} \big \}
\end{align*}
be the family of all free energy densities with $\Box$ boundary conditions, and let~$\overline{\Upsilon}_{\Box}$ be its closure in the topology of pointwise convergence on $(0,\infty)$.
Note that~$\overline{\Upsilon}_{\Box}$ contains all thermodynamic limits and can also contain other types of accumulation points of $\Upsilon_{\Box}$.
Using the definition of $\mathcal{Z}^{\Box}_{\mathcal{G}}$, it is not difficult to prove that, under condition \eqref{eq:regularcoupling}, $\Upsilon_{\Box}$ is uniformly bounded and
equicontinuous on compact subsets of $(0,\infty)$. In particular, all sequences
in $\Upsilon_{\Box}$ which converge pointwise, converge uniformly on compact sets,
and therefore all functions in~$\overline{\Upsilon}_{\Box}$ are continuous on $(0,\infty)$. However, this is not enough to conclude analyticity of the limiting functions, and indeed, critical points do arise.
In this paper, we show that, if the coupling constants satisfy the high-temperature condition, then all functions in $\Upsilon_{\text{free}}$
can be extended analytically to a complex domain
\begin{align*}
\mathcal{T}_{\text{high}} & =\Big\{\beta :\ 0< \mathrm{Re} \beta<1,\ 2M|\mathrm{Im} \beta| < \frac{\pi}{2},\ \frac{\cosh(2 m\mathrm{Re} \beta )}{\cosh(2m) \cos (2M \mathrm{Im} \beta) } < 1 \Big\}
\end{align*}
which we call the \emph{high-temperature regime}. Note that $(0,1) \subset \mathcal{T}_{\text{high}}$.
Similarly we prove that, if the coupling constants satisfy the low-temperature condition, then all functions in $\Upsilon_{+}$ can be extended to analytic functions on
\begin{align*}
\mathcal{T}_{\text{low}}=\{\beta: 1 < \mathrm{Re} \beta \}
\end{align*}
which we call the \emph{low-temperature regime}. Moreover, we show that $\Upsilon_{\Box}$ is uniformly bounded on compact subsets of the
corresponding regimes.
\begin{figure}
\begin{center}
\includegraphics{Regimes}
\end{center}
\caption{The high and low-temperature regimes.}
\label{fig:regimes}
\end{figure}
For complex analytic functions, this is enough to conclude that all pointwise limits are also complex analytic.
More precisely, let $D$ be a complex domain and let $E\subset D$ have an accumulation point in $D$.
The Vitali--Porter theorem (see~\cite{Shiff}*{\S 2.4}) states that if a sequence of holomorphic functions
defined on $D$ converges pointwise on $E$, and is uniformly bounded
on compact subsets of $D$, then it converges uniformly on compact subsets of $D$ and the limiting function is holomorphic.
In our context, the role of the domain $D$ is played by the high and low-temperature regimes, and
$E$ is the intersection of the given regime with the positive real numbers.
In other words, under the high and low-temperature conditions on the coupling constants, the high and low-temperature regimes
are free of phase transition in terms of analyticity of the thermodynamic limits of the free energy density.
This is summarized in the following theorem:
\begin{theorem} \label{thm:FreeEnergy}
If the coupling constants satisfy
\begin{itemize}
\item[(i)] the high-temperature condition, then all functions in $\Upsilon_{\text{free}}$ extend analytically to $\mathcal{T}_{\text{high}}$, and $\Upsilon_{\text{free}}$ is
uniformly bounded on compact subsets of~$\mathcal{T}_{\text{high}}$. As a consequence, all functions in $\overline{\Upsilon}_{\text{free}}$ are analytic on $\mathcal{T}_{\text{high}}$, and in particular on $(0,1)$.
\item[(ii)] the low-temperature condition, then all functions in $\Upsilon_+$ extend analytically to $\mathcal{T}_{\text{low}}$, and $\Upsilon_+$ is uniformly bounded on
compact subsets of~$\mathcal{T}_{\text{low}}$. As a consequence, all functions in $\overline{\Upsilon}_{+}$ are analytic on $\mathcal{T}_{\text{low}}$, and in particular on $(1,\infty)$.
\end{itemize}
\end{theorem}
The proof of this theorem is provided in Section \ref{sec:freeenergy}.
Its main ingredients are the Kac--Ward formula for the partition function of the Ising model (see Theorem~\ref{thm:expansions})
and the bound on the spectral radius of the Kac--Ward transition matrix given in Theorem~\ref{thm:boundedradius}.
In most applications, the role of boundary conditions is immaterial for the thermodynamic limit of the free energy density. Indeed,
it is not hard to prove that whenever $|\partial \mathcal{G}|/|V(\mathcal{G})|$ is small,
then for $\beta\in(0,\infty)$, $f^{\text{free}}_{\mathcal{G}}(\beta)$ and $f^+_{\mathcal{G}}(\beta)$ are close to each other
(and also to any other free energy density function defined for other types of boundary conditions on $\mathcal{G}$). Hence, limits of the free energy density taken along sequences, where the above ratio approaches zero, are the same for all boundary conditions.
In this paper, we consider the free and positive boundary conditions since in these cases, the partition function of the model is given in terms of the determinant of the Kac--Ward operator.
Thus, one can use properties of the operator itself to derive results for the free energy density.
\subsection{The isoradial case} \label{sec:isoradialIsing}
\begin{figure}
\begin{center}
\includegraphics{IsoradialGraph}
\end{center}
\caption{Local geometry of an isoradial graph and its dual. The underlying rhombic lattice is drawn in pale lines. The directed arc marks the turning angle $\angle(\vec{e}_1,\vec{e}_2)$.}
\label{fig:isoradial}
\end{figure}
Assume that $\Gamma$ is an isoradial graph, i.e.\ all its faces can be inscribed in circles with a common radius,
and all the circumcenters lie within the corresponding faces. An equivalent characterization says
that $\Gamma$ and $\Gamma^*$ can be simultaneously embedded in the plane in such a way that each pair of mutually dual edges
forms diagonals of a rhombus. The roles of $\Gamma$ and $\Gamma^*$ are therefore symmetric and the dual graph is also isoradial.
The simplest cases of isoradial graphs are the regular lattices: the square, triangular and hexagonal lattice.
One assigns to each edge $e$ the interior angle $\theta_{e}$ that $e$ creates with any side of the associated rhombus (see Figure~\ref{fig:isoradial}).
Note that $\theta_{e}+\theta_{e^*}= \pi/2$.
There is a particular geometric choice of the coupling constants given by
\begin{align} \label{eq:zinvariant}
\tanh J_{e} = \tan (\theta_{e}/2), \quad \text{or equivalently,} \quad \exp(-2 J_{e})= \tan (\theta_{e^*}/2).
\end{align}
These coupling constants were first considered by Baxter~\cite{Baxter1}.
We will refer to them as the \emph{self-dual Z-invariant} coupling constants since they are the only coupling constants that
make the Ising model invariant under the star-triangle transformation and also satisfy the generalized Kramers--Wannier self-duality \eqref{eq:zinvariant}.
For more details on their origin, see \cites{BoutTil1,Baxter2}.
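The equivalence of the two expressions in \eqref{eq:zinvariant} follows from the relation $\theta_{e}+\theta_{e^*}=\pi/2$. The following short Python check (purely a numerical sanity check, with arbitrarily chosen sample angles) confirms this consistency.

```python
import math

def coupling_from_angle(theta):
    """The coupling J_e determined by tanh(J_e) = tan(theta_e / 2)."""
    return math.atanh(math.tan(theta / 2.0))

# For each sample rhombus angle, exp(-2 J_e) should equal tan(theta_{e*}/2),
# where theta_{e*} = pi/2 - theta_e is the dual angle.
for theta in [0.1, math.pi / 6, math.pi / 4, 1.2]:
    theta_dual = math.pi / 2 - theta
    J = coupling_from_angle(theta)
    assert abs(math.exp(-2.0 * J) - math.tan(theta_dual / 2.0)) < 1e-12
```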
Observe that in this setting, condition \eqref{eq:regularcoupling} is equivalent to the existence of constants
$k$ and $K$, such that for all $e \in E(\Gamma)$,
\begin{align} \label{eq:isoradialcond}
0 < k \leq \theta_{e} \leq K < \pi.
\end{align}
This means that the associated rhombi have a positive minimal area, which also gives a uniform
bound on the maximal degrees of~$\Gamma$ and~$\Gamma^*$.
The next corollary states that, for the Ising model defined by the above coupling constants, the only possible point of phase transition in the analytic behavior of the free energy density
is~$\beta =1$.
\begin{corollary} \label{cor:isoradial}
Let $\Gamma$ be an isoradial graph satisfying condition~\eqref{eq:isoradialcond}.
Consider Ising models defined by the self-dual Z-invariant coupling constants on finite subtilings of~$\Gamma$.
Then, all functions in $\overline{\Upsilon}_{\text{free}}$ are analytic on~$(0,1)$, and all functions in $\overline{\Upsilon}_{+}$ are analytic on $(1,\infty)$.
\end{corollary}
\begin{proof}
By \eqref{eq:zinvariant} and the fact that the angles $\theta$ sum up to $\pi$ around each vertex of $\Gamma$ and $\Gamma^*$,
the self-dual Z-invariant coupling constants simultaneously satisfy the
high and low-temperature condition. Indeed, the contractive weight systems on the directed edges are given by
$\vec{x}_{\vec{e}} = \sqrt{\tan (\theta_{e}/2)}$.
The claim follows therefore
from Theorem \ref{thm:FreeEnergy}.
\end{proof}
Note that in this case, the inequalities in \eqref{eq:contractive} become equalities.
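Indeed, for the weights $|\vec{x}_{\vec{e}}|^2 = \tan(\theta_{e}/2)$ one has $\arctan|\vec{x}_{\vec{e}}|^2 = \theta_{e}/2$, and the rhombus angles sum to $\pi$ around each vertex. A quick numerical check for the three regular lattices (an informal illustration; the degree/angle pairs below are the standard ones):

```python
import math

# (vertex degree, rhombus angle theta_e) for the regular isoradial lattices
lattices = {
    "hexagonal": (3, math.pi / 3),
    "square": (4, math.pi / 4),
    "triangular": (6, math.pi / 6),
}

def contractive_sum(degree, theta):
    """Left-hand side of the contractive condition for the weights
    |x_e|^2 = tan(theta_e / 2), all edges at a vertex being alike."""
    return degree * math.atan(math.tan(theta / 2.0))

# In each case the sum equals pi/2 exactly, saturating the condition.
for degree, theta in lattices.values():
    assert abs(contractive_sum(degree, theta) - math.pi / 2.0) < 1e-12
```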
\subsection{Implications for the magnetic phase transition} \label{sec:magnetic}
Recently~\cite{KLM}, the Kac--Ward operator and the signed weights it induces on the closed non-backtracking walks in a graph
were used to rederive the critical temperature of the homogeneous Ising model on the square lattice.
This was done both in terms of analyticity of the free energy density limit and the change in behavior of the
one and two-point functions.
The methods used there to analyse the correlation functions also work for general planar graphs under
mild regularity constraints. To be more precise, the proof of Theorem~1.4 in~\cite{KLM}, which gives the existence of spontaneous
magnetization, uses the fact that
appropriate Kac--Ward transition matrices have spectral radius smaller than one and that the dual graph (which is $\Gamma^*$ in our setup)
has subexponential growth of volume, i.e.\ the volume of balls in graph distance grows subexponentially with the radius.
This condition is, for instance, satisfied by all isoradial graphs where~\eqref{eq:isoradialcond} holds true.
On the other hand, Theorem~1.6 and Corollary~1.7 from \cite{KLM}, which yield exponential decay of the two-point functions,
use the fact that the operator norm of appropriate Kac--Ward matrices is smaller than one.
The bounds that are stated in Section~\ref{sec:KWresults} make it possible
to generalize the above results to arbitrary planar graphs, i.e.\ together with the methods from~\cite{KLM} they
provide regions of parameters~$J$ and~$\beta$
where there is spontaneous magnetization or exponential decay of the two-point functions. These regions coincide with those in
Theorem~\ref{thm:FreeEnergy} (one can analytically extend the correlation functions to the high and low-temperature regime),
that is, if the coupling constants satisfy the low-temperature condition, then there is spontaneous magnetization on~$\mathcal{T}_{\text{low}}$,
and if they satisfy the high-temperature condition, then there is exponential decay of the two-point functions on~$\mathcal{T}_{\text{high}}$.
In particular, our bounds together with the methods developed in~\cite{KLM} prove that the self-dual Z-invariant weights are critical in the sense of magnetic phase transition.
We would also like to point out that the arguments, which are used in~\cite{KLM} to conclude analyticity of the free energy density limit,
do not work for general graphs since they rely on periodicity of the square lattice.
This is why, in this paper, we go into details of this aspect of phase transition and we do not focus on the magnetic behavior of the model.
\section{Results for the Kac--Ward operator} \label{sec:KWresults}
\subsection{The Kac--Ward operator and the Ising model}
Let $\mathcal{G}$ be a finite simple graph embedded in the plane.
For a directed edge $\vec{e}=(z,w)$, we define its \emph{tail} $t(\vec{e}) =z$ and \emph{head} $h(\vec{e})=w$.
For $\vec{e},\vec{g} \in \vec{E}(\mathcal{G})$, let
\begin{align} \label{eq:defangle}
\angle(\vec{e},\vec{g})= \text{Arg}\Big(\frac{h(\vec{g})-t(\vec{g})}{h(\vec{e})-t(\vec{e})}\Big) \in (-\pi,\pi]
\end{align}
be the \emph{turning angle} from $\vec{e}$ to $\vec{g}$ (see Figure \ref{fig:isoradial}).
The \emph{transition matrix} for~$\mathcal{G}$ and the weight system $x$ is given by
\begin{align} \label{eq:transitionmatrix1}
\Lambda_{\vec{e},\vec{g}}(x) = \begin{cases}
x_{e} e^{\frac{i}{2}\angle(\vec{e},\vec{g})}
& \text{if $h(\vec{e}) = t(\vec{g})$ and $\vec{g} \neq -\vec{e}$}; \\
0 & \text{otherwise},
\end{cases}
\end{align}
where $\vec{e},\vec{g} \in \vec{E}(\mathcal{G})$.
To each $\vec{e} \in \vec{E}(\mathcal{G})$ we attach a copy of the complex numbers denoted by $\mathds{C}_{\vec{e}}$ and
we define a complex vector space
\begin{align*}
\mathcal{X} = \prod_{\vec{e} \in \vec{E}(\mathcal{G})} \mathds{C}_{\vec{e}}.
\end{align*}
We identify $\Lambda(x)$ with the automorphism of $\mathcal{X}$ it defines via matrix multiplication.
The \emph{Kac--Ward operator} for $\mathcal{G}$ and the weight system $x$ is the automorphism of $\mathcal{X}$ given by
\begin{align*}
T(x) = \Id -\Lambda(x),
\end{align*}
where $\Id$ is the identity on $\mathcal{X}$. When necessary, we will use subscripts to express the fact that the above operators depend on the underlying graph~$\mathcal{G}$.
If $\mathcal{G}$ is a finite subtiling of $\Gamma$, then we will denote by $\mathcal{G}^*$ the subgraph of~$\Gamma^*$ whose edge set
consists of all dual edges $e^*$, such that at least one of the endpoints of $e$ belongs to~$V(\mathcal{G}) \setminus \partial \mathcal{G}$. One can see that~$\mathcal{G}^*$
is a subtiling of~$\Gamma^*$ whose defining set of dual faces is given by the vertices from $V(\mathcal{G}) \setminus \partial \mathcal{G}$.
We will call it the \emph{dual subtiling} of $\mathcal{G}$.
We say that a graph is \emph{even} if all its vertices have even degree.
There are two classical methods of representing the partition function of the Ising model on $\mathcal{G}$
as a weighted sum over all even subgraphs of $\mathcal{G}$ or $\mathcal{G}^{*}$.
The first one, called the \emph{low-temperature expansion}, involves a bijective mapping between the spin configurations
with positive boundary conditions and the collection of even subgraphs of $\mathcal{G}^{*}$. The graph associated with a spin configuration is composed
of those dual edges whose corresponding primal edge has opposite spin values assigned to its endpoints. Hence,
the resulting even subgraph forms an interface between the clusters of positive and negative spins in the configuration.
In this expansion, each even graph is given a weight which is proportional to the product of the low-temperature
edge weights $\exp(-2 \beta J)$ taken over all edges in the graph.
The second method is called the \emph{high-temperature expansion} and it is a way of expressing
the partition function of the Ising model with free boundary conditions as a sum over all even subgraphs of $\mathcal{G}$.
Similarly, it assigns to each even subgraph a product weight composed of factors given by
the high-temperature weight system $\tanh \beta J$. However, unlike in the low-temperature case, the even subgraphs
do not have a geometrical interpretation in terms of the spin variables.
The weighted sums arising in both of these expansions are called the \emph{even subgraph generating functions}.
The Kac--Ward formula expresses the square of an even subgraph generating function
as the determinant of a Kac--Ward matrix with an appropriate edge weight system.
The combined result of the high and low-temperature expansion together with the Kac--Ward formula is stated
in the next theorem.
Here, we assume that the edges of $\mathcal{G}$ (and also $\mathcal{G}^{*}$) are embedded as straight line segments
which do not intersect.
For the origin of this condition, a detailed account of the high and low-temperature expansion, and the proof of the following theorem, see~\cite{KLM}.
\begin{theorem} \label{thm:expansions}
For all choices of the coupling constants $J$ and all $\beta$ with $\mathrm{Re} \beta > 0$,
\begin{align*}
\quad \text{\textnormal{(i)}}& \quad \big(\mathcal{Z}^{\text{free}}_{\mathcal{G}}(\beta) \big)^2 = 2^{2|V(\mathcal{G})|}
\Big( \prod_{e \in E(\mathcal{G})} \cosh^2(\beta J_{e})\Big)\det\big[T_{\mathcal{G}}(\tanh \beta J)\big],\\
\quad \text{\textnormal{(ii)}}& \quad \big(\mathcal{Z}^{+}_{\mathcal{G}}(\beta) \big)^2 = \exp \Big(2\beta \sum_{e \in E(\mathcal{G})}
J_{e}\Big) \det\big[T_{\mathcal{G}^{*}}(\exp(-2 \beta J))\big].
\end{align*}
\end{theorem}
Note that the condition $\mathrm{Re} \beta >0$ is needed only for the weight system $\tanh \beta J$ to be well defined.
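As a sketch of how identity (i) can be checked on a small example, the following Python code (an informal numerical verification, not part of the proofs) builds the transition matrix \eqref{eq:transitionmatrix1} for a unit square with a uniform coupling and compares $2^{2|V(\mathcal{G})|}\prod_{e}\cosh^2(\beta J_{e})\det T$ with the brute-force square of the partition function.

```python
import cmath
import itertools
import math

import numpy as np

# Unit square with vertices listed counterclockwise; uniform coupling.
pos = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
directed = edges + [(w, z) for z, w in edges]

def kac_ward_det(x):
    """det T(x) for the square, same weight x on every undirected edge."""
    n = len(directed)
    lam = np.zeros((n, n), dtype=complex)
    for a, (z1, w1) in enumerate(directed):
        for b, (z2, w2) in enumerate(directed):
            # head of e must meet tail of g, and g must not backtrack on e
            if w1 == z2 and w2 != z1:
                turn = cmath.phase((pos[w2] - pos[z2]) / (pos[w1] - pos[z1]))
                lam[a, b] = x * cmath.exp(0.5j * turn)
    return np.linalg.det(np.eye(n) - lam)

def z_free(J, beta):
    """Brute-force partition function with free boundary conditions."""
    return sum(
        math.exp(beta * J * sum(s[z] * s[w] for z, w in edges))
        for s in itertools.product([-1, 1], repeat=4)
    )

beta, J = 0.5, 1.0
x = math.tanh(beta * J)
lhs = z_free(J, beta) ** 2
rhs = 2 ** 8 * math.cosh(beta * J) ** 8 * kac_ward_det(x).real
```

For the square, the only even subgraphs are the empty one and the full cycle, so one expects $\det T = (1+x^4)^2$; the numerics agree with this and with identity (i).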
The determinant of the Kac--Ward matrix is the characteristic polynomial of the transition matrix evaluated at one:
\begin{align*}
\det T = \det(\Id -\Lambda) = \prod_{k=1}^{2n}(1- \lambda_k),
\end{align*}
where $n$ is the number of edges of $\mathcal{G}$, and $\lambda_k$, $k\in \{1,2,\ldots, 2n \}$, are the eigenvalues of $\Lambda$.
Recall that we want to extend the free energy density functions to domains in the complex plane.
The free energy density is given by the logarithm of the partition function, and the square of the partition function is proportional to the above product
involving eigenvalues of the transition matrix.
In this situation, it is natural to use the power series expansion of the logarithm around one:
\begin{align*}
\ln (1- \lambda) = -\sum_{r=1}^{\infty} \lambda^r/r, \qquad |\lambda| <1.
\end{align*}
This series converges whenever $\lambda$ stays within the open unit disc, and hence we should require that the spectral radius of the
transition matrix is strictly smaller than one. The next section is devoted to providing the necessary estimates.
\subsection{Bounds on the spectral radius and operator norm}
In this paper we will make use of transition matrices conjugated by diagonal matrices of a certain type:
if $x$ factorizes to $\vec{x}$ (see~\eqref{eq:factorizes}), then we define the \emph{conjugated transition matrix} by
\begin{align*}
\Lambda(\vec{x})= D^{-1}(\vec{x}) \Lambda(x) D(\vec{x}),
\end{align*}
where $D(\vec{x})$ is the diagonal matrix satisfying $D_{\vec{e},\vec{e}}(\vec{x})=\vec{x}_{\vec{e}}$ for all $\vec{e} \in \vec{E}(\mathcal{G})$.
The resulting transition matrix takes the following form:
\begin{align} \label{eq:transitionmatrix2}
\Lambda_{\vec{e},\vec{g}}(\vec{x})= \begin{cases}
\vec{x}_{-\vec{e}}\vec{x}_{\vec{g}} e^{\frac{i}{2}\angle(\vec{e},\vec{g})}
& \text{if $h(\vec{e}) = t(\vec{g})$ and $\vec{g} \neq -\vec{e}$}; \\
0 & \text{otherwise}.
\end{cases}
\end{align}
This matrix is similar to the standard transition matrix, and in particular has the same spectrum.
Moreover, it turns out that one can explicitly compute its operator norm.
To this end, let us make some additional observations.
For a square matrix~$A$, let~$\| A \|$ be its operator norm induced by the Euclidean norm,
and let $\rho(A)$ be its spectral radius.
Note that there is a natural involutive automorphism $P$ of $\mathcal{X}$ induced by the map $\vec{e} \mapsto -\vec{e}$, i.e.\
the automorphism which assigns to each complex number in $\mathds{C}_{\vec{e}}$ the same complex number in~$\mathds{C}_{-\vec{e}}$.
Fix $\vec{x}$ and let $A= P \Lambda(\vec{x})$. Observe that $\| A \| =\| \Lambda(\vec{x}) \|$ since $P$ is an isometry.
Moreover, the operator norm of $A$ depends only on the absolute values of $\vec{x}$. Indeed, if
\begin{align*}
B = D(\vec{u}) A D(\vec{u}), \qquad \text{where} \quad \vec{u}_{\vec{e}} = |\vec{x}_{\vec{e}}|/\vec{x}_{\vec{e}},
\end{align*}
then $B$ is given by the matrix
\begin{align} \label{eq:transitionmatrixb}
B_{\vec{e},\vec{g}} = \begin{cases}
|\vec{x}_{\vec{e}}\vec{x}_{\vec{g}}| e^{\frac{i}{2}\angle(-\vec{e},\vec{g})}
& \text{if $t(\vec{e}) = t(\vec{g})$ and $\vec{g} \neq \vec{e}$}; \\
0 & \text{otherwise},
\end{cases}
\end{align}
and $\| B\| = \|A\|$ since $D(\vec{u})$ is an isometry.
Note that $\mathcal{X}$ can be decomposed as
\begin{align*}
\mathcal{X} = \prod_{z \in V(\mathcal{G})} \mathcal{X}^z, \qquad \text{where} \quad \mathcal{X}^z = \prod_{\vec{e} \in \ensuremath{\text{Out}}_{\mathcal{G}}(z)} \mathds{C}_{\vec{e}}.
\end{align*}
One can see from \eqref{eq:transitionmatrixb} that $B$ gives a nonzero transition weight only between two edges sharing the same
tail $z$. In other words, $B$ maps $\mathcal{X}^z$ to itself and therefore is block-diagonal, that is
\begin{align*}
B = \prod_{z \in V(\mathcal{G})} B^z,
\end{align*}
where $B^z : \mathcal{X}^z \rightarrow \mathcal{X}^z$
is the restriction of $B$ to the space $\mathcal{X}^z$. Moreover, the angles satisfy
\begin{align} \label{eq:anglereflection}
\angle(-\vec{e},\vec{g}) = - \angle(-\vec{g},\vec{e}) \qquad \text{for} \quad \vec{e} \neq \vec{g},
\end{align}
and hence $B$ is Hermitian, i.e.\ $B_{\vec{e},\vec{g}} = \overline{B_{\vec{g},\vec{e}}}$.
Combining these two properties and the fact that the operator norm of a Hermitian matrix is given by its spectral radius, we arrive at
the identity:
\begin{align} \label{eq:blockhermitian}
\| B \| = \rho(B)= \max_{z \in V(\mathcal{G})} \rho(B^z).
\end{align}
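This reduction to blocks is elementary but easy to confirm numerically. In the sketch below, random Hermitian blocks play the role of the $B^z$ (they are stand-ins, not the actual matrices of \eqref{eq:transitionmatrixb}); the operator norm of the assembled block-diagonal matrix equals the largest spectral radius among the blocks:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_hermitian(n):
    # A random Hermitian block, playing the role of one B^z.
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

blocks = [random_hermitian(n) for n in (2, 3, 4)]

# Assemble the block-diagonal matrix B.
size = sum(b.shape[0] for b in blocks)
B = np.zeros((size, size), dtype=complex)
i = 0
for b in blocks:
    n = b.shape[0]
    B[i:i + n, i:i + n] = b
    i += n

# Operator norm (largest singular value) vs. the largest spectral
# radius among the blocks; for Hermitian matrices they coincide.
assert np.isclose(np.linalg.norm(B, 2),
                  max(np.abs(np.linalg.eigvalsh(b)).max() for b in blocks))
```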
It turns out that the characteristic polynomial of $B^z$ is easily expressible in terms of the weight vector $\vec{x}$:
\begin{lemma} \label{lem:charpol}
For any real $t$ and any vertex $z$,
\begin{align*}
\det(t\Id- B^z) = \mathrm{Re} \Big(\prod_{\vec{e} \in \text{\textnormal{Out}}_{\mathcal{G}}(z)} (t + i|\vec{x}_{\vec{e}}|^2) \Big),
\end{align*}
where $\Id$ is the identity on $\mathcal{X}^z$.
\end{lemma}
\begin{proof}
The proof is by induction on the degree of $z$.
One can easily check that the statement is true for all vertices of degree one or two.
Now suppose that it is true for all vertices of degree at most $n \geq 2$. Let~$z$ be a vertex of degree $n+1$ and
let $\vec{e}_1,\vec{e}_2, \ldots, \vec{e}_{n+1}$ be a counterclockwise ordering of the edges of $\ensuremath{\text{Out}}_{\mathcal{G}}(z)$. Consider the matrix $S=t\Id - B^z$
with columns and rows ordered accordingly.
Note that for all $\vec{g} \in \ensuremath{\text{Out}}_{\mathcal{G}}(z)$ different from $\vec{e}_1$ and~$\vec{e}_2$,
\begin{align*}
\angle(\vec{g},\vec{e}_1)+ \angle(\vec{e}_1, \vec{e}_2) + \angle(\vec{e}_2,\vec{g}) = 0 \ (\text{mod} \ 2 \pi).
\end{align*}
Also observe that, for geometric reasons, at least two of the above angles are positive.
Combining this together with the fact that $\text{Arg}(w) = \text{Arg}(-w) \pm \pi$ for any complex~$w$,
and that the angles are between~$-\pi$ and~$\pi$, yields
\begin{align} \label{eq:charpol1}
\angle(-\vec{e}_1,\vec{g}) = \angle(-\vec{e}_2,\vec{g}) + \angle(-\vec{e}_1, \vec{e}_2) + \pi.
\end{align}
This identity guarantees that every two consecutive rows and columns of $S$ are ``almost proportional'' to each other.
To be more precise,
we first subtract from the first row of $S$ the second row multiplied by $i e^{\frac{i}{2} \angle(-\vec{e}_1, \vec{e}_2)} |\vec{x}_{\vec{e}_1}| /|\vec{x}_{\vec{e}_2}|$. Then, we subtract from the first
column the second one multiplied by $-i e^{-\frac{i}{2} \angle(-\vec{e}_1, \vec{e}_2)} |\vec{x}_{\vec{e}_1}| /|\vec{x}_{\vec{e}_2}|$. The resulting matrix has the same determinant as $S$. By the definition of~$B^z$,
\eqref{eq:anglereflection} and~\eqref{eq:charpol1},
\begin{align*} \det S =
\det \begin{pmatrix}
a & b & 0 & 0 &\cdots \\
\overline{b} & t & -B^z_{\vec{e}_2,\vec{e}_3} & -B^z_{\vec{e}_2,\vec{e}_4} & \cdots \\
0 & -\overline{B^z}_{\vec{e}_2,\vec{e}_3} & t & -B^z_{\vec{e}_3,\vec{e}_4} & \cdots \\
0 & -\overline{B^z}_{\vec{e}_2,\vec{e}_4} & -\overline{B^z}_{\vec{e}_3,\vec{e}_4} & t & \cdots \\
\vdots & \vdots & \vdots & \vdots &\ddots
\end{pmatrix}
\end{align*}
where $a=t\big(1 + |\vec{x}_{\vec{e}_1}|^2 /|\vec{x}_{\vec{e}_2}|^2 \big)$ and $b = -e^{\frac{i}{2} \angle(-\vec{e}_1,\vec{e}_2)} \big(it|\vec{x}_{\vec{e}_1}| /|\vec{x}_{\vec{e}_2}|+|\vec{x}_{\vec{e}_1}\vec{x}_{\vec{e}_2}|\big)$.
Let $S_1$ be the matrix resulting from removing from $S$ the first column and the first row, and let $S_2$ be the matrix,
where the first two rows and the first two columns of $S$ are removed. By the induction hypothesis, $\det S_1 = \mathrm{Re}\big((t+i|\vec{x}_{\vec{e}_2}|^2)\vartheta\big)$
and $\det S_2 = \mathrm{Re}\, \vartheta$, where $\vartheta = \prod_{\vec{g} \in \ensuremath{\text{Out}}_{\mathcal{G}}(z) \setminus \{\vec{e}_1, \vec{e}_2 \}}(t+i|\vec{x}_{\vec{g}}|^2)$. Expanding the determinant,
we get
\begin{align*}
\det S &= a \det S_1 - b \overline{b} \det S_2 \\
&= t\big(1 + |\vec{x}_{\vec{e}_1}|^2 /|\vec{x}_{\vec{e}_2}|^2 \big) \mathrm{Re}\big((t+i|\vec{x}_{\vec{e}_2}|^2)\vartheta\big) \\
&- \big(|\vec{x}_{\vec{e}_1}|^2|\vec{x}_{\vec{e}_2}|^2 + t^2 |\vec{x}_{\vec{e}_1}|^2/|\vec{x}_{\vec{e}_2}|^2\big) \mathrm{Re}\vartheta \\
&= \mathrm{Re} \big( ( t + i |\vec{x}_{\vec{e}_1}|^2) (t+ i |\vec{x}_{\vec{e}_2}|^2) \vartheta \big).
\end{align*}
The last equality follows since both sides are real linear in $\vartheta$, and one can check that it holds true for $\vartheta=1,i$.
\end{proof}
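Lemma~\ref{lem:charpol} also lends itself to a direct numerical check (a verification, not part of the proof). The sketch below builds $B^z$ for a single vertex of degree four; it assumes that $\angle(-\vec{e},\vec{g})$ is the turning angle in $(-\pi,\pi)$ from the direction of $-\vec{e}$ to that of $\vec{g}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def wrap(a):
    # Reduce an angle to the interval (-pi, pi).
    return (a + np.pi) % (2 * np.pi) - np.pi

deg = 4
thetas = np.sort(rng.uniform(0, 2 * np.pi, deg))  # outgoing edge directions
x = rng.uniform(0.2, 1.0, deg)                    # the moduli |x_e|

# B^z as in eq. (transitionmatrixb): zero diagonal and
# B_{e,g} = |x_e x_g| exp(i/2 * angle(-e, g)).
B = np.zeros((deg, deg), dtype=complex)
for e in range(deg):
    for g in range(deg):
        if e != g:
            ang = wrap(thetas[g] - (thetas[e] + np.pi))
            B[e, g] = x[e] * x[g] * np.exp(0.5j * ang)

# det(t*Id - B^z) against Re prod_e (t + i |x_e|^2).
for t in rng.uniform(-2, 2, 5):
    lhs = np.linalg.det(t * np.eye(deg) - B).real
    rhs = np.prod(t + 1j * x**2).real
    assert np.isclose(lhs, rhs)
```

Sorting the directions makes the ordering counterclockwise, matching the convention used in the induction.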
For $z \in V(\mathcal{G})$, we define $\xi^z(\vec{x})$ to be the unique solution in~$s$ of the equation
\begin{align} \label{eq:solutiondef}
\sum_{\vec{e} \in \text{\textnormal{Out}}_{\mathcal{G}}(z)}\arctan \big(|\vec{x}_{\vec{e}}|^2/s\big)= \frac{\pi}{2}.
\end{align}
As a corollary we obtain the following result:
\begin{corollary} \label{cor:HTrMrho}
$\rho(B^z) = \xi^z(\vec{x})$.
\end{corollary}
\begin{proof}
Since $B^z$ is Hermitian, it has a real spectrum.
By Lemma~\ref{lem:charpol}, the characteristic polynomial of~$B^z$ at a nonzero real number~$t$ is given by
\begin{align*}
t^{|\ensuremath{\text{Out}}_{\mathcal{G}}(z)|} \Big(\prod_{\vec{e} \in \ensuremath{\text{Out}}_{\mathcal{G}}(z)} \cos\big( \arctan (|\vec{x}_{\vec{e}}|^2/t) \big)\Big)^{-1} \cos\Big( \sum_{\vec{e} \in \ensuremath{\text{Out}}_{\mathcal{G}}(z)} \arctan(|\vec{x}_{\vec{e}}|^2/t)\Big) .
\end{align*}
This expression vanishes only when the last cosine term is zero. The values of $t$ of largest modulus for which this happens are $\pm \xi^z(\vec{x})$.
\end{proof}
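Since the left-hand side of \eqref{eq:solutiondef} is strictly decreasing in $s>0$, the solution can be found by bisection, and Corollary~\ref{cor:HTrMrho} can be checked against a direct eigenvalue computation. A sketch with random data for one vertex (same turning-angle convention as before, which is an assumption of this illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def wrap(a):
    # Reduce an angle to the interval (-pi, pi).
    return (a + np.pi) % (2 * np.pi) - np.pi

deg = 5
thetas = np.sort(rng.uniform(0, 2 * np.pi, deg))
x = rng.uniform(0.2, 1.0, deg)

# The matrix B^z of eq. (transitionmatrixb) for one vertex.
B = np.zeros((deg, deg), dtype=complex)
for e in range(deg):
    for g in range(deg):
        if e != g:
            B[e, g] = x[e] * x[g] * np.exp(0.5j * wrap(thetas[g] - thetas[e] - np.pi))

# Bisection for the unique root xi of sum_e arctan(|x_e|^2 / s) = pi/2.
lo, hi = 1e-12, 1e6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if np.arctan(x**2 / mid).sum() > np.pi / 2:
        lo = mid
    else:
        hi = mid
xi = 0.5 * (lo + hi)

# Compare with the spectral radius of the Hermitian matrix B^z.
assert np.isclose(xi, np.abs(np.linalg.eigvalsh(B)).max())
```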
We can now compute the operator norm of the conjugated transition matrix $\Lambda(\vec{x})$.
The following result is the main tool in our considerations:
\begin{lemma} \label{lem:normbound}
\begin{align*}
\| \Lambda(\vec{x})\| = \max_{z \in V(\mathcal{G})} \xi^z(\vec{x}).
\end{align*}
\end{lemma}
\begin{proof}
It follows from the fact that $\|\Lambda(\vec{x})\| =\|B \|$, identity \eqref{eq:blockhermitian}, and Corollary~\ref{cor:HTrMrho}.
\end{proof}
Note that the operator norm depends only on the absolute values of $\vec{x}$.
One can rephrase this result as follows:
\begin{corollary} \label{cor:contraction}
$\| \Lambda(\vec{x}) \| \leq s$ if and only if
\begin{align*}
\sum_{\vec{e} \in \text{\textnormal{Out}}_{\mathcal{G}}(z)}\arctan \big(|\vec{x}_{\vec{e}}|^2/s\big) \leq \frac{\pi}{2} \qquad \text{for all} \ z \in V(\mathcal{G}).
\end{align*}
\end{corollary}
We say that an operator is a \emph{contraction} if its operator norm is smaller than or equal to one; hence the name of condition \eqref{eq:contractive}.
Since the operator norm bounds the spectral radius from above, we obtain the following corollary:
\begin{corollary} \label{cor:spectralbound}
If $x$ factorizes to $\vec{x}$, then
\begin{align*}
\rho(\Lambda(x) ) \leq \max_{z \in V(\mathcal{G})} \xi^z(\vec{x}).
\end{align*}
\end{corollary}
This inequality is preserved when one takes the infimum over all factorizations of the weight system $x$. One can check
that the spectral radius of the transition matrix depends not only on the moduli but also on the complex arguments of $x$.
Since the above bound depends only on the absolute values, it is in general not sharp. Nonetheless, it is optimal for the self-dual
Z-invariant Ising model on isoradial graphs.
\begin{remark}
Note that finiteness of $\mathcal{G}$ was not important in our computations. Since the transition matrix is defined locally
for each vertex, we only used the fact that all vertices have finite degree. Hence, one can consider transition matrices and
Kac--Ward operators on infinite graphs as automorphisms of the Hilbert space $\ell^2$ on the directed edges of $\mathcal{G}$.
The results from this section translate directly to this setting by interchanging all maxima with suprema.
This is used in \cite{Lis} to analyse infinite-dimensional Kac--Ward operators.
\end{remark}
\subsection{High and low-temperature spectral radii}
We will now use the bounds from the previous section in the more concrete setting of the high- and low-temperature weight systems.
We define
\begin{align*}
R(\beta) =\sup_{\mathcal{G}} \rho\big[\Lambda_{\mathcal{G}}(\tanh \beta J)\big] \quad \text{and} \quad R^*(\beta) =\sup_{\mathcal{G}} \rho\big[\Lambda_{\mathcal{G}^{*}}(\exp(-2 \beta J))\big],
\end{align*}
where the suprema are taken over all finite subtilings of $\Gamma$.
The reason for our particular choice of the high- and low-temperature regimes in the statement of Theorem \ref{thm:FreeEnergy} is the following result:
\begin{theorem} \label{thm:boundedradius}
If the coupling constants satisfy
\begin{itemize}
\item[(i)] the high-temperature condition, then $\sup_{\beta \in K} R(\beta) < 1$ for any compact set $K \subset \mathcal{T}_{\text{high}}$.
\item[(ii)] the low-temperature condition, then $\sup_{\beta \in K} R^*(\beta) < 1$ for any compact set $K \subset \mathcal{T}_{\text{low}}$.
\end{itemize}
\end{theorem}
\begin{proof}
We will prove part (i). Fix a compact set $K \subset \mathcal{T}_{\text{high}}$ and let
\begin{align*}
L(\beta) = \sup_{j \in [m,M]} \frac{ |\tanh \beta j| }{ \tanh j} =
\sup_{j \in [m,M]} \frac{\cosh j}{\sinh j}\sqrt{\frac{ \cosh (2j \mathrm{Re} \beta) - \cos (2j \mathrm{Im} \beta) }{\cosh (2j \mathrm{Re} \beta) + \cos (2j \mathrm{Im} \beta) }} .
\end{align*}
By compactness of $[m,M]$ and the fact that the hyperbolic tangent does not vanish and is continuous in the right half-plane, $L$
is a continuous function on $\{\beta: 0 <\mathrm{Re} \beta \}$.
From a simple computation, it follows that $L(\beta)<1$ if and only if
\begin{align*}
\cosh (2j \mathrm{Re} \beta)/\cosh 2j < \cos (2j\mathrm{Im} \beta) \qquad \text{for all} \ j \in [m,M].
\end{align*}
The above inequality can hold only when $|\mathrm{Re} \beta| <1$ and when the right hand side is positive.
The latter is in particular true when $2M|\mathrm{Im} \beta| < \frac{\pi}{2}$.
Under these assumptions, both sides of the inequality are decreasing functions of~$j$.
This means that the above condition is satisfied whenever $0<\mathrm{Re} \beta <1$, $2M|\mathrm{Im} \beta| < \frac{\pi}{2}$ and
$\cosh (2m \mathrm{Re} \beta)/\cosh 2m < \cos (2M\mathrm{Im} \beta)$.
Hence, by the definition of $\mathcal{T}_{\text{high}}$, we have that $\mathcal{T}_{\text{high}} \subset \{\beta: L(\beta) <1 \}$ and thus, by continuity of $L$,
\begin{align*}
s:=\sup_{\beta \in K} L(\beta)<1.
\end{align*}
From the definition of $L$, it follows that
\begin{align*}
|\tanh \beta J_{e}|/ \tanh J_{e}\leq s \qquad \text{for all} \ e \in E(\Gamma) \ \text{and} \ \beta \in K.
\end{align*}
We assume that the coupling constants $J$ satisfy the high-temperature condition, which means that the weight system $\tanh J$ factorizes to a contractive
weight system $\vec{x}$. Therefore, for $\beta \in K$, $\tanh \beta J$
factorizes to a weight system $\vec{x}(\beta)$ satisfying $|\vec{x}_{\vec{e}}(\beta)| = \sqrt{|\tanh \beta J_{e}| / \tanh J_{e}}\cdot|\vec{x}_{\vec{e}}|$, and hence $|\vec{x}_{\vec{e}}(\beta)|^2/s \leq |\vec{x}_{\vec{e}}|^2$
for all $\vec{e} \in \vec{E}(\Gamma)$. Since $\arctan$ is increasing and $\vec{x}$ is contractive, we have by Corollary \ref{cor:contraction} that
$\| \Lambda_{\mathcal{G}}(\vec{x}(\beta))\| \leq s$ for all subtilings~$\mathcal{G}$ and all $\beta \in K$. The claim follows because the spectral radius is bounded
from above by the operator norm, and $\Lambda_{\mathcal{G}}(\vec{x}(\beta))$ has the same spectral radius as $\Lambda_{\mathcal{G}}(\tanh \beta J)$.
Part (ii) involves less computation and can be proved similarly after noticing that
\[
\mathcal{T}_{\text{low}} =\Big\{\beta: \sup_{j \in [m,M]} \frac{|\exp(-2\beta j)|}{\exp ( -2 j)} <1\Big\}. \qedhere
\]
\end{proof}
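The closed form for $L(\beta)$ rests on the identity $|\tanh z|^2=(\cosh 2\mathrm{Re}\,z-\cos 2\mathrm{Im}\,z)/(\cosh 2\mathrm{Re}\,z+\cos 2\mathrm{Im}\,z)$, applied with $z=\beta j$. This can be spot-checked numerically:

```python
import cmath, math

# |tanh z|^2 = (cosh 2Re z - cos 2Im z) / (cosh 2Re z + cos 2Im z),
# applied with z = beta*j and divided by tanh j, as in the formula for L.
for beta in (0.3 + 0.2j, 0.9 - 0.1j, 0.5 + 0.4j):
    for j in (0.5, 1.0, 2.0):
        direct = abs(cmath.tanh(beta * j)) / math.tanh(j)
        a, b = 2 * j * beta.real, 2 * j * beta.imag
        closed = (math.cosh(j) / math.sinh(j)) * math.sqrt(
            (math.cosh(a) - math.cos(b)) / (math.cosh(a) + math.cos(b)))
        assert abs(direct - closed) < 1e-10
```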
In light of Theorem \ref{thm:expansions} and the remarks which follow it, we are now in a position to prove our main result.
\section{Proof of Theorem \ref{thm:FreeEnergy}} \label{sec:freeenergy}
\begin{proof}
We will prove part (i). Suppose that the coupling constants satisfy the high-temperature condition
and fix a compact set $K \subset \mathcal{T}_{\text{high}}$.
We have to show that the functions $f^{\text{free}}_{\mathcal{G}}$ extend analytically to $\mathcal{T}_{\text{high}}$ and are uniformly bounded on $K$.
First of all, since zero is not in $\mathcal{T}_{\text{high}}$, the factor $1/\beta$ is analytic on $\mathcal{T}_{\text{high}}$ and uniformly bounded on $K$. Thus, it is enough
to consider functions of the form $\ln \mathcal{Z}^{\text{free}}_{\mathcal{G}}(\beta) /|V(\mathcal{G})|$.
We will use the formula from part (i) of Theorem~\ref{thm:expansions}.
The logarithm of the partition function can therefore be expressed as a sum of three different terms.
The first one is the constant $|V(\mathcal{G})| \ln 2$, which equals $ \ln 2$ after rescaling by the number of vertices.
For the second term, which comes from the product of hyperbolic cosines, one has to argue that
there is a continuous branch of $\ln(\cosh \beta J_e)$ on $\mathcal{T}_{\text{high}}$. Indeed, one can take the principal
value of the logarithm since $\mathrm{Re} (\cosh \beta J_e )= \cosh (J_e \mathrm{Re} \beta) \cos (J_e \mathrm{Im} \beta)> 0$ on $\mathcal{T}_{\text{high}}$. Analyticity of this term follows
since $ \cosh \beta J_e $ is analytic. Furthermore,
we have
\begin{align*}
\Big| \ln \Big(\prod_{e \in E(\mathcal{G})} \cosh \beta J_e\Big) \Big| &\leq \sum_{e \in E(\mathcal{G})} \big| \ln (\cosh \beta J_e) \big| \\
&\leq \sum_{e \in E(\mathcal{G})} \Big( \big|\ln |\cosh \beta J_e|\big| + |\text{Arg}(\cosh \beta J_e)| \Big) \\
&\leq | E(\mathcal{G})|\Big( \sup_{j\in[m,M]} \big|\ln |\cosh \beta j|\big| + \pi/2\Big).
\end{align*}
Since the hyperbolic cosine does not vanish in the right half-plane and $[m,M]$ is compact,
the above supremum is a continuous function of $\beta$ on $\mathcal{T}_{\text{high}}$, and therefore is bounded on $K$.
The number of edges is bounded by the number of vertices times the maximal degree of $\Gamma$, and thus,
after rescaling by the volume, this term is uniformly bounded in $\mathcal{G}$.
The last term is given by the logarithm of the determinant of the
Kac--Ward operator. Let $\lambda_k$, $k\in\{1,2,\ldots,2n\}$, $n=|E(\mathcal{G})|$, be the eigenvalues of $\Lambda_{\mathcal{G}}(\tanh \beta J)$.
By Theorem \ref{thm:boundedradius}, we know that their moduli are bounded from above by some constant $s<1$ (uniformly in $\mathcal{G}$ and $\beta \in K$).
One can therefore define the logarithm by its power series around one, i.e.\
\begin{align*}
\ln \det \big[\Id - \Lambda_{\mathcal{G}}(\tanh \beta J)\big] &= \ln \prod_{k=1}^{2n} (1- \lambda_k) = \sum_{k=1}^{2n} \ln (1- \lambda_k) \\
&= -\sum_{k=1}^{2n} \sum_{r=1}^{\infty} \lambda_k^r/r = - \sum_{r=1}^{\infty} \sum_{k=1}^{2n} \lambda_k^r/r \\
&= - \sum_{r=1}^{\infty} \text{tr} [\Lambda^r_{\mathcal{G}}(\tanh \beta J) ]/r,
\end{align*}
where $\text{tr}$ is the trace of a matrix. It is clear that $\text{tr} [\Lambda^r_{\mathcal{G}}(\tanh \beta J)]$ is an analytic function of $\beta$.
Moreover, $|\text{tr} [\Lambda^r_{\mathcal{G}}(\tanh \beta J)]| \leq 2|E(\mathcal{G})| s^r$ for any~$r$,
and therefore the above series converges uniformly on $K$.
It follows that the series defines a holomorphic function on $\mathcal{T}_{\text{high}}$.
Again, after rescaling by the number of vertices, it becomes uniformly bounded in $\mathcal{G}$.
This completes the proof of the first part of the theorem.
The proof of part (ii) uses the second formula from Theorem \ref{thm:expansions} and proceeds in a similar manner.
\end{proof}
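The power-series identity for $\ln \det \big[\Id - \Lambda_{\mathcal{G}}(\tanh \beta J)\big]$ used in the proof can be verified numerically. In this sketch a random complex matrix, rescaled to spectral radius $0.3$, plays the role of the transition matrix; the rescaling keeps all the principal logarithms on one branch:

```python
import numpy as np

rng = np.random.default_rng(3)

# A random complex matrix rescaled to spectral radius 0.3, standing in
# for the transition matrix Lambda_G(tanh beta J).
n = 6
L = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
L *= 0.3 / np.abs(np.linalg.eigvals(L)).max()

# ln det(Id - L) versus the trace power series -sum_r tr(L^r)/r.
direct = np.log(np.linalg.det(np.eye(n) - L))
series = -sum(np.trace(np.linalg.matrix_power(L, r)) / r for r in range(1, 200))
assert np.isclose(direct, series)
```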
\textbf{Acknowledgments.}
The research was supported by NWO grant Vidi 639.032.916.
\bibliographystyle{amsplain}
\section{Introduction}
Understanding the behaviour of rotating black holes
and their ergoregions when immersed in
magnetic fields forms a central part of
astrophysical theories of quasars, active galactic nuclei
and other objects containing black holes (see, for example,
\cite{KKL,BlandfordZnajek,Bicak1}).
Typically the magnetic fields, which contribute
a negligible amount of energy density near the black hole, are treated
as test fields on the background of the Kerr solution
of the vacuum Einstein equations, as, for example, in the work of Wald
\cite{Wald0}. While perfectly justified
astrophysically, it is not without interest to treat the energy
exchange between black hole and magnetic field at the fully non-linear
level, and in particular to ask to what extent the ideas of
black hole thermodynamics, which have proved so useful in the study
of quantum processes near black holes, may be extended to
this more general setting. This seems especially appropriate
since rotating black holes are believed to drag magnetic field lines, inducing
electric fields to flow and hence currents to flow.
For sufficiently strong magnetic fields this may lead to the breakdown
of the vacuum due to pair creation \cite{Gibbons,Putten1,Putten2}.
These thoughts motivated a recent study \cite{gimupo} of the exact metric
and electromagnetic field of a magnetized Kerr-Newman black hole,
constructed using solution-generating methods pioneered by Ernst \cite{Ernst}.
Contrary to the widespread belief that
the asymptotic metric was approximately static and Melvin-like,
it was found that generically the metric has ergoregions that extend
all the way to infinity. The complicated nature of the metric
at infinity presented difficulties in evaluating the
total energy and angular momentum of the system and in the treatment
of the thermodynamics. In this paper, we are able partially to
overcome these problems and to present expressions
for the total angular momentum $J$ and total energy $E$ of the system,
together with a form of the relevant Smarr relation and
first law in which variations of the appended magnetic
field $B$ are fully taken into account. (The thermodynamics of the
Schwarzschild-Melvin black hole were discussed in \cite{radu}.)
The plan of the paper is as follows. In section 2 we discuss the Wald
procedure for evaluating the total angular momentum $J$, and some
subtleties, not present for the asymptotically-flat geometries
considered by Wald, that can arise in cases such as the magnetised
black holes. These lead to potential ambiguities in the definition
of the angular momentum. We argue that these may be resolved by a careful
consideration of the behaviour of the conserved angular momentum under
gauge transformations. In section 3 we show how the electric charge
and the angular momentum may
be conveniently evaluated by first performing a Kaluza-Klein reduction on
the azimuthal coordinate $\phi$, and then expressing the conserved Wald
charge in terms of three-dimensional quantities.
The definition of the mass of a black hole in an external magnetic
field is also somewhat problematical, on account of the unusual asymptotic
behaviour of the metric and the other fields. We discuss this in section 4,
where we present a formalism, again based on the Kaluza-Klein reduction
to three dimensions, within which we are able to obtain an expression
for the mass.
In section 5 we evaluate our general expressions for the angular momentum and
the mass in the case of the magnetised Kerr-Newman black holes. We show
that these results are consistent with the first law of thermodynamics, in
the case that the appended magnetic field $B$ is held fixed. In doing so,
we essentially use the first law to derive expressions for the angular
velocity $\Omega$ and the electrostatic potential $\Phi$.\footnote{To
be precise, $\Omega$ and $\Phi$ represent the {\it differences}
$\Omega=\Omega_H-\Omega_\infty$ and $\Phi=\Phi_H-\Phi_\infty$ between
the values on the horizon and the values at infinity. $\Omega_H$ and
$\Phi_H$ are easily computed directly, but the asymptotics of the
magnetised black hole solutions make it difficult to define
$\Omega_\infty$ and $\Phi_\infty$ directly.}
We then
extend the discussion, treating $B$ also as a thermodynamic variable
by introducing an extra contribution $-\mu dB$ in the first law, where
$\mu$ has an interpretation as an induced magnetic moment.
The explicit expressions for $\Omega$, $\Phi$ and $\mu$
may be calculated exactly, but their forms are rather
complicated. However they simplify considerably if one works
to low orders in $B$ or $q$. We show also that the thermodynamic
variables in the extended system obey a Smarr-type relation.
In section 6, we attempt to compare our thermodynamic
formalism with some work of Wald \cite{Wald,Wald2}.
We minimize the total energy $E$
with respect to the total charge $Q$,
at fixed values of the energy, angular momentum
and magnetic field of the black hole.
Our result resembles that of Wald in general form, but differs in detail.
In section 7 we examine some further properties of the
magnetised black holes, including the Meissner effect
whereby, as one approaches extremality, the magnetic flux
penetrating the horizon vanishes. In other words, flux is
expelled \cite{Bicak}.
We find that, if the total charge $Q$ on the hole vanishes,
then the magnetic field on the horizon does indeed vanish as one
approaches extremality, consistent with earlier work.
In section 8 we extend our discussion of the conserved angular momentum to
the case of the STU supergravity model, which comprises four-dimensional
${\cal N}=2$ supergravity coupled to three vector multiplets. We
apply our results to the case of the magnetisation of certain
4-charge static black holes that have been investigated recently in
\cite{cvgiposa}. The paper ends with conclusions in section 9.
\section{Conserved Charges in Einstein-Maxwell Theory}
Here we present a discussion of some aspects of the Wald procedure
\cite {Wald2} for calculating conserved charges, applied to the case of
the four-dimensional Einstein-Maxwell theory. Our motivation for
doing so will be as part of an investigation of the thermodynamics of
the magnetised
Kerr-Newman black holes that were recently studied in \cite{gimupo}.
We shall find that some subtleties arise in this context that make
it necessary to pay close attention to some of the details of the
Wald procedure.
Starting from the Einstein-Maxwell Lagrangian
\begin{equation}
{\cal L}_4=\fft{1}{16 \pi}\, (R\, {*\rlap 1\mkern4mu{\rm l}}- 2{*F}\wedge F)
\,,\label{originallag}
\end{equation}
and following a calculation developed by Wald \cite{Wald,Wald2}, one can use
the Noether procedure to derive a current ${\cal J}$, given
by
\begin{equation}
{\cal J} = -d{*d}\xi - 4 {*F}\wedge d(\xi^\mu\, A_\mu)\,,\label{cJexp}
\end{equation}
where $\xi=\xi_\mu\, dx^\mu$ and $\xi^\mu\, {\partial}_\mu$ is a Killing
vector.\footnote{We shall present a detailed derivation in section
\ref{stusec} of the analogous result in the more complicated context of
the STU supergravity model.}
Since $d{{\cal J}}=0$, we can write ${{\cal J}}=-d{{\cal P}}$ and hence derive the conserved charge
\begin{equation}
{{\cal Q}}[\xi]=\frac{1}{16\pi G}\int_{S^2} {{\cal P}} \,.\label{Qcon1}
\end{equation}
From this point on we shall work in units where $G=1$.
One way to obtain a local expression for ${{\cal P}}$ is to note that the Maxwell
equation $d{*F}=0$ allows us to extract an exterior derivative from
the second term in (\ref{cJexp}) and write
\begin{equation}
{{\cal P}} = {*d}\xi +4 {*F}\, (\xi^\mu\, A_\mu)\,.\label{waldP}
\end{equation}
This is the form in which the conserved charge was obtained in
\cite{Wald}.
An objection one may raise to the expression (\ref{waldP}) is that it is
not invariant under gauge transformations of $A$. Specifically, if
we send $A\longrightarrow A + d\lambda$, then we shall have
\begin{equation}
{{\cal P}}\longrightarrow {{\cal P}} + 4 {*F}\, (\xi^\mu\, {\partial}_\mu\lambda)\,.
\end{equation}
Since the Killing vector $\xi^\mu$ generates a symmetry of the solution,
it follows that the Lie derivative of $F$ will vanish, ${\mathfrak L}_\xi F=0$.
We may assume that a gauge choice for $A$ is made so
that ${\mathfrak L}_\xi A=0$ also. However,
there can still remain a residual gauge freedom that preserves
this choice, namely when the gauge parameter $\lambda$ satisfies
\begin{equation}
\xi^\mu\, {\partial}_\mu\lambda = c\,,\label{residuals}
\end{equation}
where $c$ is a constant. This can be seen from the fact that
gauge transformations preserving ${\mathfrak L}_\xi\, A=0$
must satisfy ${\mathfrak L}_\xi\,d\lambda
=(i_\xi\, d + d\, i_\xi)d\lambda = d\, i_\xi\, d\lambda =
d(\xi^\mu\, {\partial}_\mu\lambda)=0$.\footnote{Here $i_\xi$ denotes the
interior product of $\xi=\xi^\mu{\partial}_\mu$ with a $p$-form
$\omega=(1/p!)\, \omega_{\mu_1\cdots\mu_p}\, dx^{\mu_1}\wedge\cdots\wedge
dx^{\mu_p}$.
Its action is defined by $i_\xi\omega= (1/(p-1)!)\, \xi^{\mu_1}\,
\omega_{\mu_1\cdots\mu_p}\, dx^{\mu_2}\wedge\cdots\wedge
dx^{\mu_p}$. Note that if $\omega$ and $\nu$ are a $p$-form and a $q$-form,
then $i_\xi(\omega\wedge\nu)= (i_\xi \omega)\wedge \nu +
(-1)^p\, \omega\wedge (i_\xi \nu)$. The Lie derivative of any $p$-form
is given by ${\mathfrak L}_\xi\, \omega=
(di_\xi + i_\xi d)\omega$.\label{idef}}
The conserved charge in (\ref{Qcon1})
will then undergo a gauge transformation of the form
\begin{equation}
{{\cal Q}}[\xi] \longrightarrow {{\cal Q}}[\xi] + c\, Q\,,
\end{equation}
where $Q=1/(4\pi)\, \int{*F}$ is the electric charge.
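The exterior-algebra conventions of footnote~\ref{idef} can be exercised concretely. Representing $p$-forms by fully antisymmetric component arrays, the interior product satisfies the graded Leibniz rule $i_\xi(\omega\wedge\nu)=(i_\xi\omega)\wedge\nu+(-1)^p\,\omega\wedge(i_\xi\nu)$; the following sketch checks this on random data:

```python
import itertools, math
import numpy as np

rng = np.random.default_rng(4)
DIM = 4

def perm_sign(perm):
    # Sign of a permutation via inversion count.
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def alt(T):
    # Antisymmetrize a tensor over all of its indices.
    r = T.ndim
    out = np.zeros_like(T)
    for perm in itertools.permutations(range(r)):
        out = out + perm_sign(perm) * np.transpose(T, perm)
    return out / math.factorial(r)

def wedge(w, v):
    # (w ^ v)_{m1..m(p+q)} = (p+q)!/(p! q!) * Alt(w (x) v).
    p, q = w.ndim, v.ndim
    c = math.factorial(p + q) / (math.factorial(p) * math.factorial(q))
    return c * alt(np.multiply.outer(w, v))

def iota(xi, w):
    # (i_xi w)_{m2..mp} = xi^{m1} w_{m1 m2 .. mp}, as in footnote idef.
    return np.tensordot(xi, w, axes=(0, 0))

xi = rng.standard_normal(DIM)
for p, q in ((1, 2), (2, 2)):
    w = alt(rng.standard_normal((DIM,) * p))
    v = alt(rng.standard_normal((DIM,) * q))
    lhs = iota(xi, wedge(w, v))
    rhs = wedge(iota(xi, w), v) + (-1) ** p * wedge(w, iota(xi, v))
    assert np.allclose(lhs, rhs)
```

This antiderivation property is what is used repeatedly below, for instance in extracting $di_\xi A = -i_\xi F$.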
An alternative way of extracting an exterior derivative from
the expression (\ref{cJexp}) for ${{\cal J}}$ is to introduce a dual gauge
potential $\widetilde A$ such that ${*F}\equiv\widetilde F = d\widetilde A$,
and then write ${{\cal J}}=-d\widetilde{{\cal P}}$, where
\begin{equation}
\widetilde{{\cal P}} = {*d}\xi +4 \widetilde A\wedge d(\xi^\mu\, A_\mu)\,.
\label{gaugeinvP}
\end{equation}
It is evident that the corresponding conserved charge $\widetilde{{\cal Q}}[\xi]$
obtained by substituting $\widetilde{{\cal P}}$ into (\ref{Qcon1}) will
be invariant under the residual gauge transformations of $A$, which
satisfy (\ref{residuals}).
Our principal interest in this paper will be to apply the Wald construction
to the calculation of the angular momentum.
The two alternative expressions (\ref{waldP}) and (\ref{gaugeinvP}) for
a 2-form whose exterior derivative gives ${\cal J}$ then essentially
correspond to the standard Wald expressions that one obtains by using
either the original Lagrangian (\ref{originallag}) (leading to (\ref{waldP}))
or else the dual Lagrangian\footnote{The dual Lagrangian can be
obtained by adding a Lagrange multiplier term to (\ref{originallag}) and
writing
\begin{equation}
{\cal L}= \fft1{16\pi G}\, (R\, {*\rlap 1\mkern4mu{\rm l}} -2{*F}\wedge F + 4d\widetilde A
\wedge F)\,,
\end{equation}
where now $F$ and $\widetilde A$ are viewed as fundamental fields.
The equation of motion for $\widetilde A$ implies the usual Bianchi
identity for $F$. If instead we eliminate $F$ via its (algebraic)
equation of motion, we obtain (\ref{duallag}). The original and the
dual Lagrangian differ on-shell by the total derivative term
$-4(d\widetilde A\wedge dA)/(16\pi G)$.}
\begin{equation}
\widetilde{\cal L}_4 = \fft{1}{16\pi G}\, (R\, {*\rlap 1\mkern4mu{\rm l}} -
2{*\widetilde F}\wedge\widetilde F)\,,\label{duallag}
\end{equation}
where $\widetilde F= {*F}=d\widetilde A$. To see this, consider the
analogue of the Wald expression (\ref{waldP}) that one would derive
from the dual Lagrangian (\ref{duallag}):
\begin{equation}
{{\cal P}}_{\rm dual} = {*d}\xi +4 {*\widetilde F}\, (\xi^\mu\, \widetilde A_\mu)
\,.\label{waldPdual}
\end{equation}
The difference between this and $\widetilde {{\cal P}}$ (defined in (\ref{gaugeinvP}))
is therefore
\begin{equation}
{{\cal P}}_{\rm dual}-\widetilde {{\cal P}}= 4{*\widetilde F}\,\, i_\xi \widetilde A
- 4\widetilde A\wedge di_\xi A\,.
\end{equation}
Assuming that $\xi^\mu$ is a
Killing vector, so that the Lie derivative of the field strength $F$
in a solution vanishes, ${\mathfrak L}_\xi F =(di_\xi + i_\xi d)F=0$,
we may choose gauges where ${\mathfrak L}_\xi A=0$ and
${\mathfrak L}_\xi\widetilde A=0$. In
particular, this means that $di_\xi A=-i_\xi d A = -i_\xi F$. Using also
that ${*\widetilde F}=-F$, we see that
\setlength\arraycolsep{2pt} \begin{eqnarray}
{{\cal P}}_{\rm dual}-\widetilde {{\cal P}}&=& -4 i_\xi \widetilde A\, F + 4\widetilde A
\wedge i_\xi F\,,\nonumber\\
&=& -4i_\xi(\widetilde A\wedge F)\,.\label{Pdiff}
\end{eqnarray}
In particular, if we consider the case when $\xi={\partial}/{\partial}\phi$
is the
Killing vector that generates azimuthal rotations, then it follows from
the final line of (\ref{Pdiff}) that $({{\cal P}}_{\rm dual}-\widetilde {{\cal P}})$
has no pullback onto the 2-sphere over which we integrate to obtain
a conserved charge. This means that
${{\cal P}}_{\rm dual}$ and $\widetilde {{\cal P}}$ would give
identical expressions for the angular momentum.
In the following section,
we shall discuss the dimensional reduction of the theory, and
its solutions, on the azimuthal Killing vector ${\partial}/{\partial}\phi$. This
will provide us with a formalism that is particularly well adapted to
computing the angular momentum for the solutions we are interested in.
\section{Conserved Charge, Angular Momentum and Mass via Dimensional Reduction}
A convenient way of calculating the conserved charges is to perform
a Kaluza-Klein dimensional reduction on the $\phi$ coordinate. Thus we
write\footnote{Note that the reduction ansatz for $A$ is compatible
with the partial gauge condition ${\mathfrak L}_\xi A=0$
that we discussed previously,
since $i_\xi A= \chi$ and $i_\xi dA= -d\chi$, so $(di_\xi + i_\xi d)A=0$.}
\begin{equation}
d{s}^2_4 = e^{2\varphi}d\bar s^2_3+e^{-2\varphi}(d\phi+2\bar{\cal A})^2\,,
\qquad {A}=\bar A+ \chi(d\phi+2\bar {\cal A})\,,\label{kkred}
\end{equation}
where, whenever there is an ambiguity, we place a ``bar'' on three-dimensional
quantities to distinguish them from the unbarred four-dimensional ones.
Note that $F=\bar F + d\chi\wedge (d\phi+2\bar{{\cal A}})$.
The equations of motion for the three-dimensional fields then follow from
the dimensionally-reduced Lagrangian
\setlength\arraycolsep{2pt} \begin{eqnarray}
{\cal L}_3&=&\fft{\Delta\phi}{16\pi}\, \sqrt{-\bar g} \,\Big[\bar R-
2(\partial\varphi)^2-2e^{2\varphi}(\partial\chi)^2-
e^{-4\varphi}\bar{\cal F}^2-e^{-2\varphi} \bar F^2\Big]\,, \\
\bar {\cal F}&=&d\bar{\cal A},
\quad \bar F=d\bar A+2\chi d\bar{\cal A}\,,
\end{eqnarray}
where $\Delta\phi$ is the period of the azimuthal coordinate $\phi$.
The equations of motion for $\bar A$ and $\bar{\cal A}$ imply that
we can write
\begin{equation}
e^{-2\varphi}\, {\bar*\bar F}=d\psi,\qquad e^{-4\varphi}\, {\bar*\bar {\cal F}}
=d\sigma-2\chi d\psi\,.\label{Fduals}
\end{equation}
Here, $\psi$ and $\sigma$ are the axionic scalar duals of the 1-form
potentials $\bar A$ and $\bar{\cal A}$.
\subsection{Conserved electric charge}
Since ${*F}=e^{-2\varphi}\, {\bar * \bar F}\wedge (d\phi +2\bar{{\cal A}}) +
e^{2\varphi}\, {\bar *d\chi}$, the conserved electric charge is given by
\setlength\arraycolsep{2pt} \begin{eqnarray}
Q &=& \fft1{4\pi} \int_{S^2} {*F} = \fft{\Delta\phi}{4\pi}\,
\int e^{-2\varphi} \, {\bar *\bar F}\,,\nonumber\\
&=& \fft{\Delta\phi}{4\pi} \, \int d\psi = \fft{\Delta\phi}{4\pi}\,
\Big[\psi\Big]_{\theta=0}^{\theta=\pi}\,.\label{charge}
\end{eqnarray}
Note that here, and henceforth, we are allowing for the possibility
that the period $\Delta\phi$ of the azimuthal coordinate $\phi$ might
be different from $2\pi$. In particular, this happens in the case of
the magnetised black hole solutions that we shall be considering in
this paper.
\subsection{Conserved angular momentum}
We first calculate the angular momentum $J={{\cal Q}}[\xi]$ using (\ref{Qcon1})
with ${{\cal P}}$ given by (\ref{waldP}), and with
$\xi={\partial}/{\partial}\tilde\phi$ where $\tilde\phi$ is the
canonically-normalised azimuthal angular coordinate with period $2\pi$.
It will, in general, be related to $\phi$ by $\phi=\alpha\tilde\phi$, with
$\alpha=\Delta\phi/(2\pi)$. As a 1-form, $\xi$ will be given, in terms of the
three-dimensional quantities, by
\begin{equation}
\xi = \alpha\, e^{-2\varphi}\, (d\phi+2\bar{{\cal A}})\,,
\end{equation}
and furthermore $\xi^\mu A_\mu= \alpha \chi$, so from (\ref{waldP})
\begin{equation}
{{\cal P}} = *\Big[2\alpha e^{-2\varphi}\, \bar{{\cal F}} + 4\alpha \chi \bar F
-2\alpha(e^{-2\varphi}\, d\varphi -2\chi d\chi)\wedge (d\phi+2\bar{{\cal A}})\Big]
\,.
\end{equation}
Thus using (\ref{Fduals}) we have
\setlength\arraycolsep{2pt} \begin{eqnarray}
{{\cal P}} &=& 2\alpha (e^{-4\varphi}\, {\bar *\bar {{\cal F}}} + 2\chi\, e^{-2\varphi}\,
{\bar *\bar F})\wedge (d\phi+2\bar{{\cal A}}) -2\alpha({\bar *d}\varphi
-2\chi\, e^{2\varphi}\, {\bar *d}\chi)\,,\nonumber\\
&=& 2\alpha d\sigma\wedge(d\phi+2\bar{{\cal A}}) -2\alpha({\bar *d}\varphi
-2\chi\, e^{2\varphi}\, {\bar *d}\chi) \,.
\end{eqnarray}
Only the first term has a non-zero pullback onto the 2-sphere, and so
this gives a conserved angular momentum
\begin{equation}
J = \fft1{16\pi} \int_{S^2} {{{\cal P}}} = \fft{\alpha\Delta\phi}{8\pi} \int d\sigma=
\fft{(\Delta\phi)^2}{16\pi^2}\, \Big[\sigma\Big]_{\theta=0}^{\theta=\pi}
\,.\label{angmom}
\end{equation}
As we discussed in section 2, a different choice for the definition of
the angular momentum is
to perform a dualisation of the four-dimensional field strength $F$,
and work instead with $\widetilde F= {*F}$ as the fundamental
electromagnetic field strength. As discussed in \cite{gimupo}, in the
three-dimensional language this dualisation amounts to interchanging the
three-dimensional fields $\chi$ and $\psi$. At the same time, the
field $\sigma$ must be redefined, so that in the dual formulation we shall have
tilded fields given in terms of the original ones by \cite{gimupo}
\begin{equation}
\widetilde\chi=\psi\,,\qquad \widetilde\psi=\chi\,,\qquad
\tilde\sigma = \sigma -2\chi\, \psi\,.\label{duality}
\end{equation}
It follows that in this dualised formalism, the angular momentum defined
in (\ref{angmom}) would be replaced by
\begin{equation}
\widetilde J =
\fft{(\Delta\phi)^2}{16\pi^2}\, \Big[\tilde\sigma\Big]_{\theta=0}^{\theta=\pi}
\,.\label{tangmom}
\end{equation}
It is instructive to look at the behaviour of the
two expressions under gauge transformations.
The quantity ${\cal P}$ defined in (\ref{Qcon1}) which we used in order to
calculate the angular momentum (\ref{angmom})
is in general gauge dependent, since the
potential $A_\mu$ appears explicitly in its construction. This can
be seen in the three-dimensional language as follows. If we perform a
gauge transformation $A\longrightarrow A'=A+d\lambda$ on the four-dimensional
gauge potential, then this will be compatible with the Kaluza-Klein reduction
ansatz (\ref{kkred}) for $A$ provided that $\lambda$ is restricted to
have the form
\begin{equation}
\lambda = \bar\lambda + c\, \phi\,,\label{gaugetrans}
\end{equation}
where $\bar\lambda$ depends only on the three-dimensional coordinates
and $c$ is a constant. Specifically, comparing with the reduction
ansatz
\begin{equation}
A' =\bar A' + \chi'\, (d\phi + 2\bar{{\cal A}}')\,,
\end{equation}
we see that the three-dimensional fields will transform as
\begin{equation}
\chi'= \chi+c\,,\qquad \bar A'= \bar A-2 c \,\bar{{\cal A}} + d\bar\lambda\,,\qquad
\bar {{\cal A}}' =\bar{{\cal A}}\,.\label{chiAtrans}
\end{equation}
Since $\bar F= d\bar A + 2\chi\, d\bar{{\cal A}}$, it follows that
\begin{equation}
\bar F' = d\bar A' + 2\chi'\, d\bar{{\cal A}} = d\bar A+ 2\chi\, d\bar{{\cal A}}=\bar F\,,
\end{equation}
and therefore from (\ref{Fduals}) we see that
\begin{equation}
\psi'=\psi\,.\label{psitrans}
\end{equation}
Since we also have $\bar{{\cal F}}'=\bar{{\cal F}}$ it also follows from (\ref{Fduals})
that
\begin{equation}
d\sigma'- 2\chi'\, d\psi' = d\sigma - 2\chi\, d\psi\,,
\end{equation}
and so using $\chi'= \chi+c$ and $\psi'=\psi$ we see that
\begin{equation}
\sigma'= \sigma + 2c\, \psi\,.\label{sigtrans}
\end{equation}
It follows from (\ref{sigtrans}) that if we perform the gauge
transformation in (\ref{gaugetrans}) that is parameterised by the
constant $c$, then the angular momentum given by (\ref{angmom}) will
transform to
\begin{equation}
J' = J + c\, Q\, \fft{\Delta\phi}{2\pi}\,,\label{Jtrans1}
\end{equation}
where $Q$ is the conserved electric charge given by (\ref{charge}).
If, on the other hand, we consider the angular momentum $\widetilde J$
defined by (\ref{tangmom}), then we see that under the gauge transformations
parameterised by $c$ we shall have
\begin{equation}
\tilde\sigma' = \sigma' - 2\chi'\, \psi'= \sigma + 2 c\,\psi -
2\chi\, \psi -2c\, \psi= \sigma-2\chi\, \psi=\tilde\sigma,
\end{equation}
and so $\widetilde J$ is gauge invariant. To be more precise, the
expression (\ref{tangmom}) for $\widetilde J$ is invariant under gauge
transformations of the original potential $A$. Conversely, the
expression (\ref{angmom}) for $J$ is invariant under gauge transformations
of the {\it dual} potential $\widetilde A$. Correspondingly, $J$ does
depend on gauge transformations of $A$, whilst $\widetilde J$ depends
on gauge transformations of $\widetilde A$.
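The little algebra above can be checked mechanically. The following is a minimal symbolic sketch (using \texttt{sympy}; the variable names are ours) confirming that $\tilde\sigma=\sigma-2\chi\,\psi$ is inert under the residual transformation parameterised by the constant $c$:

```python
import sympy as sp

# Residual gauge transformation parameterised by the constant c,
# acting on the three-dimensional scalars as in (chiAtrans), (psitrans), (sigtrans):
chi, psi, sigma, c = sp.symbols('chi psi sigma c')
chi_p   = chi + c
psi_p   = psi
sigma_p = sigma + 2*c*psi

# tilde sigma = sigma - 2 chi psi is unchanged by this transformation
tsigma   = sigma - 2*chi*psi
tsigma_p = sigma_p - 2*chi_p*psi_p

print(sp.expand(tsigma_p - tsigma))  # 0
```

The cancellation between the shift $2c\psi$ in $\sigma$ and the shift $-2c\psi$ coming from $-2\chi'\psi'$ is exactly the gauge invariance of $\widetilde J$ noted above.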
In the reduced three-dimensional description, the residual gauge
transformations of the dual potential $\widetilde A$, preserving
${\mathfrak L}_\xi\, \widetilde A=0$, correspond to sending
\begin{equation}
\psi\longrightarrow \psi + b\,,\qquad
\chi\longrightarrow \chi\,,\qquad
\sigma\longrightarrow \sigma\,,\label{dualtrans}
\end{equation}
where $b$ is a constant parameter. This implies that
$\tilde\sigma\equiv \sigma-2 \chi\, \psi$, which is invariant under
the original residual gauge transformations, will transform as
\begin{equation}
\tilde\sigma\longrightarrow \tilde\sigma -2b\, \chi
\end{equation}
under the dual residual gauge transformations. However, if there
is no magnetic charge, and thus $[\chi]_{\theta=0}^{\theta=\pi}=0$,
then the angular momentum $\widetilde J$ calculated using (\ref{tangmom})
will be invariant also under (\ref{dualtrans}).
It should be noted also that if we are able to make a gauge transformation
of the form (\ref{chiAtrans}) that sets $\chi$ to zero on the $z$ axis,
then the gauge-invariant expression (\ref{tangmom}) for the angular
momentum of an electrically-charged solution will coincide with the
expression, in general gauge-dependent, following from (\ref{angmom}).
\section{Mass of the Kerr-Newman-Melvin Black Holes}
In this section, we shall be describing an approach to calculating the
mass of the magnetised black holes by means of a dimensional reduction to
three dimensions. In order to avoid a profusion of annotations on
the three-dimensional equations we shall, {\it in this section only},
adopt the convention that four-dimensional quantities are denoted with
hats, while three-dimensional ones are unadorned.
\subsection{Hamiltonian formalism}
The original four-dimensional theory is given by
\begin{equation}
\hat I =\frac1{16\pi G_4}\int_{\hat{M}}
\Big(\hat{R} - \hat{F}^2\Big)\,\sqrt{-\hat g}\, d^4 x
+\frac1{8\pi G_4}
\oint_{\partial \hat{M}}\hat{K}\, \sqrt{|\hat \gamma|}\, d^3x\,,
\end{equation}
where $\hat K$ is the extrinsic curvature of the three-dimensional
boundary ${\partial}\hat M$, which
has the induced metric $\hat\gamma_{\mu\nu}$. Upon dimensional
reduction on a circle using the standard Kaluza-Klein ansatz
\begin{equation}
d\hat s_4^2 = e^{2\varphi}\, ds_3^2 + e^{-2\varphi}\, (d\phi + 2{{\cal A}})^2
\,,\qquad \hat A= A + \chi\, (d\phi+2 {{\cal A}})\,,\label{kkans2}
\end{equation}
we obtain the three-dimensional theory
\setlength\arraycolsep{2pt} \begin{eqnarray}
I&=&\frac1{16\pi G_3}\int_{M}
\Big(R-2\Box\varphi-2(\partial\varphi)^2-2e^{2\varphi}(\partial\chi)^2-
e^{-4\varphi}{\cal F}^2-e^{-2\varphi}F^2\Big)\, \sqrt{-g}\, d^3 x\nonumber\\
&&+\frac1{8\pi G_3}\oint_{\partial M}(K+n^{\mu}\partial_{\mu}\varphi)
\,\sqrt{|\gamma|}\, d^2x \,,
\qquad G_4=(\Delta\phi)\, G_3\,,
\end{eqnarray}
where $\Delta\phi$ is the period of the reduction coordinate $\phi$.
In the following, we set $G_4=1$, and therefore $G_3=1/(\Delta\phi)$.
After integration by parts,
\begin{equation}
I=\frac{\Delta\phi}{16\pi}\int_{M}\Big(R-2(\partial\varphi)^2-
2e^{2\varphi}(\partial\chi)^2-e^{-4\varphi}{\cal F}^2-
e^{-2\varphi}F^2\Big)\, \sqrt{-g}\, d^3x
+\frac{\Delta\phi}{8\pi}\oint_{\partial M}K\, \sqrt{|\gamma|}\, d^2x\,.
\end{equation}
Adding Lagrange multipliers $4d\psi\wedge(F-2\chi{\cal F})+
4d\sigma\wedge {\cal F}$ and eliminating $F$ and ${\cal F}$, we arrive at
the dualised
Lagrangian describing three-dimensional gravity coupled to a sigma model
\begin{equation}
I=\frac{\Delta\phi}{16\pi}\int_{M}\Big(R-2\Sigma_{AB}(\phi)\,
\partial\phi^A\partial\phi^B\Big)\, \sqrt{-g}\, d^3x
+\frac{\Delta\phi}{8\pi}\oint_{\partial M}K\, \sqrt{|\gamma|}\, d^2x\,,
\end{equation}
where $\phi^A$ represents all the scalars. The sigma-model metric
is
\begin{equation}
d\Sigma^2=d\varphi^2+e^{2\varphi}(d\chi^2+d\psi^2)+
e^{4\varphi}(d\sigma-2\chi d\psi)^2\,.
\end{equation}
As stated before, the dualised action differs from the original one
by a total derivative term, and this will modify the definition of
energy\footnote{It is easy to see this in the Wald procedure, where
adding a total derivative term $d\nu$ to the Lagrangian will shift the
canonical charge associated with Killing vector $\xi$ by $i_{\xi}\nu$.}.
The original Lagrangian cannot easily be used to calculate the energy
because the corresponding Hamiltonian contains terms
such as $\oint_{S_{\infty}} A d\psi$ and
$\oint_{S_{\infty}} {\cal A} d\sigma$,
whose evaluation is unclear. However, these terms are absent in the
dualised Lagrangian, rendering the calculation better defined.
We shall therefore carry out our calculations,
and give a thermodynamic interpretation, using the
dualised form of the Lagrangian. This can be viewed as a choice of
regularisation scheme for giving a definition of mass that is
applicable in the rather unusual asymptotic geometry of the magnetised
black hole solution.
In the ADM decomposition, the three-dimensional metric is recast into
the form
\begin{equation}
ds^2_3=-N^2dt^2+h_{ij}(dx^i+N^idt)(dx^j+N^jdt)\,.
\end{equation}
It follows that the Hamiltonian defined on the constant $t$ surface takes
the form \cite{Brown:1990fk}
\begin{equation}
H=\int_{\Sigma^t}d^2x(N{\cal H}+N^i{\cal H}_i)-
\oint_{S_{\infty}^t}dx\sqrt{\sigma}[\frac{\Delta\phi}{8\pi }N k+
\frac{2}{\sqrt{h}}N^iP_{ij}n^j]\,,
\end{equation}
where ${\cal H}$ and ${\cal H}_i$ are the total Hamiltonian constraint
and the momentum constraint. Using the extrinsic curvature $K_{ij}$ of
$\Sigma^t$, the momentum $P^{ij}$, conjugate to $h_{ij}$, can be expressed as
\begin{equation}
P_{ij}=\frac{\Delta \phi}{16\pi}\sqrt{h}(K_{ij}-Kh_{ij}),\qquad
K_{ij}=\frac{1}{2N}(\dot{h}_{ij}-2\nabla_{(i}N_{j)})\,,
\end{equation}
where $\nabla_i$ is defined with respect to $h_{ij}$. $S_{\infty}^t$,
defined at $t={\rm const}$ and $r=\infty$, is a hypersurface
inside $\Sigma^t$ with outward unit normal vector $n^i$. The
quantity
$k \equiv h^{ij}\nabla_i n_j$ is the trace of the extrinsic
curvature of $S_{\infty}^t$.
In general, the above expression for the Hamiltonian diverges. To
obtain a
meaningful result, we must regularize the Hamiltonian by making
a subtraction in the surface term:
\begin{equation}
H=\int_{\Sigma^t}d^2x(N{\cal H}+N^i{\cal H}_i)-\oint_{S_{\infty}^t}
dx\sqrt{\sigma}[\frac{\Delta\phi}{8\pi }N(k-k_0)+
\frac{2}{\sqrt{h}}N^ip_{ij}n^j]\,,
\label{Ham3}
\end{equation}
where $k_0$ is the extrinsic curvature of $S_{\infty}^t$ embedded in
a certain two-dimensional reference background.
\subsection{Mass of the Kerr-Newman black hole}
Before computing the mass of the Kerr-Newman-Melvin black hole, we first
show how the three-dimensional Hamiltonian we have derived reproduces
the standard
mass for the Kerr-Newman black hole.
On shell, we have ${\cal H}={\cal H}^i=0$, and the Hamiltonian receives
contributions only from the boundary terms.
According to the reduction ansatz (\ref{kkans2}), the three-dimensional
metric induced from the four-dimensional Kerr-Newman black hole is given by
\setlength\arraycolsep{2pt} \begin{eqnarray}
&&ds^2_{3{\rm KN}} =-
\Delta\sin^2\theta dt^2+\Sigma\, \sin^2\theta
\, (\frac{dr^2}{\Delta}+ d\theta^2)\,,\label{gkn}\\
&&\rho^2=r^2+a^2 \cos^2\theta\,,\quad \Delta=r^2-2mr + a^2 + q^2\,,\quad
\Sigma=(r^2+a^2)^2 - a^2\Delta \sin^2\theta\,.\nonumber
\end{eqnarray}
This 3-metric is static, and so $p^{ij}=0$. The extrinsic curvature of
$S^t_{r=r_0}$ in $\Sigma^t$ can be computed, giving
\begin{equation}
k = \fft1{\sqrt{\sigma}}\, \fft{{\partial}\sqrt{\sigma}}{{\partial} n}
=
\frac1{2\sin\theta}\sqrt{\frac{\Delta}{\Sigma}}
\frac{\partial_r\Sigma}{\Sigma}\Big|_{r=r_0}\,,
\end{equation}
where $\sigma=g_{\theta\theta}$ is the determinant of the 1-dimensional
boundary metric and ${\partial}/{\partial} n$ is the derivative with respect to the
unit normal in the radial direction at $r=r_0$.
To compute $k_0$, we recall that the reference metric for the
four-dimensional Kerr-Newman black hole is the four-dimensional Minkowski
metric, which upon dimensional reduction gives rise to the three-dimensional
reference metric
\begin{equation}
ds^2=-
R^2\sin^2\theta dt^2+ R^4\, \sin^2\theta\,
\Big(\frac{dR^2}{R^2}+ d\theta^2\Big)\,.\label{refmet}
\end{equation}
The calculation of $k_0$ requires us to embed $S^t_{r=r_0}$ into the above
background in such a way that the metric on $S^t_{r=r_0}$ induced from
the reference metric should be isometric to the metric on $S^t_{r=r_0}$
induced from $\Sigma^t$. Thus the $t=\,$constant boundary at $R=R_0$
in the reference metric should be matched to the $t=\,$constant boundary
at $r=r_0$ in the reduction of the Kerr-Newman metric, implying
\begin{equation}
R_0^4 = \Sigma\Big|_{r=r_0}\,.
\end{equation}
This gives
\begin{equation}
k_0 = \fft1{\sqrt{\sigma_0}}\, \fft{{\partial}\sqrt{\sigma_0}}{{\partial} n}
= \fft{2}{\sqrt{\Sigma}\, \sin\theta}\Big|_{r=r_0}\,,
\end{equation}
where $\sigma_0= g_{\theta\theta}$ is the determinant of the 1-dimensional
boundary metric in the reference metric (\ref{refmet}).
Bearing in mind that the azimuthal coordinate $\phi$ has period
$\Delta\phi=2\pi$ in the Kerr-Newman metric, we therefore find
from (\ref{Ham3}) that
\begin{equation}
E_{\rm KN}=-\frac1{4}\oint_{S_{\infty}^t}dx\sqrt{\sigma}N(k-k_0)=m\,,
\label{knmass}
\end{equation}
which reproduces the mass for the Kerr-Newman black hole.
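The boundary-term computation above can be checked symbolically. The sketch below (in \texttt{sympy}; our own variable names) uses the simplifications $\sqrt{\sigma}\,N\,k = \sin\theta\,\Delta\,\partial_r\Sigma/(2\Sigma)$ and $\sqrt{\sigma}\,N\,k_0 = 2\sqrt{\Delta}\,\sin\theta$, which follow from $\sqrt{\sigma}=\sqrt{\Sigma}\sin\theta$ and $N=\sqrt{\Delta}\sin\theta$, and then takes the $r\to\infty$ limit before integrating over $\theta$:

```python
import sympy as sp

# Kerr-Newman parameters and coordinates
r, theta, m, a, q = sp.symbols('r theta m a q', positive=True)
Delta = r**2 - 2*m*r + a**2 + q**2
Sigma = (r**2 + a**2)**2 - a**2*Delta*sp.sin(theta)**2

# sqrt(sigma) N k and sqrt(sigma) N k0, using sqrt(sigma) = sqrt(Sigma) sin(theta)
# and N = sqrt(Delta) sin(theta):
term_k  = sp.sin(theta)*Delta*sp.diff(Sigma, r)/(2*Sigma)
term_k0 = 2*sp.sqrt(Delta)*sp.sin(theta)

# E = -(1/4) * integral over theta of sqrt(sigma) N (k - k0), evaluated at r -> oo
integrand = -sp.Rational(1, 4)*(term_k - term_k0)
E = sp.integrate(sp.limit(integrand, r, sp.oo), (theta, 0, sp.pi))
print(sp.simplify(E))
```

The result should be the Kerr-Newman mass $m$, independently of $a$ and $q$, in agreement with (\ref{knmass}).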
\subsection{Mass of the Kerr-Newman-Melvin black hole}
We now turn to the calculation of the mass of the Kerr-Newman-Melvin
black hole. The calculation closely resembles the previous case,
since the dimensionally-reduced 3-metric of the Kerr-Newman-Melvin black hole
is identical to that for the Kerr-Newman case, given in (\ref{gkn}). The
three-dimensional evaluation of the mass differs in only one respect,
namely that the period $\Delta\phi$ of the azimuthal coordinate is
no longer $2\pi$, and so the mass is now given by
\begin{equation}
E_{\rm KNM}=\fft{\Delta\phi}{2\pi}\, m\,.\label{knmmass}
\end{equation}
\subsection{Euclidean action}
The Lorentzian action is related to the Hamiltonian by
\begin{equation}
I=\int d^3x\Big(P^{ij}\dot{h}_{ij}+\pi_A\dot{\phi}^A\Big)-\int dt H\,.
\end{equation}
Therefore, for stationary solutions, the Euclidean action is given by
\begin{equation}
I_E=\beta H\,.
\end{equation}
The Hamiltonian will include a
contribution from the horizon. The total on-shell Hamiltonian
then takes the form \cite{Brown:1990fk}
\begin{equation}
H=-\oint_{S_{\infty}^t}dx\sqrt{\sigma}[\frac{\Delta\phi}{8\pi }
N(k-k_0)+\frac{2}{\sqrt{h}}N^ip_{ij}n^j]-
\oint_{S^t_H}dx\sqrt{\sigma}[\frac{\Delta\phi}{8\pi }n^i\partial_iN-
\frac{2}{\sqrt{h}}N^ip_{ij}n^j]\,.
\end{equation}
The first term is equal to the energy $E$, and the second term gives rise to
$-TS$. Thus the Euclidean action of the dualised theory is equal to
the Helmholtz free energy $F=E-TS$, suggesting that the dualised action
provides a canonical ensemble description for black-hole thermodynamics.
\section{Conserved Charges for Kerr-Newman-Melvin Black Holes}
\subsection{Kerr-Newman black holes}
Before turning to the magnetised black hole metric, let us first
illustrate the three-dimensional procedure for calculating the conserved
charges by considering the original four-dimensional Kerr-Newman solution
itself, for which the reduction to three dimensions
gives\footnote{Expressions for the three-dimensional fields $\psi$,
$\chi$ and $\sigma$ can be found from those in \cite{gimupo}, by specialising
to the case where the external magnetic field is set to zero. Note that
some sign conventions in \cite{gimupo}, associated with the
definition of the orientation of the 2-spheres, differ from ours.}
\setlength\arraycolsep{2pt} \begin{eqnarray}
d\bar s^2_3 &=&\sin^2\theta \Sigma(\frac{dr^2}{\Delta}+ d\theta^2)-
\Delta\sin^2\theta dt^2\,,\nonumber\\
\chi &=& \fft{a q r \sin^2\theta}{\rho^2}\,,
\qquad \psi= -\fft{q(r^2+a^2)\cos\theta}{\rho^2}
\,,\nonumber\\
\sigma &=& -\fft{2 a m \cos\theta[r^2(3-\cos^2\theta) + a^2(1+\cos^2\theta)]}{
\rho^2} - \fft{2a^3 q^2 r \cos\theta \sin^4\theta}{\rho^4}\,,
\label{kn3dim}
\end{eqnarray}
where
\begin{equation}
\rho^2=r^2+a^2 \cos^2\theta\,,\qquad \Delta=r^2-2mr + a^2 + q^2\,,\qquad
\Sigma=(r^2+a^2)^2 - a^2\Delta \sin^2\theta\,.
\end{equation}
For the mass, we saw that (\ref{knmass}) gave the expected result
\begin{equation}
E=m\,.
\end{equation}
The angular velocity and the electrostatic potential on
the horizon are given, in the three-dimensional calculation, by
\begin{equation}
\Omega= -2 i_k\bar {\cal A}\Big|_{r=r_+} =\fft{a}{r_+^2+a^2}\,,\qquad
\Phi_H= - i_k \bar A\Big|_{r=r_+} = \fft{q r_+}{r_+^2 + a^2}\,.
\end{equation}
The electric
charge and angular momentum, given by (\ref{charge}) and (\ref{angmom}),
are
\begin{equation}
Q=\ft12
\Big[\psi\Big]_{\theta=0}^{\theta=\pi}\ = q\,,\qquad
J= \ft14
\Big[\sigma\Big]_{\theta=0}^{\theta=\pi} \ = am\,,
\qquad
\widetilde J= \ft14
\Big[\tilde\sigma\Big]_{\theta=0}^{\theta=\pi} \ = am\,,
\label{knQJ}
\end{equation}
since in this Kerr-Newman example the period $\Delta\phi$ of the azimuthal
coordinate $\phi$ is $2\pi$.
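As an illustration, the value $Q=q$ can be read off directly from the expression for $\psi$ in (\ref{kn3dim}), since $\psi\to\mp q$ on the axis at $\theta=0,\pi$. A short \texttt{sympy} sketch:

```python
import sympy as sp

r, a, q, theta = sp.symbols('r a q theta', positive=True)
rho2 = r**2 + a**2*sp.cos(theta)**2
psi  = -q*(r**2 + a**2)*sp.cos(theta)/rho2   # from (kn3dim)

# Q = (1/2) [psi] between theta = 0 and theta = pi, since Delta phi = 2 pi here
Q = sp.Rational(1, 2)*(psi.subs(theta, sp.pi) - psi.subs(theta, 0))
print(sp.simplify(Q))  # q
```

Note that the result is independent of the radius $r$ at which the axis values are evaluated, as it must be for a conserved charge.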
As we discussed in section 3.2, the expression for $J$
given in (\ref{angmom}) is not invariant under the residual gauge
transformations
\begin{equation}
\chi\longrightarrow \chi+c\,,\qquad \sigma\longrightarrow
\sigma + 2 c\, \psi\,,\qquad \psi\longrightarrow \psi\,,
\label{ctrans}
\end{equation}
and in fact, from (\ref{Jtrans1}), we will have
\begin{equation}
J\longrightarrow J + c\, Q
\end{equation}
in this case. Thus the fact that $J$ in (\ref{knQJ}) has turned out to
give the correct result for the angular momentum is
a consequence of a happy choice of gauge; as can be seen from (\ref{kn3dim}),
it is the one in which
$\chi$ goes to zero on the axis at $\theta=0$ and $\theta=\pi$,
thus implying that it is regular there. Furthermore, it goes to zero
at infinity.
By contrast, as we discussed in section 3.2, the expression (\ref{tangmom})
for $\widetilde J$ {\it is} invariant under the residual gauge transformations
(\ref{ctrans}), and so it is not subject to the same ambiguities.
Finally, we remark that a simple way to calculate the angular
momentum of a dyonically charged Kerr-Newman black hole is first to perform
a duality transformation to the case with purely electric charge,
and then use the gauge-invariant expression (\ref{tangmom}).
\subsection{Charges and thermodynamics for the Kerr-Newman-Melvin metrics}
Here, we use the Kaluza-Klein formalism of section 3 to
calculate the electric charge and angular momentum for the Kerr-Newman-Melvin
metrics. These are again given by (\ref{charge}) and (\ref{angmom}) or
(\ref{tangmom}), where expressions for
the scalar fields $\sigma$, $\chi$ and $\psi$ can be found
in appendices A and B of \cite{gimupo}. The period of $\phi$, determined
by the requirement of there being no conical singularity on the axis
at $\theta=0$ and $\theta=\pi$, is now given by \cite{gimupo}
\begin{equation}
\Delta\phi= 2\pi \Big[1+\ft32 q^2 B^2 + 2aqm B^3 +
(a^2 m^2 +\ft1{16} q^4)B^4\Big]\,.\label{Deltaphi}
\end{equation}
It is straightforward to see
that the expression for $\chi$ given in \cite{gimupo} is non-vanishing on
the axis at $\theta=0$ and $\theta=\pi$: we have
\begin{equation}
\chi|_{\theta=0} = \chi|_{\theta=\pi} = \gamma\equiv \fft{\pi B}{4\Delta\phi}\,
[12 q^2 + 24 a m q B + (q^4 + 16 a^2 m^2) B^2]\,.
\end{equation}
It is therefore natural, in the light of the previous calculation for the
Kerr-Newman black hole, to make a gauge transformation of the form
(\ref{ctrans}) with $c=-\gamma$ before evaluating the gauge-dependent
expression (\ref{angmom}) for the angular momentum. Assuming that
we do this, we then find
\setlength\arraycolsep{2pt} \begin{eqnarray}
Q &=& q + 2am B - \ft14 q^3 B^2\,,\label{QJ}\\
J&=& \widetilde J=
a m - q^3 B - \ft32 am q^2 B^2 -\ft14 q B^3 (8 a^2 m^2 + q^4)
-\ft1{16} am B^4 (16 a^2 m^2 + 3 q^4) \,, \nonumber
\end{eqnarray}
(The calculation of $Q$ is discussed in \cite{gimupo}.) By having chosen
the gauge where $\chi$ vanishes on the axis, we obtain the same expression
$J$ for the angular momentum as we get from the gauge-invariant
expression $\widetilde J$ given by (\ref{tangmom}).
For the mass, we see from (\ref{knmmass}) that the result for the
Kerr-Newman-Melvin metric
will be just the usual value $m$, now scaled by a factor of
$(\Delta\phi)/(2\pi)$, where $\Delta\phi$ is the period of the
azimuthal angle $\phi$, given in (\ref{Deltaphi}).
Thus we find
that the mass is given by
\begin{equation}
E=m \Big[1+\ft32 q^2 B^2 + 2aqm B^3 +
(a^2 m^2 +\ft1{16} q^4)B^4\Big]
\,.\label{kmmass}
\end{equation}
The area $A_H$ of the outer horizon
and the surface gravity $\kappa$ can be straightforwardly calculated
from the Kerr-Newman-Melvin metrics given in \cite{gimupo}, leading to
\setlength\arraycolsep{2pt} \begin{eqnarray}
A_H&=&4 \pi \Big(1+a^2 m^2 B^4+2 a mB^3 q+\frac{3 B^2q^2}{2}+
\frac{B^4 q^4}{16}\Big) \Big(a^2+(m+\sqrt{m^2-a^2-q^2})^2\Big)\,,\nonumber\\
\frac{\kappa}{8\pi}&=&\frac{\sqrt{m^2-a^2-q^2}}{8\pi
\Big(a^2+(m+\sqrt{m^2-a^2 -q^2})^2\Big)}\,.\label{area}
\end{eqnarray}
Assuming for now that we hold the external magnetic field $B$ fixed, we
can expect that the first law should take the form
\begin{equation}
dE=\frac{\kappa}{8\pi}dA_H+\Omega dJ+\Phi dQ\,,\label{firstlaw}
\end{equation}
where $\Omega=\Omega_H-\Omega_\infty$ is the difference between the
angular velocity of the horizon
and the angular velocity at infinity, and $\Phi=\Phi_H-\Phi_\infty$ is
the potential difference
between the horizon and infinity. Because of subtleties associated with the
asymptotic structure of the Kerr-Newman-Melvin metrics at infinity,
it is not obvious
how to calculate $\Omega_\infty$ and $\Phi_\infty$. We can, however,
proceed by using our results
above for the other thermodynamic quantities, and then
seeking solutions for $\Phi$ and $\Omega$ such that the first law
(\ref{firstlaw}) holds. We find that solutions do indeed exist. This is
in fact non-trivial, since with three independent parameters being varied
in (\ref{firstlaw}) we have three equations for the two unknowns
$\Omega$ and $\Phi$.\footnote{The fact that a solution exists for $\Omega$
and $\Phi$ also provides non-trivial support for the validity
of our expression (\ref{kmmass}) for the mass of the Kerr-Newman-Melvin
solution.} The solutions for $\Omega$ and $\Phi$ are rather
complicated, and we shall not present them in detail here. Later, we
shall present leading-order terms in $\Omega$ and $\Phi$ in useful
approximations.
Firstly, however, we remark that we can also allow $B$ to become an
additional thermodynamic variable in the first law, which will now be
generalised to
\begin{equation}
dE=\frac{\kappa}{8\pi}dA_H+\Omega dJ+\Phi dQ - \mu dB\,,\label{genfirstlaw}
\end{equation}
where $\mu$ has the interpretation of being the magnetic moment of the
system. (Analogous expressions
have been obtained for the case of Einstein-Dilaton-Maxwell
theory in the Kaluza-Klein context by Yazadjiev \cite{yaz}.)
Again, it is non-trivial that a solution for $\mu$ exists. Having
obtained $\mu$, which is also rather complicated in general, it is
straightforward to verify that the various thermodynamic quantities
satisfy the Smarr-like relation
\begin{equation}
E= \frac{\kappa}{4\pi} A_H + 2\Omega J + \Phi Q + \mu B\,.
\end{equation}
As we mentioned above, the solutions for $\Omega$, $\Phi$ and $\mu$
are rather complicated in general. It is instructive to look at
the leading-order
forms of these quantities. Up to linear order in $q$, we find
\begin{equation}
\mu = a q(1+a^2m^2 B^4) + {\cal O}(q^2)\,.\label{muexp}
\end{equation}
To linear order in $B$, we find
\setlength\arraycolsep{2pt} \begin{eqnarray}
\Omega &=& \fft{a}{r_+^2 +a^2} - \fft{2 q B r_+}{r_+^2 +a^2} +
{\cal O}(B^2)\,,\nonumber\\
\Phi &=& \fft{q r_+}{r_+^2+a^2} +
\fft{3 a q^2 B}{(r_+^2+a^2)} + {\cal O}(B^2)
\,.\label{OmPhi}
\end{eqnarray}
Note that from (\ref{muexp}) we have, to lowest order, that
$\mu = J q/m$, reproducing the gyromagnetic ratio $g=2$ as found
by Carter \cite{Carter}.
We also see that the second term in the expression for $\Omega$ in
(\ref{OmPhi}) agrees with the standard formula for the Larmor precession
frequency $\Omega_L = \mu B/J$, in the limit where we may approximate
$r_+$ by $2m$.
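As a consistency check, one can verify symbolically that the truncated expressions for $\Omega$ and $\Phi$ in (\ref{OmPhi}), together with $E$, $A_H$, $\kappa$, $Q$ and $J$ given above, satisfy the first law (\ref{firstlaw}) at fixed $B$ through linear order in $B$. A \texttt{sympy} sketch (the numeric evaluation point is ours, chosen arbitrarily in the non-extremal regime):

```python
import sympy as sp

m, a, q, B = sp.symbols('m a q B', positive=True)
S  = sp.sqrt(m**2 - a**2 - q**2)   # assumes non-extremality, m^2 > a^2 + q^2
rp = m + S                         # outer horizon radius
D  = rp**2 + a**2

fac = 1 + sp.Rational(3,2)*q**2*B**2 + 2*a*q*m*B**3 + (a**2*m**2 + q**4/16)*B**4
E   = m*fac                        # mass (kmmass)
AH  = 4*sp.pi*fac*D                # horizon area (area)
kap = S/D                          # surface gravity (area)
Q   = q + 2*a*m*B - q**3*B**2/4    # charge (QJ)
J   = (a*m - q**3*B - sp.Rational(3,2)*a*m*q**2*B**2
       - q*B**3*(8*a**2*m**2 + q**4)/4
       - a*m*B**4*(16*a**2*m**2 + 3*q**4)/16)
Om  = a/D - 2*q*B*rp/D             # Omega, truncated at O(B), from (OmPhi)
Phi = q*rp/D + 3*a*q**2*B/D        # Phi, truncated at O(B), from (OmPhi)

pt = {m: 3, a: 1, q: 1}            # generic numeric point (hypothetical values)
residuals = []
for x in (m, a, q):
    R  = (sp.diff(E, x) - kap/(8*sp.pi)*sp.diff(AH, x)
          - Om*sp.diff(J, x) - Phi*sp.diff(Q, x))
    Rn = sp.expand(R.subs(pt))
    residuals += [abs(sp.N(Rn.coeff(B, k))) for k in (0, 1)]
print(max(residuals))   # the orders B^0 and B^1 should both cancel
```

The order-$B^0$ cancellation is just the Kerr-Newman first law, while the order-$B^1$ cancellation fixes the $O(B)$ corrections to $\Omega$ and $\Phi$ quoted in (\ref{OmPhi}).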
\subsection{The case $q=-a m B$}
It was shown in \cite{gimupo} that in general the magnetised Kerr-Newman
black holes have an ergoregion that extends out to infinity close to the
axis of rotation. A special case arises if the charge parameter $q$
of the original Kerr-Newman solution is chosen to satisfy \cite{gimupo}
\begin{equation}
q=-a m B\,,\label{qspec}
\end{equation}
where $m$ and $a$ are the mass and rotation parameters of the
Kerr-Newman metric. Under these circumstances we find that the
conserved charge and angular momentum, given in general by (\ref{QJ}),
simplify considerably, and become
\setlength\arraycolsep{2pt} \begin{eqnarray}
Q &=& a m B\, \sqrt{\fft{\Delta\phi}{2\pi}}=
- q\,\sqrt{\fft{\Delta\phi}{2\pi}}=
- Q_0\, \sqrt{\fft{\Delta\phi}{2\pi}}\,,\nonumber\\
J &=& a m\, \fft{\Delta\phi}{2\pi} = J_0\, \fft{\Delta\phi}{2\pi}\,,
\end{eqnarray}
where $Q_0=q$ and $J_0=am$ are the conserved charge and angular momentum
of the original Kerr-Newman solution.
The period of the azimuthal coordinate is now
\begin{equation}
\fft{\Delta\phi}{2\pi}= (1 + \ft14 a^2 m^2 B^4)^2\,.
\end{equation}
The area of the event horizon, given in general by (\ref{area}),
can now be written as
\begin{equation}
A_H = A^0_H\, \fft{\Delta\phi}{2\pi}\,,
\end{equation}
where $A^0_H$ is the area of the event horizon of the
Kerr-Newman black hole. Of course we still also have, from (\ref{knmmass}),
that the mass is given by
\begin{equation}
E= E_0\, \fft{\Delta\phi}{2\pi}\,,
\end{equation}
where $E_0=m$ is the mass of the Kerr-Newman black hole, while the
surface gravity $\kappa$ is, as always, just equal to its value in
the Kerr-Newman solution (see equation (\ref{area})).
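These simplifications can be checked directly from the general formulae (\ref{Deltaphi}) and (\ref{QJ}). A short \texttt{sympy} sketch (symbols ours):

```python
import sympy as sp

a, m, B = sp.symbols('a m B', positive=True)
q = -a*m*B                         # the special choice q = -a m B

# Delta phi/(2 pi), Q and J from (Deltaphi) and (QJ), with q = -a m B inserted
dphi = 1 + sp.Rational(3,2)*q**2*B**2 + 2*a*q*m*B**3 + (a**2*m**2 + q**4/16)*B**4
Q = q + 2*a*m*B - q**3*B**2/4
J = (a*m - q**3*B - sp.Rational(3,2)*a*m*q**2*B**2
     - q*B**3*(8*a**2*m**2 + q**4)/4
     - a*m*B**4*(16*a**2*m**2 + 3*q**4)/16)

print(sp.factor(dphi))             # factors as a perfect square
print(sp.expand(Q**2 - a**2*m**2*B**2*dphi))  # 0, i.e. Q = a m B sqrt(dphi)
print(sp.expand(J - a*m*dphi))                # 0, i.e. J = a m dphi
```

This confirms the perfect-square structure of $\Delta\phi/(2\pi)$, together with the quoted relations $Q = amB\sqrt{\Delta\phi/(2\pi)}$ and $J = J_0\,\Delta\phi/(2\pi)$.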
\section{Energy Minimisation}
Defining $E_0=m$ and $J_0=a m$ as the mass and the angular momentum of
the Kerr-Newman black hole (i.e. the $B=0$ specialisation), we may eliminate
$q$ between the expressions for $Q$ and $E$ in (\ref{QJ}) and (\ref{kmmass}),
thereby obtaining an equation that determines $E$ in terms of $Q$, $J_0$
and $B$:
\setlength\arraycolsep{2pt} \begin{eqnarray}
&&E^3 - E^2 (17+ 3 B^4 J_0^2) E_0 +
\ft12 E(160-192 B^4 J_0^2 + 6 B^8 J_0^4 + 136 B^3 J_0 Q - 11 B^2 Q^2)E_0^2\nonumber\\
&&-\Big(64 + 48 B^4 J_0^2 + 12 B^8 J_0^4 + B^{12} J_0^6 -128 B^3 J_0 Q -
32 B^7 J_0^3 Q + 68 B^2 Q^2 + 17 B^6 J_0^2 Q^2 \nonumber\\
&&\qquad- 2 B^5 J_0 Q^3 +
\ft1{16} B^4 Q^4\Big) E_0^3=0\,.
\end{eqnarray}
Extremising $E$ with respect to $Q$, while holding $E_0$, $J_0$ and $B$
fixed, then implies
\setlength\arraycolsep{2pt} \begin{eqnarray}
E =\fft{(512 B J_0 + 128 B^5 J_0^3 - 544 Q - 136 B^4 J_0^2 Q +
24B^3 J_0 Q^2 - B^2 Q^3) E_0}{4(11 Q- 68 B J_0)}\,.
\end{eqnarray}
From these equations we can now obtain expressions for $\bar E$ and
$\bar Q$, the values of $E$ and $Q$ at the extremum, as functions of
$E_0$, $J_0$ and $B$. For $Q$, we find that $\bar Q$ is given by the roots
of the factorised polynomial $P(Q)=P_1(Q) P_2^2(Q)$, where
\setlength\arraycolsep{2pt} \begin{eqnarray}
P_1&=& B^2 Q^3 - 12 B^3 J_0 Q^2 + 16(4+B^4 J_0^2)(3Q-4B J_0)\,,\nonumber\\
P_2&=& B^2 Q^3 - 30 B^3 J_0 Q^2 -4(392-75 B^4 J_0^2) Q +
8 B J_0(588-125 B^4 J_0^2)\,.
\end{eqnarray}
Expanding around $B=0$
we find just one real root for $P_1(Q)=0$, giving
\setlength\arraycolsep{2pt} \begin{eqnarray}
\bar Q &=& \ft43 B J_0 + \ft{8}{81} B^5 J_0^3 + \cdots\,,\nonumber\\
\bar E &=& E_0 + \ft13 E_0 B^4 J_0^2 +\cdots\,.
\end{eqnarray}
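The expansions above can be verified symbolically: substituting the truncated series for $\bar Q$ into $P_1$ leaves a residual that first appears two orders higher in $B$, and inserting $\bar Q$ into the extremum formula for $E$ reproduces the quoted $\bar E$. A \texttt{sympy} sketch:

```python
import sympy as sp

B, J0, E0 = sp.symbols('B J0 E0', positive=True)

# Perturbative root of P1 = 0 quoted in the text, truncated at order B^5
Qb = B*J0*(sp.Rational(4, 3) + sp.Rational(8, 81)*B**4*J0**2)

P1 = B**2*Qb**3 - 12*B**3*J0*Qb**2 + 16*(4 + B**4*J0**2)*(3*Qb - 4*B*J0)
print(sp.expand(P1).coeff(B, 5))   # 0: the residual only starts at O(B^9)

# Value of E at the extremum, from the dE/dQ = 0 relation in the text
Eext = ((512*B*J0 + 128*B**5*J0**3 - 544*Qb - 136*B**4*J0**2*Qb
         + 24*B**3*J0*Qb**2 - B**2*Qb**3)*E0/(4*(11*Qb - 68*B*J0)))
Eser = sp.series(Eext, B, 0, 5).removeO()
print(sp.simplify(Eser - E0*(1 + sp.Rational(1, 3)*B**4*J0**2)))  # 0
```

Verifying the higher-order coefficients would require carrying the series for $\bar Q$ to correspondingly higher order in $B$.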
For $P_2(Q)=0$ we find three real roots, with
\setlength\arraycolsep{2pt} \begin{eqnarray}
\bar Q &=& 3 B J_0+\cdots\,,\nonumber\\
\bar E &=& 8 E_0 -\ft34 E_0 B^4 J_0^2 +\cdots
\end{eqnarray}
or
\setlength\arraycolsep{2pt} \begin{eqnarray}
\bar Q &=& \pm \fft{28\sqrt2}{B} +\fft{27}{2}\, B J_0 \mp
\fft{21}{32\sqrt 2}\, B^3 J_0^2 +\cdots\,,\nonumber\\
\bar E &=& -48 E_0 \mp 7\sqrt2 E_0 B^2 J_0 +\fft{15}{8} E_0 B^4 J_0^2
+\cdots\,.
\end{eqnarray}
\section{Further Properties of Uncharged Black Holes}
In this section, we explore various properties of
magnetised Kerr-Newman black holes in the special case where the physical
charge $Q$ vanishes.
\subsection{Angular momentum of uncharged magnetised black holes}
Using the expressions (\ref{QJ}) for the physical charge $Q$ and the
angular momentum $J$ of a magnetised Kerr-Newman black hole, we may express
$J$ in terms of $J_0=am$ and $B$ in the case that $Q$ is required to be zero.
Note that here $J_0$ is the angular momentum of the unmagnetised
Kerr-Newman seed solution. We find that $J$, $J_0$ and $B$ are then related
by
\begin{equation}
B^4 J^3 + B^4 J_0\, (79+ 3 B^4 J_0^2)\, J^2 -
(256 -944 B^4 J_0^2 + 248 B^8 J_0^4 - 3B^{12} J_0^6)\, J +
J_0\, (4+B^4 J_0^2)^4=0\,.\label{xxexp}
\end{equation}
Expanding in powers of $B$, the branch that reduces to $J=J_0$ in the case
that $B$ vanishes gives
\begin{equation}
J= J_0 + 5 B^4\, J_0^3 + 21 B^8\, J_0^5 + 94 B^{12}\, J_0^7 +
454 B^{16}\, J_0^9 +\cdots\,.
\end{equation}
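Writing $J = J_0\, f(u)$ with $u = B^4 J_0^2$, equation (\ref{xxexp}) becomes, after dividing through by $J_0$, the dimensionless cubic $u f^3 + u(79+3u) f^2 - (256 - 944 u + 248 u^2 - 3 u^3) f + (4+u)^4 = 0$, and the series coefficients can be extracted order by order. A \texttt{sympy} sketch:

```python
import sympy as sp

u = sp.symbols('u', positive=True)      # u = B^4 J0^2
c = sp.symbols('c1:5')                  # unknown series coefficients c1..c4
f = 1 + sum(c[k]*u**(k + 1) for k in range(4))   # J = J0 * f(u)

# Dimensionless form of the cubic (xxexp), with J = J0*f and divided by J0
P = sp.expand(u*f**3 + u*(79 + 3*u)*f**2
              - (256 - 944*u + 248*u**2 - 3*u**3)*f + (4 + u)**4)

sol = {}
for k in range(1, 5):                   # c_k appears linearly at order u^k
    eq = P.coeff(u, k).subs(sol)
    sol[c[k - 1]] = sp.solve(eq, c[k - 1])[0]
print([sol[ck] for ck in c])            # [5, 21, 94, 454]
```

The sequential solve works because $c_k$ first enters at order $u^k$, and does so linearly through the $-256\,c_k$ term.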
In order that $J$ remain real the product $B^2\, J_0$ should not
exceed a maximum value, given by
\begin{equation}
B^2\, J_0\big|_{\rm max} = \fft{2}{3\sqrt3}\,.
\end{equation}
This corresponds to
\begin{equation}
J\big|_{\rm max} = \fft{128}{27}\, J_0\big|_{\rm max}
= \fft{256}{81\sqrt3\, B^2}\,.
\end{equation}
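The bound arises because the relevant branch of (\ref{xxexp}) merges with a second root at the critical point. In the dimensionless variables $J=J_0\,f(u)$, $u=B^4 J_0^2$, one can check that at $u=4/27$, i.e.\ $B^2 J_0 = 2/(3\sqrt3)$, the value $f=128/27$ is a double root of the resulting cubic. A \texttt{sympy} sketch:

```python
import sympy as sp

f, u = sp.symbols('f u')
# Dimensionless form of (xxexp): J = J0*f with u = B^4 J0^2
P = (u*f**3 + u*(79 + 3*u)*f**2
     - (256 - 944*u + 248*u**2 - 3*u**3)*f + (4 + u)**4)

vals = {u: sp.Rational(4, 27), f: sp.Rational(128, 27)}
# Both P and dP/df vanish there, so f = 128/27 is a double root
print(P.subs(vals), sp.diff(P, f).subs(vals))  # 0 0
```

Beyond this value of $u$ the two real roots of this branch disappear, which is why $J$ ceases to be real.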
\subsection{Meissner effect for extremal black holes}
The electromagnetic field in the magnetised Kerr-Newman solution takes the
form $A=\bar A +\chi\, (d\phi +2 \bar{{\cal A}} dt)$, as in (\ref{kkred}),
where the various
quantities may be found in appendix B of \cite{gimupo}. The magnetic flux
threading the upper hemisphere $S^2_+$ of the horizon is given by
\begin{equation}
{{\cal F}}_H = \fft1{4\pi}\, \int_{S^2_+} F = \fft{\Delta\phi}{4\pi}\,
\Big[\chi\Big]_{\theta=\ft12\pi}^{\theta=\pi}\,.\label{flux}
\end{equation}
Consider the case where the physical charge $Q$ on the black hole vanishes.
From (\ref{QJ}), this is achieved if the magnetic field is given by
$B=B_\pm$ where
\begin{equation}
B_\pm = \fft{2}{q^3}\, \Big[2 a m \pm\sqrt{4 a^2 m^2 + q^4}\Big]\,.
\label{zerocharge}
\end{equation}
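The two values $B_\pm$ are simply the roots of the quadratic (in $B$) condition $Q=0$ obtained from (\ref{QJ}), as a short \texttt{sympy} sketch confirms:

```python
import sympy as sp

a, m, q, B = sp.symbols('a m q B', positive=True)
Q = q + 2*a*m*B - sp.Rational(1, 4)*q**3*B**2   # physical charge from (QJ)

# The two candidate zero-charge magnetic fields B_plus and B_minus
root = sp.sqrt(4*a**2*m**2 + q**4)
vals = [sp.simplify(Q.subs(B, 2*(2*a*m + s*root)/q**3)) for s in (1, -1)]
print(vals)  # [0, 0]
```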
Suppose furthermore that the black hole is extremal,
which means that the inner and outer horizons at $r=r_\pm$ coincide at
\begin{equation}
r_\pm= m\,,\qquad m=\sqrt{q^2+a^2}\,.\label{extremal}
\end{equation}
Inserting the zero-charge condition (\ref{zerocharge})
and the extremality condition (\ref{extremal}) into the expression for
$\chi$ given in \cite{gimupo}, we find that $\chi$ is constant on the
horizon, and it is given by
\begin{equation}
\chi\big|_H = \pm \fft{q^3}{2(q^2+2a^2)}\,.
\end{equation}
Evidently therefore, from (\ref{flux}), it follows that the magnetic
flux threading the upper hemisphere of the horizon is zero.
This is consistent
with much previous work (see \cite{Bicak} for a review).
\section{Angular Momentum in STU Supergravity}\label{stusec}
In this section, we extend our earlier discussion of the conserved
angular momentum in Einstein-Maxwell theory to the case of the
four-dimensional STU model, which is ${\cal N}=2$ supergravity coupled
to three vector multiplets. For our purposes, it suffices to focus
just on the bosonic sector of the theory. The bosonic Lagrangian,
in the notation of \cite{cclp}, is
\setlength\arraycolsep{2pt} \begin{eqnarray}
{\cal L}_4 &=& R\, {*\rlap 1\mkern4mu{\rm l}} - \ft12 {*d\varphi_i}\wedge d\varphi_i
- \ft12 e^{2\varphi_i}\, {*d\chi_i}\wedge d\chi_i - \ft12 e^{-\varphi_1}\,
\Big( e^{\varphi_2-\varphi_3}\, {* F_{{\sst{(2)}} 1}}\wedge F_{{\sst{(2)}} 1}\nonumber\\
&& + e^{\varphi_2+\varphi_3}\, {* F_{{\sst{(2)}} 2}}\wedge F_{{\sst{(2)}} 2}
+ e^{-\varphi_2 + \varphi_3}\, {* {{\cal F}}_{\sst{(2)}}^1 }\wedge {{\cal F}}_{\sst{(2)}}^1 +
e^{-\varphi_2 -\varphi_3}\, {*{{\cal F}}_{\sst{(2)}}^2}\wedge {{\cal F}}_{\sst{(2)}}^2\Big)\nonumber\\
&& + \chi_1\, (F_{{\sst{(2)}} 1}\wedge {{\cal F}}_{\sst{(2)}}^1 +
F_{{\sst{(2)}} 2}\wedge {{\cal F}}_{\sst{(2)}}^2)\,,
\label{d4lag}
\end{eqnarray}
where the index $i$ labelling the dilatons $\varphi_i$ and axions $\chi_i$
ranges over $1\le i \le 3$. The four field strengths can be written in
terms of potentials as
\setlength\arraycolsep{2pt} \begin{eqnarray}
F_{{\sst{(2)}} 1} &=& d A_{{\sst{(1)}} 1} - \chi_2\, d{{\cal A}}_{\sst{(1)}}^2\,,\nonumber\\
F_{{\sst{(2)}} 2} &=& d A_{{\sst{(1)}} 2} + \chi_2\, d{{\cal A}}_{\sst{(1)}}^1 -
\chi_3\, d A_{{\sst{(1)}} 1} +
\chi_2\, \chi_3\, d{{\cal A}}_{\sst{(1)}}^2\,,\nonumber\\
{{\cal F}}_{\sst{(2)}}^1 &=& d{{\cal A}}_{\sst{(1)}}^1 + \chi_3\, d {{\cal A}}_{\sst{(1)}}^2\,,\nonumber\\
{{\cal F}}_{\sst{(2)}}^2 &=& d{{\cal A}}_{\sst{(1)}}^2\,.
\end{eqnarray}
\subsection{Derivation of the conserved angular momentum}
The conserved charge associated with a diffeomorphism $\xi$ can be calculated
using the standard Wald procedure. Thus, we first calculate
$\delta{\cal L}(\Phi) =
d\Theta + \hbox{e.o.m. terms}$, where all the fields $\Phi$
are varied using the Lie derivatives $\delta\Phi ={\mathfrak L}_\xi\, \Phi$.
For example,
for the metric we have $\delta g_{\mu\nu}= \nabla_\mu\xi_\nu +
\nabla_\nu\xi_\mu$, and for gauge potentials
$\delta A_\mu =\xi^\nu \nabla_\nu A_\mu + A_\nu\nabla_\mu\xi^\nu$.
In the standard way, we then define
\begin{equation}
{\cal J}= \Theta -i_\xi{\cal L}\,,
\end{equation}
where $i_\xi$ denotes the contraction of the vector
$\xi$ with the Lagrangian 4-form ${\cal L}$, as
defined in footnote \ref{idef}.
It follows that $d{\cal J}=0$ and hence we can write
\begin{equation}
{\cal J}= -d{\cal P}\,.
\end{equation}
After considerable algebra, we find that for the Lagrangian (\ref{d4lag})
we shall have
\begin{equation}
{\cal P} = {\cal P}_{\rm Ein} + {\cal P}_{\rm Kin} + {\cal P}_{\rm CS}\,,
\label{Qsum}
\end{equation}
where
\setlength\arraycolsep{2pt} \begin{eqnarray}
{\cal P}_{\rm Ein} &=& {*d\xi}\,,\nonumber\\
{\cal P}_{\rm Kin} &=&
e^{-\varphi_1+\varphi_2-\varphi_3}\, {*F}_{{\sst{(2)}} 1} \,
\xi^\mu(A_{\mu\, 1} -\chi_2\, {{\cal A}}_\mu^2)\nonumber\\
&&
+ e^{-\varphi_1+\varphi_2+\varphi_3}\, {*F}_{{\sst{(2)}} 2} \,
\xi^\mu(A_{\mu 2} + \chi_2\, {{\cal A}}_\mu^1 -\chi_3\, A_{\mu 1} +
\chi_2\, \chi_3\, {{\cal A}}_\mu^2)\nonumber\\
&&
+e^{-\varphi_1-\varphi_2+\varphi_3}\, {*{{\cal F}}}_{{\sst{(2)}}}^1 \,
\xi^\mu({{\cal A}}_\mu^1 + \chi_3\, {{\cal A}}_\mu^2)
+e^{-\varphi_1-\varphi_2-\varphi_3}\, {*{{\cal F}}}_{{\sst{(2)}}}^2 \,
\xi^\mu\, {{\cal A}}_\mu^2
\,,\nonumber\\
{\cal P}_{\rm CS} &=& -\chi_1\, [(\xi^\mu \,A_{\mu 1}) d{{\cal A}}_{\sst{(1)}}^1 +
(\xi^\mu\, {{\cal A}}_\mu^1) dA_{{\sst{(1)}} 1} +
(\xi^\mu \,A_{\mu 2}) d{{\cal A}}_{\sst{(1)}}^2 +
(\xi^\mu\, {{\cal A}}_\mu^2) dA_{{\sst{(1)}} 2}]\,.
\end{eqnarray}
Here ${\cal P}_{\rm Ein}$ is the contribution from the Einstein-Hilbert
term in (\ref{d4lag}), ${\cal P}_{\rm Kin}$ is the contribution from the
four kinetic terms for the gauge field strengths, and ${\cal P}_{\rm CS}$ is
the contribution from the Chern-Simons terms.
We now make the spacelike dimensional reduction
\begin{equation}
ds_4^2 = e^{-\varphi_4}\, d\bar s_3^2 + e^{\varphi_4}\,
(d\phi + \bar{{\cal B}}_{\sst{(1)}})^2
\,,\label{metans}
\end{equation}
and
\setlength\arraycolsep{2pt} \begin{eqnarray}
A_{{\sst{(1)}} 1} &=& \bar A_{{\sst{(1)}} 1} + \sigma_1\, (d\phi + \bar{{\cal B}}_{\sst{(1)}})\,,\qquad
A_{{\sst{(1)}} 2} = \bar A_{{\sst{(1)}} 2} + \sigma_2\, (d\phi + \bar{{\cal B}}_{\sst{(1)}})\,,\nonumber\\
{{\cal A}}_{\sst{(1)}}^1 &=& \bar{{\cal A}}_{\sst{(1)}}^1 + \sigma_3\, (d\phi+\bar{{\cal B}}_{\sst{(1)}})\,,\qquad
{{\cal A}}_{\sst{(1)}}^2 = \bar{{\cal A}}_{\sst{(1)}}^2 + \sigma_4\, (d\phi+\bar{{\cal B}}_{\sst{(1)}})\,,\label{kkAred}
\end{eqnarray}
otherwise following the notation of the timelike reduction described
in \cite{cclp}. In particular, in three dimensions the four reduced
field strengths
and the Kaluza-Klein field strength $\bar {{\cal G}}_{\sst{(2)}}=d\bar {{\cal B}}_{\sst{(1)}}$
are re-expressed in terms of
scalar fields, by means of dualisations \cite{cclp}:
\setlength\arraycolsep{2pt} \begin{eqnarray}
- e^{-\varphi_1 + \varphi_2 - \varphi_3 + \varphi_4}\, {\bar * \bar F_{{\sst{(2)}} 1}}
&=& d\psi_1 + \chi_3\, d\psi_2 - \chi_1\, d\sigma_3 - \chi_1\,
\chi_3\, d\sigma_4\,,\nonumber\\
- e^{-\varphi_1 + \varphi_2 + \varphi_3 + \varphi_4}\, {\bar * \bar F_{{\sst{(2)}} 2}}
&=& d\psi_2 -\chi_1\, d\sigma_4\,,\nonumber\\
- e^{-\varphi_1 - \varphi_2 + \varphi_3 + \varphi_4}\, {\bar * \bar {{\cal F}}_{{\sst{(2)}}}^1}
&=& d\psi_3 - \chi_2\, d\psi_2 - \chi_1\, d\sigma_1 + \chi_1\,
\chi_2\, d\sigma_4\,,\nonumber\\
- e^{-\varphi_1 - \varphi_2 - \varphi_3 + \varphi_4}\, {\bar * \bar{{\cal F}}_{{\sst{(2)}}}^2}
&=& d\psi_4 + \chi_2\, d\psi_1 - \chi_3\, d\psi_3 -
\chi_1\,d\sigma_2 + \chi_2\, \chi_3\, d\psi_2 \nonumber\\
&&- \chi_1\, \chi_2\, d\sigma_3 +
\chi_1\, \chi_3\, d\sigma_1 - \chi_1\, \chi_2\, \chi_3\, d\sigma_4\,,\nonumber\\
e^{2\varphi_4}\, {\bar *\bar {{\cal G}}_{\sst{(2)}}} &=& d\chi_4 + \sigma_1\, d\psi_1 +
\sigma_2\, d\psi_2 +\sigma_3\, d\psi_3 + \sigma_4\, d\psi_4\,.
\label{dualfields}
\end{eqnarray}
We then find after some algebra that with $\xi={\partial}/{\partial}\phi$
we shall have
\setlength\arraycolsep{2pt} \begin{eqnarray}
{\cal P}_{\rm Ein} &=& (d\chi_4 + \sigma_1\, d\psi_1 +
\sigma_2\, d\psi_2 + \sigma_3\, d\psi_3 + \sigma_4\, d\psi_4)\wedge d\phi
+ \cdots\nonumber\\
{\cal P}_{\rm Kin} &=& [-\sigma_1\, d\psi_1 -
\sigma_2\, d\psi_2 - \sigma_3\, d\psi_3 - \sigma_4\, d\psi_4+
\chi_1\, d(\sigma_1\, \sigma_3 + \sigma_2\, \sigma_4)]\wedge d\phi + \cdots
\nonumber\,,\\
{\cal P}_{\rm CS} &=& -\chi_1\, d(\sigma_1\, \sigma_3 + \sigma_2\, \sigma_4)
\wedge d\phi + \cdots\,,
\end{eqnarray}
where the ellipses denote terms that have vanishing pullback to
the 2-sphere.
Thus, from (\ref{Qsum}) we conclude that ${\cal P}= d\chi_4\wedge d\phi
+\cdots$, and so the conserved charge associated
with the Killing vector $\xi=(\Delta\phi/(2\pi))\, {\partial}/{\partial}\phi$ is given by
\begin{equation}
J= \fft{1}{16\pi} \int_{S^2} {\cal P} = \fft{(\Delta\phi)^2}{32\pi^2}\,
\int d\chi_4= \fft{(\Delta\phi)^2}{32\pi^2}\,
\Big[\chi_4\Big]_{\theta=0}^{\theta=\pi}\,.\label{Jexp}
\end{equation}
(The $(\Delta\phi/(2\pi))$ factor in the choice of the Killing
vector takes account of the fact that angular momentum should be defined
with respect to a canonically-normalised azimuthal angle having
period $2\pi$.)
This result is the analogue of the expression we derived in (\ref{angmom})
for the angular momentum for the Einstein-Maxwell black holes. It
also has the same feature as in that case, of not being invariant under
gauge transformations of the electromagnetic potentials. Specifically,
we have four abelian $U(1)$ gauge symmetries in the STU model, under which
the potentials transform as
\begin{equation}
A_{\sst{(1)}}^{[i]} \longrightarrow A_{\sst{(1)}}^{[i]}{'} =
A_{\sst{(1)}}^{[i]} + d\lambda_i\,,\label{gaugetransi}
\end{equation}
where $A_{\sst{(1)}}^{[i]}$ for $i=1$, 2, 3 and 4 denotes
$(A_{{\sst{(1)}} 1}, A_{{\sst{(1)}} 2}, {{\cal A}}_{\sst{(1)}}^1, {{\cal A}}_{\sst{(1)}}^2)$ respectively. The subset of
gauge transformations where
\begin{equation}
\lambda_i=\bar\lambda_i + c_i\, \phi\,,
\end{equation}
with $\bar\lambda_i$ being independent of $\phi$ and $c_i$ being constants,
preserve the form of the Kaluza-Klein reductions (\ref{kkAred}). For these
gauge transformations, the three-dimensional gauge potentials and the
$\sigma_i$ fields therefore transform as
\begin{equation}
\bar A_{\sst{(1)}}^{[i]}{'} = \bar A_{\sst{(1)}}^{[i]} - c_i\, \bar{{\cal B}}_{\sst{(1)}} + d\bar\lambda_i\,,
\qquad \sigma_i'=\sigma_i + c_i\,.\label{sigitrans}
\end{equation}
From these, it follows that the quantities $d\bar A_{\sst{(1)}}^{[i]} + \sigma_i\,
d\bar{{\cal B}}_{\sst{(1)}}$, from which the three-dimensional field strengths
$\bar F_{\sst{(2)}}^{[i]}$ are constructed, and hence the three-dimensional field
strengths themselves, are inert under the gauge transformations. This in
turn implies that the scalar fields $\psi_i$ are inert,
\begin{equation}
\psi_i'=\psi_i\,.\label{psiitrans}
\end{equation}
Finally, since $\bar{{\cal G}}_{\sst{(2)}}$ is inert, it follows from (\ref{sigitrans}) and
(\ref{psiitrans}) that $\chi_4$ transforms as
\begin{equation}
\chi_4' = \chi_4 - \sum_i c_i\, \psi_i\,.\label{chi4trans}
\end{equation}
Thus we see that under the gauge transformations (\ref{gaugetransi}),
the angular momentum $J$ defined in (\ref{Jexp}) transforms as
\begin{equation}
J' = J -\fft{\Delta\phi}{8\pi}\, \sum_i c_i \, Q_i\,,
\end{equation}
where
\begin{equation}
Q_i = \fft{\Delta\phi}{4\pi}\, \Big[\psi_i\Big]_{\theta=0}^{\theta=\pi}
\end{equation}
are the electric charges carried by the four field strengths.
As in the Einstein-Maxwell case discussed earlier, we can derive a
different expression for the angular momentum, which {\it is} gauge invariant,
by performing dualisations of all the four gauge fields. This is
easily done in the three-dimensional description, where it amounts to
sending $\sigma_i$, $\psi_i$ and $\chi_4$ to tilded quantities, defined by
\begin{equation}
\tilde\sigma_i=\psi_i\,,\qquad \tilde \psi_i=\sigma_i\,,\qquad
\widetilde\chi_4 = \chi_4 + \sum_i \sigma_i\, \psi_i\,.\label{tilded}
\end{equation}
Repeating the calculation of the angular momentum for the dualised theory
will give
\begin{equation}
\widetilde J= \fft{(\Delta\phi)^2}{32\pi^2}\,
\Big[\widetilde\chi_4\Big]_{\theta=0}^{\theta=\pi}\,.\label{JJexp}
\end{equation}
Using our results above for the gauge transformations of $\sigma_i$,
$\psi_i$ and $\chi_4$, it is easily seen that $\widetilde\chi_4$, and hence
$\widetilde J$, is gauge invariant. We can then argue, in a manner analogous
to our argument in the Einstein-Maxwell case, that (\ref{JJexp}) would be
the appropriate expression to use if all four of the charges carried by the
gauge fields were electric.
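The gauge invariance of $\widetilde\chi_4$ is a one-line consequence of (\ref{sigitrans}), (\ref{psiitrans}) and (\ref{chi4trans}); as an illustration (not part of the derivation), the algebra can be checked symbolically:

```python
import sympy as sp

c = sp.symbols('c1:5')        # gauge parameters c_i
sig = sp.symbols('sigma1:5')  # scalars sigma_i
psi = sp.symbols('psi1:5')    # scalars psi_i (gauge inert)
chi4 = sp.symbols('chi4')

# Transformation rules: sigma_i -> sigma_i + c_i, chi_4 -> chi_4 - sum c_i psi_i
sig_p = [s + ci for s, ci in zip(sig, c)]
chi4_p = chi4 - sum(ci * p for ci, p in zip(c, psi))

# The dualised axion, before and after the gauge transformation
tchi4 = chi4 + sum(s * p for s, p in zip(sig, psi))
tchi4_p = chi4_p + sum(s * p for s, p in zip(sig_p, psi))

invariance = sp.expand(tchi4_p - tchi4)   # vanishes identically
```

The shift $-\sum_i c_i\psi_i$ of $\chi_4$ is cancelled exactly by the shift of $\sum_i\sigma_i\psi_i$.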
\subsection{Angular momentum for the magnetised static STU-model black holes}
In the case of the four-charge black holes in the STU model discussed
in \cite{cclp}, and with external fields in \cite{cvgiposa},
the field strengths numbered 1 and 3 carry
magnetic charges, whilst those numbered 2 and 4 carry electric charges.
It follows, therefore, that rather than using (\ref{JJexp}) directly
in order to calculate the angular momentum, we should first ``undualise''
the contributions associated with fields 1 and 3, meaning that
$\widetilde\chi_4$ in (\ref{JJexp}), which was defined in (\ref{tilded}),
should be replaced by
$\widetilde\chi_4 - \sigma_1\, \psi_1-\sigma_3\, \psi_3$.
Thus the proposal for the angular momentum in this case is now
\begin{equation}
J_e= \fft{(\Delta\phi)^2}{32\pi^2}\,
\Big[\chi_4 + \sigma_2\, \psi_2 +
\sigma_4\, \psi_4\Big]_{\theta=0}^{\theta=\pi}\,.\label{JJ4che}
\end{equation}
Substituting the expressions for the scalar fields obtained in \cite{cvgiposa},
we obtain the result
\begin{equation}
J_e =\ft14 \Pi_q\, \sum_{i=1}^4 \fft{B_i}{q_i} +
\ft1{16} \Pi_B\, \Pi_q\, \sum_{i=1}^4 \fft{q_i}{B_i}\,,
\label{JJtrue}
\end{equation}
for the angular momentum of the magnetised 4-charge
black holes
immersed in the background of the four external fields $B_i$, where $\Pi_B=
\prod_i B_i$ and $\Pi_q=\prod_i q_i$. Here $q_i$ are the four charges of
the original static black hole solutions, prior to the magnetisation.
This expression reduces, as it should,
to $\widetilde J$ given in (\ref{QJ})
if we set all four charge parameters $q_i$ equal and set $a=0$ in (\ref{QJ}).
An alternative way to calculate the angular momentum is to use the
four-charge analogue of the expression (\ref{angmom}) that we considered
in the Einstein-Maxwell case. In the present context, this
amounts to starting from the expression (\ref{Jexp}), and then dualising
the contributions from fields 1 and 3 to take account of the fact that
they actually carry magnetic charges. This gives
\begin{equation}
J_m= \fft{(\Delta\phi)^2}{32\pi^2}\,
\Big[\chi_4 + \sigma_1\, \psi_1 +
\sigma_3\, \psi_3\Big]_{\theta=0}^{\theta=\pi}\,.\label{JJ4chm}
\end{equation}
Since (\ref{Jexp}) is gauge
dependent, it is then necessary to perform gauge transformations to
ensure that the four functions $(\sigma_1,\psi_2,\sigma_3,\psi_4)$
vanish on the axis at $\theta=0$ and $\theta=\pi$. After doing this, we
obtain a result that agrees with the gauge-invariant one given in
(\ref{JJtrue}).
\section{Conclusions}
In this paper we have obtained expressions for the energy and angular
momentum of magnetised Kerr-Newman black holes. We showed how
these quantities can be conveniently calculated by making a Kaluza-Klein
reduction of the four-dimensional Einstein-Maxwell theory, and the black
hole solutions, on the azimuthal coordinate $\phi$. Using these
expressions, we have
verified the first law of thermodynamics and the associated Smarr
formulae for rotating black holes immersed in an external magnetic
field.
We also extended the first law to include variations of the magnetic
field, and hence we obtained the induced magnetic moment.
In an attempt to make contact with some early work of Wald
in which the magnetic field was treated at the test level, ignoring
back-reaction, we have calculated the electric charge that minimises
the energy, holding the initial energy and angular momentum fixed and
at constant magnetic field. Our results resemble qualitatively those of
Wald but differ quantitatively. Finally we extended our calculation of
the angular momentum to the case of the STU model of four-dimensional
${\cal N}=2$
supergravity coupled to three vector multiplets, in preparation for a
future paper \cite{cvgiposa} on that subject.
\section*{Acknowledgements}
We are grateful to Mirjam Cveti\v c for useful conversations.
The research of C.N.P. is supported in part by
DOE grant DE-FG03-95ER40917.
\section{Introduction}
\label{sec:intro}
Lattice QCD calculations can now achieve a very high level of accuracy for
ground-state meson masses. For example, a recent calculation of the
mass splitting between the $J/\psi$ and $\eta_c$ achieved an accuracy
of 1 MeV~\cite{Hatton:2020qhk}. This precision requires that QED effects
arising from the electric charge of the quarks be included in the
calculation and this is now being widely
done, with a variety of
approaches~\cite{Borsanyi:2014jba, Giusti:2017dmp, Boyle:2017gzv, Basak:2018yzz, Kordov:2019oer, Hatton:2020qhk}.
The QED effects arise almost entirely from the electric charge
of the valence quarks.
To $\mathcal{O}(\alpha_{\mathrm{QED}})$
we then expect the impact of QED on a meson made of quark $q_1$ and
antiquark $\overline{q}_2$ to take the form
\begin{equation}
\label{eq:QED-mass-shift}
\Delta M_{q_1\overline{q}_2} = Ae_{q_1}e_{\overline{q}_2} + Be_{q_2}^2 + Ce_{q_1}^2 ,
\end{equation}
ignoring the much smaller effects from the electric charge of the
sea quarks (suppressed by powers of $\alpha_s$ and sea quark mass
effects).
Here $e_{q_1}$ and $e_{q_2}$ are the electric charges of quarks $q_1$
and $q_2$ in units of $e$, the magnitude of the charge on the electron.
The last two terms are dominated by `self-energy' shifts
in the valence quark masses.
These are unphysical because they
amount purely to a renormalisation of the quark mass by QED.
The first term, with coefficient $A$, is physical, however.
It is dominated, for nonrelativistic quarks, by the Coulomb interaction between
the valence quark and antiquark in the meson.
This effect depends on the average separation of the quarks
and so provides a measure for the size of the meson.
Its accurate determination requires a calculation
that fully controls the QCD effects that bind the quark and antiquark
into the meson, i.e. the use of lattice QCD.
We will use lattice QCD calculations to which
we also add the effect of QED on the valence quarks in
an approach known as `quenched QED'~\cite{Duncan:1996xy}.
This is simply achieved by generating a random photon field
in momentum-space and then packaging the field in position space
into a compact U(1) variable that can be multiplied into the gluon
field as the Dirac equation is solved for each quark propagator.
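A minimal numerical sketch of such a quenched-QED photon field is given below. It is illustrative only: it assumes Feynman gauge and a non-compact photon action, removes only the global zero mode for brevity (production calculations such as~\cite{Hatton:2020qhk} use a prescription like QED$_L$ that removes all spatial zero modes), and uses a toy lattice; the function name and interface are invented for the example.

```python
import numpy as np

def quenched_photon_links(L, T, charge, rng):
    """Draw one photon field in momentum space and exponentiate it into
    compact U(1) link phases (sketch: Feynman gauge, global zero mode only)."""
    dims = (L, L, L, T)
    grids = np.meshgrid(*[np.arange(d) for d in dims], indexing='ij')
    # lattice momentum: \hat{k}^2 = sum_mu 4 sin^2(pi n_mu / L_mu)
    khat2 = sum((2 * np.sin(np.pi * n / d))**2 for n, d in zip(grids, dims))
    khat2[0, 0, 0, 0] = np.inf          # drop the photon zero mode
    links = []
    for mu in range(4):
        noise = rng.normal(size=dims)               # unit-variance white noise
        Ak = np.fft.fftn(noise) / np.sqrt(khat2)    # colour by the propagator
        A = np.fft.ifftn(Ak).real                   # real photon field A_mu(x)
        links.append(np.exp(1j * charge * A))       # compact U(1) variable
    return np.array(links)

# toy example: a 4^3 x 8 lattice and a quark of charge +2/3
U = quenched_photon_links(4, 8, 2 / 3, np.random.default_rng(1))
```

The resulting pure-phase links can then be multiplied into the gluon links site by site when the Dirac equation is solved.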
Since QCD is responsible for binding the quark and antiquark into
the meson and the effect of QED is simply a perturbation to
the meson mass, then
the QED interaction term $Ae_{q_1}e_{\overline{q}_2}$
in Eq.~(\ref{eq:QED-mass-shift})
can be isolated by comparing
results from lattice calculations in which we flip the sign
of the electric charge for one of the quarks.
We have
\begin{equation}
\label{eq:coulomb-isolate}
M(e_{q_1},e_{\overline{q}_2})-M(-e_{q_1},e_{\overline{q}_2}) = 2Ae_{q_1}e_{\overline{q}_2} .
\end{equation}
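The cancellation of the self-energy terms under the sign flip is elementary, but can be made explicit symbolically (the coefficients are renamed `Bc`, `Cc` here only to avoid clashing with the symbol `B`):

```python
import sympy as sp

A, Bc, Cc, e1, e2 = sp.symbols('A B_c C_c e1 e2')

# Eq. (1): QED mass shift for a meson made of quark q1 and antiquark q2
dM = lambda q1, q2: A * q1 * q2 + Bc * q2**2 + Cc * q1**2

# Flipping the sign of e_{q1} leaves the quadratic self-energy terms untouched
diff = sp.expand(dM(e1, e2) - dM(-e1, e2))
```

Only the interaction term survives, `diff = 2*A*e1*e2`, which is the content of Eq.~(\ref{eq:coulomb-isolate}).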
We focus here on studying this QED effect
for relatively heavy mesons ($\eta_c$, $J/\psi$ and $D_s$) to
test our understanding of the impact of QED.
The reason for this is that
the internal structure of these mesons is reasonably well understood and
in the past we have made use of estimates of the Coulomb interaction
effects to assess the impact of QED on these meson
masses~\cite{Davies:2009tsa, Davies:2010ip}.
In Section~\ref{sec:lattice} we describe our lattice calculation
and the results and in Section~\ref{sec:discussion} we compare
to these earlier estimates from phenomenological models.
Section~\ref{sec:conclusions} gives
our conclusions.
\section{Lattice QCD calculation} \label{sec:lattice}
\begin{table*}
\centering
\caption{The parameters of the ensembles used in our calculation,
numbered in column 1. Column 2 gives the QCD gauge coupling and
column 3 the lattice spacing in units of the Wilson flow parameter,
$w_0$~\cite{Borsanyi:2012zs}. The lattice spacing in fm is then given
by using $w_0=$ 0.1715 fm, fixed from $f_{\pi}$~\cite{fkpi}. $L_s$
and $L_t$ are the lattice spatial and temporal extents in lattice units.
Columns 6 and 7 give the sea light quark masses in lattice units, with
the sea $u$ and $d$ quark masses taken to be the same and denoted $l$.
Column 8 gives the valence $s$ quark mass used in the $D_s$ mesons.
Columns 9 and 10 give the sea and valence $c$ quark masses in lattice
units, respectively. Not all sets are used for all calculations; * indicates
that the set was used for charmonium, $\dag$ that the set was used for
$D_s$ and $\ddag$ that the set was used for valence masses of $2m_c$.
Column 11 gives the corresponding number of configurations used from the set.
}
\label{tab:ensembles}
\begin{tabular}{lllllllllll}
\hline \hline
Set & $\beta$ & $w_0/a$ & $L_s$ & $L_t$ & $am_l^{\mathrm{sea}}$ & $am_s^{\mathrm{sea}}$ & $am_s^{\mathrm{val}}$ & $am_c^{\mathrm{sea}}$ & $am_c^{\mathrm{val}}$ & $N_{\mathrm{cfgs}}$ \\
\hline
1$*$ & 5.80 & 1.1272(7) & 24 & 48 & 0.0064 & 0.064 & - & 0.828 & 0.873 & 340 \\
2$\dagger$ & 5.80 & 1.1367(5) & 32 & 48 & 0.00235 & 0.0647 & 0.0677 & 0.831 & 0.863 & 100 \\
3$*\dagger$ & 6.00 & 1.4029(9) & 32 & 64 & 0.00507 & 0.0507 & 0.0533 & 0.628 & 0.650 & 220$*$ / 140$\dagger$ \\
4$*$ & 6.30 & 1.9330(20) & 48 & 96 & 0.00363 & 0.0363 & - & 0.430 & 0.439 & 371 \\
5$\dagger \ddagger$ & 6.30 & 1.9518(7) & 64 & 96 & 0.00120 & 0.0363 & 0.036 & 0.432 & 0.433 & 87$\dagger$ / 184$\ddagger$ \\
6$*\dagger \ddagger$ & 6.72 & 2.8960(60) & 48 & 144 & 0.0048 & 0.024 & 0.0234 & 0.286 & 0.274 & 133$*$ / 87$\dagger$ / 199$\ddagger$ \\
\hline \hline
\end{tabular}
\end{table*}
\begin{table}
\centering
\caption{Results for the charmonium case.
Column 2 gives the ground-state $\eta_c$ (upper rows) and $J/\psi$ (lower rows)
meson masses in lattice units in the pure QCD case for the gluon
field configuration sets given in column 1. Column 3 gives the ratio
of the mass difference for the physical and unphysical QED scenarios
(see Eq.~(\ref{eq:QCDQEDrat})) to the pure QCD mass. Column 4 gives the finite-volume
correction needed on that gluon configuration set for the unphysical
QED scenario (Eq.~(\ref{eq:fvolshift}) for meson charge 4$e$/3).
The uncertainty in $\Delta_{\mathrm{FV}}$ comes mainly from the
uncertainty in the lattice spacing and does not include the systematic
error from missing higher orders in $1/L_s$ (see text).
Finally column 5 gives the extracted coefficient, $A_{\eta_c}$
or $A_{J/\psi}$ (Eq.~(\ref{eq:calcA})).
}
\label{tab:charm-results}
\begin{tabular}{lllll}
\hline \hline
Set & $aM^{\text{QCD}}_{\eta_c}$ & $R_{\eta_c}$ & $\Delta_{\mathrm{FV}}$ [MeV] & $A_{\eta_c}$ [MeV] \\
\hline
1 & 2.305364(39) & -0.002080(39) & -1.0308(54) & 8.16(14) \\
3 & 1.848041(35) & -0.001806(25) & -0.9600(51) & 7.139(91) \\
4 & 1.342455(21) & -0.0017726(58) & -0.8795(47) & 6.944(42) \\
6 & 0.896675(24) & -0.001641(21) & -1.3373(75) & 7.020(80) \\
\hline
Set & $aM^{\text{QCD}}_{J/\psi}$ & $R_{J/\psi}$ & $\Delta_{\mathrm{FV}}$ [MeV] & $A_{J/\psi}$ [MeV] \\
\hline
1 & 2.39308(14) & -0.001342(23) & -1.0295(54) & 5.844(85) \\
3 & 1.914749(67) & -0.001144(12) & -0.9589(51) & 5.057(77) \\
4 & 1.391390(43) & -0.001063(17) & -0.8785(47) & 4.688(63) \\
6 & 0.929860(54) & -0.000883(25) & -1.3352(75) & 4.580(90) \\
\hline \hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Results, as in Table~\ref{tab:charm-results},
but now for heavyonium mesons using
quarks with mass $2m_c$.
Column 2 gives the ground-state `$\eta_{2c}$' (upper rows) and `$\psi_{2c}$' (lower rows)
meson masses in lattice units in the pure QCD case for the gluon
field configuration sets given in column 1. Column 3 gives the ratio
of the mass difference for the physical and unphysical QED scenarios
to the pure QCD mass. Column 4 gives the finite-volume
correction needed on that gluon configuration set for the unphysical
QED (Q=$4e/3$) scenario. Finally column 5 gives the extracted coefficient, $A_{\eta_{2c}}$
or $A_{\psi_{2c}}$.
}
\label{tab:heavy-results}
\begin{tabular}{lllll}
\hline \hline
Set & $aM^{\text{QCD}}_{\eta_{2c}}$ & $R_{\eta_{2c}}$ & $\Delta_{\mathrm{FV}}$ [MeV] & $A_{\eta_{2c}}$ [MeV] \\
\hline
5$\ddagger$ & 2.185464(53) & -0.001527(22) & -0.6552(34) & 9.17(13) \\
6$\ddagger$ & 1.487111(36) & -0.0012878(80) & -1.3137(74) & 8.657(66) \\
\hline
Set & $aM^{\text{QCD}}_{\psi_{2c}}$ & $R_{\psi_{2c}}$ & $\Delta_{\mathrm{FV}}$ [MeV] & $A_{\psi_{2c}}$ [MeV] \\
\hline
5$\ddagger$ & 2.221922(47) & -0.001078(12) & -0.6550(34) & 6.789(76) \\
6$\ddagger$ & 1.509707(53) & -0.000886(11) & -1.3132(74) & 6.491(72) \\
\hline \hline
\end{tabular}
\end{table}
We work on $n_f=2+1+1$ gluon field configurations generated
by the MILC collaboration~\cite{Bazavov:2012xda,Bazavov:2017lyh}.
These configurations include the effect of $u/d$, $s$ and $c$
quarks in the sea using the Highly Improved Staggered Quark (HISQ)
action~\cite{Follana:2006rc}. Details of the parameters for the configurations are
given in Table~\ref{tab:ensembles}.
In~\cite{Hatton:2020qhk} we
analysed charmonium correlators calculated in pure QCD and
in QCD + quenched QED using these (and further sets) of
gluon field configurations. This enabled us to determine
accurately how the $\eta_c$ and $J/\psi$ meson masses shift
(for a fixed valence $c$ quark mass)
when the $2e/3$ electric charge of the valence $c$ quarks is
included. The shifts are very small, upwards by $\sim$0.1\%, but clearly
visible. From this we could work out how the $c$ quark mass
should be retuned when QED is switched on. We chose the
natural tuning procedure in which the $c$ quark mass
is adjusted in both QCD and QCD+QED until the $J/\psi$ meson mass
determined on the lattice agrees with experiment.
This led us to a determination of the $c$ quark mass in
the $\overline{\text{MS}}$ scheme of
$\overline{m}_c(3\,\text{GeV})_{\text{QCD+QED}} =$0.9841(51) GeV.
This value is then 0.2\% lower than in pure QCD~\cite{Hatton:2020qhk}.
The reason that the inclusion of QED lowers the $c$ quark mass
(tuning to a fixed meson mass) is because the positive self-energy terms
in Eq.~(\ref{eq:QED-mass-shift}) raise the meson mass. The Coulomb interaction
is attractive inside a charmonium meson, however, and so must lower
the meson mass. Here we set out to isolate the Coulomb-dominated
piece of the QED
effect.
As described in Section~\ref{sec:intro} we can do this by comparing
two calculations in QCD+QED (which will be shorthand for QCD + quenched QED in what
follows). One calculation is the normal QCD+QED charmonium calculation with
$c$ and $\overline{c}$ quarks with opposite electric charge.
The second calculation is one in which the $c$ quark electric charge is
flipped but not that of the $\overline{c}$. The difference between the two
results then gives twice the QED interaction contribution to the meson
mass (Eq.~(\ref{eq:coulomb-isolate})).
Note that the second calculation is for an unphysical scenario
as far as QED is concerned. The underlying QCD physics is the
same in both cases. We use the same valence quark mass in
the two calculations, i.e. a mass close to the tuned $c$ quark mass
in QCD+QED for the physical scenario.
Our valence $c$ quark masses are given in Table~\ref{tab:ensembles}. These are the same
as masses used in~\cite{Hatton:2020qhk}, with tuning errors below 0.5\%.
For the two calculations we combine $c$ and $\overline{c}$ propagators
to generate two-point correlation functions that we average over
the gluon field configurations. We fit these as a function of time
separation between source and sink to determine the ground-state masses
in lattice units. The procedure for including quenched QED and
for fitting correlation functions is exactly the same as that
described in~\cite{Hatton:2020qhk} and we do not repeat the
discussion of either procedure here.
In Table~\ref{tab:charm-results} we give our results for the
$\eta_c$ and $J/\psi$ mesons. We calculate the ground-state masses
in the pure QCD case and also in the physical and unphysical QCD +
quenched QED scenarios. It is convenient to give the QCD+QED results
for the masses as a ratio to the value in pure QCD.
In Table~\ref{tab:charm-results} we
therefore give values for the difference of the ratios in the physical
and unphysical QCD+QED cases:
\begin{equation}
\label{eq:QCDQEDrat}
R = \frac{M(e_{q_1},e_{\overline{q}_2})-M(-e_{q_1},e_{\overline{q}_2})}{M(0,0)}.
\end{equation}
$e_{q_1}$ and $e_{\overline{q}_2}$ are the electric charges of
the quark and antiquark in units of $e$; for the
charmonium case these are $2/3$ and $-2/3$.
Multiplying $R$ by the mass in the pure QCD case, $M(0,0)$ (column 2 of
Table~\ref{tab:charm-results}), then gives
the mass difference needed for Eq.~(\ref{eq:coulomb-isolate}).
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{fv_convergence_etac.pdf}
\caption{A plot to show the size of finite-volume shifts needed
in the $\eta_c$ case (for the unphysical QED scenario with
meson charge $Q=4/3$) as a function of lattice spatial size.
The plot compares the leading-order $1/L_s$ calculation, which
is independent of meson mass, to the
result of adding in higher order terms in $1/L_s$.
}
\label{fig:fv}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{etac_coulomb.pdf}
\includegraphics[width=0.45\textwidth]{jpsi_coulomb.pdf}
\caption{The coefficient, $A$, of the QED interaction effect
(Eq.~(\ref{eq:QED-mass-shift})) in the
$\eta_c$ (upper plot, blue symbols) and $J/\psi$
(lower plot, purple symbols) meson masses, shown
as a function of squared lattice spacing (given in units of the quark
mass, denoted $m_h$ but here $m_c$).
The fit is described in the text and shown by the curves in each plot.
The pink symbols show the same
results, but for mesons made from a quark-antiquark pair with quark
mass $m_h$ twice that of the $c$ quark. The symbol shape denotes the gluon
field configurations used and is the same as that for the matching
charmonium calculation.
}
\label{fig:charmonium}
\end{figure}
One difference between the physical and unphysical QED scenarios that we must take
into account, however, is that of finite-volume effects from QED. In the physical
scenario the charmonium meson is electrically neutral and finite-volume
effects are negligible, as demonstrated in~\cite{Hatton:2020qhk}.
In the unphysical scenario the meson has an electric charge of $4e/3$ and
QED finite-volume effects are much larger. The finite-volume effects
have been calculated analytically as an inverse power series in the spatial
extent of the lattice~\cite{Hayakawa:2008an, Davoudi:2014qua, Borsanyi:2014jba}.
We need only the (universal) result up to $1/L_s^2$ which takes the form
\begin{eqnarray}
\label{eq:fvolshift}
\Delta_{\mathrm{FV}}(L_s) &=& M(L_s) - M(\infty) \\
&=& -\frac{Q^2\alpha_{\text{QED}}\kappa}{2L_s}\left( 1+ \frac{2}{M L_s}\right) \nonumber
\end{eqnarray}
with $\kappa =$ 2.8373 and $Q$ the meson electric charge in units of $e$.
The leading term, which is independent of meson mass,
has a magnitude of $Q^2 \times$ 0.5 MeV on a 4 fm lattice. We see then that the
finite-volume effects are small here, but not negligible compared
to our QED shifts. We handle them by correcting our
finite-volume masses using the formula above for the cases where we have an
(unphysical) electrically charged charmonium meson.
The finite-volume shifts in each case are given in Table~\ref{tab:charm-results}.
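As a numerical cross-check of Eq.~(\ref{eq:fvolshift}) (this snippet is illustrative; it assumes $w_0=0.1715$ fm together with the set-4 parameters quoted in the tables), the formula reproduces the tabulated $J/\psi$ shift to well within its quoted uncertainty:

```python
# hbar*c in MeV fm, the QED coupling, and kappa from Eq. (8)
HBARC, ALPHA, KAPPA = 197.327, 1 / 137.036, 2.8373

def delta_fv(Q, M_mev, L_fm):
    """Universal finite-volume shift through O(1/L_s^2), in MeV."""
    L = L_fm / HBARC                                   # extent in MeV^-1
    return -Q**2 * ALPHA * KAPPA / (2 * L) * (1 + 2 / (M_mev * L))

a = 0.1715 / 1.9330              # set-4 lattice spacing in fm
M_jpsi = 1.391390 * HBARC / a    # aM = 1.391390 converted to MeV
shift = delta_fv(4 / 3, M_jpsi, 48 * a)   # ~ -0.878 MeV; table: -0.8785(47)
```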
Figure~\ref{fig:fv} plots $\Delta_{\mathrm{FV}}$ for $Q=4/3$ as a function of
spatial lattice size for the range of lattice sizes that we use here.
The plot compares the leading $1/L_s$ term of Eq.~(\ref{eq:fvolshift})
to the result of including both the $1/L_s$ and $1/L_s^2$ terms.
We also show the impact of next-to-next-to-leading-order (NNLO) terms
at $1/L_s^3$ from~\cite{Davoudi:2014qua}. We take the value of
$\langle r^2 \rangle$ that appears in the $1/L_s^3$ terms from
vector meson dominance as $6/M_{J/\psi}^2$
(since we have shown in~\cite{Davies:2019nut} that vector dominance works well
for the electromagnetic form factor of mesons at small
momentum-transfer, including for the $\eta_c$).
We estimate the systematic
uncertainty from missing out the $1/L_s^3$ terms at 0.005 MeV, which
is negligible compared to other sources of uncertainty.
We then combine the mass differences and finite-volume shifts
to isolate the QED interaction effect for the $\eta_c$ and
$J/\psi$ (Eq.~(\ref{eq:coulomb-isolate})).
The coefficient, $A$, is determined as:
\begin{equation}
\label{eq:calcA}
A = \frac{1}{2e_{q_1}e_{\overline{q}_2}} \left(R \times M(0,0) + \Delta_{\mathrm{FV}}\right).
\end{equation}
These values are given for each ensemble in Table~\ref{tab:charm-results}.
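Eq.~(\ref{eq:calcA}) can be checked directly against the tabulated numbers; for the set-4 $\eta_c$ entry (again assuming $w_0=0.1715$ fm to convert to physical units):

```python
HBARC = 197.327                   # hbar*c in MeV fm
a = 0.1715 / 1.9330               # set-4 lattice spacing in fm

# Set-4 eta_c entries: R, aM in lattice units, finite-volume shift in MeV
R, aM, d_fv = -0.0017726, 1.342455, -0.8795
e1e2 = (2 / 3) * (-2 / 3)         # product of c and cbar charges

A_etac = (R * aM * HBARC / a + d_fv) / (2 * e1e2)   # ~ 6.94 MeV
```

This reproduces the tabulated value $A_{\eta_c}=6.944(42)$ MeV.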
We plot $A_{\eta_c}$ and $A_{J/\psi}$ in Fig.~\ref{fig:charmonium}
as a function of lattice spacing.
We see that, as expected, the attractive Coulomb interaction yields
a negative contribution to the meson masses because $A$ is positive and
$e_{q_1}e_{\overline{q}_2}$ is negative. The $A$ values are
not the same for the $\eta_c$ and $J/\psi$ mesons because of the
QED hyperfine interaction, which acts in the same direction as the
QCD hyperfine interaction raising the $J/\psi$ mass relative to the
$\eta_c$~\cite{Hatton:2020qhk}.
In order to obtain a value for the coefficient $A$ in the continuum limit we use
a fit that allows for discretisation errors as well as possible effects from
the mistuning of the charm quark valence mass and the mistunings of the sea
quark masses from their physical values.
The fit form we use is similar to that in \cite{Hatton:2020qhk}:
\begin{eqnarray}
\label{eq:amc-fit}
A(a^2, \delta m) &=& A \Bigg[ 1 + \sum_{i=1}^3 c_a^{(i)} (am_c)^{2i} + c_{m,\mathrm{sea}} \delta_m^{\mathrm{sea},uds} \nonumber \\
&+& c_{c,\mathrm{sea}} \delta_m^{\mathrm{sea},c} + c_{c,\mathrm{val}} \delta_m^{\mathrm{val},c} \Bigg] .
\end{eqnarray}
The mass mistuning terms here are defined as in \cite{Hatton:2020qhk}:
\begin{eqnarray}
\label{eq:mistunings}
\delta_m^{\mathrm{sea},uds} &=& \frac{2m_l^{\mathrm{sea}} + m_s^{\mathrm{sea}} - 2m_l^{\mathrm{phys}} - m_s^{\mathrm{phys}}}{10m_s^{\mathrm{phys}}} , \\
\delta_m^{\mathrm{sea},c} &=& \frac{m_c^{\mathrm{sea}} - m_c^{\mathrm{phys}}}{m_c^{\mathrm{phys}}} , \nonumber \\
\delta_m^{\mathrm{val},c} &=& \frac{M_{J/\psi} - M_{J/\psi}^{\mathrm{expt}}}{M_{J/\psi}^{\mathrm{expt}}} . \nonumber
\end{eqnarray}
$M_{J/\psi}$ is the lattice value in the QCD+QED case with the physical QED scenario.
For the experimental $J/\psi$ mass we use 3.0969 GeV \cite{Tanabashi:2018oca}.
We use priors of 0(1) for the $c_a^{(i)}$, $c_{m,\mathrm{sea}}$ and
$c_{c,\mathrm{val}}$ coefficients and a prior of $0\pm 0.1$ for $c_{c,\mathrm{sea}}$.
The mistuning terms in the fit have very little effect but including
them allows us to incorporate uncertainties from them in the final result.
With $\chi^2/\mathrm{dof}$ of 0.06 and 0.1 respectively we find
\begin{eqnarray}
\label{eq:charmres}
A_{\eta_c} &=& 6.99(28) \, \mathrm{MeV} \\
A_{J/\psi} &=& 4.49(20) \, \mathrm{MeV} \, .\nonumber
\end{eqnarray}
The uncertainty is dominated by that from the extrapolation to
zero lattice spacing and is much larger than that from possible
systematic errors in the finite-volume correction discussed above.
The fit is able to pin down the coefficient of the $(am_c)^2$ term
($c_a^{(1)}$) to be within 0.3 of zero. This is consistent with
the expectation that this coefficient should be of
size $\mathcal{O}(\alpha_s)$~\cite{Follana:2006rc}.
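The structure of this extrapolation can be illustrated with a stripped-down, noiseless version of Eq.~(\ref{eq:amc-fit}): since the fit form is polynomial in $(am_c)^2$, a plain polynomial fit recovers the continuum value $A$ as the constant term. This is only a sketch with synthetic inputs — the analysis itself uses a correlated Bayesian fit with the priors and mistuning terms described above.

```python
import numpy as np

# Toy continuum extrapolation: A(a^2) = A [1 + c1 (a m_c)^2 + c2 (a m_c)^4]
# is a polynomial in x = (a m_c)^2, so a polynomial least-squares fit
# suffices for this noiseless illustration.
A_true, c1, c2 = 7.0, -0.2, 0.1                  # synthetic, illustrative values
amc = np.array([0.65, 0.45, 0.30, 0.20])         # illustrative (a m_c) values
x = amc**2
data = A_true * (1 + c1 * x + c2 * x**2)

coeffs = np.polyfit(x, data, 2)                  # highest power first
A_continuum = coeffs[-1]                         # extrapolated value at a = 0
```

In the real fit, priors on the $c_a^{(i)}$ stabilise the extrapolation when the data do not constrain the higher-order terms.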
The Coulomb interaction effect probes the internal structure of the
meson at short distances between the quark-antiquark pair. It is therefore
interesting to ask how the coefficient $A$ changes for heavier
quarks than the $c$ quark. In Table~\ref{tab:heavy-results} we give our results for a
heavyonium meson made from a quark-antiquark pair with quark mass
twice that of the $c$ quark (but the same electric charge).
Again we use these results to determine the coefficient $A$ (which is
independent of electric charge) in this case. These results are
also plotted in Fig.~\ref{fig:charmonium}. The coefficient $A$
is substantially larger for the heavier mass case.
We perform fits to the heavier mass points
also using the fit form of Eq.~\eqref{eq:amc-fit}, but with $am_c$ now
replaced with $2am_c$ and dropping the $a^6$ terms because we have
results on fewer ensembles for this case.
The functional form of the lattice spacing dependence
should be the same in the $m_c$ and $2m_c$ cases up to possible
dependence on the squared velocity of the heavy quark inside
the bound-state in higher order coefficients in $a^2$~\cite{Follana:2006rc}.
We therefore use the results of the $m_c$ fit as prior information
to constrain the coefficients
of the lattice spacing dependence ($c_a^{(1)}$ and $c_a^{(2)}$)
in the fit for the $2m_c$ case.
This amounts to choosing a prior width of 0.3 for the $c_a^{(1)}$ coefficient
and 0.7 for $c_a^{(2)}$.
We find
\begin{eqnarray}
\label{eq:2charmres}
A_{\eta_{2c}} &=& 8.64(61) \, \mathrm{MeV} \\
A_{\psi_{2c}} &=& 6.24(27) \, \mathrm{MeV} . \nonumber
\end{eqnarray}
The fits give $\chi^2/\mathrm{dof}$ of 0.39 and 0.09 respectively.
We will discuss a comparison of the
values for $A$ for charmonium and heavyonium
with those determined from static QCD potentials
in Section~\ref{sec:discussion}.
We can contrast the heavyonium case with that of
a heavy-light meson. The simplest meson to use for this case
is the heavy-strange meson since this has no valence light quark.
We carry out the same analysis for the $D_s$ as for charmonium,
but now the QED finite-volume
effects (Eq.~(\ref{eq:fvolshift})) apply to both the physical
scenario (since the $D_s$ meson is electrically charged with $Q$=1) and
the unphysical scenario (where the `$D_s$' has the smaller charge $Q=1/3$).
In Eq.~(\ref{eq:calcA}) we therefore substitute for $\Delta_{\mathrm{FV}}$
the difference $\delta \Delta_{\mathrm{FV}}$ of the finite-volume effects
for the $Q=1$ and $Q=1/3$ cases.
The valence $s$ quark masses that we use are given in Table~\ref{tab:ensembles}
and are those obtained from the $m_s$ tuning exercise in~\cite{Chakraborty:2014aca}.
Our results for the $D_s$ meson mass in pure QCD, along with the ratio
$R$ of Eq.~(\ref{eq:QCDQEDrat}) and the finite-volume shifts discussed above,
are given in Table~\ref{tab:dsresults}.
\begin{table}
\centering
\caption{ Results that we use to obtain the QED
interaction effect for the $D_s$ meson.
Column 2 gives the ground-state $D_s$
meson mass in lattice units in the pure QCD case for the gluon
field configuration sets given in column 1. Column 3 gives the ratio
of the mass difference for the physical and unphysical QED scenarios
(Eq.~(\ref{eq:QCDQEDrat})) to the pure QCD mass. Column 4 gives the finite-volume
correction needed on that gluon configuration set for the difference
between the physical QED scenario (with meson charge 1) and the unphysical
QED scenario (with meson charge 1/3). Finally column 5 gives the
extracted coefficient of the effect on the mass from the quark electric
charge interaction term, $A_{D_s}$.
}
\label{tab:dsresults}
\begin{tabular}{lllll}
\hline \hline
Set & $aM^{\text{QCD}}_{D_s}$ & $R_{D_s}$ & $\delta \Delta_{\mathrm{FV}}$ [MeV] & $A_{D_s}$ [MeV] \\
\hline
2 & 1.52428(16) & 0.000921(39) & 0.3917(24) & 5.01(18) \\
3 & 1.22386(17) & 0.000820(29) & 0.4880(30) & 4.74(13) \\
5 & 0.87740(10) & 0.000891(47) & 0.3345(20) & 4.70(21) \\
6 & 0.59203(22) & 0.00075(12) & 0.6839(46) & 4.86(52) \\
\hline \hline
\end{tabular}
\end{table}
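As a rough cross-check of Table~\ref{tab:dsresults}, Eq.~(\ref{eq:calcA}) can be evaluated directly from columns 3 and 4, with $2e_{q_1}e_{\overline{q}_2} = 2\times(2/3)(1/3) = 4/9$. The sketch below uses the physical $D_s$ mass as a stand-in for the pure-QCD $M(0,0)$ in MeV (an assumption that ignores scale-setting and tuning details), so agreement with column 5 is expected only to a few hundredths of an MeV:

```python
# Approximate reconstruction of A_Ds from columns 3 and 4 of Table (tab:dsresults).
M_DS_MEV = 1968.35        # physical D_s mass, used here as a proxy for M(0,0)

rows = [  # (R, delta Delta_FV [MeV], quoted A_Ds [MeV])
    (0.000921, 0.3917, 5.01),
    (0.000820, 0.4880, 4.74),
    (0.000891, 0.3345, 4.70),
    (0.000750, 0.6839, 4.86),
]
# Eq. (eq:calcA) with 2 e_c e_sbar = 4/9
A_est = [(R * M_DS_MEV + dFV) / (4 / 9) for R, dFV, _ in rows]
```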
\begin{table}
\centering
\caption{Results as for Table~\ref{tab:dsresults} but now for a heavy quark mass
with value $2m_c$, for which we denote the meson by $D_{s,2c}$.
}
\label{tab:hsresults}
\begin{tabular}{lllll}
\hline \hline
Set & $aM^{\text{QCD}}_{D_{s,2c}}$ & $R_{D_{s,2c}}$ & $\delta \Delta_{\mathrm{FV}}$ [MeV] & $A_{D_{s,2c}}$ [MeV] \\
\hline
5$\ddagger$ & 1.34202(14) & 0.000637(43) & 0.3305(20) & 5.06(30) \\
6$\ddagger$ & 0.91076(19) & 0.000489(75) & 0.6682(44) & 4.84(51) \\
\hline \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Ds_coulomb.pdf}
\caption{The coefficient, $A$, of the QED interaction effect
(Eq.~(\ref{eq:QED-mass-shift})) in the
$D_s$ meson mass, shown with
purple symbols as a function of squared lattice spacing (in units
of the heavy quark mass, here $m_c$).
The fit is described in the text. The pink symbols show the same
results, but for mesons made from a heavy quark and strange antiquark with
heavy quark
mass twice that of the $c$ quark. Symbol shapes match those of the $D_s$
results on the same gluon field configurations.
}
\label{fig:Ds}
\end{figure}
The results for the QED interaction coefficient $A$ for this case
are shown in Fig.~\ref{fig:Ds}.
$A_{D_s}$ is positive, and combined with a positive product of electric
charges gives, as expected, a positive shift to the meson
mass because the Coulomb interaction inside
an electrically charged meson is repulsive.
We perform the same continuum extrapolation fit as for the charmonium
case, with the same priors, see Eq.~(\ref{eq:amc-fit}).
Our fit returns a value
\begin{equation}
\label{eq:dsresult}
A_{D_s}= 4.69(48) \, \mathrm{MeV}
\end{equation}
with a $\chi^2/\mathrm{dof}$ of 0.015.
We can also, as in the heavyonium case, work with heavy quarks with
mass $2m_c$. The results for this mass are given in Table~\ref{tab:hsresults} and also plotted
in Fig.~\ref{fig:Ds}.
In contrast to the heavyonium case, we find that the coefficient $A$ hardly
changes as we change the heavy quark mass in the heavy-strange meson
to be $2m_c$.
From a fit to this case we obtain
\begin{equation}
\label{eq:2dsresult}
A_{D_{s,2c}}= 4.68(66) \, \mathrm{MeV}
\end{equation}
with $\chi^2/\mathrm{dof}$ of 0.002.
This agrees very well with that for the $D_s$ above, consistent with
the fact that the points are on top of each other in Fig.~\ref{fig:Ds}.
\section{Discussion}
\label{sec:discussion}
The coefficient $A$ is a physical quantity, encoding
information about meson structure.
The quantitative information that lattice QCD results
for $A$ provide
can be used to calibrate more qualitative model approaches
for comparable quantities.
We discuss this below first for heavyonium and then heavy-light mesons.
The language of potential models provides a reasonably good
approximation for heavyonium.
A simple Cornell potential~\cite{Eichten:1979ms} of the form
\begin{equation}
\label{eq:cornell}
V(r) = -\frac{\kappa}{r} + \frac{r}{b^2}
\end{equation}
can readily be tuned to give the radial excitation energy of charmonium
with an accuracy of $\sim$10\%. Here $\kappa$ is $4\alpha_s/3$ and
$b^2$ is the inverse string tension. The parameters used
are: $\kappa=0.52$ and $b=2.34\,\mathrm{GeV}^{-1}$, along with a
$c$ quark mass in the kinetic energy
term of Schr\"{o}dinger's equation of 1.84 GeV~\cite{Eichten:1979ms}.
It
is then straightforward to perturb the
coefficient of $1/r$ in Eq.~(\ref{eq:cornell}) by $\alpha_{\mathrm{QED}}$
to include the Coulomb interaction effect and determine a value
for the ground-state energy shift which is the potential model
value for $A$ for charmonium, $A_c^{\mathrm{potl}}$.
Alternatively this can be obtained
by integrating over $\alpha_{\mathrm{QED}}/r$ weighted by the square
of the ground-state wavefunction.
Doing this gives a
value for $A_c^{\mathrm{potl}}$ of 5.9 MeV
(this is a shift of 2.6 MeV downwards in the meson mass
when multiplied by $e_{q_1}e_{\overline{q}_2}$ for
charmonium~\cite{Davies:2009tsa}).
This result is for the leading spin-independent central potential
of Eq.~(\ref{eq:cornell}) and does not include any spin-dependent effects.
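The quoted value can be reproduced with a short numerical sketch (not part of our analysis; the grid size and radial cutoff are illustrative choices): diagonalise the finite-difference radial Hamiltonian for the Cornell potential and evaluate $\alpha_{\mathrm{QED}}\langle 1/r\rangle$ in the ground state.

```python
import numpy as np

# Cornell potential parameters quoted above (natural units, GeV and GeV^-1).
kappa, b, mc = 0.52, 2.34, 1.84
mu = mc / 2.0                      # reduced mass of the quark-antiquark pair
alpha_qed = 1 / 137.035999

N, rmax = 1500, 15.0               # radial grid: r in (0, 15] GeV^-1
dr = rmax / N
r = dr * np.arange(1, N + 1)

# l = 0 radial equation for u(r) = r psi(r):  -u''/(2 mu) + V u = E u,
# discretised as a symmetric tridiagonal matrix with u(0) = u(rmax) = 0.
V = -kappa / r + r / b**2
H = np.diag(1.0 / (mu * dr**2) + V)
off = np.full(N - 1, -1.0 / (2 * mu * dr**2))
H += np.diag(off, 1) + np.diag(off, -1)

E, U = np.linalg.eigh(H)           # eigenvalues in ascending order
u0 = U[:, 0]                       # ground-state radial wavefunction
inv_r = np.sum(u0**2 / r) / np.sum(u0**2)      # <1/r> in GeV

A_potl = alpha_qed * inv_r * 1000  # first-order Coulomb shift coefficient, MeV
radial_gap = E[1] - E[0]           # 1S-2S splitting in GeV
```

With these parameters the sketch should give $A_c^{\mathrm{potl}}$ close to the 5.9 MeV quoted above, and a radial excitation energy near the charmonium value to which the potential was tuned.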
Our lattice QCD results, on the other hand, are for
the $\eta_c$ and $J/\psi$ mesons
separately. To compare our lattice results to those
from a spin-independent potential
we need to take the spin average:
\begin{equation}
\label{eq:spinav}
A_c = \frac{A_{\eta_c}+3A_{J/\psi}}{4} \, .
\end{equation}
Our results from Section~\ref{sec:lattice} (Eq.~(\ref{eq:charmres})) yield
\begin{equation}
\label{eq:Ac}
A_c = 5.12(17) \, \mathrm{MeV} \, .
\end{equation}
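As an arithmetic cross-check of the spin average (treating the uncertainties in Eq.~(\ref{eq:charmres}) as independent, an assumption that neglects the correlations retained in the quoted result):

```python
from math import sqrt

A_etac, dA_etac = 6.99, 0.28       # Eq. (eq:charmres), MeV
A_jpsi, dA_jpsi = 4.49, 0.20

A_c = (A_etac + 3 * A_jpsi) / 4                  # Eq. (eq:spinav)
dA_c = sqrt(dA_etac**2 + 9 * dA_jpsi**2) / 4     # naive error propagation

# fractional excess of the Cornell-potential value quoted above (5.9 MeV)
excess = (5.9 - A_c) / A_c
```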
The potential model result, $A_c^{\mathrm{potl}}$, given above
is 15(3)\% larger than our lattice QCD value. The uncertainty
here comes from the lattice QCD calculation where it can be quantified.
Clearly more sophisticated potential
models, including potentials derived from lattice QCD~\cite{Bali:2000gf}, could be
used to improve on
the potential model result. Our value for $A_c$ in Eq.~(\ref{eq:Ac})
can also be used to tune the parameters of potential models.
Frequently the tuning is done using
quantities such as the wavefunction at the origin,
along with the spectrum (see, for example,~\cite{Eichten:1995ch, Eichten:2019hbb}).
The wavefunction at the origin is not a physical
quantity, however, and there are sizeable uncertainties associated with
renormalising this to relate it to experimental decay rates.
In contrast the quantity $A$ is a physical,
renormalisation-group invariant quantity that can be compared much more
precisely. A systematic uncertainty of order 10\% on
$A_c^{\mathrm{potl}}$ might be expected on
the potential model result from missing $\mathcal{O}(v^4)$ relativistic
corrections. However this could be ameliorated by tuning the potential.
We can also compare the lattice and Cornell potential results for the
heavier quark mass of $2m_c$. Then our lattice spin-averaged result,
using the values from Eq.~(\ref{eq:2charmres}) is
\begin{equation}
\label{eq:Ac2}
A_{2c} = 6.71(49) \,\mathrm{MeV} .
\end{equation}
The result for $A_{2c}^{\text{potl}}$ from the same Cornell potential as
for the $m_c$ case is
9.1 MeV, now 30\% too large.
Our results show a variation of $A$ with quark mass that behaves
approximately as $\sqrt{m}$. We can compare this to what might
be expected from scaling arguments for a potential of the
form $Cr^N$. Then, as we change the quark's
reduced mass, $\mu \equiv m/2$,
we obtain the same solution for a rescaled
distance $\lambda r$ where~\cite{Quigg:1979vr, Davies:1997hv}
\begin{equation}
\lambda \propto \mu^{-1/(2+N)} \, .
\end{equation}
A $\sqrt{\mu}$ behaviour for
$A^{\mathrm{potl}} \equiv \langle \alpha_{\mathrm{QED}}/r \rangle$
would then correspond to $N\approx 0$.
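The step to $N\approx 0$ can be made explicit: under the rescaling $r \rightarrow \lambda r$ the Coulomb expectation value scales as

```latex
A^{\mathrm{potl}} \equiv \left\langle \frac{\alpha_{\mathrm{QED}}}{r} \right\rangle
\;\propto\; \frac{1}{\lambda} \;\propto\; \mu^{1/(2+N)} \, ,
```

so $A^{\mathrm{potl}} \propto \sqrt{\mu}$ requires $1/(2+N) = 1/2$, i.e. $N = 0$.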
Such a form for the heavy quark potential is in fact a standard one
that
has been successful in obtaining spectra, either taking $N$ to
be a small value or taking $V(r)$ to be
logarithmic~\cite{Martin:1980jx, Quigg:1977dd}.
These forms for the potential give a wavefunction that does not
grow so rapidly with mass at small distance as the Cornell potential
and might give results for $A_c^{\mathrm{potl}}$ and $A_{2c}^{\mathrm{potl}}$
in better agreement
with our lattice QCD value.
See~\cite{Eichten:1994gt, Eichten:1995ch} for a comparison of spectrum
and wavefunction results for
different potential forms.
The difference of our results for $A_{\eta_c}$ and $A_{J/\psi}$ gives
the `direct' QED effect on the charmonium hyperfine splitting, when multiplied
by $-4/9$. Note that in~\cite{Hatton:2020qhk} we also included a quark-line
disconnected contribution from QED to the hyperfine splitting
that is not included here.
We have
a difference of $A_{J/\psi}$ and $A_{\eta_c}$ of $-2.5(3)$ MeV from
Eq.~(\ref{eq:charmres}) for the $m_c$ case and $-2.4(8)$ MeV from
Eq.~(\ref{eq:2charmres}). The QED effect is then to raise the vector
mass with respect to the pseudoscalar by about 1 MeV in both cases (using
electric charge $2/3$).
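These central values follow directly from Eq.~(\ref{eq:charmres}); a sketch with naive, correlation-free error propagation (an assumption — the quoted uncertainties include correlations):

```python
from math import sqrt

diff = 4.49 - 6.99                       # A_Jpsi - A_etac in MeV
ddiff = sqrt(0.20**2 + 0.28**2)          # naive uncertainty

# Multiply by the charmonium charge product e_q1 e_qbar2 = -4/9 to obtain
# the direct QED shift of the hyperfine splitting (positive: vector raised).
qed_hyperfine = -4 / 9 * diff            # MeV
```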
The hyperfine splitting itself falls with increasing quark mass, so
the relative QED effect (for the same electric charge) is growing.
In~\cite{Hatton:2020qhk} we did a complete analysis of the charmonium
hyperfine splitting, including QED effects, as a function of lattice
spacing and sea quark mass. To compare the $m_c$ and $2m_c$ cases here
it is sufficient to
take results from a single ensemble, set 6, as a guide to variation
with heavy quark mass.
The results in Tables~\ref{tab:charm-results} and~\ref{tab:heavy-results}
then yield a pure QCD hyperfine splitting of 111 MeV for the $m_c$
case and 75 MeV
for the $2m_c$ case, i.e. a fall of 30\% on doubling the quark mass.
This is to be compared to a QED contribution that does not change
(at the level of our uncertainties).
A key difference between the QCD and QED hyperfine splittings
is the effective inclusion
(implicit in our lattice QCD calculation) of a running coupling constant
in the QCD case which reduces the splitting as the mass increases.
The QED hyperfine effect can also be compared to the expectation
from a potential model calculation, by determining the impact of the
perturbation from the Coulomb term on the wavefunction at the origin.
The leading term in the hyperfine splitting from a potential model
is proportional to the square of the wavefunction at the origin and so
the percentage change in the hyperfine splitting is simply twice the
percentage shift in $\psi(0)$.
For the Cornell potential discussed above we find the percentage
change in the hyperfine splitting (for $e_{q}=1$) to be 1.92\% for the $m_c$ case
and 2.74\% for the $2m_c$ case. This shows an increase in the percentage
QED effect that grows with the quark mass, as we find from our lattice
calculation.
To compare more quantitatively to our results
we multiply these percentages by the pure QCD
hyperfine splitting on set 6 given above.
This gives a QED hyperfine effect (for $e_{q}=1$)
of 2.1 MeV for both cases, in good agreement with our results
from the difference of $A_{\eta_c}$ and $A_{J/\psi}$.
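A quick numerical sketch of this comparison, using the set-6 pure QCD splittings quoted above:

```python
# Potential-model QED hyperfine effect (e_q = 1): twice the fractional shift
# of psi(0) times the pure QCD hyperfine splitting on set 6.
hf_mc, hf_2mc = 111.0, 75.0            # pure QCD splittings in MeV (set 6)
qed_hf_mc = 0.0192 * hf_mc             # 1.92% for the m_c case
qed_hf_2mc = 0.0274 * hf_2mc           # 2.74% for the 2m_c case
```

Both come out at about 2.1 MeV, as stated in the text.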
Note, however, that sizeable ($\mathcal{O}(30\%)$ for charmonium)
systematic errors are to be expected
in analyses of fine structure from a potential model,
so a semi-quantitative
comparison is the best we can do here.
We now turn to the heavy-light meson case.
In~\cite{Davies:2010ip} we analysed a model of QED and light-quark
mass effects in heavy-light pseudoscalar meson masses to isolate the QED
interaction term phenomenologically. We used~\cite{Goity:2007fu}
\begin{equation}
\label{eq:hl-pheno}
M(e_{q_h},e_{\overline{q}_l},m_{q_l}) = M_0 + Ae_{q_h}e_{\overline{q}_l} + Be_{q_l}^2 + C(m_{q_l}-m_l)
\end{equation}
where $e_{q_h}$ and $e_{\overline{q}_l}$ are the electric charges of the heavy quark
and light antiquark respectively and $m_{q_l}$ is the light quark mass,
$m_l$ being the average $u/d$ quark mass. The coefficient $A$ gives
the QED interaction term that we are interested in here. The
coefficient $B$ is that of the light-quark QED self-energy, assumed to be
independent of light quark mass. No term is included for the heavy
quark self-energy because that cancels in the differences of heavy-light
meson masses for the same heavy quark that we will use to fix the
coefficients. The coefficient $C$ allows for linear dependence on
the light quark mass, independent of QED effects.
From heavy quark symmetry we can expect that
$A$, $B$ and $C$ will be constant up to $\Lambda/m_h$ corrections as
the heavy quark mass
$m_h \rightarrow \infty$ and independent of $m_{q_l}$ up to small
chiral corrections. This model was also used in~\cite{Bazavov:2017lyh}.
If we assume that the coefficient $A\equiv A^{\mathrm{phen}}$ is independent of heavy quark
mass (i.e. we ignore $\Lambda/m_h$ terms) we can easily determine
it from experimental
information. If we add the experimental
mass difference of $B^+$ and $B^0$ to the mass
difference of $D^+$ and $D^0$~\cite{Zyla:2020zbs} then the terms with coefficients
$C$ and $B$ (if independent of heavy quark mass) cancel out. We have
\begin{eqnarray}
\label{eq:Aexp}
\frac{2}{3}A^{\mathrm{phen}} + \frac{1}{3}A^{\mathrm{phen}} &=& 4.822(15) - 0.31(5) \, \mathrm{MeV} \, , \nonumber \\
A^{\mathrm{phen}} &=& 4.51(5) \, \mathrm{MeV} .
\end{eqnarray}
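The charge-product algebra behind Eq.~(\ref{eq:Aexp}) can be checked with exact fractions; a sketch (the meson quark content and quark charges are standard):

```python
from fractions import Fraction as F

# Quark charges in units of e.
e_u, e_d, e_c, e_b = F(2, 3), F(-1, 3), F(2, 3), F(-1, 3)

# Coefficients of A in M(D+) - M(D0) and M(B+) - M(B0); the antiquark
# charge is minus the quark charge.
coef_D = e_c * (-e_d) - e_c * (-e_u)   # D+ = c dbar, D0 = c ubar
coef_B = (-e_b) * e_u - (-e_b) * e_d   # B+ = u bbar, B0 = d bbar

# Adding the experimental differences, the B and C terms cancel:
A_phen = 4.822 - 0.31                  # MeV
dA_phen = (0.015**2 + 0.05**2) ** 0.5  # MeV
```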
This result agrees well with our lattice determination of
$A$ for the $D_s$ meson in Eq.~(\ref{eq:dsresult}).
From our calculation of results with a heavy quark mass twice that
of charm (Eq.~(\ref{eq:2dsresult})) we are able to show that
indeed $A$ is independent of heavy
quark mass at the level of our uncertainties ($\sim$10\%).
It would be straightforward to extend our calculation to the
$D$ meson from the $D_s$ to test for any dependence
of $A$ on the light quark mass.
\section{Conclusions}
\label{sec:conclusions}
We have shown here how to separate out the QED interaction piece from
the self-energy terms in the determination of the effect of including QED
for valence quarks on heavyonium and heavy-light meson masses.
Lattice QCD calculations are now accurate enough that the effect of
QED, at least for the valence quarks, can have an impact. The full
effect of QED needs to be included in order to tune parameters, such
as quark masses, by tuning meson masses until they take their experimental
value in the QCD+QED calculation (this was done, for example, for
QCD + quenched QED in~\cite{Hatton:2020qhk}).
There are multiple reasons for wanting to separate out the
QED interaction piece from the self-energy terms of the QED effect, however.
One is to
test our understanding of the physical contribution of QED by comparing
to phenomenological model calculations.
Another is to use the effect
as a probe of meson, and more generally, hadron structure by using it
to determine an effective average radial separation of the valence
quarks.
We have determined the coefficient $A$ of the QED interaction
piece
for the $\eta_c$, $J/\psi$ and $D_s$ mesons as well as for the corresponding
mesons constructed by doubling the $c$ quark mass
(see Eqs~(\ref{eq:charmres}),~(\ref{eq:2charmres}),~(\ref{eq:dsresult})
and~(\ref{eq:2dsresult})).
The uncertainties we obtain at the physical point are 5\% for the heavyonium
case and 10\% for heavy-light.
A simple potential model gives results for the Coulomb interaction
effect in charmonium
in reasonable agreement with the lattice QCD numbers
(spin-averaged to remove
spin effects). We suggest
that the lattice results
could be used to tune potential models more accurately. This in turn
could improve results for calculations, for example involving excited
states and hadronic decay channels, that are currently more readily done in a
potential model than using lattice QCD. We also find that a phenomenological
model based on heavy quark symmetry gives good agreement
with our $D_s$ results. We
are able to demonstrate in that case that $A$ is independent of heavy
quark mass.
Since $A$ is dominated by the Coulomb interaction effect
for heavy mesons we can define an effective size parameter
$\langle 1/r_{\mathrm{eff}} \rangle$ by dividing
our results for $A$ by $\alpha_{\mathrm{QED}}$.
This gives values for $\eta_c$, $J/\psi$ and $D_s$
mesons of
\begin{eqnarray}
\label{eq:size}
1/\langle 1/r_{\mathrm{eff}} \rangle &=& 0.206(8) \, \mathrm{fm}, \quad \eta_c \\
&=& 0.321(14) \, \mathrm{fm}, \quad J/\psi \nonumber \\
&=& 0.307(31) \, \mathrm{fm}, \quad D_s \, .\nonumber
\end{eqnarray}
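These values follow from $A = \alpha_{\mathrm{QED}}\langle 1/r_{\mathrm{eff}}\rangle$, converting to fm with $\hbar c \simeq 197.327\,$MeV fm; a sketch:

```python
ALPHA_QED = 1 / 137.035999
HBARC = 197.327                        # MeV fm

def r_eff_fm(A_mev):
    """Inverse of <1/r_eff> in fm for a given QED interaction coefficient A."""
    return ALPHA_QED / A_mev * HBARC

sizes = {name: r_eff_fm(A) for name, A in
         [("eta_c", 6.99), ("J/psi", 4.49), ("D_s", 4.69)]}

# vector-dominance comparison: sqrt(<r^2>) = sqrt(6)/M_Jpsi
rms_vmd = 6**0.5 / 3096.9 * HBARC      # fm
```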
The $\eta_c$ result can be compared to the value of
$\sqrt{\langle r^2 \rangle}$ obtained from
$\langle r^2 \rangle = 6/M_{J/\psi}^2$, a form that is in reasonable agreement
with our results for the electromagnetic form factor of the
$\eta_c$ at small squared momentum-transfer, $q^2$~\cite{Davies:2019nut}.
This gives $\sqrt{\langle r^2 \rangle} = 0.156$ fm.
We also find that the size parameter from Eq.~(\ref{eq:size})
falls for heavier heavyonium
masses approximately as $\sqrt{m}$ but does not change at all
for heavy-light mesons as the mass is increased.
We believe that this could be a useful approach to assessing
the size of other hadrons because it requires only the
calculation and fitting of correlated 2-point functions.
The noncompact QED action is simply being used as a convenient
way to probe the $r$-dependence so
a larger value of $\alpha_{\mathrm{QED}}$ than the physical one
can be used to increase the signal for the perturbation~\cite{Duncan:1996xy}.
For these purposes it might also be easier to use a purely Coulomb photon
on each timeslice of the lattice as the direct Fourier transform of $1/r$.
By giving electric charge
to pairs of quarks in more complicated hadrons such as baryons, tetraquarks
or pentaquarks it might be possible
to distinguish diquark-like configurations where they occur.
We plan to test this out.
\subsection*{\bf{Acknowledgements}}
We are grateful to the MILC collaboration for the use of
their configurations and QCD code. We adapted this to include quenched QED.
Computing was done on the Cambridge service for Data
Driven Discovery (CSD3), part of which is operated by the
University of Cambridge Research Computing on behalf of
the DIRAC
HPC Facility of the Science and Technology Facilities
Council (STFC). The DIRAC component of CSD3 was funded by
BEIS capital funding via STFC capital grants ST/P002307/1 and
ST/R002452/1 and STFC operations grant ST/R00689X/1.
DiRAC is part of the national e-infrastructure.
We are grateful to the CSD3 support staff for assistance.
Funding for this work came from the UK
Science and Technology Facilities Council grants
ST/L000466/1 and ST/P000746/1 and from the National Science
Foundation.
\section{Introduction}
\label{sec:Introduction}
When the CEO of a major Swiss telecommunication provider was asked about the long-term goal of his company, he replied: ``Still being in the market in five years.''
This example could well serve as a shorthand description of resilience.
Being there in five years means that the company could either withstand shocks or recover from them if they could not be avoided.
For economic entities like communication firms, shocks could result from various sources, for instance, market disruptions from new competitors, legal regulations about privacy from governments, technological innovations that change communication behaviour, and many more.
Such incidents are likely to happen over time.
What makes them \emph{shocks} is their unpredictability.
Hence, a responsible CEO would probably prepare his company to cope with the unforeseeable; he would strengthen the company's ability to adapt to any changes quickly.
A similar understanding of resilience applies to individuals that face various mental, health, economic or social challenges.
In a mechanical sense, it would be difficult to define a ``stable state'' for them.
Their stability is indicated by the fact that they can master these challenges and still are ``there'', despite a very demanding life.
Instead of individuals, in the following, we focus on \emph{collectives}, i.e. social organisations comprising a larger number of individuals.
This term refers to formal or informal groups of interrelated individuals who pursue a collective goal and are embedded into an environment~\citep{Ostrom2009,Hoegl2001}. Examples range from companies and non-profit organisations to high school classes or virtual teams collaborating via online systems.
We use as our running example a collective of developers of the open source software project \textsc{Gentoo} (for the details, see Appendix~\ref{sec:gentoo-data}).
Compared to other types of systems, e.g., technical or ecological systems, we know very little about the resilience of collectives, which provides the primary motivation for our paper.
We argue that the difficulties of tackling the resilience of collectives with a formal approach result from two \emph{dynamical} problems, discussed in the following.
The first one is the fast and continuing change within collectives, and the second one is the additional feedback cycle resulting from their response to changes induced by themselves.
Most collectives have in common that they are very \emph{volatile}.
They may experience fast changes in their structure, e.g. the number of individuals and their relations, have to cope with fluctuating task volumes or frequent interruptions, constant environmental impacts, etc.
This volatility makes them different from, e.g., engineered systems, which are built to last.
The common notion of resilience for engineered artefacts, such as bridges, is illustrated in Figure~\ref{fig:resilience:a}.
A bridge is planned for a defined functionality, e.g. a given number of cars per hour passing the bridge.
This functionality remains as long as no critical shocks appear, either caused by internal malfunction, e.g., lack of maintenance, or external disruptions, e.g., an earthquake.
If the shock happens, the bridge's functionality is partially or entirely destroyed.
Nonetheless, the bridge can be rebuilt, recovering the functionality and often even improving it.
\begin{figure}[t]\centering
\begin{subfigure}{.5\textwidth}\centering
\includegraphics[width=0.8\textwidth]{figures/draw-resilience1A.pdf}
\caption{}\label{fig:resilience:a}
\end{subfigure}\hfill
\begin{subfigure}{.5\textwidth}\centering
\includegraphics[width=0.9\textwidth]{figures/draw-resilience2A.pdf}
\caption{}\label{fig:resilience:b}
\end{subfigure}
\caption{Problems defining a reference state for resilience, understood as the ability to absorb shocks and to recover: (a) engineered system (e.g. a bridge), (b) social system (e.g. a collective).}
\label{fig:resilience}
\end{figure}
The assumption underlying Figure~\ref{fig:resilience:a} is a known reference state, i.e. the planned functionality, which remains relevant over time.
For highly volatile systems, shown in Figure~\ref{fig:resilience:b}, we cannot define such a reference state, partly because it is hardly quantifiable and partly because it is constantly changing.
This implies that we are also unable to specify what we mean by a ``shock''.
Unlike the bridge, where shocks result in a measurable dropdown of functionality, we always have ``shocks'' of varying sizes.
The ability to recover is not restricted to the aftermath of a breakdown.
Instead, it requires a continuous effort from the collectives to adapt to all sorts of challenges.
Most importantly, the recovery is not an external intervention, like the repair of a bridge, but the result of an internal response of the collectives.
Consequently, we need a new and dynamical approach to the resilience of such social systems.
\section{A new resilience measure}
\label{sec:resilience}
\subsection{Defining robustness and adaptivity}
\label{sec:robustn-adapt}
Resilience concepts have been developed in different disciplines, ranging from ecology to engineering, the social sciences, management sciences, or mathematics \citep{Hosseini2016}.
Its precise meaning differs across and sometimes even within these disciplines \citep{Fraccascia2018a,Baggio2015}.
Many approaches take resilience simply as a synonym for stability.
In ecology, for example,
a system is said to be resilient if, after a perturbation, it returns to a previously assumed stable state \citep{Grodzinski1990,Gunderson2000}. This idea borrows from classical mechanics and thermodynamics with their definitions of equilibrium states as minima of some potential energy.
Collectives are inherently open non-equilibrium social systems.
Stationary states in non-equilibrium can only be kept if they are constantly \emph{maintained}, and collectives are no exception.
Their resilient state has to be actively managed.
Otherwise, it dissolves over time like any other ordered state.
So, what precisely has to be maintained?
We propose that resilience $\mathcal{R}[A(t),R(t)]$ is composed of a structural component that captures the \emph{robustness}, $R(t)$, and a dynamic component that captures the \emph{adaptivity}, $A(t)$, of a system, which both can change over time.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{figures/gentoo_2.pdf}
\caption{
Network of task assignments between Gentoo developers in September 2007.
A node's size and colour intensity is proportional to its degree. }
\label{fig:gentoo-network}
\end{figure}
Collectives can only function if they build on social structures.
In the example of a software developers' team, these structures are reflected by their work relations, communication channels, etc.
These structural features can be represented by a \emph{social network}, conveniently extracted from the project repositories using state-of-the-art tools such as \texttt{git2net} \citep{Gote2019}.
Links in this network are timestamped, directed, and weighted \citep{gote2021analysing}, and multiple relationships can be captured by multi-edge \citep{casiraghi2017relational} and multi-layer networks \citep{garas2016interconnected}.
This social network evolves if nodes or links are added or deleted, or links are rewired.
Collectives utilise this social structure for their activities.
A well-maintained social network will allow developers to, e.g., write more code, fix bugs faster, and reduce coordination overhead.
While robustness has an intuitive interpretation, adaptivity is more challenging to grasp.
It refers to the \emph{ability} of the collective to attain different states, not necessarily to actual transformations \citep{schweitzer2021fragile}.
Hence, adaptivity depends on the available \emph{options} to change the current state.
One way to measure this ability is \emph{potentiality} \citep{zingg2019entropy}, which quantifies how many different states become potentially available in a given situation.
This strongly depends on existing constraints for the collective.
The generalised hypergeometric ensemble of graphs (gHypEG) \citep{casiraghi2021configuration} allows calculating these states from an analytic approach.
Obviously, a collective cannot be resilient if it has no options to escape from an impaired situation.
Hence, the \emph{ability} to change is crucial for resilience.
However, knowing how the collective precisely evolves would imply predicting the future, which is not the aim of our approach.
In Appendix~\ref{sec:gentoo-data}, we summarise how the two variables, robustness and adaptivity, are operationalised using data from the software developers' collective.
\subsection{Composing resilience from robustness and adaptivity}
\label{sec:comp-resil-from}
How does the resilience of collectives depend on their robustness and their adaptivity?
Should it be monotonically increasing with these two variables, assuming the more, the better?
The relations are more intricate, as we summarise in Figure~\ref{fig:square:a}.
\textbf{Region~(1)} is characterised by a low resilience of collectives because both robustness and adaptivity are \emph{low}.
Hence, there is nothing to build on, and the collective has few alternatives to change this unfavourable situation.
\textbf{Region~(2)} is characterised by high robustness, which implies a solid, structured social network.
It cannot be easily destroyed but also not be changed.
This state might be beneficial only if collectives \emph{should not} change because they are already close to an optimal state.
Collectives with high robustness have a lot to lose.
Thus adaptivity should be low to keep this state.
Only then can resilience become high.
\textbf{Region~(3)} is also characterised by high robustness, but the high adaptivity increases the risk that the collective could lose its robustness.
Therefore, such states have low resilience.
In the complementary case, if adaptivity should be \emph{high} because the collective needs different options to adapt, high robustness would only work against the necessary change.
Again, this means a lower resilience.
\textbf{Region~(4)} is characterised by low robustness.
That means the collective has nothing to lose, and alternative states will be better.
Thus, a high adaptivity can only improve the situation.
Therefore, resilience should be high.
\begin{figure}[htbp]\centering
\begin{subfigure}{.5\textwidth}\centering
\includegraphics[height=0.75\textwidth]{figures/adapt-robust-worse.png}
\caption{}\label{fig:square:a}
\end{subfigure}\hfill
\begin{subfigure}{.5\textwidth}\centering
\includegraphics[height=0.75\textwidth]{figures/pgf-3dplot.pdf}
\caption{}\label{fig:square:b}
\end{subfigure}
\caption{ Resilience $\mathcal{R}$ as a function of robustness $R$ and adaptivity $A$: (a) Qualitative assessment of different states. (b) Quantification using Eqn.~\eqref{eq:5} with $A_{\mathrm{max}}=R_{\mathrm{max}}=1$ for illustration.}
\label{fig:square}
\end{figure}
Resilience, as a quantitative measure, should try to balance the influence of both robustness and adaptivity depicted in Figure~\ref{fig:square:a}.
This can be achieved by quantifying resilience as proposed in \citep{IChingpaper} and shown in Figure~\ref{fig:square:b}:
\begin{align}
\label{eq:5}
\mathcal{{R}}(A,R)=R (A_{\mathrm{max}}-A)+A(R_{\mathrm{max}}-R)
\end{align}
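As an illustrative sketch (not from the paper), the following Python snippet evaluates Eqn.~\eqref{eq:5} with $A_{\mathrm{max}}=R_{\mathrm{max}}=1$ at the four corner regions of Figure~\ref{fig:square:a}:

```python
def resilience(A, R, A_max=1.0, R_max=1.0):
    """Resilience measure of Eqn. (5): R*(A_max - A) + A*(R_max - R)."""
    return R * (A_max - A) + A * (R_max - R)

low, high = 0.1, 0.9
r1 = resilience(low, low)    # Region (1): low A, low R   -> low resilience
r2 = resilience(low, high)   # Region (2): low A, high R  -> high resilience
r3 = resilience(high, high)  # Region (3): high A, high R -> low resilience
r4 = resilience(high, low)   # Region (4): high A, low R  -> high resilience
assert r2 > r1 and r4 > r3
```

The corner values confirm the qualitative picture: resilience is high precisely when one component is high while the other is low.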
To summarise our discussion, adaptivity as the dynamic component of resilience is a two-edged sword.
It bears the chance to improve the bad states of collectives with low robustness and the risk of destroying good states with high robustness.
We also note that robustness or adaptivity \emph{alone} cannot guarantee that a collective is resilient.
Unlike robustness, which describes the current state, resilience has to reflect the ability to improve in the near \emph{future}.
Conversely, without the ability to adapt, collectives can be stable or unstable, but they are not resilient, i.e., they cannot respond to internal or external changes.
\subsection{A formal model to build up resilience}
\label{sec:formal-model}
We now proceed in two directions.
First, we study a formal model of generating resilience from robustness and adaptivity.
This will result in hypotheses for the behaviour of collectives.
Secondly, we test these hypotheses using data from our team of software developers.
From the above discussion, it becomes clear that robustness has to lead the improvement of the resilience of collectives, as all further activities depend on the existing social network.
At the same time, maintaining the social network also requires adaptivity.
New nodes have to be integrated.
Links have to be rewired or reinforced.
Therefore, the dynamics of robustness $R$ and adaptivity $A$ are coupled in a nonlinear manner.
For the details see Appendix \ref{sec:formal-relations}.
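The concrete coupled equations are given in Appendix~\ref{sec:formal-relations} and are not reproduced here. Purely as a hypothetical illustration of how a nonlinear $R$--$A$ coupling can generate closed life-cycle orbits of the kind shown in Figure~\ref{fig:loop}, consider a Lotka--Volterra-type toy system (all parameters and functional forms are invented for illustration and are not the paper's model):

```python
def toy_life_cycle(R0=1.5, A0=0.5, a=1.0, b=1.0, c=1.0, d=1.0,
                   dt=1e-3, steps=20000):
    """Euler-integrate a TOY coupling (not the paper's equations):
    robustness R is needed for adaptivity A to grow, while high
    adaptivity erodes robustness -- a Lotka-Volterra-style cycle."""
    R, A = R0, A0
    trajectory = [(R, A)]
    for _ in range(steps):
        dR = a * R - b * R * A   # robustness builds up, eroded by adaptivity
        dA = c * R * A - d * A   # adaptivity feeds on robustness, decays alone
        R, A = R + dt * dR, A + dt * dA
        trajectory.append((R, A))
    return trajectory

traj = toy_life_cycle()
# Both variables rise and fall repeatedly but remain positive and bounded,
# tracing a closed loop in the (R, A) phase space.
```

The point of the sketch is only that a simple nonlinear coupling suffices to produce recurrent rises and falls of robustness and adaptivity, i.e., life cycles.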
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{figures/curve_regimes.pdf}
\caption{Resilience trajectory in the phase space of robustness and adaptivity.
The two scenarios (grey: I, black: II) are obtained from the formal model presented in Appendix~\ref{sec:formal-relations} for two different parameter sets. The color code refers to the Regions defined in Figure~\ref{fig:square:a}.}
\label{fig:loop}
\end{figure}
As Figure~\ref{fig:loop} shows, the formal model generates distinct trajectories in the phase space of robustness and adaptivity for collectives' dynamics.
They resemble cycles, i.e., \emph{life cycles} in the development of collectives.
We show two different trajectories starting in Region (1) of low resilience, characterized by low robustness and low adaptivity.
The trajectories then quickly turn towards Region (2) of high resilience, characterized by high robustness, while adaptivity is low enough not to destroy the resilient state.
This region would be fortunate for the collective if it could stay there.
This, however, is not the case.
Our model predicts two scenarios shown in Figure~\ref{fig:loop}, which will be compared to the software developers' data.
Starting from Region (2), in Scenario (I), robustness remains high, but adaptivity is further increased such that Region (3) is reached.
In this region, resilience is low because the robust social structure is at risk of being lost because of too many alternative states and too little attention to maintain the current one.
Consequently, a failure follows, and the trajectory returns to the initial Region (1), where both robustness and adaptivity are low.
There, a new life cycle could start.
In Scenario (II), starting from Region (2), robustness decreases at the expense of adaptivity, which increases, such that Region (4) is reached.
Adaptivity and robustness are both coupled and, for certain parameter regions, cannot be increased simultaneously.
Such a coupling first leads the collective to another state of high resilience in which robustness does not work against adaptivity.
However, this state cannot be kept for long because robustness, the precondition of adaptivity, is low.
Therefore, after adaptivity has decreased, a failure follows, and a new cycle can start from Region (1).
These two scenarios are different in their sequence of resilient ($\square$) and nonresilient ($\triangle$) states.
Scenario (I) follows $(\triangle)\to (\square)\to (\triangle)\to (\triangle)\to ...$,
while Scenario (II) follows $(\triangle)\to (\square)\to (\square)\to (\triangle)\to ...$.
In Section~\ref{sec:comparison}, we will test the two hypotheses about the life cycle against the data from the developer collective and discuss the reasons for the collective failure in more detail.
\section{Resilience at work: An application}
\label{sec:comparison}
\subsection{Measuring resilience for a collective }
\label{sec:meas-resil-coll}
To demonstrate the applicability of our resilience measure, we analyse data from the \emph{bug handling collective} of \textsc{Gentoo} (see Appendix~\ref{sec:gentoo-data}).
Between October 2004 and March 2008, a developer, referred to as \emph{Alice} in the literature \citep{Zanetti2013,Garcia2013b}, became the most central figure in this collective (see also Figure~\ref{fig:gentoo-network}).
She assigned most bug reports to other developers for a while but left the project suddenly in March 2008.
The unforeseeable dropout of a core developer was a severe shock for the collective, which struggled for several years before it could reach a comparable level of operation.
\citet{Zanetti2013} already studied how different network measures reflect the dropout of Alice, while \citet{Casiraghi2021} developed a load redistribution model of task reassignments to study the likelihood of team failure.
For us, the recorded data allows studying the resilience of the collective during this period.
First, we constructed a social network from the available interaction data, where nodes indicate developers and directed links task assignments.
Because this network changes daily, we used a 30-day sliding window for aggregation.
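As a minimal sketch of this aggregation step (field names and example data are assumptions for illustration, not the actual \textsc{Gentoo} schema):

```python
from collections import Counter
from datetime import date, timedelta

def aggregate_window(events, end_day, window_days=30):
    """Aggregate timestamped task assignments (day, src, dst) into a
    weighted, directed edge list over the last `window_days` days.
    Illustrative sketch only."""
    start_day = end_day - timedelta(days=window_days)
    weights = Counter(
        (src, dst) for day, src, dst in events if start_day < day <= end_day
    )
    return dict(weights)

# Hypothetical assignment events (developer names are invented):
events = [
    (date(2007, 9, 1), "alice", "bob"),
    (date(2007, 9, 2), "alice", "bob"),
    (date(2007, 9, 15), "carol", "alice"),
    (date(2007, 6, 1), "bob", "carol"),   # outside the 30-day window
]
net = aggregate_window(events, end_day=date(2007, 9, 30))
assert net == {("alice", "bob"): 2, ("carol", "alice"): 1}
```

Sliding the window forward day by day yields the sequence of network snapshots on which robustness and adaptivity are measured.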
Applying our quantitative measure for resilience requires operationalising the two main factors, robustness and adaptivity, for this network.
The details are again presented in Appendix~\ref{sec:gentoo-data}.
Robustness, as the structural component, is large if the nodes in the network have a similar degree.
That means everyone in the collective processes roughly the same number of tasks, either by solving or reassigning them, and nobody gets overloaded.
Adaptivity, as the dynamic component, compares the number of developers assigning tasks to this number six months ago.
If it increases, more developers become potentially involved in bug handling.
Thus, the workload is better balanced, alternative members for task processing are available, and the time to process them becomes shorter \citep{Zanetti2013}.
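A purely illustrative operationalisation along these lines might look as follows; the formulas are assumptions made for the sake of the example and not necessarily those of Appendix~\ref{sec:gentoo-data}:

```python
from statistics import mean, pstdev

def robustness_sketch(degrees):
    """High when nodes have similar degree (balanced workload).
    Illustrative: one over (1 + coefficient of variation);
    NOT necessarily the formula used in the paper's appendix."""
    if not degrees or mean(degrees) == 0:
        return 0.0
    cv = pstdev(degrees) / mean(degrees)
    return 1.0 / (1.0 + cv)

def adaptivity_sketch(n_active_now, n_active_past):
    """Compare active developers now vs. six months ago (illustrative)."""
    if n_active_past == 0:
        return 0.0
    return n_active_now / n_active_past

balanced = robustness_sketch([5, 5, 5, 5])    # everyone handles 5 tasks
central = robustness_sketch([17, 1, 1, 1])    # one developer dominates
assert balanced > central
```

In this reading, a collective where one member processes almost all tasks scores low on robustness, matching the centralisation around Alice discussed below.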
The results in Figure~\ref{fig:time} reveal the following scenario of how this collective copes with change.
Initially, adaptivity is low because the collective first has to establish a robust social structure for collaboration.
As this progresses, adaptivity also increases because more options become available for performing tasks.
In the same way, if robustness decreases, adaptivity follows the decrease with a time lag of several months.
That means robustness is instrumental for generating activity and ensuring adaptivity.
This is also reflected in our formal approach (Appendix~\ref{sec:formal-relations}).
We now focus on the time interval after 2004, when robustness started to decrease.
According to our operationalisation, this indicates that the task assignment became more centralised.
It was the time when developer Alice started to assign most of the tasks.
Interestingly, this concentration led to an increase in adaptivity, i.e. the number of developers who got tasks assigned still increased.
That means Alice effectively utilised the collective's workforce, involving more members.
However, the further concentration of the responsibilities eventually led to a decrease in adaptivity, i.e., fewer options for the collective to contribute.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{figures/ada_rob_vs_time_highBW.pdf}
\caption{Robustness and adaptivity over time. Dots indicate the values obtained from the social network. Using a kernel density estimation, we reduce this information to the empirical curves, from which the fits to the dynamic model of robustness and adaptivity, Appendix~\ref{sec:formal-relations}, are obtained. }
\label{fig:time}
\end{figure}
\subsection{Explaining the failure}
\label{sec:explain}
The findings from our case study are remarkable in different aspects.
First, in Figures~\ref{fig:loop} and \ref{fig:time} we observe a \emph{life cycle}, i.e., the resilience of the collective first increases and afterwards rapidly decreases.
After returning to the initial low resilience state, the collective starts to consolidate again by building up robustness and adaptivity.
The dynamic model presented in Appendix~\ref{sec:formal-relations} is compatible with such a life cycle behaviour but, obviously, cannot capture additional perturbations occurring in this particular collective.
For example, in Figure~\ref{fig:time}, one can notice a deviation of adaptivity between the model and the data in 2007.
This was caused by the fact that during that period the central developer was temporarily suspended by the collective.
Such singular events cannot be reflected by our dynamic model.
But, interestingly, they do not change the principal dynamics of the life cycle.
Secondly, thanks to our dynamic resilience concept, we can understand the reasons behind the life cycle.
These are the adaptive processes inside the collective that push it out of the resilient state and eventually cause the failure.
This is reminiscent of self-organized criticality, a dynamic phenomenon in non-equilibrium complex systems and networks \citep{watkins201625,PhysRevE.85.026103}, where feedback processes drive a system into unstable states.
However, in contrast to mechanical or physical systems, the dynamics approaching the critical state is not universal, but depends on the goal of the collective and the social mechanisms at work.
Specifically, the two states of high resilience for the collective are of different natures.
The state with high robustness in Region (2) is characterised by balanced interactions between developers, who were all involved similarly in assigning, redistributing and solving tasks.
However, little changes to the social network occurred because strategies to integrate new developers were missing.
Following the advent of Alice, the collective evolved to a second resilient state in Region (4).
In this state, adaptivity increased because more developers were involved in solving the tasks, and new members could be quickly and easily integrated into the organisational structure.
However, the effort to assign tasks became more centralised, and links that have become redundant disappeared from the social network.
Therefore robustness decreased.
This development reflects an internal reorganisation in the workflow.
With Alice as the central developer, the collective obtained a hierarchical organisation.
It became highly efficient regarding the task assignments but also highly vulnerable because the collective depended on a single individual.
Once robustness has been critically lowered, a further increase in adaptivity eventually destroys the previous resilient state.
The fact that the failure is caused \emph{endogenously}, i.e., by the collective itself, makes it different from engineered systems, such as the bridge example mentioned earlier, which may fail because of exogenous shocks.
It also makes it different from ecological systems, which often reach a stationary non-equilibrium state that persists over a long time if no critical perturbations occur.
Social systems constantly adapt to changes caused either exogenously or endogenously.
This response leads to intended as well as unintended consequences.
The intended one was the increased efficiency in utilising the workforce, thanks to the central developer.
The unintended one was the increased dependency on this central developer, causing the unnoticed erosion of robustness.
The life cycle observed allows us to characterize resilience in a more general manner.
Collectives could be seen as resilient \emph{only} if they follow more than one round of the life cycle.
This denotes a \emph{higher-order}, or \emph{long-term}, resilience.
A \emph{first-order}, or short-term, resilience, in contrast, refers only to one cycle.
There, we already observe resilient states of the collective which can last for a long time, but are eventually destroyed by the adaptive dynamics.
Long-term resilience addresses the question of how a collective is able to cope with a collapse.
The collective of the \textsc{Gentoo} developers was able to recover, albeit on a longer time scale that is not covered in our data set.
But other software development projects were not able to build up this long-term resilience; they disappeared after a few years.
\section{Discussion} \label{sec:comparison:-what-do}
\subsection{Comparison with existing approaches}
\label{sec:comp-with-exist}
Our analysis clarifies why existing resilience concepts cannot provide a comparable, quantifiable insight into the failure of the developers' collective.
They largely miss the coupling between structure and dynamics, expressed in the nonlinear relation between robustness and adaptivity.
Instead, they treat these dimensions as independent or, more often, only focus on robustness and stability.
Robustness models of networks are a prime example of such lopsided resilience concepts.
They can be classified into different approaches.
One group of models simulates attacks on the network structure by removing links or nodes.
Resilience is then measured as the size of the largest connected component surviving \citep{Kitsak2018, casiraghi2020improving,schweitzer2020intervention}.
Another group of models simulates failure cascades after a shock initiates processes of load redistribution \citep{Burkholz2016b, Garcia2013, Cohen2000a}.
The size of failure cascades serves as a resilience measure.
Such attempts model only the robustness of the networks.
They leave out the ability of the network to respond, i.e. the adaptivity that is stressed in this paper.
While these models consider at least a time dimension for cascades and redistribution, other concepts simply take static topological network measures as proxies for resilience.
For instance, closeness centrality was applied as a resilience measure for transportation systems \citep{Ilbeigi2019} and infrastructure systems \citep{Omer2014}, but also for software developer teams
\citep{Zanetti2013}.
Such measures only capture the structural dimension of resilience.
Adaptivity, which we have identified as the second dimension of resilience, is often discussed only as a synonym for dynamics.
Concepts such as first-passage times explicitly take into account only the time a system needs to return to a previously occupied equilibrium state after a perturbation \citep{Grodzinski1990}.
If a perturbation leads the system to transit to a different equilibrium state, this is known as \emph{robust adaptation}, combining the notions of robustness and adaptation.
But adaptivity is reduced to a simple relaxation dynamic, whereas the volatility of collectives requires modelling a continuous dynamics.
This is often considered as \emph{adaptive capacity}.
It can be expressed in several different ways, for instance, in terms of the ability to learn and store knowledge, the ability to anticipate disruptive events, the level of creativity in problem-solving, or the dynamics of organisational structures~\citep{Folke2002,Smit2006}.
Some of these aspects have been assessed through survey research designs.
Examples are learning capability~\citep{Svetlik2007}, situational awareness, creativity~\citep{Mcmanus2007}, or the fluidity of structures~\citep{Goggins2014}.
The problem in measuring adaptive capacities is usually their operationalisation.
Moreover, a formal relation between adaptivity and robustness, needed to fully understand resilience, is still missing.
The literature further provides examples of more general resilience measures.
\citet{Hosseini2016} distinguishes the following categories:
\begin{description}[noitemsep,nolistsep]
\item (1) \emph{Conceptual frameworks} aim to find qualitative best practice recommendations to ensure a system's resilience.
\item (2) \emph{Semi-quantitative indices} entail surveys asking experts to rate different resilience factors for a system on a scale, e.g. from $0$ to $10$.
\item (3) \emph{General resilience measures} quantify the resilience of a specific class of systems, such as civil infrastructure or transportation systems.
\item (4) \emph{Structural-based modelling approaches} model individual systems with domain-specific resilience factors.
\end{description}
This elaborate classification highlights that neither a universal resilience definition nor a measure working in all scenarios exists \citep{Carpenter2001, Meerow2016, Walker2012}.
Apart from this, the real problem behind most of these approaches is the \emph{quantification} of resilience factors and the efficiency in obtaining information.
We wish for measures that can be automatically and instantaneously calculated based on available data about collectives to monitor resilience continuously.
In contrast, almost every existing resilience measure is based on an \emph{ex-post evaluation}.
This approach may help us to understand why some failures have happened, but it is not sufficient to see them coming.
It is one of the main achievements of our framework that it allows precisely this: (i) quantification, (ii) monitoring, (iii) early warning in case of risky situations.
Moreover, the concepts of robustness and adaptivity underlying our resilience approach also allow a better understanding of the \emph{reasons} for decreasing resilience.
\subsection{A need for further research}
\label{sec:need-furth-rese}
The need for overarching, quantitative and explanatory resilience measures for collectives has been pointed out in the literature for a long time.
\citet{Davidson2010} emphasises that \textit{``the current [resilience theory] is not readily applicable to social systems.''}
She mentions reasons such as the ability of social systems to postpone the effects of disruptions, unequally distributed agency, humans' ability to anticipate risks, complex power relations, or a tendency for complex collective actions in social systems.
\citet{Al-Khudhairy2012} argue similarly, emphasising the ability of collectives to adapt and self-organise as essential building blocks of resilience.
They acknowledge that existing studies are \textit{``still at the very early stages to learn how to design resilient groups and organisations.''}
As we have demonstrated in our analysis, it is not sufficient to import existing measures or factor classifications from other disciplines to fill the research gap about the resilience of collectives.
They must not be uncritically applied to collectives because \textit{``human systems embody power relations and do not involve analogies of being self-regulating or `rational' ''} \citep{Cannon2010}.
Static resilience measures may work for engineering artefacts but not for volatile social systems where change is the new normal.
Hence, studying the resilience of collectives requires developing a \emph{dynamic} approach that reflects the non-equilibrium conditions and the permanent adaptivity.
But there is more to it.
In fact, resilience is a \emph{system-specific} response to a \emph{specific shock}.
That means any approach to understanding resilience has to be contextualised with respect to specific collectives.
Operationalisations for collectives' robustness and adaptivity, thus, have to reflect the available data.
In this paper, we have used the example of a collective with the specific goal of collaboration, where the data allowed us to employ a social network perspective.
This is not always guaranteed.
Our framework of constituting resilience from robustness and adaptivity can rightly claim to provide a new and overarching perspective for collectives.
However, the specific measures for these two dimensions have to be developed with concrete collectives and concrete data in mind.
Ideally, such measures shall reflect the micro-processes generating social resilience.
In turn, this would open the door for mechanism design to improve resilience in collectives.
\subsection*{Acknowledgements}
The authors thank Ingo Scholtes, David Garcia, Antonios Garas and Pavlin Mavrodiev for discussions.
\section{Definitions}
\begin{definition}
A word $w$ is called a \emph{factor} of a word $u$ if there exist words $x$, $y$ such that $u=xwy$.
\end{definition}
\begin{definition}
An abelian-square is a word of length $2n$ where the first $n$ symbols form an anagram of the last $n$ symbols.
\end{definition}
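For concreteness, a brute-force routine (illustrative only, not part of the paper) that enumerates the distinct abelian-square factors of a word:

```python
from collections import Counter

def abelian_square_factors(u):
    """Distinct factors w = xy of u with |x| = |y| and y an anagram of x."""
    found = set()
    for i in range(len(u)):
        for half in range(1, (len(u) - i) // 2 + 1):
            w = u[i:i + 2 * half]
            # w is an abelian square iff its halves share a symbol multiset
            if Counter(w[:half]) == Counter(w[half:]):
                found.add(w)
    return found

assert abelian_square_factors("aabb") == {"aa", "bb"}
assert abelian_square_factors("abab") == {"abab"}
```

For example, $aabb$ contains the abelian squares $aa$ and $bb$, while $abab$ contains the single abelian square $abab$ itself.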
\section{Conjecture}
\begin{lemma}
Let $w$ be a word of length $n$ containing $k$
distinct abelian-square factors, whose last symbol lies in an abelian-square factor. Then a binary word of length $n$
containing at least $k$ distinct abelian-square factors, whose last symbol lies in an abelian-square factor, exists.
\end{lemma}
The binary word will be called a {\itshape binary image} of $w$.
\begin{proof}
By induction on $n$. For $n\le 2$, the claim is clear.\\
Assume that the claim holds for a word $w$ of length $n$ and $w'$ is a binary image of $w$.
Then $wx$, where $x$ equals the last symbol of $w$, has $k$
distinct abelian-square factors and length $n+1$. Moreover, $w'y$, where $y$ equals the last symbol of $w'$,
is a binary image of $wx$. \qed
\end{proof}
\begin{conjecture} \cite{FiciM15}
Assume that a word of length $n$ containing $k$
distinct abelian-square factors exists. Then a binary word of length $n$
containing at least $k$ distinct abelian-square factors exists.
\end{conjecture}
\begin{proof}
By induction on $n$. For $n\le 2$, the claim is clear.\\
Assume that the claim holds for a word $w$ of length $n$.
So, if the last symbol of $w$ is in a factor, then, by Lemma~1, $wx$, where $x$ equals the last symbol of $w$, has a binary image of length $n+1$ with at least $k$ distinct abelian-square factors. If the last symbol is not in a factor, then, also by
Lemma~1, $wx$ has a binary image with at least $k+1$ distinct abelian-square factors.
\qed
\end{proof}
\section*{Acknowledgements}
The author acknowledges, the financial support of this work from the Tunisian General Direction of Scientific Research (DGRST).
\bibliographystyle{splncs04}
Recent significant progress in deep neural networks has led to remarkable success in computer vision tasks\cite{lee1999integrated, lee2003pattern, yang2007reconstruction} such as image classification\cite{he2016deep, kipf2016semi,zoph2018learning,dosovitskiy2020image} and object detection\cite{girshick2015fast,ren2015faster,liu2016ssd,redmon2016you,redmon2017yolo9000,redmon2018yolov3,bochkovskiy2020yolov4, carion2020end, cai2018cascade,vu2021scnet, chen2019hybrid}. However, these improvements are only relevant when a large amount of annotated data is available, which is more difficult in the case of object detection. It takes extra effort and cost to annotate data, as it requires not only identifying the categorical labels for all objects in the image but also providing accurate location information with bounding box coordinates.
Moreover, the ability of a human to grasp novel notions at a few glances is still beyond the capabilities of current models. There has been thus significant interest in identifying unseen objects given very few training examples.
Following the previous philosophy of few-shot image classification\cite{koch2015siamese, li2019large,snell2017prototypical, vinyals2016matching, garcia2017few, gidaris2018dynamic, finn2017model, matsumi2021few}, existing few-shot object detection approaches mainly focus on achieving as discriminative feature
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.95\linewidth]{nAP_plot_turbo.png}
\end{center}
\caption{Few-shot object detection performance (mAP) on the third PASCAL VOC\cite{everingham2010pascal} novel set at different shot numbers. The performance of our approach is superior to other state-of-the-art approaches.}
\end{figure}
embeddings as possible to improve the bounding box classification performance.
The recent state-of-the-art method SRR-FSD\cite{zhu2021semantic} further strengthens the discrimination among the classes with additional class-wise semantic embeddings. CoRPNs+Halluc\cite{zhang2021hallucination} expands the detector with a hallucinator to address the lack of variation in training data and boost the classification of novel classes.
These prior works demonstrate that few-shot object detection can be alleviated by improving the classification performance of the Region-of-Interests (RoIs). Nevertheless, object detection is more challenging to improve relying solely on attaining discriminative feature embeddings, since it is by definition a joint task of classification and localization. The quality of region proposals is also a crucial factor in the overall object detection performance (the Intersection over Union (IoU) score). However, the current design of the fine-tuning based approach, TFA\cite{wang2020frustratingly}, induces a heavily biased IoU distribution of novel samples. In particular, TFA chooses to freeze the feature extractor and the region proposal network (RPN) during the novel fine-tuning stage, in order to avoid deteriorating a feature space well constructed from the sufficient base training samples. Necessarily, the RPN has no choice but to directly reuse the feature representation obtained from the base training phase to predict the objects of novel classes. However, this brings two major drawbacks: (1) The general quality of region proposals in the novel fine-tuning stage is restricted to be lower than the base counterpart. (2) The number of proposals created from the positive anchors in the novel fine-tuning stage is substantially lower than the base counterpart. Our intuition is that this clear gap between the stages is one of the primary factors contributing to the current unsatisfactory performance of few-shot object detection.
To resolve this problem, we present a few-shot object detection approach with proposal balance refinement, aiming at solving the imbalanced IoU distribution and the deficient amount of good quality proposals of novel classes via an auxiliary sequential refinement process.
This simple resampling process provides a new way to increase the number of proposals at different IoU levels.
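To make the IoU computation and the balancing idea concrete, a minimal sketch (illustrative only; not the paper's actual refinement module): compute the IoU of each proposal against a ground-truth box, bucket proposals by IoU, and sample evenly across the non-empty buckets:

```python
import random

def iou(box_a, box_b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def iou_balanced_sample(proposals, gt, n_samples,
                        bins=(0.5, 0.6, 0.7, 0.8, 0.9, 1.01)):
    """Bucket proposals by IoU with gt and draw evenly from each
    non-empty bucket, flattening the IoU distribution (a sketch).
    Proposals below the lowest bin edge are discarded as background."""
    buckets = [[] for _ in range(len(bins) - 1)]
    for p in proposals:
        s = iou(p, gt)
        for k in range(len(bins) - 1):
            if bins[k] <= s < bins[k + 1]:
                buckets[k].append(p)
    nonempty = [b for b in buckets if b]
    per_bucket = max(1, n_samples // max(1, len(nonempty)))
    sampled = []
    for b in nonempty:
        sampled.extend(random.sample(b, min(per_bucket, len(b))))
    return sampled
```

The bin edges and the even per-bucket allocation are assumptions chosen for the example; the point is only that resampling can flatten an otherwise skewed IoU histogram.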
The contributions of this study are three-fold:
\begin{itemize}
\item We carefully analyze and address a fundamental weakness of the fine-tuning based approach: optimizing the detector on novel RoIs with a severely imbalanced IoU distribution, which, to the best of our knowledge, has not been tackled before.
\item To address this problem, we propose a new approach in few-shot domain to enrich the RoI samples to include various IoU scores, and balance out the disproportionate IoU distribution that are highly prevalent under the data-scarce scenarios.
\item Under established experimental settings, we evaluate our model on the standard benchmark datasets, PASCAL VOC and COCO. The results demonstrate that our approach achieves a sizeable improvement over existing methods without bells and whistles.
\end{itemize}
\section{Related Works}
\subsection{Object detection}
Object detection is one of the most fundamental problems in computer vision. There are two main lines in the current systems: two-stage proposal-based approach and one-stage integrated-manner one.
R-CNN series\cite{ren2015faster, he2017mask} has the two-stage architecture of firstly predicting probable candidate regions of the image with multiple anchors per pixel in the RPN and then distinguishing the RoIs by category classification and bounding box localization. Amongst these proposal-based approaches, multi-stage detectors\cite{cai2018cascade,chen2019hybrid,vu2021scnet} show high performing detection and segmentation abilities by resampling the RoIs.
Meanwhile, the one-stage object detectors including YOLO\cite{redmon2016you}, SSD\cite{liu2016ssd} and their variants\cite{redmon2017yolo9000, redmon2018yolov3, bochkovskiy2020yolov4} directly detect objects in the image with a single fully convolutional network. As all these frameworks are designed without the consideration of data-scarce scenarios, it is inappropriate to directly utilize them for recognizing unseen novel class objects.
\subsection{Few-shot Object Detection}
Recently, various meta-learning\cite{yan2019meta, wang2019meta, xiao2020few,li2021beyond} and metric learning\cite{yang2020restoring} based approaches have been proposed to tackle few-shot object detection. Meta R-CNN\cite{yan2019meta} reweights the
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.95\linewidth]{black-bar.png}
\end{center}
\caption{Imbalanced intersection-over-union (IoU) distribution under the few-shot setting. The blue and orange bars denote the average number of proposals from positive anchors of base training and novel fine-tuning, respectively. The yellow bounding boxes denote the ground-truth, while the green bounding boxes are the predicted results yielded by RPN. The percentages indicated above the bars of IoU threshold 0.9 represent the reduced ratio compared to the case of 0.4. As shown in the figure, there is a sizable gap between the two training steps. }
\end{figure}
importance of the attention of each RoI feature with the help of a meta-learner and utilizes the class attentive vectors.
Meta-Det\cite{wang2019meta} focuses on predicting the parameters of category-specific components from few samples.
FsDetView\cite{xiao2020few} presents a joint feature embedding to leverage rich feature information from abundant base class examples and Np-RepMet\cite{yang2020restoring} proposes an inference scheme with negative and positive representatives by restoring negative information. CME\cite{li2021beyond} aims at retaining proper margin space among novel classes.
However, these meta-learning based methods require specific data preparation and have difficulty reaching conventional detection performance. Accordingly, fine-tuning based approaches\cite{wang2020frustratingly, zhang2021hallucination, fan2021generalized, wu2020multi} are gaining increasing recognition in the field. TFA\cite{wang2020frustratingly} introduces a simple two-stage fine-tuning approach where the detector is first trained on all base classes with abundant samples and then fine-tuned only on the few samples of novel classes. Most fine-tuning based works follow the training scheme of TFA and improve upon it from various angles. MPSR\cite{wu2020multi} tackles the problem of scale variation, and Retentive R-CNN\cite{fan2021generalized} debiases the pretrained RPN and addresses the base forgetting issue. FSOD-SR\cite{kim2021spatial} and SRR-FSD\cite{zhu2021semantic} highlight the importance of exploiting contextual information by considering the co-occurrence of objects in visual scenes. CoRPNs+Halluc\cite{zhang2021hallucination} introduces a hallucinator to further augment the feature variation of the few given samples.
As all these transfer-learning based approaches are variants of TFA, our work also takes TFA as its main baseline and aims at resolving the remaining issue. We tackle the highly disproportionate IoU distribution of unseen novel classes via a sequential proposal refinement process.
\newcommand{\ie}{\textit{i}.\textit{e}\textit{.}\textit{,}}% macro name reconstructed; original name lost in extraction
\newcommand{\cmark}{\ding{51}}%
\newcommand{\xmark}{\ding{55}}%
\newcommand{\centered}[1]{\begin{tabular}{l} #1 \end{tabular}}
\makeatletter
\newcommand*{\da@rightarrow}{\mathchar"0\hexnumber@\symAMSa 4B }
\newcommand*{\da@leftarrow}{\mathchar"0\hexnumber@\symAMSa 4C }
\newcommand*{\xdashrightarrow}[2][]{%
\mathrel{%
\mathpalette{\da@xarrow{#1}{#2}{}\da@rightarrow{\,}{}}{}%
}%
}
\newcommand*{\da@xarrow}[7]{%
\sbox0{$\ifx#7\scriptstyle\scriptscriptstyle\else\scriptstyle\fi#5#1#6\m@th$}%
\sbox2{$\ifx#7\scriptstyle\scriptscriptstyle\else\scriptstyle\fi#5#2#6\m@th$}%
\sbox4{$#7\dabar@\m@th$}%
\dimen@=\wd0 %
\ifdim\wd2 >\dimen@
\dimen@=\wd2
\fi
\count@=2 %
\def\da@bars{\dabar@\dabar@}%
\@whiledim\count@\wd4<\dimen@\do{%
\advance\count@\@ne
\expandafter\def\expandafter\da@bars\expandafter{%
\da@bars
\dabar@
}%
\mathrel{#3}%
\mathrel{%
\mathop{\da@bars}\limits
\ifx\\#1\\%
\else
_{\copy0}%
\fi
\ifx\\#2\\%
\else
^{\copy2}%
\fi
}%
\mathrel{#4}%
}
\makeatother
\section{Proposed method}
We first introduce the problem settings of few-shot object detection in subsection A. We then revisit the proposal imbalance issue in existing approaches in B, and present our proposal balance refinement approach in C.
\subsection{Few-Shot Object Detection Setting}
In the standard setup of few-shot object detection from previous works [13, 21, 50, 55], classes are split into two sets: base classes $C_{base}$ and novel classes $C_{novel}$, where $C_{base}\cap C_{novel}=\emptyset$. Accordingly, datasets $D=\{(x,y)\,|\,x\in X, y\in Y\}$ are composed of two types based on these classes, $S_{base}$ and $S_{novel}$, where $x$ is an input image and $y=\{(c_i,b_i)\,|\,i=1,\dots,N\}$. Here, $c_i$ and $b_i$ denote the category and the bounding box coordinates of the $i$-th object out of the $N$ object instances in the image $x$, respectively. The full training procedure follows a two-stage fine-tuning paradigm: the base training stage and the novel fine-tuning stage.
In the first base training stage, the model $Det$ is trained on a large base set $S_{base}$ and becomes a $|C_{base}|$-way detector.
In the novel fine-tuning stage, the model is fine-tuned on a balanced set of $C_{base}\cup C_{novel}$, such that it maintains the detection performance on the pre-trained base classes while learning about the novel classes at the same time. Accordingly, the model becomes a ($|C_{base}| + |C_{novel}|$)-way detector in the second stage, which can be summarized as follows,
\begin{equation}
Det_{init} \xhookrightarrow{{S_{base}}} Det_{base}\xdashrightarrow{{S_{novel}}} Det_{novel}
\end{equation}
where $\hookrightarrow$ and $\dashrightarrow$ represent the conventional training and the fine-tuning process, respectively. The labels above the arrows denote the dataset used for training.
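As a concrete illustration of the data preparation implied by this setting, the following sketch builds the balanced $K$-shot fine-tuning set; the function and variable names are ours, not part of the paper.

```python
import random

def build_finetune_set(dataset, base_classes, novel_classes, k):
    """Sample a balanced K-shot fine-tuning set over C_base ∪ C_novel.

    `dataset` is a list of (image, annotations) pairs, where each
    annotation is a (category, bounding_box) tuple, mirroring the
    y = {(c_i, b_i)} notation of the paper.
    """
    per_class = {c: [] for c in base_classes | novel_classes}
    for image, annotations in dataset:
        for category, box in annotations:
            if category in per_class:
                per_class[category].append((image, category, box))
    # Keep at most K object instances per class, as in the TFA-style
    # balanced fine-tuning protocol.
    return {c: random.sample(objects, min(k, len(objects)))
            for c, objects in per_class.items()}
```

Sampling is per object instance rather than per image, matching the convention that a $K$-shot set contains $K$ annotated objects of each class.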
\subsection{Revisiting Proposal Imbalance}
The primary philosophy of TFA is to implicitly leverage the knowledge of multiple base classes to construct the novel class feature space. Owing to the performance trade-off between novel and base classes, only the last RoI classifier and regressor are further fine-tuned on the data-scarce novel classes. Nevertheless, this gives rise to two negative consequences: inadequate quality and quantity of proposals from the RPN during novel fine-tuning.
One could assume the RPN is a wholly class-agnostic component, as it is merely trained to classify the objectness of the anchors. However, the RPN inevitably becomes tilted towards the base classes, because the anchors of novel class objects are categorized as non-object during base training.
Fig. 2 shows that the number of proposals from positive anchors decreases as the IoU threshold increases in both stages. However, the general quality of region proposals in the novel fine-tuning stage is suppressed to be lower than its base counterpart. In particular, in the novel fine-tuning stage, region proposals of IoU [0.4, 0.6) comprise 72.4\%, whereas their base training counterpart comprises only 49.4\%.
Moreover, in base training, the number of the highest quality proposals of IoU [0.9, 1.0) amounts to 39.2\% of the counterpart of IoU [0.4, 0.5), whereas in the novel fine-tuning stage, the highest-quality proposals are only 8.4\% of the lowest quality ones.
Besides, in terms of quantity, there is a large discrepancy between the two statistics over the whole IoU range. Over the IoU range [0.5, 0.9], the number of proposals in novel fine-tuning is less than 25\% of that in base training. These two properties induce an undesirable consequence that interferes with the improvement of learning capabilities on novel classes: fewer and lower-quality positive anchors become proposals, which deprives the RoI classifier and regressor of learning opportunities. Necessarily, this inhibits the detector from obtaining diverse feature variations of novel classes and induces overfitting to proposals of impaired quality.
Based on the analyses above, we argue that this neglected issue of severe IoU imbalance, together with the extremely deficient number of positive samples of novel classes, should be resolved to boost the unsatisfactory detection performance in the few-shot setting.
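The IoU statistics of the kind discussed above can be reproduced for any set of proposals with a few lines of NumPy; the helper below (our own illustrative code, not part of the proposed method) bins each proposal by its best-matching ground-truth IoU.

```python
import numpy as np

def pairwise_iou(boxes_a, boxes_b):
    """IoU matrix for axis-aligned boxes in (x1, y1, x2, y2) format."""
    a = np.asarray(boxes_a, dtype=float)[:, None, :]   # (Na, 1, 4)
    b = np.asarray(boxes_b, dtype=float)[None, :, :]   # (1, Nb, 4)
    lt = np.maximum(a[..., :2], b[..., :2])            # intersection top-left
    rb = np.minimum(a[..., 2:], b[..., 2:])            # intersection bottom-right
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    area_a = (a[..., 2] - a[..., 0]) * (a[..., 3] - a[..., 1])
    area_b = (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area_a + area_b - inter)

def iou_histogram(proposals, gt_boxes,
                  edges=(0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0)):
    """Bin each proposal by its best ground-truth IoU, as in Fig. 2."""
    best = pairwise_iou(proposals, gt_boxes).max(axis=1)
    counts, _ = np.histogram(best, bins=edges)
    return counts
```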
\begin{figure*}
\begin{center}
\small
\centerline{\includegraphics[width=0.9\textwidth]{final_model.png}}
\end{center}
\caption{Overall architecture of our approach. Green and grey indicate the trainable and frozen layers during the novel fine-tuning phase, respectively. $\alpha_{rpn}$ is the IoU threshold of the RPN and $\alpha_t$ is the IoU threshold of each stage $t$ of the sequential detectors; $\alpha_t$ increases as the stage ascends. The turquoise and orange bounding boxes represent the region proposals of novel classes and base classes. The grey arrow line indicates the flow of the regressed bounding boxes. The improvement of the IoU distribution of RoIs at each stage is illustrated with bar graphs before the RCNN heads. $\gamma_{rpn}$ denotes the loss coefficient of the RPN.}
\end{figure*}
\subsection{Proposal Balance Refinement}
Motivated by the aforementioned findings, we employ an auxiliary proposal balance refinement process during both base training and novel fine-tuning. As shown in Fig. 3, a given image is first processed by the backbone network, FPN and RPN. Subsequently, it is fed to the sequential detectors composed of the RoI Align layer, two consecutive fully connected layers (namely the RCNN head), and the bounding box classifier and regressor. These detectors have three stages with an increasing IoU threshold $\alpha_t$ at each stage $t$, where each following stage takes the output of the previous stage as its input. Predicted bounding boxes from the regressor of each stage necessarily show a better output IoU than the input IoU; for further improvement, the bounding boxes nevertheless go through an additional refinement process. This successive balancing process enables the detector to obtain sufficient positive samples at all IoU levels, which prevents overfitting to the predominant low-quality proposals.
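A minimal sketch of this sequential refinement loop is given below; the `classify` and `regress` callables are hypothetical stand-ins for the per-stage RCNN heads, and the thresholds $\alpha_t$ would only enter via training-time sample selection, which is omitted here.

```python
def sequential_refinement(features, proposals, stages, thresholds):
    """Cascade-style refinement sketch: each stage re-regresses the
    boxes produced by the previous stage, with stages trained at an
    increasing IoU threshold alpha_t.

    `stages` is a list of (classify, regress) callables;
    `thresholds` gives alpha_t for each stage.
    """
    outputs = []
    boxes = proposals
    for (classify, regress), alpha_t in zip(stages, thresholds):
        scores = classify(features, boxes)
        boxes = regress(features, boxes)   # refined boxes feed the next stage
        outputs.append((scores, boxes, alpha_t))
    return outputs
```

Because every stage consumes the already-refined boxes of its predecessor, the IoU distribution of the surviving samples shifts upwards stage by stage, which is exactly the rebalancing effect exploited in the paper.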
In the base training, we fully train the whole network on $C_{base}$. At each stage $t$, the classifier $l_t$ and regressor $g_t$ are optimized for the IoU threshold $\alpha_t$.
\begin{equation}
\mathcal{L}_{rcnn_t}=\mathcal{L}^{cls}_{rcnn}(l_t(X_i),c_i)+\mathcal{L}^{reg}_{rcnn}(g_t(X_i, p_i),b_i)
\end{equation}
where $p_i$ and $X_i$ denote the predicted bounding boxes and the extracted features of the $i$-th object out of the $N$ object instances in the input image $x$, respectively. With $T$ denoting the total number of stages, the regression function and the loss for the base training stage can be formulated as
\begin{equation}
g(X_i,p_i)=g_T\circ g_{T-1}\circ \cdots \circ g_1(X_i,p_i)
\end{equation}
\begin{equation}
\mathcal{L}_{total}=\mathcal{L}_{rpn}^{cls}+\mathcal{L}_{rpn}^{reg}+\sum^T_{t=1}\lambda_t(\mathcal{L}_{rcnn_t}^{cls}+\mathcal{L}_{rcnn_t}^{reg})
\end{equation}
where $\lambda_t$ indicates the loss coefficient of each stage.
For novel fine-tuning, instead of following the previous TFA fine-tuning strategy, we jointly update the RPN with the classifiers and regressors.
We provide opportunities for the RPN to learn the representations of unseen novel classes, and accordingly enlarge the number of RoIs fed to the following classifiers and regressors. To further enrich the training samples for the back end of the network, we double the total number of RoIs kept after non-maximum suppression (NMS). The loss of the RPN during novel fine-tuning can be formulated as below:
\begin{equation}
\mathcal{L}_{rpn}=\gamma_{rpn} (\mathcal{L}_{rpn}^{cls}+\mathcal{L}_{rpn}^{reg})
\end{equation}
Nevertheless, as it is also important to avoid deteriorating the pre-trained RPN, we regulate the RPN loss coefficient $\gamma_{rpn}$ and examine three cases:
(1) $\gamma_{rpn} =0$, which corresponds to not updating the RPN at all, with the gradients during fine-tuning dominated by the last layer of the RCNN as in the original TFA;
(2) $\gamma_{rpn} \in (0,1)$, which corresponds to scaling the gradient of the RPN with a nonzero loss coefficient;
(3) $\gamma_{rpn} =1$, which is equivalent to updating the RPN without considering base forgetting.
We empirically scale the gradients with a nonzero coefficient of 0.5. This scaling is also applied during the recognition of the novel classes. In this manner, the base feature space becomes less corrupted while the scaled gradients assist the construction of the novel feature space.
The whole loss for the second stage is as follows:
\begin{equation}
\mathcal{L}_{total} =\mathcal{L}_{rpn}+\sum^T_{t=1}\lambda_t(\mathcal{L}_{rcnn_t}^{cls}+\mathcal{L}_{rcnn_t}^{reg})
\end{equation}
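The fine-tuning objective can be sketched as follows; the function and argument names are ours, the per-stage losses are assumed to be precomputed scalars, and the default coefficients mirror the values reported in the implementation details ($\gamma_{rpn}=0.5$, $\lambda_t = 1, 0.5, 0.25$).

```python
def finetune_loss(rpn_cls, rpn_reg, rcnn_losses, gamma_rpn=0.5,
                  stage_weights=(1.0, 0.5, 0.25)):
    """Total novel fine-tuning loss (sketch).

    The RPN terms are scaled by gamma_rpn to limit corruption of the
    pre-trained base feature space, while each stage t contributes its
    classification and regression losses weighted by lambda_t.
    `rcnn_losses` is a list of (cls_loss, reg_loss) pairs per stage.
    """
    loss = gamma_rpn * (rpn_cls + rpn_reg)
    for lam, (cls_t, reg_t) in zip(stage_weights, rcnn_losses):
        loss += lam * (cls_t + reg_t)
    return loss
```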
\section{Experiments}
\subsection{Experimental Setting}
We follow the few-shot training protocol from previous works\cite{wang2020frustratingly,zhang2021hallucination,wu2020multi} and assess our framework on the same data splits of PASCAL VOC\cite{everingham2010pascal} and COCO\cite{lin2014microsoft}. PASCAL VOC uses three different random class splits, each composed of 15 base classes and 5 novel classes: novel category splits 1, 2 and 3. Unlike the data-abundant base classes, the novel training set consists of \textit{K} = 1, 2, 3, 5, 10 objects sampled from the combination of the VOC07 and VOC12 train/val sets. For evaluation, we use the VOC07 test set and report both the default PASCAL Challenge protocol mAP50 and the 11-point interpolated AP over the IoU range [0.5, 1.0], such that the framework is evaluated under comprehensive conditions.
\begin{table*}
\centering
\small
\setlength\extrarowheight{0.3pt}
\setlength{\tabcolsep}{5.8pt}
\caption{Few-shot detection performance (AP50) on PASCAL VOC novel classes of the three category splits. Results in \textcolor{red}{red} and \textcolor{blue}{blue} denote the first and second best, respectively. Results in \textbf{bold} denote scores higher than the main baseline.}
\label{tab:main-table}
\begin{tabular}{c|ccccc|ccccc|ccccc}
\toprule
\multirow{2}{*}{Method} & \multicolumn{5}{c|}{Novel Category Set 1} & \multicolumn{5}{c|}{Novel Category Set 2} & \multicolumn{5}{c}{Novel Category Set 3} \\
& shot=1 & 2 & 3 & 5 & 10 & shot=1 & 2 & 3 & 5 & 10 & shot=1 & 2 & 3 & 5 & 10 \\
\midrule\midrule
\rowcolor[rgb]{0.937,0.937,0.937} \textbf{ Ours } & 39.2 & \textbf{\textcolor{blue}{49.2}} & \textbf{\textcolor{red}{56.4}} & \textbf{57.4} & \textbf{\textcolor{red}{61.6}} & \textbf{28.7} & \textbf{31.3} & \textbf{36.9} & \textbf{37.4} & \textbf{44.3} & \textbf{37.4} & \textbf{\textcolor{red}{44.3}} & \textbf{\textcolor{red}{48.6}} & \textbf{\textcolor{red}{51.8}} & \textbf{\textcolor{red}{56.0}} \\
TFA w/cos \cite{wang2020frustratingly} & \textbf{39.8} & 36.1 & 44.7 & \textbf{\textcolor{blue}{55.7}} & 56.0 & 23.5 & 26.9 & 34.1 & 35.1 & 39.1 & 30.8 & 34.8 & 42.8 & 49.5 & 49.8 \\
\midrule
FRCN+ft-full\cite{wang2019meta} & 15.2 & 20.3 & 29.0 & 40.1 & 45.5 & 7.9 & 15.3 & 26.2 & 31.6 & 39.1 & 9.8 & 11.3 & 19.1 & 35.0 & 45.1 \\
Meta-Det\cite{wang2019meta} & 18.9 & 20.6 & 30.2 & 36.8 & 49.6 & 21.8 & 23.1 & 37.8 & 31.7 & 43.0 & 20.6 & 23.9 & 29.4 & 43.9 & 44.1 \\
Meta R-CNN\cite{yan2019meta} & 19.9 & 25.5 & 35.0 & 45.7 & 51.5 & 10.4 & 19.4 & 29.6 & 34.8 & 45.4 & 14.3 & 18.3 & 27.5 & 41.2 & 48.1 \\
TFA w/fc\cite{wang2020frustratingly} & 36.8 & 29.1 & 43.6 & 55.7 & 57.0 & 18.2 & 29.0 & 33.4 & 35.5 & 39.0 & 27.7 & 33.6 & 42.5 & 48.7 & 50.2 \\
MPSR\cite{wu2020multi} & 41.7 & 43.1 & \textbf{\textcolor{blue}{51.4}} & 55.2 & 61.8 & 24.4 & 29.5 & 39.2 & 39.9 & \textbf{\textcolor{blue}{47.8}} & 35.6 & 40.6 & 42.3 & 48.0 & 49.7 \\
RetentiveR-CNN\cite{fan2021generalized} & 42.4 & 45.8 & 45.9 & 53.7 & 56.1 & 21.7 & 27.8 & 35.2 & 37.0 & 40.3 & 30.2 & 37.6 & 43.0 & 49.7 & 50.1 \\
FsDetView\cite{xiao2020few} & 24.2 & 35.3 & 42.2 & 49.1 & 57.4 & 21.6 & 24.6 & 31.9 & 37.0 & 45.7 & 21.2 & 30.0 & 37.2 & 43.8 & 49.6 \\
NP-RepMet\cite{yang2020restoring} & 37.8 & 40.3 & 41.7 & 47.3 & 49.4 & \textbf{\textcolor{red}{41.6}} & \textbf{\textcolor{red}{43.0}} & \textbf{\textcolor{red}{43.4}} & \textbf{\textcolor{red}{47.4}} & \textbf{\textcolor{red}{49.1}} & 33.3 & 38.0 & 39.8 & 41.5 & 44.8 \\
CoRPNs+Halluc\cite{zhang2021hallucination} & \textbf{\textcolor{blue}{47.0}} & 44.9 & 46.5 & 54.7 & 54.7 & 26.3 & 31.8 & 37.4 & 37.4 & 41.2 & \textbf{\textcolor{red}{40.4}} & \textbf{\textcolor{blue}{42.1}} & 43.3 & \textbf{\textcolor{blue}{51.4}} & 49.6 \\
CME\cite{li2021beyond} & 41.5 & 47.5 & 50.4 & \textbf{\textcolor{red}{58.2}} & \textbf{\textcolor{blue}{60.9}} & 27.2 & 30.2 & \textbf{\textcolor{blue}{41.4}} & \textbf{\textcolor{blue}{42.5}} & 46.8 & 34.3 & 39.6 & \textbf{\textcolor{blue}{45.1}} & 48.3 & \textbf{\textcolor{blue}{51.5}} \\
SRR-FSD\cite{zhu2021semantic} & \textbf{\textcolor{red}{47.8}} & \textbf{\textcolor{red}{50.5}} & 51.3 & 55.2 & 56.8 & \textbf{\textcolor{blue}{32.5}} & \textbf{\textcolor{blue}{35.3}} & 39.1 & 40.8 & 43.8 & \textbf{\textcolor{blue}{40.1}} & 41.5 & 44.3 & 46.9 & 46.4 \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table}
\centering
\small
\setlength\extrarowheight{0.3pt}
\setlength{\tabcolsep}{6pt}
\caption{Few-shot detection performance (AP) on PASCAL VOC novel classes of the second base and novel category split. Results in \textbf{bold} denote scores higher than the main baseline. AP is the 11-point interpolated AP computed over the IoU range from 0.5 to 1.0 at 0.05 intervals.}
\label{tab:ap_comparison}
\begin{tabular}{c|ccccc}
\toprule
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Method\end{tabular}} & \multicolumn{5}{c}{Novel AP} \\
& shot=1 & 2 & 3 & 5 & 10 \\
\midrule\midrule
TFA w/cos\cite{wang2020frustratingly} & 23.4 & 21.5 & 27.8 & 34.2 & 34.5 \\
\rowcolor[rgb]{0.937,0.937,0.937} Ours & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.937,0.937,0.937}}c@{}}\textbf{27.8}\\\textbf{(+4.4)}\\\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.937,0.937,0.937}}c@{}}\textbf{33.0}\\\textbf{(+11.5)}\\\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.937,0.937,0.937}}c@{}}\textbf{37.2}\\\textbf{(+9.4)}\\\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.937,0.937,0.937}}c@{}}\textbf{37.5}\\\textbf{(+3.3)}\\\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.937,0.937,0.937}}c@{}}\textbf{41.4}\\\textbf{(+6.9)}\\\end{tabular} \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\centering
\small
\setlength\extrarowheight{0.3pt}
\setlength{\tabcolsep}{5pt}
\caption{The 10-shot cross-domain few-shot detection performance on the COCO base set → PASCAL VOC novel set. We follow the evaluation setting from \cite{wang2019meta,wu2020multi}.}
\label{tab:cross-domain}
\begin{tabular}{c|cccc|c}
\toprule
Method & FRCN-ft & MetaDet & MetaRCNN & MPSR& Ours \\
\midrule\midrule
mAP & 31.2 & 33.9 & 37.4 & 42.3 & {\cellcolor[rgb]{0.937,0.937,0.937}}\textbf{48.9} \\
\bottomrule
\end{tabular}
\end{table}
\begin{table*}
\centering
\small
\setlength\extrarowheight{0.3pt}
\setlength{\tabcolsep}{4.1pt}
\caption{Effectiveness of the proposed ideas in our approach. All experiments are conducted on the second split of the PASCAL VOC dataset and evaluated on the AP50 of novel and base classes, respectively, as well as the 11-point interpolated AP of novel classes.}
\label{tab:ablation}
\begin{tabular}{c|cc|ccccc|ccccc|ccccc}
\toprule
\multirow{2}{*}{Method} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Proposal\\ Refinement\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Fine-tune \\ RPN\end{tabular}} & \multicolumn{5}{c|}{Novel AP50} & \multicolumn{5}{c|}{Base AP50} & \multicolumn{5}{c}{Novel AP} \\
& & & 1 & 2 & 3 & 5 & 10 & 1 & 2 & 3 & 5 & 10 & 1 & 2 & 3 & 5 & 10 \\
\midrule\midrule
TFA w/cos\cite{wang2020frustratingly} & \xmark & \xmark & 23.5 & 26.9 & 34.1 & 35.1 & 39.1 & 79.5 & 77.7 & 78.8 & 78.9 & 78.5 & 11.7 & 14.0 & 17.9 & 19.9 & 21.1 \\
\midrule
\multirow{2}{*}{Ours} & \cmark & \xmark & 25.2 & 29.8 & 34.6 & 33.7 & 42.6 & \textbf{81.1} & 80.3 & \textbf{80.6} & \textbf{81.4} & \textbf{81.0} & 15.0 & 19.3 & 22.7 & 21.7 & 26.5 \\
& \cellcolor[rgb]{0.937,0.937,0.937}\cmark & \cellcolor[rgb]{0.937,0.937,0.937}\cmark & \cellcolor[rgb]{0.937,0.937,0.937}\textbf{28.7} & \cellcolor[rgb]{0.937,0.937,0.937}\textbf{31.3} & \cellcolor[rgb]{0.937,0.937,0.937}\textbf{36.9} & \cellcolor[rgb]{0.937,0.937,0.937}\textbf{37.4} & \cellcolor[rgb]{0.937,0.937,0.937}\textbf{44.3} &
\cellcolor[rgb]{0.937,0.937,0.937}80.3 & \cellcolor[rgb]{0.937,0.937,0.937}\textbf{80.4} &
\cellcolor[rgb]{0.937,0.937,0.937}80.5 &
\cellcolor[rgb]{0.937,0.937,0.937}80.6 &
\cellcolor[rgb]{0.937,0.937,0.937}80.6 &
\cellcolor[rgb]{0.937,0.937,0.937}\textbf{17.3} & \cellcolor[rgb]{0.937,0.937,0.937}\textbf{20.1} & \cellcolor[rgb]{0.937,0.937,0.937}\textbf{23.3} & \cellcolor[rgb]{0.937,0.937,0.937}\textbf{24.2} & \cellcolor[rgb]{0.937,0.937,0.937}\textbf{26.9} \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Existing Baselines}
As our approach focuses on resolving the remaining issue of TFA, the proposed method is compared with the main baseline TFA w/cos\cite{wang2020frustratingly} as well as other approaches\cite{yan2019meta,wang2019meta,xiao2020few,yang2020restoring,zhu2021semantic,wu2020multi,fan2021generalized,zhang2021hallucination,li2021beyond} to verify its effectiveness.
\subsection{Implementation Details}
We use ImageNet\cite{deng2009imagenet} pretrained ResNet-101\cite{he2016deep} with FPN\cite{lin2017feature} as a feature extractor. All models are trained using the SGD optimizer with the batch size of 8, the momentum of 0.9 and the weight decay of 0.0001. The learning rate is set to 0.02 during the base training phase and 0.01 during the few-shot fine-tuning phase.
The IoU thresholds $\alpha_t$ are set to 0.5, 0.6 and 0.7, and the R-CNN loss coefficients $\lambda_t$ are set to 1, 0.5 and 0.25 for the three stages, respectively. The RPN loss coefficient during the fine-tuning stage, $\gamma_{rpn}$, is set to 0.5.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{together_final.png}
\end{center}
\caption{Improvement of the IoU distribution at each stage. Blue and orange denote base training and novel fine-tuning, respectively. The IoU distributions of both stages show similar statistics.}
\end{figure}
\subsection{Benchmarking Results}
We present our main evaluation results on the three PASCAL VOC data splits in Table \Romannum{1}. The proposed framework significantly outperforms the main baseline, and is mostly superior or comparable to other state-of-the-art baselines. In particular, we demonstrate the effectiveness of our approach by achieving an increase in performance of up to 11.7\% over the main baseline and 5\% over the current state of the art.
Based on Table \Romannum{1}, we notice that although the proposed method significantly surpasses the main baseline, the performance gain differs with the selection of classes and the number of shots. This occurs for other baselines as well, and can be explained by the inclusion of low quality samples of specific classes.
For a more detailed comparison, we further assess the detector on the 11-point AP over the range 0.5 to 1.0 as in \cite{Detectron2018}. We compare the performance with the main baseline, as among the baselines the general AP is only reported by TFA. Table \Romannum{2} shows that our model achieves significant improvements in every case, of up to 11.5\%.
Furthermore, we make two other notable observations: (1) Before novel fine-tuning, we achieve 79.1\%, 80.1\% and 80.1\% on the base classes of the three splits respectively, which is lower than the TFA counterparts of 80.8\%, 81.9\% and 82.0\%. Nevertheless, after novel fine-tuning, as shown in Table \Romannum{1}, our approach significantly outperforms TFA. It is clear that addressing the imbalanced IoU distribution is a particularly effective solution for alleviating the restricted performance of few-shot object detection.
(2) The IoU distribution of novel RoIs tilts more strongly towards high quality samples after each resampling stage.
Fig. 4 shows that among the RoIs of IoU $\geq$ 0.4, high quality RoI samples of IoU $\geq$ 0.75 comprise only 3.5\% and 1.9\% at the first stage, but increase to 59.1\% and 34.9\% at the final stage in base training and novel fine-tuning, respectively. The effect of the optimized IoU thresholds is more pronounced in base training; note, however, that this is entirely due to the great abundance of data.
For qualitative comparison, we provide visualized detection results of novel objects for the 5-shot case of the main baseline and our approach in Fig. 5. Our model produces better detection results on four different error cases: mis-localization, mis-classification, missing objects and a complex error case.
\subsection{Results on COCO to VOC}
We conduct the cross-domain few-shot object detection experiment following the previous works\cite{wang2019meta, wu2020multi}. We follow the general experimental setting where $C_{base}$ consists of the 60 classes in COCO\cite{lin2014microsoft} and $C_{novel}$ of the 20 classes in PASCAL VOC that do not overlap with $C_{base}$. $S_{novel}$ is composed of 10-shot objects for each class of $C_{novel}$. As shown in Table \Romannum{3}, our approach achieves the highest performance of 48.9\%, which is more than a 6.5\% AP gain over the previous best. This leap in performance verifies that our approach has a stronger generalization ability when the characteristics of the domains differ.
\begin{figure}
\begin{center}
\small
\includegraphics[width=1\linewidth]{visualcube.png}
\end{center}
\caption{Qualitative comparison between our approach and the main baseline on the PASCAL VOC 5-shot case for four different error cases: mis-localization, mis-classification, missing objects, and a complex case with multiple kinds of error.}
\end{figure}
\begin{table}
\centering
\small
\setlength\extrarowheight{0.3pt}
\setlength{\tabcolsep}{6pt}
\caption{Recall change from fine-tuning the RPN. The assessment is conducted on both the novel classes and the balanced set of base and novel classes.}
\label{tab:recall}
\begin{tabular}{c!{\vrule width \lightrulewidth}l!{\vrule width \lightrulewidth}cc}
\toprule
\multicolumn{2}{c!{\vrule width \lightrulewidth}}{\multirow{2}{*}{Method / Metric}} & \multicolumn{2}{c}{Recall@100} \\
\multicolumn{2}{c!{\vrule width \lightrulewidth}}{} & $C_{novel}$ & $C_{base} \cup C_{novel}$ \\
\midrule\midrule
\multirow{2}{*}{Fine-tune~RPN} & \xmark & 90.8 & 95.6 \\
& \cellcolor[rgb]{0.937,0.937,0.937}\cmark & \cellcolor[rgb]{0.937,0.937,0.937}\textbf{92.2} & \cellcolor[rgb]{0.937,0.937,0.937}\textbf{95.9} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Ablation Study}
To verify the effectiveness of our approach, a comprehensive analysis is conducted on every case of \textit{K} = 1, 2, 3, 5, 10 of the second split of PASCAL VOC.
\subsubsection{Proposal Balance Refinement}
Table \Romannum{4} shows that refining the object candidates seamlessly improves both the AP50 and AP of novel classes compared to the main baseline, by up to 5.2\% and 6.1\%, respectively. We observe that the improvement in AP is more significant than that in AP50, in line with our initial motivation of improving the impaired RoIs of unseen novel classes.
\subsubsection{Fine-tuning RPN}
The effectiveness of fine-tuning the RPN can be seen in Table \Romannum{4} and Table \Romannum{5}. Table \Romannum{4} shows that excluding the RPN update consistently degrades both the AP50 and AP of novel classes, while slightly improving the AP50 of base classes. Comparing the largest improvement on novel classes, 3.7\%, with the degradation on base classes, 0.8\%, it is clear that fine-tuning the RPN is an effective way to improve the novel detector while doing little harm to the previously well-established detection capability. The same conclusion can be drawn from Table \Romannum{5} by comparing the recall of the RPN, where we achieve a 1.4\% gain on the unseen novel classes.
\section{Conclusion}
This paper targets the imbalanced IoU distribution, which has rarely been handled but plays a crucial role as a fundamental basis of few-shot object detection. We present a new approach to balance out the biased IoU distribution via auxiliary refinement: it corrects low-quality proposals with sequential regression refinement, thus increasing the number of positive samples at different IoU levels.
We further revise the fine-tuning strategy and expose the RPN to the novel classes to expand the learning opportunities of the RoI classifier and regressor. Extensive experiments conducted on PASCAL VOC and COCO demonstrate the advantage of our model compared to other state-of-the-art methods.
\bibliographystyle{IEEEtran}
The human brain is very skilled at detecting patterns and recognising
order in a structure, and ordered structures permeate cultural
achievements of human civilisations, be it in the arts, architecture
or music. The ability to detect and describe patterns is also at the
basis of all scientific enquiry; see Mumford \& Desolneux (2010) for
more on pattern theory. It may thus be surprising that a concept as
fundamental as \emph{order} does not have any well-defined precise
meaning, and that it appears to be rather challenging to come up with
a proper definition of what constitutes order in a structure. As a
consequence, there currently is no satisfactory measure to quantify
order in any given spatial structure.
There are two common approaches to tackle this issue. One is to employ
diffraction, which effectively measures two-point correlations in the
structure; see Cowley (1995) for background. For kinematic
diffraction, in the far-field approximation, the diffraction measure
is the Fourier transform of the autocorrelation (or Patterson)
function. Diffraction is the approach taken to characterise
crystalline materials. The current definition of a crystal, which is
based on its diffraction, derives from a definition which first
appeared in the terms of reference of the IUCr Commission on Aperiodic
Crystals, published in the 1991 report of the IUCr Executive Committee
(International Union of Crystallography, 1992, p.~928); in fact, it is
less general than what the commission proposed. The following quotes
the more specific definition given in Authier and Chapuis (2014), and
used in the IUCr Online Dictionary of Crystallography.
\begin{quote}
A material is a crystal if it has \textbf{essentially} a sharp
diffraction pattern. The word \textbf{essentially} means that most
of the intensity of the diffraction is concentrated in relatively
sharp \textbf{Bragg peaks}, besides the always present diffuse
scattering. In all cases, the positions of the diffraction peaks can
be expressed by
\begin{equation}\label{eq:crystal}
\mathbf{H}\,=\sum_{i=1}^{n}h^{}_{i}\,\mathbf{a}_{i}^{*}\qquad
(n\ge 3).
\end{equation}
Here $\mathbf{a}_{i}^{*}$ and $h^{}_{i}$ are the basis vectors of the
reciprocal lattice and integer coefficients respectively and the
number $n$ is the minimum for which the positions of the peaks can be
described with integer coefficient $h^{}_{i}$.
The conventional crystals are a special class, though very large, for
which $n=3$.
\end{quote}
The prominent role of the word \emph{essentially} shows that this is a
humble definition, in the sense that it reflects our limited knowledge
of the structures we may potentially encounter in nature. The
interpretation given in the definition, namely that `essentially' means
that most of the intensity is concentrated in Bragg peaks, implies that
the integrated contribution from the background must be weak compared
to the Bragg diffraction; this criterion is rather arbitrary, as any
Bragg diffraction indicates order. By allowing the integer $n$ to be larger
than the three space dimensions we live in, aperiodic crystals are
included in this definition, and conventional (periodic) crystals have
become a special class (for which $n=3$). Note that $n$ is restricted
to be \emph{finite} here, so this particular form of the definition
excludes pure point diffractive systems with non-finitely generated
Fourier modules (over integer coefficients); the definition stipulates
that Bragg peaks in crystals can be indexed by a finite number of
integer coefficients. Note that the definition originally proposed in
1991 did not include this restriction (International Union of
Crystallography, 1992, p.~928).
Because the inverse problem of diffraction is inherently difficult
(Bombieri \& Taylor, 1986) and, in general, not unique (Patterson,
1944), we do not have a complete characterisation of structures that
show pure point diffraction (which means that the diffraction consists
of Bragg peaks only), even in the idealised case of an ideal, perfect
structure. Neither do we have a good understanding of what structures
with an \emph{essentially} pure point spectrum may look like.
The second approach, which is particularly suited to stochastic
systems, employs the \emph{entropy} of a structure. Entropy takes into
account the number of different local configurations of a system, and
how this number grows with the system size; typically one expects
exponential growth, and any sub-exponential growth corresponds to zero
entropy. Clearly, entropy can distinguish
deterministic from random systems, and looking at different forms of
scaling behaviour makes it possible to differentiate, at least to some
extent, between different degrees of disorder. However, any
deterministic structure has zero entropy (as has any sufficiently
small deviation from it), so entropy is a rather crude measure of
order.
In this article, the current state of knowledge of mathematical
diffraction of structures is summarised and discussed in relation to
our notion of crystalline order. The aim is to give a broad overview
only; for details on calculations and further
background, we refer to recent survey articles (Baake \& Grimm 2011a,
Baake \& Grimm 2012) and to the monograph by Baake \& Grimm (2013), as
well as to references therein. Using a number of explicit example
structures with different types of diffraction spectrum, the range of
possibilities is explored, contributing to the ongoing discussion on
what order means in crystals and beyond (van Enter \& Mi\c{e}kisz
1992, Lifshitz 2003, 2007, 2011).
\section{Mathematical diffraction}
In 2014, we celebrated the International Year of
Crystallography, looking back at a century of rapid
developments in crystallography since von Laue (Friedrich, Knipping \&
von Laue 1912, von Laue 1912) and father and son Bragg (Bragg \& Bragg
1913) first employed X-ray diffraction to analyse the atomic structure
of crystalline materials; see Authier (2013) for a historical
account. In the simplest setting, which is suitable in particular for
X-ray diffraction, it is sufficient to describe the kinematic
scattering of radiation by the sample, and consider the far-field
(Fraunhofer) limit for the outgoing radiation. The calculation of the
diffraction pattern of a given structure then becomes possible by
means of harmonic analysis, while the corresponding inverse problem of
determining a structure from its pattern of diffraction intensities
is, in general, difficult and non-unique, even in this simplified
setting. This section attempts to present a summary of mathematical
diffraction theory, highlighting the ideas and the flavour of the
approach without going into technicalities, while trying to explain
some of the technical terms by means of simple examples and familiar
notions; for mathematical details, the reader is referred to Baake \&
Grimm (2013).
\subsection{What is a measure?}
A mathematically satisfactory approach to describe extended (infinite)
systems, such as ideal crystals, is provided by using \emph{measures}
to describe both the distribution of matter in the scattering medium
and the distribution of scattered radiation intensity in space. In
mathematics, measures are the natural concept to quantify spatial
distributions, and are related to the notion of integration. The
general approach to measures in mathematics is rather technical, but
there is a simpler way to think of measures (which is due to a result
called the Riesz-Markov representation theorem; see Reed \& Simon
(1980) for details). Indeed, it is possible to regard a measure as a
linear functional, which is a linear map that associates a number to
each function from an appropriate space. A familiar example is the
integral of a function, which is the example we start with.
A well-known and widely used measure in mathematics is \emph{Lebesgue
measure}, which is commonly used in integration of functions on the
real numbers $\mathbb{R}$. We denote Lebesgue measure by the letter
$\lambda$. If $f\!: \; x\mapsto f(x)$ is a function on $\mathbb{R}$,
the Lebesgue measure of $f$ is
\[
\lambda(f) \, = \, \int_{\mathbb{R}} f(x)\, \mathrm{d}\lambda(x)
\, = \, \int_{\mathbb{R}} f(x)\, \mathrm{d}x \, ,
\]
where the usual shorthand $\mathrm{d}x$ is used for integration with
respect to Lebesgue measure. So Lebesgue measure associates to a
function $f$ a number, which is its integral.
The Lebesgue measure of a set $A\subset\mathbb{R}$, written as
$\lambda(A)$, is given by
\[
\lambda(A) \, = \, \lambda(1_{A}) \, = \,
\int_{\mathbb{R}} 1_{A}(x)\, \mathrm{d}x \, = \,
\int_{A} \mathrm{d}x \, = \, \mathrm{vol}(A)\, ,
\]
where $1^{}_{A}$ is the characteristic function (or indicator
function) of $A$, which takes the value $1^{}_{A}(x)=1$ for all $x\in
A$, and $1^{}_{A}(x)=0$ otherwise. The Lebesgue measure of a set is
what we call the volume (as in this case we are in one dimension, the
volume is a length); for instance, the Lebesgue measure of the
interval $I=[a,b]$ with $b\ge a$ is $\lambda(I) = \lambda([a,b]) =
b-a$. Lebesgue measure is the unique translation-invariant measure on
$\mathbb{R}$ (meaning that $\lambda(A)=\lambda(A+t)$ for any
translation $t\in\mathbb{R}$) that assigns the volume $1$ to the unit
interval $[0,1]$. Lebesgue measure on $\mathbb{R}$ generalises in the
familiar way to $d$-dimensional space $\mathbb{R}^d$, corresponding to
$d$-dimensional (multiple) integrals. For simplicity, we will mainly
consider the case $d=1$ in what follows.
Another well-known and important measure is the \emph{Dirac measure}
(or point measure) at a point $x\in\mathbb{R}$, denoted by
$\delta_{x}$. It describes a localised structure at a point $x$ in
space, with total measure $1$. This means that, if $f$ is a function
on $\mathbb{R}$, its point measure at $x$ is $\delta_{x}(f)=f(x)$. In
the physics literature, the point measure is often written like a
function $\delta(x)$ (which can be considered a generalised function
or \emph{distribution} obtained as a limit of functions, for instance
of a sequence of Gaussian functions of integral $1$, centred at the
origin, and with a decreasing width, which then become increasingly
sharper), with the suggestive notation
\[
\delta_{x}(f) \, = \, \int_{\mathbb{R}} f(y)\, \delta(x-y)\, \mathrm{d}y
\, = \, f(x)\, .
\]
This notation can be used consistently as long as one remembers that
Dirac's $\delta$ is not a function in the usual sense. As above, one
can also define the point measure of a set $A\subset\mathbb{R}$, which
is $\delta_{x}(A)=\delta_{x}(1_{A})=1_{A}(x)$, so $\delta_{x}(A)=1$ if
$x\in A$ and $0$ otherwise.
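The view of measures as linear functionals can be made concrete in a few lines of code. The following sketch is a numerical illustration only (the function names are ours): it approximates $\lambda(f)$ by a trapezoidal sum and implements $\delta_{x}(f)=f(x)$ directly.

```python
import math

def lebesgue(f, a=-50.0, b=50.0, n=200001):
    """Approximate lambda(f) = integral of f over R by a trapezoidal sum."""
    h = (b - a) / (n - 1)
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n - 1))
    return total * h

def dirac(x):
    """The Dirac measure delta_x as a linear functional: f -> f(x)."""
    return lambda f: f(x)

# normalised Gaussian, with Lebesgue measure (integral) equal to 1
gauss = lambda x: math.exp(-x * x) / math.sqrt(math.pi)

print(lebesgue(gauss))    # close to 1
print(dirac(2.0)(gauss))  # evaluates gauss at x = 2
```

For the normalised Gaussian, the quadrature value is close to $1$, while the Dirac functional simply evaluates the function at the given point.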
\subsection{Dirac combs}
Point measures are often used to describe a set of localised
scatterers in space. Given a set of scatterers located at points in a
subset $\varLambda\subset\mathbb{R}$ (which we usually assume to be a
\emph{Delone set}, which means that it neither contains points that
are arbitrarily close to each other nor holes of arbitrary size), we
can associate a measure
\[
\delta_{\varLambda}\, := \, \sum_{x\in\varLambda} \delta_{x}
\]
which we call the \emph{Dirac comb} (a term coined by de Bruijn
(1986); see also C\'{o}rdoba (1989)) of $\varLambda$. An example of
such a Dirac comb is
$\delta_{\mathbb{Z}}=\sum_{n\in\mathbb{Z}}\delta_{n}$, which is the
uniform Dirac comb on the integer lattice.
By introducing scattering weights $w(x)$ at position $x\in\mathbb{R}$
(which in general can be complex numbers, but we will assume to be
real for the purpose of this exposition), we arrive at a \emph{weighted
Dirac comb}
\begin{equation}\label{eq:wdc}
\omega \, = \,
w\, \delta_{\varLambda}\, = \,
\sum_{x\in\varLambda} w(x)\, \delta_{x}\, ,
\end{equation}
which can serve as a model representing a scattering medium containing
different types of scatterers. Any measure of this type, consisting of
a (weighted) sum of point measures, is called a \emph{pure point
measure} (with respect to Lebesgue measure). It is possible to
include realistic scattering profiles by considering convolutions with
appropriate motifs, so this approach is not as restrictive as it may
seem. A schematic representation of an example, the weighted (periodic)
Dirac comb
\begin{equation}\label{eq:comb}
\omega \, = \, \delta_{\mathbb{Z}}+\tfrac{1}{2}\,\delta_{\mathbb{Z}+\frac{1}{2}} +
\tfrac{1}{4}\,\delta_{\mathbb{Z}+\{\frac{1}{4},\frac{3}{4}\}}
\end{equation}
is shown in Figure~\ref{fig:comb}.
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{comb.eps}}
\caption{Schematic representation of the weighted (periodic) Dirac
comb $\omega$ of Eq.~\eqref{eq:comb}. Point
measures $a\,\delta_{x}$ are represented as columns at position $x$ of height
proportional to their weight $a$.}
\label{fig:comb}
\end{figure}
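For finitely many point masses per period, a weighted Dirac comb can be represented in code simply as a map from positions to weights. The following sketch (illustrative only; exact rational arithmetic avoids rounding issues) builds a finite patch of the comb of Eq.~\eqref{eq:comb}:

```python
from fractions import Fraction as F

def comb_patch(n_cells):
    """Finite patch of delta_Z + (1/2) delta_{Z+1/2} + (1/4) delta_{Z+{1/4,3/4}},
    covering the cells [n, n+1) for -n_cells <= n < n_cells."""
    cell = {F(0): F(1), F(1, 4): F(1, 4), F(1, 2): F(1, 2), F(3, 4): F(1, 4)}
    patch = {}
    for n in range(-n_cells, n_cells):
        for pos, w in cell.items():
            patch[n + pos] = w
    return patch

patch = comb_patch(2)
print(patch[F(0)], patch[F(1, 2)], patch[F(-3, 4)])
```

Each key is a scatterer position and each value its weight, mirroring the heights of the columns in Figure~\ref{fig:comb}.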
Attaching scattering profiles to a Dirac comb is one way to represent
a continuous scattering intensity in space. Of course, there is a more
direct approach if the scattering intensity is described by a
continuous distribution in space. If $\varrho$ is such a continuous
(or, at least, locally integrable) function on $\mathbb{R}$, it
defines a measure $\mu$ on $\mathbb{R}$ via
\[
\mu(f) \, = \, \int_{\mathbb{R}} f(x)\, \mathrm{d}\mu(x)
\, = \, \int_{\mathbb{R}} f(x)\, \varrho(x)\, \mathrm{d}x \, .
\]
In this case $\varrho$ is called the \emph{density} of the measure
$\mu$. Any measure $\mu$ that can be written in this form is called an
\emph{absolutely continuous measure} (with respect to Lebesgue
measure).
The measures we are interested in are those which describe
distributions (of scatterers or scattering intensity) in space, and
one physical restriction we would like to impose is that any finite
region of space can only contain a finite total scattering strength or
finite intensity. The measures that satisfy this property are called
\emph{translation bounded} measures. A Dirac comb
$\delta_{\varLambda}$ of a Delone set $\varLambda\subset\mathbb{R}$ is
always translation bounded, because there can be only finitely many
points in any finite region of space, due to the minimum distance
between points. The same is true for a weighted Dirac comb provided
that the weight function $w(x)$ is bounded. An example of a measure
that is not translation bounded would be a Dirac comb of a point set
with an accumulation point, for instance
$\sum_{n\in\mathbb{Z}\setminus\{0\}}\delta_{1/n}$. For this measure, any
interval containing the origin contains infinitely many point
measures, and thus has infinite measure.
\subsection{Lebesgue decomposition}
A central result in measure theory is Lebesgue's decomposition
theorem. It states that any measure can be written as a sum of three
components in a unique way (with respect to a reference measure, which
in our case will always be Lebesgue measure). If $\mu$ is a measure
on $\mathbb{R}$, the three components are denoted as
\[
\mu \, = \, \mu_{\mathsf{pp}} + \mu_{\mathsf{sc}} + \mu_{\mathsf{ac}}
\]
and are called the \emph{pure point} component $\mu_{\mathsf{pp}}$,
the \emph{singular continuous} component $\mu_{\mathsf{sc}}$ and the
\emph{absolutely continuous} component $\mu_{\mathsf{ac}}$. We have
met typical examples of pure point and absolutely continuous measures
above, so the obvious question is what a singular continuous measure
looks like. As it is defined, it is all that is `left' if you remove
the pure point part (consisting of a sum of weighted point measures)
and the absolutely continuous part (which is described by a locally
integrable density function) --- but this does not really help to gain
an understanding of what such a measure represents. Singular
continuous measures are rather weird objects indeed; they do not give
weight to any single point in space (because otherwise it would have a
pure point component), but are concentrated on sets of vanishing
volume (because otherwise you could describe part of it by a density
function).
\begin{figure}
\centerline{\includegraphics[width=0.8\textwidth]{cantor.eps}}
\caption{Sketch of the classic middle-thirds Cantor set construction
(inset) and the `Devil's staircase' distribution function $F$ of the
corresponding singular continuous measure.}
\label{fig:cantor}
\end{figure}
The best-known example of a singular continuous measure is provided by
the classic middle-thirds Cantor set; see Figure~\ref{fig:cantor}.
Starting from the unit interval, the Cantor set is constructed by
removing the middle third of it, then removing the middle thirds of
the two resulting intervals, and iterating this procedure \emph{ad
infinitum}. The corresponding Cantor measure is constructed by
starting from the Lebesgue measure on the interval, so we have total
measure $1$, and at each step distributing the mass equally onto the
constituent intervals. In the limit, the total measure is thus still
$1$, but there is neither any isolated point that carries a finite
measure (since the measure of each interval in the $n$th step is
$2^{-n}$, so it vanishes in the limit) nor any interval of finite
length that is in the support of the measure (meaning that the measure
does not vanish on it). The measure constructed in this way is thus
singular continuous, and can be described in terms of a distribution
function which is a `Devil's staircase'. This function is constant
almost everywhere, and displays a hierarchy of plateaux in its graph
(see Figure~\ref{fig:cantor}) which reflect the hierarchy of gaps
produced by the excision steps in the Cantor construction. This
function plays the role of the integrated density for the singular
continuous measure.
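The Devil's staircase can also be evaluated numerically. The distribution function $F$ of the Cantor measure satisfies the self-similarity relations $F(x)=\frac{1}{2}F(3x)$ for $x\le\frac{1}{3}$, $F(x)=\frac{1}{2}$ on the removed middle third, and $F(x)=\frac{1}{2}+\frac{1}{2}F(3x-2)$ for $x\ge\frac{2}{3}$, which translate into a short depth-limited recursion (a sketch, not an exact evaluation):

```python
def cantor_F(x, depth=60):
    """Distribution function of the middle-thirds Cantor measure on [0, 1],
    evaluated via its self-similarity relations (error of order 2**-depth)."""
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    if depth == 0:
        return 0.5
    if x < 1 / 3:
        return 0.5 * cantor_F(3 * x, depth - 1)
    if x <= 2 / 3:
        return 0.5  # constant on the first removed middle third
    return 0.5 + 0.5 * cantor_F(3 * x - 2, depth - 1)

print(cantor_F(1 / 9), cantor_F(0.5), cantor_F(2 / 3))
```

The plateaux of Figure~\ref{fig:cantor} appear as the constant branches of the recursion; for instance $F(\frac{1}{9})=\frac{1}{4}$ and $F$ takes the value $\frac{1}{2}$ on the whole middle third.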
Lebesgue's decomposition provides a rigorous way to separate the
diffraction measure of a structure into its pure point (Bragg) part
and its singular and absolutely continuous components. However, using
this as the definition really only makes sense if one works with
infinite systems (because finite systems will always have absolutely
continuous diffraction). This is similar in spirit to the definition
of a phase transition in materials (as a discontinuity in a
thermodynamic potential), which again only applies in the
mathematically rigorous sense to an infinite system (because for
finite systems these potentials are smooth functions). Nevertheless,
these concepts have proved useful for applications to macroscopically
large (albeit finite) systems.
\subsection{Autocorrelation and diffraction}
A key quantity in diffraction theory is the \emph{autocorrelation},
which quantifies the two-point correlation of a structure. In
crystallography, this is often called the \emph{Patterson
function}. If the material is described by a (real) density function
$\varrho$ on $\mathbb{R}$ (so we deal with an absolutely continuous
distribution), the autocorrelation is an absolutely continuous measure
whose density is the familiar convolution
\[
P(x) \, = \, \int_{\mathbb{R}}\varrho(y)\, \varrho(y+x)\, \mathrm{d}y
\, = \, \bigl(\varrho * \widetilde{\varrho}\bigr)(x)\, ,
\]
where $\widetilde{\varrho}$ is the function defined by
$\widetilde{\varrho}(x)=\varrho(-x)$.
In the case that the material is described by a one-dimensional
weighted Dirac comb $\omega$ of the form given in Eq.~\eqref{eq:wdc}
(with real weight function $w(x)$), the autocorrelation is a
pure point measure
\[
\gamma \, = \, \sum_{z\in\varLambda-\varLambda} \eta(z)\, \delta_{z}\, ,
\]
with non-vanishing contributions only at distances $z$ in
the difference set $\varLambda-\varLambda=\{x-y\mid
x,y\in\varLambda\}$ (which you may interpret as the set of interatomic
distances). The point masses for interatomic distances $z$ are
weighted by the \emph{autocorrelation coefficients} $\eta(z)$,
which are given by
\[
\eta(z) \, =\, \lim_{R\to\infty}\frac{1}{2R}
\sum_{\substack{|x|\le R, x\in\varLambda \\ z-x\in\varLambda}} w(x)\,w(z-x)\, ,
\]
provided that these limits exist. Note that $2R$ is the length of the
interval $(-R,R)$, so the autocorrelation coefficient $\eta(z)$ is
precisely the volume-averaged two-point correlation for the
interatomic distance $z$.
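For finite patches, the limit in the definition of $\eta(z)$ can be explored directly. The following sketch (names and bounds are ours) evaluates the finite-$R$ approximation of $\eta(z)$ for the simplest example, the integer comb $\delta_{\mathbb{Z}}$ with unit weights, where one expects $\eta(z)=1$ for every integer $z$:

```python
from fractions import Fraction as F

def eta(z, points, R):
    """Finite-R approximation of the autocorrelation coefficient eta(z):
    (1/2R) * sum over x in [-R, R] of w(x) * w(x + z)."""
    inside = {x: w for x, w in points.items() if abs(x) <= R}
    return sum(w * points.get(x + z, 0) for x, w in inside.items()) / (2 * R)

# patch of the integer comb delta_Z with unit weights, large enough that
# translates by small z stay inside the patch
R = 1000
points = {F(n): F(1) for n in range(-3 * R, 3 * R + 1)}
print(eta(F(0), points, R), eta(F(1), points, R))
```

The finite-size value $\frac{2R+1}{2R}$ converges to the exact coefficient $\eta(z)=1$ as $R\to\infty$, illustrating the volume-averaged limit.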
Using the language of measures, these equations can be neatly summarised
as follows. Given a (translation bounded) real measure $\omega$,
again for simplicity in one dimension, its \emph{autocorrelation
measure} $\gamma$ is defined as
\begin{equation}\label{eq:def-gamma}
\gamma \, = \, \omega \circledast \widetilde{\omega}
\, := \lim_{R\to\infty}
\frac{\;\omega|^{}_{R} \ast \widetilde{\omega|^{}_{R}}\;}
{2R} ,
\end{equation}
provided the limit exists. Here, $\omega|^{}_{R}$ denotes the
restriction of $\omega$ to the interval $(-R,R)$, and
$\widetilde{\mu}$ is defined via $\widetilde{\mu}(g) =
\mu(\widetilde{g})$ with $\widetilde{g}(x)=\overline{g(-x)}$ as
above. Finally, $\ast$ denotes the standard \emph{convolution} of
measures, which is the appropriate generalisation of the convolution
of functions. The autocorrelation of $\omega$ is thus the
volume-averaged convolution $\circledast$ (also called the
\emph{Eberlein convolution}) of $\omega$ with its `flipped-over'
version $\widetilde{\omega}$, and thus picks out the two-point
correlations in the structure described by $\omega$. This approach to
mathematical diffraction was pioneered by Hof (1995).
The \emph{diffraction measure} is then the Fourier transform
$\widehat{\gamma}$ of the autocorrelation, so essentially it provides
a spectral analysis for the two-point correlations in the original
structure. It is a translation bounded, positive measure, which
quantifies how much of the kinematic scattering intensity reaches a
given volume in space. Lebesgue's decomposition
\[
\widehat{\gamma} \; = \;
\widehat{\gamma}^{}_{\mathrm{pp} } +
\widehat{\gamma}^{}_{\mathrm{sc} } +
\widehat{\gamma}^{}_{\mathrm{ac} }
\]
into its pure point part (the Bragg peaks, of which there are at most
countably many), its absolutely continuous part (the diffuse
background scattering, described by a locally integrable density
function) and its singular continuous part (which encompasses anything
that remains) provides a mathematically rigorous definition of the
different types of diffraction. For the definition of a crystal cited
above, it is the pure point part $\widehat{\gamma}^{}_{\mathrm{pp} }$
that matters --- a crystal is a structure where this part represents
the majority of the diffracted intensity (there will always be some
continuous diffraction in practice), though this alone does not
guarantee that the \emph{positions} of Bragg peaks can be indexed by a
finite number of integers. Indeed, there are examples of systems that
are pure point diffractive, meaning that $\widehat{\gamma} =
\widehat{\gamma}^{}_{\mathrm{pp}}$, where this is not the case; we
shall meet an example below.
\section{Periodic crystals}
A conventional, periodic crystal is described as a lattice-periodic
structure, corresponding to an ideal infinite crystal without defects
or surfaces. A periodic crystal is characterised by its periods
(translations that keep the crystal invariant), which form a lattice
$\varGamma$ (because any linear combinations of periods are also
periods), and by the distribution of scatterers in a unit cell
(fundamental domain) of this lattice. Here, a lattice $\varGamma$ in
$d$-dimensional space\footnote{Because the generalisation to higher
dimensions is straightforward, we give the results for the general
case, although you can always think of cases with $d\le 3$; see also
the examples below.} is the set of integer linear combinations of
$d$ linearly independent basis vectors, so it can be written in the
form
\[
\varGamma \, :=\, \left\{\textstyle\sum_{i=1}^{d} a_{i}v_{i}\mid
a_{i}\in\mathbb{Z}\right\},
\]
where $v_{i}\in\mathbb{R}^{d}$, for $1\le i\le d$, are linearly
independent vectors in $\mathbb{R}^{d}$. Familiar examples are the
integer lattice $\mathbb{Z}$ in one dimension, the square lattice
$\mathbb{Z}^{2}$ in two dimensions and the simple (primitive) cubic
lattice $\mathbb{Z}^{3}$ in three dimensions.
If the scattering medium has a (periodic) crystal structure described
by a lattice $\varGamma$, it can always be represented as a measure
\begin{equation}\label{eq:cryst}
\omega\, =\, \mu \ast \delta^{}_{\varGamma}\, ,
\end{equation}
where $\mu$ can be chosen as a finite measure which describes the
decoration of the fundamental domain, while the Dirac comb
$\delta^{}_{\varGamma}$ ensures lattice periodicity.
The corresponding autocorrelation measure is a $\varGamma$-periodic
measure that can be calculated from the appropriate generalisation of
Eq.~\eqref{eq:def-gamma} as
\begin{equation}\label{eq:crystauto}
\gamma \, =\, \mathrm{dens} (\varGamma)\, (\mu \ast
\widetilde{\mu}) \ast \delta^{}_{\varGamma}\, ,
\end{equation}
using $\widetilde{\delta^{}_{\varGamma}} =\delta^{}_{\varGamma}$ and
$\delta^{}_{\varGamma}\circledast\delta^{}_{\varGamma}= \mathrm{dens}
(\varGamma)\,\delta^{}_{\varGamma}$, where $\mathrm{dens}(\varGamma)$
denotes the density (per unit volume) of the lattice $\varGamma$. The
diffraction measure $\widehat{\gamma}$ is then given by\footnote{This
follows from Eq.~\eqref{eq:crystauto} by an application of
Poisson's famous summation formula, which can be cast as
$\widehat{\delta^{}_{\varGamma}} =
\mathrm{dens} (\varGamma) \, \delta^{}_{\varGamma^{*}}$, where
$\varGamma^{*}$ denotes the dual (or reciprocal) lattice of
$\varGamma$; see Baake \& Grimm (2013) for details.}
\begin{equation}\label{eq:crystdiff}
\widehat{\gamma} \, = \, \bigl( \mathrm{dens} (\varGamma) \bigr)^{2}
\, \big| \widehat{\mu} \big|^{2} \, \delta^{}_{\varGamma^{*}} \, .
\end{equation}
This provides the familiar result for periodic crystals: Any perfect
periodic crystal with lattice of periods $\varGamma$ shows pure point
diffraction with Bragg peaks located on the reciprocal
lattice\footnote{Note that we prefer to define the Fourier transform
of a function $f$ as $\widehat{f}(k)=\int_{\mathbb{R}}
\mathrm{e}^{2\pi\mathrm{i}kx}\,f(x)\,\mathrm{d}x$. Due to the
factor $2\pi$ in the exponent, one avoids the appearance of such
factors in the definition of the reciprocal lattice.}
$\varGamma^{*}$, and the intensities of the Bragg peaks are determined by
the density of the crystal lattice $\varGamma$ and by the continuous
function $\big| \widehat{\mu} \big|^{2}$, which depends on the
distribution of scatterers in a fundamental domain of $\varGamma$. By
expressing the reciprocal lattice positions as linear combinations of
basis vectors of the reciprocal lattice $\varGamma^{*}$, this can be
cast in the form of Eq.~\eqref{eq:crystal} with $n=d$.
As a one-dimensional example, consider the weighted Dirac comb
$\omega$ of Eq.~\eqref{eq:comb} and Figure~\ref{fig:comb}. It can be
written as
\[
\omega \, = \, \left(\delta_{0}+\tfrac{1}{4}\,\delta_{\frac{1}{4}}+
\tfrac{1}{2}\,\delta_{\frac{1}{2}}+\tfrac{1}{4}\,\delta_{\frac{3}{4}}\right) *
\delta_{\mathbb{Z}}\, ,
\]
so here $\mu = \delta_{0}+\frac{1}{4}\delta_{\frac{1}{4}}+\frac{1}{2}
\delta_{\frac{1}{2}}+\frac{1}{4}\delta_{\frac{3}{4}}$ describes the
four scatterers within the fundamental domain $[0,1)$ of the lattice
$\varGamma=\mathbb{Z}$. Note that in this case we have
\[
\widetilde{\omega} \, = \,
\left(\delta_{0}+\tfrac{1}{4}\,\delta_{-\frac{1}{4}}+\tfrac{1}{2}\,
\delta_{-\frac{1}{2}}+\tfrac{1}{4}\,\delta_{-\frac{3}{4}}\right) *
\delta_{\mathbb{Z}}\, = \, \omega\, ,
\]
due to the equivalence of positions differing by integers in the
periodic lattice and the symmetric distributions of scatterers in the
fundamental domain. Let us now calculate the autocorrelation and
diffraction of this comb.
Clearly, since all distances are multiples of $\frac{1}{4}$, the
autocorrelation in this case will have a similar form as the comb
$\omega$ itself, just with different coefficients, which are given by
summing up the products of the weights of scatterers with a given
separation. To obtain these coefficients, one can compute the
convolution of $\omega$ with itself (or equivalently of $\mu$ with
itself) term by term, using the relation
$\delta_{x}*\delta_{y}=\delta_{x+y}$. This gives
\[
\gamma \, = \,
\left(\tfrac{11}{8}\,\delta_{0}+\tfrac{3}{4}\,\delta_{\frac{1}{4}}+
\tfrac{9}{8}\,\delta_{\frac{1}{2}}+\tfrac{3}{4}\,\delta_{\frac{3}{4}}\right) *
\delta_{\mathbb{Z}}\, .
\]
For instance, the coefficient $\frac{11}{8}=
1^2+(\frac{1}{4})^2+(\frac{1}{2})^2+(\frac{1}{4})^2$ of $\delta_{0}$
comes from adding up the contributions from all pairs of scatterers
separated by integer distances. A
schematic presentation of the autocorrelation $\gamma$ is shown in
Figure~\ref{fig:autodiff}.
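The term-by-term convolution described above is easily checked mechanically. The following sketch (illustrative only) convolves the period weights of $\mu$ with their reflected counterparts, collecting coefficients modulo $1$; exact rational arithmetic reproduces the values $\frac{11}{8}$, $\frac{3}{4}$, $\frac{9}{8}$ and $\frac{3}{4}$:

```python
from fractions import Fraction as F

# weights of mu on the fundamental domain [0, 1)
mu = {F(0): F(1), F(1, 4): F(1, 4), F(1, 2): F(1, 2), F(3, 4): F(1, 4)}

# autocorrelation coefficients per period: for each pair of scatterers,
# add w(x) * w(y) to the coefficient of the separation z = y - x (mod 1)
eta = {}
for x, wx in mu.items():
    for y, wy in mu.items():
        z = (y - x) % 1
        eta[z] = eta.get(z, F(0)) + wx * wy

print(sorted(eta.items()))
```

Reducing the separations modulo $1$ uses the $\mathbb{Z}$-periodicity of the comb, so the four coefficients are exactly those appearing in the displayed formula for $\gamma$.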
The corresponding diffraction measure $\widehat{\gamma}$
is obtained by taking the Fourier transform,
using that $\widehat{a\delta_{x}}=a\mathrm{e}^{2\pi\mathrm{i}kx}$. This gives
\[
\begin{split}
\widehat{\gamma} \, &= \,
\left(\tfrac{11}{8}+\tfrac{9}{8}(-1)^{k}+
\tfrac{3}{2}\cos(\tfrac{\pi k}{2})\right)\,
\delta_{\mathbb{Z}}\\
&=\, 4\,\delta_{4\mathbb{Z}} + \delta_{4\mathbb{Z}+2} +
\tfrac{1}{4}\,\delta_{4\mathbb{Z}+\{1,3\}}\, .
\end{split}
\]
The diffraction pattern is thus periodic, but with period $4$ (due to
the smallest distance between scatterers being $\frac{1}{4}$). A
schematic picture of the diffraction pattern is shown in
Figure~\ref{fig:autodiff}.
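The stated intensities can be confirmed by evaluating the exponential sum $\sum_{z}\eta(z)\,\mathrm{e}^{2\pi\mathrm{i}kz}$ over one period of the autocorrelation at integer $k$. A brief numerical sketch:

```python
import cmath

# autocorrelation coefficients per period, as computed above
eta = {0.0: 11 / 8, 0.25: 3 / 4, 0.5: 9 / 8, 0.75: 3 / 4}

def intensity(k):
    """I(k) = sum over z of eta(z) * exp(2 pi i k z); real for this comb."""
    return sum(w * cmath.exp(2j * cmath.pi * k * z) for z, w in eta.items()).real

print([round(intensity(k), 10) for k in range(4)])
```

At $k\equiv 0,1,2,3 \pmod 4$ this yields $4$, $\frac{1}{4}$, $1$ and $\frac{1}{4}$, matching the weights $4\,\delta_{4\mathbb{Z}} + \delta_{4\mathbb{Z}+2} + \frac{1}{4}\,\delta_{4\mathbb{Z}+\{1,3\}}$ of the diffraction measure.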
\begin{figure}
\centerline{\includegraphics[width=0.6\textwidth]{autodiff.eps}}
\caption{Schematic representation of the autocorrelation (top) and
diffraction (bottom) of the weighted Dirac
comb $\omega$ of Eq.~\eqref{eq:comb} and Figure~\ref{fig:comb}.}
\label{fig:autodiff}
\end{figure}
As a second example, consider a two-dimensional crystal with lattice
of periods $\varGamma=\mathbb{Z}^{2}$, with two scatterers (of unit
scattering strength) per unit cell, one placed at lattice points and
the other at an arbitrary position $(a,b)\in [0,1)^2$. The
corresponding point set is $\varLambda=\mathbb{Z}^{2}\cup
\bigl(\mathbb{Z}^2+(a,b)\bigr)$, and the Dirac comb can be written
as $\omega=\varrho * \delta_{\mathbb{Z}^2} =
(\delta_{(0,0)}+\delta_{(a,b)}) * \delta_{\mathbb{Z}^2}$. Its
autocorrelation is
\[
\begin{split}
\gamma & \, =\,
\bigl(\varrho * \widetilde{\varrho}\bigr)*\delta^{}_{\mathbb{Z}^2}\\
& \, = \,
(\delta_{(0,0)}+\delta_{(a,b)}) *
(\delta_{(0,0)}+\delta_{-(a,b)}) * \delta^{}_{\mathbb{Z}^2}\\
& \, = \, \bigl(2\,\delta_{(0,0)} + \delta_{(a,b)}+\delta_{-(a,b)}\bigr)
* \delta^{}_{\mathbb{Z}^2} .
\end{split}
\]
The corresponding diffraction measure is then
\[
\begin{split}
\widehat{\gamma} &\, =\, \lvert\widehat{\varrho}\,\rvert^{2}(k,\ell)
\,\delta^{}_{\mathbb{Z}^{2}} \\
&\, = \,
\bigl(2 + 2\,\mathrm{Re}(\mathrm{e}^{-2\pi \mathrm{i} (ka+\ell b)})\bigr)\,
\delta^{}_{\mathbb{Z}^2}\\
&\, = \, \left(2+2\cos\bigl(2 \pi (k a + \ell b)\bigr)\right)\,
\delta^{}_{\mathbb{Z}^2}\\
&\, = \, \left(2\cos\bigl(\pi (k a + \ell b)\bigr)\right)^{2}\,
\delta^{}_{\mathbb{Z}^2}
\end{split}
\]
for $k,\ell\in\mathbb{Z}$. Note that, while the diffraction measure is
supported on $\mathbb{Z}^{2}$ as expected (as $\mathbb{Z}^{2}$ is
self-dual), it need not have any non-trivial period. In fact,
$\widehat{\gamma}$ is periodic in one or two directions precisely if
one or both coordinates $a$ and $b$ are rational, respectively; an
example with one periodic direction is shown in Figure~\ref{fig:twod}.
Although the positions of Bragg spots for a lattice-periodic structure
are again lattice-periodic, in general the intensities will not
respect the periodicity of the dual lattice.
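Both the trigonometric identity $2+2\cos(2\pi\theta)=\bigl(2\cos(\pi\theta)\bigr)^{2}$ used in the last step and the claimed periodicity of the intensities for a rational coordinate are quickly verified numerically; the following sketch uses the positions $a=\frac{1}{3}$, $b=\frac{1}{\sqrt{3}}$ of Figure~\ref{fig:twod}:

```python
import math

def intensity(k, l, a, b):
    """Diffraction intensity of the two-scatterer motif at Bragg peak (k, l)."""
    return (2 * math.cos(math.pi * (k * a + l * b))) ** 2

a, b = 1 / 3, 1 / math.sqrt(3)
# a = 1/3 is rational, so the intensities repeat with period 3 horizontally
print(intensity(1, 2, a, b), intensity(4, 2, a, b))

# the identity behind the last step of the derivation
t = 0.3
print(2 + 2 * math.cos(2 * math.pi * t), (2 * math.cos(math.pi * t)) ** 2)
```

For the irrational coordinate $b$ no analogous vertical period exists, which is the incommensurate modulation visible in the right panel of Figure~\ref{fig:twod}.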
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{twod.eps}}
\caption{The left panel shows a schematic representation of the
two-dimensional toy crystal discussed in the text, with two
scatterers of equal strength at positions $(0,0)$ and
$(a,b)=(\frac{1}{3},\frac{1}{\sqrt{3}})$ of the fundamental domain
(indicated by shading). The corresponding autocorrelation $\gamma$
is shown in the central panel, while the corresponding diffraction
measure $\widehat{\gamma}$ is displayed in the right panel. Here, a
point measure is represented by a dot that is centred at the
position of the peak and has an area proportional to the weight of
the point measure. The irrational position in the vertical direction
leads to an incommensurate modulation of the peak intensities in
this direction, while the diffraction is periodic with period $3$ in
the horizontal direction.}
\label{fig:twod}
\end{figure}
\section{Aperiodic crystals}
The arguably best understood class of aperiodic structures are cut and
project sets, also called \emph{model sets}. Model sets can be viewed
as a natural generalisation of the notion of quasiperiodic functions,
which goes back to Harald Bohr (1947), and were first introduced by Yves Meyer
(1972) in the context of harmonic analysis. The basic idea of the
construction is to obtain an aperiodic structure as a suitable `slice'
of a higher-dimensional periodic lattice, which is then projected onto
a suitable space of the desired dimension. For simplicity, we mainly
consider the case where the higher-dimensional space is a Euclidean
space of the form $\mathbb{R}^{d+m}$, with $\mathbb{R}^{d}$ being the
physical space (sometimes also called the direct or the parallel
space) that hosts the aperiodic structure (so $1\le d\le 3$ for
physically relevant cases), and $\mathbb{R}^{m}$ the internal (or
perpendicular) space which is used in the construction.
Let us start with an example, where $d=m=1$. In this case, we are
projecting a one-dimensional aperiodic structure from a
two-dimensional (periodic) lattice. In this example, the lattice
$\mathcal{L}$ is given by all integer linear combinations of two basis
vectors, which we choose as $(1,1)$ and $(\tau,1-\tau)$, where
$\tau=(1+\sqrt{5})/2$ is the golden ratio, so we have
\[
\mathcal{L} \, = \,
\mathbb{Z}\, (1,1) + \mathbb{Z}\, (\tau,1-\tau) \, = \,
\bigl\{ m \, (1,1)+n\, (\tau,1-\tau) \;\big| \;
m,n\in\mathbb{Z}\bigr\} \, .
\]
The lattice points are shown as black dots in
Figure~\ref{fig:fiboproj}. The lattice is oriented such that the
horizontal space along the $(1,0)$ direction is the physical space and
the vertical direction along $(0,1)$ corresponds to the internal
space. We call the projection to the physical space $\pi$, and the
projection to the internal space $\pi_{\mathrm{int}}$. The projections
of all lattice points $L=\pi(\mathcal{L})$ to physical space and
$L^{\star}=\pi_{\mathrm{int}}(\mathcal{L})$ to internal
space\footnote{Note the difference between the star symbol $\star$
used here and the $*$ used for the dual or reciprocal lattice.} are
both dense in their corresponding one-dimensional spaces. The set
$L$ is explicitly given by $L=\mathbb{Z}[\tau]=\{m+n\tau\mid
m,n\in\mathbb{Z}\}$, so all integer combinations of multiples of $1$
and $\tau$, which is dense because $\tau$ is irrational, and $L^{\star}$
has the same form. Note that the projections are one-to-one in both
directions. In particular, any point in $L$ corresponds to a uniquely
defined point in $\mathcal{L}$. In fact, $\pi^{-1}(x)=(x,x^{\star})$,
where the $\star$-map is defined by mapping $1$ to $1$ and $\tau$ to
$1-\tau$ (which corresponds to the `algebraic conjugation'
$\sqrt{5}\mapsto -\sqrt{5}$), so $(m+n\tau)^{\star}=m+n(1-\tau)=m+n-n\tau$
for all $m,n\in\mathbb{Z}$.
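Since the $\star$-map acts on the integer coordinates $(m,n)$ of $x=m+n\tau$, it is a finite computation; a minimal sketch (the representation of $\mathbb{Z}[\tau]$ by integer pairs is our choice):

```python
# represent x = m + n*tau in Z[tau] by the integer pair (m, n)
def star(m, n):
    """Algebraic conjugation: (m + n*tau)^star = m + n - n*tau."""
    return (m + n, -n)

print(star(0, 1))  # tau^star = 1 - tau, i.e. the pair (1, -1)
print(star(1, 0))  # 1^star = 1, i.e. the pair (1, 0)
# applying the star-map twice returns the original element
m, n = 3, -2
print(star(*star(m, n)) == (m, n))
```

That the map squares to the identity reflects the fact that it implements the algebraic conjugation $\sqrt{5}\mapsto-\sqrt{5}$ of the underlying quadratic field.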
The final ingredient that we require is a `window' $W$ in the internal
space, which we choose to be the half-open interval $W=(-1,\tau-1]$.
Shifting it along the physical space sweeps out the shaded horizontal
strip in Figure~\ref{fig:fiboproj}. The lattice points that fall
within this strip produce the set $\{x\in\mathcal{L}\mid
\pi^{}_{\mathrm{int}}(x) \in W\}$, and their projection onto the
physical space thus $\varLambda=\{\pi(x)\mid x\in\mathcal{L}\text{ and
} \pi^{}_{\mathrm{int}}(x) \in W\}$. Using $\pi(\mathcal{L})=L$ and
the $\star$-map, this point set can equivalently be written as
\begin{equation}\label{eq:fiboms}
\varLambda \, = \, \{ x\in L \mid x^{\star}\in W\}\, .
\end{equation}
Sets of this form are called cut and project sets or model sets.
The condition that $x^{\star}\in W$ selects a discrete subset of the
dense point set $L$, in fact a very special discrete subset, where
points are separated either by intervals of length $1$ (for short
intervals $s$) or by intervals of length $\tau$ (for long intervals
$\ell$). As it turns out, this projection yields the famous Fibonacci
sequence $\dots \ell s \ell\ell s \ell s \ell\dots $ of long ($\ell$)
and short ($s$) intervals, which can be generated by the two-letter
substitution rule $\ell\mapsto \ell s$, $s\mapsto \ell$. In
particular, dividing the window into two parts as follows
\[
W_{s} \, = \, (-1,\tau-2]\quad\text{and}\quad
W_{\ell} \, = \, (\tau-2,\tau-1]
\]
shows that the sets of left endpoints of short or long intervals are
given by the projection of lattice points that fall within the
corresponding sub-strip, so
\[
\varLambda_{s} \, = \, \{ x\in L \mid x^{\star}\in W_{s}\}\quad\text{and}\quad
\varLambda_{\ell} \, = \, \{ x\in L \mid x^{\star}\in W_{\ell}\}\, ,
\]
with $\varLambda=\varLambda_{s}\cup\varLambda_{\ell}$. Hence the set of
left endpoints of short or of long intervals separately are model sets
with windows $W_{s}$ and $W_{\ell}$, respectively, while the set of all
left interval endpoints is a model set with window
$W=W_{s}\cup W_{\ell}$; compare Figure~\ref{fig:fiboproj}.
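To make the construction concrete, here is a minimal Python sketch (not part of the original construction; the search box and patch size are ad hoc choices) that generates a patch of $\varLambda$ via the star map and window condition, and confirms that only the two gap lengths $1$ and $\tau$ occur, with neither $ss$ nor $\ell\ell\ell$ appearing, as required for a Fibonacci word:

```python
from math import sqrt

tau = (1 + sqrt(5)) / 2  # golden ratio

# Cut and project: keep x = m + n*tau whose star image
# x* = m + n*(1 - tau) falls in the window W = (-1, tau - 1]
pts = sorted(m + n * tau
             for m in range(-80, 81)
             for n in range(-50, 51)
             if -1 < m + n * (1 - tau) <= tau - 1)

# Restrict to a central patch, so that no neighbours are lost
# at the boundary of the finite (m, n) search box
pts = [x for x in pts if abs(x) <= 40]

gaps = [b - a for a, b in zip(pts, pts[1:])]
word = ''.join('s' if abs(g - 1) < 1e-9 else 'l' for g in gaps)

print(sorted({round(g, 6) for g in gaps}))  # → [1.0, 1.618034]
print('ss' in word or 'lll' in word)        # → False
```

The long intervals are the more frequent ones, with relative frequency $\tau-1\approx 0.618$, in line with the relative window sizes.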
\begin{figure}
\centerline{\includegraphics[width=0.9\textwidth]{fiboproj.eps}}
\caption{Schematic representation of a natural projection approach
for the Fibonacci chain from the planar lattice spanned by the
vectors $(1,1)$ and $(\tau,1-\tau)$.}
\label{fig:fiboproj}
\end{figure}
As mentioned above, this construction can be generalised to physical
and internal spaces of any dimension. The general \emph{cut and
project scheme} (CPS) for Euclidean model sets can be summarised in
the following diagram
\begin{equation}\label{eq:cps}
\renewcommand{\arraystretch}{1.2}\begin{array}{r@{}ccccc@{}l}
& \mathbb{R}^{d} & \xleftarrow{\,\;\;\pi\;\;\,}
& \mathbb{R}^{d} \times \, \mathbb{R}^{m}\! &
\xrightarrow{\;\pi^{}_{\mathrm{int}\;}} & \mathbb{R}^{m} & \\
& \cup & & \cup & & \cup & \hspace*{-2ex}
\raisebox{1pt}{\text{\footnotesize dense}} \\
& \pi(\mathcal{L}) & \xleftarrow{\; 1-1 \;} & \mathcal{L} &
\xrightarrow{\; \hphantom{1-1} \;} &
\pi^{}_{\mathrm{int}}(\mathcal{L}) & \\
& \| & & & & \| & \\
& L & \multicolumn{3}{c}{\xrightarrow{\qquad\quad\quad \;\;
\;\star\; \;\; \quad\quad\qquad}}
& {L_{}}^{\star} & \\
\end{array}\renewcommand{\arraystretch}{1}
\end{equation}
Here, $\mathcal{L}\subset \mathbb{R}^{d+m}$ is a lattice in the
$(d+m)$-dimensional space
$\mathbb{R}^{d}\times\mathbb{R}^{m}=\mathbb{R}^{d+m}$, and $\pi$ and
$\pi^{}_{\mathrm{int}}$ denote the natural projections from this space
onto the physical and internal spaces $\mathbb{R}^{d}$ and
$\mathbb{R}^{m}$, respectively. We assume that the point set
$L=\pi(\mathcal{L})\subset\mathbb{R}^{d}$, which is the projection of
the lattice points into the physical space, is a bijective image of
$\mathcal{L}$, which means that no two lattice points in $\mathcal{L}$
project onto the same point in $L$. In other words, each point in $L$
can be `lifted' to a unique lattice point in $\mathcal{L}$, and the
inverse map $\pi^{-1}$ is well-defined on all elements of $L$. This
ensures that the star-map $\star\!:\; x\mapsto x^{\star}$ is
well-defined on $L$, so each point in $L$ has a unique associate in
internal space; see Moody (2000) for details. Finally, we assume
that the corresponding set
$L^{\star}=\pi^{}_{\mathrm{int}}(\mathcal{L})\subset\mathbb{R}^{m}$ in
internal space is dense.
Given a CPS, the second ingredient in the definition of a cut and
project set is the \emph{window} (sometimes also called an acceptance
domain) $W\subset\mathbb{R}^{m}$, which is assumed to be a
sufficiently nice subset of $\mathbb{R}^{m}$ (technically, a
relatively compact subset with non-empty interior). A cut and project
set is then defined by selecting all points $x$ in the projected
lattice $L$ whose companion $x^{\star}$ in internal space falls inside
the window $W$. Expressed as an equation, this means that any set of
the form
\begin{equation}\label{eq:ms}
\varLambda \, = \,
\bigl\{ x\in L \mid x^{\star} \in W \bigr\} ,
\end{equation}
or indeed any translate of such a set, is what we call a \emph{model
set}. The technical conditions on the window $W$ ensure that
$\varLambda$ is a Meyer set (Meyer 1972, Moody 2000), which means that
the difference set
\[
\varLambda-\varLambda\, :=\, \{x-y\mid x,y\in\varLambda\}
\]
is uniformly discrete (so different distances between points in the
structure differ by at least a fixed amount) and that the set
$\varLambda$ is relatively dense (which means that there are no
arbitrarily large `holes' in the point set). If the boundary $\partial
W$ of the window $W$ is nice in the sense that it has zero volume (in
the sense of Lebesgue measure), we refer to $\varLambda$ as a
\emph{regular model set}. The setting of Eq.~\eqref{eq:cps} can be
generalised further to allow for the internal space to be a locally
compact Abelian group (Meyer 1972, Moody 2000, Schlottmann 2000).
It is worth mentioning that there are various equivalent ways of
interpreting the cut and project construction. One commonly used
approach attaches an inverted copy of the window as a `target' to each
lattice point, and projected points are then obtained as the
intersection of these targets with the physical space; see
Figure~\ref{fig:fibotarget} for an illustration of the Fibonacci case.
Albeit equivalent, this description offers a simpler way of
interpreting experimental data, and is therefore the preferred
presentation of the cut and project approach in experimental research
papers, where it is often referred to as the \emph{section
method}. Apart from providing an intuitive meaning for the targets
as `atomic surfaces', this approach allows for additional variation
(by deformations of the targets) that can be exploited, for instance
in the description of modulated phases. For further variants of the
cut and project method, the reader is referred to Chapter~7.5 in Baake
\& Grimm (2013).
\begin{figure}
\centerline{\includegraphics[width=0.9\textwidth]{fibotarget.eps}}
\caption{Equivalent description of the Fibonacci chain in terms of
`targets', often referred to as `occupation domains' or `atomic surfaces'.}
\label{fig:fibotarget}
\end{figure}
Familiar examples of model sets are some one-dimensional
substitution-based structures such as the Fibonacci chain discussed
above, and some of its generalisations. Planar examples include the
Penrose tiling and the T\"{u}bingen triangle tiling with decagonal
symmetry, the Ammann--Beenker tiling with octagonal symmetry and the
shield tiling with dodecagonal symmetry, which can all be obtained by
projection from four-dimensional lattices. Structure models of
icosahedral quasicrystals usually employ model sets based on the
(primitive, face-centred or body-centred) hypercubic lattice in six
dimensions. These examples have direct application to the
crystallography of quasicrystals, and serve as models for the
structure of decagonal, octagonal, dodecagonal and icosahedral
quasicrystals, respectively; compare Steurer \& Deloudi (2009) for
details. Realisations of model sets with other symmetries, such as
planar sevenfold symmetry for instance, have as yet not been observed
in nature, and the same is true for model sets where the internal
space is not Euclidean. Nevertheless, such systems share many
properties with the familiar quasicrystalline cases, and should not be
excluded \emph{a priori}. Even if it may prove impossible to realise
such structures in self-assembled systems, we may be able to produce
these in purpose-made manufactured structures at various length
scales, from macroscopic to nanometre and atomic scales.
Arguably the most important result in the theory of model sets, in the
context of mathematical diffraction theory, is the proof that regular
model sets have \emph{pure point diffraction}. Three different
approaches have been used to prove this statement. The first proof
using methods of dynamical systems theory was completed by Schlottmann
(2000), employing an argument by Dworkin (1993) and the mathematical
diffraction approach of Hof (1995); see also Lenz \& Strungaru (2009)
for further developments. An alternative approach employs the theory
of almost periodic measures (Baake \& Moody 2004, Moody \& Strungaru
2004, Strungaru 2005). Following a suggestion by Lagarias, Baake \&
Grimm (2013) present a proof based on Poisson's summation formula for
the embedding lattice in conjunction with Weyl's lemma on uniform
distribution, which exploits the uniform distribution of projected
lattice points in internal space. Although it has not been developed
into a proof, there is also a somewhat complementary approach based on
an average periodic structure; we refer to the recent review by Wolny,
Kozakowski, Kuczera, Strzalka \& Wnek (2011) and references
therein for details.
Essentially, the pure point diffraction of a model set is a
consequence of the underlying higher-dimensional lattice periodicity.
Let us first discuss the example of the Fibonacci model set
$\varLambda$ of Eq.~\eqref{eq:fiboms}; compare also
Figure~\ref{fig:fiboproj}. The pure point diffraction pattern is
obtained again as a projection to physical space, but this time of the
\emph{dual} (or reciprocal) higher-dimensional lattice
$\mathcal{L}^{*}$. In our case, this is the lattice generated by the
dual basis vectors $\frac{2\tau-1}{5}(\tau-1,\tau)$ and
$\frac{2\tau-1}{5}(1,-1)$. The corresponding Fourier module is then
\[
L^{\circledast} \, =\, \pi (\mathcal{L}^{*}) \, = \,
\frac{1}{\sqrt{5}}\, \mathbb{Z}[\tau] \, ,
\]
where $\mathbb{Z}[\tau]=\{m+n\tau\mid m,n\in\mathbb{Z}\}$ as above.
This determines the positions of the Bragg peaks, but what about
their intensities? It turns out that the intensity is a function of
the distance of the projected lattice point from the physical space:
roughly speaking, the larger the internal coordinate, the smaller the
intensity.
The function in question is the absolute square of the Fourier transform
of the window function (the characteristic function of the window), which is the
function that takes the value $1$ on the window and $0$ otherwise. Its Fourier
transform is
\begin{equation}
A(k) \, = \, \mathrm{e}^{\pi \mathrm{i} k^{\star} (\tau-2)}\,\frac{\tau + 2}{5}\,
\mathrm{sinc}(\pi \tau k^{\star}) \, ,
\end{equation}
where $\mathrm{sinc}(x)=\sin(x)/x$, and $k^{\star}$ is the image of
$k$ under the $\star$-map introduced above. A sketch of the
diffraction pattern is shown in Figure~\ref{fig:fibodiff}.
\begin{figure}[t]
\centerline{\includegraphics[width=0.9\textwidth]{fibointproj.eps}}
\caption{Sketch of the projection of the dual lattice points giving
rise to Bragg peaks in the diffraction pattern for the Fibonacci
point set $\varLambda$, with scatterers of unit weight at all
points. The function displayed on the right-hand side is the
intensity function $\lvert A(k)\rvert^{2}$. The Bragg peak at $0$
has height $(\mathrm{dens} (\varLambda))^{2} = (\tau+1)/5\approx
0.5206$, and the entire pattern (once all reflections are included)
is reflection symmetric.}
\label{fig:fibodiff}
\end{figure}
Let us now return to the general result.
For a regular model set $\varLambda$ with Dirac comb
$\delta^{}_{\varLambda}$, the diffraction measure $\widehat{\gamma}$
can be written as
\begin{equation}\label{eq:modeldiff}
\widehat{\gamma}\, = \sum_{k\in L{}_{}^{\circledast}}
\lvert A(k) \rvert^{2}\, \delta_{k}\, ,
\end{equation}
where $L^{\circledast} = \pi (\mathcal{L}^{*})$, the projection of the
higher-dimensional dual lattice, is the corresponding Fourier module
on which the pure point diffraction is supported. For a Euclidean
model set with the CPS \eqref{eq:cps}, $\mathcal{L}$ is a lattice in a
Euclidean space $\mathbb{R}^{d+m}$, and the Fourier module
$L^{\circledast}$ is thus finitely generated, with rank $d+m$. By
choosing appropriate generating vectors, the pure point diffraction of
Eq.~\eqref{eq:modeldiff} can thus be recast in the form of
Eq.~\eqref{eq:crystal} with $n=d+m$. However, in the general
situation, where the internal space can be any locally compact Abelian
group, this is not necessarily the case, as the Fourier module
$L^{\circledast} = \pi (\mathcal{L}^{*})$ does not have to be finitely
generated. Note that the latter case is not covered by the definition
of a crystal cited above, although that definition does include any
model set based on a Euclidean CPS.
The diffraction amplitudes $A(k)$ are obtained by the Fourier
transform of the characteristic function $1^{}_{W}$ of the window $W$,
\begin{equation}\label{eq:modelamp}
A(k) \, = \,
\frac{\mathrm{dens} (\varLambda)}{\mathrm{vol} (W)}
\, \widehat{1^{}_{W}} (-k^{\star})\, .
\end{equation}
According to Eq.~\eqref{eq:modeldiff}, it is the absolute square of
these amplitudes that determines the intensity of a Bragg peak at
position $k\in L^{\circledast}$, with $k^{\star}$ denoting the
corresponding point in internal space. Eq.~\eqref{eq:modelamp} gives
the result for Euclidean model sets, the only difference for the
general case is that the volume (with respect to Lebesgue measure in
Euclidean space) is replaced by the suitable invariant measure (Haar
measure) on the locally compact Abelian group.
\section{Order beyond crystals}
The current definition of crystals thus covers conventional periodic
crystals, incommensurate crystals and quasicrystals, and hence all
currently known realisations of perfectly ordered structures in
nature. While the question asked by Bombieri \& Taylor (1986) has not
yet been satisfactorily answered, it is clear that pure point
diffraction is a severe constraint on the possible structure (Baake,
Lenz \& Richard 1997), and recent results by Lenz \& Moody (2009,
2011) indicate that model sets play a major role in a potential
abstract parametrisation of the inverse problem. However, there are
clearly well-ordered structures that do not possess this property, and
this section will discuss a few characteristic examples. But first, we
start with an example of a pure point diffractive system with
non-finitely generated Fourier module, which thus possesses a
diffraction pattern where Bragg peaks cannot be indexed by a finite
number of integers.
\subsection{Pure point diffraction with non-finitely generated support}
Well-known examples of systems with non-Euclidean internal spaces
are limit-periodic structures. Let us explain this with the arguably
simplest example, based on the \emph{period doubling} substitution
rule $\varrho\!: 1\mapsto 10, 0\mapsto 11$, on the two-letter alphabet
$\{0,1\}$. Any bi-infinite word\footnote{Here and below, the notation
$\mathcal{A}^{\mathbb{Z}}$ denotes the set of bi-infinite sequences
$(\ldots,a^{}_{-2},a^{}_{-1},a^{}_{0},a^{}_{1},a^{}_{2},\ldots)$
with letters $a_{i}$, $i\in\mathbb{Z}$, chosen from a finite
alphabet $\mathcal{A}$.} $w\in\{0,1\}^{\mathbb{Z}}$ that satisfies
the fixed point property $\varrho^{2}(w)=w$ is specified completely by
$w(2n)=1$, $w(4n+1)=0$ and $w(4n+3)=w(n)$ for $n\in\mathbb{Z}$, while
either letter can be chosen at position $n=-1$. The two possible
choices lead to two locally indistinguishable sequences (which means
that any finite subsequence of one occurs in the other), and hence
define the same system.
The word $w$ possesses a Toeplitz structure consisting of a hierarchy
of scaled and shifted copies of $\mathbb{Z}$ which carry the same
letter. Defining the point set
\begin{equation}\label{eq:toep}
\varLambda\, =\, \{n\in\mathbb{Z}\mid w(n)=1\}\subset\mathbb{Z}
\end{equation}
of the positions of the letter $1$ in $w$,
it is clear from the relations above that
$2\mathbb{Z}\subset\varLambda$, as all letters at even positions are
$1$. But then, due to $w(4n+3)=w(n)$, so are all letters
$w(8n+3)=w(2n)=1$, so $8\mathbb{Z}+3\subset\varLambda$, and
inductively one recognises that $2\cdot 4^{\ell}\mathbb{Z} +
(4^{\ell}-1) \subset\varLambda$ for all integer $\ell\ge 0$. In fact,
this hierarchy of scaled integer lattices describes the complete set,
and we obtain the following representation for the set as a union
(Baake, Moody \& Schlottmann 1998, Baake \& Moody 2004, Baake \& Grimm
2013)
\begin{equation}\label{eq:pdps}
\varLambda \, = \, 2\,\mathbb{Z}\cup (8\,\mathbb{Z}+3) \cup
(32\,\mathbb{Z}+15) \cup \dots
\, = \,
\bigcup_{\ell\ge 0} \bigl( 2\cdot
4^{\ell}\,\mathbb{Z} + (4^{\ell}-1)\bigr)
\end{equation}
of scaled (and shifted) lattices. Note that this result is for the
case where we choose $w(-1)=0$ (otherwise $-1$ has to be added to the
right-hand side). A schematic representation of the corresponding
Dirac comb $\delta_{\varLambda}$ is shown in Figure~\ref{fig:toep}.
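The union representation lends itself to a direct machine check. The following Python sketch (with an arbitrary cutoff $N=2048$, and restricted to $n\ge 0$ for simplicity) builds $w$ from the defining relations and compares the positions of the letter $1$ with the union of scaled and shifted lattices:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def w(n):
    """Period doubling letter at position n >= 0:
    w(2n) = 1, w(4n+1) = 0, w(4n+3) = w(n)."""
    if n % 2 == 0:
        return 1
    if n % 4 == 1:
        return 0
    return w((n - 3) // 4)

N = 2 * 4 ** 5  # ad hoc cutoff, N = 2048
Lambda = {n for n in range(N) if w(n) == 1}

# Union of the scaled lattices 2*4**l * Z + (4**l - 1), truncated at N
union = set()
l = 0
while 4 ** l - 1 < N:
    step, offset = 2 * 4 ** l, 4 ** l - 1
    union |= {offset + step * j for j in range((N - offset) // step + 1)}
    l += 1
union = {n for n in union if n < N}

print([w(n) for n in range(8)])  # → [1, 0, 1, 1, 1, 0, 1, 0]
print(Lambda == union)           # → True
```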
\begin{figure}[t]
\centerline{\includegraphics[width=0.8\textwidth]{toep.eps}}
\caption{Schematic representation of the Dirac comb
$\delta_{\varLambda}$ of the period doubling point set of
Eq.~\eqref{eq:toep}. All point measures have the same mass. The
different shading highlights the Toeplitz structure, with point
masses at even integers shown in black, point masses on
$8\mathbb{Z}+3$ in dark grey and a single point mass in
$32\mathbb{Z}+15$ in lighter grey.}
\label{fig:toep}
\end{figure}
\begin{figure}[b]
\centerline{\includegraphics[width=0.8\textwidth]{toepdiff.eps}}
\caption{Schematic representation of the diffraction intensity pattern
of the Dirac comb $\delta_{\varLambda}$ of Figure~\ref{fig:toep}.
The pattern is periodic with period $1$ and consists of a dense set
of Bragg peaks, where increasingly smaller peaks are located at
rational numbers whose denominators are increasingly larger powers
of $2$. Note that only peaks corresponding to $r=0,1,2,3$ in
Eqs.~\eqref{eq:toepFou} and \eqref{eq:toepamp} are visible here.}
\label{fig:toepdiff}
\end{figure}
Using this representation for the point set $\varLambda$, the
diffraction of the Dirac comb $\delta_{\varLambda}$ can be computed
explicitly; see Baake \& Grimm (2011b) for details. The scaled
lattices with geometrically increasing period in the union in
Eq.~\eqref{eq:pdps} give rise to Bragg peaks supported on the
corresponding dual lattices, which are successively finer integer
lattices scaled with the inverse factor. The diffraction spectrum is
pure point, and the Fourier module can be parametrised as
\begin{equation}\label{eq:toepFou}
L^{\circledast}\, = \, \mathbb{Z}[\tfrac{1}{2}] \, = \,
\bigl\{ \tfrac{m}{2^{r}}\mid (r=0,m\in\mathbb{Z})
\text{ or } (r\ge 1, m\text{ odd})\bigr\}\, .
\end{equation}
The diffraction measure is of the form of Eq.~\eqref{eq:modeldiff}, with
diffraction amplitudes
\begin{equation}\label{eq:toepamp}
A\bigl(\tfrac{m}{2^r}\bigr) \, = \, \frac{2}{3}\, \frac{(-1)^{r}}{2^{r}}\,
\mathrm{e}^{2^{1-r}\pi \mathrm{i}m}
\end{equation}
at the positions in $L^{\circledast}$. The factor of $\frac{2}{3}$
reflects the density of scatterers, as two thirds of positions are
occupied. It is no coincidence that the model set expression applies
--- in fact, the set $\varLambda$ can be described as a model set, but
with a non-Euclidean internal space; in this case, the internal space
is what is known as the space of $2$-adic integers (which essentially
consists of all fractions whose denominators are powers of $2$, but
with a specific definition of the distance of two numbers). A
schematic representation of the diffraction pattern for the period
doubling chain is shown in Figure~\ref{fig:toepdiff}.
It is easy to generalise this example to other lattice-based
substitutions in one or more dimensions; any substitution of constant
length $p$ with a coincidence in the sense of Dekking (1978), which
means that the same letter appears at the same position for the images
of all letters under a certain power of the substitution rule, is a
good candidate, because it is always pure point diffractive and
carries a natural $p$-adic structure. A well-known example of this
type is the chair tiling, in its representation as a two-dimensional
block substitution; see Robinson (1999) and Baake \& Grimm (2013) for
details.
\subsection{Order and singular continuous diffraction}
The paradigm of singular continuous diffraction is provided by the
Thue--Morse system and its generalisations (Kakutani 1972, Baake \&
Grimm 2008). Here, we consider a family of generalised Thue--Morse
substitutions (Baake, G\"{a}hler \& Grimm 2012)
\begin{equation}\label{eq:tm}
\varrho^{(k,\ell)} : \;
\begin{array}{r@{\;}c@{\;}l}
1 & \mapsto & 1^{k}\,\bar{1}^{\ell} \\
\bar{1} & \mapsto & \bar{1}^{k}\,1^{\ell}
\end{array}
\end{equation}
on the two-letter alphabet $\{1,\bar{1}\}$, where
$k,\ell\in\mathbb{N}$ and the case $k=\ell=1$ corresponds to the
classic Thue--Morse case. Note that $1^j$ denotes a string of $j$
consecutive letters $1$ here. The one-sided fixed point
$v=\varrho^{(k,\ell)}(v)$ satisfies
\begin{equation}\label{eq:tmrec}
v_{(k+\ell)m+r} \, = \, \begin{cases}
v_{m}, & \text{if $0\le r\le k-1$},\\
\bar{v}_{m}, & \text{if $k\le r\le k+\ell-1$}
\end{cases}
\end{equation}
where $m\ge 0$, $0\le r\le k+\ell-1$, and $\bar{\bar{1}}=1$. It
extends (by setting $v_{-i-1}=v_{i}$ for $i\ge 0$) to a symmetric
bi-infinite fixed point word under the square of the substitution
$\varrho^{(k,\ell)}$. For instance, the symmetric bi-infinite fixed
point for the classic Thue--Morse case $k=\ell=1$ has core
\[
\ldots \bar{1} 1 1 \bar{1}
1 \bar{1} \bar{1} 1
\bar{1} 1 1 \bar{1}
\bar{1} 1 1 \bar{1}
1 \bar{1} \bar{1} 1 \big|
1 \bar{1} \bar{1} 1
\bar{1} 1 1 \bar{1}
\bar{1} 1 1 \bar{1}
1 \bar{1} \bar{1} 1
\bar{1} 1 1 \bar{1}
\ldots
\]
where the vertical bar denotes the origin. A schematic representation
of the corresponding Dirac comb is shown in Figure~\ref{fig:tmcomb}.
\begin{figure}[b]
\centerline{\includegraphics[width=0.8\textwidth]{tmcomb.eps}}
\caption{Schematic representation of the Dirac comb of the
Thue--Morse chain with weights $\pm 1$. Note that this is
`balanced' in the sense that positive and negative weights are
equally frequent, so the average scattering strength is zero.}
\label{fig:tmcomb}
\end{figure}
The corresponding weighted Dirac comb on $\mathbb{Z}$, interpreting
the two letters as weights (with $\bar{1}$ interpreted as $-1$),
is thus given by $\omega = \sum_{i\in\mathbb{Z}}v_{i}\delta_{i}$. Its
autocorrelation $\gamma=\sum_{m\in\mathbb{Z}}\eta(m)\delta_{m}$ is
also a Dirac comb on $\mathbb{Z}$, where the autocorrelation
coefficients $\eta(m)$ satisfy $\eta(0)=1$, $\eta(-m)=\eta(m)$ and the
recursion relation
\begin{equation}\label{eq:etarec}
\begin{split}
\eta\bigl((k+\ell)m+r\bigr) \, = \, \frac{1}{k+\ell}
\Bigl(& \alpha(k,\ell,r) \,\eta(m) + \\
& \alpha(k,\ell,k +\ell - r) \,\eta(m+1)\Bigr)
\end{split}
\end{equation}
for arbitrary $m\in\mathbb{Z}$ and $0\le r\le k+\ell-1$. The recursion
follows directly from Eq.~\eqref{eq:tmrec}, with
$\alpha(k,\ell,r)=k+\ell-r-2\min(k,\ell,r,k+\ell-r)$. This system has
a unique solution, and properties of the solution show that the
corresponding diffraction measure $\widehat{\gamma}$ is purely
singular continuous; see Baake, G\"{a}hler \& Grimm (2012) for
the mathematical details of the argument.
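For the classic case $k=\ell=1$, one has $\alpha(1,1,0)=2$, $\alpha(1,1,1)=-1$ and $\alpha(1,1,2)=0$, so the recursion specialises to $\eta(2m)=\eta(m)$ and $\eta(2m+1)=-\bigl(\eta(m)+\eta(m+1)\bigr)/2$, and setting $m=0$, $r=1$ fixes $\eta(1)=-\frac{1}{3}$. The following Python sketch (chain length and tolerance chosen ad hoc) compares these exact coefficients with values estimated directly from a long chain:

```python
from functools import lru_cache

# Classic Thue-Morse chain (k = l = 1) as +/-1 values
v = [1]
while len(v) < 1 << 16:
    v += [-x for x in v]

def eta_direct(m, N=1 << 16):
    """Autocorrelation coefficient estimated directly from the chain."""
    return sum(v[n] * v[n + m] for n in range(N - m)) / (N - m)

@lru_cache(maxsize=None)
def eta(m):
    """Exact coefficients from the recursion (k = l = 1):
    eta(2m) = eta(m), eta(2m+1) = -(eta(m) + eta(m+1))/2,
    with eta(0) = 1 and hence eta(1) = -1/3."""
    if m == 0:
        return 1.0
    if m == 1:
        return -1 / 3
    q, r = divmod(m, 2)
    return eta(q) if r == 0 else -(eta(q) + eta(q + 1)) / 2

print([round(eta(m), 4) for m in range(5)])
# → [1.0, -0.3333, -0.3333, 0.3333, -0.3333]
print(max(abs(eta(m) - eta_direct(m)) for m in range(8)))
```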
\begin{figure}
\centerline{\includegraphics{tmdens4.eps}\hspace{2ex}
\includegraphics{tmdens5.eps}\hspace{2ex}
\includegraphics{tmdens6.eps}}
\centerline{\includegraphics{tmdist4.eps}\hspace{2ex}
\includegraphics{tmdist5.eps}\hspace{2ex}
\includegraphics{tmdist6.eps}}
\caption{Approximating density functions $f^{(N)}$ (top) and
corresponding approximating distribution function $F^{(N)}$ (bottom)
for the diffraction of the
classic Thue--Morse chain ($k=\ell=1$) with $N=4$ (left), $N=5$ (centre) and
$N=6$ (right).}
\label{fig:tmdens}
\end{figure}
The diffraction measure is periodic with period $1$ (due to the fact
that the Dirac comb is supported on the integer lattice $\mathbb{Z}$)
and the diffraction intensity can be represented as a limit of a product,
\[
f^{(N)}(x) \, = \prod_{n=0}^{N}
\vartheta\bigl((k+\ell)^{n}_{} x\bigr)\, ,
\]
which is known as a Riesz product, with
\[
\vartheta(x) \, = \, 1+\frac{2}{k+\ell}\!
\sum_{r=1}^{k+\ell-1} \!\!\alpha(k,\ell,r)\,
\cos(2\pi rx) .
\]
The limit as $N\to\infty$ has to be considered carefully. While the
truncated product $f^{(N)}$ is a smooth function that can be
interpreted as a density of an absolutely continuous measure, this is
not the case in the limit, because it represents a purely singular
continuous measure. Accordingly, the approximating density functions
$f^{(N)}$ become increasingly spiky with growing value of $N$; see
Figure~\ref{fig:tmdens} for an example. Mathematically, we speak of a
limit in the vague topology. However, the corresponding distribution
function $F^{(N)}(x)=\int_{0}^{x}f^{(N)}(y)\,\mathrm{d}y$ (which
corresponds to the integrated diffraction intensity) converges and
possesses a continuous limit; compare the bottom part of
Figure~\ref{fig:tmdens}. The limit function can be calculated and
expressed as an explicit Fourier series; several examples are shown
in Figure~\ref{fig:tm}.
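Note that the mean of $f^{(N)}$ over one period equals $1$ for every $N$, since all nonconstant terms in the expanded product are cosines with nonzero integer frequencies. This can be confirmed exactly by a uniform Riemann sum with more sample points than the highest frequency $2^{N+1}-1$; the following Python sketch treats the classic case $k=\ell=1$, where $\vartheta(x)=1-\cos(2\pi x)$:

```python
from math import cos, pi

def f(N, x):
    """Truncated Riesz product for the classic Thue-Morse chain,
    f^(N)(x) = prod_{n=0}^{N} (1 - cos(2 pi 2**n x))."""
    prod = 1.0
    for n in range(N + 1):
        prod *= 1 - cos(2 * pi * 2 ** n * x)
    return prod

N, M = 10, 4096  # M exceeds the highest frequency 2**(N+1) - 1 = 2047
mean = sum(f(N, j / M) for j in range(M)) / M
print(round(mean, 6))  # → 1.0, the total intensity per period
```

Since $M$ does not divide any of the (nonzero) frequencies present, the Riemann sum of every cosine term vanishes exactly, so the numerical mean equals the constant term up to floating-point error.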
\begin{figure}
\centerline{\includegraphics{tm21.eps}\hspace{2ex}
\includegraphics{tm31.eps}\hspace{2ex}
\includegraphics{tm41.eps}}
\caption{Continuous distribution functions for the diffraction of
generalised Thue--Morse chains with $\ell=1$ and $k=2$ (left),
$k=3$ (centre) and $k=4$ (right).}
\label{fig:tm}
\end{figure}
While this case has no point spectrum (the trivial Bragg peak at $0$
being absent due to our balanced choice of weights, corresponding to
zero average scattering strength), it is by no means featureless. In
fact, there are peaks that grow with certain scaling exponents (in
terms of the system size) at certain points in Fourier space (the most
prominent examples can be found at rational positions $\frac{1}{3}$
and $\frac{2}{3}$ in Figure~\ref{fig:tmdens}), while the growth is not
well defined at uncountably many other positions (due to the
non-convergence of the density functions); see Baake, Grimm \& Nilsson
(2014) for a detailed account of the classic Thue--Morse case.
Clearly, the generalised Thue--Morse systems possess hierarchical
order, although this is not reflected in a pure point component in their
diffraction measures. However, this `hidden' order is visible in other
correlations. Explicitly, it can be revealed by looking at the
two-point correlations of \emph{pairs} rather than of single points.
Looking at pairs can be described by considering the image of the
bi-infinite fixed point word $v$ under a sliding block map $\phi$,
which maps pairs of adjacent letters to $b$ or $a$ according to
whether they are equal or different, so $\phi\!:\, 11,
\bar{1}\bar{1}\mapsto b,\; 1\bar{1}, \bar{1}1\mapsto a$; see
Figure~\ref{fig:tmtopd} for an illustration.
\begin{figure}[b]
\centerline{\includegraphics[width=0.6\textwidth]{tmtopd.eps}}
\caption{The action of the sliding block map $\phi$ on a Thue--Morse word
produces a period doubling word.}
\label{fig:tmtopd}
\end{figure}
This maps the set of generalised Thue--Morse words to bi-infinite
words which are locally indistinguishable to fixed point words of the
generalised period doubling substitution
\[
a\mapsto b^{k-1}ab^{\ell-1}b,\quad b\mapsto b^{k-1}ab^{\ell-1}a,
\]
which reduces to the period doubling substitution (with $a=1$ and
$b=0$) in the case $k=\ell=1$. This map is globally $2:1$, meaning
that there are precisely two generalised Thue--Morse words that are
mapped onto the same generalised period doubling word. This is most
easily seen by noticing that, when going backwards from a generalised
period doubling word, there is a single free choice of one
Thue--Morse letter ($1$ or $\bar{1}$) at one position, where either
preimage can be chosen, after
which all other preimages are uniquely determined (due to the overlap
of adjacent pairs). As the generalised period doubling substitution
has a coincidence in the sense of Dekking (1978), it is pure point
diffractive, as discussed above for the (standard) period doubling
case. In fact, the corresponding point sets are model sets, this time
with $(k+\ell)$-adic numbers as internal space, and the pure point
diffraction is supported on the set $\mathbb{Z}[\frac{1}{k+\ell}]$,
which contains all inverse powers of $(k+\ell)$ as generating
elements.
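This factor relation is easy to verify by machine: encoding different neighbouring pairs of the Thue--Morse chain as $1$ and equal pairs as $0$ reproduces the period doubling fixed point of $1\mapsto 10$, $0\mapsto 11$ from above. A small Python sketch (one-sided words, ad hoc length):

```python
# One-sided Thue-Morse fixed point as +/-1 values
t = [1]
while len(t) < 4096:
    t += [-x for x in t]

# Sliding block map on neighbouring pairs:
# different neighbours -> 1, equal neighbours -> 0
u = [1 if a != b else 0 for a, b in zip(t, t[1:])]

# One-sided period doubling fixed point of 1 -> 10, 0 -> 11
p = [1]
while len(p) < len(u):
    p = [x for letter in p for x in ((1, 0) if letter else (1, 1))]

print(u == p[:len(u)])  # → True
```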
In the language of dynamical systems, the dynamical system (where the
dynamics is given by shifting the sequence by an integer)
corresponding to generalised period doubling words constitutes a
\emph{factor} of the dynamical system associated to the generalised
Thue--Morse words. Here, the word \emph{factor} refers to the fact that it is
the image under the sliding block map $\phi$. What happens in this
case is that the diffraction spectrum of the factor (the generalised
period doubling system) picks up a non-trivial point spectrum, which
is `hidden' in the Thue--Morse system, in the sense that it does not
show up in its diffraction spectrum (even in the case of general
weights). However, this pure point spectrum is part of the so-called
\emph{dynamical spectrum} of the Thue--Morse system, where the
dynamical spectrum refers to the spectrum of the operator which
generates the translation action; see Queff\'{e}lec (2010) for
details. The dynamical spectrum is, in general, richer than the
diffraction spectrum. This can be intuitively understood because
diffraction, as the Fourier transform of the autocorrelation, only
measures two-point correlations, while the dynamical spectrum `knows'
about more general properties under the shift action, so effectively
can probe higher correlations. We shall come back to this point at the
end of this section.
\subsection{Order and absolutely continuous diffraction}
Absolutely continuous (`diffuse') diffraction is commonly associated
with randomness. Indeed, stochastic systems typically show absolutely
continuous diffraction; the simplest case is the Bernoulli shift,
based on a random sequence
\[
X \, =\, (\ldots, X^{}_{-2}, X^{}_{-1},
X^{}_{0}, X^{}_{1}, X^{}_{2}, \ldots) \,\in\, \{\pm 1\}^{\mathbb{Z}}
\]
of independent and identically distributed (i.i.d.) events with
outcome $\pm 1$, with probability $p$ for outcome $1$ and probability
$1-p$ for outcome $-1$. The Bernoulli shift has (metric) entropy $H(p)
= - p\log (p) - (1\!-\!p) \log (1\!-\!p)$, which is greater than zero
except for the deterministic limiting cases $p=0$ and $p=1$. All the
examples discussed earlier were deterministic sequences with zero
entropy.
A random sequence $X\in\{\pm 1\}^{\mathbb{Z}}$ leads to a Dirac comb
$\omega = \sum_{j\in\mathbb{Z}} X^{}_{j} \delta^{}_{j}$, which is a
translation bounded random measure with autocorrelation
$\gamma^{}_{\mathrm{B}} =\sum_{m\in\mathbb{Z}}
\eta^{}_{\mathrm{B}}(m)\,\delta^{}_{m}$. The autocorrelation
coefficients are, almost surely (in the probabilistic sense, so with
probability $1$), given by
\[
\eta^{}_{\mathrm{B}}(m) \, = \,
\begin{cases}
1, & m = 0, \\
(2p\! -\! 1)^2, & m\ne 0,
\end{cases}
\]
as a consequence of the strong law of large numbers. The corresponding
diffraction measure then, almost surely, is given by
\[
\widehat{\gamma^{}_{\mathrm{B}}}\, =
\, (2p-1)^{2}\delta^{}_{\mathbb{Z}} \,+\,
4 p (1-p)\,\lambda\, ,
\]
which contains both pure point (for $p\ne \frac{1}{2}$) and absolutely
continuous components (for $p\not\in\{0,1\}$). The pure point part
vanishes when both weights appear with equal probability, while the
absolutely continuous part vanishes in the two deterministic, periodic
cases.
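These almost-sure statements are easy to check empirically; the following Monte Carlo sketch (illustrative, with an arbitrary fixed seed) estimates $\eta^{}_{\mathrm{B}}(m)$ from a finite i.i.d.\ sample:

```python
import random

def estimate_eta(p, m, n=200_000, seed=1):
    """Monte Carlo estimate of eta_B(m) for an i.i.d. +/-1 sequence with
    P(X_j = +1) = p, via the empirical average of X_j * X_{j+m}."""
    rng = random.Random(seed)
    x = [1 if rng.random() < p else -1 for _ in range(n + m)]
    return sum(x[j] * x[j + m] for j in range(n)) / n

p = 0.75
print(estimate_eta(p, 0))   # exactly 1
print(estimate_eta(p, 3))   # close to (2p - 1)^2 = 0.25
```

For $p=\frac{1}{2}$ the estimate at $m\ne 0$ fluctuates around $0$, in line with the vanishing pure point part in that case.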
\begin{figure}
\centerline{\includegraphics[width=0.8\textwidth]{rscomb.eps}}
\caption{Schematic representation of the Dirac comb of the
Rudin--Shapiro chain with weights $\pm 1$, which is again
`balanced' in the sense that the average scattering strength is
zero.}
\label{fig:rscomb}
\end{figure}
It is, however, possible to construct deterministic systems with
absolutely continuous diffraction as well. The paradigm for this situation
is the Rudin--Shapiro chain (Rudin 1959, Shapiro 1951, Queff\'{e}lec
2010). Its binary version $w\in\{\pm 1\}^{\mathbb{Z}}$ can be defined
by initial conditions $w(-1)=-1$, $w(0)=1$, and the recursion
\begin{equation}\label{eq:rs}
w(4n+\ell)=
\begin{cases} w(n), & \mbox{for $\,\ell\in\{0,1\}$,} \\
(-1)^{n+\ell}\,w(n), & \mbox{for $\,\ell\in\{2,3\}$.}
\end{cases}
\end{equation}
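The recursion can be implemented directly; a small memoised sketch (Python, illustrative only):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def w(n):
    """Binary Rudin--Shapiro sequence on Z: w(-1) = -1, w(0) = 1, and
    w(4n+l) = w(n) for l in {0,1}, w(4n+l) = (-1)^(n+l) w(n) for l in {2,3}."""
    if n == -1:
        return -1
    if n == 0:
        return 1
    q, l = divmod(n, 4)              # n = 4q + l with l in {0, 1, 2, 3}
    if l in (0, 1):
        return w(q)
    return -w(q) if (q + l) % 2 else w(q)

print([w(n) for n in range(8)])      # [1, 1, 1, -1, 1, 1, -1, 1]
```

The finite correlation sums $\frac{1}{N}\sum_{j<N} w(j)\,w(j+m)$ for $m\ne 0$ are then found to be small, consistent with $\gamma^{}_{\mathrm{RS}} = \delta^{}_{0}$.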
A schematic representation of the corresponding Dirac comb is shown in
Figure~\ref{fig:rscomb}. By considering the recursion relation for
autocorrelation coefficients induced by Eq.~\eqref{eq:rs}, in a
similar way as for the generalised Thue--Morse case above, one can show
(Baake \& Grimm 2009) that the autocorrelation has the simple form
$\gamma^{}_{\mathrm{RS}} = \delta^{}_{0}$, which means that all
correlations (apart from the trivial case with distance zero) average
to zero along the chain. In terms of its two-point correlations, the
Rudin--Shapiro chain hence looks completely uncorrelated, exactly like
the random chain with probability $p=\frac{1}{2}$. As a consequence,
the diffraction measure is Lebesgue measure,
$\widehat{\gamma^{}_{\mathrm{RS}}} = \lambda$, which is trivially
absolutely continuous. This means that the
diffraction intensity is constant in space, and hence completely
featureless, reflecting the complete absence of two-point correlation
in the structure. This example shows that two very different systems
such as the $p=\frac{1}{2}$ Bernoulli chain with entropy $\log (2)$
and the completely deterministic binary Rudin--Shapiro chain (with
zero entropy) can produce the same autocorrelation and hence the same
diffraction measure. Such structures are called homometric (Patterson
1944) and show that the inverse problem does not have a unique
solution in general.
In fact, the situation is worse than that, as from the results above
one can construct an entire one-parameter family of stochastic Dirac
combs which all are homometric with the Rudin--Shapiro chain. This is
done by the \emph{Bernoullisation} procedure (Baake \& Grimm 2009).
Applying it to the Rudin--Shapiro Dirac comb, the weight at any
position is changed randomly with probability $1-p$, resulting in
\[
\omega \, = \, \sum_{j\in\mathbb{Z}} w^{}_{j}\, X^{}_{j}\, \delta^{}_{j} \, ,
\]
with the Rudin--Shapiro sequence $w\in\{\pm 1\}^{\mathbb{Z}}$ and the
random sequence $X\in\{\pm 1\}^{\mathbb{Z}}$ as defined above. This is
a `model of second thoughts' in the sense that, when starting from a
Rudin--Shapiro sequence, weights are randomly changed with probability
$1-p$ independently at each position along the chain. We thus can
continuously interpolate between the Rudin--Shapiro chain with entropy
$0$ and the $p=\frac{1}{2}$ Bernoulli chain with entropy $\log (2)$,
with all systems sharing the same absolutely continuous diffraction.
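A sketch of the Bernoullisation procedure (Python, illustrative; for $n\ge 0$ it uses the standard closed form of the Rudin--Shapiro sequence, which counts the overlapping `11' blocks in the binary expansion):

```python
import random

def rudin_shapiro(N):
    """First N terms (n >= 0) of the binary Rudin--Shapiro sequence,
    via w(n) = (-1)^(number of overlapping '11' blocks in binary n)."""
    return [(-1) ** bin(n & (n >> 1)).count("1") for n in range(N)]

def bernoullise(w, p, seed=0):
    """'Model of second thoughts': keep each weight with probability p,
    flip its sign with probability 1 - p, independently at each position."""
    rng = random.Random(seed)
    return [wj if rng.random() < p else -wj for wj in w]

def eta(x, m):
    """Empirical autocorrelation coefficient at distance m >= 0."""
    n = len(x) - m
    return sum(x[j] * x[j + m] for j in range(n)) / n

omega = bernoullise(rudin_shapiro(20_000), p=0.8)
print(eta(omega, 0), eta(omega, 3))   # 1.0, and close to 0 for m != 0
```

Empirically, the autocorrelation coefficients at $m\ne 0$ stay close to zero for every $p$, in line with the homometry claim.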
It is interesting to note that the Rudin--Shapiro chain, like the
generalised Thue--Morse chains above, possesses a `hidden'
limit-periodic order that is revealed when looking at an appropriate
dynamical factor. Using the same sliding block map $\phi$ as above,
one obtains once more a factor with pure point diffraction spectrum,
in this case supported on $\mathbb{Z}[\frac{1}{2}]$, as for the period
doubling case; see Baake \& Grimm (2013) for details. Clearly, this
does not happen for the stochastic chain, which does not have any
`hidden' order.
\subsection{Discrete structures with continuous symmetry}
An interesting (and still somewhat mysterious) class of structures is
provided by discrete systems which possess a continuous symmetry. The
paradigm for such a structure is the Conway--Radin pinwheel tiling
(Radin 1994). It is an inflation tiling based on a single triangular
prototile of edge lengths $1$, $2$ and $\sqrt{5}$ together with its
reflected version. The inflation rule is shown in
Figure~\ref{fig:pininf}; it consists of a linear rescaling by the
inflation factor $\sqrt{5}$ (first step) and the dissection of the
inflated triangle into five copies of the original prototile (second
step), where both orientations occur. The reflected rule applies to
the reflected triangle, and hence the tiling is reflection
symmetric. A realisation of the tiling is shown in
Figure~\ref{fig:melb}.
\begin{figure}[b]
\centerline{\includegraphics[width=0.5\textwidth]{pininfl.eps}}
\caption{Inflation rule for the pinwheel tiling. The dot marks the
reference point, and the shading emphasises that the particular
triangle is in the original position and orientation, ensuring that
repeated application of the inflation rule produces a fixed point
tiling.}
\label{fig:pininf}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics{melb.eps}}
\caption{A building at Melbourne's Federation Square featuring
a pinwheel tiling fa\c{c}ade.}
\label{fig:melb}
\end{figure}
What makes this inflation rule special is the rotation it introduces
between copies of the prototiles. This rotation by an angle
$\vartheta=-\arctan(\frac{1}{2})$ is incommensurate with $\pi$, and as
a result introduces new, independent directions under inflation.
Iteration of the inflation rule on an initial patch thus leads to
patches comprising an exponentially increasing number of triangles
occurring in a linearly growing number of independent directions. In
the limit of an infinite tiling, triangles appear in infinitely many
different orientations. While for the familiar cases of Penrose-type
tilings inflation rules produce tilings with discrete (in the Penrose
case decagonal) symmetry (in the sense that the tiling space defined
by the fixed point tilings has decagonal symmetry; see Baake \& Grimm
(2013) for details), the pinwheel inflation produces a tiling space
with complete circular symmetry (Radin 1994, Radin 1997, Moody,
Postnikoff \& Strungaru 2006). As a consequence, its diffraction is
circularly symmetric as well, and hence cannot have any pure point
component apart from the trivial Bragg peak at the origin.
In fact, the rotation is rather special, because it is a coincidence
rotation for the planar square lattice, as
$\tan(\vartheta)=-\frac{1}{2}$ is rational; see Baake (1997) for
background. This property is behind the observation that the point set
of pinwheel reference points can either be seen as a subset of rotated
square lattices or a subset of scaled square lattices, with scaling by
inverse powers of $5$ (Baake, Frettl\"{o}h \& Grimm 2007a), which
makes it possible to draw conclusions on the diffraction spectrum by
using a radial version of Poisson's summation formula. This provides
evidence that the diffraction consists of sharp rings, and that it is
surprisingly similar to a toy model of powder diffraction of square
lattice structures (Baake, Frettl\"{o}h \& Grimm 2007a, 2007b). A
diffraction measure supported on sharp rings in the plane is singular
continuous, and it is clear that the diffraction of the pinwheel
tiling contains singular continuous components of this type; however,
to date there is no complete characterisation of the diffraction
spectrum of this example. Results of numerical investigations suggest
that an absolutely continuous component may also be present. An
approximation of the radially averaged diffraction is shown in
Figure~\ref{fig:pinsq}.
\begin{figure}
\centerline{\includegraphics{pinsq.eps}}
\caption{Approximation of radial diffraction intensity $I(k)$ for the
pinwheel diffraction (black line), based on data from a finite
system. The grey columns indicate the sharp rings observed in a toy
model of powder diffraction from a planar square-lattice structure,
with the relative scale adjusted according to the first main peak;
see Baake, Frettl\"{o}h \& Grimm (2007a, 2007b) for details.}
\label{fig:pinsq}
\end{figure}
While the pinwheel tiling may seem a rather exotic structure, it is
generated by a quite simple inflation rule with only a single shape up
to congruency. There are many other structures of this type; see
Frettl\"{o}h (2008) for some examples.
\subsection{Diffraction versus dynamical spectra}
The examples of the Thue--Morse and Rudin--Shapiro systems show that
systems can possess `hidden' order that does not manifest itself by a
pure point component in the diffraction pattern. However, this order
can show up in the dynamical spectrum, which is related to the
analysis of the translation action on the structure. There is a close
relationship between these two spectral quantities --- indeed, the
first proofs of the pure point diffractivity of model sets employed
the link to dynamical spectra, using the result that the diffraction
spectrum is pure point if and only if the dynamical spectrum is. In
general, however, the dynamical spectrum can be richer (van Enter \&
Mi\c{e}kisz 1992), and the Thue--Morse and Rudin--Shapiro systems are
examples; in both cases, the dynamical spectrum contains the pure
point component $\mathbb{Z}[\frac{1}{2}]$ which arises because both
examples stem from primitive substitutions of constant length $2$ (in
the Rudin--Shapiro case, the underlying substitution employs four
different letters, and the binary system is derived from this by
identifying pairs of letters; see Baake \& Grimm (2013) for details).
A particularly simple yet striking example, originally suggested by
van Enter, is discussed in Baake \& van Enter (2011). It considers the
set of certain configurations of $\pm 1$ on the integer lattice
$\mathbb{Z}$. The allowed configurations $w\in\{\pm 1\}^{\mathbb{Z}}$
are obtained by partitioning the lattice into pairs of neighbouring
points (there are two ways to do this), and then randomly assigning to
each pair either the values $(+1,-1)$ or $(-1,+1)$. Turning a
configuration $w$ into a signed Dirac comb with weights $w_{i}\in\{\pm
1 \}$, it is easy to show, by an application of the strong law of
large numbers, that the autocorrelation is (almost surely) given by
$\gamma = \delta^{}_{0} - \frac{1}{2} (\delta^{}_{1} +
\delta^{}_{-1})$. The corresponding diffraction measure is then
\[
\widehat{\gamma} \,=\,
\bigl( 1 - \cos(2 \pi k) \bigr) \lambda\, ,
\]
and hence purely absolutely continuous, where the Radon--Nikodym
density relative to $\lambda$ is written as a function of
$k$. However, the dynamical spectrum of this system contains
eigenvalues (hence a pure point part), reflecting the order in the
system imposed by the `dimer' condition on pairs. This can be revealed
by considering a block map similar to the map $\phi$ used
above. Explicitly, setting $u_{i} = - w_{i} w_{i+1}$ for
$i\in\mathbb{Z}$ maps $w$ to a new sequence $u$, which (almost surely)
has the diffraction measure
\[
\widehat{\gamma^{}_{u}} \, = \,
\tfrac{1}{4}\, \delta^{}_{\mathbb{Z}/2} + \tfrac{1}{2}\, \lambda\, ;
\]
see Baake \& van Enter (2011) for details. The `dimer' structure is
reflected in the presence of the pure point part supported on
$\frac{1}{2}\mathbb{Z}$, which also is the entire point part of the
dynamical spectrum.
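The dimer example is easy to simulate; the following sketch (illustrative, fixed seed) checks the autocorrelation coefficients of $w$ and of the factor sequence $u$ numerically:

```python
import random

def dimer_chain(n_pairs, seed=1):
    """Random close-packed dimer configuration: partition Z into
    neighbouring pairs and assign (+1,-1) or (-1,+1) to each pair."""
    rng = random.Random(seed)
    w = []
    for _ in range(n_pairs):
        w += [1, -1] if rng.random() < 0.5 else [-1, 1]
    return w

def eta(x, m):
    """Empirical autocorrelation coefficient at distance m >= 0."""
    n = len(x) - m
    return sum(x[j] * x[j + m] for j in range(n)) / n

w = dimer_chain(50_000)
print(eta(w, 0), eta(w, 1))              # 1.0 and roughly -1/2
u = [-w[i] * w[i + 1] for i in range(len(w) - 1)]
print(eta(u, 2))                         # roughly 1/2: the dimer order shows up in u
```

The factor sequence $u$ equals $1$ at every within-pair position, which produces the non-trivial pure point part of its diffraction.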
This example again shows that `hidden' order can be made visible in
diffraction, though not in the diffraction of the original system
itself. Note that simply changing
the weights of the scatterers will not achieve this, although it may
contribute a trivial Bragg part. However, choosing a suitable factor
(or a family of factors) as an image of a continuous map such as the
sliding block map $\phi$ used above, makes it possible to detect the
`hidden' order via its diffraction. That this is a general property of
the relation between dynamical and diffraction spectrum is a recent
non-trivial insight; see Baake, Lenz \& van Enter (2013) for the
latest developments in this direction.
\section{Conclusions}
The discoveries of incommensurately modulated and aperiodically
ordered solids in the twentieth century (de Wolff 1974, Janner \&
Janssen 1977, Shechtman, Blech, Gratias \& Cahn 1984, Ishimasa, Nissen
\& Fukano 1985) have changed our view of
crystallography. Crystallography is no longer restricted to the
analysis of lattice periodic arrangements of atoms or molecules, but
takes a broader view which includes certain aperiodically ordered
structures, such as incommensurate crystals and quasicrystals. The
definition of a crystal has been amended to reflect this broader view.
The definition of a crystal is based on the currently known catalogue
of periodic and aperiodic crystals. We presently do not know of any
materials that have aperiodically ordered structures beyond
incommensurate crystals (including composite structures) and
quasicrystals. For the latter, so far only symmetries corresponding to
the smallest embedding dimension (in the sense of model sets) have
been observed, with octagonal, decagonal and dodecagonal planar
quasicrystals corresponding to projections from four-dimensional periodic
structures, and icosahedral quasicrystals being described by
projection from six-dimensional lattices. However, there is no \emph{a
priori} reason that excludes other symmetries completely, or indeed
aperiodically ordered structures that are not described by model sets
obtained from projections of a lattice in a finite-dimensional
Euclidean space.
The definition of a crystal also reflects the current lack of
understanding of what constitutes order in matter (and more
generally), and in this sense should be seen as a working definition
that may well need to be revised in the future. In crystallography,
order is linked to diffraction, which makes sense because diffraction
is the method of choice to experimentally determine the structure of a
solid. The examples discussed above demonstrate that there are ordered
structures which are not captured by the current definition, either
because their pure point diffraction fails to be finitely generated,
or because they do not have any non-trivial point component in their
diffraction. While we do not know whether such structures are realised
in nature, it should become possible to manufacture such materials
with purpose-designed structures and properties. In this sense, these
structures are relevant and should be considered to be within
the realm of crystallography.
{}From a mathematical point of view, a more satisfying attempt at
defining order might employ the dynamical spectrum, which is a
generalisation of the diffraction spectrum. The results above are in
line with the intuition by van Enter \& Mi\c{e}kisz (1992) that an
apparent disorder at an `atomic' scale could be accompanied by order
at a `molecular' scale, with diffraction of derived factor structures
probing the latter. While diffraction itself only measures the
averaged two-point correlations in a structure, the dynamical spectrum
probes the repetitivity of a structure under translations, and hence
also higher-order correlations, which generally can distinguish
homometric systems (Gr\"{u}nbaum \& Moore 1995). While these are not
necessarily directly accessible by experiment, the additional
information contained in the dynamical spectrum is, in principle,
encoded in diffraction spectra of derived systems; see Baake, Lenz \&
van Enter (2013) for recent developments on establishing this
connection. Defining order via a non-trivial pure point component of
the dynamical spectrum would include structures such as the
Thue--Morse and Rudin--Shapiro systems, though presumably examples of
pinwheel type (for which the dynamical spectrum is not known) would be
excluded. In this sense, such a definition probably still fails to
capture all possible manifestations of order, but it may provide a
first step towards a better understanding.
In this paper, the discussion was limited to deterministic systems,
apart from the brief excursion on the Bernoulli chain. Clearly, moving
to partially ordered systems, which contain an element of stochastic
disorder, is relevant as well. Not only does even the most perfect
crystal contain some amount of disorder, but there are also
entropically stabilised structures with intrinsic configurational
disorder, among them many quasicrystalline phases. In this context,
the notion of `entropic order' is relevant, which has been
investigated in statistical physics, in particular with respect to the
physics of glasses; see, for instance, Kurchan \& Levine (2011), Sasa
(2012a, 2012b) and Wolff \& Levine (2014) for recent work along these
lines.
Although the importance of random tiling structures was pointed out
early on (Elser 1985), and while there is some good heuristic
information from scaling considerations (Henley 1999), there are as
yet very few mathematically rigorous results for non-trivial random
tiling structures in two or more dimensions, the only examples known
being related to solvable models of lattice statistical mechanics
(Baake \& H\"{o}ffe 2000). We refer to Baake, Birkner \& Grimm (2015)
for a recent review on what is known about the diffraction of
partially ordered and stochastic systems.
\section*{Acknowledgements}
The author would like to express his gratitude to Michael Baake for
useful discussions and comments.
\section*{References}
\begin{trivlist}
\item
Authier, A. (2013).
\textit{Early Days of X-ray Crystallography},
Oxford: Oxford University Press.
\item
Authier, A. \& Chapuis, G. (2014).
\textit{A Little Dictionary of Crystallography},
International Union of Crystallography.
\item
Baake, M. (1997).
Solution of the coincidence problem in dimensions
$d\le 4$,
in: \textit{The Mathematics of Long-Range Aperiodic Order},
edited by R.V.~Moody, pp.~9--44,
Kluwer: Dordrecht.
\item
Baake, M., Birkner, M. \& Grimm U. (2015).
Non-periodic systems with continuous diffraction measures,
in: \textit{Mathematics of Aperiodic Order}, edited by
J.\ Kellendonk, D.\ Lenz \& J.\ Savinien,
in press,
Boston: Birkh\"{a}user.
\item
Baake, M., Frettl\"{o}h, D. \& Grimm, U. (2007a).
A radial analogue of Poisson's summation formula with applications
to powder diffraction and pinwheel patterns.
\textit{J.\ Geom.\ Phys.} \textbf{57}, 1331--1343.
arXiv:math.SP/0610408.
\item
Baake, M., Frettl\"{o}h, D. \& Grimm, U. (2007b).
Pinwheel patterns and powder diffraction,
\textit{Philos.\ Mag.} \textbf{87}, 2831--2838.
arXiv:math-ph/0610012.
\item
Baake M., G\"{a}hler F. \& Grimm U. (2012).
Spectral and topological properties of a family of
generalised Thue--Morse sequences,
\textit{J.\ Math.\ Phys.} \textbf{53}, 032701.
arXiv:1201.1423.
\item
Baake, M., G\"{a}hler, F. \& Grimm, U. (2013).
Examples of substitution systems and their factors.
\textit{J.\ Int.\ Seq.} \textbf{16}, article 13.2.14:\ 1--18.
arXiv:1211.5466.
\item
Baake, M. \& Grimm, U. (2007).
Homometric model sets and window covariograms.
\textit{Z.\ Krist.} \textbf{222}, 54--58.
arXiv:math.MG/0610411.
\item
Baake, M. \& Grimm, U. (2008).
The singular continuous diffraction measure of the Thue--Morse chain.
\textit{J.\ Phys.\ A:\ Math.\ Theor.} \textbf{41}, 422001:\ 1--6.
arXiv:0809.0580.
\item
Baake, M. \& Grimm, U. (2009).
Kinematic diffraction is insufficient to distinguish order from disorder.
\textit{Phys.\ Rev.\ B} \textbf{79}, 020203(R):\ 1--4 and
\textit{Phys.\ Rev.\ B} \textbf{80}, 029903(E). arXiv:0810.5750.
\item
Baake, M. \& Grimm, U. (2011a).
Kinematic diffraction from a mathematical viewpoint.
\textit{Z.\ Krist.} \textbf{226}, 711--725.
arXiv:1105.0095.
\item
Baake, M. \& Grimm, U. (2011b).
Diffraction of limit periodic point sets,
\textit{Philos.\ Mag.} \textbf{91}, 2661--2670.
arXiv:1007.0707.
\item
Baake, M. \& Grimm, U. (2012).
Mathematical diffraction of aperiodic structures.
\textit{Chem.\ Soc.\ Rev.} \textbf{41}, 6821--6843.
arXiv:1205.3633.
\item
Baake, M. \& Grimm, U. (2013). \emph{Aperiodic Order. Vol. 1:
A Mathematical Invitation}. Cambridge: Cambridge University Press.
\item
Baake, M. \& Grimm, U. (2014).
Squirals and beyond: Substitution tilings with singular continuous spectrum.
\emph{Ergodic Theory and Dynamical Systems} \textbf{34}, 1077--1102.
arXiv:1205.1384.
\item
Baake, M., Grimm, U. \& Nilsson, J. (2014).
Scaling of the Thue--Morse diffraction measure,
\textit{Acta Phys.\ Pol.\ A} \textbf{126}, 431--434.
arXiv:1311.4371.
\item
Baake, M. \& H\"{o}ffe, M. (2000).
Diffraction of random tilings:\ some rigorous results,
\textit{J.\ Stat.\ Phys.} \textbf{99}, 219--261.
arXiv:math-ph/9904005.
\item
Baake, M., Lenz, D. \& Richard, C. (1997).
Pure point diffraction implies zero entropy for Delone sets with
uniform cluster frequencies,
\textit{Lett.\ Math.\ Phys.} \textbf{82}, 61--77.
arXiv:0706.1677.
\item
Baake, M., Lenz, D. \& van Enter, A.C.D. (2013).
Dynamical versus diffraction spectrum for structures with
finite local complexity,
\textit{Preprint} arXiv:1307.5718.
\item
Baake, M. \& Moody, R.V. (2004).
Weighted Dirac combs with pure point diffraction,
\textit{J.\ reine angew.\ Math.\ (Crelle)} \textbf{573}, 61--94.
arXiv:math.MG/0203030.
\item
Baake, M., Moody, R.V. \& Schlottmann, M. (1998).
Limit-(quasi)periodic point sets as quasicrystals with
$p$-adic internal spaces,
\textit{J.\ Phys.\ A:\ Math.\ Gen.} \textbf{31}, 5755--5765.
arXiv:math-ph/9901008.
\item
Baake, M. \& van Enter, A.C.D. (2011).
Close-packed dimers on the line:\ diffraction versus
dynamical spectrum,
\textit{J.\ Stat.\ Phys.} \textbf{143}, 88--101.
arXiv:1011.1628.
\item
Bohr, H. (1947).
\textit{Almost Periodic Functions}, reprint,
Chelsea: New York.
\item
Bombieri, E. \& Taylor, J.E. (1986).
Which distributions of matter diffract? An initial investigation.
\textit{J.\ Phys.\ Colloques} \textbf{47}, C3-19--C3-28.
\item
Bragg, W.H. \& Bragg, W.L. (1913).
The reflection of X-rays by crystals.
\textit{Proc.\ Roy.\ Soc.\ A} \textbf{88}, 428--438.
\item
C\'{o}rdoba, A. (1989).
Dirac combs,
\textit{Lett.\ Math.\ Phys.} \textbf{17}, 191--196.
\item
Cowley, J.M. (1995).
\textit{Diffraction Physics}, 3rd edition,
North-Holland: Amsterdam.
\item
Dekking, F.M. (1978).
The spectrum of dynamical systems arising from substitutions of
constant length,
\textit{Z.\ Wahrscheinlichkeitsth.\ verw.\ Geb.} \textbf{41}, 221--239.
\item
de Bruijn, N.G. (1986).
Quasicrystals and their Fourier transforms.
\textit{Indag.\ Math.\ (Proc.)} \textbf{89}, 123--152.
\item
de Wolff, P.M. (1974).
The pseudo-symmetry of modulated crystal structures,
\textit{Acta Cryst.\ A} \textbf{30}, 777--785.
\item
Dworkin, S. (1993).
Spectral theory and x-ray diffraction,
\textit{J.\ Math.\ Phys.} \textbf{34}, 2965--2967.
\item
Elser, V. (1985).
Comment on ``Quasicrystals:\ A new class of
ordered structures'',
\textit{Phys.\ Rev.\ Lett.} \textbf{54}, 1730.
\item
Frettl\"{o}h, D. (2008).
Substitution tilings with statistical circular symmetry,
\textit{Europ.\ J.\ Combin.} \textbf{29}, 1881--1893.
arXiv:0704.2521.
\item
Friedrich, W., Knipping, P. \& von Laue, M.\ (1912).
Interferenz-Erscheinungen bei R\"{o}ntgen\-strahlen.
\textit{Sitzungsberichte der Kgl.\ Bayer.\ Akad.\ der Wiss.}, 303--322.
\item
Gr\"{u}nbaum, F.A. \& Moore, C.C. (1995).
The use of higher-order invariants in the determination of generalized
Patterson cyclotomic sets,
\textit{Acta Cryst.\ A} \textbf{51}, 310--323.
\item
Henley, C.L. (1999).
Random tiling models,
in:\ \textit{Quasicrystals:\ The State of the Art},
edited by D.~P.\ DiVincenzo \& P.~J.\ Steinhardt, 2nd edition, pp~459--560,
World Scientific: Singapore.
\item
Hof, A. (1995).
On diffraction by aperiodic structures,
\textit{Commun.\ Math.\ Phys.} \textbf{169}, 25--43.
\item
International Union of Crystallography (1992).
Report of the Executive Committee for 1991.
\emph{Acta Cryst. A} \textbf{48}, 922--946.
\item
Ishimasa, T., Nissen H.-U. \& Fukano, Y. (1985).
New ordered state between crystalline and amorphous in Ni-Cr
particles,
\textit{Phys.\ Rev.\ Lett.} \textbf{55}, 511--513.
\item
Janner, A. \& Janssen, T. (1977).
Symmetry of periodically distorted crystals,
\textit{Phys.\ Rev.\ B} \textbf{15}, 643--658.
\item
Janssen, T. \& Janner, A. (2014).
Aperiodic crystals and superspace concepts.
\emph{Acta Cryst. B} \textbf{70}, 617--651.
\item
Kakutani, S. (1972).
Strictly ergodic symbolic dynamical systems,
in: \textit{Proc.\ 6th Berkeley Symposium on Math.\ Statistics
and Probability}, edited by L.~M.\ LeCam, J.\ Neyman \&
E.~L.\ Scott, pp.\ 319--326.
Berkeley: University of California Press.
\item
Kurchan, J.\ \& Levine, D.\ (2011).
Order in glassy systems,
\textit{J.\ Phys.\ A:\ Math.\ Theor.} \textbf{44}, 035001.
\item
Lenz, D. \& Moody, R.V. (2009).
Extinctions and correlations for uniformly discrete point
processes with pure point dynamical spectra,
\textit{Commun.\ Math.\ Phys.} \textbf{289}, 907--923.
arXiv:0902.0567.
\item
Lenz, D. \& Moody, R.V. (2011).
Stationary processes with pure point diffraction.
\textit{Preprint} arXiv:1111.3617.
\item
Lenz, D. \& Strungaru, N. (2009).
Pure point spectrum for measure dynamical systems on
locally compact Abelian groups,
\textit{J.\ Math.\ Pures Appl.} \textbf{92}, 323--341.
arXiv:0704.2498.
\item
Lifshitz, R.\ (2003).
Quasicrystals: A matter of definition,
\textit{Foundations of Physics} \textbf{33}, 1703--1711.
\item
Lifshitz, R.\ (2007).
What is a crystal? \textit{Z.\ Kristallogr.} \textbf{222}, 313--317.
\item
Lifshitz, R.\ (2011).
Symmetry breaking and order in the age of quasicrystals,
\textit{Isr.\ J.\ Chem.} \textbf{51}, 1156--1167.
\item
Meyer, Y. (1972).
\textit{Algebraic Numbers and Harmonic Analysis},
North Holland: Amsterdam.
\item
Moody, R.V. (2000).
Model sets:\ A survey, in:\
\textit{From Quasicrystals to More Complex Systems}, edited by
F.~Axel, F.~D\'enoyer \& J.~P.~Gazeau, pp.\ 145--166,
EDP Sciences: Les Ulis, and Springer: Berlin.
arXiv:math.MG/0002020.
\item
Moody, R.V., Postnikoff D. \& Strungaru N. (2006).
Circular symmetry of pinwheel diffraction,
\textit{Ann.\ Henri Poincar\'{e}} \textbf{7}, 711--730.
\item
Moody, R.V. \& Strungaru, N. (2004).
Point sets and dynamical systems in the autocorrelation topology,
\textit{Canad.\ Math.\ Bull.} \textbf{47}, 82--99.
\item
Mumford, D. \& Desolneux, A. (2010).
\textit{Pattern Theory: The Stochastic Analysis of Real-World Signals},
A K Peters: Natick, MA.
\item
Patterson, A.L. (1944).
Ambiguities in the X-ray analysis of crystal structures.
\textit{Phys.\ Rev.} \textbf{65}, 195--201.
\item
Queff\'{e}lec, M. (2010).
\textit{Substitution Dynamical Systems --- Spectral Analysis},
LNM 1294, 2nd edition, Springer: Berlin.
\item
Radin, C. (1994).
The pinwheel tilings of the plane,
\textit{Ann.\ Math.} \textbf{139}, 661--702.
\item
Radin, C. (1997).
Aperiodic tilings, ergodic theory and rotations, in:
\textit{The Mathematics of Long-Range Aperiodic Order},
edited by R.V.~Moody, pp.~499--519,
Kluwer: Dordrecht.
\item
Reed, M. \& Simon, B. (1980).
\textit{Methods of Modern Mathematical Physics I:\ Functional Analysis},
2nd edition, Academic Press: San Diego.
\item
Robinson, E.A. (1999).
On the table and the chair,
\textit{Indag.\ Math.} \textbf{10}, 581--599.
\item
Rudin, W. (1959).
Some theorems on Fourier coefficients,
\textit{Proc.\ Amer.\ Math.\ Soc.} \textbf{10}, 855--859.
\item
Sasa S.\ (2012a).
Statistical mechanics of glass transition in lattice molecule models,
\textit{J.\ Phys.\ A:\ Math.\ Theor.} \textbf{45}, 035002.
\item
Sasa S.\ (2012b).
Pure glass in finite dimensions,
\textit{Phys.\ Rev.\ Lett.} \textbf{109}, 165702.
\item
Schlottmann, M. (2000).
Generalised model sets and dynamical systems,
in:\ \textit{Directions in Mathematical Quasicrystals},
CRM Monograph Series, vol.\ 13,
edited by M.~Baake \& R.~V.~Moody, pp.~143--159,
AMS: Providence, RI.
\item
Shapiro, H.S. (1951).
\textit{Extremal Problems for Polynomials and Power Series},
Masters Thesis, MIT: Boston.
\item
Shechtman, D., Blech, I., Gratias, D. \& Cahn, J.W. (1984).
Metallic phase with long-range orientational order and no
translational symmetry,
\textit{Phys.\ Rev.\ Lett.} \textbf{53}, 1951--1953.
\item
Steurer W. \& Deloudi S. (2009).
\textit{Crystallography of Quasicrystals:\ Concepts, Methods and
Structures},
Springer: Berlin.
\item
Strungaru, N. (2005).
Almost periodic measures and long-range order in Meyer sets,
\textit{Discr.\ Comput.\ Geom.} \textbf{33}, 483--505.
\item
van Enter, A.C.D. \& Mi\c{e}kisz, J. (1992).
How should one define a weak crystal?
\textit{J.\ Stat.\ Phys.} \textbf{66}, 1147--1153.
\item
von Laue, M.\ (1912).
Eine quantitative Pr\"{u}fung der Theorie f\"{u}r die Interferenz-Erscheinungen
bei R\"{o}ntgenstrahlen.
\textit{Sitzungsberichte der Kgl.\ Bayer.\ Akad.\ der Wiss.}, 363--373.
\item
Wolff, G.\ \& Levine, D.\ (2014).
Ordered amorphous spin system,
\textit{Europhys.\ Lett.} \textbf{107}, 17005.
\item
Wolny, J., Kozakowski, B., Kuczera, P.,
Strzalka, R. \& Wnek, A. (2011).
Real space structure factor for different quasicrystals,
\textit{Israel J.\ Chem.} \textbf{51}, 1275--1291.
\end{trivlist}
\end{document}
VK is a doctoral student at the University of Oxford funded by Samsung R\&D Institute UK through the AIMS program.
SW has received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation programme (grant agreement number 637713).
The experiments were made possible by a generous equipment grant from NVIDIA.
The authors would like to thank Henry Kenlay and Marc Brockschmidt for useful discussions on~\glspl{gnn}.
\section{Background}
We now describe the necessary background for the rest of the paper.
\subsection{Reinforcement Learning}
\label{sec:mdp-formalism}
A~\gls{mdp} is a tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{T}, \rho_0 \rangle$.
The first two elements define the set of states $\mathcal{S}$ and the set of actions $\mathcal{A}$.
The next element defines the reward function $\mathcal{R}(s,a,s')$ with $s, s'\in \mathcal{S}$ and $a \in \mathcal{A}$.
$\mathcal{T}(s'|s,a)$ is the probability distribution function over states $s' \in \mathcal{S}$ after taking action $a$ in state $s$.
The last element of the tuple $\rho_0$ is the distribution over initial states.
Task and environment are synonyms for \glspl{mdp} in this work.
A policy $\pi(a|s)$ is a mapping from states to distributions over actions.
The goal of an \meth{RL} agent is to find a policy that maximises the expected discounted cumulative return $J = \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^t r_t\big]$, where $\gamma \in [0,1)$ is a discount factor, $t$ is the discrete environment step and $r_t$ is the reward at step $t$.
In the \gls{mtrl} setting, the agent aims to maximise the average performance across $N$ tasks: $\frac{1}{N}\sum_{i=1}^N{J_i}$.
We use \emph{\gls{mtrl} return} to denote the average performance across the tasks.
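As a concrete illustration (not part of the formal development), the discounted return of one episode and the \gls{mtrl} return can be computed as:

```python
def discounted_return(rewards, gamma):
    """Discounted cumulative return sum_t gamma^t r_t of one episode,
    accumulated backwards for convenience."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def mtrl_return(per_task_returns):
    """MTRL return: the average performance across the N tasks."""
    return sum(per_task_returns) / len(per_task_returns)

print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))   # 1 + 0.5 + 0.25 = 1.75
print(mtrl_return([1.75, 0.25]))                       # 1.0
```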
In this paper, we assume that the elements of the state and action sets of an \gls{mdp} are of constant dimension: $s \in \mathop{\mathbb{R}}^k,\forall s \in \mathcal{S}$, and $a \in \mathop{\mathbb{R}}^{k'},\forall a \in \mathcal{A}$.
We use $dim(\mathcal{S})=k$ and $dim(\mathcal{A})=k'$ to denote this dimensionality, which can differ amongst \glspl{mdp}.
We consider two tasks \gls{mdp}$_1$ and \gls{mdp}$_2$ as \emph{incompatible} if the dimensionality of their state or action spaces disagree, i.e.,
$dim(\mathcal{S}_1)\neq dim(\mathcal{S}_2)$ or $dim(\mathcal{A}_1)\neq dim(\mathcal{A}_2)$ with the subscript denoting a task index.
In this case, \gls{mtrl} policies or value functions
cannot be represented by an \gls{mlp},
which requires inputs of fixed dimension.
We make no additional assumptions about the semantics of the state and action set elements and focus only on the dimension mismatch.
Our approach, as well as the baselines in this work~\citep{wang2018nervenet,huang2020smp}, use~\gls{pg} methods~\citep{peters2006policy}.
\gls{pg} methods optimise a policy using gradient ascent on the objective: $\theta_{t+1} = \theta_t + \alpha \nabla_{\theta}J |_{\theta=\theta_t}$, where $\theta$ parameterises a policy.
Often, to reduce variance in the gradient estimates, one learns a critic so that the policy gradient becomes $\nabla_{\theta}J(\theta) \!=\! \mathbb{E} \big[\sum_{t} A^{\pi}_t \, \nabla_{\theta} \log\pi_{\theta}(a_t|s_t) \big]$, where $A^{\pi}_t$ is an estimate of the advantage function (e.g., TD residual $r_t + \gamma V^{\pi}(s_{t+1}) - V^{\pi}(s_{t})$).
The state-value function $V^{\pi}(s)$ is the expected discounted return a policy $\pi$ receives starting at state $s$.
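For concreteness, a minimal sketch (with hypothetical reward and value numbers) of the TD-residual advantage estimates that weight the log-probability terms in the policy gradient:

```python
# Sketch of TD-residual advantages A_t = r_t + gamma*V(s_{t+1}) - V(s_t),
# using hypothetical value predictions; these weight the grad log pi terms.

def td_advantages(rewards, values, gamma=0.99):
    """`values` holds V(s_0..s_T), one more entry than `rewards` (bootstrap)."""
    return [r + gamma * values[t + 1] - values[t]
            for t, r in enumerate(rewards)]

rewards = [1.0, 0.0, 1.0]
values = [0.5, 0.4, 0.8, 0.0]  # V(s_3) = 0 for a terminal state
print(td_advantages(rewards, values, gamma=1.0))  # approx [0.9, 0.4, 0.2]
```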
\citet{wang2018nervenet} use PPO~\citep{schulman2017proximal}, which restricts a policy update to avoid instabilities from drastic changes in the policy behaviour.
\citet{huang2020smp} use \meth{TD3}~\citep{fujimoto2018addressing}, a~\gls{pg} method based on
\meth{DDPG}~\citep{lillicrap2015continuous}.
\subsection{Graph Neural Networks for Incompatible Multitask RL}
\def\concat{\bar}
\glspl{gnn} can address incompatible environments because they can process graphs of arbitrary sizes and topologies.
A~\gls{gnn} is a function that takes a labelled graph as input
and outputs a graph $\Set G'$ with different labels but the same topology.
Here, a labelled graph $\Set G := \langle \Set V, \Set E \rangle$
consists of a set of vertices $v^i \in \Set V$,
labelled with vectors $\vec v^i \in \mathop{\mathbb{R}}^{m_v}$ and
a set of directed edges $e^{ij} \in \Set E$
from vertex $v^i$ to $v^j$,
labelled with vectors $\vec e^{ij} \in \mathop{\mathbb{R}}^{m_e}$.
The output graph $\Set G'$
has the same topology but the labels can
be of different dimensionality than the input,
that is, $\vec v'^i \in \mathop{\mathbb{R}}^{m'_v}$ and
$\vec e'^{ij} \in \mathop{\mathbb{R}}^{m'_e}$.
By graph topology we mean the connectivity of the graph, which can be represented by an adjacency matrix: a binary matrix whose element $a_{ij}$ equals one iff there is an edge $e_{ij} \in \Set E$ connecting vertices $v_i, v_j \in \Set V$.
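As a small self-contained example (arbitrary edge list, not a specific robot morphology), the adjacency matrix can be built from a set of directed edges as follows:

```python
# Adjacency matrix of a directed graph: a[i][j] = 1 iff edge (i, j) exists.

def adjacency(n, edges):
    a = [[0] * n for _ in range(n)]
    for i, j in edges:
        a[i][j] = 1
    return a

print(adjacency(3, [(0, 1), (1, 2), (2, 0)]))  # a directed 3-cycle
```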
A \gls{gnn} computes the output labels for entities of type $k$
by parameterised {\em update functions} $\phi_\psi^k$ represented by neural networks that can be learnt end-to-end via backpropagation.
These updates can depend on a varying number of edges or vertices,
which have to be summarised first using {\em aggregation functions} that we denote $\rho$.
Besides being able to operate on sets of varying size, aggregation functions should be permutation invariant.
Examples of such aggregation functions include summation, averaging and $\max$ or $\min$ operations.
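A minimal check (toy scalar messages) that such aggregators are insensitive to the ordering of a variable-size set of incoming messages:

```python
# Permutation-invariant aggregation over a variable-size set of messages.

def aggregate(messages, how="mean"):
    if how == "sum":
        return sum(messages)
    if how == "mean":
        return sum(messages) / len(messages)
    if how == "max":
        return max(messages)
    raise ValueError(how)

msgs = [3.0, 1.0, 2.0]
# Reordering the messages leaves the aggregate unchanged.
assert aggregate(msgs, "sum") == aggregate(list(reversed(msgs)), "sum")
assert aggregate(msgs, "max") == aggregate([2.0, 3.0, 1.0], "max")
```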
{Incompatible} \gls{mtrl} for continuous control implies learning a common policy for a set of agents that differ in the number of limbs and the connectivity of those limbs, i.e., in \emph{morphology}.
More precisely, a set of incompatible continuous control environments is a set of incompatible \glspl{mdp} as described in Section~\ref{sec:mdp-formalism}.
When a state is represented as a graph, each node label contains features of its corresponding limb, e.g., limb type, coordinates, and angular velocity.
Similarly, each component of an action corresponds to a node, with the label interpreted as the torque for the corresponding joint to apply.
The typical reward function of a MuJoCo~\citep{todorov2012mujoco} environment includes a reward for staying alive, distance covered, and a penalty for action magnitudes.
We now describe two existing approaches to incompatible control:
\meth{NerveNet}~\citep{wang2018nervenet} and \gls{smp}~\citep{huang2020smp}.
\subsubsection{NerveNet}
In \meth{NerveNet}, the input observations are first encoded via an \gls{mlp} that processes each node's labels as a batch element:
$\vec v^{i} \leftarrow \phi_\chi\big(\, {\vec v}^i\big), \forall v^{i} \in \Set V$.
After that, the message-passing block of the model performs the following computations (in order):
\begin{equation*} \label{eq:nervenet_update}
\begin{array}{rcl}
\vec e'^{ij}
&\leftarrow&
\phi^e_\psi\big(\, {\vec v}^i\big)
\hfill, \forall e^{ij} \in \Set E \,,\\
\vec v^i \;
&\leftarrow&
\phi^v_\xi\big(\, {\vec v}^i, \,
\rho\{\vec e'^{ki} \,|\, e^{ki} \in \Set E\}
\big)
\quad, \forall v^i \in \Set V \,.
\end{array}
\end{equation*}
The edge updater $\phi^e_{\psi}$ in \meth{NerveNet} is an \gls{mlp} which does not take the receiver's state into account.
Using only one message pass restricts the learned function to local computations on the graph.
The node updater $\phi^v_{\xi}$ is a \gls{gru}~\citep{cho2014learning} which maintains the internal state when doing multiple message-passing iterations, and takes the aggregated outputs of the edge updater for all incoming edges as inputs.
After the message-passing stage, the \gls{mlp} decoder takes the states of the nodes and, like the encoder, independently processes them, emitting scalars used as the mean for the normal distribution from which actions are sampled: $\vec v_{dec}^{i} \leftarrow \phi_\eta\big(\, {\vec v}^i\big), \forall v^{i} \in \Set V$.
The standard deviation of this distribution is a separate state-independent vector with one scalar per action.
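A schematic, heavily simplified version of one such message pass (scalar node labels; hypothetical linear toy updaters stand in for the learnt \gls{mlp} edge updater and \gls{gru} node updater of the actual model):

```python
# One NerveNet-style message pass: messages depend only on the sender,
# incoming messages are mean-aggregated, then each node updates its label.
# Linear toy updaters replace the learnt MLP/GRU of the actual model.

def message_pass(node_labels, edges, w_edge=0.5, w_self=1.0, w_agg=0.1):
    messages = {(i, j): w_edge * node_labels[i] for (i, j) in edges}
    new_labels = {}
    for j, v in node_labels.items():
        incoming = [m for (src, dst), m in messages.items() if dst == j]
        agg = sum(incoming) / len(incoming) if incoming else 0.0
        new_labels[j] = w_self * v + w_agg * agg
    return new_labels

labels = {0: 1.0, 1: 2.0, 2: 3.0}         # e.g. torso, hip, knee (hypothetical)
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]  # bidirectional chain 0-1-2
print(message_pass(labels, edges))
```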
\subsubsection{Shared Modular Policies}
\gls{smp} is a variant of a \gls{gnn} that operates only on trees. Computation is performed in two stages: bottom-up and top-down.
In the first stage, information propagates level by level from leaves to the root with parents aggregating information from their children.
In the second stage, information propagates from parents to the leaves with parents emitting multiple messages, one per child.
The policy emits actions at the second stage of the computation together with the downstream messages.
Instead of a permutation invariant aggregation, the messages are concatenated.
This, as well as separate messages for the children, also injects structural bias to the model, e.g., separating the messages for the left and right parts of robots with bilateral symmetry.
In addition, its message-passing schema depends on the morphology and the choice of the root node.
In fact, \citet{huang2020smp} show that the root node choice can affect performance by 15\%.
\gls{smp} trains a separate model for the actor and critic.
An actor outputs one action per non-root node.
The critic outputs a scalar per node as well.
When updating a critic, a value loss is computed independently per each node with targets using the same scalar reward from the environment.
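A sketch of this per-node value loss (hypothetical predictions; mean squared error assumed as the regression loss):

```python
# Per-node value loss: every node's scalar prediction is regressed against
# the same environment-level return target.

def per_node_value_loss(node_values, target):
    return sum((v - target) ** 2 for v in node_values) / len(node_values)

print(per_node_value_loss([0.9, 1.2, 1.0], target=1.0))  # approx 0.0167
```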
\subsection{Transformers}
Transformers can be seen as \glspl{gnn} applied to fully connected graphs with the attention as an edge-to-vertex aggregation operation~\citep{battaglia2018relational}.
Self-attention used in transformers is an associative memory-like mechanism that first projects the feature vector of each node $\vec v^i \in \mathop{\mathbb{R}}^{m_v}$ into three vectors: query $\vec q_i := \vec \Theta \vec v^{i} \in \mathop{\mathbb{R}}^{d}$, key $\vec k_i := \bar{\vec \Theta} \vec v^{i} \in \mathop{\mathbb{R}}^d$ and value $\hat{\vec v}_i := \hat{\vec \Theta} \vec v^i \in \mathop{\mathbb{R}}^{m_v}$.
Parameter matrices $\vec \Theta, \bar{\vec \Theta} \text{, and } \hat{\vec
\Theta}$ are learnt.
The query of the receiver $v_i$ is compared to the keys of the senders using a dot product.
The resulting values $\vec w_i$ are used as weights in the weighted sum of all the value vectors in the graph.
The computation proceeds as follows:
\begin{equation}
\begin{array}{rcl}
\vec w_i &:=& \text{softmax}\big(\frac{[\vec k_1, \ldots, \vec k_n]^\top \vec q_i}{\sqrt{d}}\big) \\
\vec v'_i &:=& [\hat{\vec v}_1, \ldots, \hat{\vec v}_n] \vec w_{i}
\end{array}, \forall v_i \in \mathcal{V} \,,
\label{eq:attention}
\end{equation}
with $[x_1, x_2, \ldots, x_n]$ being an $\mathop{\mathbb{R}}^{k \times n}$ matrix of concatenated vectors $x_i \in \mathop{\mathbb{R}}^k$.
Often, multiple attention heads, i.e., $\vec \Theta, \bar{\vec \Theta} \text{, and } \hat{\vec \Theta}$ matrices, are used to learn different interactions between the nodes and mitigate the consequences of unlucky initialisation.
The outputs of the multiple heads are concatenated and then projected back to the required dimensionality.
A transformer block is a combination of an attention block and a feedforward layer with a possible normalisation between them.
In addition, there are residual connections from the input to the attention output and from the output of the attention to the feedforward layer output.
Transformer blocks can be stacked together to take higher order dependencies into account, i.e., reacting not only to the features of the nodes, but how the features of the nodes change after applying a transformer block.
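For illustration, the attention computation of Equation~\ref{eq:attention} with one-dimensional queries, keys, and values per node, so that the learnt projection matrices reduce to toy scalars:

```python
import math

# Scaled dot-product self-attention with d = 1 and scalar projections
# theta_q, theta_k, theta_v standing in for the learnt matrices.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(v, theta_q=1.0, theta_k=1.0, theta_v=1.0, d=1):
    q = [theta_q * x for x in v]
    k = [theta_k * x for x in v]
    vals = [theta_v * x for x in v]
    out = []
    for qi in q:
        w = softmax([kj * qi / math.sqrt(d) for kj in k])  # attention weights
        out.append(sum(wi * vj for wi, vj in zip(w, vals)))
    return out

out = self_attention([0.0, 1.0, 2.0])
print(out)  # out[0] is the plain mean; larger queries attend to larger keys
```

Each output is a convex combination of the value vectors, with weights given by the softmax over scaled query-key dot products.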
\section{Reproducibility}
\label{sec:reproducibility}
We initially took the transformer implementation from the Official Pytorch Tutorial~\citep{pytorchTransformerTutorial} which uses \texttt{TransformerEncoderLayer} from
Pytorch~\citep{paszke2017automatic}.
We modified it for the regression task instead of classification, and removed masking and the positional encoding.
Table~\ref{tab:hyperparameters} provides all the hyperparameters needed to replicate our experiments.
\begin{table*}[h]
\centering
\caption{Hyperparameters of our experiments}
\label{tab:hyperparameters}
\begin{tabular}{lll}
\textbf{Hyperparameter}&\textbf{Value}&\textbf{Comment}\\
\midrule
\textit{\meth{Amorpheus}{}}&&\\
\midrule
-- Learning rate & 0.0001&\\
-- Gradient clipping & 0.1&\\
-- Normalisation & LayerNorm & As an argument to \texttt{TransformerEncoder} in \texttt{torch.nn}\\
-- Attention layers & 3&\\
-- Attention heads & 2&\\
-- Attention hidden size & 256&\\
-- Encoder output size & 128&\\
\midrule
Training&&\\
\midrule
-- Runs & 3 & per benchmark\\
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Full list of environments used in this work.}
\label{tab:environments}
\begin{tabular}{lll}
\textbf{Environment}&\textbf{Training}&\textbf{Zero-shot testing}\\
\midrule
\texttt{Walker++}&&\\
\midrule
& \texttt{walker\_2\_main} &\texttt{walker\_3\_main}\\
& \texttt{walker\_4\_main} &\texttt{walker\_6\_main}\\
& \texttt{walker\_5\_main} &\\
& \texttt{walker\_7\_main} &\\
\midrule
\texttt{humanoid++}&&\\
\midrule
& \texttt{humanoid\_2d\_7\_left\_arm} &\texttt{humanoid\_2d\_7\_left\_leg}\\
& \texttt{humanoid\_2d\_7\_lower\_arms} & \texttt{humanoid\_2d\_8\_right\_knee}\\
& \texttt{humanoid\_2d\_7\_right\_arm} &\\
& \texttt{humanoid\_2d\_7\_right\_leg} &\\
& \texttt{humanoid\_2d\_8\_left\_knee} &\\
& \texttt{humanoid\_2d\_9\_full} &\\
\midrule
\texttt{Cheetah++}&&\\
\midrule
& \texttt{cheetah\_2\_back} & \texttt{cheetah\_3\_balanced}\\
& \texttt{cheetah\_2\_front} &\texttt{cheetah\_5\_back}\\
& \texttt{cheetah\_3\_back} &\texttt{cheetah\_6\_front}\\
& \texttt{cheetah\_3\_front} &\\
& \texttt{cheetah\_4\_allback} &\\
& \texttt{cheetah\_4\_allfront} &\\
& \texttt{cheetah\_4\_back} &\\
& \texttt{cheetah\_4\_front} &\\
& \texttt{cheetah\_5\_balanced} &\\
& \texttt{cheetah\_5\_front} &\\
& \texttt{cheetah\_6\_back} &\\
& \texttt{cheetah\_7\_full} &\\
\midrule
\texttt{Cheetah-Walker-}&&\\
\texttt{-Humanoid}&&\\
\midrule
&All in the column above&All in the column above\\
\midrule
\texttt{Hopper++}&&\\
\midrule
&\texttt{hopper\_3}&\\
&\texttt{hopper\_4}&\\
&\texttt{hopper\_5}&\\
\midrule
\texttt{Cheetah-Walker-}&&\\
\texttt{-Humanoid-Hopper}&&\\
\midrule
&All in the column above&All in the column above\\
\midrule
\texttt{Walkers} from &&\\
\citet{wang2018nervenet}&&\\
\midrule
&\texttt{Ostrich}&\\
&\texttt{HalfCheetah}&\\
&\texttt{FullCheetah}&\\
&\texttt{Hopper}&\\
&\texttt{HalfHumanoid}&\\
\end{tabular}
\end{table*}
\meth{Amorpheus}{} makes use of gradient clipping and a smaller learning rate.
We found that~\gls{smp} also performs better with the decreased learning rate ($0.0001$), and we use it throughout this work.
Figure~\ref{fig:smp-hyperparameters} demonstrates the effect of a smaller learning rate on \texttt{Walker++}.
All other \gls{smp} hyperparameters are as reported in the original paper with the two-directional message passing.
\begin{figure}[h]
\centering
\begin{minipage}[t]{.49\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{fig/walkers_mtrl_smp_hyperparams.pdf}
\caption{A smaller learning rate makes~\gls{smp} yield better results on \texttt{Walker++}.}
\label{fig:smp-hyperparameters}
\end{minipage}
\hfill
\begin{minipage}[t]{.49\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{fig/nervenet_with_wo_limit.pdf}
\caption{Removing the return limit slightly deteriorates the performance of NerveNet on Walkers.}
\label{fig:nervenet-walkers-w-wo-limit}
\end{minipage}
\end{figure}
\citet{wang2018nervenet} add an artificial return limit of 3800 for their Walkers environment.
We remove this limit and compare the methods without it.
For NerveNet, we plot the results with the better-performing option.
Figure~\ref{fig:nervenet-walkers-w-wo-limit} compares the two options.
\clearpage
\section{Morphology ablations}
\label{ref:app-morphology-ablations}
Figure~\ref{fig:topologies} shows examples of graph topologies we used in structure ablation experiments.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/topologies-morphology.pdf}
\caption{Morphology}
\label{fig:topologies-morphology}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/topologies-star.pdf}
\caption{Star}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/topologies-line.pdf}
\caption{Line}
\end{subfigure}
\hfill
\caption{Examples of graph topologies used in the structure ablation experiments.}
\label{fig:topologies}
\end{figure}
\clearpage
\section{Attention Mask Analysis}
\subsection{Evolution of masks throughout the training process}
Figures \ref{fig:mask-evolution-1}, \ref{fig:mask-evolution-2}
and \ref{fig:mask-evolution-3} demonstrate the evolution
of \meth{Amorpheus}~attention masks during training.
\label{sec:mask-evolution}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{fig/mask_variety_0.pdf}
\caption{\texttt{Walker++} masks for the 3 attention layers on \texttt{Walker}-\texttt{7} at the beginning of training.}
\label{fig:mask-evolution-1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{fig/mask_variety_5.pdf}
\caption{\texttt{Walker++} masks for the 3 attention layers on \texttt{Walker}-\texttt{7} after 2.5 mil frames.}
\label{fig:mask-evolution-2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{fig/mask_variety.pdf}
\caption{\texttt{Walker++} masks for the 3 attention layers on \texttt{Walker}-\texttt{7} at the end of training.}
\label{fig:mask-evolution-3}
\end{figure}
\clearpage
\subsection{Attention masks cumulative change}
\label{sec:mask-cumulative-change}
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.325\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/attention_cumulative_change_1.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.325\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/attention_cumulative_change_2.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.325\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/attention_cumulative_change_3.pdf}
\end{subfigure}
\caption{Absolute cumulative change in the attention masks for three different models on \texttt{Walker-7}.}
\label{fig:walker-cumulative-change}
\end{figure}
\section{Generalisation results}
\label{sec:generalisation-results}
\begin{table}[h]
\centering
\caption{Initial results on generalisation. The numbers show the average performance of three seeds evaluated on 100 rollouts and standard error of the mean. While the average values are higher for \meth{Amorpheus}{} on 5 out of 7 benchmarks, high variance of both methods might be indicative of instabilities in generalisation behaviour due to large differences between the training and testing tasks.}
\label{tab:generalisation-results}
\begin{tabular}{lll}
\hline
\textbf{} & \multicolumn{1}{l}{\meth{amorpheus}} & \multicolumn{1}{l}{\meth{smp}} \\ \hline
\texttt{walker}-\texttt{3}-\texttt{main} & \textbf{666.24} (133.66) & 175.65 (157.38) \\
\texttt{walker}-\texttt{6}-\texttt{main} & \textbf{1171.35} (832.91) & 729.26 (135.60) \\
\midrule
\texttt{humanoid}-\texttt{2d}-\texttt{7}-\texttt{left}-\texttt{leg} & \textbf{2821.22} (1340.29) & 2158.29 (785.33) \\
\multicolumn{1}{r}{\texttt{humanoid}-\texttt{2d}-\texttt{8}-\texttt{right}-\texttt{knee}} & \textbf{2717.21} (624.80 ) & 327.93 (125.75) \\
\midrule
\texttt{cheetah}-\texttt{3}-\texttt{balanced} & \textbf{474.82} (74.05) & 156.16 (33.00) \\
\texttt{cheetah}-\texttt{5}-\texttt{back} & \multicolumn{1}{l}{3417.72 (306.84)} & \multicolumn{1}{l}{\textbf{3820.77} (301.95)} \\
\texttt{cheetah}-\texttt{6}-\texttt{front} & 5081.71 (391.08) & \textbf{6019.07} (506.55) \\ \hline
\end{tabular}%
\end{table}
\section{Residual Connection Ablation}
We use the residual connection in \meth{Amorpheus}{} as a safety mechanism to prevent nodes from forgetting their own observations.
To check that \meth{Amorpheus}{}'s improvements do not come from the residual connection alone, we performed an ablation.
As Figure~\ref{fig:residual-ablation} shows, we cannot attribute the success of our method to this component alone.
The high variance on \texttt{Humanoid++} arises because one seed started to improve much later, which lowered the average performance.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/walkers_mtrl_ablation.pdf}
\caption{Walker++}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/cheetah_mtrl_ablation.pdf}
\caption{Cheetah++}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/humanoids_mtrl_ablation.pdf}
\caption{Humanoid++}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/hopper_mtrl_ablation.pdf}
\caption{Hopper++}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/walker_humanoids_mtrl_ablation.pdf}
\caption{\scriptsize Walker-Humanoid++}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/walker_humanoids_hopper_mtrl_ablation.pdf}
\caption{\scriptsize Walker-Humanoid-Hopper++}
\end{subfigure}
\begin{subfigure}[t]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/cheetah_walker_humanoids_mtrl_ablation.pdf}
\caption{Cheetah-Walker-Humanoid++}
\end{subfigure}
\begin{subfigure}[t]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/cheetah_walker_humanoids_hopper_mtrl_ablation.pdf}
\caption{\scriptsize Cheetah-Walker-Humanoid-Hopper++}
\end{subfigure}
\caption{Residual connection ablation experiment.}
\label{fig:residual-ablation}
\end{figure}
\section{\textsc{Amorpheus}{}}
\label{sec:experiments}
Inspired by the results above, we propose \meth{Amorpheus}{}, a transformer-based method for incompatible~\gls{mtrl} in continuous control.
\meth{Amorpheus}{} is motivated by the hypothesis that any benefit \glspl{gnn} can extract from the morphological domain knowledge encoded in the graph is outweighed by the difficulty that the graph creates for message passing.
In a sparse graph, crucial state information must be communicated across multiple hops, which we hypothesise is difficult to learn in practice.
\meth{Amorpheus}{} belongs to the encode-process-decode family of architectures~\citep{battaglia2018relational} with a transformer at its core.
Since transformers can be seen as~\glspl{gnn} operating on fully connected graphs, this approach allows us to learn a message passing schema for each state separately, and limits the number of message passes needed to propagate sufficient information through the graph.
Multi-hop message propagation in the presence of aggregation, which could cause
problems with gradient propagation and information loss, is no longer required.
\begin{figure}[b!] %
\centering
\includegraphics[width=.7\textwidth]{fig/amorpheus-amorpheus-2.pdf}
\caption{\meth{Amorpheus}{} architecture. Lines with squares at the end denote concatenation. Arrows going separately through the encoder and decoder denote that rows of the input matrix are processed independently as batch elements. Dashed arrows denote message-passing in a transformer block. The diagram depicts the policy network; the critic has an identical architecture, with the decoder outputs interpreted as value function values.}
\label{fig:amorpheus-diagram}
\end{figure}
We implement both the actor and the critic in the~\gls{smp}
codebase~\citep{huang2020smp} and make our implementation available online at \url{https://github.com/yobibyte/amorpheus}.
As in~\gls{smp}, there is no weight sharing between the actor and the critic.
Both of them consist of three parts: a linear encoder, a transformer in the middle, and the output decoder \gls{mlp}.
Figure~\ref{fig:amorpheus-diagram} illustrates the \meth{Amorpheus}{} architecture (policy).
The encoder and decoder process each node independently, as if they are different elements of a mini-batch.
Like~\gls{smp}, the policy network has one output per graph node.
The critic has the same architecture as in Figure~\ref{fig:amorpheus-diagram} and, as in~\citet{huang2020smp}, each critic node outputs a scalar, with the value loss computed independently per node.
Similarly to~\meth{NerveNet} and~\gls{smp}, \meth{Amorpheus}{} is modular and can be used in incompatible environments, including those not seen in training.
In contrast to~\gls{smp}, which is constrained by the maximum number of children per node seen at model initialisation, \meth{Amorpheus}{} can be applied to any morphology with no constraints on the physical connectivity.
Instead of the one-hot encoding used in natural language processing, we apply a linear layer to the node observations.
Each node observation uses the same state representation as~\gls{smp} and includes the limb type (e.g., hip or shoulder), the position of the limb (including its $x$ coordinate relative to the torso), positional and rotational velocities, rotations, and the joint angle together with its admissible range, normalised to $[0,1]$.
We add residual connections from the input features to the decoder output to
avoid the nodes forgetting their own features by the time the decoder
independently computes the actions.
Both actor and critic use two attention heads for each of the three transformer layers.
Layer Normalisation~\citep{ba2016layer} is a crucial component of transformers which we also use in \meth{Amorpheus}{}.
See Appendix~\ref{sec:reproducibility} for more details on the implementation.
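Schematically, the encode-process-decode computation with the input residual looks as follows (scalar node features; the lambdas are hypothetical stand-ins for the learnt encoder, transformer, and decoder, and the residual is shown as addition rather than the concatenation used in the model):

```python
# Encode-process-decode sketch: per-node encoding, joint message passing,
# then per-node decoding with a residual from the raw node observations.

def forward(node_obs, encode, process, decode):
    encoded = [encode(x) for x in node_obs]      # nodes encoded independently
    processed = process(encoded)                 # message passing over all nodes
    return [decode(h + x) for h, x in zip(processed, node_obs)]

actions = forward(
    node_obs=[0.1, -0.2, 0.3],                   # one observation per limb
    encode=lambda x: 2.0 * x,
    process=lambda hs: [h + sum(hs) / len(hs) for h in hs],  # toy mixing
    decode=lambda h: 0.5 * h,
)
print(actions)  # one action per node
```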
\subsection{Experimental Results}
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/walkers_mtrl.pdf}
\caption{Walker++}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/cheetah_mtrl.pdf}
\caption{Cheetah++}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/humanoids_mtrl.pdf}
\caption{Humanoid++}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/hopper_mtrl.pdf}
\caption{Hopper++}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/walker_humanoids_mtrl.pdf}
\caption{\scriptsize Walker-Humanoid++}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/walker_humanoids_hopper_mtrl.pdf}
\caption{\scriptsize Walker-Humanoid-Hopper++}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/cheetah_walker_humanoids_mtrl.pdf}
\caption{Cheetah-Walker-Humanoid++}
\label{fig:mtrl-cwh}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/cheetah_walker_humanoids_hopper_mtrl.pdf}
\caption{\scriptsize Cheetah-Walker-Humanoid-Hopper++}
\label{fig:mtrl-cwhh}
\end{subfigure}
\caption{\meth{Amorpheus}{} consistently outperforms~\gls{smp}
on \gls{mtrl} benchmarks from \citet{huang2020smp},
supporting our hypothesis that no explicit structural information
is needed to learn a successful MTRL policy
and that facilitated message-passing procedure results in faster
learning.}
\label{fig:mtrl-on-smp-benchmark}
\end{figure}
We first test \meth{Amorpheus}{} on the set of \gls{mtrl} environments proposed by~\citet{huang2020smp}.
For \texttt{Walker++}, we omit flipped environments, since~\citet{huang2020smp} implement flipping on the model level.
For \meth{Amorpheus}{}, the flipped environments look identical to the original ones.
Our experiments in this section are built on top of the \meth{TD3} implementation used by~\citet{huang2020smp}.
Figure~\ref{fig:mtrl-on-smp-benchmark} supports our hypothesis that explicit morphological information encoded in graph topology is not needed to yield a single policy achieving high average returns across a set of incompatible continuous control environments.
Free from the need to learn multi-hop communication and equipped with the attention mechanism, \meth{Amorpheus}{} clearly outperforms~\gls{smp}, the state-of-the-art algorithm for incompatible continuous control.
\citet{huang2020smp} report that training \gls{smp} on \texttt{Cheetah++} together with other environments makes~\gls{smp} unstable. By contrast, \meth{Amorpheus}{} has no trouble learning in this regime (Figure~\ref{fig:mtrl-cwh} and~\ref{fig:mtrl-cwhh}).
Our experiments demonstrate that node features have enough information for \meth{Amorpheus}{} to perform the task and limb discrimination needed for successful~\gls{mtrl} continuous control policies.
For example, a model can distinguish left from right, not from structural biases as in~\gls{smp}, but from the relative position of the limb w.r.t.\ the root node provided in the node features.
While the total number of tasks in the~\gls{smp} benchmarks is high, they all share one key characteristic.
All tasks in a benchmark are built using subsets of the limbs from an archetype (e.g., \texttt{Walker++} or \texttt{Cheetah++}).
To verify that our results hold more broadly, we adapted the \texttt{Walkers}
benchmark \citep{wang2018nervenet} and compared \meth{Amorpheus}{} with \gls{smp} and
\meth{NerveNet} on it.
This benchmark includes five agents with different morphologies: a Hopper, a HalfCheetah, a FullCheetah, a Walker, and an Ostrich.
The results in Figure~\ref{fig:nervenet-walkers-mtrl} are consistent\footnote{
However, the performance of \meth{NerveNet} is not directly comparable,
as the observational features and the learning algorithm differ from
\meth{Amorpheus}{} and \gls{smp}. We do not test \meth{NerveNet} on \gls{smp} benchmarks
because the codebases are not compatible and comparing \meth{NerveNet}
and~\gls{smp} is not the focus of the paper.
Even if we implemented \meth{NerveNet} in the \gls{smp} training loop, it is unclear how the critic of \meth{NerveNet} would perform in a new setting.
The original paper considers two options for the critic: one GNN-based and one MLP-based. We use the latter in Figure~\ref{fig:nervenet-walkers-mtrl} as the former takes only the root node output labels as an input and is thus most likely to face difficulty in learning multi-hop message-passing.
The MLP critic should perform better because training an MLP is easier, though it might be sample-inefficient when the number of tasks is large.
For example, in \texttt{Cheetah++} an agent would need to learn 12 different critics.
Finally, \meth{NerveNet} learns a separate MLP encoder per task, partially defeating the purpose of using \gls{gnn} for incompatible environments.
} with our previous experiments, demonstrating the benefits of \meth{Amorpheus}' fully-connected graph with attentional aggregation.
While we focused on~\gls{mtrl} in this work, we also evaluated \meth{Amorpheus}{} in a zero-shot generalisation setting.
Table~\ref{tab:generalisation-results} in Appendix~\ref{sec:generalisation-results} provides initial results demonstrating \meth{Amorpheus}{}'s potential.
\subsection{Attention Mask Analysis}
\gls{gnn}-based policies, especially those that use attention, are more interpretable than monolithic \gls{mlp} policies.
We now analyse the attention masks that \meth{Amorpheus}{} learns.
\begin{figure}[t]
\begin{minipage}[t]{.33\textwidth}
\centering
\includegraphics[height=88pt]{fig/nervenet_envs_mtrl}
\caption{\gls{mtrl} performance on \texttt{Walkers}~\citep{wang2018nervenet}.}
\label{fig:nervenet-walkers-mtrl}
\end{minipage}
\hfill
\begin{minipage}[t]{0.64\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/masks_variety_w7.pdf}
\caption{State-dependent masks of \meth{Amorpheus}{} (3\textsuperscript{rd} attention layer) within a \texttt{Walker}-\texttt{7} rollout.}
\label{fig:mask-variety}
\end{minipage}
\begin{minipage}[t]{\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{fig/limb_cycles_left.pdf}
\caption{In a \texttt{Walker}-\texttt{7} rollout, nodes in the first attention layer attend to an upper leg (column-wise mask sum $\sim$ 3) when the leg is closer to the ground (normalised angle $\sim$ 0).}
\label{fig:mask_cycles}
\end{minipage}
\end{figure}
Having an implicit structure that is state dependent is one of the benefits of \meth{Amorpheus}{} (every node has access to other nodes' annotations, and the aggregation weights depend on the input as shown in Equation~\ref{eq:attention}).
By contrast, \meth{NerveNet} and~\gls{smp} have a rigid message-passing structure that does not change throughout training or throughout a rollout.
Indeed, Figure~\ref{fig:mask-variety} shows a variety of masks a \texttt{Walker++} model exhibits within a \texttt{Walker}-\texttt{7} rollout, confirming that \meth{Amorpheus}{} attends to different parts of the state space based on the input.
Both~\citet{wang2018nervenet} and~\citet{huang2020smp} notice periodic patterns arising in their models.
Similarly, \meth{Amorpheus}{} exhibits cycles in its attention masks, usually arising in the first layer of the transformer.
Figure~\ref{fig:mask_cycles} shows the column-wise sum of the attention masks moving in step with an upper-leg limb of a \texttt{Walker}-\texttt{7} agent.
Intuitively, the column-wise sum shows how much the other nodes are interested in the node corresponding to that column.
Interestingly, attention masks in earlier layers change more slowly within a rollout than those of the downstream layers.
Figure~\ref{fig:walker-cumulative-change} in Appendix~\ref{sec:mask-cumulative-change} demonstrates this phenomenon for three different \texttt{Walker++} models tested on \texttt{Walker-7}.
This shows that \meth{Amorpheus}{} might, in principle, learn a rigid structure (as in~\glspl{gnn}) if needed.
Finally, we investigate how attention masks evolve over time.
Early in training, the masks are spread across the whole graph.
Later in training, the mask weight distributions become less uniform.
Figures~\ref{fig:mask-evolution-1},~\ref{fig:mask-evolution-2} and \ref{fig:mask-evolution-3} in Appendix~\ref{sec:mask-evolution} demonstrate this phenomenon on \texttt{Walker}-\texttt{7}.
\section{Introduction}
\gls{mtrl} \citep{varghese2020mtrl} leverages commonalities between multiple tasks to obtain policies with better returns, generalisation, data efficiency, or robustness.
Most \gls{mtrl} work assumes {\em compatible} state-action spaces,
where the dimensionality of the states and actions is the same across tasks.
However, many practically important domains, such as robotics, combinatorial optimisation,
and object-oriented environments,
have {\em incompatible} state-action spaces
and cannot be solved by common \gls{mtrl} approaches.
Incompatible environments are avoided largely because they are inconvenient for function approximation: conventional architectures expect fixed-size inputs and outputs.
One way to overcome this limitation is to use \glspl{gnn}~\citep{gori2005new, scarselli2005graph, battaglia2018relational}.
A key feature of \glspl{gnn} is that they can process graphs of arbitrary size and thus, in principle, allow~\gls{mtrl} in incompatible environments.
However, \Glspl{gnn} also have a second key feature: they allow models to condition on structural information
about how state features are related, e.g., how a robot's limbs are connected.
In effect, this enables practitioners to incorporate additional domain knowledge where states are described as labelled graphs.
Here, a graph is a collection of labelled nodes, indicating the features of corresponding objects, and edges, indicating the relations between them.
In many cases, e.g., with the robot mentioned above, such domain knowledge is readily available.
This results in a structural inductive bias that restricts the model's computation graph, determining how errors backpropagate through the network.
\glspl{gnn} have been applied to \gls{mtrl} in continuous control environments, a staple benchmark of modern~\gls{rl}, by leveraging both of the key features mentioned above~\citep{wang2018nervenet, huang2020smp}.
In these two works, the labelled graphs are based on the agent's physical morphology, with nodes labelled with the observable features of their corresponding limbs, e.g., coordinates, angular velocities and limb type.
If two limbs are physically connected, there is an edge between their corresponding nodes.
However, the assumption that it is beneficial to restrict the model's
computation graph in this way has to our knowledge not been validated.
To investigate this issue, we conduct a series of ablations on existing \gls{gnn}-based continuous control methods. The results show that removing morphological information does not harm the performance of these models.
In addition, we propose \meth{Amorpheus}{}, a new \gls{mtrl} method for continuous control based on transformers~\citep{vaswani2017attention} instead of \glspl{gnn} that use morphological information to define the message-passing scheme.
\meth{Amorpheus}{}~is motivated by the hypothesis that any benefit \glspl{gnn} can extract from the morphological domain knowledge encoded in the graph is outweighed by the difficulty that the graph creates for message passing.
In a sparsely connected graph, crucial state information must be communicated across multiple hops, which we hypothesise is difficult in practice to learn.
\meth{Amorpheus}~uses transformers instead, which can be thought of as fully connected \glspl{gnn} with attentional aggregation~\citep{battaglia2018relational}.
Hence, \meth{Amorpheus}{}~ignores the morphological domain knowledge but in exchange obviates the need to learn multi-hop communication.
Similarly, in Natural Language Processing, transformers were shown to perform better without an explicit structural bias and even learn such structures from data~\citep{vig2019analyzing, goldberg2019assessing, tenney2019you, peters2018dissecting}.
Our results on incompatible \gls{mtrl}~continuous control benchmarks \citep{huang2020smp,wang2018nervenet} strongly support our hypothesis: \meth{Amorpheus}~substantially outperforms \gls{gnn}-based alternatives with fixed message-passing schemes in terms of sample efficiency and final performance.
In addition, \meth{Amorpheus}~exhibits nontrivial behaviour such as cyclic attention patterns coordinated with gaits.
\section{The Role of Morphology in Existing Work}
\label{sec:contribution}
In this section, we provide evidence against the assumption that \glspl{gnn} improve performance by exploiting information about physical morphology~\citep{huang2020smp,wang2018nervenet}.
Here and in all of the following sections, we run experiments for three random seeds and report the average undiscounted \gls{mtrl}~return and the standard error across the seeds.
\begin{figure}[h] %
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/walkers_structure_ablation.pdf}
\caption{\gls{smp}, \texttt{Walker++}}
\label{fig:structure-ablations-a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/humanoids_structure_ablation.pdf}
\caption{\gls{smp}, \texttt{Humanoid++}}
\label{fig:structure-ablations-b}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/nervenet_mtrl_structures.pdf}
\caption{\meth{NerveNet}, \texttt{Walkers}}
\label{fig:nervenet-structures}
\end{subfigure}
\caption{Neither \gls{smp} nor \meth{NerveNet} leverages the agent's morphological information, or the positive effects are outweighed by their negative effect on message passing.
}
\end{figure}
To determine if information about the agent's morphology encoded in the
relational graph structure is essential to the success of~\gls{smp}, we compare
its performance given full information about the structure (morphology), given
no information about the structure (star), and given a structural bias unrelated
to the agent's morphology (line).
Ideally, we would test a fully connected architecture as well, but~\gls{smp} only works with trees.
Figure~\ref{fig:topologies} in Appendix~\ref{ref:app-morphology-ablations} illustrates the tested topologies.
The results in Figure~\ref{fig:structure-ablations-a} and~\ref{fig:structure-ablations-b} demonstrate that, surprisingly, performance is not contingent on having information about the physical morphology. A \texttt{star} agent performs on par with the \texttt{morphology} agent, thus refuting the assumption that the method learns because it exploits information about the agent's physical morphology.
The \texttt{line} agent performs worse, perhaps because the network must propagate messages even further away, and information is lost with each hop due to the finite size of the MLPs causing information bottlenecks~\citep{alon2020bottleneck}.
We also present similar results for \meth{NerveNet}.
Figure~\ref{fig:nervenet-structures} shows that all of the variants we tried perform similarly well on \texttt{Walkers} from \citet{wang2018nervenet}, with \texttt{star} being marginally better.
Since \meth{NerveNet} can process non-tree graphs, we also tested a fully connected variant.
This version learns more slowly at the beginning, probably because of difficulties distinguishing nodes at the aggregation step.
Interestingly, in contrast to~\gls{smp}, in \meth{NerveNet} \texttt{line} performs on par with \texttt{morphology}.
This might be symptomatic of problems with the message-passing mechanism of~\gls{smp}, e.g., bottlenecks leading to information loss.
\section{Related Work}
Most~\gls{mtrl} research considers the compatible case~\citep{rusu2015policy, parisotto2015actor, NIPS2017_7036, varghese2020mtrl}.
\gls{mtrl} for continuous control is often done from pixels, with CNNs solving part of the compatibility issue.
DMLab~\citep{beattie2016deepmind} is a popular choice when learning from pixels with a compatible action space shared across the environments~\citep{hessel2019multi, song2019v}.
\glspl{gnn} have expanded the possibilities of \gls{rl}, allowing~\gls{mtrl} in incompatible environments.
\citet{khalil2017learning} learn combinatorial optimisation algorithms over graphs.
\citet{kurin2019improving} learn a branching heuristic of a SAT solver.
Applying approximation schemes typically used in~\gls{rl} to these settings is impossible because they expect inputs and outputs of fixed size.
Another form of (potentially incompatible)~\gls{rl} using message passing are
coordination graphs \citep[e.g.~DCG,][]{boehmer2020dcg},
that use the max-plus algorithm \citep{pearl88prob}
to coordinate action selection between multiple agents.
One can apply DCG in single-agent~\gls{rl} using ideas of \citet{tavakoli2021learning}.
Several methods for incompatible continuous control have also been proposed.
\citet{chen2018hardware} pad the state vector with zeros to have the same dimensionality for robots with different number of joints, and condition the policy on the hardware information of the agent.
\citet{d2019sharing} demonstrate a positive effect of learning a common network for multiple tasks, with a task-specific encoder and decoder learned for each task.
We expect this method to suffer from sample inefficiency because it has to learn separate input and output heads for each task.
Moreover, \citet{wang2018nervenet} have a similar implementation of their~\gls{mtrl} baseline showing that~\glspl{gnn} have benefits over MLPs for incompatible control.
\citet{huang2020smp}, whose work is the main baseline in this paper, apply a~\gls{gnn}-like approach and study its~\gls{mtrl} and generalisation properties.
The method can be used only with trees, its aggregation function is not permutation invariant, and the message-passing scheme stays fixed throughout training.
\citet{wang2018nervenet} and~\citet{huang2020smp} attribute the effectiveness of their methods to the ability of the~\glspl{gnn} to exploit information about agent morphology.
In this work, we present evidence against this hypothesis, showing that existing approaches do not exploit morphological information as was previously believed.
Attention mechanisms have also been used in the~\gls{rl} setting.
\citet{zambaldi2018relational} consider self-attention to deal with an object-oriented state space.
They further generalize this to variable action spaces and test generalisation on Starcraft-II mini-games that have a varying number of units and other environmental entities.
\citet{duan2017one} apply attention for both temporal dependency and a factorised state space (different objects in the scene) keeping the action space compatible.
\citet{parisotto2019stabilizing} use transformers as a replacement for a recurrent policy.
\citet{icml2020_1696} use transformers to add history dependence in a POMDP as well as for factored observations, having a node per game object.
The authors do not consider a factored action space, with the policy receiving the aggregated information of the graph after the message passing ends.
\citet{baker2019emergent} use self-attention to account for a factored state-space to attend over objects or other agents in the scene.
\meth{Amorpheus}{} does not use a transformer for recurrency but for the factored state and action spaces, with each non-torso node having an action output.
\citet{iqbal2019maac} apply attention to generalise \gls{mtrl} multi-agent policies over varying environmental objects and \citet{iqbal2020aiqmix} extend this to a factored action space by summarising the values of all agents with a mixing network \citep{rashid2018qmix}.
\citet{li2020deep} learn embeddings for a multi-agent actor-critic architecture by generating the weights of a graph convolutional network \citep[GCN,][]{kipf2016semi} with attention. This allows a different topology in every state, similar to \meth{Amorpheus}, which goes one step further and allows the topology to change in every round of message passing.
Another line of work aims to infer graph topology instead of hardcoding one.
Differentiable Graph Module~\citep{DBLP:journals/corr/abs-2002-04999} predicts edge probabilities doing a continuous relaxation of k-nearest neighbours to differentiate the output with respect to the edges in the graph.
\citet{DBLP:conf/nips/0003LT20} learn to augment a given graph with additional edges to improve the performance of a downstream task.
\citet{DBLP:conf/icml/KipfFWWZ18} use variational autoencoders~\citep{DBLP:journals/corr/KingmaW13} with a~\gls{gnn} for reconstruction.
Notably, the authors observe that message passing on a fully connected graph might work better than message passing restricted to the skeleton when evaluated on human motion capture data.
\section{Conclusions and Future Work}
In this paper, we investigated the role of explicit morphological information in graph-based continuous control.
We ablated existing methods~\gls{smp} and \meth{NerveNet}, providing evidence against the belief that these methods improve performance by exploiting explicit morphological structure encoded in graph edges.
Motivated by our findings, we presented \meth{Amorpheus}{}, a transformer-based method for~\gls{mtrl} in incompatible environments.
\meth{Amorpheus}{} obviates the need to propagate messages far away in the graph and can attend to different regions of the observations depending on the input and the particular point in training.
As a result, \meth{Amorpheus}{} clearly outperforms existing work in incompatible continuous control.
In addition, \meth{Amorpheus}{} exhibits non-trivial behaviour such as periodic cycles of attention masks coordinated with the gait.
The results show that information in the node features alone is enough to learn a successful~\gls{mtrl} policy.
We believe our results further push the boundaries of incompatible~\gls{mtrl} and provide valuable insights for further progress.
One possible drawback of~\meth{Amorpheus}{} is its computational complexity.
Transformers suffer from quadratic complexity in the number of nodes with a growing body of work addressing this issue~\citep{tay2020efficient}.
However, the number of nodes in continuous control problems is relatively low compared to the much longer sequences used in NLP~\citep{devlin2018bert}.
Moreover, transformers are highly parallelisable, in contrast to~\gls{smp}, which has a data dependency across tree levels (the tree is processed level by level, with each level taking the output of the previous level as input).
We focused on investigating the effect of injecting explicit morphological information into the model.
However, there are also opportunities to improve the learning algorithm itself.
Potential directions of improvement include averaging gradients instead of performing sequential task updates, or balancing task updates with multi-armed bandits or PopArt~\citep{hessel2019multi}.
\section{Introduction}
Identifying causal structure from observational data is an important but also challenging task in many practical applications. This task can be formulated as that of finding a Directed Acyclic Graph (DAG) that minimizes a score function defined w.r.t.~the observed data.
However,
searching over the space of DAGs for the best DAG is known to be NP-hard, even if each node has at most two parents \cite{Chickering1996learning}.
Consequently, traditional methods mostly rely on local heuristics to perform the search, including greedy hill-climbing and greedy equivalence search that explores the Markov equivalence classes \cite{chickering2002optimal}.
Along with various search strategies, existing methods have also cast causal structure learning problem as that of learning an
optimal variable ordering, considering that the ordering space is significantly smaller than that of directed graphs and searching over the ordering space can avoid dealing with the acyclicity constraint \cite{teyssier2012ordering}.
%
%
Many methods, such as genetic algorithm \cite{larranaga1996learning}, Markov chain Monte Carlo \cite{friedman2003being} and greedy local hill-climbing \cite{teyssier2012ordering}, have been exploited as the search strategies to find desired orderings. In practice, however,
these methods often cannot effectively find a globally optimal ordering due to their heuristic nature.
Recently, with smooth score functions, several gradient-based methods have been proposed by exploiting a smooth characterization of acyclicity, including NOTEARS \cite{zheng2018dags} for linear causal models and several subsequent works, e.g., \cite{yu19dag,Lachapelle2019grandag,Ng2019GAE,Ng2019masked,Zheng2019learning}, which use neural networks for modelling non-linear causal relationships. As another attempt, \cite{zhu2020causal} utilize Reinforcement Learning (RL) to find the underlying DAG from the graph space without the need of smooth score functions.
Unfortunately, \cite{zhu2020causal} achieved good performance only with up to $30$ variables, for at least two reasons:~1) the action space, consisting of directed graphs, is tremendous for large scale problems and is hard to be explored efficiently; and 2) it has to compute scores for many non-DAGs generated during training but computing scores w.r.t.~data is generally time-consuming.
It appears that the RL-based approach, due to its search nature, may not achieve performance close to that of other gradient-based methods that directly optimize the same score function on large causal discovery problems.
By taking advantage of the reduced space of variable orderings and the strong search ability of modern RL methods, we propose Causal discovery with Ordering-based Reinforcement Learning (CORL), which incorporates RL into the ordering-based paradigm and is shown to achieve a promising empirical performance.
In particular, CORL outperforms NOTEARS, a state-of-the-art gradient-based method for linear data, even with $150$-node graphs. Meanwhile, CORL is also competitive with a strong baseline, Causal Additive Model (CAM) method \cite{buhlmann2014cam}, on non-linear data models.
\paragraph{Contributions.} We make the following contributions in this work: 1) We formulate the ordering search problem as a multi-step Markov Decision Process (MDP) and propose to implement the ordering generating process in an effective encoder-decoder architecture, followed by applying RL to optimize the proposed model based on specifically designed reward mechanisms. We also incorporate a pretrained model into CORL to accelerate training.
%
%
%
2) We analyze the consistency and computational complexity of the proposed method.
3) We conduct comparative experiments on synthetic and real data sets to validate the performance of the proposed methods. 4) An implementation has been made available at \url{https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle}.\footnote{The extended version can be found at \url{https://arxiv.org/abs/2105.06631}.}
\section{Related Works}
\label{relatedworks}
Besides the aforementioned heuristic ordering search algorithms, \cite{schmidt2007learning} proposed L1OBS to conduct variable selection using $\ell_1$-regularization paths based on the method from \cite{teyssier2012ordering}.
\cite{scanagatta2015learning} further proposed an ordering exploration method on the basis of an approximated score function so as to scale to thousands of variables.
The CAM method \cite{buhlmann2014cam} was specifically designed for non-linear additive models. Some recent ordering-based methods, such as sparsest permutation \cite{raskutti2018learning} and greedy sparsest permutation \cite{solus2017consistency}, can guarantee consistency of the Markov equivalence class, relying on certain conditional independence relations and assumptions like faithfulness. A variant of greedy sparsest permutation was further proposed in \cite{bernstein2020ordering} for the setting with latent variables.
In the present work, we mainly work on identifiable cases which have different assumptions from theirs.
In addition, exact algorithms such as dynamic programming \cite{Xiang2013lasso} and integer or linear programming \cite{bartlett2017integer} are also used for causal discovery problem.
However, these algorithms usually work only on small graphs \cite{de2011efficient}; to handle larger problems with hundreds of variables, they need to incorporate heuristic search \cite{Xiang2013lasso} or limit the maximum number of parents of each node.
Recently, RL has been used to tackle several combinatorial problems such as the maximum cut and traveling salesman problem \cite{bello2016neural,khalil2017learning,kool2018attention}.
These works aim to learn a policy as a solver for a particular type of combinatorial problem.
However, causal discovery tasks generally involve different relationships, data types, graph structures, etc., and moreover, are typically offline, focusing on a particular causal graph or a class of causal graphs. As such, we use RL as a search strategy, similar to \cite{zoph2016neural,zhu2020causal}. Nevertheless, a pretrained model or policy can offer a good starting point to speed up training, as shown in our evaluation results (cf.~Figure~\ref{reward-curve-corl2}).
\section{Background}
\label{gen_inst}
\subsection{Causal Structure Learning}
Let $\mathcal{G}=(d, V, E)$ denote a DAG, with $d$ the number of nodes, $V=\{v_1,\cdots,v_d\}$ the set of nodes, and $E=\{(v_i,v_j) | i,j= 1,\ldots,d\}$ the set of directed edges from $v_i$ to $v_j$. Each node $v_j$ is associated with a random variable $X_j$.
The probability model associated with $\mathcal{G}$ factorizes as $p(X_1,\cdots,X_d)=\prod^d_{j=1}p(X_j|\text{Pa}(X_j))$, where $p(X_j|\text{Pa}(X_j))$ is the conditional probability distribution for $X_j$ given
its parents $\text{Pa}(X_j):=\{X_k|(v_k, v_j)\in E\}$.
We assume that the observed data $\mathbf {x}_j$ is obtained
by the Structural Equation Model (SEM) with {additive} noises: $X_j := f_j(\text{Pa}(X_j)) + \epsilon_j, j=1,\ldots,d$, where $f_j$ represents the functional relationship between $X_j$ and its parents, and $\epsilon_j$'s denote jointly independent additive noise variables.
We assume causal minimality, which is equivalent to that each $f_j$ is not a constant for any $X_k \in \text{Pa}(X_j)$ in this SEM \cite{10.5555/2627435.2670315}.
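As an illustration of this data-generating process, the sketch below draws samples from a linear SEM with Gaussian additive noise. The weight matrix \texttt{W} and sample size are hypothetical choices for illustration, not tied to our experiments:

```python
import numpy as np

def simulate_linear_sem(W, m, rng):
    """Draw m samples from X_j = sum_k W[k, j] X_k + eps_j.

    W is a (d, d) weighted adjacency matrix of a DAG, assumed strictly
    upper triangular so that [0, 1, ..., d-1] is a valid ordering.
    """
    d = W.shape[0]
    X = np.zeros((m, d))
    for j in range(d):                      # topological order by assumption
        X[:, j] = X @ W[:, j] + rng.normal(size=m)
    return X

# Toy DAG: v0 -> v1 (weight 1.5), v1 -> v2 (weight -2.0).
W = np.array([[0.0, 1.5, 0.0],
              [0.0, 0.0, -2.0],
              [0.0, 0.0, 0.0]])
X = simulate_linear_sem(W, 1000, np.random.default_rng(0))
assert X.shape == (1000, 3)
```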
Given a sample $\mathbf{X} = [\mathbf {x}_1,\cdots,\mathbf{x}_d] \in\mathbb R^{m\times d}$, where $\mathbf{x}_j$ is a vector of $m$ observations of random variable $X_j$, the goal is to find a DAG $\mathcal{G}$ that optimizes the Bayesian Information Criterion (BIC) (or, equivalently, minimum description length) score, defined as
\begin{equation}\label{eq4-0}
\text{S}_\text{BIC}\mathcal{(G)}\hspace{-2pt}=\hspace{-2pt}\sum^d_{j=1}\hspace{-2pt}\left[\sum^m_{k=1} \log p(x^k_j|\text{Pa}(x^k_j);\theta_j)\hspace{-2pt}-\hspace{-2pt}\frac{|\theta_j|}{2} \log m \right]\hspace{-2pt},
\end{equation}
where $x^k_j$ is the $k$-th observation of $X_j$, $\theta_j$ is the parameter associated with each likelihood, and $|\theta_j|$ denotes the parameter dimension. For linear-Gaussian models,
$p(x^k_j|\text{Pa}(x^k_j);\theta_j)= \mathcal{N}(x_j|\theta_j^T\text{Pa}(x_j), \sigma^2)$ and $\sigma^2$ can be estimated from the data.
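For concreteness, a minimal sketch of evaluating the BIC score of Equation~(\ref{eq4-0}) for a linear-Gaussian model follows. As a simplification, the parameter count $|\theta_j|$ is taken to be the number of parents of $X_j$, and the noise variance is estimated per node from the regression residuals:

```python
import numpy as np

def bic_score(X, parents):
    """Sketch of the BIC score for a linear-Gaussian model.

    X       : (m, d) data matrix
    parents : dict mapping node j -> list of parent indices
    """
    m, d = X.shape
    score = 0.0
    for j in range(d):
        pa = parents.get(j, [])
        if pa:
            A = X[:, pa]
            theta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
            resid = X[:, j] - A @ theta
        else:
            resid = X[:, j] - X[:, j].mean()
        sigma2 = max(resid.var(), 1e-8)   # noise variance estimated from data
        # Gaussian log-likelihood of the residuals, with sigma2 at its MLE
        loglik = -0.5 * m * (np.log(2 * np.pi * sigma2) + 1.0)
        score += loglik - 0.5 * len(pa) * np.log(m)
    return score

# Toy check: the graph x1 -> x2 should score higher than the empty graph.
rng = np.random.default_rng(1)
x1 = rng.normal(size=500)
x2 = 2.0 * x1 + 0.1 * rng.normal(size=500)
X = np.column_stack([x1, x2])
assert bic_score(X, {1: [0]}) > bic_score(X, {})
```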
\begin{figure}
\centering
\includegraphics[width=0.15 \textwidth]{figs/fig1.pdf}
\caption{An example of the correspondence between an ordering and a fully-connected DAG.}
\label{model_dag}
\end{figure}
The problem of finding a directed graph that satisfies the acyclicity constraint
can be cast as that of finding a variable ordering \cite{teyssier2012ordering,schmidt2007learning}.
Specifically, let $\Pi$ denote an ordering of the nodes in $V$, where the length of the ordering $|\Pi|=|V|$ and $\Pi$ is indexed from 1. If node $v_j\in V$ lies in the $p$-th position, then $\Pi(p)=v_j$. Notation $\Pi_{\prec v_j}$ denotes the set of nodes that precede node $v_j$ in $\Pi$. One can easily establish a canonical correspondence between an ordering $\Pi$ and a fully-connected DAG $\mathcal{G}^\Pi$; an example is presented in Figure~\ref{model_dag}.
A DAG $\mathcal{G}$ can be consistent with more than one ordering, and the set of such orderings is denoted by
\begin{equation*}
\Phi(\mathcal{G})\hspace{-2pt}=\hspace{-2pt}\{\Pi:~\text{fully-connected DAG}~\mathcal{G}^{\Pi}~\text{is a super-DAG of} \ \mathcal{G} \},
\end{equation*}
where a super-DAG of $\mathcal{G}$ is a DAG whose edge set is a superset of that of $\mathcal{G}$.
The search for the true DAG $\mathcal{G}^{*}$ can thus be decomposed into two phases: finding the correct ordering and performing variable selection; the latter is to find the optimal DAG that is consistent with the ordering found in the first step.
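The correspondence between an ordering and a fully-connected DAG, and the consistency test via super-DAGs, can be sketched as follows (node indices and the toy chain graph are illustrative):

```python
import numpy as np

def ordering_to_full_dag(pi):
    """Adjacency matrix of the fully-connected DAG G^Pi induced by ordering pi.

    pi[p] is the node placed at position p; every earlier node points to
    every later node, so predecessors in pi are the only allowed parents.
    """
    d = len(pi)
    adj = np.zeros((d, d), dtype=int)
    for p in range(d):
        for q in range(p + 1, d):
            adj[pi[p], pi[q]] = 1
    return adj

def is_consistent(adj_g, pi):
    """True iff G^Pi is a super-DAG of G, i.e. pi is consistent with G."""
    return bool(((adj_g - ordering_to_full_dag(pi)) <= 0).all())

# Chain graph v0 -> v1 -> v2 is consistent with [0, 1, 2] but not [2, 1, 0].
chain = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
assert is_consistent(chain, [0, 1, 2])
assert not is_consistent(chain, [2, 1, 0])
```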
\subsection{Reinforcement Learning}
Standard RL is usually formulated as an MDP over the environment state $s \in \mathcal{S}$ and agent action
$a\in \mathcal{A}$, under (unknown) environmental dynamics defined by a transition probability $\mathcal{T}(s'|s,a)$. Let $\pi_{\phi}(a|s)$ denote the policy, parameterized by $\phi$, which outputs a distribution used to select an action from action space $\mathcal{A}$ based on state $s$. For episodic tasks, a trajectory $\tau = \{s_t,a_t\}^T_{t=0}$, where $T$ is the finite time horizon, can be collected by executing the policy repeatedly. In many cases, an immediate reward $r(s,a)$ can be received when the agent executes an action.
The objective of RL is to learn a policy which can maximize the expected cumulative reward along a trajectory, i.e., $J(\phi) =\mathbb{E}_{\pi_{\phi}}[R_0]$ with $R_0=\sum^{T}_{t=0}\gamma^{t}r_{t}(s_{t}, a_{t})$ and $\gamma \in (0,1]$ being a discount factor. For some scenarios, the reward is only earned at the terminal time (also called episodic reward),
and $J(\phi) =\mathbb{E}_{\pi_{\phi}}[R(\tau)]$ with $R(\tau)=r_{T}(s_T, a_T)$.
\section{Method}
In this section, we first formulate the ordering search problem as an MDP and then describe the proposed approach. We also discuss the variable selection methods to obtain DAGs from variable orderings, as well as the consistency and computational complexity regarding the proposed method.
\subsection{Ordering Search as Markov Decision Process}
To incorporate RL into the ordering-based paradigm, we formulate the variable ordering search problem as a multi-step decision process with a variable as an action at each decision step, and the order of the selected actions (or variables) is treated as the searched ordering. The decision-making process is Markovian, and its elements are described as follows.
\paragraph{State.}
One can directly take the sample data $\mathbf{x}_j$ as the state. However, preliminary experiments (see Appendix~A.1) show that it is difficult for feed-forward neural network models to capture the underlying causal relationships directly using observed data as states,
and that the data pre-processed by an encoder module is helpful to find better orderings.
The encoder module embeds each $\mathbf{x}_j$ to state $s_j$ and all the embedded states constitute the state space $\mathcal{S}:=\{ {s}_1,\cdots, {s}_d\}$. In our case, we also need an initial state, denoted by $s_0$ (detailed choice is given in Section~4.2), to select the first action. The complete state space would be $\mathcal{\hat{S}}:= \mathcal{S}\cup \{s_0\}$. We will use $\hat{s}_t$ to denote the actual state encountered at the $t$-th decision step when generating a variable ordering.
\paragraph{Action.}
We select an action (variable) from the action space {$\mathcal{A}:=\{ {v}_1,\cdots, {v}_d\}$} consisting of all the variables at each decision step, and the action space size is equal to the number of variables, i.e., $|\mathcal{A}|=d$. Compared to the previous RL-based method that searches over the graph space with size $\mathcal{O}(2^{d\times d})$ \cite{zhu2020causal}, the resulting action space becomes much smaller.
\paragraph{State transition.}
The state transition is related to the action selected at the current decision step. If the selected variable is $v_j$ at the $t$-th decision step,
then the state transitions to $s_j \in \mathcal{S}$, which
corresponds to $\mathbf{x}_j$ embedded by the encoder,
i.e., $\hat{s}_{t+1}=s_j$.
\paragraph{Reward.}
In ordering-based methods, only the variables selected in previous decision steps can be the potential parents of the currently selected variable. Hence, we design the rewards in the following cases: \textit{episodic reward} and \textit{dense reward}.
In the former case, we calculate the score of a variable ordering $\Pi$ with $d$ variables as the episodic reward, i.e.,
\begin{equation}
R(\tau)=r_{T}(\hat{s}_T, a_T)= \text{S}_\text{BIC}(\mathcal{G}^{\Pi}),
\end{equation}
where $T = d-1$ and $\text{S}_\text{BIC}$ has been defined in Equation~(\ref{eq4-0}), with $\text{Pa}(X_j)$ replaced by the potential parent variable set $U(X_j)$; here $U(X_j)$ denotes the variables associated with the nodes in $\Pi_{\prec v_j}$.
If the score function is decomposable (e.g., the BIC score),
we can calculate an immediate reward by exploiting the decomposability for the current decision step. That is, for $v_j$ selected at time step $t$, the immediate reward is
\begin{equation}\label{eq:dense-reward}
r_t = \sum^m_{k=1} \log p(x^k_j|U(x_j^{k});\theta_j) - \frac{|\theta_j|}{2} \log m .
\end{equation}
This belongs to the second case with {\it dense rewards}. Here we keep $-|\theta_j|/2 \log m$ to make Equation~(\ref{eq:dense-reward}) consistent with the form of the BIC score.
\subsection{Implementation and Optimization with Reinforcement Learning}
\begin{figure}
\centering
\includegraphics[width=0.33\textwidth,]{figs/ed3.pdf}
\caption{Illustration of the policy model. The encoder embeds the observed data $\mathbf{x}_j$ into the state $s_j$. An action $a_t$ can be selected by the decoder according to the given state $\hat{s}_t$ and the pointer mechanism at each time step $t$. Note that $T = d-1$. See Appendix A for details.} \label{model_stru}
\end{figure}
We briefly describe the neural network architectures implemented in our method, as shown in Figure~\ref{model_stru}.
More details can be found in Appendix~A.
\paragraph{Encoder.}
$f_{\phi_e}^{\mathrm{enc}}: \tilde{\mathbf{X}} \mapsto \mathcal{S}$ is used to map the observed data to the embedding space $\mathcal{S}=\{s_1,\cdots, s_d\}$. Similar to \cite{zhu2020causal}, we adopt mini-batch training and randomly draw $n$ samples from $m$ samples of the data set $\mathbf{X}$ to construct $\tilde{\mathbf{X}}\in \mathbb R^{n\times d}$ at each episode.
We also set the embeddings $s_j$ to have the same dimension, i.e., $s_j\in\mathbb R^n$. For the encoder, we conduct an empirical comparison among several representative structures such as MLP, LSTM and the self-attention based encoder \cite{vaswani2017attention}. Empirically, we find that the self-attention based encoder from the Transformer architecture performs best (see Appendix~A.1).
\paragraph{Decoder.} $f_{\phi_d}^{\text{dec}}: \mathcal{\hat{S}} \mapsto \mathcal{A}$ maps the state space $\mathcal{\hat{S}}$ to the action space $\mathcal{A}$. Among several decoder choices (see also Appendix~A.1 for an empirical comparison), we pick an LSTM based structure that proves effective in our experiments.
Although the initial state is generated randomly in many applications, we pick it as $s_0=\frac1d\sum_{i=1}^d s_i$, considering that the source node is fixed in a correct ordering.
We restrict each node to be selected only once by masking the already-selected nodes, in order to generate a valid ordering \cite{vinyals2015pointer}.
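A minimal sketch of this masked selection step follows, with random logits standing in for the decoder's output (the actual decoder uses the LSTM-based pointer mechanism described above):

```python
import numpy as np

def select_action(logits, selected, rng):
    """Sample the next variable, masking already-selected ones.

    logits   : (d,) decoder scores over variables (hypothetical values)
    selected : set of indices chosen at earlier decision steps
    """
    masked = logits.copy()
    masked[list(selected)] = -np.inf          # a selected node can never reappear
    probs = np.exp(masked - masked.max())     # stable softmax over unmasked nodes
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Rolling out d steps yields a valid permutation of the d variables.
d, rng = 5, np.random.default_rng(0)
selected = []
for _ in range(d):
    a = select_action(rng.normal(size=d), set(selected), rng)
    selected.append(a)
assert sorted(selected) == list(range(d))
```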
\paragraph{Optimization.} The optimization objective is to learn a policy maximizing
$J(\phi)$,
where $\phi=\{\phi_e,\phi_d\}$ with $\phi_e$ and $\phi_d$ being parameters associated with encoder $f^{\text{enc}}$ and decoder $f^{\text{dec}}$, respectively.
Based on the above definition, policy gradient \cite{sutton2018reinforcement} is used to optimize the ordering generation model parameters.
For the \textit{episodic reward} case, we have the following policy gradient $\nabla J(\phi)=\mathbb{E}_{\pi_{\phi}}\left[R(\tau) \sum^T_{t=0} \nabla_{\phi} \log \pi_{\phi}\left(a_t| \hat{s}_t\right) \right]$, and the algorithm in this case is denoted as CORL-1.
For the \textit{dense reward} case, policy gradient can be calculated as $\nabla J(\phi)=\mathbb{E}_{\pi_{\phi}}\left[\sum^T_{t=0}R_t \nabla_{\phi} \log \pi_{\phi}\left(a_t| \hat{s}_t\right) \right]$,
where $R_t=\sum^{T-t}_{l=0}\gamma^{l}r_{t+l}$ denotes the return at time step $t$. We denote the algorithm in this case as CORL-2.
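The two reward settings differ only in how the terms weighting $\log \pi_{\phi}(a_t|\hat{s}_t)$ are computed. The following sketch shows the two return computations with illustrative values; it is not our implementation:

```python
def returns_episodic(rewards):
    """Episodic case (CORL-1): every step is weighted by the final score R(tau)."""
    R = rewards[-1]
    return [R] * len(rewards)

def returns_dense(rewards, gamma=1.0):
    """Dense case (CORL-2): R_t = sum_l gamma^l * r_{t+l}, computed backwards."""
    out, R = [], 0.0
    for r in reversed(rewards):
        R = r + gamma * R
        out.append(R)
    return out[::-1]

r = [1.0, 2.0, 3.0]
assert returns_dense(r) == [6.0, 5.0, 3.0]
assert returns_episodic([0.0, 0.0, -7.5]) == [-7.5] * 3
```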
Using a parametric baseline to estimate the expected score typically improves
learning \cite{sutton2018reinforcement}. Therefore, we introduce a critic network $V_{\phi_{v}}(\hat{s}_t)$ parameterized by $\phi_{v}$, which learns the expected return given state $\hat{s}_t$ and is trained with stochastic gradient descent using the Adam optimizer on a mean squared error objective between its
predicted value and the actual return. More details about the critic network are described in Appendix~A.2.
Inspired by the benefits of pretrained models \cite{bello2016neural}, we also incorporate pretraining into our method to accelerate training. In practice, one can usually obtain observed data with known causal graphs or correct orderings, e.g., from simulation or from real data with labeled graphs. Hence, we can pretrain a policy model on such data in a supervised way and use the pretrained model as initialization for new tasks. Since sufficient generalization ability is desired, the pretraining data include diverse data sets with different numbers of nodes, noise types, causal relationships, etc.
\subsection{Variable Selection}
\begin{table*}[ht]
\linespread{1.0}
\centering \scriptsize
\begin{tabular}{lcccccccccc}
\toprule
& & & RANDOM & NOTEARS & DAG-GNN & RL-BIC2 & L1OBS & A{*} Lasso & CORL-1 & CORL-2 \\
\midrule
\multirow{8}{*}{30 nodes} &\multirow{2}{*}{ER2} & TPR & 0.41 (0.04) & 0.95 (0.03) & 0.91 (0.05) & 0.94 (0.05) & 0.78 (0.06) & 0.88 (0.04) & \bf{0.99 (0.02)}&\bf{0.99 (0.01)} \\
& & SHD& 140.4 (36.7) & 14.2 (9.4) & 26.5 (12.4) & 17.8 (22.5) & 85.2 (23.8) &35.3 (14.3) & \bf{5.2 (7.4)} & \bf{4.4 (3.5)}\\
\cmidrule(){2-11}
& \multirow{2}{*}{ER5} & TPR & 0.43 (0.03) & \bf{0.93 (0.01)} & 0.85 (0.11) & 0.91 (0.03) & 0.74 (0.04) & 0.84 (0.05) & \bf{0.94 (0.03)} & \bf{0.95 (0.03)} \\
& & SHD & 210.2 (43.5) & \bf{35.4 (7.3)} & 68.0 (39.8) & {45.6 (13.3)} & 98.6 (32.7) & 71.2 (21.5) & \bf{37.4 (16.9)} & \bf{37.6 (14.5)}\\
\cmidrule(){2-11}
& \multirow{2}{*}{SF2} & TPR & 0.58 (0.02) & 0.98 (0.02) & 0.92 (0.09) & 0.99 (0.02) & 0.83 (0.04) & 0.93 (0.02) & \bf{1.0 (0.01)} & \bf{1.0 (0.01)} \\
&& SHD & 118.4 (12.3) & 6.1 (2.3) & 36.8 (33.1) & 3.2 (1.7) & 49.7 (28.1) & 27.3 (18.4) & \bf{0.0 (0.0)} & \bf{0.0 (0.0)}\\
\cmidrule(){2-11}
& \multirow{2}{*}{SF5} & TPR & 0.44 (0.03) & 0.94 (0.03) & 0.89 (0.09) & 0.96 (0.03) & 0.79 (0.04) & 0.88 (0.03) & \bf{1.00 (0.00)} & \bf{1.00 (0.00)} \\
&& SHD & 165.4 (10.6) & 23.3 (6.9) & 47.8 (35.2) & 11.3 (5.2) & 89.3 (25.7) & 40.5 (19.8) & \bf{0.0 (0.0)} & \bf{0.0 (0.0)}\\
\midrule
\multirow{8}{*}{100 nodes} &\multirow{2}{*}{ER2} & TPR & 0.33 (0.05) & 0.93 (0.02) & 0.93 (0.03) & 0.02 (0.01) & 0.54 (0.02) & 0.86 (0.04) & \bf{0.98 (0.02)} & \bf{0.98 (0.01)} \\
& & SHD & 491.4 (17.6) & 72.6 (23.5) & 66.2 (19.2) & 270.8 (13.5) & 481.2 (49.9) & 128.5 (38.4) & \bf{24.8 (10.1)}& \bf{18.6 (5.7)}\\
\cmidrule(){2-11}
& \multirow{2}{*}{ER5} & TPR & 0.34 (0.04) & \bf{0.91 (0.01)}& 0.86 (0.16) & 0.08 (0.03) & {0.53 (0.02)} & {0.82 (0.05)} & \bf{0.93 (0.02)} & \bf{0.94 (0.03)} \\
& & SHD & 984.4 (35.7) & \bf{170.3 (34.2)} & {236.4 (36.8)} & {421.2 (46.2)} & {547.9 (63.4)} & {244.0 (42.3)} &\bf{175.3 (18.9)} & \bf{164.8 (17.1)}\\
\cmidrule(){2-11}
& \multirow{2}{*}{SF2} & TPR & 0.48 (0.03) & 0.98 (0.01) & 0.89 (0.14) & {0.04 (0.02)} & {0.57 (0.03)} & {0.92 (0.03)} & \bf{1.00 (0.00)} & \bf{1.00 (0.00)} \\
&& SHD & 503.4 (23.8) & 2.3 (1.3) & {156.8 (21.2)} & {281.2 (17.4)} & {377.3 (53.4)} & {54.0 (22.3)} &\bf{0.0 (0.0)} & \bf{0.0 (0.0)}\\
\cmidrule(){2-11}
& \multirow{2}{*}{SF5} & TPR & 0.47 (0.04) & 0.95 (0.01) &{0.87 (0.15)} & {0.05 (0.03)} & {0.55 (0.04)} & {0.89 (0.03)} & \bf{0.97 (0.02)} & \bf{0.98 (0.01)} \\
&& SHD & 891.3 (19.4) & 90.2 (34.5) & 165.2 (22.0) & {405.2 (77.4)} & {503.7 (56.4)} & {114.0 (36.4)} &\bf{19.4 (5.2)} & \bf{10.8 (6.1)}\\
\bottomrule
\end{tabular}
\caption{Empirical results for ER and SF graphs of $30$ and $100$ nodes with LG data.}\label{30-Lin_results}
\end{table*}
One can obtain the causal graph from an ordering by conducting variable selection methods, such as sparse candidate \cite{teyssier2012ordering}, significance testing of covariates \cite{buhlmann2014cam}, and group Lasso \cite{schmidt2007learning}.
In this work,
for linear data models, we apply linear regression to the obtained fully-connected DAG and then use thresholding to prune edges with small weights, as similarly used by \cite{zheng2018dags}.
For the non-linear model, we adopt the CAM pruning used by \cite{Lachapelle2019grandag}. For each variable $X_j$, one can fit a generalized additive model against the
current parents of $X_j$ and then apply significance testing of covariates,
declaring significance if the reported p-values are lower than or equal to $0.001$. The overall method is summarized in Algorithm 1.
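For the linear case, the pruning step amounts to regressing each variable on its predecessors in the ordering and thresholding the coefficients; a sketch (hypothetical `prune_linear`; threshold $0.3$ as in our experiments):

```python
import numpy as np

def prune_linear(X, ordering, thresh=0.3):
    """Regress each variable on its predecessors in the ordering and
    keep only the edges whose coefficient magnitude exceeds the threshold."""
    d = X.shape[1]
    W = np.zeros((d, d))
    for t, j in enumerate(ordering):
        parents = ordering[:t]
        if not parents:
            continue
        coef, *_ = np.linalg.lstsq(X[:, parents], X[:, j], rcond=None)
        W[parents, j] = coef
    return (np.abs(W) > thresh).astype(int)   # adjacency of the pruned DAG
```

For non-linear data, the same skeleton applies with the regression and test replaced by a generalized additive model fit and significance testing.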
\begin{algorithm}[t]
\label{alg:all}
\caption{{Causal discovery with Ordering-based RL.}
}
\begin{algorithmic}[1]
\REQUIRE observed data $\mathbf{X}$, initial parameters $\phi_e,\phi_d$ and $\phi_v$, two empty buffers $\mathcal{D}$ and $\mathcal{D}_{score}$, an initial value (negative infinity) for BestScore, and a random ordering BestOrdering.
\WHILE{not terminated}
\STATE draw a batch of samples from $\mathbf{X}$, encode them to $\mathcal{S}$ and calculate the initial state $\hat{s}_0$
\FOR{$t=0,1,\ldots,T$}
\STATE collect a batch of data $\langle \hat{s}_t,a_t,r_t\rangle$ with $\pi_{\phi}$: $\mathcal{D} = \mathcal{D} \cup \{\langle \hat{s}_t,a_t,r_t\rangle\}$
\IF{$\langle v_t,\Pi_{\prec v_t}, r_t \rangle$ is not in $\mathcal{D}_{score}$}
\STATE store $\langle v_t,\Pi_{\prec v_t}, r_t \rangle$ in $\mathcal{D}_{score}$
\ENDIF
\ENDFOR
\STATE update $\phi_e,\phi_d$, and $\phi_v$ as described in Section~4.2
\IF{$\sum^T_{t=0}r_t > \text{BestScore}$}
\STATE update the BestScore and BestOrdering
\ENDIF
\ENDWHILE
\STATE get the final DAG by pruning the BestOrdering
\end{algorithmic}
\end{algorithm}
\subsection{Consistency Analysis}
So far we have presented CORL in a general manner without specifying explicitly
the distribution family for calculating the scores or rewards. In principle, any distribution family could be employed
as long as its log-likelihood can be computed.
However, whether the maximization of the accumulated reward recovers the correct ordering, i.e., whether consistency of the score function holds, depends on both the modelling choice of reward and the underlying SEM. If the SEM is identifiable, then the following proposition shows that it is possible to find the correct ordering with high probability in the large sample limit.
\begin{proposition}
Suppose that an identifiable SEM with true causal DAG $\mathcal{G}^{*}$ on $X=\{X_j\}_{j=1}^d$ induces distribution $P(X)$. Let $\mathcal{G}^{\Pi}$ be the fully-connected DAG that corresponds to an ordering $\Pi$. If there is an SEM with $\mathcal{G}^{\Pi}$ inducing the same distribution $P(X)$, then $\mathcal{G}^{\Pi}$ must be a super-graph of $\mathcal{G}^{*}$, i.e., every edge in $\mathcal{G}^{*}$ is covered in $\mathcal{G}^{\Pi} $.
\end{proposition}
\begin{proof}
The SEM with $\mathcal{G}^{\Pi}$ may not be causally minimal but can be reduced to an SEM satisfying the causal minimality condition \cite{10.5555/2627435.2670315}. Let $\tilde{\mathcal{G}}^{\Pi}$ denote the causal graph in the reduced SEM with the same distribution $P(X)$. Since we have assumed that the original SEM is identifiable, i.e., the distribution $P(X)$ corresponds to a unique true graph, $\tilde{\mathcal{G}}^{\Pi}$ is then identical to $\mathcal{G}^{*}$. The proof is complete by noticing that $\mathcal{G}^{\Pi}$ is a super-graph of $\tilde{\mathcal{G}}^{\Pi}$.
\end{proof}
Thus, if the causal relationships fall into the chosen model functions and the right distribution family is assumed, then given infinite samples the optimal accumulated reward (e.g., the optimal BIC score) must be achieved by a super-DAG of the underlying graph.
However,
finding the optimal accumulated reward may be hard,
because policy gradient methods only guarantee local convergence \cite{sutton2018reinforcement}, and we can only apply approximate model functions and also need to assume a certain distribution family for calculating the reward.
Nevertheless, the experimental results in Section~5 show that the proposed method can achieve better performance than methods with consistency guarantees in the finite sample regime, thanks to the improved search ability of modern RL methods.
\subsection{Computational Complexity}
In contrast with typical RL applications, we treat RL here as a search strategy, aiming to find an ordering that achieves the best score.
CORL requires the evaluation of the rewards at each episode with $\mathcal{O}(dm^2+d^{3})$ computational cost if linear functions are adopted to model the causal relations, which is the same as RL-BIC2 \cite{zhu2020causal}. Fortunately, CORL does not need to compute the matrix exponential term with $\mathcal O(d^3)$ cost, thanks to the use of ordering search.
We observe that CORL performs fewer episodes than RL-BIC2 before the episode reward converges (see Appendix~C).
The evaluations of the Transformer encoder and LSTM decoder in CORL take $\mathcal{O}(nd^{2})$ and $\mathcal{O}(dn^{2})$,
respectively. However, we find that computing rewards dominates the total running time (e.g., around $95\%$ and $87\%$ for $30$- and $100$-node linear data models). Thus, we record the decomposed scores for each variable $v_j$ with different parental sets $\Pi_{\prec v_j}$ to avoid repeated computations.
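The caching idea can be sketched as follows (hypothetical `local_score` standing in for the BIC computation): since the decomposed score of a variable depends only on its parental set, repeated (variable, parents) pairs across episodes are served from a memo table:

```python
from functools import lru_cache

def make_cached_score(local_score):
    """Memoize an expensive per-variable score over hashable parent sets."""
    @lru_cache(maxsize=None)
    def cached(v, parents_frozen):
        return local_score(v, parents_frozen)
    return cached

calls = []
def slow_score(v, parents):            # stand-in for BIC via regression
    calls.append((v, parents))
    return float(v) - 0.1 * len(parents)

score = make_cached_score(slow_score)
score(3, frozenset({0, 1}))
score(3, frozenset({0, 1}))            # cache hit: slow_score is not called again
```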
\section{Experiments}
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth,]{figs/corl-2-pre-gridxy.pdf}
\caption{Learning curves of CORL-1, CORL-2 and CORL-2-pretrain on $100$-node LG data sets.}
\label{reward-curve-corl2}
\end{figure}
\begin{figure*}[ht]
\centering
\subfloat[{TPR on GP10}]{
\includegraphics[width=0.35\linewidth,height=0.33\linewidth]{figs/10tpr.pdf}
}\quad \quad \quad
\subfloat[SHD on GP10]{
\includegraphics[width=0.35\linewidth,height=0.33\linewidth]{figs/10shd.pdf}
}\\
\subfloat[TPR on GP30]{
\includegraphics[width=0.35\linewidth,height=0.33\linewidth]{figs/30tpr_V1.pdf}
}\quad \quad \quad
\subfloat[SHD on GP30]{
\includegraphics[width=0.35\linewidth,height=0.33\linewidth]{figs/30shd_V1.pdf}
}
\caption{The empirical results on GP data models with 10 and 30 nodes.}
\label{nonlinear_10_30}
\end{figure*}
In this section, we conduct experiments on synthetic data sets with linear and non-linear causal relationships as well as a real data set.
The baselines are {ICA-LiNGAM} \cite{shimizu2006linear}, three ordering-based approaches L1OBS \cite{schmidt2007learning}, CAM \cite{buhlmann2014cam} and A* Lasso \cite{Xiang2013lasso}, some recent gradient-based approaches NOTEARS \cite{zheng2018dags}, {DAG-GNN} \cite{yu19dag} and {GraN-DAG} \cite{Lachapelle2019grandag}, and the RL-based approach {RL-BIC2} \cite{zhu2020causal}.
We use the original implementations (see Appendix B.1 for details) and pick the recommended hyper-parameters unless otherwise stated.
We generate different types of synthetic data sets which vary along: level of edge sparsity, graph type, number of nodes, causal functions and sample size. Two types of graph sampling schemes, Erdös–Rényi (ER) and Scale-free (SF), are considered here.
We denote $d$-node ER and SF graphs with on average $hd$ edges as ER$h$ and SF$h$, respectively.
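For illustration, an ER$h$ DAG can be sampled by drawing a strictly lower-triangular adjacency matrix with edge probability $2h/(d-1)$ (so the expected number of edges is $hd$) and then randomly relabeling the nodes; a NumPy sketch (SF graphs would instead use preferential attachment):

```python
import numpy as np

def sample_er_dag(d, h, rng):
    """Sample an ER-style DAG with on average h*d edges."""
    p = 2.0 * h / (d - 1)                      # d*(d-1)/2 * p = h*d expected edges
    A = np.tril(rng.random((d, d)) < p, k=-1).astype(int)
    perm = rng.permutation(d)
    return A[np.ix_(perm, perm)]               # relabeling keeps the graph acyclic

rng = np.random.default_rng(0)
A = sample_er_dag(30, 2, rng)
```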
Two common metrics are considered: True Positive Rate ({TPR}) and Structural Hamming Distance ({SHD}). The former is the proportion of edges in the true graph that are correctly recovered, and the higher the better; it can thus also be used to measure the quality of an ordering. The latter counts the total number of missing, falsely detected or reversed edges, and the smaller the better.
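The two metrics can be computed from the true and estimated adjacency matrices as follows (a sketch assuming both matrices are DAG adjacencies, so a reversed edge shows up as one missing plus one extra entry and is counted once in SHD):

```python
import numpy as np

def tpr_shd(A_true, A_est):
    """TPR: fraction of true edges recovered with the correct direction.
    SHD: number of missing, extra, and reversed edges."""
    true_edges = (A_true == 1)
    tp = np.logical_and(true_edges, A_est == 1).sum()
    tpr = tp / true_edges.sum()
    diff = np.abs(A_true - A_est)
    # A reversed edge produces two differing entries; count it only once.
    reversed_ = np.logical_and(diff == 1, diff.T == 1).sum() // 2
    shd = diff.sum() - reversed_
    return tpr, shd
```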
\subsection{Linear Models with Gaussian and Non-Gaussian Noise}
We evaluate the proposed methods on Linear Gaussian (LG) data models with equal-variance Gaussian noise and on LiNGAM data models; the true DAGs in both
cases are known to be identifiable \cite{peters2014identifiability,shimizu2006linear}.
We set $h \in \{2, 5\}$ and $d \in \{30,50, 100\}$ to generate observed data (see Appendix~B.2 for details).
For variable selection, we set the thresholding as $0.3$ and apply it to the estimated coefficients, as similarly used by \cite{zheng2018dags,zhu2020causal}.
Table~\ref{30-Lin_results} presents the results for $30$- and $100$-node LG data models; the conclusions do not change with $50$-node graphs, which
are given in Appendix~D.
The performances of ICA-LiNGAM, GraN-DAG and CAM are also given in Appendix~D, and they are almost never on par with the best methods presented in this section. CORL-1 and CORL-2 achieve consistently good results on LiNGAM data sets, which are reported in Appendix~E due to the space limit.
We now examine Table~\ref{30-Lin_results} (the values in parentheses represent the standard deviation across data sets per task). Across all settings, CORL-1 and CORL-2 are the best performing methods in terms of both TPR and SHD, while NOTEARS and DAG-GNN are not too far behind. In Figure~\ref{reward-curve-corl2}, we further show the training reward curves of CORL-1 and CORL-2 on $100$-node LG data sets, where CORL-2 converges faster to a better ordering than CORL-1. We conjecture that this is because dense rewards can provide more guidance information for the training process than episodic rewards,
which is beneficial to the learning of the RL model and improves training performance. {Hence, CORL-2 is preferred in practice if the score function is decomposable for each variable.}
As discussed previously, RL-BIC2 only achieves satisfactory results on graphs with $30$ nodes.
The TPR of L1OBS is lower than that of A* Lasso, which indicates that L1OBS using greedy hill-climbing with tabu lists may not find a good ordering.
Note that the SHDs of L1OBS and A* Lasso reported here are the results after applying the introduced pruning method. We observe that the SHDs are greatly improved after pruning. For example, the SHDs of L1OBS decrease from $171.6$ $(29.5)$, $588.0$ $(66.2)$ and $1964.5$ $(136.6)$ to $85.2$ $(23.8)$, $215.4$ $(26.3)$ and $481.2$ $(49.9)$ for ER2 graphs with $30$, $50$ and $100$ nodes, respectively, while the TPRs remain almost the same.
We have also evaluated our method on $150$-node LG data models on ER2 graphs. CORL-1 has TPR and SHD being $0.95$ $(0.01)$ and $63.7$ $(9.1)$, while CORL-2 has $0.97$ $(0.01)$ and $38.3$ $(14.3)$, respectively. CORL-2 outperforms NOTEARS that achieves $0.94$ $(0.02)$ and $50.8$ $(21.8)$.
\paragraph{Pretraining.} We show the training reward curve of CORL-2-pretrain in Figure~\ref{reward-curve-corl2}, where the model parameters are pretrained in a supervised manner. The data sets used for pretraining contain $30$-node ER2 and SF2 graphs with different causal relationships. Note that the data sets used for evaluation are different from those used for pretraining. Compared to that of CORL-2 using random initialization, a pretrained model can accelerate the model learning process. Although pretraining requires additional time, it is only
carried out once, after which the pretrained model can be used for multiple causal discovery tasks.
A similar conclusion can be drawn for CORL-1, as shown in Appendix~G.
\paragraph{Running time.} { We also report the running time of all the methods on $30$- and $100$-node linear data models: CORL-1, CORL-2, GraN-DAG and DAG-GNN $\approx$ $15$ minutes for $30$-node graphs; CORL-1 and CORL-2 $\approx$ $7$ hours against GraN-DAG and DAG-GNN $\approx$ $4$ hours for $100$-node graphs;
CAM $\approx$ $15$ minutes for both $30$- and $100$-node graphs, while L1OBS and A* Lasso $\approx$ $2$ minutes for those tasks;
NOTEARS $\approx$ $5$ minutes and $\approx$ $1$ hour for the two tasks
respectively; RL-BIC2 $\approx$ $3$ hours for $30$-node graphs.
We set the maximum running time to $15$ hours, but RL-BIC2 did not converge on $100$-node graphs, hence we do not report its results. Note that the running time can be significantly reduced by parallelizing the reward evaluation.
The neural network based learning methods
generally take longer, and the proposed method achieves the best performance among them.
}
\subsection{Non-Linear Model with Gaussian Process}
In this experiment, we
consider causal relationships with $f_j$ sampled from a Gaussian Process (GP) with a radial basis function kernel of bandwidth one. The additive noise follows the standard Gaussian distribution, and this model is known to be identifiable \cite{10.5555/2627435.2670315}.
We consider ER1 and ER4 graphs with different sample sizes (see Appendix~B.2 for the generation of data sets), and we only report the results with $m=500$ samples due to the space limit (the remaining results are given in Appendix~F).
For comparison, only the methods that have been shown competitive for this non-linear data model in existing works \cite{zhu2020causal,Lachapelle2019grandag} are included. For a given ordering, we follow \cite{zhu2020causal} to use GP regression to fit the causal relationships.
{We also set a maximum time limit of $15$ hours for all the methods for fair comparison and only graphs with up to $30$ nodes are considered here}, as using GP regression to calculate the scores is time-consuming.
The variable selection method used here is the CAM pruning from \cite{buhlmann2014cam}.
The results on $10$- and $30$-node data sets with ER1 and ER4 graphs are shown in Figure~\ref{nonlinear_10_30}.
Overall, both GraN-DAG and DAG-GNN perform worse than CAM. We conjecture that this is because the number of samples is not sufficient for GraN-DAG
and DAG-GNN to fit neural networks well, as also shown by \cite{Lachapelle2019grandag}. CAM, CORL-1, and CORL-2 have similar results, with CORL-2 performing the best on $10$-node graphs and being slightly worse than CAM on $30$-node graphs. All of these methods have better results on ER1 graphs than on ER4 graphs, especially with $30$ nodes. We also notice that CORL-2 only runs about $700$ iterations on $30$-node graphs and about $5000$ iterations on $10$-node graphs within the time limit, due to the increased time from GP regression. Nonetheless, the proposed method achieves a much improved performance compared with the existing RL-based method.
\subsection{Real Data}
The Sachs data set \cite{sachs2005causal}, with $11$-node and $17$-edge true graph, is widely used for research on graphical models. The expression levels of protein and phospholipid in the data set can be used to discover the implicit protein signal network.
The observational data set has $m = 853$ samples and is used to discover the causal structure.
We similarly use Gaussian Process regression to model the causal relationships when calculating the score.
In this experiment,
CORL-1, CORL-2 and RL-BIC2 achieve the best SHD $11$.
CAM, GraN-DAG, and ICA-LiNGAM achieve SHDs $12$, $13$ and $14$, respectively.
Particularly, DAG-GNN and NOTEARS result in SHDs $16$ and $19$, respectively, whereas an empty graph has SHD $17$.
\section{Conclusion}
In this work, we have incorporated RL into the ordering-based paradigm for causal discovery, where a generated ordering can be pruned by variable selection to obtain the causal DAG. Two methods are developed based on the MDP formulation and an encoder-decoder framework. We further analyze the consistency and computational complexity for the proposed approach. Empirical results validate the improved performance over existing RL-based causal discovery approach.
\section*{Acknowledgments}
{Xiaoqiang Wang and Liangjun Ke were supported by the National Natural Science Foundation of China under Grant 61973244.}
{
\bibliographystyle{named}
\section*{Appendix}
\section{Architectures and Hyper-Parameters}
\subsection{Encoder and Decoder Architectures} \label{multi-encoder-decoder}
There are a variety of neural network modules that can be used for encoder and decoder architectures. Here we consider some representative modules, including: Multi Layer Perceptrons (MLP) module, an LSTM based recurrent neural network module, and the self-attention based encoder from the Transformer structure. In addition, we use the original observational data as the state directly, i.e., no encoder module is used, which is denoted as Null. More details regarding the architectures and associated hyper-parameters choices will be presented in Appendix~\ref{transformer-lstm}.
Table~\ref{encoder-decoder-results} reports the empirical results of CORL-2 on $30$-node LG ER2 data sets where the noise variances are equal (see Appendix~B.2 for details about data generation). We observe that the LSTM based decoder achieves better performance than the MLP based decoder, which indicates that LSTM is more effective than MLP in sequential decision tasks. The overall performance of the neural network encoders is better than that of Null, which shows that pre-processing the data with an encoder module is necessary. Among these encoders, the Transformer encoder achieves the best results. A similar conclusion was drawn in \cite{zhu2020causal}, and we hypothesize that the Transformer encoder benefits from the self-attention scheme that provides sufficient interactions amongst variables.
\begin{table*}[ht]
\caption{Empirical results of CORL-2 with different encoder and decoder architectures on $30$-node LG ER2 data sets. {True Positive Rate ({TPR}) indicates the probability of correctly finding the positive edges among the discoveries, and the higher the better. Structural Hamming Distance ({SHD}) counts the total number of missing, falsely detected or reversed edges, and the smaller the better.}}
\label{encoder-decoder-results}
\centering \footnotesize
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{lccccc}
\toprule
&&&Encoder \\
\cmidrule(){3-6}
& & Null & LSTM & MLP & Transformer \\
\cmidrule(){1-6}
\multirow{2}{*}{MLP Decoder} & TPR & 0.81 (0.07) & {0.86 (0.10)} & {0.96 (0.02)} & 0.98 (0.02) \\
& SHD &54.2 (25.1) & 36.0 (26.7) & 11.0 (5.3) & 5.0 (3.3)\\
\cmidrule(){1-6}
\multirow{2}{*}{LSTM Decoder}
& TPR & 0.94 (0.04) & {0.88 (0.09)} & 0.97 (0.01) & {0.99 (0.01)} \\
& SHD & 20.6 (20.0) & 29.0 (17.8)& 8.6 (4.3) & 4.4 (3.5) \\
\bottomrule
\end{tabular}}
\end{table*}
\subsection{Model Architectures and Hyper-Parameters}
\label{transformer-lstm}
\begin{figure}[ht]
\centering
\includegraphics[width=.4\textwidth,height=.3\textwidth]{figs/encoder1.pdf}
\caption{Illustration of the Transformer encoder. The encoder embeds the observed data $\mathrm{x}_j$ of each variable $X_j$ into state $s_j$. Notation block@3 denotes three blocks.}
\label{model-encoder}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth,]{figs/decoder1.pdf}
\caption{Illustration of LSTM decoder. At each time step, it maps the state $\hat{s}_t$ to a distribution over action space $\mathcal{A}=\{a_1,\cdots,a_d\}$ and then selects an action (variable) according to the distribution.}
\label{model-decoder}
\end{figure}
The neural network structure of the Transformer encoder used in our experiments is given in Figure \ref{model-encoder}. It consists of a feed-forward layer with $256$ units and three blocks. Each block is composed of a multi-head attention network with $8$ heads and $2$-layer feed-forward neural networks with $1024$ and $256$ units, and each feed-forward layer is followed by a normalization layer.
Given a batch of observed samples with shape $b\times d \times n$, with $b$ denoting the batch size, $d$ the number of nodes and $n$ the number of observed samples per variable, the final output of the encoder is a batch of embedded states with shape $b\times d \times 256$.
We illustrate the neural network structure of the LSTM based decoder in Figure \ref{model-decoder}, which is similar to the decoder proposed by \cite{vinyals2015pointer}. The LSTM takes a state as input and outputs an embedding. The embedding is mapped to the action space $\mathcal{A}$ by using some feed-forward neural networks, a soft-max module and the pointer mechanism \cite{vinyals2015pointer}. The LSTM module with $256$ hidden units is used here.
The outputs of the encoder are processed as the initial hidden state $h_0$ for the decoder. All of the feed-forward neural networks used in the decoder have $256$ units. For the LSTM encoder, only the standard LSTM module with $256$ hidden units is used and its output is treated as the embedded state.
The MLP module consists of $3$-layer feed-forward neural networks with $256$, $512$ and $256$ units. For MLP encoder, only the standard MLP module is used. For MLP decoder, its structure is similar to Figure~2, in which LSTM module is replaced by the MLP module.
Both CORL-1 and CORL-2 use the actor-critic algorithm to train the model parameters. We use the Adam optimizer with learning rate $1e{-4}$ and $1e{-3}$ for the actor and critic, respectively.
The discount factor $\gamma$ is set to $0.98$. The actor consists of an encoder and a decoder whose choices have been described above.
The critic uses $3$-layer feed-forward neural networks with $512$, $256$, and $1$ units, which takes a state $\hat{s}$ as input and outputs a predicted value for the current policy given state $\hat{s}$.
For CORL-1, the critic needs to predict the score for each state $\hat{s}_t$, while for CORL-2, the critic takes the initial state $\hat{s}_0$ as input and outputs a predicted value directly for a complete ordering.
\section{Baselines and Data Sets}
\subsection{Baselines}\label{appendix-baselines}
The baselines considered in our experiments are listed as follows:
\begin{itemize}
\item ICA-LiNGAM assumes a linear non-Gaussian additive model for the data generating procedure and applies independent component analysis to recover the weighted adjacency matrix. This method usually achieves good performance on LiNGAM data sets. However, it provides no guarantees for linear Gaussian data sets.\footnote{\url{https://sites.google.com/site/sshimizu06/lingam}}
\item NOTEARS recovers the causal graph by estimating the weighted adjacency matrix with the least squares loss and a smooth characterization for acyclicity constraint.\footnote{
\url{https://github.com/xunzheng/notears }}
\item DAG-GNN formulates causal discovery in the framework of variational autoencoder. It uses a modified smooth characterization for acyclicity and optimizes a weighted adjacency matrix with the evidence lower bound as loss function.\footnote{\url{https://github.com/fishmoon1234/DAG-GNN}}
\item GraN-DAG models the conditional distribution of each variable given its parents with feed-forward neural networks. It also uses the smooth acyclicity constraint from NOTEARS to find a DAG that maximizes the log-likelihood of the observed samples.\footnote{\url{https://github.com/kurowasan/GraN-DAG}}
\item RL-BIC2 formulates the causal discovery as a one-step decision making process, and combines the score function and acyclicity constraint from NOTEARS as the reward for a directed graph.\footnote{\url{https://github.com/huawei-noah/trustworthyAI/tree/master/Causal_Structure_Learning/Causal_Discovery_RL}}
\item CAM conducts a greedy estimation procedure that starts with an empty
DAG and adds at each iteration the edge $(v_k, v_j)$ between nodes $v_k$ and $v_j$ that corresponds to the largest gain in log-likelihood. For a searched ordering, CAM prunes it to the final DAG by applying significance testing of covariates. CAM also performs preliminary neighborhood selection to reduce the ordering space.\footnote{\url{https://cran.r-project.org/web/packages/CAM}.}
\item L1OBS performs heuristic search (greedy hill-climbing with tabu lists) through the space of topological orderings to seek an ordering with the best score. It uses $\ell_1$ variable selection to prune the searched ordering (fully-connected DAG) to the final DAG.\footnote{\url{https://www.cs.ubc.ca/~murphyk/Software/DAGlearn/}}
\item A* Lasso with a limited queue size incorporates a heuristic scheme into a dynamic programming based method. The queue size usually needs to be adjusted to balance the time cost and the quality of the solution.\footnote{\url{http://www.cs.cmu.edu/~jingx/software/AstarLasso.zip}}
\end{itemize}
\subsection{Data Generation} \label{data-gener}
We generate synthetic data sets which vary along five dimensions: level of edge sparsity, graph type, number of nodes, causal functions and sample size.
We sample $5$ data sets with a required number of samples for each task: a ground truth DAG $\mathcal{G}^*$ is firstly drawn randomly from either the Erdös–Rényi (ER) or Scale-free (SF) graph model, and the data are then generated according to different given SEMs.
Specifically, for Linear Gaussian (LG) models,
we set $h \in \{2, 5\}$ and $d \in \{30,50, 100\}$ to obtain the ER and SF graphs with different levels of edge sparsity and different numbers of nodes. Then we generate $3,000$ samples for each task following the linear SEM: $\mathbf{X} = W^T \mathbf{X} + \epsilon$, where $W\in \mathbb{R}^{d\times d}$ denotes the weighted adjacency matrix obtained by assigning edge weights independently from $\text{Unif}([-2, -0.5]\cup[0.5, 2])$. Here $\epsilon\in\mathbb R^d$ denotes standard Gaussian noise with equal variance for each variable, which makes $\mathcal G^*$ identifiable \cite{peters2014identifiability}.
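The generation procedure for one LG data set can be sketched as follows (a NumPy sketch; the closed form $X = (I - W^\top)^{-1}\epsilon$ is valid because $\mathcal{G}^*$ is acyclic):

```python
import numpy as np

def sample_linear_sem(A, m, rng):
    """Draw m samples from the linear SEM X = W^T X + eps given DAG adjacency A.
    Edge weights ~ Unif([-2,-0.5] U [0.5,2]); standard Gaussian noise."""
    d = A.shape[0]
    W = A * rng.choice([-1.0, 1.0], size=(d, d)) * rng.uniform(0.5, 2.0, size=(d, d))
    eps = rng.standard_normal((m, d))
    # (I - W^T) X = eps  =>  each sample row is eps_row @ (I - W)^{-1}
    return eps @ np.linalg.inv(np.eye(d) - W), W

rng = np.random.default_rng(0)
A = np.array([[0, 1], [0, 0]])         # single edge X_0 -> X_1
X, W = sample_linear_sem(A, 50000, rng)
```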
For the LiNGAM data model, the data sets are generated similarly to the LG data models, except that the noise variables are non-Gaussian: noise samples from a Gaussian distribution are passed through a power nonlinearity to make them non-Gaussian \cite{shimizu2006linear}. LiNGAM is also identifiable, as shown in \cite{shimizu2006linear}.
The GP data sets with different sample sizes are generated following $X_j = f_j(\text{Pa}(X_j)) + \epsilon_j$, where $f_j$ is sampled from a GP with a radial basis function kernel of bandwidth one and $\epsilon_j$ follows the standard Gaussian distribution.
This setting is also identifiable according to \cite{10.5555/2627435.2670315}.
Since using GP regression to calculate the rewards is time-consuming, we only conduct experiments with up to $30$ variables.
\section{Number of Episodes Before Convergence}\label{section-iteration}
Table~\ref{iteration-number} reports the total number of episodes required for CORL-2 and RL-BIC2 to converge, averaged over five
seeds. CORL-2 performs fewer episodes than RL-BIC2 to reach convergence, which we believe is due to the reduced search space and to avoiding the acyclicity constraint.
\begin{table*}[ht]
\caption{Total number of episodes ($\times10^3$) to reach convergence.}
\label{iteration-number}
\centering \footnotesize
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{2}{c}{30 nodes} & \multicolumn{2}{c}{50 nodes } & \multicolumn{2}{c}{100 nodes } \\
\cmidrule(){2-7}
& ER2 & ER5 & ER2 & ER5 & ER2 & ER5 \\
\cmidrule(){1-7}
CORL-2 & 1.0 (0.3) & 1.1 (0.4) & 1.9 (0.3) & {2.4 (0.3)} & {2.3 (0.5)} & 2.9 (0.4) \\
RL-BIC2 & 3.9 (0.5) & 4.1 (0.6) & 3.4 (0.4) & {3.5 (0.5)} & $\times$ & $\times$ \\
\bottomrule
\end{tabular}}
\end{table*}
\section{Additional Results on LG Data Sets}
\label{Linear-additional}
The results for $50$-node LG data models are presented in Table~\ref{50-Lin_results}. The conclusion is similar to that of the $30$- and
$100$-node experiments.
The results of ICA-LiNGAM, GraN-DAG and CAM on LG data models are presented in Table~\ref{otherBaselines-Lin_results}. Their performances do not compare favorably to either CORL-1 or CORL-2 on LG data sets.
It is not surprising that ICA-LiNGAM does not perform well, because the algorithm is specifically designed for non-Gaussian noise and provides no guarantees for LG data models.
We hypothesize that CAM's poor performance on LG data models is because it uses nonlinear regression instead of linear regression. As for GraN-DAG, it uses $2$-layer feed-forward neural networks to model the causal relationships, which may not be able to learn a good linear relationship in this experiment.
\begin{table*}[ht]
\caption{Empirical results for ER and SF graphs of $50$ nodes with LG data. The higher TPR the better, the smaller SHD the better.}
\label{50-Lin_results}
\centering \footnotesize
\begin{tabular}{lccccccccc}
\toprule
& & RANDOM & NOTEARS & DAG-GNN & RL-BIC2 & L1OBS & A{*} Lasso & CORL-1 & CORL-2 \\
\midrule
\multirow{2}{*}{ER2} & TPR & 0.31 (0.03) & 0.94 (0.02) & 0.94 (0.04) & 0.79 (0.10) & 0.56 (0.02) & 0.88 (0.03) & \bf{0.97 (0.04)} & \bf{0.97 (0.02)} \\
& SHD & 295.4 (28.5) & 38.6 (10.8) & 30.6 (8.3) & 88.5 (49.3) & 288.0 (66.2) &154.3 (27.6) &\bf{24.0 (32.3)} & \bf{17.9 (10.6)}\\
\midrule
\multirow{2}{*}{ER5} & TPR& 0.32 (0.02) &\bf{0.90 (0.01)} & 0.87 (0.14)& {0.74 (0.03)} & {0.57 (0.03)} & {0.82 (0.03)} & \bf{0.90 (0.02)} & \bf{0.92 (0.02)} \\
& SHD& 378.4 (24.2) & \bf{67.8 (7.5)} & 93.2 (109.4) & {128.9 (40.4)} & 299.4 (53.6) &{104.0 (28.3)} &\bf{68.3 (10.2)} & \bf{64.8 (13.1)}\\
\midrule
\multirow{2}{*}{SF2} & TPR & 0.49 (0.04) & 0.99 (0.01) & 0.90 (0.13)& {0.84 (0.05)} & {0.67 (0.02)} & {0.89 (0.03)} & \bf{1.00 (0.00)} & \bf{1.00 (0.00)} \\
& SHD & 215.6 (14.7) & 3.5 (1.6) & {79.3 (93.2)} & {115.2 (57.4)} & {182.3 (33.4)} & {124.0 (35.2)} &\bf{0.0 (0.0)} & \bf{0.0 (0.0)}\\
\midrule
\multirow{2}{*}{SF5} & TPR & 0.51 (0.03) & \bf{0.94 (0.12)} & {0.88 (0.12)} & {0.75 (0.05)} & {0.61 (0.03)} & {0.81 (0.02)} & \bf{0.94 (0.03)} & \bf{0.95 (0.02)} \\
& SHD & 345.6 (24.3) & \bf{20.1 (14.3)} & {89.2 (99.2)} & {115.2 (57.4)} & {217.3 (36.4)} &{131.0 (25.3)} &\bf{24.3 (11.2)} & \bf{20.8 (10.1)}\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[ht]
\caption{Empirical results of ICA-LiNGAM, GraN-DAG and CAM (against CORL-2 for reference) for ER and SF graphs with LG data.}
\label{otherBaselines-Lin_results}
\centering \footnotesize
\begin{tabular}{lcccccc}
\toprule
& & & ICA-LiNGAM & GraN-DAG & CAM & CORL-2 \\
\midrule
\multirow{8}{*}{30 nodes} & \multirow{2}{*}{ER2} & TPR & 0.75 (0.03) & 0.51 (0.17) & 0.47 (0.05) & \bf{0.99 (0.01)} \\
& & SHD & 112.3 (12.8) & 96.0 (11.3) & 110.8 (10.3) & \bf{4.4 (3.5)}\\
\cmidrule(){2-7}
& \multirow{2}{*}{ER5} & TPR &0.57 (0.03) & {0.52 (0.03)} & {0.46 (0.02)} & \bf{0.95 (0.03)} \\
& & SHD & {161.8 (9.2)} & {175.2 (27.4)} & {191.3 (32.5)} & \bf{37.6 (14.5)}\\
\cmidrule(){2-7}
& \multirow{2}{*}{SF2} & TPR &{0.58 (0.05)} & {0.61 (0.04)} & {0.63 (0.02)} & \bf{1.0 (0.0)} \\
&& SHD & 149.0 (19.8) & {136.4 (21.2)} & {115.2 (26.7)} & \bf{0.0 (0.0)}\\
\cmidrule(){2-7}
& \multirow{2}{*}{SF5} & TPR &{0.56 (0.04)} & {0.58 (0.02)} & {0.60 (0.03)} & \bf{1.0 (0.0)} \\
&& SHD &{160.5 (8.9)} & 142.4 (24.3) & {122.2 (17.4)} & \bf{0.0 (0.0)}\\
\midrule
\multirow{8}{*}{50 nodes} & \multirow{2}{*}{ER2} & TPR & 0.73 (0.03) & 0.11 (0.04)& 0.55 (0.06) & \bf{0.97 (0.02)} \\
& & SHD & 108.8 (11.3) & 173.0 (22.9) & 140.8 (35.4) & \bf{17.9 (10.6)}\\
\cmidrule(){2-7}
& \multirow{2}{*}{ER5} & TPR & {0.57 (0.01)} & {0.64 (0.03)} & {0.61 (0.02)} & \bf{0.92 (0.02)} \\
& & SHD & {199.8 (90.7)} & {154.2 (36.4)} & 178.3 (34.8) & \bf{64.8 (13.1)}\\
\cmidrule(){2-7}
& \multirow{2}{*}{SF2} & TPR &{0.59 (0.04)} & {0.44 (0.05)} & {0.57 (0.02)} & \bf{1.00 (0.00)} \\
&& SHD &{208.5 (83.2)} & 158.6 (34.5) & {131.2 (24.4)} & \bf{0.0 (0.0)}\\
\cmidrule(){2-7}
& \multirow{2}{*}{SF5} & TPR &{0.57 (0.01)} & {0.49 (0.04)} & {0.53 (0.03)} & \bf{0.95 (0.02)} \\
&& SHD & 216.6 (88.4) & {243.9 (27.2)} & {235.2 (34.2)} & \bf{20.8 (10.1)} \\
\midrule
\multirow{8}{*}{100 nodes} & \multirow{2}{*}{ER2} & TPR & 0.73 (0.02) & 0.38 (0.02) & 0.43 (0.02) & \bf{0.98 (0.01)} \\
& & SHD & 268.4 (28.5) & 191.3 (31.9) & 126.4 (27.8) & \bf{18.6 (5.7)}\\
\cmidrule(){2-7}
& \multirow{2}{*}{ER5} & TPR &{0.57 (0.05)} & {0.42 (0.03)} & {0.47 (0.02)} & \bf{0.94 (0.03)} \\
& & SHD & 311.1 (63.7) & {208.2 (54.4)} & 182.3 (34.9) & \bf{164.8 (17.1)}\\
\cmidrule(){2-7}
& \multirow{2}{*}{SF2} & TPR & {0.69 (0.03)} & {0.40 (0.03)} & {0.44 (0.02)} & \bf{1.00 (0.00)} \\
&& SHD &367.6 (67.5) & {239.9 (43.2)} & {35.2 (37.4)} & \bf{0.0 (0.0)}\\
\cmidrule(){2-7}
& \multirow{2}{*}{SF5} & TPR &{0.57 (0.05)} & {0.39 (0.03)} & {0.48 (0.04)} & \bf{0.98 (0.01)} \\
&& SHD & 362.3 (82.8) & {219.3 (32.2)} & {125.2 (24.7)} & \bf{10.8 (6.1)}\\
\bottomrule
\end{tabular}
\end{table*}
\section{Results on LiNGAM Data Sets} \label{LiNGAM-results}
\begin{table*}[ht]
\caption{Empirical results on $30$-, $50$- and $100$-node LiNGAM ER2 data sets.}
\label{LiNGAM-30-50-100-Table-results}
\centering \footnotesize
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{2}{c}{30 nodes ER2} & \multicolumn{2}{c}{50 nodes ER2} & \multicolumn{2}{c}{100 nodes ER2} \\
\cmidrule(){1-7}
Method & TPR & SHD & TPR & SHD & TPR & SHD \\
\midrule
\bf{ICA-LiNGAM} & \bf{1.00 (0.00)} & \bf{0.0 (0.0)} & \bf{1.00 (0.00)} & \bf{0.0 (0.0)} & \bf{1.00 (0.00)} & \bf{1.0 (0.9)} \\
NOTEARS & 0.94 (0.04) & 17.2 (13.2) & {0.95 (0.02)} & {33.2 (16.5)} & {0.94 (0.03)} & 69.2 (23.2) \\
DAG-GNN &0.94 (0.03)&19.6 (10.5) &{0.96 (0.01)}&{24.6 (2.9)} & 0.93 (0.03) & 66.2 (19.2)\\
GraN-DAG & 0.28 (0.09)&100.8 (14.6)&0.20 (0.01)&177.0 (25.9) &0.16 (0.04) & 312.8 (25.2) \\
RL-BIC2 & 0.94 (0.07) & 19.8 (23.0) & 0.80 (0.08) & 86.0 (51.9) & 0.13 (0.12) & 291.3 (24.1) \\
CAM & 0.60 (0.11) & 310.0 (34.0) & 0.33 (0.07) & 178.0 (31.9) & 0.53 (0.05) & 247.2 (32.1) \\
L1OBS & 0.72 (0.04) & 85.3 (23.3) & 0.47 (0.02) & 212.6 (24.6) & 0.41 (0.03) & 470.5 (48.1) \\
A{*} Lasso & 0.87 (0.03) & 42.3 (16.3) & 0.88 (0.03) & 82.6 (17.6) & 0.85 (0.04) & 102.5 (22.6) \\
\midrule
\bf{CORL-1} & \bf{0.99 (0.01)} & \bf{3.8 (6.4)} & \bf{0.96 (0.06)} & \bf{24.6 (37.7)} &\bf{0.98 (0.01)} & \bf{20.0 (7.9)}\\
\bf{CORL-2} & \bf{0.99 (0.01)} & \bf{3.9 (5.6)} & \bf{0.96 (0.08)} & \bf{20.2 (11.3)} &\bf{0.99 (0.01)} & \bf{13.8 (7.2)}\\
\bottomrule
\end{tabular}
\end{table*}
We report the empirical results on $30$-, $50$- and $100$-node LiNGAM data sets in Table~\ref{LiNGAM-30-50-100-Table-results}. For L1OBS, we increased the recommended number of evaluations, from $2,500$ to $10,000$. For A* Lasso, we pick the queue size from $\{10, 500, 1000\}$, and report the best result out of these parameter settings.
The results of L1OBS and A* Lasso reported here are those after pruning with the same method as used by CORL-2.
For other baselines, we pick the recommended hyper-parameters.
Among all these algorithms, ICA-LiNGAM can recover the true graph on most of the LiNGAM data sets; this is expected, because ICA-LiNGAM is specifically designed for non-Gaussian noise.
CORL-1 and CORL-2 achieve consistently good results compared with the other baselines.
\section{Results on 20-Node GP Data Sets with Different Sample Sizes}\label{diff-smaplesize}
We take the $20$-node GP data models as an example to show the performance of our method w.r.t.~different sample sizes. The data are generated based on ER4 graphs. We illustrate the empirical results in Table~\ref{gp-20-ER1-ER4-results}.
Since previous experiments have shown that CORL-2 is slightly better than CORL-1, we only report the results of CORL-2 here.
We also report the results of CAM on these data sets, as it is the most competitive baseline. TPR reported here is calculated based on the variable ordering (i.e., w.r.t.~its corresponding fully-connected DAG). As the sample size decreases, CORL-2 tends to perform better than CAM, and we believe this is because CORL-2 benefits from the exploration ability of RL.
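Throughout these experiments, ER$k$ denotes Erd\H{o}s--R\'enyi-style graphs with, on average, $kd$ edges for $d$ nodes. A minimal sketch of sampling such a random DAG (an illustration under that convention, not the data-generation code used here):

```python
import numpy as np

def random_er_dag(d, k, seed=None):
    # Sample an Erdos-Renyi style random DAG with d nodes and, on average,
    # k*d edges: draw a random node ordering and include each forward edge
    # independently with probability p = 2k/(d-1).
    rng = np.random.default_rng(seed)
    p = min(1.0, 2.0 * k / (d - 1))
    order = rng.permutation(d)
    adj = np.zeros((d, d), dtype=int)
    for a in range(d):
        for b in range(a + 1, d):
            if rng.random() < p:
                adj[order[a], order[b]] = 1   # edge follows the ordering -> no cycles
    return adj
```

Because every edge respects the sampled ordering, acyclicity holds by construction, which mirrors why ordering-based search avoids explicit acyclicity constraints.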
\begin{table*}[ht]
\caption{Empirical results on $20$-node GP ER1 data sets with different sample sizes.}
\label{gp-20-ER1-ER4-results}
\centering \footnotesize
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{lccc}
\toprule
Sample size & Name & TPR & SHD \\
\midrule
\multirow{2}{*}{1000} & {CAM} & {0.91 (0.03)} & {30.0 (3.7)} \\
&{CORL-2} & 0.87 (0.03) & 36.5 (3.1) \\
\midrule
\multirow{2}{*}{500} & {CAM} & {0.86 (0.03)} & {45.0 (2.5)} \\
&{CORL-2} & 0.85 (0.03) & 46.3 (2.3) \\
\midrule
\multirow{2}{*}{400} & CAM & 0.83 (0.02) & 51.0 (2.7) \\
&{CORL-2} &{ 0.84 (0.03)} & {50.5 (3.0)} \\
\midrule
\multirow{2}{*}{200} & CAM & 0.60 (0.03) & 66.3 (1.9) \\
& {CORL-2} & {0.75 (0.02)} & {63.1 (1.5)} \\
\bottomrule
\end{tabular}}
\end{table*}
\section{CORL-1 with a Pretrained Model}\label{corl1-pretrain}
\begin{figure}[ht]
\centering
\includegraphics[width=0.35\textwidth]{figs/corl1_pre.pdf}
\caption{Learning curves of CORL-1 and CORL-1-pretrain on $100$-node LG data sets.}
\label{corl1_pre}
\end{figure}
Figure~\ref{corl1_pre} shows the training reward curves of CORL-1 and CORL-1-pretrain; the latter stands for CORL-1 with a pretrained model. The data sets used for pretraining are $30$-node ER2 and SF2 graphs with different causal relationships. We can observe that the pretrained model, which serves as a good initialization, again accelerates training.
\bibliographystyle{named}
\section{Introduction}
There are many good reasons for the analysis of orbital periods of
eclipsing binaries (hereafter EBs). Many of the mysterious,
astrophysically interesting binaries were revealed thanks to the
analysis of their odd orbital period variations. Fine period analysis
provide additional information about variable stars physics. By means
of period analysis, it is possible to reveal and characterise the
existence of further bodies in the EB system (stars, planets) or mass
exchange between interacting binary components. By analysing period
variations caused by apsidal motion we can deduce the internal
structure of the components and test the theories of gravity. From the
systematic decrease of the period of close EBs, it is possible to
constrain the angular momentum loss through stellar winds or
gravitational waves, etc.
These period variations are always rather delicate. The credibility
of~conclusions based on EB period analysis depends strongly on the
applied methods and reliability of the data used.
\section{Data for period analyses of eclipsing binaries}
There are two types of EB data, used together with phase information,
that are standardly used for orbital period analysis -- original time
series of EB measurements (A) and mid-eclipse timings (B).
\noindent \textbf{A}. Various EB measurement time series types in the
usual order of reliability are:
\begin{itemize}
\item{photoelectric or CCD time series of parts of light curves done
in several filters where EB minima are targeted or non-targeted;}
\item{brightness measurements done by photometric surveys non-targeted
to the particular object (ASAS, Hipparcos etc.);}
\item{photoelectric or CCD time series of parts of light curves done
in one filter or integral light where EB minima are targeted as a
rule;}
\item{time series of spectroscopic observations (line strengths,
radial velocities etc.);}
\item{time series of magnitudes derived from archival photographic
plates;}
\item{time series of individual visual estimates of EB brightness
where eclipses are not targeted.}
\end{itemize}
\noindent \textbf{B}. Timings of primary/secondary EB minima are
preprocessed data derived from original time series of type A, obtained
as a rule during the central parts of eclipses. The whole time series is then
reduced to a single result -- the time of mid-eclipse -- derived
standardly by the obsolete method of \citet{kwee}. This method yields a
usable estimate of the minimum time if the descending and ascending branches
are of the same length. Nevertheless, the uncertainties of minima
timings were generally underestimated and consequently unacceptable.
The sources of eclipsing binary minimum light timings, in order of
their reliability, are:
\begin{itemize}
\item{EB timings derived from photoelectric or CCD time series (a
voluntary degradation from category A to B);}
\item{timings when the EB was definitely fainter than in phases
outside eclipses -- an objective source of information, but of low
quality;}
\item{plenty of amateur visual observations of EB minima -- a subjective
source of information of problematic quality. This was formerly a
most popular and respected amateur activity with a clear scientific
output.}
\end{itemize}
\noindent The only advantage of mid-eclipse timings (B) is the
homogeneity of data from different observers and it is the main reason
why the majority of period analyses were done using these types of
data. However, many such analyses are erroneous because of the lowered
reliability of the input data and because minima timings make almost no
use of the fact that the light curves are periodic functions.
Our experience is that the results of direct period analysis, based on
time series of EB measurements (A), are at least twice as accurate.
\section{Subjective nature of visual estimates of EB magnitudes}
From our long, in-depth experience with visual observations of EBs it
became clear that these data should be treated with caution since the
data suffer from many bad attributes that effectively limit their
suitability for fine period analysis purposes. We can enumerate only
the most serious ones:
\begin{itemize}
\item Due to the poor quality of visual observations, the accuracy of
the determined time of minimum light is much worse than for other
types of photometric observations. The quality of visual timings are
further deteriorated by the fact that some determinations were based
on only a few individual observations done only in the immediate
vicinity of the predicted time of minima (the majority of visual
observation runs begin one hour before the expected minimum time and
end one hour after it). The extreme shortness of observation runs is
typical for some observers and one should be careful when using such
unreliable minima timings \citep[see the number of estimates used for
minima timing determinations in the representative list of such data
in][]{zhu}.
\item For time series of visual estimates, minimum light times were
often determined by obscure, irreproducible methods. In the
worst case, the original estimates were not published and they are now
inaccessible.
\item Almost all standard methods used to estimate mid-eclipse
timings are based on the concepts of the least square method. However,
this method strictly requires successive estimates to be independent.
This requirement is definitely not met in time series of visual
observations. A `visual observer' is not an instrument but a subject
who knows what light curves of observed EBs should look like. He
remembers his last estimates and subconsciously strives to create the
most plausible light curve. `Observed' visual light curves are thus
subjectively smoothed to the ideal of theoretical or photoelectric
light curves with clearly defined descending and ascending branches.
This smoothing, among other things, prevents us from estimating the real uncertainty
of the timings' determination from the time series of only one
observing session. The uncertainties formally calculated and published
from the smoothed light curve data are many times smaller than they
should be.
\item The most severe flaw in the majority of the visual
observations is that observers obviously knew the predicted time of
minimum light to an accuracy of minutes. If the ephemeris for the time
of minimum light was incorrect, most observers were influenced into
confirming the predicted, incorrect time of minimum light. This
subjective effect, which is able to completely invalidate
observations, equally afflicts both beginners and `experienced' visual
observers.
\end{itemize}
All these statements can be exemplary documented for the case of
eclipsing binary BS Vulpeculae.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7,angle=-0]{fig1.eps}
\end{center}
\caption{\oc\ diagram with respect to the linear ephemeris of primary
minima times: JD$_\mathrm{I}=2454387.8692+0.47597003\times E$, where
$E$ is an integer (epoch). Large circles with error bars correspond to
`virtual times of eclipses' \citep[][]{mikbezoc}. The small symbols
`+' show published minima timings of visual observations of G.
Samolyk, `$\times$' denote timings of H. Peter, and small circles
correspond to minima obtained by other visual observers. Small
diamonds are timings of photographic plates on which the star is close
to minimum and small squares are photoelectric or CCD minima timings.
The dashed line corresponds to the linear ephemeris of \citet{bernard}
JD$_\mathrm{I}=2443271.578+0.47597147\times E$, used by visual
observers. The bold grey line corresponds to the found quadratic
ephemeris.}\label{fig1}
\end{figure}
\section{BS Vulpeculae}
\hvezda\ is a relatively bright, but neglected nearly contact EB with
very short orbital period $P \sim 0.476$\,d. \citet{zhu} recently
analysed the period by the `direct' period analysis method
\citep[briefly described in][]{mikbezoc} using 8177 individual
brightness observations of \hvezda\ done during the time-interval
1898-2010. They found that the period is shortening. The result proves
that \hvezda\ evolves toward contact phase.
Later we improved the method used in \citet{zhu} so that it is now
possible to combine all the above mentioned data with 46 mid-eclipse
timings derived from time-series obtained by objective photometric
measurements (photoelectric photometry, CCD photometry or acquiring of
photographic snaps series) in a time vicinity of the predicted minima
listed in \citet{zhu}. We confirmed the shortening of the orbital
period and improved its value to $\dot{P}=-2.11(5)$ ms/yr (see also
\oc\ diagram in Fig.\,\ref{fig1}).
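The connection between the fitted quadratic ephemeris and the quoted $\dot{P}$ can be sketched as follows: for $T(E)=T_0+PE+qE^2$ one has ${\rm d}P/{\rm d}E=2q$, hence the dimensionless rate $\dot{P}=2q/P$. The illustrative code below is not the analysis code; the value of $q$ used in the check is reverse-engineered from the quoted $\dot{P}=-2.11$ ms/yr and is hypothetical.

```python
def o_minus_c(t_obs, t0, period):
    # O-C residual of an observed minimum against a linear ephemeris
    # T(E) = t0 + period * E, using the nearest integer epoch.
    epoch = round((t_obs - t0) / period)
    return t_obs - (t0 + epoch * period), epoch

def pdot_ms_per_yr(q_days, period_days):
    # For a quadratic ephemeris T(E) = T0 + P*E + q*E^2 one has dP/dE = 2q,
    # so the dimensionless period change rate is dP/dt = 2q / P.
    pdot = 2.0 * q_days / period_days
    return pdot * 365.25 * 86400.0 * 1e3   # seconds/year -> ms/yr
```

With $P\sim0.476$ d, a quadratic coefficient of order $-1.6\times10^{-11}$ d reproduces a rate of about $-2.11$ ms/yr, i.e., the curvature of the O--C diagram is tiny per epoch but accumulates over decades.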
\subsection{\hvezda\ visual minima timings}
The sufficiently deep primary eclipse (in visual 0.7 mag) and short
duration (only 3.5\,h) predestined \hvezda\ to be an attractive target
for visual observers. We found in literature 104 primary mid-eclipse
timings derived from visual observations in the time-interval
1970--2003 \citep[see the list in][]{zhu}. The large number of visual
timings enables a relatively detailed discussion of their properties.
\begin{figure}
\begin{center}\includegraphics[width=0.9\textwidth]{fig2.eps}
\end{center}
\caption{\oc\ diagram of visually determined primary eclipse timings
with respect to the linear ephemeris
$\textit{JD}_\mathrm{I}=2454387.8692+0.47597003\times E$, where $E$ is
an integer (epoch). The symbols `+' show G. Samolyk's timings,
`$\times$' denote H. Peter's timings, and circles correspond to
timings obtained by other visual observers. The dashed-dot line
corresponds to the linear ephemeris of \citet{bernard}
$\textit{JD}_\mathrm{I}=2443271.578+0.47597147\times E$, used by
visual observers. The dashed curve is the real quadratic course. The
full line is the linear fit of visually determined timings; dotted
lines are the 1-$\sigma$ uncertainty of this fit.}\label{fig2}
\end{figure}
Our analysis of all 104 visual timings shows their excellent
correlation with the linear ephemeris of \citet{bernard}
JD$_\mathrm{I}=2443271.578+0.47597147\times E$, that was generally
used by visual observers for their observation schedule
(Fig.\,\ref{fig2}). The linear fit of visual observations is almost
parallel with the line of prediction, while timings of minima
systematically lag behind the minimum forecast by several minutes.
Visual observations have only a very weak correlation with the real
quadratic ephemeris. The conclusion is obviously valid for almost all
visual observers who targeted the star. It is apparent that
practically all visual observers subjectively adjusted their
observations to more or less confirm the existing ephemeris. The
systematic deflections had grown to 26 minutes by 2003.
The apparent `delay' in visual observers' timings with respect to
prediction agrees well with the heliocentric corrections for the times
of observation. Observers used geocentric time, but the predicted
minimum light time was heliocentric, which was (in the case of
\hvezda) delayed several minutes with respect of the geocentric time
on the night when observations were done.
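The size of this effect can be checked against the light-travel time across 1\,AU, about 8.3 minutes. A minimal illustrative sketch in ecliptic coordinates (sign conventions for heliocentric corrections differ between sources; the form below is one common choice and is not taken from this paper):

```python
import math

# 1 AU / c in minutes (~8.32 min): the maximum possible correction.
AU_LIGHT_MIN = 499.004784 / 60.0

def helio_corr_minutes(star_ecl_lon_deg, star_ecl_lat_deg, sun_ecl_lon_deg):
    # Heliocentric minus geocentric time in minutes; with this sign
    # convention, a star at opposition gets the maximum positive correction.
    beta = math.radians(star_ecl_lat_deg)
    dlam = math.radians(star_ecl_lon_deg - sun_ecl_lon_deg)
    return -AU_LIGHT_MIN * math.cos(beta) * math.cos(dlam)
```

Since the correction is bounded by about 8.3 minutes and varies slowly through a season, a systematic offset of "several minutes" between geocentric observing times and heliocentric predictions is entirely plausible.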
The most active visual observer of \hvezda, G. Samolyk, also published
528 brightness estimates (accessible through the AAVSO webpage)
obtained during his observations of 39 primary eclipses of \hvezda\ in
the time interval 1984--2002.
We analysed all these data in detail and arrived, among others, at the following
conclusions:
\begin{figure}
\begin{center}
\includegraphics[scale=0.76,angle=-0]{fig3.eps}
\end{center}
\caption{Samolyk's mean light curve of the \hvezda\ primary minimum,
depicting his brightness estimates in arbitrary units versus the times
of minima determined by him, looks quite authentic.}\label{fig3}
\end{figure}
\begin{itemize}
\item Observations of \hvezda\ were (due to its extremely short period)
always optimally scheduled; typically 12 estimates per eclipse were
able to describe the complete eclipse quite well. Both descending and
ascending branches of the light curve during eclipses were covered
nicely.
\item The resulting light curves were perfectly symmetric and they
corresponded very well with ideal photometrically acquired curves (see
Fig.\,\ref{fig3}). Samolyk's light curves were very smooth; their
scatter derived from the form of the light curve is only 0.055~mag!
However, the scatter of individual times of eclipses indicates that
the real scatter in estimates should be at least five times larger!
\item The scatter of timings published by Samolyk and their bad
affinity to the real quadratic ephemeris might have been caused by the
application of an inappropriate method for the derivation of light
curve minima from estimate series. Since we know the light curve
shape, we applied our own rigorous method \citep[][]{mikproc} to
determine the times of minima. Although we obtained diverse times of
minima, all of them were clustered around the \citet{bernard}
`geocentric' ephemeris. Consequently, the estimates themselves were
biased to accommodate the forecast.
\end{itemize}
It is evident from our findings that the heavily subjective character
of visual observations of short period EBs disqualifies them as a
source of unbiased information apt for fine EB period analysis. On the
other hand we could use 200 visual estimates of brightness made by S.
Piotrowski in two time intervals 1935--1939 (155 estimates) and
1945--1946 (45 estimates) \citep[data in][]{aca}, because these
estimates have a different character than later AAVSO and BBSAG visual
observations. The distribution of these old visual estimates shows
that they were obtained in `monitoring mode' without the primary aim
to obtain minima timings. Furthermore they passed our careful
analysis.
\section{Conclusion}
Our analysis of mid-eclipse timings of \hvezda\ derived from the time
series of visual observations proved the heavily subjective character
of visual observations of \hvezda. It should be noted that mid-eclipse
timings derived from observations of visual observers associated with
B.R.N.O in pre-email times were not biased by the knowledge of the
expected minima times of EBs. The reason for this is that they used
special forecasts of EB minima distributed from the centre where times
were rounded to the nearest half hour.
We conclude that the heavily subjective character of visual
observations disqualifies visual minima timings as a source of true
phase information necessary for fine eclipsing binaries period
analyses. Consequently we do not advise using these data without
having done a detailed verification of their reliability.
\section*{Acknowledgements}
We acknowledge the support of the grants GA\v{C}R 209/12/0217,
MUNI/A/0968/2009, ME10099, and LH12175. This work was partly supported
by the intergovernmental cooperation project between P.\,R.\,China and
the Czech Republic.
\bibliographystyle{ceab}
\section*{References}
\begin{itemize}
\small
\itemsep -2pt
\itemindent -20pt
\bibitem[de Bernardi \& Scaltriti(1979)]{bernard} de Bernardi, C., and
Scaltriti, F.: 1979, {\it Astronomy and Astrophysics, Suppl. Ser.} {\bf 35}, 63
\bibitem[Kwee \& van Woerden(1956)]{kwee} Kwee, K.~K., and van Woerden,
H.: 1956, {\it Bulletin of the Astronomical Institutes of the
Netherlands} {\bf 12}, 327
\bibitem[Mikul\'a\v sek et al.(2012a)]{mikbezoc} Mikul\'a\v sek, Z.,
Zejda, M., and Jan\'ik, J.: 2012a, in \emph{From Interacting Binaries
to Exoplanets: Essential Modeling Tools}, IAUS 282,
Eds. M. Richards \& I. Hubeny, 18--22 July 2011, Tatranska
Lomnica, Slovak Republic, 391
\bibitem[Mikul\'a\v sek et al.(2012b)]{mikproc} Mikul\'a\v sek, Z.,
Zejda, M., Qian, S.-B., Zhu, L., 2012b, in \emph{Proceedings of 9-th
Pacific Rim Conference on Stellar Astrophysics}, 14--20 April 2011,
Lijiang, China, ASP Conference Series, 111
\bibitem[Szafraniec(1962)]{aca}Szafraniec, R.: 1962, {\it Acta Astronomica Suppl.}
{\bf 5}
\bibitem[Zhu et al.(2012)]{zhu}Zhu, L.-Y., Zejda, M., Mikul\'a\v sek, Z., Li\v ska, J.,
Qian, S.-B., de Villiers, S.~N.: 2012, {\it Astronomical Journal}
{\bf 144}, 37.
\end{itemize}
\end{document}
\section{Introduction \label{sect_intro}}
While active galactic nuclei (AGN) are thought to be powered by
accretion onto super-massive black holes \citep{lynden71}, much
remains unknown about the physical distribution and kinematics of the
surrounding gas.
A portion of this gas is deep enough in the potential of the black
hole that its emission lines are broadened by up to several thousand
km/s \citep{antonucci93, urry95}. At distances from the black hole of
$\sim 10^{14}-10^{16}$ m \citep{wandel99, kaspi00, bentz06, bentz13},
this so-called broad line region provides a unique probe of AGN
physics and the opportunity to measure the mass of the black hole
itself, via reverberation mapping \citep{blandford82, peterson93,
peterson04}.
Traditionally, the goal of reverberation mapping is to infer a
characteristic radius of the BLR by measuring the time lag between
changes in the AGN ionizing continuum (or a proxy such as the optical
AGN continuum) and the response of the broad emission line flux. If
the time lag is due only to light travel time, then the characteristic
radius of the BLR is given by the speed of light $c$ times the time
lag $\tau$. The time lag can then be combined with a measure of the
velocity of the BLR gas to form a dimensional black hole mass estimate
referred to as the virial product, $M_{\rm vir} = c\tau \Delta v^2/G$,
where $G$ is the gravitational constant and $\Delta v$ is a measure of
the width of the broad emission line profile. The virial product is related to the
true black hole mass $M_{\rm BH}$ by the virial coefficient $f$, a
dimensionless factor of the order unity. In recent years, the
standard practice is to set the average virial coefficient by
aligning the $M_{\rm BH} -
\sigma_*$ relations for galaxies with dynamical $M_{\rm BH}$
measurements and galaxies with reverberation mapped black hole mass
measurements \citep{onken04, collin06, woo10, greene10b, graham11,
park12b, woo13, grier13b}. The scatter in the $M_{\rm BH} - \sigma_*$
relation of $\sim 0.4$ dex
\citep[e.g.][]{park12b} indicates that the uncertainty introduced by
assuming a single value of $f$ for the full reverberation mapped
sample could be comparable to the intrinsic scatter of the $M_{\rm BH}
- \sigma_*$ for quiescent galaxies.
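As an order-of-magnitude check, the virial product $M_{\rm vir}=c\tau\,\Delta v^2/G$ defined above can be evaluated directly; the numbers in the example below are illustrative only, not measurements from any particular AGN.

```python
# Evaluate the virial product M_vir = c * tau * dv^2 / G in solar masses.
C = 2.99792458e8      # speed of light, m/s
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30      # solar mass, kg

def virial_product_msun(lag_days, dv_kms):
    return C * (lag_days * 86400.0) * (dv_kms * 1e3) ** 2 / (G * M_SUN)
```

For instance, a 10-day lag with $\Delta v = 3000$ km\,s$^{-1}$ gives $M_{\rm vir}\approx1.8\times10^{7}\,M_\odot$, which the virial coefficient $f$ then scales to the true black hole mass.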
With a new generation of high-quality reverberation mapping datasets
\citep{bentz09, denney10, barth11, grier13a}, more information
has become available to probe the details of the geometry and dynamics
of the BLR. In order to exploit this improvement in the data, in the
last few years we have been developing a new technique for
analyzing reverberation mapping data \citep{pancoast11}. The
fundamental difference of our approach with respect to previous work
is that we aim to fit fully self-consistent models of the BLR
geometry and dynamics to the data, rather than trying to infer
the so-called transfer function \citep{blandford82, horne91, horne94, krolik95}.
In addition to providing quantitative
constraints on the geometry and dynamics of the BLR, this direct
modeling approach allows us to measure the black hole mass without
relying on the virial coefficient $f$; instead, the black hole mass is just one
of the parameters in our model fit. Previously we applied this
direct modeling approach to the H$\beta$\ emission line in two AGNs, Arp
151 \citep{brewer11} and Mrk 50
\citep{pancoast12}, that were observed as part of the Lick AGN
Monitoring Project campaigns in 2008 \citep{walsh09, bentz09} and 2011
\citep{barth11}, respectively.
In this paper, the first of a series on direct modeling of reverberation
mapping data, we introduce an improved and expanded version of our
simply parameterized phenomenological model of the BLR to be used with
the direct modeling approach. We demonstrate the capabilities of this
new model using simulated data and by placing constraints on the
uncertainties in traditional cross-correlation function (CCF)
analysis. In paper II of this series \citep{pancoast14}, we apply the
improved BLR model to five AGNs in the LAMP 2008 dataset. The
additional model flexibility and increased algorithm efficiency of
this new implementation are demonstrated by comparing the results for
Arp 151 by \citet{brewer11} to the new results described in paper II;
in the latter case the uncertainty in black hole mass is decreased by
more than 0.1 dex and it is possible to differentiate between inflow
and outflow kinematics.
We begin by presenting a detailed description of the improved BLR
model in Section~\ref{sect_model}. Tests to recover the model
parameters using simulated data are presented in
Section~\ref{sect_simdata_tests}. Comparison of direct modeling
results to CCF analysis and constraints on CCF lag uncertainties are
given in Section~\ref{sect_ccf_tests}. Finally, we give an overview
of the main conclusions in Section~\ref{sect_conclusions}. Throughout
this paper, all BLR model parameter values are given in the rest frame
of the AGN.
\section{The Model}
\label{sect_model}
In this section we describe our model of the BLR and the numerical
methods we use to explore its parameter space. Our model of the
BLR can be applied to any broad emission line, although it has so far
only been applied to the H$\beta$\ broad emission line in six AGNs
\citep{brewer11, pancoast12, pancoast14}. The basic methodology of
our model is also completely generalizable to any model in which the
geometry and dynamics of the BLR gas can be computed quickly enough to
enable a full exploration of the parameter space when comparing with
the data.
\subsection{Overview}
Our goal is to reconstruct the physical structure of the BLR and to measure the mass
of the central black hole from reverberation
mapping measurements. To achieve this, we describe the possible structure of
the BLR by a large number of parameters whose values we infer from the data.
In our model, the BLR is represented by a set of point particles whose positions
represent the spatial distribution of broad line emission.
If the BLR is really made up of distinct clouds, then each particle could be
associated with emission from a BLR cloud, however if the BLR is made up of a
smoother distribution of gas, then the particles are just a
Monte Carlo approximation of the density field of emission. Each particle in
our model is also associated with a velocity that depends upon
the mass of the black hole. Our model parameters for the BLR describe the
spatial distribution of the particles as well as their individual positions.
Additional parameters describe the rule by which velocities are assigned to
the particles, as well as the individual velocities themselves.
In the present implementation we ignore
gravitational interactions or fluid viscosity between particles,
and other non-gravitational forces like radiation pressure.
Given a distribution of particles with associated velocities, we can
immediately calculate how the BLR would process an input continuum light
curve, resulting in an emitted broad line spectrum (e.g. H$\beta$) that changes
(in both total flux and shape) over time. Apart from the conversion from
continuum to line flux, we assume that the particles act as
mirrors, reflecting the continuum flux towards the observer, where the
velocity of the particle determines how far the emission line
flux is shifted in wavelength space away from the systemic emission
line wavelength at rest with respect to the black hole.
There are three parts to our model of the BLR, which is formulated as
an application of Bayesian inference as described in
Section~\ref{sect_bayes}. The first part of the model is the AGN
continuum light curve model described in Section~\ref{sect_continuum}.
It is necessary to model the AGN continuum light curve because we need
to be able to evaluate the continuum light curve at arbitrary times in
order to calculate the broad line spectrum variations predicted by the model.
The second part is the ``geometry model'' (spatial distribution) of the BLR
described in Section~\ref{sect_geomodel}, which describes the spatial
distribution of the particles that make up the BLR emission.
The positions of the particles
determine their time lags, which tell us how delayed features in the broad emission
line light curve are relative to the continuum light curve. The third
part is the ``dynamical model'' of the BLR described in
Section~\ref{sect_dyn}. This describes the rule by which velocities are
assigned to the particles, and allows for scenarios such as near-circular
orbits, inflow, or outflow.
The component of a particle's velocity along the line of sight determines
which wavelength it affects in the model-predicted broad line spectrum.
Once the three parts of our
model of the BLR have been specified, we must explore the model
parameter space in order to constrain the properties of the BLR given a
specific reverberation mapping dataset, as described in
Section~\ref{sect_dnest}. Finally, we enumerate the limitations of
our current model of the BLR and future improvements in
Section~\ref{sect_lim}.
\subsection{Bayesian Inference Framework}
\label{sect_bayes}
We use the formalism of Bayesian statistics to infer the values of our
model parameters $\boldsymbol{\theta}$ given a reverberation mapping
dataset ${\bf D}$. We begin by defining the prior probability distributions of
the model parameters, $p(\boldsymbol{\theta} | I)$, which incorporate our
initial assumptions about the range of allowed parameter values and
depend upon any information $I$ that we have about the problem before
we begin. We then assign the probability distribution of the data given
a specific set of parameter values $p({\bf D} | \boldsymbol{\theta}, I)$
which tells us how the data and model parameters are related. This term is
often called the ``sampling distribution'' or, once the data are known, the
likelihood.
Finally, we can combine the prior and likelihood using Bayes'
theorem to obtain the posterior distribution of the
model parameters given the data:
\begin{equation}
\label{eqn_bayes}
p(\boldsymbol{\theta} | {\bf D}, I) \propto p(\boldsymbol{\theta} | I)\, p({\bf D} | \boldsymbol{\theta}, I).
\end{equation}
The normalization constant of the posterior in Equation~\ref{eqn_bayes},
called the {\it evidence} or the {\it marginal likelihood}, is given by
\begin{eqnarray}
p({\bf D} | I) &=& \int p(\boldsymbol{\theta} | I)\, p({\bf D} | \boldsymbol{\theta}, I) \, d^n\boldsymbol{\theta}
\end{eqnarray}
and is useful for model comparison.
For models with many parameters and in which the posterior distribution
is not of a known standard form, it is common to calculate properties of the
posterior
probability density function (PDF) by generating samples
using an algorithm such as Markov Chain Monte
Carlo (MCMC). As the number of parameters becomes large and the
likelihood function potentially multimodal, however, it can be more efficient to
use a more complex algorithm such as Diffusive Nested Sampling (DNS), as
described in Section~\ref{sect_dnest}. DNS has
the added benefit that it computes the marginal likelihood,
allowing for model selection, unlike most standard MCMC algorithms that only
generate posterior samples.
In our inference problem of modeling the BLR, the data consist of two
time series. The first is the AGN continuum light curve $\{\mathcal{Y}_i\}$
and its corresponding timestamps and measurement error
variances. The second time series is the spectrum of the broad line
measured over time, which we will denote by $\{\mathcal{D}_{ij}\}$ (the index
$i$ represents the epoch and $j$ the wavelength bin). The overall
dataset that enters into Bayes' theorem is both of these:
\begin{equation}
{\bf D} = \{ \{\mathcal{Y}_i\}, \{\mathcal{D}_{ij}\} \}.
\end{equation}
We can split the likelihood function into two parts. The likelihood for the
continuum data $\{\mathcal{Y}_i\}$ will be discussed in
Section~\ref{sect_continuum}. For the broad line data,
we use the model parameters $\boldsymbol{\theta}$ to construct a time series
of mock broad emission line spectra $m_{ij}(\boldsymbol{\theta})$ to compare to
the data using a Gaussian likelihood function:
\begin{equation}
p({\bf D} | \boldsymbol{\theta}, I) = \prod_{i,j} \frac{1}{\sigma_{ij} \sqrt{2 \pi}}
\exp \left[ -\frac{1}{2\sigma_{ij}^2} \left(\mathcal{D}_{ij} - m_{ij}(\boldsymbol{\theta})\right)^2
\right]
\end{equation}
where $\sigma_{ij}$ is the measurement uncertainty of $\mathcal{D}_{ij}$.
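As a concrete illustration, the broad-line likelihood evaluation can be sketched in a few lines of Python. This is a minimal sketch of the Gaussian likelihood above, not code from the model's actual implementation; the array names are ours.

```python
import numpy as np

def broad_line_log_likelihood(D, m, sigma):
    """Gaussian log-likelihood summed over all epochs i and wavelength
    bins j. D, m, and sigma are 2-D arrays (epoch x wavelength bin)
    holding the observed spectra D_ij, the model spectra m_ij(theta),
    and the measurement uncertainties sigma_ij."""
    resid = (D - m) / sigma
    return -0.5 * np.sum(resid**2) - np.sum(np.log(sigma * np.sqrt(2.0 * np.pi)))
```

Working with the log-likelihood rather than the product itself avoids numerical underflow when the number of epochs and wavelength bins is large.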
\subsection{Continuum Light Curve Model}
\label{sect_continuum}
Ground-based reverberation mapping campaigns use optical AGN continuum
light curves (e.g. in the {\it V} or {\it B} bands) to track the
variability of photons leading to BLR emission, since the true
ionizing photons are in the ultraviolet (UV). While it is expected
that the UV photons are created in the accretion disk closer to the
black hole than the optical photons, the time lag between variability
features in the UV and optical is unresolved \citep{peterson91,
korista95} or on the order of a day \citep{collier98}. For this
reason, we do not distinguish between UV and optical light curves in
our model of the BLR, assuming that either light curve is emitted from
a point source at the position of the black hole. While the true UV
and optical emitting regions in the accretion disk are certainly not
point-like, their distance from the black hole is negligible
compared to that of the BLR, given the uncertainties
in the mean BLR radius \citep[e.g.][]{morgan10}, suggesting that
detailed modeling of the optical or UV emitting region would not be
well-constrained by current reverberation mapping datasets.
Since our model of the BLR is many particles each reflecting the
continuum light curve to the observer with a time lag given by the
particle's distance from the continuum point source, the
continuum flux must be computed at arbitrary times within the light
curve. Generally, reverberation mapping AGN continuum light curves
are too sparsely sampled to resolve intra-day variability using simple
linear interpolation between data points. Linear interpolation also
incorrectly assumes that there is no uncertainty associated with the
interpolation process or the measurements.
For these reasons, we model the AGN continuum
light curve using a stochastic model of AGN variability, allowing us
to evaluate the light curve at arbitrarily small timescales and also to
include the continuum light curve model uncertainty into our inference
on the properties of the BLR.
We model the continuous AGN continuum light curve $y(t)$ using Gaussian
processes (GPs), which
allow us to treat the interpolated and extrapolated light curve points as additional
parameters in our model, constrained by the data $\bf D$. Most of the
information about $y(t)$ is, as one would expect, provided by the continuum
light curve data $\{\mathcal{Y}_i\}$.
With the GP assumption, the prior distribution for any finite set of
interpolated flux values is a multivariate Gaussian:
\begin{eqnarray}
p(\mathbf{y} | \mu_{\rm cont}, \mathbf{C}) &=& \frac{1}{\sqrt{(2 \pi)^n \det \mathbf{C}}}
\times\\
& &
\exp \left[
-\frac{1}{2} (\mathbf{y} - \mu_{\rm cont})^T {\bf C}^{-1} (\mathbf{y} - \mu_{\rm cont})
\right]
\end{eqnarray}
where $\mathbf{y}$ are the interpolated continuum light curve
points (i.e. evaluations of the function $y(t)$),
${\mu_{\rm cont}}$ is the long-term mean flux value of the
light curve, and ${\bf C}$ is the covariance matrix. The covariance
between any two points in the interpolated continuum light curve
depends on the time difference between them, as given by:
\begin{equation}
C(t_1, t_2) = \sigma_{\rm cont}^2 \exp \left[ - \left( \frac{| t_2 - t_1 |}{\tau_{\rm cont}} \right)^{\alpha_{\rm cont}} \right]
\end{equation}
where $\sigma_{\rm cont}$ is the long term standard deviation of the
continuum light curve, $\tau_{\rm cont}$ is the typical timescale for
variations, and $\alpha_{\rm cont}$ is a smoothness parameter between
1 and 2. Larger values of $\alpha_{\rm cont}$ lead to stronger covariance
between points in the continuum light curve, corresponding to weaker
fluctuations on small timescales. Setting $\alpha_{\rm cont} = 1$
improves the speed with which the densely sampled continuum light
curve can be calculated, as well as increasing the performance
of the MCMC\footnote{Performance is also increased by parameterising in terms
of $\sigma_{\rm cont}/\sqrt{\tau_{\rm cont}}$ rather than $\sigma_{\rm cont}$ itself.}.
For these reasons, we generally
set $\alpha_{\rm cont} = 1$, in which case our Gaussian process model
is equivalent to a continuous time first-order autoregressive process
(CAR(1)). The CAR(1) model has been found to be a good fit to AGN
variability data on similar timescales to those probed by reverberation mapping campaigns
\citep{kelly09, kozlowski10, macleod10, zu11, zu13}, although a model
that further reduces AGN variability on very short timescales provides a better fit to higher-cadence Kepler data \citep{mushotzky11}.
We interpolate and extrapolate the AGN continuum light
curve data using 1000 points, where the range of points starts before
the continuum data (usually by 50\% of the continuum data range) and
extends past the end of the continuum or line data, whichever is
later. Points extrapolated past the ends of the continuum data are
only constrained by the general behavior of the interpolated points
and thus have very high uncertainty.
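For concreteness, the covariance kernel and a draw from the GP prior can be sketched as follows. This is an illustrative sketch with made-up parameter values, not the paper's implementation; the small jitter added before the Cholesky factorization is a standard numerical convenience, not part of the model.

```python
import numpy as np

def continuum_covariance(t, sigma_cont, tau_cont, alpha_cont=1.0):
    """C(t1, t2) = sigma_cont^2 exp(-(|t2 - t1| / tau_cont)^alpha_cont).
    alpha_cont = 1 is the CAR(1) / damped-random-walk limit."""
    dt = np.abs(t[:, None] - t[None, :])
    return sigma_cont**2 * np.exp(-(dt / tau_cont)**alpha_cont)

# One realisation of the GP prior on a dense grid: an illustrative mock
# light curve with long-term mean mu_cont = 1.0, not a fit to real data.
rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 100.0, 200)          # times in days
C = continuum_covariance(t_grid, sigma_cont=0.1, tau_cont=20.0)
L = np.linalg.cholesky(C + 1e-10 * np.eye(t_grid.size))  # jitter for stability
y_mock = 1.0 + L @ rng.standard_normal(t_grid.size)
```

In the actual inference the interpolated points $\mathbf{y}$ are treated as parameters constrained by the continuum data, rather than drawn freely from the prior as here.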
\subsection{Geometry Model}
\label{sect_geomodel}
Once we have a model for the continuum light curve we need a model for
the spatial distribution of the particles, which we call the ``geometry
model''. The geometry model has flexibility in the radial distribution
of the particles as well as the angular distribution. In particular we include an
opening angle parameter that describes whether the BLR is a disk or
sphere and an inclination angle parameter that determines from what
angle the observer sees any asymmetries of the BLR. Although this is a
purely phenomenological model, it is flexible enough that it should
allow us to capture a wide variety of realistic geometries with a
moderate number of parameters.
\begin{figure}
\begin{center}
\includegraphics[scale=0.67]{figure1.pdf}
\caption{ Examples of possible radial profiles for the BLR emission
given by the Gamma distribution with $\mu = 6$, $F = 0$, and
various values for $\beta$. The distributions range from a narrow
Gaussian ($\beta < 1$) to an exponential profile ($\beta = 1$) or
steeper ($\beta > 1$).}
\label{fig_radialprof}
\end{center}
\end{figure}
We define the geometry model in two stages. First we consider the radial
distribution of the particles, and secondly we define the angular structure.
\subsubsection{Radial BLR Distribution}
The radial distribution of BLR emission density is described by a
shifted gamma distribution. The gamma distribution for a positive
variable $x$ is usually written
\begin{equation}
p(x|\alpha,\theta) \propto x^{\alpha-1} \exp \left( -\frac{x}{\theta} \right)
\end{equation}
where $\alpha$ is the shape parameter and $\theta$ is a scale parameter.
Our radial distribution is based on a shifted gamma distribution where the
lower limit is $r_0$ instead of zero. Rather than parameterizing the
distribution by $(\alpha, \theta, r_0)$, whose interpretations are not
straightforward (making priors difficult to assign),
we use a different parameterisation in terms of
three parameters $(\mu, \beta, F)$, defined as follows.
\begin{eqnarray}
\mu &=& r_0 + \alpha\theta \label{eqn_mu} \\
\beta &=& \frac{1}{\sqrt{\alpha}} \label{eqn_beta}\\
F &=& \frac{r_0}{r_0+\alpha\theta}. \label{eqn_f}
\end{eqnarray}
The parameter
$\mu$ is the mean value
of the shifted gamma distribution, $\beta$ is the standard
deviation of the gamma distribution in units of the mean $\mu$ when
$r_0=0$, and $F$ is the fraction of $\mu$ from the origin at which
the gamma distribution begins (i.e. $F$ is $r_0$ measured in units of $\mu$).
For arbitrary $r_0$, the standard deviation of the shifted gamma distribution is:
\begin{eqnarray}
\sigma_r &=& \mu \beta (1 - F).
\end{eqnarray}
Finally, we also offset the radial distribution by the Schwarzschild
radius, $R_s = 2GM/c^2$, to provide a hard limit to how close a point
particle can be to the black hole. For a $10^7 M_\odot$ black hole,
$R_s = 0.001$ light days, much smaller than the typical size of the
BLR, which is on the order of light days.
The three parameters $(\mu, \beta, F)$ control the radial profile of the
particles. To parameterise the actual distances of the particles
from the black hole, instead of using the physical distance $r$, we use a variable
$g$ (one for each particle) with a Gamma$(\beta^{-2}, 1)$ prior.
Then the actual distance $r$ of the particle is computed by:
\begin{eqnarray}
r &=& R_s + \mu F + \mu\beta^2(1-F)g.\label{eqn_calculate_r}
\end{eqnarray}
The reason for parameterizing in terms of $g$ rather than $r$ is that
Metropolis proposals are simpler. For example, a Metropolis move that changes
the parameters $(\mu, \beta, F)$ but leaves $g$ fixed
will automatically move all of the particles
appropriately.
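The radial sampling described above can be sketched directly from Equation~\ref{eqn_calculate_r}. This is a minimal sketch, not the model's implementation; the function name and defaults are ours.

```python
import numpy as np

def draw_radii(n, mu, beta, F, R_s=0.0, rng=None):
    """Draw n particle radii via the shifted-gamma parameterisation:
    r = R_s + mu*F + mu*beta^2*(1 - F)*g with g ~ Gamma(1/beta^2, 1)."""
    rng = rng or np.random.default_rng()
    g = rng.gamma(shape=beta**-2, scale=1.0, size=n)
    return R_s + mu * F + mu * beta**2 * (1.0 - F) * g

radii = draw_radii(100000, mu=6.0, beta=0.5, F=0.25,
                   rng=np.random.default_rng(1))
# For R_s = 0, the sample mean should approach mu and the sample
# standard deviation should approach sigma_r = mu * beta * (1 - F).
```

Since $\mathrm{E}[g] = \beta^{-2}$ and $\mathrm{Var}[g] = \beta^{-2}$, this construction reproduces the mean $\mu$ (for $R_s = 0$) and standard deviation $\sigma_r = \mu\beta(1-F)$ quoted above.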
\subsubsection{Opening and Inclination Angles}
\label{sect_angles}
The radial BLR distribution discussed in the previous section is
spherically symmetric; however, we can break spherical symmetry by
introducing a disk opening angle of the BLR. The opening angle is
defined as half the angular thickness of the BLR in the angular
spherical polar coordinate perpendicular to the plane of the disk. If
the BLR is a sphere then the opening angle is $\pi/2$, and if the BLR
is a thin disk then the opening angle approaches zero. Once spherical
symmetry has been broken, it is necessary to consider at what
angle an observer will view the BLR. The inclination angle is defined
as the angle between a face-on BLR geometry and the observer's line of
sight, so an edge-on disk would have an inclination angle of $\pi/2$
while a face-on disk would have an inclination angle approaching zero.
To construct a specific BLR geometry, we begin by drawing the radial
position for each particle in a flat disk in the $x$-$y$ plane
with the observer located at the positive end of the $x$-axis.
In plane polar coordinates, the radial coordinates $r$ of the point
particles are calculated using Equation~\ref{eqn_calculate_r}, and the angular
coordinates are drawn from a uniform distribution between 0 and $2\pi$.
We then puff up this flat disk
by the opening angle, first by rotating each particle around the
$y$-axis by some angle between 0 and the opening angle and then by
rotating the particle around the $z$-axis by some angle between
0 and $2\pi$ to restore axisymmetry. Next, we rotate all point
particles around the $y$-axis by 90 degrees minus the inclination
angle so that an inclination angle of zero corresponds to a face-on
BLR geometry. All of the angles used in this process are extra model
parameters.
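The sequence of rotations described above can be sketched as follows. This is a simplified sketch under our own conventions: the puff-up angle is drawn uniformly between 0 and the opening angle, as in this subsection, so the $\gamma$ weighting of Section~\ref{sect_angular} is omitted, and the helper names are ours.

```python
import numpy as np

def rotate_y(pos, a):
    """Rotate positions (n x 3) about the y-axis by angle(s) a."""
    c, s = np.cos(a), np.sin(a)
    x, y, z = pos.T
    return np.column_stack([c * x + s * z, y, -s * x + c * z])

def rotate_z(pos, a):
    """Rotate positions (n x 3) about the z-axis by angle(s) a."""
    c, s = np.cos(a), np.sin(a)
    x, y, z = pos.T
    return np.column_stack([c * x - s * y, s * x + c * y, z])

def build_geometry(r, theta_o, theta_i, rng=None):
    """Start from a flat disk in the x-y plane (observer on the +x axis),
    puff it up to half-opening angle theta_o, restore axisymmetry, then
    incline so that theta_i = 0 corresponds to a face-on BLR."""
    rng = rng or np.random.default_rng()
    n = r.size
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    pos = np.column_stack([r * np.cos(phi), r * np.sin(phi), np.zeros(n)])
    pos = rotate_y(pos, rng.uniform(0.0, theta_o, n))      # puff up the disk
    pos = rotate_z(pos, rng.uniform(0.0, 2.0 * np.pi, n))  # restore axisymmetry
    return rotate_y(pos, np.pi / 2.0 - theta_i)            # apply inclination
```

Because every step is a rotation, the radial distances drawn from the radial profile are preserved exactly.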
\subsubsection{Angular BLR Distribution}
\label{sect_angular}
We can further add asymmetry by controlling the strength of emission
from a given particle using three separate effects:
\begin{enumerate}
\item The particles are assigned non-uniform weights, depending upon
the angle between the observer's line of sight to the central source and a particle's
line of sight to the central source. The strength of this effect is controlled
by a parameter $\kappa$.
\item The parameter $\gamma$ controls the extent to which the emission is
concentrated near the outer edges of the BLR disk at the opening angle.
\item The parameter $\xi$ determines the transparency of the plane of the BLR disk.
\end{enumerate}
The first effect represents anisotropic emission from the point
particles. We use first order spherical harmonics to define a weight,
$W$, for each particle that ranges from 0 to 1 and determines
what fraction of the continuum flux is reemitted as line flux in the
direction of the observer:
\begin{equation}
W(\phi) = \frac{1}{2} + \kappa \cos \phi. \label{eqn_kappa}
\end{equation}
The one free parameter is $\kappa$, which ranges from $-0.5$ to $0.5$.
Negative values of $\kappa$ correspond to preferential emission from
the far side of the BLR from the observer and positive values
correspond to preferential emission from the near side of the BLR.
Preferential emission from the far side of the BLR could be physically
caused by BLR gas only re-emitting continuum emission back towards the
central source due to self-shielding, while preferential emission from
the near side of the BLR could be physically caused by the closer BLR
gas blocking gas farther away. The angle $\phi$ is defined to be the
angle between the observer's line of sight to the central source and
the particle's line of sight to the central source. For $\kappa
= -0.5$ and a model where the BLR is made up of spherical balls of
gas, this model is equivalent to considering broad line emission from
the area of the spheres illuminated by the central source as viewed by
the observer, like lunar phases.
The second effect is parameterized by $\gamma$ and controls the extent
to which BLR emission is concentrated near the outer faces of a disk.
This could arise for example if the parts of the BLR closer to the
plane of the accretion disk are optically thick.
The parameter $\gamma$ controls preferential emission from the outer
faces of the BLR disk by affecting how much the particle
positions are moved from an initial flat disk to between zero and the
opening angle of a thick disk. The angle for a particle's
displacement from a flat to thick disk is given by:
\begin{equation}
\theta = \arccos \left( \cos \theta_o + (1 - \cos \theta_o)\times U^\gamma \right) \label{eqn_gamma}
\end{equation}
where $\theta_o$ is the opening angle and $U$ is a random number drawn
uniformly between 0 and 1. Larger values of $U$ lead to $\theta$
values closer to $\theta_o$, so raising $U$ to a power $\gamma > 1$
(we allow $\gamma$ between 1 and 5) concentrates more particles
close to the opening angle.
The third effect represents the possibility for an obscuring medium in
the plane of the BLR to partly or completely obscure broad line
emission from the back side of the BLR and is parameterized by $\xi$.
Unlike the first effect that depends upon the inclination angle at
which an observer views the BLR, $\xi$ is roughly defined as the
fraction of particles on the far side of the BLR midplane. In
the limit of $\xi \to 0$, the entire back half of the BLR is obscured,
and the BLR geometry could range from half a disk or sphere when
$\gamma \sim 1$ to a single cone when $\gamma \sim 5$. In the limit
of $\xi \to 1$, the back half of the BLR is not obscured. Since it is
computationally inefficient to throw out particles on the back
side of the BLR, we actually just invert their position with respect
to the plane of BLR when $\xi < 1$, making the true definition of
$\xi$ be the fraction of particles in the back side of the BLR
that have {\it not} been moved to the front side.
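The three angular effects can be sketched as follows. This is an illustrative sketch: the function names are ours, and the use of $z < 0$ as the far side of the midplane is a stand-in for the geometry of the inclined disk, not the paper's exact bookkeeping.

```python
import numpy as np

def emission_weights(pos, kappa):
    """W(phi) = 1/2 + kappa*cos(phi), where phi is the angle between the
    observer's line of sight to the source (+x axis) and the particle's
    line of sight to the source."""
    cos_phi = pos[:, 0] / np.linalg.norm(pos, axis=1)
    return 0.5 + kappa * cos_phi

def puff_up_angle(n, theta_o, gamma, rng):
    """Displacement angle from the flat disk; gamma > 1 concentrates
    particles near the disk faces at the opening angle theta_o."""
    U = rng.uniform(0.0, 1.0, n)
    return np.arccos(np.cos(theta_o) + (1.0 - np.cos(theta_o)) * U**gamma)

def midplane_transparency(z, xi, rng):
    """Reflect a fraction (1 - xi) of far-side particles to the near
    side; here 'z < 0' stands in for the far side of the midplane."""
    move = (z < 0.0) & (rng.uniform(size=z.size) > xi)
    z_new = z.copy()
    z_new[move] *= -1.0
    return z_new
```

Note that $W$ stays within $[0, 1]$ for the full prior range $\kappa \in [-0.5, 0.5]$, and the displacement angle stays within $[0, \theta_o]$ for any $\gamma \geq 1$.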
\subsection{Dynamics Models}
\label{sect_dyn}
In order to make a model spectrum from our geometry of the BLR we must
also assign velocities to the particles. We consider three
different kinematic components, including bound elliptical orbits and
a combination of both bound and unbound inflow or outflow.
\begin{figure}
\begin{center}
\includegraphics[scale=0.44]{figure2.pdf}
\caption{Distributions of radial and tangential velocities, $v_r$ and $v_\phi$, for the dynamical model.
Blue points are particle velocities drawn from Gaussian
distributions centered around the point for circular orbits $(v_r,
v_\phi) = (0, v_{\rm circ})$ shown as the upper filled red circle and
centered around the points for outflowing and inflowing escape
velocity $(v_r, v_\phi) = (\pm \sqrt{2}\,v_{\rm circ}, 0)$ shown as
filled red stars. The red dotted line denotes the ellipse with
semi-minor axis $(v_r, v_\phi) = (0, \,v_{\rm circ})$ and semi-major
axis $(v_r, v_\phi) = (
\sqrt{2}\,v_{\rm circ}, 0)$ along which the radial and tangential
velocities are drawn. The outer solid red circle at a radius of $|v|
= \sqrt{2GM_{\rm BH}/r}$ denotes the velocity beyond which orbits are unbound.
The red dashed circle at a radius of $|v| = \sqrt{GM_{\rm BH}/r}$ denotes
velocities with magnitude equal to the circular velocity.}
\label{fig_newVmodel}
\end{center}
\end{figure}
\subsubsection{Elliptical Orbits}
\label{sect_ellip}
Consider a particle orbiting a point mass at a distance $r$ with
velocity $|v| = \sqrt{v_r^2 + v_\phi^2}$, where $v_r$ is the radial
velocity and $v_\phi$ is the tangential velocity in the plane of the
orbit and perpendicular to $v_r$. The tangential velocity in terms of
the angular momentum per unit mass of the particle $L$ is given by
$v_\phi = L/r$, and the radial velocity can be obtained by considering
the energy per unit mass of the particle:
\begin{equation}
E = \frac{1}{2} \left( v_r^2 + \frac{L^2}{r^2} \right) - \frac{GM_{\rm BH}}{r}. \label{eqn_m}
\end{equation}
Solving for $v_r$ we obtain:
\begin{equation}
v_r = \pm \sqrt{2 \left( E + \frac{GM_{\rm BH}}{r} \right) - \frac{L^2}{r^2}}.
\end{equation}
For circular orbits, we have the additional constraint that $v_r=0$ so
that the centripetal force of circular motion must equal the
gravitational force, giving $v_\phi^2 = GM_{\rm BH}/r$ or $v_{\rm circ} =
\sqrt{GM_{\rm BH}/r}$. Thus, the circular orbit solutions are two special
points in the $v_r - v_\phi$ plane at $(v_r, v_\phi) = (0, \pm v_{\rm
circ})$.
We consider generalizations of circular orbits to elliptical orbits by
considering distributions in $v_r$ and $v_\phi$ centered around the
circular orbit solutions. Such a model allows us to recover circular
orbits when the distributions are narrow, but also allows for highly
elliptical orbits when the distributions are on the order of $v_{\rm
circ}$. We draw the velocities of the particles from the
ellipse in the $v_r$ and $v_\phi$ plane that has semi-minor axis
$v_{\rm circ}$ at $v_r=0$ and semi-major axis equal to the escape
velocity $\sqrt{2}v_{\rm circ}$ at $v_\phi=0$, as shown in
Figure~\ref{fig_newVmodel}. The reason for drawing velocities from
around this ellipse instead of a circle with radius $v_{\rm circ}$ is
that the parameter space naturally includes the points at $v_r = \pm
\sqrt{2}v_{\rm circ}$ that correspond to the radial outflowing and
inflowing escape velocities. We will discuss these inflowing and
outflowing velocity solutions in more detail in
Section~\ref{sect_inflow}. Since reverberation mapping measurements
cannot distinguish between rotations of the BLR around the line of
sight axis, it is only necessary to define the positive $v_\phi$ side
of the $v_r - v_\phi$ plane. The radial and tangential velocities are
thus drawn from Gaussian distributions centered at $(v_r, v_\phi) =
(0, v_{\rm circ})$ with standard deviations given by
$\sigma_{\rho,\,\rm circ}$ and $\sigma_{\Theta,\,\rm circ}$, where
$\rho$ is the radial coordinate in the $v_r - v_\phi$ plane and
$\Theta$ is the angular coordinate. Circular orbits are recovered
when $\sigma_{\rho,\,\rm circ} \to 0$ and $\sigma_{\Theta,\,\rm circ}
\to 0$, whereas highly elliptical orbits approaching the escape velocity
$|v| = \sqrt{2}v_{\rm circ}$ are obtained when $\sigma_{\rho,\,\rm
circ} \to 0.1$ and $\sigma_{\Theta,\,\rm circ} \to 1.0$, the upper
limits of their priors.
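One way to realise this velocity-drawing scheme is sketched below. The mapping of $(\sigma_{\rho},\sigma_{\Theta})$ onto the scatter along and around the ellipse is one plausible reading of the description above, not the paper's exact prescription, and we work in units with $G = 1$.

```python
import numpy as np

def draw_orbit_velocities(r, M_bh, sigma_rho, sigma_theta,
                          theta_center=np.pi / 2.0, rng=None, G=1.0):
    """Draw (v_r, v_phi) near a point on the ellipse with semi-axes
    sqrt(2)*v_circ (along v_r) and v_circ (along v_phi).

    theta_center = pi/2 targets circular orbits; theta_center = 0 (pi)
    targets the outflowing (inflowing) escape-velocity point."""
    rng = rng or np.random.default_rng()
    v_circ = np.sqrt(G * M_bh / r)
    # Angular scatter along the ellipse and fractional radial scatter.
    theta = theta_center + (np.pi / 2.0) * sigma_theta * rng.standard_normal(r.size)
    rho = 1.0 + sigma_rho * rng.standard_normal(r.size)
    v_r = rho * np.sqrt(2.0) * v_circ * np.cos(theta)
    v_phi = rho * v_circ * np.abs(np.sin(theta))   # keep v_phi >= 0
    return v_r, v_phi
```

In the limit of vanishing widths this recovers circular orbits, $(v_r, v_\phi) \to (0, v_{\rm circ})$, as stated above.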
\subsubsection{Inflow and Outflow}
\label{sect_inflow}
In order to include the possibility of substantial unbound outflowing
or inflowing gas in the BLR, we allow a variable fraction of the point
particles to have elliptical, inflowing, and outflowing orbits. Since
we do not expect to find both inflowing and outflowing gas in the BLR
in the same spatial location, especially at the velocities assumed by
our model, we only allow for inflowing or outflowing particles
in addition to elliptical orbits for a specific instance of our model.
The fraction of particles with elliptical orbits is given by
$f_{\rm ellip}$, where $1 - f_{\rm ellip}$ is thus the fraction of
particles in either inflowing or outflowing orbits. Whether the
orbits are inflowing or outflowing is given by $f_{\rm flow}$, which
takes values between 0 and 1: values less than 0.5 denote inflow and
values greater than 0.5 denote outflow. Inflowing orbits are obtained around
values of $(v_r, v_\phi) = (-\sqrt{2}\,v_{\rm circ}, 0)$ while
outflowing orbits are obtained around values of $(v_r, v_\phi) =
(\sqrt{2}\,v_{\rm circ}, 0)$.
As for elliptical orbits, we draw the radial and tangential velocities
of inflowing or outflowing particles from Gaussian distributions
for $\rho$ and $\Theta$, the radial and angular coordinates of the $v_r
- v_\phi$ plane. The width of the Gaussian distributions is similarly
given by $\sigma_{\rho,\,\rm radial}$ and $\sigma_{\Theta,\,\rm
radial}$, where the widths are the same for both inflowing and
outflowing orbits. Since the Gaussian distributions are centered on
the points $v_r = \pm \sqrt{2}\,v_{\rm circ}$, about half of the
inflowing and outflowing particles will actually have bound
orbits. In order to allow for completely bound inflowing and
outflowing trajectories, we also allow the distributions centered
around $v_r = \pm \sqrt{2}\,v_{\rm circ}$ to be rotated by an angle
$\theta_e$ along the ellipse connecting $v_r = \pm \sqrt{2}\,v_{\rm
circ}$ to the circular orbit velocities $v_\phi = \pm v_{\rm circ}$.
When $\theta_e = 0$, the inflowing or outflowing orbits are centered
around the escape velocities at $v_r = \pm \sqrt{2}\,v_{\rm circ}$,
while $\theta_e \to \pi/2$ recovers bound elliptical orbits centered
around circular orbits. When $\theta_e \sim \pi/4$, we obtain mostly
bound inflowing or outflowing gas.
\subsubsection{Macroturbulent Velocities}
\label{sect_turbulence}
We also consider macroturbulent velocities of the particles in
addition to the velocities from elliptical, inflowing, or outflowing
orbits. For each particle, we calculate the magnitude of the
turbulent velocity along the observer's line of sight, given by:
\begin{equation}
v_{\rm turb} = \mathcal{N}(0,\sigma_{\rm turb}) |v_{\rm circ}|
\end{equation}
where $\mathcal{N}(0,\sigma_{\rm turb})$ is a normal distribution
centered on zero and with standard deviation $\sigma_{\rm turb}$. The
magnitude of the turbulent velocity is relative to the magnitude of
the velocity of the particle's circular orbit described in
Section~\ref{sect_ellip}, given by $v_{\rm circ}$. We can recover the
case with no additional turbulent velocities when $\sigma_{\rm turb}
\to 0$. We apply the additional macroturbulent velocity to a point
particle first by calculating the elliptical, inflowing, or outflowing
velocity and then adding $v_{\rm turb}$. This model for
macroturbulent velocities is similar to the one presented by
\citet{goad12} for the case of a disk with constant opening angle.
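The macroturbulent contribution is a one-line draw per particle; the sketch below is ours, with the function name chosen for illustration.

```python
import numpy as np

def turbulent_los_velocity(v_circ, sigma_turb, rng=None):
    """Macroturbulent line-of-sight velocity,
    v_turb = N(0, sigma_turb) * |v_circ|, added on top of the orbital
    line-of-sight velocity of each particle."""
    rng = rng or np.random.default_rng()
    return rng.normal(0.0, sigma_turb, size=np.shape(v_circ)) * np.abs(v_circ)
```

Because the draw is scaled by $|v_{\rm circ}|$, the turbulent broadening automatically tracks the local orbital speed, and $\sigma_{\rm turb} \to 0$ recovers the purely orbital case.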
\subsubsection{Relativistic Effects}
As highlighted in \citet{goad12}, relativistic effects can have a
strong influence on the shape of emission line profiles if the BLR gas is
sufficiently close to the black hole. We include two simple
relativistic effects in the calculation of particle velocities.
The first effect is the full relativistic expression for the Doppler
shift of the broad emission line due to the line of sight velocity of
the emitting BLR gas. The second relativistic effect is that of
gravitational redshift, which is caused by a photon being emitted from
deeper in a gravitational potential well than the observer of the
photon. The wavelength shift caused by gravitational redshift depends
upon the ratio of the Schwarzschild radius, $R_s = 2GM/c^2$, to the
radial distance of the emitting source. Together, the full
relativistic expression for the Doppler shift and the expression for
gravitational redshift act to shift the emitted wavelength
$\lambda_{\rm emit}$ of line emission from a particle to the
observed wavelength $\lambda_{\rm obs}$ given by:
\begin{equation}
\label{eqn_gravred}
\lambda_{\rm obs} = \lambda_{\rm emit} \frac{\sqrt{\frac{1 + \frac{v}{c}}{1 - \frac{v}{c}}}}{\sqrt{1 - \frac{R_s}{r}}}
\end{equation}
where the particle has line of sight velocity $v$ and radial distance from the
black hole $r$. Since we compare our model broad emission line
spectra to the data in wavelength space, we can include the
relativistic Doppler shift and gravitational redshift in the simulated
data by converting the simulated data from velocity to wavelength
space using Equation~\ref{eqn_gravred}.
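Equation~\ref{eqn_gravred} translates directly into code; the sketch below is ours, with velocities in km\,s$^{-1}$ and the sign convention that positive $v$ means recession.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def observed_wavelength(lam_emit, v_los, r, R_s):
    """Relativistic Doppler shift combined with gravitational redshift:
    lam_obs = lam_emit * sqrt((1 + v/c)/(1 - v/c)) / sqrt(1 - R_s/r).
    v_los is the line-of-sight velocity in km/s (positive = receding);
    r and R_s share the same length units."""
    beta = v_los / C_KMS
    doppler = np.sqrt((1.0 + beta) / (1.0 - beta))
    grav = np.sqrt(1.0 - R_s / r)
    return lam_emit * doppler / grav
```

For gas at rest far from the black hole both factors reduce to unity, while recession ($v > 0$) or proximity to the black hole ($R_s/r$ non-negligible) each shift the line redward.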
\subsubsection{Narrow Line Emission}
In addition to a model of the broad emission line, we must also
consider the superimposed narrow emission line from the narrow line
region (NLR). Since the NLR is farther from the black hole, the
narrow emission line is not expected to reverberate on timescales as
short as those for the BLR \citep[e.g.][]{peterson13}. We therefore
assume that the narrow emission line flux is constant over the
duration of a reverberation mapping dataset.
We model the narrow emission line component using a Gaussian
with line dispersion given by that of another, more
isolated, narrow emission line profile.
For example, to model the narrow
component of the H$\beta$\ emission line we use the line dispersion of the narrow [\mbox{O\,{\sc iii}}]$\lambda5007$
emission line, just red-ward of H$\beta$.
Since the width of [\mbox{O\,{\sc iii}}]$\lambda5007$
in a given reverberation mapping dataset
is due to both intrinsic line width and instrumental resolution, we use
measurements of the intrinsic line width to calculate the instrumental
resolution, which is needed to smooth the model spectra.
Differences in observing conditions can also change the instrumental resolution as a function of time,
so we calculate the line dispersion of the narrow [\mbox{O\,{\sc iii}}]$\lambda5007$ line
for each spectrum individually and include the measurements of the line dispersion
as free parameters with Gaussian priors given by the line width measurement uncertainties.
The intrinsic narrow line width of [\mbox{O\,{\sc iii}}]$\lambda5007$ is also treated
as a free parameter with a Gaussian prior given by the line width measurement uncertainties.
For objects where the
NLR is not resolved and thus there is no intrinsic line width to the narrow line
profile, the width of the narrow emission line directly gives a measurement
of the instrumental resolution.
Since subtracting narrow emission lines from broad emission lines can
introduce significant uncertainty into the spectrum, we model spectra
that have not had the narrow emission line subtracted and we include
the total flux of the narrow line as an additional free parameter to
be constrained by the data.
\subsection{Exploring Parameter Space}
\label{sect_dnest}
\begin{table*}
\begin{minipage}{160mm}
\caption{BLR model parameters and their prior probability distributions.}
\label{table_params}
\begin{tabular}{@{}cll}
\hline
Parameter & Definition & Prior \\
\hline
$\mu$ & Mean radius of the BLR radial profile Eq.~\ref{eqn_mu} & LogUniform$(1.02\times10^{-3}\, {\rm light\,\, days}, \Delta t_{\rm data})$ \\
$\beta$ & Unit standard deviation of BLR radial profile Eq.~\ref{eqn_beta} & Uniform$(0,2)$ \\
$F$ & Beginning radius in units of $\mu$ of BLR radial profile Eq.~\ref{eqn_f} & Uniform$(0,1)$\\
$\theta_i$ & Inclination angle \S~\ref{sect_angles} & Uniform in $\cos\theta_i$, $\theta_i \in (0,\pi/2)$ \\
$\theta_o$ & Opening angle \S~\ref{sect_angles} & Uniform$(0,\pi/2)$ \\
$\kappa$ & Cosine illumination function parameter Eq.~\ref{eqn_kappa} & Uniform$(-0.5,0.5)$ \\
$\gamma$ & Disk edge illumination parameter Eq.~\ref{eqn_gamma} & Uniform$(1,5)$ \\
$\xi$ & Plane transparency fraction \S~\ref{sect_angular} & Uniform$(0,1)$ \\
$M_{\rm BH}$ & Black hole mass Eq.~\ref{eqn_m} & LogUniform$(2.78\times10^4, 1.67\times10^9 M_\odot)$ \\
$f_{\rm ellip}$ & Fraction of elliptical orbits \S~\ref{sect_inflow} & Uniform$(0,1)$ \\
$f_{\rm flow}$ & Flag determining inflowing or outflowing orbits \S~\ref{sect_inflow} & Uniform$(0,1)$ \\
$\sigma_{\rho,\,\rm circ}$ & Radial standard deviation around circular orbits \S~\ref{sect_ellip} & LogUniform$(0.001,0.1)$ \\
$\sigma_{\Theta,\,\rm circ}$ & Angular standard deviation around circular orbits \S~\ref{sect_ellip} & LogUniform$(0.001,1.0)$ \\
$\sigma_{\rho,\,\rm radial}$ & Radial standard deviation around radial orbits \S~\ref{sect_inflow} & LogUniform$(0.001,0.1)$ \\
$\sigma_{\Theta,\,\rm radial}$ & Angular standard deviation around radial orbits \S~\ref{sect_inflow} & LogUniform$(0.001,1.0)$\\
$\sigma_{\rm turb}$ & Standard deviation of turbulent velocities \S~\ref{sect_turbulence} & LogUniform$(0.001,0.1)$\\
$\theta_e$ & Angle in the $v_{\phi} - v_r$ plane \S~\ref{sect_inflow} & Uniform$(0,\pi/2)$\\
\hline
\end{tabular}
\medskip
Equation numbers refer to the first equation in which the parameter is used and section
numbers refer to those subsections where the parameter is defined.
$\Delta t_{\rm data}$ is the time span between the first and last data point in the reverberation mapping dataset.
The prior is designated by the
scale in which a parameter is sampled uniformly and by the range (minimum value, maximum value).
Uniform$(0,1)$ denotes a uniform prior distribution between 0 and 1.
LogUniform$(1, 100)$ denotes a uniform prior for the log of the parameter,
or alternatively, a prior density $p(x) \propto 1/x$, between the parameter values $1$ and $100$. A log-uniform prior is used for positive parameters whose order of magnitude
is unknown.
\end{minipage}
\end{table*}
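As a concrete illustration of the priors in Table~\ref{table_params}, the snippet below draws one sample of a few of the parameters. This is a hypothetical sketch of ours, not the actual sampling code, and $\Delta t_{\rm data}$ is set to an arbitrary 100 days.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_uniform(lo, hi):
    # Draw with density p(x) proportional to 1/x on [lo, hi].
    return float(np.exp(rng.uniform(np.log(lo), np.log(hi))))

dt_data = 100.0  # assumed time span of the dataset, in days

mu      = log_uniform(1.02e-3, dt_data)   # mean BLR radius (light days)
beta    = rng.uniform(0.0, 2.0)           # radial-profile shape
F       = rng.uniform(0.0, 1.0)           # inner radius in units of mu
theta_o = rng.uniform(0.0, np.pi / 2)     # opening angle
# Inclination is sampled uniformly in cos(theta_i) for theta_i in (0, pi/2):
theta_i = np.arccos(rng.uniform(np.cos(np.pi / 2), 1.0))
M_bh    = log_uniform(2.78e4, 1.67e9)     # black hole mass (M_sun)
```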
\begin{figure*}
\begin{center}
\includegraphics[scale=0.67]{figure3.pdf}
\caption{A probabilistic graphical model
of the parameters and their influence on simulated reverberation data created
by the BLR model. Each unshaded node represents a parameter (e.g. $M_{\rm BH}$) or continuum hyperparameter
(e.g. $\boldsymbol{\theta}_{\rm cont}$) in the model and each shaded node represents a data value (e.g. $\mathcal{D}_{ij}$).
Arrays of parameters are represented with a box, which can be thought of as a for loop. The arrows
represent dependence between two nodes, where the arrow between $M_{\rm BH}$ and $\mathbf{x}_i$
corresponds to a weak dependency due to the minimum BLR radius being set by the Schwarzschild radius.
The geometry model parameters, which determine the positions of the particles $\mathbf{x}_i$,
include $\kappa$ and $\boldsymbol{\theta}_{\rm pos}$, a vector of the remaining geometry model parameters given in Table~\ref{table_params}: $\mu$, $\beta$, $F$,
$\theta_i$, $\theta_o$, $\gamma$, and $\xi$.
The dynamics model parameters, which determine the velocities of the particles $\mathbf{v}_{ij}$, include $M_{\rm BH}$ and $\boldsymbol{\theta}_{\rm vel}$,
a vector of the remaining dynamics model parameters given in Table~\ref{table_params}: $f_{\rm ellip}$, $f_{\rm flow}$, $\sigma_{\rho,\,\rm circ}$,
$\sigma_{\Theta,\,\rm circ}$, $\sigma_{\rho,\,\rm radial}$, $\sigma_{\Theta,\,\rm radial}$, $\sigma_{\rm turb}$, and $\theta_e$.
The continuum hyperparameters in vector $\boldsymbol{\theta}_{\rm cont}$ include $\mu_{\rm cont}$, $\sigma_{\rm cont}$, $\tau_{\rm cont}$, and $\alpha_{\rm cont}$.
This figure was made using daft-pgm.org.
}
\label{fig_pgm}
\end{center}
\end{figure*}
Once our model of the BLR has been defined, we can explore this
high-dimensional parameter space to constrain which parameter values
best fit a specific reverberation mapping dataset by measuring the
posterior PDFs and correlations between parameters. The full list of
all parameters in our BLR model is given in Table~\ref{table_params}
along with all of the random numbers used to assign the point particle positions
and velocities in Section~\ref{sect_model}.
A probabilistic graphical model (PGM) of the interdependence of the parameters is
shown in Figure~\ref{fig_pgm}. One way to interpret
Figure~\ref{fig_pgm} is as a recipe for making simulated reverberation
mapping data:
\begin{enumerate}
\item Generate a model continuum light curve using Gaussian processes and then
\item sample it to create a realistic continuum light curve.
\item Use BLR geometry and dynamics parameters to generate the positions and velocities of
all the particles in the BLR.
\item Finally, use those positions and velocities along with the model continuum light curve to
make a simulated time series of broad emission line spectra or integrated broad line fluxes.
\end{enumerate}
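The four steps above can be sketched as a toy simulation. The code below is our illustration with made-up parameter values, not the modeling code itself; the continuum is generated as a damped random walk, the same stochastic process underlying the CAR(1)/Gaussian-process continuum model.

```python
import numpy as np

rng = np.random.default_rng(1)

# (i) Model continuum: a damped random walk (OU process) on a fine grid.
t = np.arange(0.0, 100.0, 0.1)
tau_cont, sigma_cont, mu_cont = 40.0, 0.1, 1.0
c = np.empty_like(t)
c[0] = mu_cont
for k in range(1, len(t)):
    dt = t[k] - t[k - 1]
    a = np.exp(-dt / tau_cont)
    c[k] = mu_cont + a * (c[k - 1] - mu_cont) + \
           sigma_cont * np.sqrt(1.0 - a**2) * rng.normal()

# (ii) Sample it at realistic epochs and add measurement noise.
t_obs = np.sort(rng.uniform(20.0, 90.0, 60))
c_obs = np.interp(t_obs, t, c) + 0.01 * rng.normal(size=t_obs.size)

# (iii) Toy BLR: particle time lags drawn from a radial profile
# (here a simple exponential with a 4 light-day mean).
lags = rng.exponential(4.0, 2000)

# (iv) Line light curve = continuum averaged over the particle lags.
line = np.array([np.interp(te - lags, t, c).mean() for te in t_obs])
```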
As described in Section~\ref{sect_bayes}, we can explore
high-dimensional parameter spaces using an MCMC algorithm.
We use the diffusive nested sampling code DNest \citep{brewer11b} due to
its ability to explore correlations between parameters efficiently
in high dimensional spaces, and
because it calculates the Bayesian evidence and thus allows for
model selection. DNest works by using multiple walkers to explore
parameter space, starting from the prior and gradually adding hard likelihood
constraints.
One of the inherent difficulties of fitting real data with a simplified
model is that the model is unlikely to match the data perfectly, especially
if the error bars on the data are very small. In practice, one often obtains
unrealistically precise inferences of the model parameters because the model
contains simplifications.
However we still expect the model to capture the main features of the structure
of the BLR.
We
account for the systematic uncertainty of using a simplified model by
inflating the error bars of the data until only the macroscopic
fluctuations in the data are fit by the model. Since we use a
Gaussian likelihood function, as discussed in
Section~\ref{sect_bayes}, we can rephrase the inflation of error bars
as an increased weighting of the prior probability relative to the
likelihood when calculating the posterior probability. The weighting
term is called a ``temperature'' $T$, such that $\log({\rm posterior})
\propto \log({\rm prior}) + \log({\rm likelihood})/T$, and hence the
inflated error bars are $\sqrt{T}$ larger than the original error bars
for a Gaussian likelihood function. Generally, higher quality datasets
require larger values of $T$. Another advantage of using nested sampling
for the computation is that we can vary the temperature and check the
sensitivity of the results without having to repeat the MCMC run.
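In code, the tempering is a one-line change to the log posterior, and the equivalence with error-bar inflation holds exactly for the chi-squared term of a Gaussian likelihood. The snippet below is an illustrative sketch of ours with made-up numbers.

```python
import numpy as np

def log_posterior(log_prior, log_likelihood, T=1.0):
    # Tempered posterior: log(posterior) ∝ log(prior) + log(likelihood)/T.
    return log_prior + log_likelihood / T

def chi2(data, model, err):
    # Chi-squared term of a Gaussian log-likelihood.
    return np.sum(((data - model) / err) ** 2)

data  = np.array([1.0, 2.0, 3.0])
model = np.array([1.1, 1.9, 3.2])
err   = np.array([0.1, 0.1, 0.1])
T = 4.0

# Dividing the chi-squared term by T is the same as inflating the
# error bars by sqrt(T):
tempered = chi2(data, model, err) / T
inflated = chi2(data, model, np.sqrt(T) * err)
```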
\subsection{Limitations of the Model and Future Improvements}
\label{sect_lim}
Finally, we discuss some of the limitations of our model of the BLR
and discuss improvements to be made in the future. One of the main
limitations of our model is the simplified dynamics of the point
particles. We ignore the effects of radiation pressure, a force that
has a $1/r^2$ dependence like gravity, making it difficult to
disentangle from the black hole mass. Unfortunately, this degeneracy
of radiation pressure with black hole mass means that black hole
masses could be significantly underestimated, and that the degeneracy
can only be broken by including external information about the BLR gas
density in the model \citep[][]{marconi08, marconi09, netzer09,
netzer10}. We also ignore the self-gravity and viscosity of the BLR
gas and any interaction it has with the gas in the accretion disk.
Finally, we assume that the gas in elliptical orbits is the same gas
that could be inflowing or outflowing, when in reality the BLR could
have multiple components with different geometries and dynamics.
Another limitation to our model of the BLR is the simplified treatment
of radiative transfer, both for the ionizing and broad line photons.
We ignore any asymmetry of the ionizing photons except for an optional
preference for photons away from the BLR midplane. We also ignore
detailed radiative transfer of line photons within the BLR gas, both
locally and globally. While we have included two additional obscuration
effects in this new version of the BLR model, transparency of the disk
midplane to line photons ($\xi$) and asymmetry of the ionizing photons away
from the disk midplane ($\gamma$), these are simplifications of what is most
likely an inherently complicated problem.
Some of these limitations can be at least partially solved in future
models of the BLR. For example, CLOUDY models constrain the direction
in which line photons are emitted from individual clouds of BLR gas,
as well as the emissivity and responsivity of line emission as a
function of radius \citep[e.g.][]{ferland98,ferland13}. Using a table
of pre-computed values from CLOUDY would not only provide a more
physically-detailed local opacity to our model, but would also
constrain the radial distribution of broad line emitting gas.
By modeling the BLR using multiple broad emission lines simultaneously,
we can also start to place constraints on the underlying distribution
of gas density.
Recently \citet{li13} developed an independent code to model
reverberation mapping data using a geometry model of the BLR based on
the model of \citet{pancoast11}. They include additional flexibility
in their model by allowing for non-linear response of the broad
emission lines to incident continuum radiation. While the average
emission line response of their sample is close to linear, they find
that individual AGNs can have non-linear response, suggesting that
this effect may be important to include in future implementations of
our modeling code.
Another improvement that could be made to our model is better
treatment of the dynamics. One option could be to include separate
geometries for each dynamical component, for example a thin disk of
gas in elliptical orbits with a cone of outflowing gas. We could also
improve our treatment of outflows to include the detailed dynamics
found in simulations of disk winds or complex models of outflows. For
example, instead of assuming that outflowing gas has mainly radial
trajectories at or near the escape velocity of its present position,
we could consider the more complicated case where the gas is
accelerated to velocities on the order of the escape velocity and
where the escape velocity is defined at an initial wind launching
radius instead of the current position of the gas
\citep[e.g.][]{castor75, proga99}.
Finally, breathing of the BLR may play an important role in
determining the response of emission line flux as a function of time
\citep[see][and references therein]{korista04}. Breathing of the BLR
is where BLR emission comes from gas farther from or closer to the
central engine based on increases or decreases in the ionizing
luminosity, respectively. If the mean radius of the BLR changes
substantially over the course of a reverberation mapping campaign,
then this could have a noticeable effect on the measured time lag
and the results from direct modeling analysis.
\section{Tests with Simulated Data and Arp~151}
\label{sect_simdata_tests}
We demonstrate the capabilities of our improved model of the BLR and direct modeling code by
recovering the model parameters for two simulated reverberation mapping datasets.
By modeling the time series of emission line profiles using a geometry and dynamical model
of the BLR as well as modeling the integrated emission line light curve using a geometry model
of the BLR, we illustrate the benefits of a full spectroscopic dataset.
\subsection{The simulated datasets}
\label{sect_mockspectra}
\begin{table*}
\begin{minipage}{130mm}
\caption{Geometry and dynamics true parameter values of simulated spectral datasets and
inferred geometry and dynamics posterior median parameter values and 68\% confidence intervals.}
\label{table_simparams}
\begin{tabular}{@{}ccccc}
\hline
Geometry Model & Simulated & Simulated & Simulated & Simulated \\
Parameter & Data 1 (True) & Data 1 (Inferred) & Data 2 (True) & Data 2 (Inferred) \\
\hline
$r_{\rm mean}$ (light days) & $ 4.0$ & $4.19^{+0.21}_{-0.21}$ & $ 4.0$ & $3.54^{+0.44}_{-0.35}$ \\
$r_{\rm min}$ (light days) & $ 1.0$ & $0.85^{+0.18}_{-0.26}$ & $ 1.0$ & $0.89^{+0.22}_{-0.19}$ \\
$\sigma_{r}$ (light days) & $ 3.0$ & $3.23^{+0.30}_{-0.25}$ & $ 2.4$ & $2.39^{+0.40}_{-0.24}$ \\
$\tau_{\rm mean}$ (days) & $ 3.62$ & $3.59^{+0.15}_{-0.15}$ & $ 3.39$ & $3.30^{+0.18}_{-0.15}$ \\
$\beta$ & $ 1.0$ & $0.97^{+0.09}_{-0.09}$ & $ 0.8$ & $0.92^{+0.09}_{-0.11}$ \\
$\theta_o$ (degrees) & $40$ & $49.0^{+ 8.4}_{- 7.6}$ & $30$ & $27.3^{+11.0}_{- 8.6}$ \\
$\theta_i$ (degrees) & $20$ & $20.2^{+ 2.7}_{- 3.3}$ & $20$ & $22.8^{+10.0}_{- 6.7}$ \\
$\kappa$ & $-0.4$ & $-0.31^{+0.09}_{-0.09}$ & $-0.4$ & $-0.16^{+0.31}_{-0.24}$ \\
$\gamma$ & $ 5.0$ & $2.73^{+1.29}_{-1.19}$ & $ 5.0$ & $3.50^{+1.02}_{-1.86}$ \\
$\xi$ & $ 0.3$ & $0.31^{+0.10}_{-0.08}$ & $ 0.1$ & $0.53^{+0.37}_{-0.32}$ \\
\hline
Dynamical Model & Simulated & Simulated & Simulated & Simulated \\
Parameter & Data 1 (True) & Data 1 (Inferred) & Data 2 (True) & Data 2 (Inferred) \\
\hline
$\log_{10}(M_{\rm BH}/M_\odot)$ & $ 6.5$ & $6.42^{+0.06}_{-0.05}$ & $ 6.5$ & $6.48^{+0.08}_{-0.26}$ \\
$f_{\rm ellip}$ & $ 0.0$ & $0.07^{+0.05}_{-0.04}$ & $ 1.0$ & $0.84^{+0.12}_{-0.50}$ \\
$f_{\rm flow}$ & $ 0.0$ & $0.25^{+0.18}_{-0.17}$ & -- & $0.32^{+0.28}_{-0.22}$ \\
$\theta_e$ (degrees) & $ 0.0$ & $ 7.9^{+ 7.1}_{- 5.0}$ & -- & $70.2^{+16.7}_{-38.6}$ \\
$\sigma_{\rm turb}$ & $ 0.0$ & $0.024^{+0.055}_{-0.021}$ & $ 0.0$ & $0.009^{+0.026}_{-0.007}$ \\
\hline
\end{tabular}
\medskip
The columns with (True) give the true parameter values for the simulated datasets and the columns with (Inferred) give
the inferred parameter values and their uncertainties. True parameter values with -- are unimportant for that specific
simulated dataset.
\end{minipage}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.55]{figure4.png}
\caption{Simulated spectral time series 1 (top row) and 2 (bottom row). First (leftmost)
column shows the integrated line light curve in green and the continuum light curve in blue.
Second column shows the spectral time series over the wavelength range of the emission line
as a function of time series epoch. Third column shows the transfer function as a function of time lag
and wavelength. Fourth and fifth columns show the edge-on and face-on views,
respectively, of the BLR geometries for each simulated dataset (the observer views the origin
from the positive x-axis).}
\label{fig_mockspectra}
\end{center}
\end{figure*}
In order to generate realistic simulated reverberation mapping datasets, we use the LAMP 2008
dataset of H$\beta$\ emission for Arp 151 \citep{walsh09, bentz09} to determine the sampling cadence, flux errors,
instrumental smoothing, and approximate scale of the BLR.
The Arp 151 dataset includes a {\it B}-band continuum light curve and a time series of H$\beta$\ emission
line profiles, where the broad and narrow H$\beta$\ flux is isolated from the spectrum using spectral
decomposition techniques as described by \citet{park12a}.
As described in Section~\ref{sect_dnest}, the simulated datasets
are created by first generating a model of the Arp 151 continuum light curve using Gaussian
processes and sampling that model continuum light curve with the same cadence as for Arp 151.
We then add Gaussian noise to the continuum light curve using the error vector of the Arp 151
light curve. Next we fix the BLR geometry and dynamics model parameters to the values
found in Table~\ref{table_simparams} and sample the H$\beta$\ emission line profile at the times given by
the Arp 151 spectral dataset. Finally, we add Gaussian noise to the model spectra based on the
spectral errors in the Arp 151 dataset. In order to account for the fact that real reverberation
mapping datasets are likely more complicated than our model of the BLR assumes, we inflate both the spectral error bars
and the added Gaussian noise of the simulated dataset by a factor of three compared to the Arp 151 dataset,
to obtain more realistic uncertainties on the inferred model parameters.
To reduce numerical noise in the simulated spectra to less than the uncertainty in the spectral fluxes,
we use 2000 particles and assign each one ten independent velocity values.
The width of the narrow line component of H$\beta$\ is modeled using the line dispersion of the narrow [\mbox{O\,{\sc iii}}]$\lambda5007$
emission line from the Arp 151 dataset, calculated for each epoch of spectroscopy.
The instrumental resolution is then measured by comparing the measured line dispersion for [\mbox{O\,{\sc iii}}]$\lambda5007$
with its intrinsic line width as calculated by \citet{whittle92}.
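Assuming Gaussian broadening that adds in quadrature, this comparison amounts to the following. The helper is a hypothetical sketch of ours, and the numbers are illustrative, not the Arp 151 values.

```python
import numpy as np

def instrumental_dispersion(sigma_obs, sigma_intrinsic):
    """Instrumental line dispersion, assuming Gaussian broadening adds
    in quadrature: sigma_obs^2 = sigma_intrinsic^2 + sigma_inst^2."""
    return np.sqrt(sigma_obs**2 - sigma_intrinsic**2)

# e.g. measured [OIII] dispersion of 250 km/s and intrinsic width of
# 200 km/s imply an instrumental dispersion of 150 km/s:
sigma_inst = instrumental_dispersion(250.0, 200.0)
```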
The simulated datasets are based on the geometry and dynamics inferred for the LAMP 2008
dataset in paper II as shown in Table~\ref{table_simparams}, with small mean radii for the BLR of $4$ light days, close to
exponential radial profiles with $\beta \sim 1$, substantial width to the BLR of $\sim 2-3$ light days,
thick disks with opening angles of $30-40$ degrees, close to face-on inclination angles of 20 degrees,
preferential emission from the far side of the BLR ($\kappa = -0.4$) and the edges of the disk ($\gamma = 5$),
and mostly opaque mid-planes ($\xi = 0.1-0.3$). The black hole masses are also chosen to be similar
to the LAMP 2008 sample with $M_{\rm BH} = 10^{6.5} M_\odot$ and each of the simulated datasets
is dominated by either elliptical orbits or inflowing orbits.
The differences between the simulated datasets can also
be easily seen in Figure~\ref{fig_mockspectra}, which shows not only the continuum,
line, and spectral timeseries, but also the transfer functions and geometries of the BLR.
The simulated spectral datasets consist of the following:
\begin{enumerate}
\item A thick, wide disk with an exponential profile and dynamics dominated by
inflowing orbits.
\item A thinner, narrower disk, with a radial profile between a Gaussian and exponential and
dynamics dominated by elliptical orbits.
\end{enumerate}
The continuum light curve interpolation using Gaussian
processes is also held constant for both simulated datasets, although the random noise added
to each realistically sampled continuum light curve is different.
While not an issue for simulated emission line profiles, real reverberation mapping data
must contend with ambiguity in multiple spectral components overlying the broad emission
line profile. For example, with H$\beta$\ we not only have possible overlap of the red wing with the
narrow [\mbox{O\,{\sc iii}}] emission lines and the blue wing with He\,{\sc ii}, but there may also be substantial overlap with Fe\,{\sc ii} broad line emission. Blending between multiple broad components is especially important to
disentangle because the different broad emission lines will be generated in BLR gas at different radii from the
black hole and blending could confuse the dynamical modeling results.
In order to isolate any single broad emission line profile, it is
necessary to apply a method of spectral decomposition that will remove most of the ambiguity in
overlapping spectral components.
\subsection{Recovery of model parameters: spectral datasets}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.85]{figure5.pdf}
\caption{Inferred model parameters for simulated spectral dataset 1. The true parameter values are given by
the vertical dashed red lines for those cases where the true value affects the shape of the simulated spectral dataset.}
\label{fig_posteriors1}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.85]{figure6.pdf}
\caption{The same as Figure~\ref{fig_posteriors1} for simulated spectral dataset 2.}
\label{fig_posteriors2}
\end{center}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{figure7.pdf}
\caption{Marginal posterior PDFs and correlations between parameters for simulated dataset 2,
including the fraction of elliptical orbits ($f_{\rm ellip}$), the flag determining inflowing or outflowing orbits ($f_{\rm flow}$),
and the angle in the $v_\phi - v_r$ plane ($\theta_e$).}
\label{fig_corner2}
\end{center}
\end{figure}
As a first test of our direct modeling code and BLR model, we attempt
to recover the true parameter values of the two simulated spectral
datasets described in Section~\ref{sect_mockspectra}. We assume the
same instrumental resolutions as a function of time that are used to generate the simulated
datasets and use 2000 particles and assign ten independent
velocities to each one. Since we add Gaussian noise to the simulated
datasets, we do not expect to recover every parameter of the BLR
exactly. In addition, certain BLR geometries and dynamics make it difficult to
constrain certain parameters. For example, when the majority of particles
are in elliptical orbits, the fraction of particles in inflowing or outflowing orbits
may not be well constrained. Similarly, a nearly face-on, very thin
disk makes it difficult to constrain the parameters $\kappa$,
$\gamma$, and $\xi$, since these parameters modulate the relative line emission
across the small vertical extent of such a disk.
The inferred posterior PDFs for the BLR geometry and dynamics model
parameters are shown in Figures~\ref{fig_posteriors1} and
\ref{fig_posteriors2} for simulated
datasets 1 and 2, respectively. The true parameter values for the
simulated datasets are shown by vertical red dashed lines for comparison, and
in the cases where the true parameter value does not matter (e.g. when
the dynamics are entirely dominated by elliptical orbits so there is
no true value of $f_{\rm flow}$) no red line is given. Overall, the
modeling code is able to recover the true parameter values to within
reasonable uncertainties, as listed in
Table~\ref{table_simparams}.
Specifically, we constrain the mean radius of the BLR to within 0.5 light days uncertainty,
the mean time lag to within 0.2 days uncertainty, and the inclination and opening angles
to within $\sim 10$ degrees. The geometry parameters that add asymmetry to the BLR
are more difficult to constrain: $\kappa$ and $\xi$ are well constrained for simulated dataset 1, while
none of $\kappa$, $\gamma$, or $\xi$ is well determined for simulated dataset 2.
We also constrain
$\log_{10}(M_{\rm BH}/M_\odot)$ to $0.05 - 0.25$ dex uncertainty, where the variation
comes mainly from larger correlated uncertainties with the inclination and
opening angles for simulated dataset 2. The dynamics are also well-recovered for both
simulated datasets, with a clear preference for inflow in simulated dataset 1 and for elliptical
orbits centered around the circular orbit values in simulated dataset 2. A clearer picture of
the preference for elliptical orbits for simulated dataset 2 can be seen in Figure~\ref{fig_corner2},
which shows the correlations between $f_{\rm ellip}$, $f_{\rm flow}$, and $\theta_e$.
Specifically, for values of $\theta_e$ approaching 90 degrees, the distribution of inflowing or outflowing
orbits becomes identical to the distribution for elliptical orbits centered around the circular orbit value in
the $v_\phi - v_r$ plane. This means that when $\theta_e \sim 90$ degrees, although $f_{\rm ellip}$ and $f_{\rm flow}$
are mostly unconstrained, the velocity distribution for the particles is very similar to that of $f_{\rm ellip} \sim 1$.
In general, these two simulated spectral datasets show that we can expect to
obtain substantial constraints on the geometry and dynamics of the BLR for reverberation
mapping datasets similar in quality to LAMP 2008. The potential constraints on the black hole mass
are also promising, although they depend upon the geometry of the BLR, specifically the precision with which
we can measure the inclination and opening angles.
\subsection{Recovery of model parameters: integrated line datasets}
\label{sect_linetest}
For those cases where a full spectroscopic reverberation mapping
dataset is not available, we can apply a geometry-only model of the
BLR and reproduce integrated emission line flux light curves. We test
whether this approach provides constraints on the geometry of the BLR
that are comparable to the full geometry plus dynamical modeling
problem using the simulated datasets described in
Section~\ref{sect_mockspectra} and shown in the left panel of
Figure~\ref{fig_mockspectra}.
We find that the mean time lag and mean radius are well constrained
with geometry-only modeling. The mean and median time lag inferred for each
simulated dataset are given in Table~\ref{table_lag_compare}, along
with the true mean and median lag values and the value measured by CCF analysis.
The inferred mean time lag is not only accurate, but its
uncertainty from geometry-only modeling, $\sim 0.25$ days, is
roughly half that of the CCF time lag.
This shows that geometry-only modeling is a promising tool for
measuring time lags. The mean radius is inferred with slightly larger
uncertainties to be $3.58^{+1.18}_{-0.97}$ light days for simulated
dataset 1 and $2.90^{+0.97}_{-0.24}$ light days for simulated dataset 2, while the true mean
radius is $4$ light days.
Unfortunately the other geometry model parameters are not as well
constrained. The parameters $\gamma$ and $\xi$ are completely
unconstrained for both of the simulated datasets, and $\theta_o$,
$\theta_i$, $\beta$, $F$, and $\kappa$ are mostly unconstrained.
Generally there is a slight preference for a specific value of
$\theta_o$, $\theta_i$, $\beta$, $F$, and $\kappa$, but none or almost
none of the parameter space is ruled out. These results for
geometry-only modeling suggest that a full spectroscopic reverberation
mapping dataset is needed to constrain the geometry of the BLR, since
otherwise there are too many degeneracies between model parameters to
infer anything other than the mean time lag and mean radius
consistently.
\subsection{Comparison with JAVELIN}
\begin{table}
\caption{Comparison of BLR geometry modeling, JAVELIN, and CCF lag measurements.}
\label{table_lag_compare}
\begin{tabular}{@{}ccc}
\hline
Lag (days) & Sim Data 1 & Sim Data 2 \\
\hline
True mean lag & $3.62$ & $3.39$ \\
True median lag & $2.56$ & $2.77$ \\
$\tau_{\rm mean}$ & $3.36^{+0.20}_{-0.15}$ & $3.29^{+0.23}_{-0.17}$ \\
$\tau_{\rm median}$ & $2.61^{+0.25}_{-0.21}$ & $3.10^{+0.17}_{-0.18}$ \\
$\tau_{\rm JAVELIN}$ & $2.94^{+0.13}_{-0.12}$ & $3.21^{+0.13}_{-0.14}$ \\
$\tau_{\rm cen}$ & $3.70^{+0.50}_{-0.48}$ & $3.62^{+0.56}_{-0.40}$\\
\hline
\end{tabular}
\medskip
$\tau_{\rm mean}$ and $\tau_{\rm median}$ are the mean and median time lags inferred from BLR geometry modeling, $\tau_{\rm JAVELIN}$ is the time
lag measured by JAVELIN, and $\tau_{\rm cen}$ is the center-of-mass lag measured from the CCF.
\end{table}
Recently, \citet{zu11} developed another method for measuring the time lag
in reverberation mapping data using integrated emission line light
curves. This method has been implemented in an
open-source code called JAVELIN written in Python.\footnote{Download
JAVELIN here: https://bitbucket.org/nye17/javelin} JAVELIN works by
using a top-hat transfer function with two parameters, a mean lag and
a width of the top hat. The continuum light curve in JAVELIN is
interpolated using a CAR(1) model, which is equivalent to the
continuum model implemented here. The parameter space of the
continuum light curve and transfer function models is sampled using
MCMC, providing posterior PDFs for the model parameter values.
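The top-hat model can be sketched as follows. This is our illustration, not JAVELIN's actual API; the function name and arguments are hypothetical.

```python
import numpy as np

def tophat_line_lightcurve(t_line, t_cont, f_cont, lag, width, A=1.0):
    """Line light curve from convolving the continuum with a top-hat
    transfer function centred on `lag` with total width `width`."""
    taus = np.linspace(lag - width / 2.0, lag + width / 2.0, 200)
    # Average the (interpolated) continuum over the top hat at each epoch.
    delayed = np.interp(t_line[:, None] - taus[None, :], t_cont, f_cont)
    return A * delayed.mean(axis=1)
```

For a linearly rising continuum this returns the continuum shifted back by exactly the mean lag, since the top hat is symmetric about `lag`.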
We can compare recovery of the time lag using BLR geometry modeling of
integrated emission line light curves to the results from JAVELIN.
For simulated dataset 1, we measure a mean lag of $\tau_{\rm JAVELIN}
= 2.94^{+0.13}_{-0.12}$ days and a mean width of the top-hat transfer function of
$w = 7.33^{+0.26}_{-0.30}$ days using JAVELIN. This can be compared to
the true mean lag of 3.62 days and the true median lag of 2.56 days for simulated dataset 1 to see that
the mean lag measured by JAVELIN is between the true mean and median lags.
For simulated dataset 2, we measure $\tau_{\rm
JAVELIN} = 3.21^{+0.13}_{-0.14}$ days and $w = 5.26^{+0.82}_{-0.63}$ days.
Again, the mean lag measured by JAVELIN is between the true mean lag of 3.39 days
and the true median time lag of 2.77 days, although closer to the mean lag.
The tendency for the time lag measured by JAVELIN to fall closer to the true
mean or median lag is due to the shape of the transfer function;
in very asymmetric transfer functions, the mean and median time lag are
increasingly discrepant, with JAVELIN more sensitive to the true median
time lag for very asymmetric transfer functions.
While the tendency of JAVELIN to measure a time lag
ranging between the true mean and median time lags
may appear to complicate its interpretation,
an uncertainty of $\sim 1$ day from the difference
between the true mean and median lags is comparable to the
uncertainty introduced by additional assumptions, as discussed
in Section~\ref{sect_ccf_tests}, when using time lag measurements to estimate
the mean radius of the BLR or to measure the black hole mass.
These comparisons suggest that JAVELIN is an excellent resource for
measuring the time lag even if the JAVELIN lag uncertainties do not
reflect the uncertainty introduced by asymmetric transfer functions.
However, to constrain more than the time lag,
more flexible modeling of the transfer function must be done.
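The size of this mean--median discrepancy is easy to quantify for a concrete case: for an exponential transfer function, $\Psi(\tau) \propto \exp(-\tau/\tau_0)$, the mean lag is $\tau_0$ while the median is only $\tau_0 \ln 2 \approx 0.69\,\tau_0$. The snippet below is an illustrative sketch of ours.

```python
import numpy as np

# Sample lags from an exponential transfer function with scale tau0.
# The more asymmetric the transfer function, the larger the gap
# between the mean and median lags.
tau0 = 4.0
tau = np.random.default_rng(2).exponential(tau0, 200_000)
mean_lag, median_lag = tau.mean(), np.median(tau)
```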
In comparison, the CCF lag measurements for the two simulated datasets
agree with the true mean lag values, owing to their larger uncertainties. Unlike the
JAVELIN lags, the CCF lag measurements do not agree more closely with the true median
lag values than with the true mean lag values for more asymmetric transfer functions.
The quoted error bars on the CCF lag values, $\tau_{\rm cen}$, in
Table~\ref{table_lag_compare} are calculated by drawing
a random subset of the line and continuum light curve points, with the
same number of random draws as there are points in the original light curves. For points in
the light curves that are drawn $N$ times, the flux error is reduced by $\sqrt{N}$.
Finally, the randomly drawn light curve fluxes are modified by adding
random Gaussian noise given by the flux errors.
This is similar to the ``flux randomization"/``random subset selection" (FR/RSS)
approach described in \citet{peterson98} except
the FR/RSS approach throws out any redundant points in the light curve instead
of reducing the flux errors by $\sqrt{N}$, resulting in slightly larger
uncertainties in the CCF lag.
The CCF time lag is measured for 1000 iterations of this sequence and
we quote the median and 68\% confidence intervals of the CCF time lag distributions.
For the two simulated
datasets tested here with data quality comparable to the LAMP 2008 dataset for
Arp 151 \citep{bentz09}, the error bars are $\sim 0.5$ days, or $\sim 14$\% the
value of $\tau_{\rm cen}$.
This comparison suggests that while CCF analysis may not
give the most precise measurement of the mean or median time lag,
the CCF lag uncertainties likely include much of
the systematic uncertainties from an unknown transfer function.
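The resampling scheme used for the CCF error bars can be sketched as follows. The helper is hypothetical (ours, not the analysis code), and the CCF lag measurement itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def resample_lightcurve(t, f, err):
    """One flux-randomized, resampled realization of a light curve:
    draw as many epochs (with replacement) as there are data points,
    reduce the error of an epoch drawn N times by sqrt(N), then
    perturb the fluxes by Gaussian noise at the (reduced) error level."""
    idx = rng.integers(0, len(t), len(t))
    uniq, counts = np.unique(idx, return_counts=True)
    e_r = err[uniq] / np.sqrt(counts)
    f_r = f[uniq] + e_r * rng.normal(size=uniq.size)
    return t[uniq], f_r, e_r
```

Repeating this for both light curves, remeasuring the CCF lag each time, and collecting 1000 such realizations yields the quoted median and 68\% confidence interval.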
Finally, we also consider the effects of detrending the light curves
before calculating the CCF lag. Detrending can improve
the shape of the CCF when there are strong long-term trends that can
be removed by subtracting a linear fit to the light curves \citep{welsh99}.
Since our simulated data do not contain strong long-term trends,
detrending the light curves should have minimal impact on the
measured CCF lags. We confirm this by subtracting a linear fit
to the simulated continuum light curves from both the continuum
and line light curves. Due to the difference in length between the
continuum and line light curves, fitting the continuum and line light
curves with linear fits separately destroys the correlation between
the light curves. When we use a linear fit to the continuum light curve
to detrend both light curves we obtain CCF lag measurements
for simulated dataset 1 of $\tau_{\rm cen} = 3.37^{+0.48}_{-0.37}$ days
and for simulated dataset 2 of $\tau_{\rm cen} = 3.38^{+0.47}_{-0.37}$ days,
which agree to within the uncertainties with the un-detrended CCF lag values.
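A minimal sketch of this detrending choice, with invented light curves of different lengths sharing a common signal and linear trend: the trend is fit to the continuum alone, and that same fit is subtracted from both curves, preserving their correlation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative light curves of different lengths: a shared variability
# signal plus a common long-term linear trend.
t_cont = np.linspace(0.0, 100.0, 120)
t_line = np.linspace(10.0, 80.0, 60)          # line light curve is shorter
signal = lambda t: np.sin(2 * np.pi * t / 23.0)
trend = lambda t: 0.02 * t
f_cont = signal(t_cont) + trend(t_cont) + 0.02 * rng.standard_normal(120)
f_line = signal(t_line) + trend(t_line) + 0.02 * rng.standard_normal(60)

# Detrend BOTH curves with the linear fit to the CONTINUUM only, so the
# correlation between the two curves is preserved.
coeff = np.polyfit(t_cont, f_cont, 1)
f_cont_dt = f_cont - np.polyval(coeff, t_cont)
f_line_dt = f_line - np.polyval(coeff, t_line)

# The detrended curves remain strongly correlated at matched epochs.
r = np.corrcoef(np.interp(t_line, t_cont, f_cont_dt), f_line_dt)[0, 1]
print(f"correlation after detrending: {r:.3f}")
```

Fitting each curve separately would remove different trends from each, which is what destroys the correlation in the length-mismatched case.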
\subsection{Dynamical modeling without a full spectral dataset}
As shown in Section~\ref{sect_linetest}, a spectroscopic dataset
offers substantially more information about the BLR for direct
modeling. Here we explore an intermediate case, where the available
data consist of the usual continuum light curve, an integrated
emission line light curve, and a mean spectrum. Since the mean
spectrum contains some information about the kinematics of the BLR, we
can model this dataset using the fully dynamical model of the BLR.
However, with only the mean spectrum, this dataset cannot constrain
the time lag as a function of velocity or wavelength, as possible for a full
spectroscopic dataset.
In order to provide a test of this intermediate dataset case that is
as realistic as possible, we use the LAMP 2008 dataset for Arp 151. A
description of the dataset can be found in paper II. In the analysis
of this test, we focus on the differences in inferred parameter values
between this test and the full dynamical modeling results presented in
paper II. In general, the modeling results for the full spectroscopic
dataset and for the intermediate dataset are fully consistent, but the
uncertainty on the inferred model parameter values is much larger for
the intermediate dataset. For example, the black hole mass is inferred
to have a posterior PDF with a long tail at high masses, giving
$\log_{10}(M_{\rm BH}/M_\odot) = 6.74^{+0.66}_{-0.13}$ compared
to the value from Paper II of $\log_{10}(M_{\rm BH}/M_\odot) = 6.62^{+0.10}_{-0.13}$.
Similarly, the uncertainty in $\theta_i$, $\theta_o$, and $\kappa$ is larger by at least
a factor of three, the uncertainty in $\xi$ is larger by at least
a factor of two, and $\gamma$ is completely
undetermined for the intermediate dataset. The two marginally
consistent results are the mean radius and mean lag, which are both
substantially larger for the intermediate dataset and have
uncertainties at least 10 times larger than for the full
spectroscopic dataset. This is due to a preference for $\beta \to 2$, corresponding
to heavy-tailed radial distributions where the median radius and median lag
are more consistent measurements of the characteristic size of the BLR.
Overall, this test suggests that considerable
information about the BLR can be inferred from the mean line profile,
but the constraints on BLR geometry and dynamics parameters are
significantly better when the full spectroscopic dataset is used.
Finally, while this intermediate dataset allows for measurement of the
black hole mass, the mass uncertainty cannot be reduced below the $\sim
0.4$ dex scatter in the $f$ factor, owing to a tail in the posterior PDF at high masses.
\section{Comparison with cross-correlation analysis}
\label{sect_ccf_tests}
We can compare the results of direct modeling to the standard
reverberation mapping analysis, which uses the cross-correlation function
(CCF) to measure time lags and the dispersion or FWHM of the broad
emission line to measure a characteristic velocity of the BLR. In
addition to providing a sanity check on our direct modeling results,
such a comparison allows us to explore some of the uncertainties
involved in standard reverberation mapping analysis. First, we
consider the time lag traditionally measured using the CCF, how it
compares to a measurement of the mean radius and how sampling of the
line light curve and variability of the continuum light curve affect
its measurement. Second, we consider the combination of the CCF lag
with measurements of the emission line width to explore the
uncertainty in black hole masses measured using the virial product.
\subsection{Comparing the time lag and mean radius}
\label{sect_rmean_lag}
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{figure8.pdf}
\caption{Difference between the mean radius and mean lag for BLR models drawn randomly
from the prior with $\mu = 4$ light days. The distribution is asymmetric because the parameter
$\xi$, the BLR plane transparency fraction, only shortens the mean lag compared to the mean
radius, creating an excess of models where the mean radius is larger than the mean lag.}
\label{fig_radius_lag_test}
\end{center}
\end{figure}
One of the main assumptions in the traditional analysis is that the
time lag measured from CCF analysis is related to some characteristic
radius of the BLR. We explore the validity of this assumption by
comparing the mean radius and the mean time lag in our geometry model
of the BLR. We hold the mean radius fixed at $\mu = 4$ light days and
allow the other geometry model parameter values to sample their priors
as listed in Table~\ref{table_params}, with the exception of the
inclination angle, which we constrain to vary between zero (face-on)
and 45 degrees. The results of this comparison are shown in
Figure~\ref{fig_radius_lag_test} for 200,000 samples. The difference
between the mean radius, $r_{\rm mean}$, and the mean lag, $\tau$, is
generally greater than zero, meaning that the mean lag (in days) is
usually shorter than the mean radius (in light days). This is due to
the geometry parameter $\xi$ that allows the midplane of the BLR to be
transparent or opaque, since a BLR midplane that is not transparent
will result in fewer particles with longer lags and hence a
tendency for the mean lag to be smaller than the mean radius. The
mean of $r_{\rm mean} - \tau$ is 0.53 light days and the standard
deviation of the distribution is 0.80 light days. This suggests that
the uncertainty in using the time lag as a measurement of the mean
radius is relatively small, on the order of the CCF time lag
uncertainty typically quoted for high-quality reverberation mapping
data of $\sim 1$ day.
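The geometric origin of this offset can be illustrated with a toy Monte Carlo (much simpler than the full BLR model): a particle at radius $r$ and angle $\theta$ from the line of sight has lag $\tau = r(1-\cos\theta)/c$, so a transparent isotropic shell has mean lag equal to its mean radius, while hiding the far side (e.g. an opaque midplane viewed face-on) preferentially removes the longest lags.

```python
import numpy as np

rng = np.random.default_rng(7)
n, r = 200_000, 4.0                       # shell radius in light days

# Isotropic points on a shell: cos(theta) is uniform in [-1, 1], where
# theta is the angle between the particle and the line of sight.
cos_th = rng.uniform(-1.0, 1.0, n)
lag = r * (1.0 - cos_th)                  # lag in days (c = 1 lt-day/day)

print(f"transparent shell: mean radius {r:.2f}, mean lag {lag.mean():.2f}")

# Opaque midplane seen face-on: particles on the far side (cos_th < 0)
# are hidden, removing the longest lags and shrinking the mean lag.
near = lag[cos_th >= 0.0]
print(f"opaque midplane:   mean radius {r:.2f}, mean lag {near.mean():.2f}")
```

For the transparent shell the mean lag matches the mean radius; hiding the far side roughly halves it, in the same sense as the asymmetry driven by $\xi$ in the full model.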
\subsection{The effects of line light curve sampling}
\label{sect_sampling_test}
\begin{table}
\caption{Geometry model parameter values of simulated emission line light curves used in the comparison of
direct modeling with the cross-correlation analysis approach.}
\label{table_geo_mod_true}
\begin{tabular}{@{}cccccccc}
\hline
Mock & $\tau_{\rm mean}$ & $\theta_i$ & $\theta_o$ & $\kappa$ & $\beta$ & $F$ & $\xi$\\
Line & (days) & (deg) & (deg) & & & & \\
\hline
1 & 3.69 & 10 & 25 & -0.25 & 1.0 & 0.2 & 0.5 \\
2 & 3.77 & 10 & 25 & 0.5 & 0.11 & 0.5 & 1 \\
3 & 4.01 & 10 & 90 & 0.0 & 0.11 & 0.99 & 1 \\
4 & 5.34 & 10 & 90 & -0.5 & 0.11 & 0.99 & 1 \\
5 & 4.00 & 0 & 0.5 & 0.0 & 0.11 & 0.99 & 1 \\
\hline
\end{tabular}
\medskip
All simulated datasets were created with a mean radius, $\mu$, of 4 light days and
with $\gamma = 1$.
\end{table}
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{figure9.pdf}
\caption{Simulated integrated emission line datasets including the AGN continuum light curve in solid
blue and integrated H$\beta$\ line light curves in solid red, green, cyan, black, and dashed red. The
continuum light curve is based on the LAMP 2008 light curve of Arp 151 \citep{walsh09}. The
simulated H$\beta$\ line light curves correspond to five different BLR geometries, as shown in Figure \ref{fig_mockgeo}. }
\label{fig_mocklc}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.62]{figure10.pdf}
\caption{Geometries of the BLR (left panels) and corresponding transfer functions (right panels)
of the simulated reverberation mapping datasets shown in Figure \ref{fig_mocklc}. Top to bottom BLR
geometries: face-on wide disk, face-on donut, spherical shell, spherical shell with preferential
emission from the back of the sphere, and a face-on thin ring. }
\label{fig_mockgeo}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.54]{figure11.pdf}
\caption{The CCF lag $\tau_{\rm cen}$ as a function of the ratio of $\sigma_{\rm noise}$ to
the RMS variability of the line light curve and versus the cadence as given by the sampling
fraction of the mean radius. Top to bottom: simulated line dataset 1, 2, 3, 4, and 5.
The horizontal blue lines show the true mean time lags for each simulated line dataset. The
vertical dotted lines mark the x-axis values to which each cluster of points corresponds;
the clusters are spread out slightly along the x-axis so that their dispersion is visible.
These results are for the case in which 3/4 of the epochs are retained (1/4 lost to weather).}
\label{fig_line_sampling2}
\end{center}
\end{figure}
Next we explore the dependence of the measured CCF lag
on the geometry of the BLR and on the
sampling characteristics of the emission line light curve.
We focus on four very simple BLR geometries and one more
realistic one, as shown in Figure~\ref{fig_mockgeo}, including
\begin{enumerate}
\item A nearly face-on wide disk with preferential emission from the far side and a
disk midplane that is more than half opaque.
\item A nearly face-on ring with preferential emission from the near side.
\item A spherical shell (making a top-hat transfer function).
\item A spherical shell with preferential emission from the far side.
\item A perfectly face-on thin ring (making a $\delta$-function transfer function).
\end{enumerate}
We use these five geometries of the BLR to create simulated emission
line light curves, as shown in Figure~\ref{fig_mocklc} using the same
input continuum light curve and with very fine sampling of 0.1 days
for both the line and continuum light curves. The geometry model
parameter values are given in Table~\ref{table_geo_mod_true}. In
order to test how the quality of integrated emission line light curves
affects measurement of the CCF lag, we degraded the quality of the
simulated data by adding random Gaussian noise to the line light curve
and by reducing the sampling cadence. For each simulated dataset
degraded by adding $\sigma_{\rm noise}$ of random Gaussian noise, by
sampling the line light curve at some fraction of the true mean radius
of the BLR, and by losing a fraction of that sampled line light curve
to weather, we computed the CCF lags $\tau_{\rm cen}$ and $\tau_{\rm
peak}$ for 1000 realizations of assigning the random noise and losing
a fraction of the light curve to weather. The simulated line light
curves were degraded by:
\begin{enumerate}
\item Reducing the sampling cadence to 1/10, 1/5, 1/3, or 1/2 of the true
mean radius value of the geometry model of 4 light days. This means that the highest
cadence is about half a day.
\item Adding random Gaussian noise, $\sigma_{\rm noise}$, at the level of 0.05, 0.1, 0.2, 0.3,
0.4, 0.5, 0.6, 0.7, 0.8, 0.9, or 1.0 times the RMS variability of the simulated line light curve.
\item Including only a fraction of the total number of line light curve data points randomly
from the light curve to simulate observations lost to weather. The fractions are 1, 3/4, 2/3, and 1/2.
\end{enumerate}
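The three degradation steps above can be sketched with a hypothetical helper operating on a finely sampled light curve (the cadence, noise, and weather fractions below are examples drawn from the lists above):

```python
import numpy as np

def degrade(t, flux, r_mean, cadence_frac, noise_frac, keep_frac, rng):
    """Degrade a finely sampled light curve: (1) resample at a cadence
    given as a fraction of the mean radius, (2) add Gaussian noise scaled
    to the RMS variability, (3) drop random epochs ('weather')."""
    # (1) reduce the sampling cadence
    step = max(1, int(round(cadence_frac * r_mean / (t[1] - t[0]))))
    t_s, f_s = t[::step], flux[::step]
    # (2) add Gaussian noise at noise_frac times the RMS variability
    sigma = noise_frac * np.std(f_s)
    f_s = f_s + rng.normal(0.0, sigma, f_s.size)
    # (3) keep only a random fraction of the epochs
    keep = np.sort(rng.choice(t_s.size, int(keep_frac * t_s.size),
                              replace=False))
    return t_s[keep], f_s[keep], sigma

rng = np.random.default_rng(3)
t = np.arange(0.0, 100.0, 0.1)                    # 0.1-day fine sampling
flux = np.sin(2 * np.pi * t / 15.0)               # stand-in line light curve
# e.g. cadence of 1/10 of r_mean = 4 lt-days, noise at 0.2 RMS, 3/4 kept
t_d, f_d, sigma = degrade(t, flux, 4.0, 0.1, 0.2, 0.75, rng)
```

With $r_{\rm mean} = 4$ light days, the 1/10 cadence fraction gives a 0.4-day sampling interval, matching the highest cadence of about half a day quoted above.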
Some illustrative results of this comparison are shown in
Figure~\ref{fig_line_sampling2}, with the left-hand column showing the
CCF lag $\tau_{\rm cen}$ versus the ratio of $\sigma_{\rm noise}$ to
the RMS variability and the right-hand column showing the CCF lag
$\tau_{\rm cen}$ versus the cadence as a sampling fraction of $r_{\rm
mean}$. Figure~\ref{fig_line_sampling2} shows the results for when
3/4 of the line light curve is not lost to weather. The trend
continues for larger fractions of the light curve lost to weather: the
uncertainties on the measured CCF lag $\tau_{\rm cen}$ increase while
the mean lag measurement stays the same. For the case where no
observations are lost to weather, the error bars become comparable to
the size of the points in Figure~\ref{fig_line_sampling2}.
Overall,
these results suggest that for different geometries of the BLR
$\tau_{\rm cen}$ can be offset from the true lag value by as much as a
quarter of a day (for a true mean lag of $\sim 4$ days,
see Table~\ref{table_geo_mod_true} for the exact values), which is
well within typical uncertainties on CCF time lags quoted in the
literature. For light curves with larger flux errors and lower
cadence, this offset is easily within the error bars. In addition to
a possible offset from the true lag values, these results show the
importance of sampling the light curve at smaller fractions of the
mean lag, even when the signal to noise quality of the light curve is
high. As the fraction of the light curve lost to weather increases,
this effect becomes more important.
Detrending of the simulated light curves does not change these
results.
\subsection{The effects of continuum variability}
\label{sect_continuum_test}
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{figure12.pdf}
\caption{Histogram of the CCF center-of-mass time
lag $\tau_{\rm cen}$ for 1000 random continuum light curve model realizations.
The true mean lag is $3.74$ days.
The vertical solid red line denotes the median value of 3.33 days and the
dotted vertical red lines give the 68\% confidence interval around the median at
2.66 and 4.66 days.}
\label{fig_continuum_test}
\end{center}
\end{figure}
In addition to light curve sampling effects, there is also the
possibility that variability features in the AGN continuum light curve
could affect measurement of CCF lags. We explore this source of
uncertainty by generating 1000 random realizations of AGN continuum
light curves, keeping the continuum hyper-parameters fixed to values
similar to those inferred for Arp 151 and the BLR geometry model fixed
to the values for simulated integrated line dataset 1 given in
Table~\ref{table_geo_mod_true}. Given each realization of the AGN
continuum light curve and the fixed BLR geometry model, we generate an
integrated emission line light curve. We use the sampling cadence of
the LAMP 2008 dataset for Arp 151, described in
Section~\ref{sect_mockspectra}, for each realization of the continuum
and line light curves. We then calculate the CCF center-of-mass lag
$\tau_{\rm cen}$ for each of the 1000 realizations, obtaining
successful CCF lag measurements for over 90\% of the random continuum
realizations.
The results are shown in Figure~\ref{fig_continuum_test} as a
histogram of $\tau_{\rm cen}$ values, where we have truncated the
histogram to between zero and fifteen days for clarity. The median
and 68\% confidence interval for all measurements of $\tau_{\rm cen}$
is $3.33^{+1.33}_{-0.67}$ days, and considering only values of
$\tau_{\rm cen}$ between zero and fifteen days reduces the
uncertainties by less than 0.1 days.
Detrending of the simulated light curves results in a similar median
value for $\tau_{\rm cen}$ of $3.23^{+0.97}_{-0.61}$ days.
These inferred median values for
$\tau_{\rm cen}$ agree to within the uncertainties with each other and the true
value of the mean lag of $3.74$ days. This test demonstrates that the
main consequence of continuum variability is to add additional scatter
to measurements of $\tau_{\rm cen}$ on the order of $\sim 1$ day,
without shifting the median measurement of $\tau_{\rm cen}$ away from
the true value.
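The continuum realizations in this test are draws from the model's stochastic continuum prior; as a stand-in illustration, the sketch below draws random continuum light curves from a damped random walk (Ornstein-Uhlenbeck) process, a common AGN continuum model (the timescale and amplitude are invented, not the Arp 151 hyper-parameters).

```python
import numpy as np

def drw_light_curve(t, tau_d, sigma, mean, rng):
    """Draw one damped-random-walk continuum realization on epochs t,
    using the exact OU-process update between (possibly uneven) epochs."""
    f = np.empty(t.size)
    f[0] = mean + sigma * rng.standard_normal()
    for k in range(1, t.size):
        a = np.exp(-(t[k] - t[k - 1]) / tau_d)   # decay toward the mean
        f[k] = mean + a * (f[k - 1] - mean) \
             + sigma * np.sqrt(1.0 - a * a) * rng.standard_normal()
    return f

rng = np.random.default_rng(11)
t = np.arange(0.0, 500.0, 1.0)
# 20 random continuum realizations with a 40-day damping timescale
curves = [drw_light_curve(t, tau_d=40.0, sigma=0.1, mean=1.0, rng=rng)
          for _ in range(20)]
```

Each realization, fed through a fixed BLR geometry, yields a different line light curve and hence a different $\tau_{\rm cen}$; the spread of those values is the scatter quantified above.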
\subsection{Comparing the black hole mass and virial product}
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{figure13.pdf}
\caption{Distributions of $f$ factor values for a fixed value of black hole mass
and mean radius, with the other BLR model parameters allowed to vary. $f_\sigma$
and $f_{\rm FWHM}$ are calculated using the CCF lag $\tau_{\rm cen}$ and
the line dispersion of the RMS emission
line profile or the FWHM of the mean emission line profile respectively. The top
left panel shows the correlation between $f_\sigma$ and $f_{\rm FWHM}$.}
\label{fig_f_compare}
\end{center}
\end{figure}
Other than the mean lag from CCF analysis, the black hole mass
measured from the virial product is the key measurement of
reverberation mapping studies. However, the use of the virial product,
$M_{\rm vir} = c\tau \Delta v^2/G$, to measure black hole mass involves
making many assumptions, including that the mean lag is a good measure
of the physical scale of the BLR and that the width of the broad
emission line is a good measure of the velocity field of the BLR. We
attempt to quantify the uncertainty introduced by these assumptions by
calculating the virial product from instances of our geometry and
dynamics BLR model. We hold the black hole mass fixed at $M =
10^{6.5}M_\odot$ and the mean radius fixed at $\mu = 4$ light days
while allowing all other geometry and dynamics model parameters to
vary within their prior bounds, except for the inclination angle, which
is limited to between zero (face-on) and 45 degrees. For each sample
of a BLR model, we calculate the CCF lag $\tau_{\rm cen}$, the line
dispersion of the RMS emission line profile, and the full width at
half maximum (FWHM) of the mean line profile. Using these three
values we can calculate the virial product either using the line
dispersion or FWHM line width measurement. By further dividing the
true black hole mass by the virial product we can work in terms of the
virial coefficient $f$, where $f_\sigma$ is calculated from the virial
product using the line dispersion and $f_{\rm FWHM}$ is calculated
from the virial product using the FWHM.
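For concreteness, the virial-product arithmetic can be sketched with invented inputs (a 4-day lag, a 1500 km/s line width, and the fixed true mass $10^{6.5}M_\odot$ of this test; the lag and width are illustrative numbers, not fitted values):

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg
day = 86400.0          # s

def virial_product(tau_days, dv_kms):
    """M_vir = c * tau * dv^2 / G, in solar masses."""
    return c * (tau_days * day) * (dv_kms * 1e3) ** 2 / G / M_sun

# Illustrative inputs: tau_cen = 4 days, line width 1500 km/s,
# true mass 10^6.5 M_sun (the fixed value used in this test).
M_vir = virial_product(4.0, 1500.0)
f = 10 ** 6.5 / M_vir
print(f"M_vir = {M_vir:.2e} M_sun, log10(f) = {np.log10(f):.2f}")
```

Repeating this for many model draws, with $\tau_{\rm cen}$ and the line width measured from each, builds up the $f$-factor distributions of Figure~\ref{fig_f_compare}.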
The results for the comparison of true black hole mass to virial
product are shown in Figure~\ref{fig_f_compare} for 1000 samples of
the BLR model parameters (other than $M$ and $\mu$, which are held
fixed). The cadence of the continuum light curve and spectral time series
were based on the cadence of the LAMP 2008 dataset for Arp 151,
as described by \citet{bentz09}.
The values of $\log_{10}(f_\sigma)$ and $\log_{10}(f_{\rm
FWHM})$ are clearly correlated, but there is significant scatter in
the relation. More importantly, the dispersion in the $f$ factors is
encouragingly small: the mean value of $\log_{10}(f_\sigma)$ is 0.43,
with a standard deviation of 0.22, and the mean value of
$\log_{10}(f_{\rm FWHM})$ is $-0.39$ with a standard deviation of 0.26,
where the dispersion in the $f$ factors does not depend on whether
the light curves have been detrended.
This means that if the BLR is well described by our phenomenological
model, we should not be surprised that the $M_{\rm BH} - \sigma_*$
relation based on reverberation mapping black hole mass measurements
does not have much larger scatter than the canonical $\sim 0.4$ dex
found for galaxies with dynamical mass measurements.
\section{Conclusions}
\label{sect_conclusions}
In this paper we present an improved and expanded simply parameterized phenomenological
model of the BLR for direct modeling of reverberation mapping data. In addition to
describing the model in detail, we test the performance of the direct modeling approach
using simulated reverberation mapping datasets with and without full spectral information.
We also use this model of the BLR to explore sources of uncertainty in the traditional
cross-correlation analysis used to measure time lags in reverberation mapped AGNs
as well as sources of uncertainty in traditional measurements of the black hole mass
using the virial product. Our main conclusions are as follows:
\begin{enumerate}
\item For simulated data with the same properties as the LAMP 2008 spectroscopic dataset for
Arp 151, we can recover the black hole mass to within 0.05-0.25 dex uncertainty and
distinguish between elliptical orbits and inflow. We recover the mean radius
and mean lag with $5-12$\% uncertainties and the opening angle of the disk
and inclination angle to within $5-10$ degrees.
\item For the same simulated datasets, but where integrated emission line fluxes
are used instead of the full spectroscopic information, we can use a BLR geometry model
to constrain the mean radius
and mean lag with $5-35$\% uncertainties and obtain only minimal constraints
on the geometry of the BLR.
\item Using a combination of an integrated emission line light curve and a mean emission
line profile for direct modeling allows for some constraints on the geometry of the BLR, but
with greater uncertainty than from using the full spectroscopic dataset. The uncertainty
in $\log_{10}(M_{\rm BH}/M_\odot)$ is also greater compared to using
the full spectroscopic dataset.
\item Comparison of BLR geometry modeling results to those from JAVELIN \citep{zu11}
and CCF analysis shows that JAVELIN recovers a time lag between
the true mean and median lag, while CCF analysis recovers a time lag closer to the true mean lag. While
the larger lag uncertainties from CCF analysis may reflect the unknown shape of the transfer
function, the lag uncertainties from JAVELIN are smaller than the difference between the
true mean and median time lag.
\item By considering the range in possible BLR geometries of our model, we estimate
the uncertainty in converting a mean lag into a mean radius to be $\sim 25$\%.
\item The CCF lag $\tau_{\rm cen}$ can be offset from the true lag of a BLR model,
depending on the geometry. Both the signal-to-noise ratio of the line light curve and its
sampling rate affect the dispersion of the CCF lag about the true lag. Gaps in
the light curve due to weather introduce further uncertainty in the CCF lag.
\item For a given BLR geometry, changes in the variability features of the AGN
continuum light curve introduce an uncertainty of $\sim 25$\% into measurements
of the CCF lag $\tau_{\rm cen}$.
\item By considering the range in possible BLR geometries and dynamics of our model,
we estimate the uncertainty in measuring the black hole mass using the virial product
to be smaller than the spread in the $M_{\rm BH} - \sigma_*$ relation. We find that the
standard deviation of $f = M_{\rm BH}/M_{\rm vir}$ is only $\sim 0.25$ dex, i.e. smaller than the uncertainty typically quoted for virial mass estimates.
\end{enumerate}
The tests presented here demonstrate the unique capabilities
of dynamical modeling of reverberation mapping data to constrain the geometric and kinematic properties
of the BLR. While we can use hybrid datasets consisting of integrated line flux measurements and
a mean emission line profile, considerably more information is available from modeling the reverberations
across the emission line profile. The improvements we have made to this simply parameterized
phenomenological model of the BLR have increased the flexibility of the method to fit a wider variety
of emission line profiles. Future improvements will add a deeper connection to photoionization physics,
relating the distribution of broad line emission to the distribution of underlying gas, and explore the
effects of non-gravitational forces, important for inferring the correct black hole mass.
These tests also confirm that the uncertainties inherent in the traditional analysis of
measuring lags using the cross-correlation function and black hole masses using
the virial product are relatively small, although larger than the formal uncertainties.
The simplified problem of modeling integrated emission line light curves using
a geometry-only model for the BLR presents an alternative approach for measuring
time lags and mean radii of the BLR compared to the traditional analysis. One
advantage to measuring time lags and mean radii with geometry modeling of the BLR
is that the final uncertainties reflect the unknown underlying transfer function.
\section*{Acknowledgements}
We would like to thank Aaron Barth, Mike Goad, Keith Horne, Daniel Proga, and Ying Zu for helpful discussions.
AP acknowledges support from the NSF through the Graduate Research Fellowship Program.
AP, BJB, and TT acknowledge support from the Packard Foundation through a Packard Fellowship to TT and support from the NSF through awards NSF-CAREER-0642621 and NSF-1108835.
BJB is partially supported by the Marsden Fund (Royal Society of New Zealand).
\section{Introduction}
\label{sec:introduction}
Deep neural networks trained with backpropagation have attained superhuman performance in computer vision \cite{krizhevsky2012imagenet} and many other applications \cite{schmidhuber2015deep}, and are thus receiving unprecedented research interest. Despite the rapidly growing list of successful applications of these gradient-based methods, however, our theoretical understanding is progressing at a more modest pace.
One of the salient features of deep networks today is that they often have far more model parameters than training samples, yet some of these models exhibit remarkably good generalization performance when applied to unseen data of a similar nature, while others generalize poorly in exactly the same setting. A satisfying explanation of this phenomenon would be the key to more powerful and reliable network structures.
To answer such a question, statistical learning theory has proposed interpretations from the viewpoint of system complexity \cite{vapnik2013nature,bartlett2002rademacher,poggio2004general}. In the case of large numbers of parameters, it is suggested to apply some form of regularization to ensure good generalization performance. Regularizations can be explicit, such as the dropout technique \cite{srivastava2014dropout} or the $l_2$-penalization (weight decay) as reported in \cite{krizhevsky2012imagenet}; or implicit, as in the case of the early stopping strategy \cite{yao2007early} or the stochastic gradient descent algorithm itself \cite{zhang2016understanding}.
Inspired by the recent line of works \cite{saxe2013exact,advani2017high}, in this article we introduce a random matrix framework to analyze the training and, more importantly, the generalization performance of neural networks, trained by gradient descent. Preliminary results established from a toy model of two-class classification on a single-layer linear network are presented, which, despite their simplicity, shed new light on the understanding of many important aspects in training neural nets. In particular, we demonstrate how early stopping can naturally protect the network against overfitting, which becomes more severe as the number of training samples approaches the dimension of the data. We also provide a strict lower bound on the training sample size for a given classification task in this simple setting. A byproduct of our analysis implies that random initialization, although commonly used in practice in training deep networks \cite{glorot2010understanding,krizhevsky2012imagenet}, may lead to a degradation of the network performance.
From a more theoretical point of view, our analyses allow one to evaluate any functional of the eigenvalues of the sample covariance matrix of the data (or of the data representation learned from previous layers in a deep model), which is at the core of understanding many experimental observations in today's deep networks \cite{glorot2010understanding,ioffe2015batch}. Our results are envisioned to generalize to more elaborate settings, notably to deeper models that are trained with the stochastic gradient descent algorithm, which is of more practical interest today due to the tremendous size of the data.
\emph{Notations}: Boldface lowercase (uppercase) characters stand for vectors (matrices), and non-boldface for scalars respectively. $\mathbf{0}_p$ is the column vector of zeros of size $p$, and $\mathbf{I}_p$ the $p \times p$ identity matrix. The notation $(\cdot)^{\sf T}$ denotes the transpose operator. The norm $\| \cdot \| $ is the Euclidean norm for vectors and the operator norm for matrices. $\Im(\cdot)$ denotes the imaginary part of a complex number. For $x \in \mathbb{R}$, we denote for simplicity $(x)^+ \equiv \max(x,0)$.
In the remainder of the article, we introduce the problem of interest and recall the results of \cite{saxe2013exact} in Section~\ref{sec:problem}. After a brief overview of basic concepts and methods to be used throughout the article in Section~\ref{sec:preliminaries}, our main results on the training and generalization performance of the network are presented in Section~\ref{sec:performance}, followed by a thorough discussion in Section~\ref{sec:discuss} and experiments on the popular MNIST database \cite{lecun1998mnist} in Section~\ref{sec:validations}. Section~\ref{sec:conclusion} concludes the article by summarizing the main results and outlining future research directions.
\section{Problem Statement}
\label{sec:problem}
Let the training data $\mathbf{x}_1, \ldots, \mathbf{x}_n \in \mathbb{R}^p$ be independent vectors drawn from two distribution classes $\mathcal{C}_1$ and $\mathcal{C}_2$ of cardinality $n_1$ and $n_2$ (thus $n_1 + n_2 = n$), respectively. We assume that the data vector $\mathbf{x}_i$ of class $\mathcal{C}_a$ can be written as
\[
\mathbf{x}_i = (-1)^a \boldsymbol{\mu} + \mathbf{z}_i
\]
for $a = \{1,2\}$, with $\boldsymbol{\mu} \in \mathbb{R}^p$ and $\mathbf{z}_i$ a Gaussian random vector $\mathbf{z}_i \sim \mathcal{N}(\mathbf{0}_p, \mathbf{I}_p)$. In the context of a binary classification problem, one takes the label $y_i = -1$ for $\mathbf{x}_i \in \mathcal{C}_1$ and $y_j = 1$ for $\mathbf{x}_j \in \mathcal{C}_2$ to distinguish the two classes.
We denote by $\mathbf{X} = \begin{bmatrix} \mathbf{x}_1, \ldots, \mathbf{x}_n \end{bmatrix} \in \mathbb{R}^{p \times n}$ the training data matrix obtained by stacking the $\mathbf{x}_i$'s as column vectors, and by $\mathbf{y} \in \mathbb{R}^n$ the associated label vector. With the pair $\{\mathbf{X}, \mathbf{y}\}$, a classifier is trained using ``full-batch'' gradient descent to minimize the loss function $L(\mathbf{w})$ given by
\[
L(\mathbf{w}) = \frac1{2n} \| \mathbf{y}^{\sf T} - \mathbf{w}^{\sf T} \mathbf{X} \|^2
\]
so that for a new datum $\hat \mathbf{x}$, the output of the classifier is $\hat y = \mathbf{w}^{\sf T} \hat \mathbf{x}$, the sign of which is then used to decide the class of $\hat \mathbf{x}$. The derivative of $L$ with respect to $\mathbf{w}$ is given by
\[
\frac{\partial L(\mathbf{w})}{\partial \mathbf{w}} = - \frac1{n} \mathbf{X} (\mathbf{y} - \mathbf{X}^{\sf T} \mathbf{w}).
\]
The gradient descent algorithm \cite{boyd2004convex} takes small steps of size $\alpha$ along the \emph{opposite direction} of the associated gradient, i.e., $\mathbf{w}_{t+1} = \mathbf{w}_t - \alpha \frac{\partial L(\mathbf{w})}{\partial \mathbf{w}} \big|_{\mathbf{w} = \mathbf{w}_t}$.
Following the previous works of \cite{saxe2013exact,advani2017high}, when the learning rate $\alpha$ is small, $\mathbf{w}_{t+1}$ and $\mathbf{w}_t$ are close to each other so that by performing a continuous-time approximation, one obtains the following differential equation
\[
\frac{\partial \mathbf{w}(t)}{\partial t} = - \alpha \frac{\partial L(\mathbf{w})}{\partial \mathbf{w}} = \frac{\alpha}{n} \mathbf{X} \left(\mathbf{y} - \mathbf{X}^{\sf T} \mathbf{w}(t) \right)
\]
the solution of which is given explicitly by
\begin{equation}
\mathbf{w}(t) = e^{- \frac{\alpha t}n \mathbf{X} \mathbf{X}^{\sf T} } \mathbf{w}_0 + \left(\mathbf{I}_p - e^{- \frac{\alpha t}n \mathbf{X}\mathbf{X}^{\sf T} } \right) ( \mathbf{X}\mathbf{X}^{\sf T} )^{-1} \mathbf{X}\mathbf{y}
\label{eq:solution-de}
\end{equation}
if one assumes that $\mathbf{X} \mathbf{X}^{\sf T}$ is invertible (only possible in the case $p < n$), with $\mathbf{w}_0 \equiv \mathbf{w}(t=0)$ the initialization of the weight vector; we recall the definition of the exponential of the matrix $\frac1n \mathbf{X} \mathbf{X}^{\sf T}$ given by the power series $e^{\frac1n \mathbf{X} \mathbf{X}^{\sf T}} = \sum_{k=0}^\infty \frac1{k!} (\frac1n \mathbf{X} \mathbf{X}^{\sf T})^k = \mathbf{V} e^{\boldsymbol{\Lambda}} \mathbf{V}^{\sf T}$, with the eigendecomposition $\frac1n \mathbf{X} \mathbf{X}^{\sf T} = \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^{\sf T}$, where $e^{\boldsymbol{\Lambda}}$ is the diagonal matrix whose entries are the exponentials of the entries of $\boldsymbol{\Lambda}$. As $t\to \infty$ the network ``forgets'' the initialization $\mathbf{w}_0$ and converges to the least-squares solution $\mathbf{w}_{LS} \equiv ( \mathbf{X}\mathbf{X}^{\sf T} )^{-1} \mathbf{X}\mathbf{y}$.
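The closed-form trajectory in \eqref{eq:solution-de} can be checked numerically against explicit gradient descent; the sketch below uses arbitrary small dimensions, learning rate, and $\boldsymbol{\mu}$, with one time unit per gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, alpha, T = 5, 100, 0.01, 500

# Two-class Gaussian data: x_i = (-1)^a mu + z_i, i.e. x_i = y_i mu + z_i
# with labels y_i = -1 (class 1) and +1 (class 2).
mu = 2.0 * np.ones(p) / np.sqrt(p)
y = np.concatenate([-np.ones(n // 2), np.ones(n // 2)])
X = np.outer(mu, y) + rng.standard_normal((p, n))
w0 = rng.standard_normal(p) / np.sqrt(p)

# Explicit gradient descent on L(w) = ||y - X^T w||^2 / (2n).
w = w0.copy()
for _ in range(T):
    w += (alpha / n) * X @ (y - X.T @ w)

# Closed-form w(t) via the eigendecomposition of (1/n) X X^T.
lam, V = np.linalg.eigh(X @ X.T / n)
E = V @ np.diag(np.exp(-alpha * T * lam)) @ V.T   # exp(-alpha t XX^T / n)
w_ls = np.linalg.solve(X @ X.T, X @ y)            # least-squares solution
w_t = E @ w0 + (np.eye(p) - E) @ w_ls

print("max deviation:", np.abs(w - w_t).max())
```

For small $\alpha$ the Euler iterates track the continuous-time solution closely, and both approach $\mathbf{w}_{LS}$ as the number of steps grows.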
When $p>n$, $\mathbf{X}\mathbf{X}^{\sf T}$ is no longer invertible. Assuming $\mathbf{X}^{\sf T} \mathbf{X}$ is invertible and writing $\mathbf{X}\mathbf{y} = \left(\mathbf{X}\mathbf{X}^{\sf T}\right)\mathbf{X}\left(\mathbf{X}^{\sf T}\mathbf{X}\right)^{-1}\mathbf{y}$, the solution is similarly given by
\[
\mathbf{w}(t) = e^{- \frac{\alpha t}n \mathbf{X} \mathbf{X}^{\sf T} } \mathbf{w}_0 + \mathbf{X} \left(\mathbf{I}_n - e^{- \frac{\alpha t}n \mathbf{X}^{\sf T} \mathbf{X} } \right) ( \mathbf{X}^{\sf T} \mathbf{X} )^{-1} \mathbf{y}
\]
with the least-square solution $\mathbf{w}_{LS} \equiv \mathbf{X}( \mathbf{X}^{\sf T}\mathbf{X} )^{-1} \mathbf{y}$.
In \cite{advani2017high} it is assumed that $\mathbf{X}$ has i.i.d.\@ entries and that there is no linking structure between the data and the associated targets, so that the ``true'' weight vector $\bar \mathbf{w}$ to be learned is independent of $\mathbf{X}$, which simplifies the analysis. In the present work we instead aim at exploring the capacity of the network to retrieve the (mixture modeled) data structure, and we place ourselves in the more realistic setting where $\mathbf{w}$ captures the statistical differences between the classes of the pair $(\mathbf{X},\mathbf{y})$. Our results are therefore of more direct relevance to practice.
From \eqref{eq:solution-de} note that both $e^{-\frac{\alpha t}{n} \mathbf{X} \mathbf{X}^{\sf T}}$ and $\mathbf{I}_p - e^{-\frac{\alpha t}{n} \mathbf{X} \mathbf{X}^{\sf T}}$ share the same eigenvectors as the \emph{sample covariance matrix} $\frac1n \mathbf{X} \mathbf{X}^{\sf T}$, which thus plays a pivotal role in the network learning dynamics. More concretely, the projections of $\mathbf{w}_0$ and $\mathbf{w}_{LS}$ onto the eigenspace of $\frac1n \mathbf{X} \mathbf{X}^{\sf T}$, weighted by functions ($\exp(-\alpha t \lambda_i)$ or $1-\exp(-\alpha t \lambda_i)$) of the associated eigenvalues $\lambda_i$, give the temporal evolution of $\mathbf{w}(t)$ and consequently the training and generalization performance of the network. The core of our study therefore consists in a detailed understanding of the eigenpairs of this sample covariance matrix, which have been extensively investigated in the random matrix literature \cite{bai2010spectral}.
\section{Preliminaries}
\label{sec:preliminaries}
Throughout this paper, we will be relying on some basic yet powerful concepts and methods from random matrix theory, which shall be briefly highlighted in this section.
\subsection{Resolvent and deterministic equivalents}
\label{subsec:resolvent-and-its-D-E}
Consider an $n \times n$ Hermitian random matrix $\mathbf{M}$. We define its \emph{resolvent} $\mathbf{Q}_{\mathbf{M}}(z)$, for $z \in \mathbb{C}$ not an eigenvalue of $\mathbf{M}$, as
\[
\mathbf{Q}_{\mathbf{M}}(z) = \left( \mathbf{M} - z \mathbf{I}_n \right)^{-1}.
\]
Through the Cauchy integral formula discussed in the following subsection, as well as its central importance in random matrix theory, $\mathbf{Q}_{\frac1n \mathbf{X}\X^{\sf T}}(z)$ is the key object investigated in this article.
For certain simple distributions of $\mathbf{M}$, one may define a so-called \emph{deterministic equivalent} \cite{hachem2007deterministic,couillet2011random} $\bar\mathbf{Q}_{\mathbf{M}}$ of $\mathbf{Q}_{\mathbf{M}}$, i.e., a deterministic matrix such that for all $\mathbf{A}\in \mathbb{R}^{n \times n}$ and all $\mathbf{a,b} \in \mathbb{R}^n$ of bounded (spectral and Euclidean, respectively) norms, $\frac1n \tr \left( \mathbf{A} \mathbf{Q}_{\mathbf{M}} \right) - \frac1n \tr \left( \mathbf{A} \bar \mathbf{Q}_{\mathbf{M}} \right) \to 0$ and $\mathbf{a}^{\sf T} \left( \mathbf{Q}_{\mathbf{M}} - \bar \mathbf{Q}_{\mathbf{M}} \right) \mathbf{b} \to 0$ almost surely as $n \to \infty$. As such, deterministic equivalents allow one to capture the random spectral behavior of $\mathbf{M}$ through deterministic limiting quantities, and thus allow for a more detailed investigation.
\subsection{Cauchy's integral formula}
\label{subsec:cauchy-integral-and-residue}
First note that the resolvent $\mathbf{Q}_{\mathbf{M}}(z)$ has the same eigenspace as $\mathbf{M}$, with each associated eigenvalue $\lambda_i$ replaced by $\frac1{\lambda_i - z}$. As discussed at the end of Section~\ref{sec:problem}, our objective is to evaluate functions of these eigenvalues, which calls for Cauchy's integral formula: for any function $f$ holomorphic on an open subset $U$ of the complex plane, one can compute $f(\lambda)$ by contour integration. More concretely, for a closed positively (counter-clockwise) oriented path $\gamma$ in $U$ with winding number one (i.e., describing a $360^\circ$ rotation), one has $\frac1{2\pi i} \oint_{\gamma} \frac{f(z)}{z - \lambda} dz = f(\lambda)$ if $\lambda$ is enclosed by $\gamma$, and $\frac1{2\pi i} \oint_{\gamma} \frac{f(z)}{z - \lambda} dz = 0$ if $\lambda$ lies outside the contour $\gamma$.
With Cauchy's integral formula, one is able to evaluate more sophisticated functionals of the random matrix $\mathbf{M}$. For example, for $f(\mathbf{M}) \equiv \mathbf{a}^{\sf T} e^{\mathbf{M}} \mathbf{b}$ one has
\[
f(\mathbf{M}) = -\frac1{2 \pi i} \oint_{\gamma} \exp(z) \mathbf{a}^{\sf T} \mathbf{Q}_{\mathbf{M}}(z) \mathbf{b}\ dz
\]
with $\gamma$ a positively oriented path circling around \emph{all} the eigenvalues of $\mathbf{M}$. Moreover, from the previous subsection one knows that the bilinear form $\mathbf{a}^{\sf T} \mathbf{Q}_{\mathbf{M}}(z) \mathbf{b}$ is asymptotically close to a non-random quantity $\mathbf{a}^{\sf T} \bar \mathbf{Q}_{\mathbf{M}}(z) \mathbf{b}$. One thus deduces that the functional $\mathbf{a}^{\sf T} e^{\mathbf{M}} \mathbf{b}$ has an asymptotically deterministic behavior that can be expressed as $-\frac1{2 \pi i} \oint_{\gamma} \exp(z) \mathbf{a}^{\sf T} \bar \mathbf{Q}_{\mathbf{M}}(z) \mathbf{b}\ dz$.
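This contour representation is easy to probe numerically: discretizing a circle that encloses all eigenvalues of a (here small, symmetric) test matrix recovers $\mathbf{a}^{\sf T} e^{\mathbf{M}} \mathbf{b}$ to quadrature accuracy. The sketch below uses sizes and a contour radius of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n))
M = (A + A.T) / np.sqrt(2 * n)            # symmetric; eigenvalues lie roughly in [-2, 2]
a, b = rng.standard_normal(n), rng.standard_normal(n)

lam, V = np.linalg.eigh(M)
direct = a @ (V * np.exp(lam)) @ V.T @ b  # a^T e^M b via eigendecomposition

# -(1/(2 pi i)) \oint e^z a^T (M - z I)^{-1} b dz over a circle of radius 3
K = 200
theta = 2 * np.pi * (np.arange(K) + 0.5) / K
z = 3.0 * np.exp(1j * theta)
dz = 3.0 * 1j * np.exp(1j * theta) * (2 * np.pi / K)
vals = np.array([np.exp(zk) * (a @ np.linalg.solve(M - zk * np.eye(n), b)) for zk in z])
contour = -np.sum(vals * dz) / (2j * np.pi)
print(abs(contour - direct))              # agreement up to (tiny) quadrature error
```

The trapezoid rule on a circle converges exponentially fast here, since the contour stays well separated from the spectrum.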
This observation serves in the present article as the foundation for the performance analysis of the gradient-based classifier, as described in the following section.
\section{Temporal Evolution of Training and Generalization Performance}
\label{sec:performance}
With the explicit expression of $\mathbf{w}(t)$ in \eqref{eq:solution-de}, we now turn our attention to the training and generalization performances of the classifier as a function of the training time $t$. To this end, we shall be working under the following assumptions.
\begin{Assumption}[Growth Rate]
As $n \to \infty$,
\begin{enumerate}
\item $\frac{p}{n} \to c \in (0,\infty)$.
\item For $a= \{1,2\}$, $\frac{n_a}{n} \to c_a \in (0,1)$.
\item $\| \boldsymbol{\mu} \| = O(1)$.
\end{enumerate}
\label{ass:growth-rate}
\end{Assumption}
The above assumption ensures that the matrix $ \frac1n \mathbf{X} \mathbf{X}^{\sf T}$ is of bounded operator norm for all large $n,p$ with probability one \cite{bai1998no}.
\begin{Assumption}[Random Initialization]
Let $\mathbf{w}_0 \equiv \mathbf{w}(t=0)$ be a random vector with i.i.d.\@ entries of zero mean, variance $\sigma^2/p$ for some $\sigma>0$ and finite fourth moment.
\label{ass:initialization}
\end{Assumption}
We first focus on the generalization performance, i.e., the average performance of the trained classifier taking as input an unseen new datum $\hat \mathbf{x}$ drawn from class $\mathcal{C}_1$ or $\mathcal{C}_2$.
\subsection{Generalization Performance}
\label{subsec:generalization-perf}
To evaluate the generalization performance of the classifier, we are interested in two types of misclassification rates, for a new datum $\hat \mathbf{x}$ drawn from class $\mathcal{C}_1$ or $\mathcal{C}_2$, as
\[
{\rm P}( \mathbf{w}(t)^{\sf T} \hat \mathbf{x} > 0~|~\hat \mathbf{x} \in \mathcal{C}_1), \quad {\rm P}( \mathbf{w}(t)^{\sf T} \hat \mathbf{x} < 0~|~\hat \mathbf{x} \in \mathcal{C}_2).
\]
Since the new datum $\hat \mathbf{x}$ is independent of $\mathbf{w}(t)$, $\mathbf{w}(t)^{\sf T} \hat \mathbf{x}$ is, conditionally on $\mathbf{w}(t)$, a Gaussian random variable of mean $\pm \mathbf{w}(t)^{\sf T} \boldsymbol{\mu}$ and variance $ \| \mathbf{w}(t) \|^2 $. The above probabilities can therefore be expressed via the Gaussian tail function $Q(x) \equiv \frac1{\sqrt{2\pi}} \int_x^{\infty} \exp\left( -\frac{u^2}2 \right) du$, so that it suffices to compute $\mathbf{w}(t)^{\sf T} \boldsymbol{\mu}$ and $ \mathbf{w}(t)^{\sf T} \mathbf{w}(t) $ to evaluate the aforementioned classification errors.
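For instance, for any fixed classifier $\mathbf{w}$ independent of the test sample, the resulting $Q$-function expression can be verified by Monte Carlo; the classifier, dimensions and sample size below are arbitrary illustrative choices:

```python
import numpy as np
from math import erfc, sqrt

def Qfun(x):                                # Gaussian tail: Q(x) = P(N(0,1) > x)
    return 0.5 * erfc(x / sqrt(2))

rng = np.random.default_rng(2)
p = 100
mu = np.zeros(p); mu[0] = 2.0
w = rng.standard_normal(p) + 3 * mu         # some fixed classifier, independent of test data

# test data from C_2: x = mu + z, so w^T x ~ N(w^T mu, ||w||^2)
N = 50_000
Xtest = mu[:, None] + rng.standard_normal((p, N))
emp = np.mean(w @ Xtest < 0)                # empirical   P(w^T x < 0 | x in C_2)
theo = Qfun(w @ mu / np.linalg.norm(w))     # theoretical Q(w^T mu / ||w||)
print(emp, theo)
```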
For $\boldsymbol{\mu}^{\sf T} \mathbf{w}(t)$, with Cauchy's integral formula we have
\begin{align*}
&\boldsymbol{\mu}^{\sf T} \mathbf{w}(t) = \boldsymbol{\mu}^{\sf T} e^{- \frac{\alpha t}n \mathbf{X} \mathbf{X}^{\sf T} } \mathbf{w}_0 + \boldsymbol{\mu}^{\sf T} \left(\mathbf{I}_p - e^{- \frac{\alpha t}n \mathbf{X}\X^{\sf T} } \right) \mathbf{w}_{LS}\\
&= -\frac1{2\pi i} \oint_{\gamma} f_t(z) \boldsymbol{\mu}^{\sf T} \left( \frac1n \mathbf{X} \mathbf{X}^{\sf T} - z \mathbf{I}_p \right)^{-1} \mathbf{w}_0 \ dz \\
& -\frac1{2\pi i} \oint_{\gamma} \frac{1-f_t(z)}{z} \boldsymbol{\mu}^{\sf T} \left( \frac1n \mathbf{X} \mathbf{X}^{\sf T} - z \mathbf{I}_p \right)^{-1} \frac1n \mathbf{X}\mathbf{y} \ dz
\end{align*}
with $f_t(z) \equiv \exp(-\alpha t z)$, for a positive closed path $\gamma$ circling around all eigenvalues of $\frac1n \mathbf{X} \mathbf{X}^{\sf T}$. Note that the data matrix $\mathbf{X}$ can be rewritten as
\[
\mathbf{X} = -\boldsymbol{\mu} \j_1^{\sf T} + \boldsymbol{\mu} \j_2^{\sf T} + \mathbf{Z} = \boldsymbol{\mu} \mathbf{y}^{\sf T} + \mathbf{Z}
\]
with $\mathbf{Z} \equiv \begin{bmatrix} \mathbf{z}_1, \ldots, \mathbf{z}_n \end{bmatrix} \in \mathbb{R}^{p \times n}$ of i.i.d.\@ $\mathcal{N}(0,1)$ entries and $\j_a \in \mathbb{R}^n$ the canonical vectors of class $\mathcal{C}_a$ such that $(\j_a)_i = \delta_{\mathbf{x}_i \in \mathcal{C}_a}$. To isolate the deterministic vectors $\boldsymbol{\mu}$ and $\j_a$'s from the random $\mathbf{Z}$ in the expression of $\boldsymbol{\mu}^{\sf T} \mathbf{w}(t)$, we exploit Woodbury's identity to obtain
\begin{align*}
&\left( \frac1n \mathbf{X} \mathbf{X}^{\sf T} - z \mathbf{I}_p \right)^{-1} = \mathbf{Q}(z) - \mathbf{Q}(z) \begin{bmatrix} \boldsymbol{\mu} & \frac1n \mathbf{Z}\mathbf{y} \end{bmatrix} \\
&\begin{bmatrix} \boldsymbol{\mu}^{\sf T} \mathbf{Q}(z) \boldsymbol{\mu} & 1+\frac1n \boldsymbol{\mu}^{\sf T} \mathbf{Q}(z) \mathbf{Z}\mathbf{y} \\ * & -1 + \frac1n \mathbf{y}^{\sf T} \mathbf{Z}^{\sf T} \mathbf{Q}(z) \frac1n \mathbf{Z} \mathbf{y} \end{bmatrix}^{-1} \begin{bmatrix} \boldsymbol{\mu}^{\sf T} \\ \frac1n \mathbf{y}^{\sf T} \mathbf{Z}^{\sf T} \end{bmatrix} \mathbf{Q}(z)
\end{align*}
where we denote the resolvent $\mathbf{Q}(z) \equiv \left( \frac1n \mathbf{Z}\Z^{\sf T} - z \mathbf{I}_p \right)^{-1}$, a deterministic equivalent of which is given by
\[
\mathbf{Q}(z) \leftrightarrow \bar \mathbf{Q}(z) \equiv m(z) \mathbf{I}_p
\]
with $m(z)$ determined by the celebrated Marčenko–Pastur equation \cite{marvcenko1967distribution}
\begin{equation}
m(z) = \frac{1-c-z}{2cz} + \frac{\sqrt{(1-c-z)^2 - 4cz}}{2cz}
\label{eq:MP-equation}
\end{equation}
where the branch of the square root is selected in such a way that $\Im(z) \cdot \Im m(z) >0$, i.e., for a given $z$ there exists a \emph{unique} corresponding $m(z)$.
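The deterministic equivalent $m(z)\mathbf{I}_p$ is straightforward to probe numerically. In the sketch below (dimensions and test point $z$ are our own choices), $m(z)$ is obtained as the root of the equivalent quadratic equation $c z m^2(z) - (1-c-z)m(z) + 1 = 0$ satisfied by \eqref{eq:MP-equation}, with the branch fixed by $\Im(z)\cdot\Im m(z) > 0$, and compared with $\frac1p \tr \mathbf{Q}(z)$ for a pure-noise matrix:

```python
import numpy as np

def mp_stieltjes(z, c):
    # root of  c z m^2 - (1 - c - z) m + 1 = 0  with Im(z) * Im(m) > 0
    r = np.roots([c * z, -(1 - c - z), 1.0])
    return r[0] if np.imag(r[0]) * np.imag(z) > 0 else r[1]

rng = np.random.default_rng(3)
p, n = 400, 800
c = p / n
Z = rng.standard_normal((p, n))
z0 = 1.0 + 1.0j                                    # any point off the real axis
Q = np.linalg.inv(Z @ Z.T / n - z0 * np.eye(p))
print(abs(np.trace(Q) / p - mp_stieltjes(z0, c)))  # small, vanishing as n grows
```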
Replacing $\mathbf{Q}(z)$ by its (simple-form) deterministic equivalent $m(z) \mathbf{I}_p$, we can thus approximate the random variable $\boldsymbol{\mu}^{\sf T} \mathbf{w}(t)$ by a contour integral of deterministic quantities as $n,p \to \infty$. Similar arguments hold for $\mathbf{w}(t)^{\sf T} \mathbf{w}(t)$, together leading to the following theorem.
\begin{Theorem}[Generalization Performance]
Let Assumptions~\ref{ass:growth-rate} and~\ref{ass:initialization} hold. As $n \to \infty$, with probability one
\begin{align*}
&{\rm P}( \mathbf{w}(t)^{\sf T} \hat \mathbf{x} > 0~|~\hat \mathbf{x} \in \mathcal{C}_1) - Q\left( \frac{E }{ \sqrt{V} } \right) \to 0 \\
&{\rm P}( \mathbf{w}(t)^{\sf T} \hat \mathbf{x} < 0~|~\hat \mathbf{x} \in \mathcal{C}_2) - Q\left( \frac{E }{ \sqrt{V} } \right)\to 0
\end{align*}
where
\begin{align*}
E &\equiv -\frac1{2\pi i} \oint_{\gamma} \frac{1-f_t(z)}{z} \frac{ \| \boldsymbol{\mu} \|^2 m(z) \ dz}{ \left( \| \boldsymbol{\mu} \|^2 +c \right) m(z) +1 } \\
V &\equiv \frac1{2\pi i} \oint_{\gamma} \left[\frac{ \frac1{z^2} \left(1-f_t(z)\right)^2\ }{ \left( \| \boldsymbol{\mu} \|^2 +c \right) m(z) +1 } - \sigma^2 f_t^2(z) m(z) \right]dz
\end{align*}
with $\gamma$ a closed positively oriented path that contains all eigenvalues of $\frac1n \mathbf{X} \mathbf{X}^{\sf T}$ and the origin, $f_t(z) \equiv \exp(-\alpha t z)$ and $m(z)$ given by Equation~\eqref{eq:MP-equation}.
\label{theo:generalize-perf}
\end{Theorem}
Although derived in the case $p<n$, Theorem~\ref{theo:generalize-perf} also applies when $p>n$. To see this, note that, for $z\neq0$ not an eigenvalue of $\frac1n \mathbf{X}^{\sf T} \mathbf{X}$ (thus not of $\frac1n \mathbf{X} \mathbf{X}^{\sf T}$), one has the identity $\mathbf{X} \left( \frac1n \mathbf{X}^{\sf T} \mathbf{X} - z \mathbf{I}_n \right)^{-1}\mathbf{y} = \left( \frac1n \mathbf{X}\X^{\sf T} - z \mathbf{I}_p \right)^{-1} \mathbf{X} \mathbf{y}$, which, combined with Cauchy's integral formula, leads to the same expressions as in Theorem~\ref{theo:generalize-perf}. Since $\frac1n \mathbf{X}\X^{\sf T}$ and $\frac1n \mathbf{X}^{\sf T}\mathbf{X}$ have the same eigenvalues, except for additional zero eigenvalues for the larger matrix, the path $\gamma$ remains unchanged (as we demand that $\gamma$ contains the origin) and hence Theorem~\ref{theo:generalize-perf} holds true for both $p<n$ and $p>n$. The case $p=n$ follows by a continuity argument.
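The resolvent identity invoked here holds for any $z$ that is not an eigenvalue of either Gram matrix, and is immediate to verify numerically (the small dimensions below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 60, 30                                    # p > n regime
X = rng.standard_normal((p, n))
y = rng.standard_normal(n)
z = -1.0                                         # not an eigenvalue: both Gram matrices are PSD
lhs = X @ np.linalg.solve(X.T @ X / n - z * np.eye(n), y)
rhs = np.linalg.solve(X @ X.T / n - z * np.eye(p), X @ y)
print(np.max(np.abs(lhs - rhs)))                 # zero up to round-off
```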
\subsection{Training performance}
\label{eq:training-perf}
To compare generalization versus training performance, we are now interested in the behavior of the classifier when applied to the training set $\mathbf{X}$. To this end, we consider the random vector $\mathbf{X}^{\sf T} \mathbf{w}(t)$ given by
\[
\mathbf{X}^{\sf T} \mathbf{w}(t) = \mathbf{X}^{\sf T} e^{- \frac{\alpha t}n \mathbf{X} \mathbf{X}^{\sf T} } \mathbf{w}_0 + \mathbf{X}^{\sf T} \left(\mathbf{I}_p - e^{- \frac{\alpha t}n \mathbf{X}\X^{\sf T} } \right) \mathbf{w}_{LS}.
\]
Note that the $i$-th entry of $\mathbf{X}^{\sf T} \mathbf{w}(t)$ is given by the bilinear form $\mathbf{e}_i^{\sf T} \mathbf{X}^{\sf T} \mathbf{w}(t)$, with $\mathbf{e}_i$ the canonical vector with unique non-zero entry $[\mathbf{e}_i]_i = 1$. With previous notations we have
\begin{align*}
&\mathbf{e}_i^{\sf T} \mathbf{X}^{\sf T} \mathbf{w}(t)\\
&= -\frac1{2\pi i} \oint_{\gamma} f_t(z) \mathbf{e}_i^{\sf T} \mathbf{X}^{\sf T} \left( \frac1n \mathbf{X} \mathbf{X}^{\sf T} - z \mathbf{I}_p \right)^{-1} \mathbf{w}_0 \ dz\\
&-\frac1{2\pi i} \oint_{\gamma} \frac{1-f_t(z)}{z} \mathbf{e}_i^{\sf T} \frac1n \mathbf{X}^{\sf T} \left( \frac1n \mathbf{X} \mathbf{X}^{\sf T} - z \mathbf{I}_p \right)^{-1} \mathbf{X} \mathbf{y} \ dz
\end{align*}
which yields the following results.
\begin{Theorem}[Training Performance]
Under the assumptions and notations of Theorem~\ref{theo:generalize-perf}, as $n \to \infty$,
\begin{align*}
&{\rm P}( \mathbf{w}(t)^{\sf T} \mathbf{x}_i > 0~|~\mathbf{x}_i \in \mathcal{C}_1) - Q\left( \frac{E_* }{ \sqrt{V_* - E_*^2 } } \right) \to 0 \\
&{\rm P}( \mathbf{w}(t)^{\sf T} \mathbf{x}_i < 0~|~\mathbf{x}_i \in \mathcal{C}_2) - Q\left( \frac{E_* }{ \sqrt{V_* - E_*^2} } \right)\to 0
\end{align*}
almost surely, with
\begin{align*}
E_* &\equiv \frac1{2\pi i} \oint_{\gamma} \frac{1-f_t(z)}{z} \frac{dz}{ \left( \| \boldsymbol{\mu} \|^2 +c \right) m(z) +1 } \\
V_* &\equiv \frac1{2\pi i} \oint_{\gamma} \left[\frac{ \frac1{z} \left(1-f_t(z)\right)^2\ }{ \left( \| \boldsymbol{\mu} \|^2 +c \right) m(z) +1 } - \sigma^2 f_t^2(z) z m(z) \right] dz.
\end{align*}
\label{theo:training-perf}
\end{Theorem}
\begin{figure}[htb]
\vskip 0.1in
\begin{center}
\begin{tikzpicture}[font=\footnotesize,spy using outlines]
\renewcommand{\axisdefaulttryminticks}{4}
\pgfplotsset{every major grid/.style={densely dashed}}
\tikzstyle{every axis y label}+=[yshift=-10pt]
\tikzstyle{every axis x label}+=[yshift=5pt]
\pgfplotsset{every axis legend/.append style={cells={anchor=west},fill=white, at={(0.98,0.98)}, anchor=north east, font=\footnotesize, }}
\begin{axis}[
width=\columnwidth,
height=0.7\columnwidth,
xmin=0,
xmax=300,
ymin=-.01,
ymax=.5,
xlabel={Training time $(t)$},
ylabel={Misclassification rate},
ytick={0,0.1,0.2,0.3,0.4,0.5},
grid=major,
scaled ticks=true,
]
\addplot[color=blue!60!white,line width=1pt] coordinates{
(0.000000,0.482031)(6.000000,0.211094)(12.000000,0.107344)(18.000000,0.064687)(24.000000,0.046250)(30.000000,0.031094)(36.000000,0.023125)(42.000000,0.017500)(48.000000,0.014375)(54.000000,0.012344)(60.000000,0.010313)(66.000000,0.008906)(72.000000,0.007656)(78.000000,0.006562)(84.000000,0.005938)(90.000000,0.005469)(96.000000,0.005000)(102.000000,0.004687)(108.000000,0.004531)(114.000000,0.004063)(120.000000,0.003750)(126.000000,0.003594)(132.000000,0.003125)(138.000000,0.002969)(144.000000,0.002656)(150.000000,0.002656)(156.000000,0.002344)(162.000000,0.002344)(168.000000,0.002188)(174.000000,0.002031)(180.000000,0.002031)(186.000000,0.002031)(192.000000,0.001875)(198.000000,0.001875)(204.000000,0.001875)(210.000000,0.001875)(216.000000,0.001719)(222.000000,0.001719)(228.000000,0.001719)(234.000000,0.001719)(240.000000,0.001563)(246.000000,0.001563)(252.000000,0.001563)(258.000000,0.001563)(264.000000,0.001563)(270.000000,0.001563)(276.000000,0.001563)(282.000000,0.001563)(288.000000,0.001563)(294.000000,0.001563)
};
\addlegendentry{{ Simulation: training performance }};
\addplot+[only marks,mark = x,color=blue!60!white] coordinates{
(0.000000,0.500000)(6.000000,0.228091)(12.000000,0.109439)(18.000000,0.063533)(24.000000,0.042920)(30.000000,0.031943)(36.000000,0.025256)(42.000000,0.020767)(48.000000,0.017539)(54.000000,0.015102)(60.000000,0.013199)(66.000000,0.011674)(72.000000,0.010429)(78.000000,0.009398)(84.000000,0.008534)(90.000000,0.007802)(96.000000,0.007177)(102.000000,0.006639)(108.000000,0.006172)(114.000000,0.005765)(120.000000,0.005407)(126.000000,0.005091)(132.000000,0.004809)(138.000000,0.004558)(144.000000,0.004333)(150.000000,0.004129)(156.000000,0.003945)(162.000000,0.003777)(168.000000,0.003623)(174.000000,0.003482)(180.000000,0.003352)(186.000000,0.003232)(192.000000,0.003121)(198.000000,0.003017)(204.000000,0.002920)(210.000000,0.002829)(216.000000,0.002744)(222.000000,0.002664)(228.000000,0.002589)(234.000000,0.002519)(240.000000,0.002452)(246.000000,0.002390)(252.000000,0.002331)(258.000000,0.002276)(264.000000,0.002224)(270.000000,0.002176)(276.000000,0.002131)(282.000000,0.002090)(288.000000,0.002052)(294.000000,0.002018)
};
\addlegendentry{{ Theory: training performance }};
\addplot[densely dashed,color=red!60!white,line width=1pt] coordinates{
(0.000000,0.491875)(6.000000,0.258594)(12.000000,0.146250)(18.000000,0.101250)(24.000000,0.078594)(30.000000,0.069062)(36.000000,0.060625)(42.000000,0.056875)(48.000000,0.053594)(54.000000,0.052969)(60.000000,0.051875)(66.000000,0.050937)(72.000000,0.049688)(78.000000,0.049375)(84.000000,0.048906)(90.000000,0.048594)(96.000000,0.047813)(102.000000,0.047500)(108.000000,0.047813)(114.000000,0.048281)(120.000000,0.047813)(126.000000,0.048281)(132.000000,0.048750)(138.000000,0.048594)(144.000000,0.049063)(150.000000,0.049219)(156.000000,0.049844)(162.000000,0.049531)(168.000000,0.049063)(174.000000,0.050156)(180.000000,0.050313)(186.000000,0.050781)(192.000000,0.051406)(198.000000,0.051719)(204.000000,0.052031)(210.000000,0.052187)(216.000000,0.051875)(222.000000,0.052031)(228.000000,0.052344)(234.000000,0.052500)(240.000000,0.052344)(246.000000,0.052500)(252.000000,0.052656)(258.000000,0.052656)(264.000000,0.053281)(270.000000,0.053906)(276.000000,0.053750)(282.000000,0.054375)(288.000000,0.054531)(294.000000,0.054844)
};
\addlegendentry{{ Simulation: generalization performance }};
\addplot+[only marks,mark = o,color=red!60!white] coordinates{
(0.000000,0.500000)(6.000000,0.259797)(12.000000,0.149456)(18.000000,0.103028)(24.000000,0.081019)(30.000000,0.069156)(36.000000,0.062103)(42.000000,0.057609)(48.000000,0.054610)(54.000000,0.052548)(60.000000,0.051108)(66.000000,0.050101)(72.000000,0.049404)(78.000000,0.048937)(84.000000,0.048644)(90.000000,0.048486)(96.000000,0.048433)(102.000000,0.048463)(108.000000,0.048559)(114.000000,0.048710)(120.000000,0.048905)(126.000000,0.049135)(132.000000,0.049396)(138.000000,0.049681)(144.000000,0.049988)(150.000000,0.050311)(156.000000,0.050649)(162.000000,0.051000)(168.000000,0.051362)(174.000000,0.051732)(180.000000,0.052111)(186.000000,0.052497)(192.000000,0.052889)(198.000000,0.053287)(204.000000,0.053689)(210.000000,0.054096)(216.000000,0.054507)(222.000000,0.054921)(228.000000,0.055339)(234.000000,0.055760)(240.000000,0.056184)(246.000000,0.056610)(252.000000,0.057038)(258.000000,0.057468)(264.000000,0.057900)(270.000000,0.058332)(276.000000,0.058764)(282.000000,0.059196)(288.000000,0.059627)(294.000000,0.060056)
};
\addlegendentry{{ Theory: generalization performance }};
\begin{scope}
\spy[black!50!white,size=1.6cm,circle,connect spies,magnification=2] on (1,0.4) in node [fill=none] at (4,1.5);
\end{scope}
\end{axis}
\end{tikzpicture}
\caption{Training and generalization performance for $\boldsymbol{\mu} = [2;\mathbf{0}_{p-1}]$, $p=256$, $n=512$, $\sigma^2 =0.1$, $\alpha = 0.01$ and $c_1 = c_2 = 1/2$. Results obtained by averaging over $50$ runs.}
\label{fig:train-and-general-perf}
\end{center}
\vskip -0.1in
\end{figure}
In Figure~\ref{fig:train-and-general-perf} we compare finite dimensional simulations with the theoretical results from Theorems~\ref{theo:generalize-perf}~and~\ref{theo:training-perf} and observe a very close match, already for not too large $n,p$. As $t$ grows large, the generalization error first drops rapidly together with the training error, then goes up, although slightly, while the training error continues to decrease to zero. This is because the classifier starts to over-fit the training data $\mathbf{X}$ and performs badly on unseen data. To avoid over-fitting, one effective approach is to apply regularization strategies \cite{bishop2007pattern}, for example, to ``early stop'' the training process (at $t=100$ for instance in the setting of Figure~\ref{fig:train-and-general-perf}). However, this introduces a new hyperparameter, the optimal stopping time $t_{opt}$, which is of crucial importance for the network performance and is often tuned through cross-validation in practice. Theorems~\ref{theo:generalize-perf} and~\ref{theo:training-perf} tell us that the training and generalization performances, although random themselves, have asymptotically deterministic behaviors described by $(E_*, V_*)$ and $(E, V)$, respectively; since $E, V$ are functions of $t$ through $f_t(z) \equiv \exp(-\alpha t z)$, this allows for a deeper understanding of the choice of $t_{opt}$.
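The over-fitting phenomenon, and hence the existence of a finite $t_{opt}$, is already visible in a direct simulation of the gradient descent dynamics. The sketch below tracks the held-out error along training in a setting mimicking Figure~\ref{fig:train-and-general-perf} (the held-out set size and seed are our own choices):

```python
import numpy as np

rng = np.random.default_rng(5)
p, n, alpha = 256, 512, 0.01
mu = np.zeros(p); mu[0] = 2.0

def sample(m):                                   # balanced mixture: x_i = y_i * mu + noise
    y = np.concatenate([-np.ones(m // 2), np.ones(m // 2)])
    return np.outer(mu, y) + rng.standard_normal((p, m)), y

X, y = sample(n)
Xt, yt = sample(2048)                            # held-out set
w = np.sqrt(0.1 / p) * rng.standard_normal(p)    # sigma^2 = 0.1

test_err = []
for t in range(300):
    w = w + (alpha / n) * X @ (y - X.T @ w)      # one gradient step
    test_err.append(np.mean(np.sign(Xt.T @ w) != yt))
t_opt = int(np.argmin(test_err))
print(t_opt, test_err[t_opt], test_err[-1])      # error dips, then slowly creeps back up
```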
Nonetheless, the contour-integral expressions in Theorems~\ref{theo:generalize-perf} and~\ref{theo:training-perf} are neither easily analyzable nor interpretable. To gain more insight, we shall rewrite $(E, V)$ and $(E_*, V_*)$ in a more readable form. First, note from Figure~\ref{fig:eigvenvalue-distribution-XX} that the matrix $\frac1n \mathbf{X}\X^{\sf T}$ has (possibly) two types of eigenvalues: those inside the \emph{main bulk} (between $\lambda_- \equiv (1-\sqrt{c})^2$ and $\lambda_+ \equiv (1+\sqrt{c})^2$) of the Marčenko–Pastur distribution
\begin{equation}\label{eq:MP-distribution}
\nu(dx) = \frac{ \sqrt{ (x- \lambda_-)^+ (\lambda_+ - x)^+ }}{2\pi c x} dx + \left( 1- \frac1c\right)^+ \delta(x)
\end{equation}
and a (possibly) isolated one\footnote{The existence (or absence) of outlying eigenvalues of the sample covariance matrix has been largely investigated in the random matrix literature and is related to the so-called ``spiked random matrix model''. We refer the reader to \cite{benaych2011eigenvalues} for an introduction. The information carried by these ``isolated'' eigenpairs also marks an important technical difference to \cite{advani2017high}, in which $\mathbf{X}$ is only composed of noise terms.} lying away from $[\lambda_-,\lambda_+]$, that shall be treated separately. We rewrite the path $\gamma$ (that contains all eigenvalues of $\frac1n \mathbf{X}\X^{\sf T}$) as the sum of two paths $\gamma_b$ and $\gamma_s$, circling around the main bulk and the isolated eigenvalue (if any), respectively. To handle the integral over $\gamma_b$, we use the fact that for any nonzero $\lambda \in \mathbb{R}$, the limit $\lim_{z \in \mathbb{C}^+ \to \lambda} m(z) \equiv \check m(\lambda)$ exists \cite{silverstein1995analysis} and follow the idea of \cite{bai2008clt} by choosing the contour $\gamma_b$ to be a rectangle with sides parallel to the axes, intersecting the real axis at $0$ and $\lambda_+$, with horizontal sides at distance $\varepsilon \to 0$ from the real axis, so as to split the contour integral into four single integrals of $\check m(x)$. The integral over $\gamma_s$ can be computed with the residue theorem. This together leads to the following expressions of $(E, V)$ and $(E_*, V_*)$\footnote{We refer the reader to Section~\ref{sm:detailed-deduction} in the Supplementary Material for a detailed exposition of Theorems~\ref{theo:generalize-perf}~and~\ref{theo:training-perf}, as well as of \eqref{eq:E}-\eqref{eq:Var-star}.}
\begin{align}
E &= \int \frac{ 1-f_t(x) }{x} \mu(dx) \label{eq:E}\\
%
V &= \frac{\|\boldsymbol{\mu}\|^2 + c}{\|\boldsymbol{\mu}\|^2} \int \frac{ (1-f_t(x))^2 \mu(dx)}{x^2} + \sigma^2 \int f_t^2(x) \nu(dx) \label{eq:Var} \\
%
%
%
E_* &= \frac{\|\boldsymbol{\mu}\|^2 + c}{\|\boldsymbol{\mu}\|^2} \int \frac{ 1-f_t(x) }{x} \mu(dx) \label{eq:E-star} \\
%
V_* &= \frac{\|\boldsymbol{\mu}\|^2 + c}{\|\boldsymbol{\mu}\|^2} \int \frac{ (1-f_t(x))^2 \mu(dx)}{x} + \sigma^2 \int x f_t^2(x) \nu(dx) \label{eq:Var-star}
\end{align}
where we recall $f_t(x) = \exp(-\alpha t x)$, with $\nu(dx)$ given by \eqref{eq:MP-distribution}, and denote the measure
\begin{equation}\label{eq:definition-measure}
\mu(dx) \equiv \frac{\sqrt{(x-\lambda_-)^+(\lambda_+ -x)^+}}{ 2\pi(\lambda_s - x) } dx + \frac{ (\|\boldsymbol{\mu}\|^4-c)^+}{\|\boldsymbol{\mu}\|^2} \delta_{\lambda_s}(x)
\end{equation}
as well as
\begin{equation}\label{eq:lambda_s}
\lambda_s = c+1 +\| \boldsymbol{\mu}\|^2 + \frac{c}{\| \boldsymbol{\mu}\|^2}\ge (\sqrt{c}+1)^2
\end{equation}
with equality if and only if $\| \boldsymbol{\mu}\|^2 = \sqrt{c}$.
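The spike location \eqref{eq:lambda_s} is easy to confirm numerically: in the setting of Figure~\ref{fig:eigvenvalue-distribution-XX} (where $\|\boldsymbol{\mu}\|^2 = 2.25 > \sqrt{c}$), the largest eigenvalue of $\frac1n \mathbf{X}\X^{\sf T}$ detaches from the bulk and concentrates around $\lambda_s$. A rough sketch (seed and tolerance are our own choices):

```python
import numpy as np

rng = np.random.default_rng(6)
p, n = 512, 1024
c = p / n
mu = np.zeros(p); mu[0] = 1.5                    # ||mu||^2 = 2.25 > sqrt(c): a spike detaches
y = np.concatenate([-np.ones(n // 2), np.ones(n // 2)])
X = np.outer(mu, y) + rng.standard_normal((p, n))

lam_max = np.linalg.eigvalsh(X @ X.T / n)[-1]    # largest empirical eigenvalue
m2 = float(mu @ mu)
lam_s = c + 1 + m2 + c / m2                      # = 3.9722..., cf. Figure 2
bulk_edge = (1 + np.sqrt(c)) ** 2                # lambda_+ ~ 2.914
print(lam_max, lam_s, bulk_edge)
```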
A first remark on the expressions \eqref{eq:E}-\eqref{eq:Var-star} is that $E_*$ differs from $E$ only by a factor of $\frac{\|\boldsymbol{\mu}\|^2+c}{\|\boldsymbol{\mu}\|^2}$. Also, both $V$ and $V_*$ are the sum of two parts: a first part that strongly depends on $\boldsymbol{\mu}$ and a second that is independent of it. One thus deduces for $\|\boldsymbol{\mu}\| \to 0$ that $E \to 0$ and
\[
V \to \int \frac{ (1-f_t(x))^2 }{x^2} \rho(dx) + \sigma^2 \int f_t^2(x) \nu(dx) > 0
\]
with $\rho(dx) \equiv \frac{\sqrt{(x-\lambda_-)^+(\lambda_+ -x)^+}}{ 2\pi (c+1) } dx$ and therefore the generalization performance goes to $Q(0) = 0.5$. On the other hand, for $\|\boldsymbol{\mu}\| \to \infty$, one has $ \frac{E}{\sqrt{V}} \to \infty $ and hence the classifier makes perfect predictions.
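The first limit is easily illustrated by simulation: training on pure noise ($\boldsymbol{\mu} = \mathbf{0}$) yields a classifier whose test error stays at chance level, in agreement with $Q(0) = 0.5$ (all simulation parameters below are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
p, n, alpha = 128, 256, 0.01
y = np.concatenate([-np.ones(n // 2), np.ones(n // 2)])
X = rng.standard_normal((p, n))                  # mu = 0: features carry no class information
w = rng.standard_normal(p) / np.sqrt(p)
for _ in range(200):
    w = w + (alpha / n) * X @ (y - X.T @ w)

Xt = rng.standard_normal((p, 20000))             # test points from C_2 (whose mean mu = 0)
err = np.mean(w @ Xt < 0)                        # estimates P(w^T x < 0 | C_2) = Q(0) = 0.5
print(err)
```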
In a more general context (i.e., for Gaussian mixture models with generic means and covariances as investigated in \cite{benaych2016spectral}, and obviously for practical datasets), there may be more than one eigenvalue of $\frac1n \mathbf{X}\X^{\sf T}$ lying outside the main bulk, which itself may not be limited to the interval $[\lambda_-,\lambda_+]$. In this case, the function $m(z)$, instead of being explicitly given by \eqref{eq:MP-equation}, is determined through more elaborate (often implicit) formulations. While handling such generic models remains technically feasible within the present analysis scheme, the results are much less intuitive. Similar objectives cannot be achieved within the framework of \cite{advani2017high}, which lends additional practical interest to our results and the proposed analysis framework.
\begin{figure}[htb]
\vskip 0.1in
\begin{center}
\begin{tikzpicture}[font=\footnotesize,spy using outlines]
\renewcommand{\axisdefaulttryminticks}{4}
\pgfplotsset{every major grid/.style={densely dashed}}
\tikzstyle{every axis y label}+=[yshift=-10pt]
\tikzstyle{every axis x label}+=[yshift=5pt]
\pgfplotsset{every axis legend/.append style={cells={anchor=west},fill=white, at={(0.98,0.98)}, anchor=north east, font=\footnotesize }}
\begin{axis}[
width=\columnwidth,
height=0.7\columnwidth,
xmin=-.1,
ymin=0,
xmax=4,
ymax=.9,
yticklabels={},
bar width=4pt,
grid=major,
ymajorgrids=false,
scaled ticks=true,
]
\addplot+[ybar,mark=none,color=white,fill=blue!60!white,area legend] coordinates{
(0.000000,0.021551)(0.090628,0.797386)(0.181256,0.862039)(0.271885,0.797386)(0.362513,0.689631)(0.453141,0.646529)(0.543769,0.581876)(0.634397,0.560325)(0.725026,0.495672)(0.815654,0.431019)(0.906282,0.474121)(0.996910,0.387917)(1.087538,0.366366)(1.178166,0.366366)(1.268795,0.323264)(1.359423,0.323264)(1.450051,0.301714)(1.540679,0.301714)(1.631307,0.280163)(1.721936,0.237061)(1.812564,0.215510)(1.903192,0.215510)(1.993820,0.215510)(2.084448,0.193959)(2.175077,0.172408)(2.265705,0.172408)(2.356333,0.150857)(2.446961,0.129306)(2.537589,0.107755)(2.628218,0.086204)(2.718846,0.064653)(2.809474,0.043102)(2.900102,0.000000)(2.990730,0.000000)(3.081359,0.000000)(3.171987,0.000000)(3.262615,0.000000)(3.353243,0.000000)(3.443871,0.000000)(3.534499,0.000000)(3.625128,0.000000)(3.715756,0.000000)(3.806384,0.000000)(3.897012,0.021551)(3.987640,0.000000)(4.078269,0.000000)(4.168897,0.000000)(4.259525,0.000000)(4.350153,0.000000)(4.440781,0.000000)
};
\addlegendentry{{Eigenvalues of $\frac1n \mathbf{X}\X^{\sf T}$}};
\addplot[color=red!60!white,line width=1.5pt] coordinates{
(0.085786, 0.000000)(0.100000, 0.636614)(0.114213, 0.786276)(0.128426, 0.854235)(0.142639, 0.885830)(0.156852, 0.898331)(0.171066, 0.899981)(0.185279, 0.895191)(0.199492, 0.886499)(0.213705, 0.875437)(0.227918, 0.862966)(0.242132, 0.849700)(0.256345, 0.836043)(0.270558, 0.822261)(0.284771, 0.808529)(0.298984, 0.794965)(0.313198, 0.781644)(0.327411, 0.768615)(0.341624, 0.755908)(0.355837, 0.743539)(0.370050, 0.731514)(0.384264, 0.719834)(0.398477, 0.708495)(0.412690, 0.697490)(0.426903, 0.686811)(0.441116, 0.676446)(0.455330, 0.666386)(0.469543, 0.656618)(0.483756, 0.647132)(0.497969, 0.637915)(0.512182, 0.628957)(0.526396, 0.620248)(0.540609, 0.611776)(0.554822, 0.603531)(0.569035, 0.595503)(0.583248, 0.587685)(0.597462, 0.580065)(0.611675, 0.572637)(0.625888, 0.565393)(0.640101, 0.558323)(0.654315, 0.551422)(0.668528, 0.544682)(0.682741, 0.538097)(0.696954, 0.531660)(0.711167, 0.525367)(0.725381, 0.519210)(0.739594, 0.513184)(0.753807, 0.507286)(0.768020, 0.501509)(0.782233, 0.495849)(0.796447, 0.490302)(0.810660, 0.484863)(0.824873, 0.479530)(0.839086, 0.474296)(0.853299, 0.469161)(0.867513, 0.464119)(0.881726, 0.459167)(0.895939, 0.454303)(0.910152, 0.449523)(0.924365, 0.444824)(0.938579, 0.440204)(0.952792, 0.435661)(0.967005, 0.431191)(0.981218, 0.426792)(0.995431, 0.422462)(1.009645, 0.418199)(1.023858, 0.414000)(1.038071, 0.409864)(1.052284, 0.405788)(1.066497, 0.401771)(1.080711, 0.397811)(1.094924, 0.393906)(1.109137, 0.390054)(1.123350, 0.386255)(1.137563, 0.382505)(1.151777, 0.378805)(1.165990, 0.375151)(1.180203, 0.371544)(1.194416, 0.367982)(1.208629, 0.364463)(1.222843, 0.360986)(1.237056, 0.357550)(1.251269, 0.354153)(1.265482, 0.350796)(1.279695, 0.347475)(1.293909, 0.344192)(1.308122, 0.340943)(1.322335, 0.337730)(1.336548, 0.334549)(1.350761, 0.331402)(1.364975, 0.328286)(1.379188, 0.325201)(1.393401, 0.322145)(1.407614, 0.319119)(1.421827, 0.316121)(1.436041, 0.313151)(1.450254, 0.310207)(1.464467, 0.307290)(1.478680, 0.304398)(1.492893, 
0.301530)(1.507107, 0.298687)(1.521320, 0.295866)(1.535533, 0.293068)(1.549746, 0.290292)(1.563959, 0.287538)(1.578173, 0.284804)(1.592386, 0.282090)(1.606599, 0.279396)(1.620812, 0.276721)(1.635025, 0.274064)(1.649239, 0.271425)(1.663452, 0.268803)(1.677665, 0.266198)(1.691878, 0.263610)(1.706091, 0.261037)(1.720305, 0.258479)(1.734518, 0.255936)(1.748731, 0.253407)(1.762944, 0.250892)(1.777157, 0.248390)(1.791371, 0.245901)(1.805584, 0.243425)(1.819797, 0.240960)(1.834010, 0.238506)(1.848223, 0.236064)(1.862437, 0.233632)(1.876650, 0.231209)(1.890863, 0.228797)(1.905076, 0.226393)(1.919289, 0.223999)(1.933503, 0.221612)(1.947716, 0.219233)(1.961929, 0.216862)(1.976142, 0.214497)(1.990355, 0.212139)(2.004569, 0.209787)(2.018782, 0.207440)(2.032995, 0.205098)(2.047208, 0.202761)(2.061421, 0.200428)(2.075635, 0.198098)(2.089848, 0.195772)(2.104061, 0.193448)(2.118274, 0.191127)(2.132487, 0.188807)(2.146701, 0.186488)(2.160914, 0.184170)(2.175127, 0.181852)(2.189340, 0.179533)(2.203553, 0.177213)(2.217767, 0.174892)(2.231980, 0.172568)(2.246193, 0.170242)(2.260406, 0.167911)(2.274619, 0.165577)(2.288833, 0.163238)(2.303046, 0.160893)(2.317259, 0.158541)(2.331472, 0.156182)(2.345685, 0.153816)(2.359899, 0.151440)(2.374112, 0.149055)(2.388325, 0.146658)(2.402538, 0.144250)(2.416752, 0.141829)(2.430965, 0.139394)(2.445178, 0.136944)(2.459391, 0.134477)(2.473604, 0.131992)(2.487818, 0.129487)(2.502031, 0.126962)(2.516244, 0.124413)(2.530457, 0.121840)(2.544670, 0.119240)(2.558884, 0.116610)(2.573097, 0.113949)(2.587310, 0.111254)(2.601523, 0.108521)(2.615736, 0.105747)(2.629950, 0.102929)(2.644163, 0.100061)(2.658376, 0.097141)(2.672589, 0.094161)(2.686802, 0.091115)(2.701016, 0.087997)(2.715229, 0.084798)(2.729442, 0.081507)(2.743655, 0.078113)(2.757868, 0.074601)(2.772082, 0.070952)(2.786295, 0.067145)(2.800508, 0.063149)(2.814721, 0.058926)(2.828934, 0.054422)(2.843148, 0.049560)(2.857361, 0.044221)(2.871574, 0.038204)(2.885787, 0.031119)(2.900000, 0.021952)(2.914214, 
0.000000)
};
\addlegendentry{{Marčenko–Pastur distribution}};
\addplot+[only marks,mark=x,color=red!60!white,line width=1.5pt] coordinates{(3.9722,0)};
\addlegendentry{{ Theory: $\lambda_s$ given in \eqref{eq:lambda_s} }};
\begin{scope}
\spy[black!50!white,size=1.8cm,circle,connect spies,magnification=5] on (6.55,0.08) in node [fill=none] at (5,1.8);
\end{scope}
\end{axis}
\end{tikzpicture}
\caption{Eigenvalue distribution of $\frac1n \mathbf{X}\X^{\sf T}$ for $\boldsymbol{\mu} = [1.5;\mathbf{0}_{p-1}]$, $p=512$, $n=1\,024$ and $c_1 = c_2 = 1/2$.}
\label{fig:eigvenvalue-distribution-XX}
\end{center}
\vskip -0.1in
\end{figure}
\section{Discussions}
\label{sec:discuss}
In this section, a careful inspection of \eqref{eq:E} and~\eqref{eq:Var} leads to discussions from several different aspects. First of all, recall that the generalization performance is given by $ Q\left( \frac{\boldsymbol{\mu}^{\sf T} \mathbf{w}(t)}{ \| \mathbf{w}(t) \| } \right)$, where the term $\frac{\boldsymbol{\mu}^{\sf T} \mathbf{w}(t)}{ \| \mathbf{w}(t) \| }$ measures the alignment between $\mathbf{w}(t)$ and $\boldsymbol{\mu}$; the best possible generalization performance is therefore $Q(\|\boldsymbol{\mu}\|)$. Nonetheless, this ``best'' performance can never be achieved as long as $p/n \to c > 0$, as described in the following remark.
\begin{Remark}[Optimal Generalization Performance]
Note that, by the Cauchy–Schwarz inequality and the fact that $\int \mu(dx) = \| \boldsymbol{\mu} \|^2$ from \eqref{eq:definition-measure}, one has
\[
E^2 \le \int \frac{(1-f_t(x))^2}{x^2} d\mu(x) \cdot \int d\mu(x) \le \frac{\|\boldsymbol{\mu}\|^4}{\|\boldsymbol{\mu}\|^2+c} V
\]
with equality in the right-most inequality if and only if the initialization variance $\sigma^2 = 0$. One thus concludes that $E/\sqrt{V} \le \|\boldsymbol{\mu}\|^2/\sqrt{\|\boldsymbol{\mu}\|^2 + c}$, so that the best generalization performance (lowest misclassification rate) is $Q (\|\boldsymbol{\mu}\|^2/\sqrt{\|\boldsymbol{\mu}\|^2 + c})$, attained only when $\sigma^2 = 0$.
\label{rem:optimal-generalization-perf}
\end{Remark}
The above remark is of particular interest because, for a given task (thus with $p$ and $\boldsymbol{\mu}$ fixed), it allows one to compute the \emph{minimum} number $n$ of training samples required to reach a prescribed classification accuracy.
As a side remark, note that in the expression of $E/\sqrt{V}$ the initialization variance $\sigma^2$ only appears in $V$, meaning that random initialization impairs the generalization performance of the network. As such, one should initialize with $\sigma^2$ very close, but not equal, to zero, so as to obtain symmetry breaking between hidden units \cite{goodfellow2016deeplearning} while mitigating the performance drop caused by large $\sigma^2$.
In Figure~\ref{fig:optimal-perf-and-time-vs-sigma2} we plot the optimal generalization performance with the corresponding optimal stopping time as functions of $\sigma^2$, showing that small initialization helps training in terms of both accuracy and efficiency.
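The minimum-$n$ computation suggested by Remark~\ref{rem:optimal-generalization-perf} can be sketched numerically. The following Python snippet is a minimal sketch under hypothetical values ($\|\boldsymbol{\mu}\|^2 = 4$, $p = 256$, a $5\%$ target error): it implements $Q$ via the complementary error function and scans for the smallest $n$ such that the optimal error $Q\big(\|\boldsymbol{\mu}\|^2/\sqrt{\|\boldsymbol{\mu}\|^2 + p/n}\big)$ falls below the target.

```python
from math import erfc, sqrt

def Q(x):
    # Gaussian tail function Q(x) = P(N(0,1) > x)
    return 0.5 * erfc(x / sqrt(2.0))

def optimal_error(mu_norm2, c):
    # Best achievable misclassification rate (Remark: optimal generalization
    # performance), attained only for sigma^2 = 0.
    return Q(mu_norm2 / sqrt(mu_norm2 + c))

def min_samples(mu_norm2, p, target_error):
    # Smallest n such that Q(||mu||^2 / sqrt(||mu||^2 + p/n)) <= target_error;
    # returns None if the target is unreachable even in the c -> 0 limit Q(||mu||).
    if Q(sqrt(mu_norm2)) > target_error:
        return None
    n = 1
    while optimal_error(mu_norm2, p / n) > target_error:
        n += 1
    return n

# Hypothetical task: ||mu||^2 = 4, p = 256, target misclassification rate 5%
print(min_samples(4.0, 256, 0.05))
```

Since the optimal error decreases monotonically in $n$, a bisection search would be faster; the linear scan is kept only for clarity.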
\begin{figure}[tbh]
\vskip 0.1in
\begin{center}
\begin{minipage}[b]{0.48\columnwidth}%
\begin{tikzpicture}[font=\LARGE,scale=0.5]
\renewcommand{\axisdefaulttryminticks}{4}
\pgfplotsset{every major grid/.style={densely dashed}}
\tikzstyle{every axis y label}+=[yshift=-10pt]
\tikzstyle{every axis x label}+=[yshift=5pt]
\pgfplotsset{every axis legend/.append style={cells={anchor=west},fill=white, at={(0.02,0.98)}, anchor=north west, font=\LARGE }}
\begin{axis}[
xmode=log,
xmin=0.01,
ymin=0.03,
xmax=1,
ymax=0.08,
xlabel={$\sigma^2$},
grid=major,
scaled ticks=true,
]
\addplot[mark=o,color=red!60!white,line width=2pt] coordinates{
(0.010000,0.033928)(0.012589,0.034610)(0.015849,0.035407)(0.019953,0.036334)(0.025119,0.037416)(0.031623,0.038674)(0.039811,0.040131)(0.050119,0.041812)(0.063096,0.043740)(0.079433,0.045930)(0.100000,0.048388)(0.125893,0.051103)(0.158489,0.054040)(0.199526,0.057145)(0.251189,0.060351)(0.316228,0.063587)(0.398107,0.066787)(0.501187,0.069890)(0.630957,0.072840)(0.794328,0.075589)(1.000000,0.078101)
};
\addlegendentry{{Optimal error rates}};
\end{axis}
\end{tikzpicture}
\end{minipage}%
\hfill{}
\begin{minipage}[b]{0.48\columnwidth}%
\begin{tikzpicture}[font=\LARGE,scale=0.5]
\renewcommand{\axisdefaulttryminticks}{4}
\pgfplotsset{every major grid/.style={densely dashed}}
\tikzstyle{every axis y label}+=[yshift=-10pt]
\tikzstyle{every axis x label}+=[yshift=5pt]
\pgfplotsset{every axis legend/.append style={cells={anchor=west},fill=white, at={(0.02,0.98)}, anchor=north west, font=\LARGE }}
\begin{axis}[
xmode=log,
xmin=0.01,
ymin=0,
xmax=1,
ymax=600,
xlabel={$\sigma^2$},
grid=major,
scaled ticks=true,
]
\addplot[mark=o,color=blue!60!white,line width=2pt] coordinates{
(0.010000,41.000000)(0.012589,44.000000)(0.015849,48.000000)(0.019953,51.000000)(0.025119,55.000000)(0.031623,60.000000)(0.039811,65.000000)(0.050119,71.000000)(0.063096,78.000000)(0.079433,87.000000)(0.100000,98.000000)(0.125893,112.000000)(0.158489,129.000000)(0.199526,152.000000)(0.251189,180.000000)(0.316228,214.000000)(0.398107,256.000000)(0.501187,306.000000)(0.630957,365.000000)(0.794328,435.000000)(1.000000,516.000000)
};
\addlegendentry{{Optimal stopping time}};
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{ Optimal performance and corresponding stopping time as functions of $\sigma^2$, with $c = 1/2$, $\|\boldsymbol{\mu}\|^2=4$ and $\alpha=0.01$.}
\label{fig:optimal-perf-and-time-vs-sigma2}
\end{center}
\vskip -0.1in
\end{figure}
Although the integrals in \eqref{eq:E} and \eqref{eq:Var} do not admit nice closed forms, note that, for $t$ close to $0$, a Taylor expansion of $f_t(x) \equiv \exp(-\alpha tx)$ around $\alpha t x = 0$ yields more interpretable, integral-free forms of $E$ and $V$, as presented in the following subsection.
\subsection{Approximation for $t$ close to $0$}
\label{subsec:t-close-to-0}
Taking $t = 0$, one has $f_t(x) = 1$ and therefore $E = 0$, $V= \sigma^2 \int \nu(dx)= \sigma^2$, with $\nu(dx)$ the Marčenko–Pastur distribution given in \eqref{eq:MP-distribution}. As a consequence, at the start of training the generalization performance is $Q(0) = 0.5$ for $\sigma^2 \neq 0$: the classifier makes random guesses.
For $t$ not equal but close to $0$, the Taylor expansion of $f_t(x) \equiv \exp(-\alpha tx)$ around $\alpha t x=0$ gives
\[
f_t(x) \equiv \exp(-\alpha t x) = 1 -\alpha t x + O(\alpha^2 t^2 x^2).
\]
Making the substitution $x = 1+c-2\sqrt{c} \cos\theta$ and with the fact that $\int_0^{\pi} \frac{ \sin^2\theta }{ p + q \cos\theta } d\theta = \frac{p \pi}{q^2} \left( 1 - \sqrt{1-q^2/p^2 } \right)$ (see for example 3.644-5 in \cite{gradshteyn2014table}), one gets $E = \tilde{E} + O(\alpha^2 t^2)$ and $V = \tilde{V}+ O(\alpha^2 t^2)$, where
\[
\tilde{E} \equiv \frac{\alpha t}{2} g(\boldsymbol{\mu},c) + \frac{ (\|\boldsymbol{\mu} \|^4 -c)^+ }{\| \boldsymbol{\mu} \|^2} \alpha t = \| \boldsymbol{\mu} \|^2 \alpha t
\]
\begin{align*}
\tilde{V} &\equiv \frac{\|\boldsymbol{\mu}\|^2 + c}{\|\boldsymbol{\mu}\|^2} \frac{ (\|\boldsymbol{\mu} \|^4 - c)^+}{\| \boldsymbol{\mu} \|^2} \alpha^2 t^2 + \frac{\|\boldsymbol{\mu}\|^2 + c}{\|\boldsymbol{\mu}\|^2} \frac{\alpha^2 t^2}2 g(\boldsymbol{\mu},c) \\
& + \sigma^2 (1+c) \alpha^2 t^2 - 2\sigma^2 \alpha t + \left(1-\frac1c\right)^+ \sigma^2 \\
&+ \frac{\sigma^2}{2c} \left( 1+c - (1+\sqrt{c}) |1-\sqrt{c}| \right) \\
&= (\| \boldsymbol{\mu}\|^2 + c + c\sigma^2) \alpha^2 t^2 + \sigma^2 ( \alpha t - 1 )^2
\end{align*}
with $g(\boldsymbol{\mu},c) \equiv \| \boldsymbol{\mu}\|^2 + \frac{c}{\| \boldsymbol{\mu}\|^2} - \left( \| \boldsymbol{\mu}\| + \frac{\sqrt{c}}{\| \boldsymbol{\mu}\|} \right) \left| \| \boldsymbol{\mu}\| - \frac{ \sqrt{c} }{\| \boldsymbol{\mu}\|} \right| $ and consequently $\frac12 g(\boldsymbol{\mu},c) + \frac{(\|\boldsymbol{\mu} \|^4 -c)^+}{\| \boldsymbol{\mu} \|^2} = \| \boldsymbol{\mu}\|^2$. It is interesting to note from the above calculation that, although $E$ and $V$ seem to behave differently\footnote{This phenomenon has been widely observed in random matrix theory and is referred to as a ``phase transition'' \cite{baik2005phase}.} depending on whether $\| \boldsymbol{\mu}\|^2 > \sqrt{c}$ or $c>1$, this is in fact not the case: the extra terms arising for $\| \boldsymbol{\mu}\|^2 > \sqrt{c}$ (or $c>1$) exactly compensate for the singularity of the integral, so that the generalization performance of the classifier is a smooth function of both $\| \boldsymbol{\mu}\|^2 $ and $c$.
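For the reader's convenience, the identity $\frac12 g(\boldsymbol{\mu},c) + \frac{(\|\boldsymbol{\mu} \|^4 -c)^+}{\| \boldsymbol{\mu} \|^2} = \| \boldsymbol{\mu}\|^2$ follows from a case-by-case check: since $\left( \| \boldsymbol{\mu}\| + \frac{\sqrt{c}}{\| \boldsymbol{\mu}\|} \right) \left| \| \boldsymbol{\mu}\| - \frac{ \sqrt{c} }{\| \boldsymbol{\mu}\|} \right| = \left| \| \boldsymbol{\mu}\|^2 - \frac{c}{\| \boldsymbol{\mu}\|^2} \right|$, one has $g(\boldsymbol{\mu},c) = \frac{2c}{\| \boldsymbol{\mu}\|^2}$ for $\| \boldsymbol{\mu}\|^2 \ge \sqrt{c}$ and $g(\boldsymbol{\mu},c) = 2\| \boldsymbol{\mu}\|^2$ otherwise, so that

```latex
\[
\frac12 g(\boldsymbol{\mu},c) + \frac{(\|\boldsymbol{\mu} \|^4 -c)^+}{\| \boldsymbol{\mu} \|^2} =
\begin{cases}
\dfrac{c}{\| \boldsymbol{\mu}\|^2} + \dfrac{\|\boldsymbol{\mu} \|^4 -c}{\| \boldsymbol{\mu} \|^2} = \| \boldsymbol{\mu}\|^2, & \| \boldsymbol{\mu}\|^2 \ge \sqrt{c} \\[2mm]
\| \boldsymbol{\mu}\|^2 + 0 = \| \boldsymbol{\mu}\|^2, & \| \boldsymbol{\mu}\|^2 < \sqrt{c}.
\end{cases}
\]
```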
Taking the derivative of $\frac{ \tilde{E}}{ \sqrt{ \tilde{V}} } $ with respect to $t$, one has
\[
\frac{\partial }{\partial t} \frac{ \tilde{E}}{ \sqrt{ \tilde{V}} } = \frac{ \alpha (1-\alpha t) \sigma^2 \| \boldsymbol{\mu}\|^2 }{ {\tilde V}^{3/2} }
\]
which implies that the maximum of $\frac{ \tilde{E}} {\sqrt{ \tilde{V}}}$ is $ \frac{ \| \boldsymbol{\mu}\|^2 }{ \sqrt{ \| \boldsymbol{\mu}\|^2 +c+c\sigma^2} }$, attained at $t = 1/\alpha$. Moreover, taking $t=0$ in the above equation one gets $\frac{\partial }{\partial t} \frac{ \tilde{E}}{ \sqrt{ \tilde{V}} } \big|_{t=0} = \frac{\alpha \| \boldsymbol{\mu}\|^2 }{\sigma}$. Therefore, a large $\sigma$ is harmful to the training efficiency, consistent with the conclusion of Remark~\ref{rem:optimal-generalization-perf}.
The approximation error arising from the Taylor expansion can be large for $t$ away from $0$: at $t = 1/\alpha$, for instance, the difference $E - \tilde{E}$ is of order $O(1)$ and can no longer be neglected.
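The location and value of the maximum of $\tilde{E}/\sqrt{\tilde{V}}$ can also be checked numerically. The following Python sketch evaluates the small-$t$ approximation on a fine grid, under hypothetical parameters matching the figures ($\|\boldsymbol{\mu}\|^2=4$, $c=1/2$, $\sigma^2=0.1$, $\alpha=0.01$), and recovers the maximum near $t = 1/\alpha$ with value $\|\boldsymbol{\mu}\|^2/\sqrt{\|\boldsymbol{\mu}\|^2+c+c\sigma^2}$.

```python
import numpy as np

# Small-t approximation (hypothetical parameters matching the figures):
#   E~ = ||mu||^2 * alpha * t
#   V~ = (||mu||^2 + c + c*sigma^2) * (alpha*t)^2 + sigma^2 * (alpha*t - 1)^2
mu2, c, sigma2, alpha = 4.0, 0.5, 0.1, 0.01

t = np.linspace(1e-3, 300.0, 300000)
E_tilde = mu2 * alpha * t
V_tilde = (mu2 + c + c * sigma2) * (alpha * t) ** 2 + sigma2 * (alpha * t - 1.0) ** 2
ratio = E_tilde / np.sqrt(V_tilde)

t_star = t[np.argmax(ratio)]   # expected near 1/alpha = 100
r_max = ratio.max()            # expected mu2 / sqrt(mu2 + c + c*sigma2)
print(t_star, r_max)
```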
\subsection{As $t \to \infty$: least-squares solution}
As $t \to \infty$, one has $f_t(x) \to 0$, which results in the least-squares solution $\mathbf{w}_{LS} = (\mathbf{X}\X^{\sf T})^{-1} \mathbf{X} \mathbf{y} $ for $c < 1$, or the minimum-norm solution $\mathbf{w}_{LS} = \mathbf{X} (\mathbf{X}^{\sf T}\mathbf{X})^{-1} \mathbf{y} $ for $c > 1$, and consequently
\begin{equation}
\frac{ \boldsymbol{\mu}^{\sf T} \mathbf{w}_{LS} }{ \| \mathbf{w}_{LS}\| } = \frac{ \| \boldsymbol{\mu} \|^2 }{\sqrt{\| \boldsymbol{\mu} \|^2 + c}} \sqrt{
1-\min\left(c,\frac1c\right) }.
\label{eq:perf-w-LS}
\end{equation}
Comparing \eqref{eq:perf-w-LS} with the expression in Remark~\ref{rem:optimal-generalization-perf}, one observes that, as $t \to \infty$, the network becomes ``over-trained'' and the performance drops by a factor of $\sqrt{1- \min (c, c^{-1} ) }$. The degradation becomes even more severe as $c$ gets close to $1$, consistent with the empirical findings in \cite{advani2017high}. However, the point $c=1$ is a singularity of \eqref{eq:perf-w-LS}, but not of $\frac{E}{ \sqrt{V} }$ as given by \eqref{eq:E} and~\eqref{eq:Var}. One may thus expect a smooth and reliable behavior of the early-stopped network for $c$ close to $1$, a noticeable advantage of gradient-based training over the plain least-squares method. This is consistent with the conclusion of \cite{yao2007early}, in which only the asymptotics $n \to \infty$ is considered.
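The factor $\sqrt{1-\min(c,c^{-1})}$ can be made concrete with a short numerical comparison. The following Python sketch (under the hypothetical value $\|\boldsymbol{\mu}\|^2 = 4$) contrasts the optimal error of Remark~\ref{rem:optimal-generalization-perf} with the least-squares error obtained from \eqref{eq:perf-w-LS}, for several values of $c$.

```python
from math import sqrt, erfc

def Q(x):
    # Gaussian tail function Q(x) = P(N(0,1) > x)
    return 0.5 * erfc(x / sqrt(2.0))

mu2 = 4.0  # hypothetical ||mu||^2

errors = {}
for c in (0.1, 0.5, 0.9, 0.99, 1.5):
    best = Q(mu2 / sqrt(mu2 + c))                              # optimal (Remark)
    ls = Q(mu2 / sqrt(mu2 + c) * sqrt(1.0 - min(c, 1.0 / c)))  # eq. (perf-w-LS)
    errors[c] = (best, ls)
    print(f"c = {c:4}: optimal error {best:.4f}, least-squares error {ls:.4f}")
```

As expected, the least-squares error collapses towards random guessing ($0.5$) as $c$ approaches $1$, while the optimal (early-stopped) error degrades only mildly.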
In Figure~\ref{fig:approximation-t-small} we plot the generalization performance from simulation (blue line), the approximation obtained from the Taylor expansion of $f_t(x)$ described in Section~\ref{subsec:t-close-to-0} (red dashed line), together with the performance of $\mathbf{w}_{LS}$ (cyan dashed line). One observes a close match between the Taylor approximation and the true performance for small $t$, with the former attaining its optimum at $t=100$ and the latter slowly approaching the performance of $\mathbf{w}_{LS}$ as $t$ goes to infinity.
In Figure~\ref{fig:approximation-c=1} we highlight the case $c=1$ by taking $p=n=512$, with all other parameters unchanged from Figure~\ref{fig:approximation-t-small}. One observes that the simulation curve (blue line) increases much faster than in Figure~\ref{fig:approximation-t-small} and eventually approaches $0.5$, the performance of $\mathbf{w}_{LS}$ (cyan dashed line). This confirms the serious performance degradation of the classical least-squares solution for $c$ close to $1$.
\begin{figure}[htb]
\vskip 0.1in
\begin{center}
\begin{tikzpicture}[font=\footnotesize,spy using outlines]
\renewcommand{\axisdefaulttryminticks}{4}
\pgfplotsset{every major grid/.style={densely dashed}}
\tikzstyle{every axis y label}+=[yshift=-10pt]
\tikzstyle{every axis x label}+=[yshift=5pt]
\pgfplotsset{every axis legend/.append style={cells={anchor=west},fill=white, at={(0.98,0.98)}, anchor=north east, font=\footnotesize }}
\begin{axis}[
width=\columnwidth,
height=0.7\columnwidth,
xmin=0,
xmax=1000,
ymin=0,
ymax=.5,
grid=major,
xlabel={Training time $(t)$},
ylabel={Misclassification rate},
ytick={0,0.1,0.2,0.3,0.4,0.5},
scaled ticks=true,
]
\addplot[color=blue!60!white,line width=1.5pt] coordinates{
(0.000000,0.500000)(10.000000,0.175906)(20.000000,0.093935)(30.000000,0.069117)(40.000000,0.058867)(50.000000,0.053804)(60.000000,0.051075)(70.000000,0.049573)(80.000000,0.048786)(90.000000,0.048446)(100.000000,0.048399)(110.000000,0.048550)(120.000000,0.048839)(130.000000,0.049225)(140.000000,0.049681)(150.000000,0.050186)(160.000000,0.050726)(170.000000,0.051290)(180.000000,0.051871)(190.000000,0.052463)(200.000000,0.053060)(210.000000,0.053660)(220.000000,0.054260)(230.000000,0.054857)(240.000000,0.055450)(250.000000,0.056038)(260.000000,0.056620)(270.000000,0.057195)(280.000000,0.057762)(290.000000,0.058320)(300.000000,0.058870)(310.000000,0.059412)(320.000000,0.059944)(330.000000,0.060468)(340.000000,0.060982)(350.000000,0.061487)(360.000000,0.061983)(370.000000,0.062470)(380.000000,0.062948)(390.000000,0.063418)(400.000000,0.063878)(410.000000,0.064330)(420.000000,0.064774)(430.000000,0.065209)(440.000000,0.065636)(450.000000,0.066055)(460.000000,0.066466)(470.000000,0.066869)(480.000000,0.067264)(490.000000,0.067652)(500.000000,0.068033)(510.000000,0.068406)(520.000000,0.068773)(530.000000,0.069133)(540.000000,0.069485)(550.000000,0.069832)(560.000000,0.070171)(570.000000,0.070505)(580.000000,0.070832)(590.000000,0.071153)(600.000000,0.071469)(610.000000,0.071778)(620.000000,0.072082)(630.000000,0.072380)(640.000000,0.072673)(650.000000,0.072961)(660.000000,0.073243)(670.000000,0.073520)(680.000000,0.073793)(690.000000,0.074060)(700.000000,0.074323)(710.000000,0.074580)(720.000000,0.074834)(730.000000,0.075083)(740.000000,0.075327)(750.000000,0.075567)(760.000000,0.075803)(770.000000,0.076035)(780.000000,0.076263)(790.000000,0.076487)(800.000000,0.076707)(810.000000,0.076923)(820.000000,0.077136)(830.000000,0.077344)(840.000000,0.077550)(850.000000,0.077752)(860.000000,0.077950)(870.000000,0.078145)(880.000000,0.078337)(890.000000,0.078525)(900.000000,0.078711)(910.000000,0.078893)(920.000000,0.079072)(930.000000,0.079249)(940.000000,0.079422)(950.000000,0.07
9592)(960.000000,0.079760)(970.000000,0.079925)(980.000000,0.080087)(990.000000,0.080247)
};
\addlegendentry{{ Simulation }};
\addplot[densely dashed,color=red!60!white, line width=1.5pt] coordinates{
(0.000000,0.500000)(10.000000,0.130370)(20.000000,0.053377)(30.000000,0.038181)(40.000000,0.033586)(50.000000,0.031801)(60.000000,0.031011)(70.000000,0.030641)(80.000000,0.030469)(90.000000,0.030398)(100.000000,0.030381)(110.000000,0.030392)(120.000000,0.030420)(130.000000,0.030456)(140.000000,0.030496)(150.000000,0.030538)(160.000000,0.030580)(170.000000,0.030621)(180.000000,0.030661)(190.000000,0.030699)(200.000000,0.030735)(210.000000,0.030770)(220.000000,0.030803)(230.000000,0.030834)(240.000000,0.030863)(250.000000,0.030891)(260.000000,0.030918)(270.000000,0.030943)(280.000000,0.030967)(290.000000,0.030990)(300.000000,0.031011)(310.000000,0.031032)(320.000000,0.031051)(330.000000,0.031070)(340.000000,0.031088)(350.000000,0.031105)(360.000000,0.031121)(370.000000,0.031136)(380.000000,0.031151)(390.000000,0.031165)(400.000000,0.031179)(410.000000,0.031192)(420.000000,0.031204)(430.000000,0.031216)(440.000000,0.031228)(450.000000,0.031239)(460.000000,0.031250)(470.000000,0.031260)(480.000000,0.031270)(490.000000,0.031280)(500.000000,0.031289)(510.000000,0.031298)(520.000000,0.031307)(530.000000,0.031315)(540.000000,0.031323)(550.000000,0.031331)(560.000000,0.031338)(570.000000,0.031346)(580.000000,0.031353)(590.000000,0.031360)(600.000000,0.031366)(610.000000,0.031373)(620.000000,0.031379)(630.000000,0.031385)(640.000000,0.031391)(650.000000,0.031397)(660.000000,0.031403)(670.000000,0.031408)(680.000000,0.031413)(690.000000,0.031419)(700.000000,0.031424)(710.000000,0.031429)(720.000000,0.031433)(730.000000,0.031438)(740.000000,0.031443)(750.000000,0.031447)(760.000000,0.031451)(770.000000,0.031456)(780.000000,0.031460)(790.000000,0.031464)(800.000000,0.031468)(810.000000,0.031472)(820.000000,0.031475)(830.000000,0.031479)(840.000000,0.031483)(850.000000,0.031486)(860.000000,0.031489)(870.000000,0.031493)(880.000000,0.031496)(890.000000,0.031499)(900.000000,0.031503)(910.000000,0.031506)(920.000000,0.031509)(930.000000,0.031512)(940.000000,0.031515)(950.000000,0.03
1517)(960.000000,0.031520)(970.000000,0.031523)(980.000000,0.031526)(990.000000,0.031528)
};
\addlegendentry{{ Approximation via Taylor expansion }};
\addplot[densely dashed,color=cyan!60!white, line width=1.5pt] coordinates{
(0.000000,0.091211)(10.000000,0.091211)(20.000000,0.091211)(30.000000,0.091211)(40.000000,0.091211)(50.000000,0.091211)(60.000000,0.091211)(70.000000,0.091211)(80.000000,0.091211)(90.000000,0.091211)(100.000000,0.091211)(110.000000,0.091211)(120.000000,0.091211)(130.000000,0.091211)(140.000000,0.091211)(150.000000,0.091211)(160.000000,0.091211)(170.000000,0.091211)(180.000000,0.091211)(190.000000,0.091211)(200.000000,0.091211)(210.000000,0.091211)(220.000000,0.091211)(230.000000,0.091211)(240.000000,0.091211)(250.000000,0.091211)(260.000000,0.091211)(270.000000,0.091211)(280.000000,0.091211)(290.000000,0.091211)(300.000000,0.091211)(310.000000,0.091211)(320.000000,0.091211)(330.000000,0.091211)(340.000000,0.091211)(350.000000,0.091211)(360.000000,0.091211)(370.000000,0.091211)(380.000000,0.091211)(390.000000,0.091211)(400.000000,0.091211)(410.000000,0.091211)(420.000000,0.091211)(430.000000,0.091211)(440.000000,0.091211)(450.000000,0.091211)(460.000000,0.091211)(470.000000,0.091211)(480.000000,0.091211)(490.000000,0.091211)(500.000000,0.091211)(510.000000,0.091211)(520.000000,0.091211)(530.000000,0.091211)(540.000000,0.091211)(550.000000,0.091211)(560.000000,0.091211)(570.000000,0.091211)(580.000000,0.091211)(590.000000,0.091211)(600.000000,0.091211)(610.000000,0.091211)(620.000000,0.091211)(630.000000,0.091211)(640.000000,0.091211)(650.000000,0.091211)(660.000000,0.091211)(670.000000,0.091211)(680.000000,0.091211)(690.000000,0.091211)(700.000000,0.091211)(710.000000,0.091211)(720.000000,0.091211)(730.000000,0.091211)(740.000000,0.091211)(750.000000,0.091211)(760.000000,0.091211)(770.000000,0.091211)(780.000000,0.091211)(790.000000,0.091211)(800.000000,0.091211)(810.000000,0.091211)(820.000000,0.091211)(830.000000,0.091211)(840.000000,0.091211)(850.000000,0.091211)(860.000000,0.091211)(870.000000,0.091211)(880.000000,0.091211)(890.000000,0.091211)(900.000000,0.091211)(910.000000,0.091211)(920.000000,0.091211)(930.000000,0.091211)(940.000000,0.091211)(950.000000,0.09
1211)(960.000000,0.091211)(970.000000,0.091211)(980.000000,0.091211)(990.000000,0.091211)
};
\addlegendentry{{ Performance of $\mathbf{w}_{LS}$ }};
\begin{scope}
\spy[black!50!white,size=1.5cm,circle,connect spies,magnification=1.8] on (0.4,0.65) in node [fill=none] at (5,1.8);
\end{scope}
\end{axis}
\end{tikzpicture}
\caption{ Generalization performance for $\boldsymbol{\mu} = \begin{bmatrix}2;\mathbf{0}_{p-1}\end{bmatrix}$, $p=256$, $n=512$, $c_1 = c_2 = 1/2$, $\sigma^2 = 0.1$ and $\alpha = 0.01$. Simulation results obtained by averaging over $50$ runs.}
\label{fig:approximation-t-small}
\end{center}
\vskip -0.1in
\end{figure}
\begin{figure}[htb]
\vskip 0.1in
\begin{center}
\begin{tikzpicture}[font=\footnotesize,spy using outlines]
\renewcommand{\axisdefaulttryminticks}{4}
\pgfplotsset{every major grid/.style={densely dashed}}
\tikzstyle{every axis y label}+=[yshift=-10pt]
\tikzstyle{every axis x label}+=[yshift=5pt]
\pgfplotsset{every axis legend/.append style={cells={anchor=west},fill=white, at={(0.98,0.95)}, anchor=north east, font=\footnotesize }}
\begin{axis}[
width=\columnwidth,
height=0.7\columnwidth,
xmin=0,
xmax=1000,
ymin=0.01,
ymax=0.51,
grid=major,
xlabel={Training time $(t)$},
ylabel={Misclassification rate},
ytick={0,0.1,0.2,0.3,0.4,0.5},
scaled ticks=true,
]
\addplot[color=blue!60!white,line width=1.5pt] coordinates{
(0.000000,0.488750)(10.000000,0.177031)(20.000000,0.103750)(30.000000,0.079922)(40.000000,0.071797)(50.000000,0.067656)(60.000000,0.066562)(70.000000,0.065625)(80.000000,0.066719)(90.000000,0.066875)(100.000000,0.067969)(110.000000,0.069375)(120.000000,0.070391)(130.000000,0.071719)(140.000000,0.072500)(150.000000,0.074609)(160.000000,0.075078)(170.000000,0.077109)(180.000000,0.077969)(190.000000,0.078516)(200.000000,0.079766)(210.000000,0.081016)(220.000000,0.082578)(230.000000,0.083594)(240.000000,0.084375)(250.000000,0.085234)(260.000000,0.086250)(270.000000,0.087891)(280.000000,0.089453)(290.000000,0.090391)(300.000000,0.091016)(310.000000,0.092109)(320.000000,0.093203)(330.000000,0.093906)(340.000000,0.094766)(350.000000,0.095469)(360.000000,0.096250)(370.000000,0.097500)(380.000000,0.098047)(390.000000,0.098672)(400.000000,0.099844)(410.000000,0.100781)(420.000000,0.101562)(430.000000,0.102344)(440.000000,0.103438)(450.000000,0.104297)(460.000000,0.104609)(470.000000,0.105391)(480.000000,0.105547)(490.000000,0.106016)(500.000000,0.106875)(510.000000,0.107578)(520.000000,0.108281)(530.000000,0.108828)(540.000000,0.109609)(550.000000,0.109922)(560.000000,0.110234)(570.000000,0.111172)(580.000000,0.111797)(590.000000,0.112422)(600.000000,0.112500)(610.000000,0.112812)(620.000000,0.113672)(630.000000,0.114062)(640.000000,0.114375)(650.000000,0.114922)(660.000000,0.115547)(670.000000,0.115937)(680.000000,0.116562)(690.000000,0.117031)(700.000000,0.117656)(710.000000,0.117813)(720.000000,0.118594)(730.000000,0.119766)(740.000000,0.120469)(750.000000,0.120703)(760.000000,0.121016)(770.000000,0.121406)(780.000000,0.122031)(790.000000,0.122656)(800.000000,0.122969)(810.000000,0.123281)(820.000000,0.123906)(830.000000,0.124297)(840.000000,0.124688)(850.000000,0.125469)(860.000000,0.125938)(870.000000,0.126328)(880.000000,0.126953)(890.000000,0.127812)(900.000000,0.128125)(910.000000,0.128594)(920.000000,0.129297)(930.000000,0.130078)(940.000000,0.130547)(950.000000,0.13
1094)(960.000000,0.131641)(970.000000,0.132109)(980.000000,0.132812)(990.000000,0.133125)
};
\addlegendentry{{ Simulation }};
\addplot[densely dashed,color=red!60!white, line width=1.5pt] coordinates{
(0.000000,0.500000)(10.000000,0.135456)(20.000000,0.061133)(30.000000,0.046126)(40.000000,0.041512)(50.000000,0.039705)(60.000000,0.038903)(70.000000,0.038526)(80.000000,0.038351)(90.000000,0.038279)(100.000000,0.038261)(110.000000,0.038273)(120.000000,0.038301)(130.000000,0.038338)(140.000000,0.038379)(150.000000,0.038422)(160.000000,0.038464)(170.000000,0.038506)(180.000000,0.038546)(190.000000,0.038585)(200.000000,0.038622)(210.000000,0.038657)(220.000000,0.038691)(230.000000,0.038722)(240.000000,0.038752)(250.000000,0.038781)(260.000000,0.038808)(270.000000,0.038834)(280.000000,0.038858)(290.000000,0.038881)(300.000000,0.038903)(310.000000,0.038924)(320.000000,0.038944)(330.000000,0.038963)(340.000000,0.038981)(350.000000,0.038998)(360.000000,0.039014)(370.000000,0.039030)(380.000000,0.039045)(390.000000,0.039060)(400.000000,0.039073)(410.000000,0.039087)(420.000000,0.039099)(430.000000,0.039112)(440.000000,0.039123)(450.000000,0.039135)(460.000000,0.039146)(470.000000,0.039156)(480.000000,0.039166)(490.000000,0.039176)(500.000000,0.039185)(510.000000,0.039194)(520.000000,0.039203)(530.000000,0.039212)(540.000000,0.039220)(550.000000,0.039228)(560.000000,0.039235)(570.000000,0.039243)(580.000000,0.039250)(590.000000,0.039257)(600.000000,0.039264)(610.000000,0.039271)(620.000000,0.039277)(630.000000,0.039283)(640.000000,0.039289)(650.000000,0.039295)(660.000000,0.039301)(670.000000,0.039306)(680.000000,0.039312)(690.000000,0.039317)(700.000000,0.039322)(710.000000,0.039327)(720.000000,0.039332)(730.000000,0.039337)(740.000000,0.039341)(750.000000,0.039346)(760.000000,0.039350)(770.000000,0.039354)(780.000000,0.039359)(790.000000,0.039363)(800.000000,0.039367)(810.000000,0.039371)(820.000000,0.039374)(830.000000,0.039378)(840.000000,0.039382)(850.000000,0.039385)(860.000000,0.039389)(870.000000,0.039392)(880.000000,0.039396)(890.000000,0.039399)(900.000000,0.039402)(910.000000,0.039405)(920.000000,0.039408)(930.000000,0.039411)(940.000000,0.039414)(950.000000,0.03
9417)(960.000000,0.039420)(970.000000,0.039423)(980.000000,0.039426)(990.000000,0.039428)
};
\addlegendentry{{ Approximation via Taylor expansion }};
\addplot[densely dashed,color=cyan!60!white, line width=1.5pt] coordinates{
(0.000000,0.500000)(10.000000,0.500000)(20.000000,0.500000)(30.000000,0.500000)(40.000000,0.500000)(50.000000,0.500000)(60.000000,0.500000)(70.000000,0.500000)(80.000000,0.500000)(90.000000,0.500000)(100.000000,0.500000)(110.000000,0.500000)(120.000000,0.500000)(130.000000,0.500000)(140.000000,0.500000)(150.000000,0.500000)(160.000000,0.500000)(170.000000,0.500000)(180.000000,0.500000)(190.000000,0.500000)(200.000000,0.500000)(210.000000,0.500000)(220.000000,0.500000)(230.000000,0.500000)(240.000000,0.500000)(250.000000,0.500000)(260.000000,0.500000)(270.000000,0.500000)(280.000000,0.500000)(290.000000,0.500000)(300.000000,0.500000)(310.000000,0.500000)(320.000000,0.500000)(330.000000,0.500000)(340.000000,0.500000)(350.000000,0.500000)(360.000000,0.500000)(370.000000,0.500000)(380.000000,0.500000)(390.000000,0.500000)(400.000000,0.500000)(410.000000,0.500000)(420.000000,0.500000)(430.000000,0.500000)(440.000000,0.500000)(450.000000,0.500000)(460.000000,0.500000)(470.000000,0.500000)(480.000000,0.500000)(490.000000,0.500000)(500.000000,0.500000)(510.000000,0.500000)(520.000000,0.500000)(530.000000,0.500000)(540.000000,0.500000)(550.000000,0.500000)(560.000000,0.500000)(570.000000,0.500000)(580.000000,0.500000)(590.000000,0.500000)(600.000000,0.500000)(610.000000,0.500000)(620.000000,0.500000)(630.000000,0.500000)(640.000000,0.500000)(650.000000,0.500000)(660.000000,0.500000)(670.000000,0.500000)(680.000000,0.500000)(690.000000,0.500000)(700.000000,0.500000)(710.000000,0.500000)(720.000000,0.500000)(730.000000,0.500000)(740.000000,0.500000)(750.000000,0.500000)(760.000000,0.500000)(770.000000,0.500000)(780.000000,0.500000)(790.000000,0.500000)(800.000000,0.500000)(810.000000,0.500000)(820.000000,0.500000)(830.000000,0.500000)(840.000000,0.500000)(850.000000,0.500000)(860.000000,0.500000)(870.000000,0.500000)(880.000000,0.500000)(890.000000,0.500000)(900.000000,0.500000)(910.000000,0.500000)(920.000000,0.500000)(930.000000,0.500000)(940.000000,0.500000)(950.000000,0.50
0000)(960.000000,0.500000)(970.000000,0.500000)(980.000000,0.500000)(990.000000,0.500000)
};
\addlegendentry{{ Performance of $\mathbf{w}_{LS}$ }};
\end{axis}
\end{tikzpicture}
\caption{ Generalization performance for $\boldsymbol{\mu} = \begin{bmatrix}2;\mathbf{0}_{p-1}\end{bmatrix}$, $p=512$, $n=512$, $c_1 = c_2 = 1/2$, $\sigma^2 = 0.1$ and $\alpha = 0.01$. Simulation results obtained by averaging over $50$ runs.}
\label{fig:approximation-c=1}
\end{center}
\vskip -0.1in
\end{figure}
\subsection{Special case for $c = 0$}
\label{subsec:c}
One major interest of the random matrix analysis is the explicit role played by the ratio $c$. Taking $c = 0$ signifies that we have far more training samples than their dimension $p$. This results in $\lambda_-$, $\lambda_+ \to 1$, $\lambda_s \to 1 + \| \boldsymbol{\mu} \|^2$ and
\begin{align*}
E &\to \| \boldsymbol{\mu} \|^2 \frac{ 1-f_t(1 + \| \boldsymbol{\mu} \|^2 ) }{ 1+ \| \boldsymbol{\mu} \|^2 } \\
V &\to \| \boldsymbol{\mu} \|^2 \left( \frac{ 1-f_t(1 + \| \boldsymbol{\mu} \|^2 ) }{ 1+ \| \boldsymbol{\mu} \|^2 } \right)^2 + \sigma^2 f_t^2(1).
\end{align*}
As a consequence, $\frac{ E }{ \sqrt{ V} } \to \| \boldsymbol{\mu}\|$ if $\sigma^2 = 0$. This can be explained by the fact that, with sufficient training data, the classifier learns to align perfectly with $\boldsymbol{\mu}$, so that $\frac{ \boldsymbol{\mu}^{\sf T} \mathbf{w}(t) }{ \|\mathbf{w}(t) \| } = \| \boldsymbol{\mu} \|$. With initialization $\sigma^2 \neq 0$, on the other hand, one always has $\frac{ E }{ \sqrt{ V} } < \| \boldsymbol{\mu}\|$. Still, as $t$ grows, the network forgets the initialization exponentially fast and converges to the optimal $\mathbf{w}(t)$ aligned with $\boldsymbol{\mu}$.
In particular, for $\sigma^2 \neq 0$, the optimal stopping time can be investigated by taking the derivative of $\frac{E}{\sqrt{V}}$ with respect to $t$,
\[
\frac{\partial }{\partial t} \frac{ E }{ \sqrt{ V } } = \frac{\alpha \sigma^2 \|\boldsymbol{\mu}\|^2}{ V^{3/2} } \frac{ \|\boldsymbol{\mu}\|^2 f_t(1+\|\boldsymbol{\mu}\|^2) + 1 }{1+\|\boldsymbol{\mu}\|^2} f_t^2(1) > 0
\]
showing that, when $c= 0$, the generalization performance keeps improving as $t$ grows and there is in fact no ``over-training'' in this case.
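This absence of over-training can also be checked numerically from the $c \to 0$ limits of $E$ and $V$ above. The following Python sketch (under the hypothetical parameters $\|\boldsymbol{\mu}\|^2 = 4$, $\sigma^2 = 0.1$, $\alpha = 0.01$) evaluates $E/\sqrt{V}$ on a time grid: the ratio increases monotonically and approaches $\|\boldsymbol{\mu}\| = 2$.

```python
import numpy as np

# c -> 0 limits of E and V (hypothetical: ||mu||^2 = 4, sigma^2 = 0.1, alpha = 0.01)
mu2, sigma2, alpha = 4.0, 0.1, 0.01

def ratio(t):
    f = np.exp(-alpha * t * (1.0 + mu2))   # f_t(1 + ||mu||^2)
    g = np.exp(-alpha * t)                 # f_t(1)
    E = mu2 * (1.0 - f) / (1.0 + mu2)
    V = mu2 * ((1.0 - f) / (1.0 + mu2)) ** 2 + sigma2 * g ** 2
    return E / np.sqrt(V)

t = np.linspace(0.0, 500.0, 5001)
r = ratio(t)
print(r[0], r[-1])   # starts at 0 (random guess), tends towards ||mu|| = 2
```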
\begin{figure}[htb]
\vskip 0.1in
\begin{center}
\begin{tikzpicture}[font=\footnotesize,spy using outlines]
\renewcommand{\axisdefaulttryminticks}{4}
\pgfplotsset{every major grid/.style={densely dashed}}
\tikzstyle{every axis y label}+=[yshift=-10pt]
\tikzstyle{every axis x label}+=[yshift=5pt]
\pgfplotsset{every axis legend/.append style={cells={anchor=west},fill=white, at={(0.98,0.98)}, anchor=north east, font=\footnotesize }}
\begin{axis}[
width=\columnwidth,
height=0.7\columnwidth,
xmin=0,
xmax=300,
ymin=-0.01,
ymax=.5,
xlabel={Training time $(t)$},
ylabel={Misclassification rate},
ytick={0,0.1,0.2,0.3,0.4,0.5},
grid=major,
scaled ticks=true,
]
\addplot[color=blue!60!white,line width=1pt] coordinates{
(0.000000,0.498444)(6.000000,0.121505)(12.000000,0.038776)(18.000000,0.017755)(24.000000,0.010153)(30.000000,0.006684)(36.000000,0.004821)(42.000000,0.003724)(48.000000,0.002602)(54.000000,0.002117)(60.000000,0.001760)(66.000000,0.001454)(72.000000,0.001046)(78.000000,0.000867)(84.000000,0.000714)(90.000000,0.000612)(96.000000,0.000485)(102.000000,0.000408)(108.000000,0.000332)(114.000000,0.000306)(120.000000,0.000281)(126.000000,0.000204)(132.000000,0.000179)(138.000000,0.000179)(144.000000,0.000179)(150.000000,0.000179)(156.000000,0.000179)(162.000000,0.000128)(168.000000,0.000051)(174.000000,0.000026)(180.000000,0.000026)(186.000000,0.000026)(192.000000,0.000026)(198.000000,0.000000)(204.000000,0.000000)(210.000000,0.000000)(216.000000,0.000000)(222.000000,0.000000)(228.000000,0.000000)(234.000000,0.000000)(240.000000,0.000000)(246.000000,0.000000)(252.000000,0.000000)(258.000000,0.000000)(264.000000,0.000000)(270.000000,0.000000)(276.000000,0.000000)(282.000000,0.000000)(288.000000,0.000000)(294.000000,0.000000)
};
\addlegendentry{{ Simulation: training performance }};
\addplot+[only marks, mark=x,color=blue!60!white] coordinates{
(0.000000,0.500000)(6.000000,0.126445)(12.000000,0.037929)(18.000000,0.016031)(24.000000,0.008300)(30.000000,0.004780)(36.000000,0.002917)(42.000000,0.001843)(48.000000,0.001192)(54.000000,0.000785)(60.000000,0.000524)(66.000000,0.000355)(72.000000,0.000244)(78.000000,0.000170)(84.000000,0.000119)(90.000000,0.000085)(96.000000,0.000061)(102.000000,0.000044)(108.000000,0.000032)(114.000000,0.000024)(120.000000,0.000018)(126.000000,0.000013)(132.000000,0.000010)(138.000000,0.000008)(144.000000,0.000006)(150.000000,0.000005)(156.000000,0.000004)(162.000000,0.000003)(168.000000,0.000002)(174.000000,0.000002)(180.000000,0.000001)(186.000000,0.000001)(192.000000,0.000001)(198.000000,0.000001)(204.000000,0.000001)(210.000000,0.000000)(216.000000,0.000000)(222.000000,0.000000)(228.000000,0.000000)(234.000000,0.000000)(240.000000,0.000000)(246.000000,0.000000)(252.000000,0.000000)(258.000000,0.000000)(264.000000,0.000000)(270.000000,0.000000)(276.000000,0.000000)(282.000000,0.000000)(288.000000,0.000000)(294.000000,0.000000)
};
\addlegendentry{{ Theory: training performance }};
\addplot[densely dashed,color=red!60!white,line width=1pt] coordinates{
(0.000000,0.502526)(6.000000,0.170995)(12.000000,0.080663)(18.000000,0.053954)(24.000000,0.043776)(30.000000,0.038546)(36.000000,0.035918)(42.000000,0.034184)(48.000000,0.033291)(54.000000,0.032730)(60.000000,0.032423)(66.000000,0.032551)(72.000000,0.032628)(78.000000,0.032883)(84.000000,0.033214)(90.000000,0.033367)(96.000000,0.033622)(102.000000,0.034082)(108.000000,0.034260)(114.000000,0.034668)(120.000000,0.035179)(126.000000,0.035689)(132.000000,0.036173)(138.000000,0.036786)(144.000000,0.037296)(150.000000,0.037857)(156.000000,0.038418)(162.000000,0.038776)(168.000000,0.039209)(174.000000,0.039668)(180.000000,0.040128)(186.000000,0.040561)(192.000000,0.040740)(198.000000,0.041148)(204.000000,0.041709)(210.000000,0.041964)(216.000000,0.042372)(222.000000,0.042781)(228.000000,0.043112)(234.000000,0.043469)(240.000000,0.044031)(246.000000,0.044490)(252.000000,0.045128)(258.000000,0.045536)(264.000000,0.045867)(270.000000,0.046199)(276.000000,0.046633)(282.000000,0.047092)(288.000000,0.047551)(294.000000,0.048061)
};
\addlegendentry{{ Simulation: generalization performance }};
\addplot+[only marks, mark=o,color=red!60!white] coordinates{
(0.000000,0.500000)(6.000000,0.174328)(12.000000,0.082045)(18.000000,0.053372)(24.000000,0.041642)(30.000000,0.035840)(36.000000,0.032617)(42.000000,0.030699)(48.000000,0.029518)(54.000000,0.028786)(60.000000,0.028345)(66.000000,0.028102)(72.000000,0.027999)(78.000000,0.027998)(84.000000,0.028075)(90.000000,0.028210)(96.000000,0.028392)(102.000000,0.028610)(108.000000,0.028858)(114.000000,0.029131)(120.000000,0.029423)(126.000000,0.029731)(132.000000,0.030053)(138.000000,0.030386)(144.000000,0.030728)(150.000000,0.031079)(156.000000,0.031436)(162.000000,0.031798)(168.000000,0.032164)(174.000000,0.032535)(180.000000,0.032908)(186.000000,0.033284)(192.000000,0.033661)(198.000000,0.034040)(204.000000,0.034420)(210.000000,0.034800)(216.000000,0.035181)(222.000000,0.035562)(228.000000,0.035943)(234.000000,0.036324)(240.000000,0.036704)(246.000000,0.037084)(252.000000,0.037463)(258.000000,0.037840)(264.000000,0.038217)(270.000000,0.038593)(276.000000,0.038968)(282.000000,0.039341)(288.000000,0.039713)(294.000000,0.040084)
};
\addlegendentry{{ Theory: generalization performance }};
\begin{scope}
\spy[black!50!white,size=1.6cm,circle,connect spies,magnification=2] on (0.6,0.4) in node [fill=none] at (4,1.5);
\end{scope}
\end{axis}
\end{tikzpicture}
\caption{Training and generalization performance for MNIST data (number $1$ and $7$) with $n=p=784$, $c_1 = c_2 = 1/2$, $\alpha=0.01$ and $\sigma^2 = 0.1$. Results obtained by averaging over $100$ runs.}
\label{fig:MNIST-simu}
\end{center}
\vskip -0.1in
\end{figure}
\section{Numerical Validations}
\label{sec:validations}
We close this article with experiments on the popular MNIST dataset \cite{lecun1998mnist} (numbers $1$ and $7$). We randomly select a training set of $n=784$ vectorized images of dimension $p=784$ and artificially add Gaussian white noise at $-10\si{\deci\bel}$ to bring the data closer to our toy model setting. Empirical means and covariances of each class are estimated from the full set of $13\,007$ MNIST images ($6\,742$ images of the number $1$ and $6\,265$ of the number $7$). The image vectors in each class are whitened by pre-multiplying by $\mathbf{C}_a^{-1/2}$ and re-centered to have means $\pm \boldsymbol{\mu}$, with $\boldsymbol{\mu}$ half the difference between the means of the two classes. We observe an extremely close match between our theoretical predictions and the empirical simulations, as shown in Figure~\ref{fig:MNIST-simu}.
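The whitening and re-centering steps can be sketched as follows; the data below is synthetic (a stand-in for one class of vectorized images), and this is an illustration of the procedure, not the code used for the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one class of vectorized images: n samples in R^p.
n, p = 200, 10
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))   # correlated data

# Empirical class covariance C_a (rows of X are observations).
C = np.cov(X, rowvar=False, bias=True)

# C_a^{-1/2} via eigendecomposition (C is symmetric positive definite here).
w, V = np.linalg.eigh(C)
C_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T

# Whiten: center the class, then pre-multiply each sample by C_a^{-1/2}.
Xw = (X - X.mean(axis=0)) @ C_inv_sqrt

# The whitened sample now has identity (empirical) covariance.
print(np.allclose(np.cov(Xw, rowvar=False, bias=True), np.eye(p)))  # True
```

Whitening with the empirical covariance makes the within-class sample exactly isotropic, which is what the $\mathbf{C}_a^{-1/2}$ pre-multiplication is meant to achieve.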
\section{Conclusion}
\label{sec:conclusion}
In this article, we established a random matrix approach to the analysis of learning dynamics for gradient-based algorithms on data of simultaneously large dimension and size. With a toy model of Gaussian mixture data with means $\pm \boldsymbol{\mu}$ and identity covariance, we showed that the training and generalization performances of the network are asymptotically deterministic and can be evaluated via so-called deterministic equivalents, computed with complex contour integrals (which, in the present setting, even reduce to real integrals). The analysis can be generalized in many ways: to more generic mixture models (with the Gaussian assumption relaxed), to more appropriate loss functions (e.g., logistic regression), and to more advanced optimization methods.
In the present work, the analysis has been performed on the ``full-batch'' gradient descent system. However, the most popular method used today is its ``stochastic'' version \cite{bottou2010large}, where at each iteration only a fixed-size ($n_{batch}$), randomly selected subset (a \emph{mini-batch}) of the training data is used to compute the gradient, and \emph{one} step is taken in the opposite direction of this gradient. In this scenario, a major practical concern lies in determining the optimal size of the mini-batch and its influence on the generalization performance of the network \cite{keskar2016large}. This can be naturally linked to the ratio $n_{batch}/p$ in the random matrix analysis.
Deep networks, which are of greater practical interest, will however require additional effort. As mentioned in \cite{saxe2013exact,advani2017high}, in the case of multi-layer networks the learning dynamics depend not on each eigenmode separately but on the coupling of eigenmodes from different layers. To handle this difficulty, one may add extra assumptions of independence between layers, as in \cite{choromanska2015loss}, so as to study each layer separately and then reassemble the results for the whole network.
\section*{Acknowledgments}
We thank the anonymous reviewers for their comments and constructive suggestions. This work is supported by the ANR Project RMT4GRAPH (ANR-14-CE28-0006) and the Project DeepRMT of La Fondation Sup{\'e}lec.
\section{Introduction}
We derive a bound on the difference in expectations of smooth functions of maxima of finite dimensional Gaussian random vectors.
We also derive a bound on the Kolmogorov distance between distributions of
these maxima. The key property of these bounds is that they depend on the dimension $p$ of Gaussian random vectors only through $\log p$, and on the maximum
difference between the covariance matrices of the vectors. These results extend and complement the work of \cite{Chatterjee2005b} that derived an explicit Sudakov-Fernique type bound on the difference of expectations of maxima of Gaussian random vectors. See also \cite{AdlerTaylor2007}, Chapter 2. As an application, we establish a conditional multiplier central limit theorem for maxima of sums of independent random vectors where the dimension of the vectors is possibly much larger than the sample size. In all these results, we allow for arbitrary covariance structures between the coordinates in random vectors, which is especially important in applications to high-dimensional statistics. We stress that the derivation of bounds on the Kolmogorov distance is by no means trivial and relies on the new {\em anti-concentration} inequality for maxima of Gaussian random vectors,
which is another main result of this paper (see Comment \ref{rem: concentration vs anticoncentration} for precisely what anti-concentration inequalities refer to here and how they differ from concentration inequalities).
These anti-concentration bounds are non-trivial in the following sense: (i) they apply to every dimension $p$ and are explicit in the effect of the dimension $p$, (ii) they allow for arbitrary covariance structures between the coordinates in Gaussian random vectors, and (iii) they are sharp in the sense that there is an example for which the bound is tight up to a dimension independent constant. We note that these anti-concentration bounds are sharper
than those that result from the application of the universal reverse isoperimetric inequality of \cite{Ball1993} (see Comment \ref{rem: ball}).
This happens due to the special structure of the sets of interest.
Comparison inequalities for Gaussian random vectors play an important role in probability theory, especially in empirical process and extreme value theories.
We refer the reader to \cite{Slepian1962}, \cite{Leadbetter1983}, \cite{G85}, \cite{LT91}, \cite{LS01}, \cite{LS02}, \cite{Chatterjee2005b} and \cite{Y09} for standard references on this topic.
The anti-concentration phenomenon has attracted considerable interest in the context of random matrix theory and the Littlewood-Offord problem in number theory. See, e.g., \cite{RV08}, \cite{RV09}, and \cite{VR07} who remarked that \textit{``concentration is better understood than anti-concentration''}.
Those papers were concerned with the anti-concentration in the Euclidean norm for sums of independent random vectors, and the topic and the proof technique here are substantially different from theirs.
Both the comparison and the anti-concentration bounds derived in this paper have many immediate statistical applications, especially in the context of high-dimensional statistical inference, where the dimension $p$ of the vectors of interest is much larger than the sample size (see \cite{BV11} for a textbook treatment of recent developments in high-dimensional statistics). In particular,
the results established here are helpful in deriving an invariance principle for sums of high-dimensional random vectors, and also in establishing
the validity of the multiplier bootstrap for inference in practice. We refer the reader to a companion paper \cite{CCK12}, where
the results established here are applied in several important statistical problems, particularly the analysis of the Dantzig selector of \cite{CandesTao2007} in the non-Gaussian setting.
After the initial submission, we have become aware of the work \cite{NV09}, which derives bounds on the density function of the maximum of a Gaussian random vector \citep[see][Proposition 3.12]{NV09} under a positivity restriction on the covariances. This is related to but different from our anti-concentration bounds. The crucial assumption in \cite{NV09}'s Proposition 3.12 is {\em positivity} of all the covariances between the coordinates in the Gaussian random vector, which does not hold in our targeted applications in high-dimensional statistics, e.g., the analysis of the Dantzig selector. Moreover, \cite{NV09}'s upper bound on the density depends on the inverse of the lower bound on the covariances -- and hence, e.g., if there are two independent coordinates in the Gaussian random vector, then the upper bound becomes infinite. Our anti-concentration bounds do \textit{not} require such positivity (or other) assumptions on covariances and hence are not implied by the results of \cite{NV09}. Moreover, the proof technique used here is substantially different from that of \cite{NV09}, which is based on Malliavin calculus.
The rest of the paper is organized as follows. In Section \ref{sec: Gaus and multiplier}, we present comparison bounds for Gaussian random vectors and their application, namely the conditional multiplier central limit theorem. In Section \ref{sec: anti-concentration}, we present anti-concentration bounds for maxima of Gaussian random vectors. In Sections \ref{sec: proof for section 2} and \ref{sec: proof for section 3}, we give proofs of the theorems in Sections \ref{sec: Gaus and multiplier} and \ref{sec: anti-concentration}. The Appendix contains the proof of a technical lemma.
{\bf Notation}.
Denote by $(\Omega,\mathcal{F},\Pr)$ an underlying probability space.
For $a,b \in \RR$, we write $a_{+} = \max \{ 0,a \}$ and $a \vee b = \max \{ a,b \}$. Let $1(\cdot)$ denote the indicator function.
The transpose of a vector $z$ is denoted by $z^{T}$. For a function $g: \RR \to \RR$, we use the notation $\| g \|_{\infty} = \sup_{z \in \RR} | g(z) |$.
\section{Comparison Bounds and Multiplier Bootstrap}
\label{sec: Gaus and multiplier}
\subsection{Comparison bounds}
Let $X =(X_{1},\dots,X_{p})^{T}$ and $Y=(Y_{1},\dots,Y_{p})^{T}$ be centered Gaussian random vectors in $\RR^{p}$
with covariance matrices $\Sigma^{X} =(\sigma^{X}_{jk})_{1 \leq j,k \leq p}$ and
$\Sigma^{Y}=(\sigma^{Y}_{jk})_{1 \leq j,k \leq p}$, respectively.
The purpose of this section is to give error bounds on
the difference of the expectations of smooth functions and the distribution functions
of
\begin{equation*}
\max_{1 \leq j \leq p} X_{j} \quad \text{and} \quad \max_{1 \leq j \leq p} Y_{j}
\end{equation*}
in terms of $p$ and
\begin{equation*}
\Delta :=\max_{1\leq j,k\leq p} | \sigma^{X}_{jk}-\sigma^{Y}_{jk} |.
\end{equation*}
The problem of comparing distributions of maxima is of intrinsic difficulty since the maximum function $z=(z_{1},\dots,z_{p})^{T} \mapsto \max_{1 \leq j \leq p} z_{j}$ is non-differentiable.
To circumvent the problem, we use a smooth approximation of the maximum function. For $z=(z_{1},\dots,z_{p})^{T} \in \RR^{p}$, consider the function:
\begin{equation*}
F_{\beta}(z):=\beta^{-1}\log\left(\sum_{j=1}^{p}\exp(\beta z_{j})\right),
\end{equation*}
which approximates the maximum function, where $\beta > 0$ is the smoothing parameter that controls the level of approximation (we call this function the ``smooth max function'').
Indeed, an elementary calculation shows that for every $z \in \RR^{p}$,
\begin{equation}\label{eq: smooth max property}
0 \leq F_{\beta}(z)- \max_{1 \leq j \leq p} z_{j} \leq \beta^{-1} \log p.
\end{equation}
This smooth max function arises in the definition of ``free energy'' in spin glasses. See, e.g., \cite{Talagrand2003} and \cite{Panchenko2013}.
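The two-sided bound (\ref{eq: smooth max property}) is straightforward to verify numerically via the standard stabilized log-sum-exp computation; the vector and the values of $p$ and $\beta$ below are arbitrary illustrations.

```python
import math
import random

def smooth_max(z, beta):
    """F_beta(z) = beta^{-1} * log(sum_j exp(beta * z_j)), computed stably."""
    m = max(z)
    return m + math.log(sum(math.exp(beta * (zj - m)) for zj in z)) / beta

random.seed(0)
p, beta = 50, 10.0
z = [random.gauss(0.0, 1.0) for _ in range(p)]

gap = smooth_max(z, beta) - max(z)
# 0 <= F_beta(z) - max_j z_j <= log(p) / beta, for every z
print(0.0 <= gap <= math.log(p) / beta)  # True
```

Subtracting the maximum before exponentiating avoids overflow while leaving $F_{\beta}$ unchanged, since the shift cancels in the log-sum-exp identity.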
Here is the first theorem of this section.
\begin{theorem}[Comparison bounds for smooth functions]
\label{theorem:comparison}
For every $g \in C^2(\RR)$ with $\| g' \|_{\infty} \vee \| g'' \|_{\infty} < \infty$ and every $\beta>0$,
\begin{align*}
&| \Ep[g(F_{\beta}(X))] - \Ep [g(F_{\beta}(Y))] | \leq (\| g'' \|_{\infty}/2 + \beta \| g' \|_{\infty})\Delta,\\
\intertext{and hence}
&| \Ep [ g (\max_{1 \leq j \leq p}X_{j} ) ] - \Ep [ g(\max_{1 \leq j \leq p} Y_{j})] | \leq (\| g'' \|_{\infty}/2 + \beta \| g' \|_{\infty}) \Delta + 2 \beta^{-1} \| g' \|_{\infty} \log p.
\end{align*}
\end{theorem}
\begin{proof}
See Section \ref{sec: proof for section 2}.
\end{proof}
\begin{remark}
Minimizing the second bound with respect to $\beta > 0$, we have
\begin{equation*}
| \Ep [ g (\max_{1 \leq j \leq p}X_{j} )] - \Ep [ g(\max_{1 \leq j \leq p} Y_{j})] | \leq \| g'' \|_{\infty}\Delta/2+ 2\| g' \|_{\infty}\sqrt{2\Delta\log p}.
\end{equation*}
This result extends the work of \cite{Chatterjee2005b}, which derived the following Sudakov-Fernique type bound on the difference of the expectations of the Gaussian maxima:
\begin{equation*}
| \Ep [ \max_{1 \leq j \leq p}X_{j} ] - \Ep [\max_{1 \leq j \leq p} Y_{j}] | \leq 2\sqrt{2\Delta\log p}.
\end{equation*}
\end{remark}
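This Sudakov-Fernique type bound can be illustrated by simulation. The special case below (independent coordinates, with $Y$ a coordinatewise scaled copy of $X$, so that $\Delta = |\sigma^{2}-1|$) is an illustrative construction of ours, not an example from \cite{Chatterjee2005b}.

```python
import math
import random

random.seed(1)
p, reps = 100, 2000
s2 = 1.5                          # Var(Y_j); Var(X_j) = 1, so Delta = 0.5
delta = abs(s2 - 1.0)

# Monte Carlo estimate of E[max_j X_j] for X ~ N(0, I_p); by scaling,
# E[max_j Y_j] = sqrt(s2) * E[max_j X_j] when Y = sqrt(s2) * X coordinatewise.
mean_max = sum(max(random.gauss(0.0, 1.0) for _ in range(p))
               for _ in range(reps)) / reps
diff = (math.sqrt(s2) - 1.0) * mean_max    # |E max X - E max Y|

bound = 2.0 * math.sqrt(2.0 * delta * math.log(p))
print(diff <= bound)  # True: the bound holds, here with a large margin
```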
Theorem \ref{theorem:comparison} is not applicable to functions of the form $g(z) = 1(z \leq x)$ and hence does not directly lead to a bound on the Kolmogorov distance between $\max_{1 \leq j \leq p} X_{j}$ and $\max_{1 \leq j \leq p} Y_{j}$ (recall that the Kolmogorov distance between the distributions of two real-valued random variables
$\xi$ and $\eta$ is defined by $\sup_{x \in \RR} | \Pr ( \xi \leq x ) - \Pr (\eta \leq x) |$).
Nevertheless, we have the following bound on the Kolmogorov distance.
\begin{theorem}[Comparison of distributions]\label{cor: distances Gaussian to Gaussian}
Suppose that $p \geq 2$ and $\sigma^{Y}_{jj} > 0$ for all $1 \leq j \leq p$. Then
\begin{equation}
\sup_{x \in\RR} | \Pr ( \max_{1 \leq j \leq p} X_{j}\leq x ) -\Pr ( \max_{1 \leq j \leq p} Y_{j}\leq x ) | \leq C \Delta^{1/3}(1 \vee \log (p/\Delta))^{2/3}, \label{eq: K-distance}
\end{equation}
where $C>0$ depends only on $\min_{1 \leq j \leq p} \sigma_{jj}^{Y}$ and $\max_{1 \leq j \leq p} \sigma_{jj}^{Y}$ (the right side is understood to be $0$ when $\Delta = 0$).
\end{theorem}
\begin{proof}
See Section \ref{sec: proof for section 2}.
\end{proof}
Deriving a bound on the Kolmogorov distance between $\max_{1 \leq j \leq p}X_{j}$ and $\max_{1 \leq j \leq p}Y_{j}$ from Theorem \ref{theorem:comparison} is {\em not} a trivial issue and this step relies on the {\em anti-concentration} inequality for maxima of (not necessarily independent) Gaussian random variables, which we will study in Section \ref{sec: anti-concentration}. Interestingly, the proof of Theorem \ref{cor: distances Gaussian to Gaussian} is substantially different from the (``textbook'') proof of classical Slepian's inequality. The simplest form of Slepian's inequality states that
\begin{equation*}
\Pr ( \max_{1 \leq j \leq p} X_{j} \leq x) \leq \Pr ( \max_{1 \leq j \leq p} Y_{j} \leq x), \ \forall x \in \RR,
\end{equation*}
whenever $\sigma_{jj}^{X} = \sigma_{jj}^{Y}$ and $\sigma_{jk}^{X} \leq \sigma_{jk}^{Y}$ for all $1 \leq j,k \leq p$.
This inequality is immediately deduced from the following expression:
\begin{multline}
\Pr ( \max_{1 \leq j \leq p} X_{j}\leq x ) -\Pr ( \max_{1 \leq j \leq p} Y_{j}\leq x ) \\
= \sum_{1 \leq j < k \leq p} (\sigma_{jk}^{X} - \sigma_{jk}^{Y}) \int_{0}^{1} \left \{ \int_{-\infty}^{x} \cdots \int_{-\infty}^{x} \frac{\partial^{2} f_{t}(z)}{\partial z_{j}\partial z_{k}} dz \right \} dt, \label{eq: slepian expression}
\end{multline}
where $\sigma_{jj}^{X} = \sigma_{jj}^{Y},1 \leq \forall j \leq p$, is assumed. Here $f_{t}$ denotes the density function of $N(0,t \Sigma^{X} + (1-t) \Sigma^{Y})$. See \cite{Leadbetter1983}, page 82, for this expression.
The expression (\ref{eq: slepian expression}) is of importance and indeed a source of many interesting probabilistic results (see, e.g., \cite{LS02} and \cite{Y09} for recent related works).
It is not clear (or at least non-trivial), however, whether a bound similar in nature to Theorem \ref{cor: distances Gaussian to Gaussian} can be deduced from the expression (\ref{eq: slepian expression}) when there is no restriction on the covariance matrices except for the condition that $\sigma_{jj}^{X} = \sigma_{jj}^{Y},1 \leq \forall j \leq p$, and here we take a different route.
The key features of Theorem \ref{cor: distances Gaussian to Gaussian} are: (i) the bound on the Kolmogorov distance between the maxima of Gaussian random vectors in $\RR^{p}$ depends on the dimension $p$ only through $\log p$ and the maximum difference of the covariance matrices $\Delta$, and (ii) it allows for arbitrary covariance matrices for $X$ and $Y$ (except for the nondegeneracy condition that $\sigma_{jj}^{Y} > 0, \ 1 \leq \forall j \leq p$). These features have an important implication to statistical applications, as discussed below.
\subsection{Conditional multiplier central limit theorem}
Consider the following problem. Suppose that we are given $n$ independent centered random vectors of observations $Z_{1},\dots,Z_{n}$ in $\RR^{p}$. Here
$Z_{1},\dots,Z_{n}$ are generally non-Gaussian, and the dimension $p$ is allowed to increase with $n$ (i.e., the case where $p=p_{n} \to \infty$ as $n \to \infty$ is allowed).
We suppress the possible dependence of $p$ on $n$ for notational convenience. Suppose that each $Z_{i}$ has a finite covariance matrix $\Ep [ Z_{i} Z_{i}^{T} ]$.
Consider the following normalized sum:
\begin{equation*}
S_{n} := (S_{n,1},\dots,S_{n,p})^{T} = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} Z_{i}.
\end{equation*}
The problem here is to approximate the distribution of $\max_{1 \leq j \leq p} S_{n,j}$.
Statistics of this form arise frequently in modern statistical applications. The exact distribution of $\max_{1 \leq j \leq p} S_{n,j}$ is generally unknown.
An intuitive idea to approximate the distribution of $\max_{1 \leq j \leq p} S_{n,j}$ is to use the Gaussian approximation. Let $V_{1},\dots,V_{n}$ be independent Gaussian random vectors in $\RR^{p}$ such that $V_{i} \sim N(0,\Ep [ Z_{i} Z_{i}^{T} ])$, and define
\begin{equation*}
T_{n} := (T_{n,1},\dots,T_{n,p})^{T} := \frac{1}{\sqrt{n}} \sum_{i=1}^{n} V_{i} \sim N(0,n^{-1} {\textstyle \sum}_{i=1}^{n} \Ep [Z_{i}Z_{i}^{T}] ).
\end{equation*}
It is expected that the distribution of $\max_{1 \leq j \leq p} T_{n,j}$ is close to that of $\max_{1 \leq j \leq p} S_{n,j}$ in the following sense:
\begin{equation}
\sup_{x \in \RR} | \Pr ( \max_{1 \leq j \leq p} T_{n,j} \leq x ) - \Pr ( \max_{1 \leq j \leq p} S_{n,j} \leq x ) | \to 0, \ n \to \infty. \label{eq: gaussian-approx}
\end{equation}
When $p$ is fixed, (\ref{eq: gaussian-approx}) will follow from the classical Lindeberg-Feller central limit theorem, subject to the Lindeberg conditions. The recent paper by \cite{CCK12} established conditions under which this Gaussian approximation (\ref{eq: gaussian-approx}) holds even when $p$ is comparable or much larger than $n$.
For example, \cite{CCK12} proved that if $c_{1} \leq n^{-1} \sum_{i=1}^{n} \Ep [ Z_{ij}^{2} ] \leq C_{1}$ and $\Ep [ \exp (| Z_{ij} |/ C_{1} ) ] \leq 2$ for all $1 \leq i \leq n$ and $1 \leq j \leq p$ for some $0 < c_{1} < C_{1}$,
then (\ref{eq: gaussian-approx}) holds as long as $\log p = o(n^{1/7})$.
The Gaussian approximation (\ref{eq: gaussian-approx}) is in itself an important step, but in the general case where the covariance matrix $n^{-1} \sum_{i=1}^{n} \Ep [ Z_{i} Z_{i}^{T} ]$ is unknown, it is not directly applicable for purposes of statistical inference. In such cases, the following {\em multiplier bootstrap} procedure will be useful. Let $\eta_{1},\dots,\eta_{n}$ be independent standard Gaussian random variables independent of $Z_{1}^{n} := \{ Z_{1},\dots,Z_{n} \}$. Consider the following randomized sum:
\begin{equation*}
S_{n}^{\eta} := (S_{n,1}^{\eta}, \dots, S_{n,p}^{\eta} )^{T} := \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \eta_{i} Z_{i}.
\end{equation*}
Since conditional on $Z_{1}^{n}$,
\begin{equation*}
S_{n}^{\eta} \sim N(0,n^{-1} {\textstyle \sum}_{i=1}^{n} Z_{i}Z_{i}^{T}),
\end{equation*}
it is natural to expect that the conditional distribution of $\max_{1 \leq j \leq p} S_{n,j}^{\eta}$ is ``close'' to the distribution of $\max_{1 \leq j \leq p} T_{n,j}$ and hence that of $\max_{1 \leq j \leq p} S_{n,j}$. Note here that the conditional distribution of $S^{\eta}_{n}$ is completely known, which makes this distribution useful for purposes of statistical inference. The following proposition makes this intuition rigorous.
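The multiplier bootstrap just described admits a short numerical sketch; the uniform data, the sizes $n$, $p$, $B$, and the $95\%$ level below are illustrative choices of ours, not from the paper.

```python
import math
import random

random.seed(2)
n, p, B = 200, 20, 500

# Observed (non-Gaussian, centered) data Z_1, ..., Z_n in R^p, held fixed.
Z = [[random.uniform(-1.0, 1.0) for _ in range(p)] for _ in range(n)]

def max_stat(w):
    """max_j n^{-1/2} sum_i w_i Z_ij for multiplier weights w_1, ..., w_n."""
    return max(sum(wi * zi[j] for wi, zi in zip(w, Z)) / math.sqrt(n)
               for j in range(p))

# Multiplier bootstrap: redraw only the Gaussian multipliers eta_i and
# read a quantile off the conditional distribution of max_j S^eta_{n,j}.
draws = sorted(max_stat([random.gauss(0.0, 1.0) for _ in range(n)])
               for _ in range(B))
q95 = draws[int(0.95 * B)]   # bootstrap critical value at level 95%
print(round(q95, 2))
```

Since the conditional law of $S_{n}^{\eta}$ given $Z_{1}^{n}$ is exactly Gaussian with the empirical covariance, the quantile above is computable from data alone, which is what makes the procedure usable for inference.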
\begin{proposition}[Conditional multiplier central limit theorem]
\label{prop: multiplier bootstrap}
Work with the setup as described above. Suppose that $p \geq 2$ and there are some constants $0 < c_{1} < C_{1}$ such that
$c_{1} \leq n^{-1} \sum_{i=1}^{n} \Ep [ Z_{ij}^{2} ] \leq C_{1}$ for all $1 \leq j \leq p$. Moreover, suppose that $\hat{\Delta} := \max_{1 \leq j,k \leq p} | n^{-1} \sum_{i=1}^{n} (Z_{ij}Z_{ik} - \Ep [ Z_{ij}Z_{ik} ])| = o_{\Pr} ( (\log p)^{-2})$. Then
\begin{equation}
\sup_{x \in \RR} | \Pr ( \max_{1 \leq j \leq p} S_{n,j}^{\eta} \leq x \mid Z_{1}^{n} ) - \Pr ( \max_{1 \leq j \leq p} T_{n,j} \leq x ) | \stackrel{\Pr}{\to} 0, \ \text{as} \ n \to \infty. \label{eq: cmclt}
\end{equation}
Here recall that $p$ is allowed to increase with $n$.
\end{proposition}
\begin{proof}
By Theorem \ref{cor: distances Gaussian to Gaussian}, we have
\begin{equation*}
\sup_{x \in \RR} | \Pr ( \max_{1 \leq j \leq p} S_{n,j}^{\eta} \leq x \mid Z_{1}^{n} ) - \Pr ( \max_{1 \leq j \leq p} T_{n,j} \leq x ) | = O\{ \hat{\Delta}^{1/3}(1 \vee \log (p/\hat{\Delta}))^{2/3} \}.
\end{equation*}
The right side is $o_{\Pr}(1)$ as soon as $\hat{\Delta} = o_{\Pr}(( \log p)^{-2})$.
\end{proof}
We call this result a ``conditional multiplier central limit theorem,'' where the terminology follows that in empirical process theory. See \cite{VW96}, Chapter 2.9.
The notable features of this proposition, which are inherited from the features of Theorem \ref{cor: distances Gaussian to Gaussian} discussed above, are: (i) (\ref{eq: cmclt}) can hold even when $p$ is much larger than $n$, and (ii) it allows for arbitrary covariance matrices for $Z_{i}$ (except for the mild scaling condition that $c_{1} \leq n^{-1} \sum_{i=1}^{n} \Ep [ Z_{ij}^{2} ] \leq C_{1}$). The second point is clearly desirable in statistical applications as the information on the true covariance structure is generally (but not always) unavailable.
For the first point, we have the following estimate on $\Ep [ \hat{\Delta} ]$.
\begin{lemma}
\label{lem: Delta estimate}
Let $p \geq 2$. There exists a universal constant $C > 0$ such that
\begin{equation*}
\Ep [ \hat{\Delta} ] \leq C \left [ \max_{1 \leq j \leq p} (n^{-1} {\textstyle \sum}_{i=1}^{n} \Ep [ Z_{ij}^{4} ])^{1/2} \sqrt{\frac{ \log p}{n}} + ( \Ep [ \max_{1 \leq i \leq n} \max_{1 \leq j \leq p} Z_{ij}^{4} ] )^{1/2}\frac{\log p}{n} \right] .
\end{equation*}
\end{lemma}
\begin{proof}
See Appendix.
\end{proof}
Hence, with the help of Lemma 2.2.2 in \cite{VW96}, we can find various primitive conditions under which $\hat{\Delta} = o_{\Pr}(( \log p)^{-2})$, so that (\ref{eq: cmclt}) holds.
Consider the following examples.
Case (a): Suppose that $\Ep [ \exp (| Z_{ij} |/ C_{1} ) ] \leq 2$ for all $1 \leq i \leq n$ and $1 \leq j \leq p$ for some $C_{1} > 0$.
In this case, it is not difficult to verify that $\hat{\Delta} = o_{\Pr}((\log p)^{-2})$ as soon as $\log p = o(n^{1/5})$.
Case (b): Another type of $Z_{ij}$ which arises in regression applications is of the form $Z_{ij} = \eps_{i} x_{ij}$, where $\eps_{i}$ are stochastic with $\Ep [ \eps_{i} ] = 0$ and $\max_{1 \leq i \leq n}\Ep[ | \eps_{i} |^{4q} ] = O(1)$ for some $q \geq 1$, and $x_{ij}$ are non-stochastic (typically, $\eps_{i}$ are ``errors'' and $x_{ij}$ are ``regressors''). Suppose that $x_{ij}$ are normalized in such a way that $n^{-1} \sum_{i=1}^{n} x_{ij}^{2} = 1$, and there are bounds $B_{n} \geq 1$ such that $\max_{1 \leq i \leq n} \max_{1 \leq j \leq p} | x_{ij} | \leq B_{n}$, where we allow $B_{n} \to \infty$. In this case, $\hat{\Delta} = o_{\Pr}((\log p)^{-2})$ as soon as
\begin{equation*}
\max \{ B_{n}^{2} (\log p)^{5}, B_{n}^{4q/(2q-1)} (\log p)^{6q/(2q-1)} \} = o(n),
\end{equation*}
since $\max_{1 \leq j \leq p} ( n^{-1}\sum_{i=1}^{n} \Ep [ (\eps_{i} x_{ij})^{4} ]) \leq B_{n}^{2} \max_{1 \leq i \leq n} \Ep [ \eps_{i}^{4} ] = O(B_{n}^{2})$ and $\Ep [ \max_{1 \leq i \leq n} \max_{1 \leq j \leq p} (\eps_{i} x_{ij})^{4} ] \leq B_{n}^{4} \Ep[\max_{1 \leq i \leq n} \eps_{i}^{4} ] = O(n^{1/q} B_{n}^{4})$.
Importantly, in these examples, for (\ref{eq: cmclt}) to hold, $p$ can increase exponentially in some fractional power of $n$.
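As a quick illustration of the requirement $\hat{\Delta} = o_{\Pr}((\log p)^{-2})$, one can simulate bounded (hence sub-Gaussian, as in case (a)) coordinates and watch $\hat{\Delta}$ shrink with $n$; the uniform distribution and the sample sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 30

def delta_hat(n):
    """max_{j,k} | n^{-1} sum_i Z_ij Z_ik - E[Z_ij Z_ik] | for Z_ij iid U(-1,1)."""
    Z = rng.uniform(-1.0, 1.0, size=(n, p))
    true_cov = np.eye(p) / 3.0      # E[Z_ij Z_ik] = 1/3 if j = k, else 0
    return np.abs(Z.T @ Z / n - true_cov).max()

# Delta_hat shrinks roughly like sqrt(log p / n), so increasing n by a
# factor of 100 should shrink Delta_hat by roughly a factor of 10.
dh_small, dh_large = delta_hat(200), delta_hat(20000)
print(dh_large < dh_small)  # True
```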
\section{Anti-concentration Bounds}
\label{sec: anti-concentration}
The following theorem provides bounds on the {\em L\'{e}vy concentration function}
of the maximum of $p$ Gaussian random variables, where the terminology is borrowed from \cite{RV09}.
\begin{definition}[\cite{RV09}, Definition 3.1]
The {\em L\'{e}vy concentration function} of a real valued random variable $\xi$ is defined for $\epsilon > 0$ as
\begin{equation*}
\LL (\xi,\epsilon) = \sup_{x \in \RR} \Pr ( | \xi - x | \leq \epsilon ).
\end{equation*}
\end{definition}
\begin{theorem}[Anti-concentration]\label{thm: anticoncentration}
Let $X_{1},\dots,X_{p}$ be (not necessarily independent) centered Gaussian random variables with $\sigma_{j}^{2} := \Ep [ X_{j}^{2} ]>0$ for all $1 \leq j \leq p$. Moreover, let
$\underline{\sigma} := \min_{1 \leq j \leq p} \sigma_{j}, \bar{\sigma} := \max_{1 \leq j \leq p} \sigma_{j}$, and $a_{p} := \Ep [ \max_{1 \leq j \leq p} (X_{j}/\sigma_{j}) ]$.
\begin{enumerate}
\item[(i)] If the variances are all equal, namely $\underline{\sigma} = \bar{\sigma} = \sigma$, then for every $\epsilon > 0$,
\begin{equation*}
\LL (\max_{1 \leq j \leq p} X_{j},\epsilon) \leq 4 \epsilon (a_p + 1)/\sigma.
\end{equation*}
\item[(ii)] If the variances are not equal, namely $\underline{\sigma} < \bar{\sigma}$, then for every $\epsilon > 0$,
\begin{equation*}
\LL (\max_{1 \leq j \leq p} X_{j},\epsilon) \leq C \epsilon \{ a_p + \sqrt{1 \vee \log (\underline{\sigma}/\epsilon)} \},
\end{equation*}
where $C>0$ depends only on $\underline{\sigma}$ and $\bar{\sigma}$.
\end{enumerate}
\end{theorem}
The following simpler corollary is useful in applications. This corollary will be used in the proof of Theorem \ref{cor: distances Gaussian to Gaussian}.
\begin{corollary}\label{cor: anticoncentration}
Let $X_{1},\dots,X_{p}$ be (not necessarily independent) centered Gaussian random variables with $\sigma_{j}^{2} := \Ep [ X_{j}^{2} ] > 0$ for all $1 \leq j \leq p$.
Let $\underline{\sigma} := \min_{1 \leq j \leq p} \sigma_{j}$ and $\bar{\sigma} := \max_{1 \leq j \leq p} \sigma_{j}$. Then for every $\epsilon > 0$,
\begin{equation*}
\LL (\max_{1 \leq j \leq p} X_{j},\epsilon) \leq C \epsilon \sqrt{1 \vee \log (p/\epsilon )},
\end{equation*}
where $C>0$ depends only on $\underline{\sigma}$ and $\bar{\sigma}$.
When $\sigma_{j}$ are all equal, $\log (p/\epsilon)$ on the right side can be replaced by $\log p$.
\end{corollary}
\begin{proof}[Proof of Corollary \ref{cor: anticoncentration}]
Since $X_{j}/\sigma_{j} \sim N(0,1)$, by a standard calculation, we have $a_{p} \leq \sqrt{2 \log p}$.
See, e.g., Proposition 1.1.3 of \cite{Talagrand2003}. Hence the corollary follows from Theorem \ref{thm: anticoncentration}.
\end{proof}
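Part (i) of Theorem \ref{thm: anticoncentration} has an explicit constant and so can be checked by Monte Carlo, estimating the L\'{e}vy concentration function by scanning centers $x$ over a finite grid. The values of $p$ and $\epsilon$ below are illustrative, and $a_{p}$ is itself estimated from the draws.

```python
import random

random.seed(4)
p, eps, reps = 50, 0.05, 4000

# Monte Carlo draws of max_j X_j for X ~ N(0, I_p): equal variances, sigma = 1.
M = sorted(max(random.gauss(0.0, 1.0) for _ in range(p)) for _ in range(reps))
a_p = sum(M) / reps                   # estimate of a_p = E[max_j X_j / sigma_j]

# Estimate the Levy concentration function sup_x P(|max_j X_j - x| <= eps)
# by scanning candidate centers x over a grid covering the observed range.
lo, hi, steps = M[0], M[-1], 400
levy = max(sum(1 for m in M if abs(m - (lo + (hi - lo) * t / steps)) <= eps)
           for t in range(steps + 1)) / reps

bound = 4.0 * eps * (a_p + 1.0)       # part (i) of the theorem, with sigma = 1
print(levy <= bound)  # True: here levy is about 0.1 while the bound is about 0.65
```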
\begin{remark}[Anti-concentration vs. small ball probabilities]
The problem of bounding the L\'{e}vy concentration function $\LL (\max_{1 \leq j \leq p} X_{j},\epsilon)$ is qualitatively different from
the problem of bounding $\Pr ( \max_{1 \leq j \leq p} | X_{j} | \leq x )$. For a survey on the latter problem, called the ``small ball problem'', we refer the reader to \cite{LS01}.
\end{remark}
\begin{remark}[Concentration vs. anti-concentration]
\label{rem: concentration vs anticoncentration}
{\em Concentration inequalities} refer to inequalities bounding $\Pr (| \xi - x | > \epsilon )$ for a random variable $\xi$ (typically $x$ is the mean or median of $\xi$). See the monograph \cite{L01} for a study of the concentration of measure phenomenon.
{\em Anti-concentration inequalities} in turn refer to reverse inequalities, i.e., inequalities bounding $\Pr (| \xi - x | \leq \epsilon )$. Theorem \ref{thm: anticoncentration} provides anti-concentration inequalities for $\max_{1 \leq j \leq p} X_{j}$.
\cite{VR07} remarked that ``concentration is better understood than anti-concentration''. In the present case, the Gaussian concentration inequality (see \cite{L01}, Theorem 7.1) states that
\begin{equation*}
\Pr ( | \max_{1 \leq j \leq p} X_{j} - \Ep [ \max_{1 \leq j \leq p} X_{j} ] | \geq r) \leq 2 e^{-r^{2}/(2 \bar{\sigma}^{2})}, \ r > 0,
\end{equation*}
where the mean can be replaced by the median. This inequality is well known and dates back to \cite{B75} and \cite{ST78}.
To the best of our knowledge, however, the reverse inequalities in Theorem \ref{thm: anticoncentration} were not known and are new.
\end{remark}
\begin{remark}[Anti-concentration for maximum of moduli, $\max_{1 \leq j \leq p} | X_{j} |$]
Versions of Theorem \ref{thm: anticoncentration} and Corollary \ref{cor: anticoncentration} continue to hold for $\max_{1 \leq j \leq p} | X_{j} |$.
That is, e.g., when $\sigma_{j}$ are all equal ($\sigma_{j} = \sigma$), we have $\LL(\max_{1 \leq j \leq p} | X_{j} |, \epsilon ) \leq 4 \epsilon ( a_{p}'+1)/\sigma$, where $a_{p}' := \Ep [ \max_{1 \leq j \leq p} | X_{j} |/\sigma ]$. To see this, observe that $\max_{1 \leq j \leq p} | X_{j} | = \max_{1 \leq j \leq 2p} X_{j}'$ where $X_{j}'=X_{j}$ for $j=1,\dots,p$ and $X_{p+j}' = - X_{j}$ for $j=1,\dots,p$. Hence we may apply Theorem \ref{thm: anticoncentration} to $X_{1}',\dots,X_{2p}'$ to obtain the desired conclusion.
\end{remark}
The main feature of Theorem \ref{thm: anticoncentration} is the fact that it provides {\em qualitative} bounds on the L\'{e}vy concentration function $\LL (\max_{1 \leq j \leq p} X_{j},\epsilon)$.
In a trivial example where $p=1$, it is immediate to see that $\Pr ( | X_{1} - x| \leq \epsilon ) \leq \epsilon \sqrt{2/(\pi \sigma_{1}^{2})}$.
A non-trivial case is the situation where $p \to \infty$. In such a case, it is typically not known whether $\max_{1 \leq j \leq p} X_{j}$ has a limiting distribution as $p \to \infty$ (recall that apart from $\underline{\sigma} > 0$, we allow for general covariance structures between $X_{1},\dots,X_{p}$), and therefore it is by no means trivial whether $\LL (\max_{1 \leq j \leq p} X_{j},\epsilon_{p}) \to 0$ for a given sequence $\epsilon = \epsilon_{p} \to 0$, or how fast $\epsilon_{p}$ must tend to zero to guarantee that $\LL (\max_{1 \leq j \leq p} X_{j},\epsilon_{p}) \to 0$.
Theorem \ref{thm: anticoncentration} answers this question with explicit, non-asymptotic bounds.
\begin{remark}[Relation to Ball's reverse isoperimetric inequality]
\label{rem: ball}
Application of Ball's \cite{Ball1993} reverse isoperimetric
inequality to our problem gives the following anti-concentration bound:
\begin{equation}\label{ball}
\LL (\max_{1 \leq j \leq p} X_{j},\epsilon) \leq C \epsilon p^{1/4}.
\end{equation}
More precisely, this bound follows from equation (1.4) noted in \cite{Bentkus2003},
which is based on \cite{Ball1993}, and from the fact that the sets of the form $A_{\max}(t) = \{
x \in \RR^{p} : \max_{1 \leq j \leq p} x_j \leq t\}$ are convex. Thus, the dimension $p$ appears as $p^{1/4}$ in the bound (\ref{ball}). In contrast,
our anti-concentration bound has $\sqrt{1 \vee \log (p/\epsilon )}$ instead of $p^{1/4}$, which results
in considerably tighter bounds when $p$ is very large. Note, however, that Ball's inequality is universal
for a broad collection $\mathcal{A}$ of convex bodies, whereas the anti-concentration
inequality developed here can be viewed as a reverse isoperimetric inequality for the collection of sets $\mathcal{A}_{\max} = \{ A_{\max}(t) : t \in \RR \}$.
\end{remark}
The presence of $a_{p}$ in the bounds is essential and cannot be removed, as the following example shows.
In particular, there is no substantially sharper universal bound on the concentration function than that given in Theorem \ref{thm: anticoncentration}.
Potentially, there could be refinements but they would have to rely on the particular (hence
non-universal) features of the covariance structure between $X_{1},\dots,X_{p}$.
\begin{example}[Partial converse of Theorem \ref{thm: anticoncentration}]
\label{ex1}
Let $X_{1},\dots,X_{p}$ be independent standard Gaussian random variables.
By Theorem 1.5.3 of \cite{Leadbetter1983}, as $p \to \infty$,
\begin{equation}
b_{p} (\max_{1 \leq j \leq p}X_{j} - d_{p}) \stackrel{d}{\to} G(0,1), \label{weak}
\end{equation}
where
\begin{equation*}
b_{p} := \sqrt{2 \log p}, \ \ d_{p} := b_{p} - \frac{ \log(4 \pi) + \log \log p }{2 b_{p}},
\end{equation*}
and $G(0,1)$ denotes the standard Gumbel distribution, i.e., the distribution having the density $g(x) = e^{-x} e^{-e^{-x}}$ for $x \in \mathbb{R}$.
In fact, we can show that the density of $b_{p} (\max_{1 \leq j \leq p}X_{j} - d_{p})$ converges to that of $G(0,1)$ locally uniformly.
To see this, we begin with noting that the density of $b_{p} (\max_{1 \leq j \leq p}X_{j} - d_{p})$ is given by
\begin{equation*}
g_{p}(x) = \frac{p}{b_{p}}\phi(d_{p} + b_{p}^{-1}x) [\Phi(d_{p}+b_{p}^{-1}x)]^{p-1},
\end{equation*}
where $\phi(\cdot)$ and $\Phi(\cdot)$ are the density and distribution functions of the standard Gaussian distribution, respectively.
Pick any $x \in \mathbb{R}$. Since, by the weak convergence result (\ref{weak}),
\begin{equation*}
[\Phi(d_{p}+b_{p}^{-1}x)]^p =\Pr(b_{p} (\max_{1 \leq j \leq p}X_{j} - d_{p})\leq x) \to e^{-e^{-x}}, \ p \to \infty,
\end{equation*}
we have $[\Phi(d_{p}+b_{p}^{-1}x)]^{p-1} \to e^{-e^{-x}}$. Hence it remains to show that
\begin{equation*}
\frac{p}{b_{p}}\phi(d_{p} + b_{p}^{-1}x) \to e^{-x}.
\end{equation*}
Taking the logarithm of the left side yields
\begin{equation}
\log p -\log b_{p} - \log (\sqrt{2\pi}) - (d_{p}+b_{p}^{-1}x)^{2}/2. \label{log}
\end{equation}
Expanding $(d_{p}+b_{p}^{-1}x)^{2}$ gives that
\begin{equation*}
d_{p}^{2} + 2 d_{p} b_{p}^{-1}x + b_{p}^{-2}x^{2} = b_{p}^{2} - \log \log p - \log (4\pi) + 2x + o(1), \ p \to \infty,
\end{equation*}
by which we have $(\ref{log}) = -x + o(1)$. This shows that $g_{p}(x) \to g(x)$ for all $x \in \RR$.
Moreover, this convergence takes place locally uniformly in $x$, i.e., for every $K > 0$, $g_{p}(x) \to g(x)$ uniformly in $x \in [-K, K]$.
On the other hand, the density of $\max_{1 \leq j \leq p} X_{j}$ is given by $f_{p}(x) = p \phi(x) [ \Phi(x) ]^{p-1}$.
By this form, for every $K > 0$, there exist a constant $c > 0$ and a positive integer $p_{0}$ depending only on $K$ such that for $p \geq p_{0}$,
\begin{equation*}
\inf_{x \in [ d_{p} - K b_{p}^{-1}, d_{p} +K b_{p}^{-1}]} b_{p}^{-1} f_{p}(x) = \inf_{x \in [-K,K]} g_{p}(x) \geq \inf_{x \in [-K,K]} g(x) + o(1) \geq c,
\end{equation*}
which shows that for $p \geq p_{0}$,
\begin{equation*}
f_{p}(x) \geq c b_{p}, \ \forall x \in [ d_{p} - Kb_{p}^{-1}, d_{p} +K b_{p}^{-1}].
\end{equation*}
Therefore, we conclude that for $p \geq p_{0}$,
\begin{equation*}
\Pr(|\max_{1 \leq j \leq p} X_{j} - d_{p} | \leq \epsilon) = \int_{d_{p}-\epsilon}^{d_{p}+\epsilon} f_{p}(x) dx \geq 2 c \epsilon b_{p}, \ \forall \epsilon \in [ 0, K b_{p}^{-1} ].
\end{equation*}
By the Gaussian maximal inequality and Lemma 2.3.15 of \cite{D99}, we have
\begin{equation*}
\sqrt{\log p}/12 \leq \mathbb{E}[ \max_{1 \leq j \leq p} X_{j}] \leq \sqrt{2 \log p}.
\end{equation*}
Hence, by the previous result, for every $K' > 0$, there exist a constant $c' > 0$ and a positive integer $p_{0}'$ depending only on $K'$ such that for $p \geq p_{0}'$,
$a_{p} \geq 1$ and
\begin{equation*}
\LL (\max_{1 \leq j \leq p} X_{j},\epsilon) \geq \Pr(|\max_{1 \leq j \leq p} X_{j} - d_{p} | \leq \epsilon) \geq c' \epsilon a_{p}, \ \forall \epsilon \in [0, K'a^{-1}_{p} ].
\end{equation*}
\qed
\end{example}
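The slow convergence in Example \ref{ex1} can also be probed numerically. The sketch below (an illustration only) evaluates the exact density $g_{p}$ of $b_{p}(\max_{1 \leq j \leq p} X_{j} - d_{p})$ at $x = 0$ and compares it with the Gumbel density $g(0) = e^{-1}$; the discrepancy decays only at the slow $\log \log p / \log p$ rate.

```python
import math

def gumbel_density(x):
    return math.exp(-x) * math.exp(-math.exp(-x))

def gp_density(x, p):
    # exact density of b_p (max_j X_j - d_p) for p iid standard normals
    b = math.sqrt(2.0 * math.log(p))
    d = b - (math.log(4.0 * math.pi) + math.log(math.log(p))) / (2.0 * b)
    z = d + x / b
    phi = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    tail = 0.5 * math.erfc(z / math.sqrt(2.0))       # 1 - Phi(z)
    # [Phi(z)]^(p-1) computed on the log scale for numerical stability
    return (p / b) * phi * math.exp((p - 1) * math.log1p(-tail))

# discrepancy is ~0.02 even at p = 10^6, reflecting the log log p / log p rate
print(abs(gp_density(0.0, 10**6) - gumbel_density(0.0)))
```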
\section{Proofs for Section \ref{sec: Gaus and multiplier}}
\label{sec: proof for section 2}
\subsection{Proof of Theorem \ref{theorem:comparison}}
Here for a smooth function $f: \RR^{p} \to \RR$, we write $\partial_{j} f (z) = \partial f(z)/\partial z_{j}$ for $z = (z_{1},\dots,z_{p})^{T}$.
We shall use the following version of Stein's identity.
\begin{lemma}[Stein's identity]
\label{lemma: talagrand}
Let $W =(W_{1},\dots,W_{p})^{T}$ be a centered Gaussian random vector in $\RR^{p}$.
Let $f: \RR^{p} \to \RR$ be a $C^1$-function such that $\Ep [ |\partial_{j} f(W) | ] < \infty$ for all $1 \leq j \leq p$. Then for every $1 \leq j \leq p$,
\begin{equation*}
\Ep[W_{j}f(W)]=\sum_{k=1}^p\Ep[W_{j} W_{k}] \Ep [\partial_k f (W)].
\end{equation*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma: talagrand}]
See Section A.6 of \cite{Talagrand2003}; also \cite{ChenGoldsteinShao2011} and \cite{Stein1981}.
\end{proof}
We will use the following properties of the smooth max function.
\begin{lemma}
\label{lemma: smooth max property}
For every $1 \leq j,k \leq p$,
\begin{equation*}
\partial_{j} F_{\beta}(z) = \pi_{j}(z), \quad \partial_{j} \partial_k F_{\beta}(z) = \beta w_{jk} (z),
\end{equation*}
where
\begin{equation*}
\pi_{j}(z) := e^{\beta z_{j}}/{\textstyle \sum}_{m=1}^pe^{\beta z_m}, \ w_{jk}(z) := 1 (j = k) \pi_{j} (z)- \pi_{j}(z) \pi_{k} (z).
\end{equation*}
Moreover,
\begin{equation*}
\pi_{j}(z) \geq 0, \ {\textstyle \sum}_{j=1}^p \pi_{j} (z) = 1, \ {\textstyle \sum}_{j,k=1}^p |w_{jk}(z)| \leq 2.
\end{equation*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma: smooth max property}]
The first property was noted in \cite{Chatterjee2005b}. The other properties follow from a direct calculation.
\end{proof}
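The identities in Lemma \ref{lemma: smooth max property} can be verified by finite differences, assuming (consistently with the stated derivatives) that $F_{\beta}(z) = \beta^{-1} \log \sum_{m=1}^{p} e^{\beta z_{m}}$. The following sketch is an illustration only and plays no role in the proofs.

```python
import math

def F_beta(z, beta):
    # smooth max, assumed form: beta^{-1} log sum_m exp(beta z_m)
    m = max(z)  # shift by the max for numerical stability
    return m + math.log(sum(math.exp(beta * (v - m)) for v in z)) / beta

def pi(z, beta):
    m = max(z)
    e = [math.exp(beta * (v - m)) for v in z]
    s = sum(e)
    return [v / s for v in e]

z, beta, h = [0.5, -1.2, 0.3, 2.0], 3.0, 1e-6
p = pi(z, beta)
assert abs(sum(p) - 1.0) < 1e-12            # the pi_j sum to one
for j in range(len(z)):                      # partial_j F_beta = pi_j
    zp, zm = z[:], z[:]
    zp[j] += h
    zm[j] -= h
    fd = (F_beta(zp, beta) - F_beta(zm, beta)) / (2.0 * h)
    assert abs(fd - p[j]) < 1e-6
# sum_{j,k} |w_jk| <= 2 with w_jk = 1(j=k) pi_j - pi_j pi_k
w_sum = sum(abs((1.0 if j == k else 0.0) * p[j] - p[j] * p[k])
            for j in range(len(z)) for k in range(len(z)))
assert w_sum <= 2.0 + 1e-12
```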
\begin{lemma}\label{lemma: second deriv m}
Let $m := g \circ F_{\beta}$ with $g \in C^{2}(\RR)$. Then for every $1 \leq j,k \leq p$,
\begin{align*}
\partial_{j} \partial_k m(z) = (g'' \circ F_\beta) (z) \pi_{j} (z) \pi_{k} (z)+ \beta (g' \circ F_\beta) (z) w_{jk}(z),
\end{align*}
where $\pi_{j}$ and $w_{jk}$ are defined in Lemma \ref{lemma: smooth max property}.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma: second deriv m}]
The proof follows from a direct calculation.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem:comparison}]
Without loss of generality, we may assume that $X$ and $Y$ are independent, so that $\Ep[X_{j} Y_k]=0$ for all $1 \leq j,k \leq p$.
Consider the following Slepian interpolation between $X$ and $Y$:
\begin{equation*}
Z(t):=\sqrt{t} X+\sqrt{1-t} Y, \ t \in [0,1].
\end{equation*}
Let $m := g \circ F_\beta$ and $\Psi(t):=\Ep[m (Z(t))]$.
Then
\begin{equation*}
|\Ep[m(X)]-\Ep[ m(Y) ]|= | \Psi (1) - \Psi (0)| =\left |\int_{0}^{1} \Psi'(t) dt\right|.
\end{equation*}
Here we have
\begin{align*}
\Psi'(t) & = \frac{1}{2} \sum_{j=1}^p \Ep [ \partial_{j} m(Z(t)) (t^{-1/2}X_{j}-(1-t)^{-1/2}Y_{j}) ] \\
& = \frac{1}{2} \sum_{j=1}^p \sum_{k=1}^p (\sigma_{jk}^{X}-\sigma_{jk}^{Y}) \Ep[\partial_{j} \partial_k m (Z(t)) ],
\end{align*}
where the second equality follows from applying Lemma \ref{lemma: talagrand} to $W = (t^{-1/2}X_{j}-(1-t)^{-1/2}Y_{j}, Z(t)^{T})^{T}$
and $f(W)=\partial_{j} m(Z(t))$. Hence
\begin{align*}
\left |\int_{0}^{1} \Psi'(t) dt\right|
&\leq \frac{1}{2} \sum_{j,k=1}^{p} | \sigma_{jk}^{X}-\sigma_{jk}^{Y} | \left | \int_0^1 \Ep [\partial_{j} \partial_k m(Z(t))] dt \right| \\
&\leq \frac{1}{2} \max_{1 \leq j,k \leq p} | \sigma_{jk}^{X}-\sigma_{jk}^{Y} | \int_0^1 \sum_{j,k=1}^{p} \left | \Ep[\partial_{j} \partial_k m(Z(t))]\right | dt \\
&= \frac{\Delta}{2} \int_0^1 \sum_{j,k=1}^{p} \left | \Ep[\partial_{j} \partial_k m(Z(t))]\right | dt.
\end{align*}
By Lemmas \ref{lemma: smooth max property} and \ref{lemma: second deriv m},
\begin{equation*}
\sum_{j,k=1}^{p} |\partial_{j} \partial_k m(Z(t))| \leq | (g'' \circ F_{\beta})(Z(t))| + 2 \beta | (g' \circ F_{\beta})(Z(t))|.
\end{equation*}
Therefore, we have
\begin{align}
&\left|\Ep[g(F_{\beta}(X))-g(F_{\beta}(Y))]\right| \notag\\
&\leq \Delta \times \left \{ \frac{1}{2} \int_0^1 \Ep[ | (g'' \circ F_{\beta})(Z(t))| ] dt + \beta \int_0^1 \Ep [| (g' \circ F_{\beta})(Z(t))| ]dt\right \} \label{eq: finer bound} \\
&\leq \Delta ( \| g'' \|_{\infty}/2 + \beta \| g' \|_{\infty}), \notag
\end{align}
which leads to the first assertion. The second assertion follows from the inequality (\ref{eq: smooth max property}). This completes the proof.
\end{proof}
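The second assertion invokes the approximation property (\ref{eq: smooth max property}), namely $0 \leq F_{\beta}(z) - \max_{1 \leq j \leq p} z_{j} \leq \beta^{-1} \log p$. Assuming again the form $F_{\beta}(z) = \beta^{-1} \log \sum_{m} e^{\beta z_{m}}$, this two-sided bound can be checked numerically (an illustration only):

```python
import math
import random

def F_beta(z, beta):
    # smooth max, assumed form: beta^{-1} log sum_m exp(beta z_m)
    m = max(z)
    return m + math.log(sum(math.exp(beta * (v - m)) for v in z)) / beta

random.seed(0)
for _ in range(200):
    p = random.randint(2, 50)
    beta = random.uniform(0.5, 10.0)
    z = [random.uniform(-5.0, 5.0) for _ in range(p)]
    gap = F_beta(z, beta) - max(z)
    # 0 <= F_beta(z) - max z <= beta^{-1} log p
    assert -1e-12 <= gap <= math.log(p) / beta + 1e-12
print("smooth-max bound holds on all random trials")
```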
\subsection{Proof of Theorem \ref{cor: distances Gaussian to Gaussian}}
We first note that we may assume that $0 < \Delta \leq 1$ since otherwise the proof is trivial (take $C \geq 1$ in (\ref{eq: K-distance})).
In what follows, let $C>0$ be a generic constant that depends only on $\min_{1 \leq j \leq p} \sigma_{jj}^{Y}$ and $\max_{1 \leq j \leq p} \sigma_{jj}^{Y}$, and its value may change from place to place.
For $\beta > 0$, define $e_{\beta} := \beta^{-1} \log p$.
Consider and fix a $C^{2}$-function $g_0:\RR \to [0,1]$ such that $g_0(t)=1$ for $t \leq 0$ and $g_0(t)=0$ for $t \geq 1$. For example, we may take
\begin{equation*}
g_{0}(t) =
\begin{cases}
0, & t \geq 1, \\
30\int_{t}^{1} s^{2}(1-s)^{2} ds, & 0 < t < 1, \\
1, & t \leq 0.
\end{cases}
\end{equation*}
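This $g_{0}$ is indeed a valid choice: the normalizing constant satisfies $30 \int_{0}^{1} s^{2}(1-s)^{2} ds = 30(1/3 - 1/2 + 1/5) = 1$, and since $g_{0}'(t) = -30 t^{2}(1-t)^{2}$ and $g_{0}''(t) = -60 t (1-t)(1-2t)$ on $(0,1)$ both vanish at $t = 0$ and $t = 1$, the piecewise definition glues to a $C^{2}$-function. A quick numerical confirmation (an illustration only):

```python
# exact normalizing constant: 30 * (1/3 - 1/2 + 1/5) = 1
exact = 30.0 * (1.0 / 3.0 - 1.0 / 2.0 + 1.0 / 5.0)
assert abs(exact - 1.0) < 1e-12

# midpoint-rule check of 30 * int_0^1 s^2 (1-s)^2 ds
n = 100000
approx = sum(30.0 * ((i + 0.5) / n) ** 2 * (1.0 - (i + 0.5) / n) ** 2
             for i in range(n)) / n
assert abs(approx - 1.0) < 1e-6

# derivatives vanish at the gluing points t = 0 and t = 1
g0p = lambda t: -30.0 * t ** 2 * (1.0 - t) ** 2
g0pp = lambda t: -60.0 * t * (1.0 - t) * (1.0 - 2.0 * t)
assert g0p(0.0) == g0p(1.0) == 0.0
assert g0pp(0.0) == g0pp(1.0) == 0.0
```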
For given $x \in \RR, \beta > 0$ and $\delta > 0$, define $g_{x,\beta,\delta} (t)=g_0(\delta^{-1}(t-x-e_\beta))$. For this function $g_{x,\beta,\delta}$, $\| g_{x,\beta,\delta}' \|_{\infty} = \delta^{-1} \| g_{0}' \|_{\infty}$ and $\| g_{x,\beta,\delta}'' \|_{\infty} = \delta^{-2} \| g_{0}'' \|_{\infty}$. Moreover,
\begin{equation}
1(t \leq x+e_{\beta}) \leq g_{x,\beta,\delta}(t) \leq 1(t \leq x + e_{\beta} + \delta), \ \forall t \in \RR. \label{eq: property of g}
\end{equation}
For arbitrary $x \in \RR, \beta > 0$ and $\delta > 0$, observe that
\begin{align*}
\Pr ( \max_{1 \leq j \leq p} X_{j} \leq x )
&\leq \Pr ( F_{\beta} (X) \leq x + e_{\beta}) \leq \Ep[g_{x,\beta,\delta}(F_\beta(X))] \\
&\leq \Ep[g_{x,\beta,\delta}(F_\beta(Y))] + C(\delta^{-2}+\beta \delta^{-1})\Delta \\
&\leq \Pr ( F_\beta (Y) \leq x+e_\beta+\delta ) + C(\delta^{-2}+\beta \delta^{-1})\Delta \\
&\leq \Pr (\max_{1 \leq j \leq p} Y_{j} \leq x+e_\beta+\delta ) + C(\delta^{-2}+\beta \delta^{-1})\Delta,
\end{align*}
where the first inequality follows from the inequality (\ref{eq: smooth max property}), the second from the inequality (\ref{eq: property of g}), the third from Theorem \ref{theorem:comparison}, the fourth from the inequality (\ref{eq: property of g}), and the last from the inequality (\ref{eq: smooth max property}).
We wish to compare $\Pr (\max_{1 \leq j \leq p} Y_{j} \leq x+e_\beta+\delta) $ with $\Pr (\max_{1 \leq j \leq p} Y_{j} \leq x ) $, and this is where the anti-concentration inequality plays its role.
By Corollary \ref{cor: anticoncentration}, we have
\begin{align*}
&\Pr (\max_{1 \leq j \leq p} Y_{j} \leq x+e_\beta+\delta ) - \Pr (\max_{1 \leq j \leq p} Y_{j} \leq x ) \\
&\quad = \Pr (x < \max_{1 \leq j \leq p} Y_{j} \leq x+e_{\beta}+\delta ) \\
&\quad \leq \LL ( \max_{1 \leq j \leq p} Y_{j}, e_{\beta}+\delta) \\
&\quad \leq C (e_{\beta} + \delta) \sqrt{1 \vee \log (p / (e_{\beta} + \delta))} \\
&\quad \leq C (e_{\beta} + \delta) \sqrt{1 \vee \log (p / \delta)}.
\end{align*}
Therefore,
\begin{multline}
\Pr ( \max_{1\leq j\leq p}X_{j}\leq x ) - \Pr (\max_{1\leq j\leq p}Y_{j}\leq x) \\
\leq C \{ (\delta^{-2}+\beta \delta^{-1})\Delta + (e_{\beta}+\delta)\sqrt{1 \vee \log(p/\delta)} \}. \label{normalcomp}
\end{multline}
Choose $\beta$ and $\delta$ in such a way that
\begin{equation*}
\beta = \delta^{-1}\log p \ \text{and} \ \delta = \Delta^{1/3} (2\log p)^{1/6}.
\end{equation*}
Recall that $p \geq 2$ and $0 < \Delta \leq 1$. Since $\delta \geq \Delta^{1/3} \geq \Delta$, $1 \vee \log (p/\delta) \leq 2 \log (p/\Delta)$. Hence the right side of (\ref{normalcomp}) is bounded by $C \Delta^{1/3} ( \log (p/\Delta))^{2/3}$.
For the opposite direction, observe that
\begin{align*}
\Pr ( \max_{1 \leq j \leq p} X_{j} \leq x )
&\geq \Pr ( F_{\beta} (X) \leq x) \geq \Ep[g_{x-e_{\beta}-\delta,\beta,\delta}(F_\beta(X))] \\
&\geq \Ep[g_{x-e_{\beta}-\delta,\beta,\delta}(F_\beta(Y))] - C(\delta^{-2}+\beta \delta^{-1})\Delta \\
&\geq \Pr (F_{\beta}(Y) \leq x - \delta ) - C(\delta^{-2}+\beta \delta^{-1})\Delta \\
&\geq \Pr (\max_{1 \leq j \leq p} Y_{j} \leq x - e_{\beta} - \delta ) - C(\delta^{-2}+\beta \delta^{-1})\Delta.
\end{align*}
The rest of the proof is similar and hence omitted.
\qed
\section{Proof of Theorem \ref{thm: anticoncentration}}
\label{sec: proof for section 3}
The proof of Theorem \ref{thm: anticoncentration} uses some properties of Gaussian measures.
We begin by preparing some technical tools. Here let $\phi(\cdot)$ and $\Phi (\cdot)$ denote the density and distribution functions of the standard Gaussian distribution:
\begin{equation*}
\phi(x) = \frac{1}{\sqrt{2 \pi}} e^{-x^{2}/2}, \ \Phi (x) = \int_{-\infty}^{x} \phi(t) dt.
\end{equation*}
The following two facts were essentially noted in \cite{Y65,Y68} (note: \cite{Y65} and \cite{Y68} did not contain a proof of Lemma \ref{lemma: Y65-1}, which we find non-trivial). For the sake of completeness, we give their proofs after the proof of Theorem \ref{thm: anticoncentration}.
\begin{lemma}
\label{lemma: Y65-1}
Let $W_{1},\dots,W_{p}$ be (not necessarily independent nor centered) Gaussian random variables with unit variance. Suppose that $\Corr(W_{j},W_{k})<1$ whenever $j \neq k$. Then the distribution of $\max_{1 \leq j \leq p} W_{j}$ is absolutely continuous with respect to the Lebesgue measure and a version of the density is given by
\begin{equation}
f(x) = \phi(x) \sum_{j=1}^{p} e^{ \Ep[W_{j}] x-(\Ep[W_{j}])^2/2 } \cdot \Pr\left (W_{k} \leq x, \forall k \neq j \mid W_{j} = x \right). \label{density}
\end{equation}
\end{lemma}
\begin{lemma}
\label{lemma: Y65-2}
Let $W_{0},W_{1},\dots,W_{p}$ be (not necessarily independent nor centered) Gaussian random variables with unit variance. Suppose that $\Ep[W_{0} ]\geq 0$. Then the map
\begin{equation}\label{eq: prob}
x \mapsto e^{ \Ep[W_{0} ] x-(\Ep[W_{0}])^2/2 } \cdot\Pr( W_{j} \leq x, 1 \leq \forall j \leq p \mid W_{0} = x)
\end{equation}
is non-decreasing on $\RR$.
\end{lemma}
Let us also recall (a version of) the Gaussian concentration (more precisely, deviation) inequality. See, e.g., \cite{L01}, Theorem 7.1, for its proof.
\begin{lemma}\label{thm3}
Let $X_{1},\dots,X_{p}$ be (not necessarily independent) centered Gaussian random variables with variance bounded by $\sigma^{2} > 0$. Then for every $r > 0$,
\begin{equation*}
\Pr ( \max_{1 \leq j \leq p} X_{j} \geq \Ep [ \max_{ 1 \leq j \leq p} X_{j} ] + r ) \leq e^{-r^{2}/(2\sigma^{2})}.
\end{equation*}
\end{lemma}
We are now in position to prove Theorem \ref{thm: anticoncentration}.
\begin{proof}[Proof of Theorem \ref{thm: anticoncentration}]
The proof consists of three steps.
{\bf Step 1}. This step reduces the analysis to the unit variance case. Pick any $x\geq0$. Let $W_{j} :=(X_{j}-x)/\sigma_{j}+x/\underline{\sigma}$.
Then $\Ep[W_{j}]\geq 0$ and $\Var (W_{j})=1$. Define $Z:= \max_{1 \leq j \leq p} W_{j}$.
Then we have
\begin{align*}
\Pr ( |\max_{1 \leq j \leq p}X_{j}-x |\leq \epsilon )
&\leq \Pr\left(\left|\max_{1 \leq j \leq p}\frac{X_{j}-x}{\sigma_j}\right|\leq \frac{\epsilon}{\underline{\sigma}}\right) \\
&\leq \sup_{y\in\RR}\Pr\left(\left|\max_{1 \leq j \leq p}\frac{X_{j}-x}{\sigma_j}+\frac{x}{\underline{\sigma}}-y\right|\leq \frac{\epsilon}{\underline{\sigma}}\right) \\
&=\sup_{y\in\RR}\Pr\left(\left|Z-y\right|\leq \frac{\epsilon}{\underline{\sigma}}\right).
\end{align*}
{\bf Step 2}. This step bounds the density of $Z$.
Without loss of generality, we may assume that $\Corr (W_{j},W_{k})<1$ whenever $j \neq k$. Since the marginal distribution of $W_{j}$ is $N(\mu_{j},1)$ where $\mu_{j}:=\Ep[W_{j}]=(x/\underline{\sigma}-x/\sigma_j)\geq 0$, by Lemma \ref{lemma: Y65-1}, $Z$ has a density of the form
\begin{equation}\label{eq: density form}
f_{p}(z) = \phi(z) G_{p} (z),
\end{equation}
where the map $z \mapsto G_{p}(z)$ is non-decreasing by Lemma \ref{lemma: Y65-2}. Define $\bar{z}:= (1/\underline{\sigma}-1/\bar{\sigma})x$, so that $\mu_{j}\leq \bar{z}$ for every $1 \leq j \leq p$. Moreover, define $\bar{Z}:=\max_{1 \leq j \leq p}(W_{j}-\mu_{j})$. Then
\begin{align*}
\int_{z}^{\infty} \phi (u) du G_{p}(z) &\leq \int_{z}^{\infty} \phi(u) G_{p}(u) du \\
&= \Pr( Z> z) \\
&\leq \Pr( \bar{Z} > z - \bar{z} ) \\
& \leq \exp \left \{ -\frac{(z-\bar{z}-\Ep[ \bar{Z} ])_{+}^{2}}{2} \right \},
\end{align*}
where the last inequality is due to the Gaussian concentration inequality (Lemma \ref{thm3}).
Note that $W_{j}-\mu_{j}=X_{j}/\sigma_{j}$, so that
\begin{equation*}
\Ep[\bar{Z}]=\Ep [ \max_{1 \leq j \leq p}(X_{j}/\sigma_{j}) ] =: a_p.
\end{equation*}
Therefore, for every $z \in \RR$,
\begin{equation}\label{eq: bound Gn}
G_{p} (z) \leq \frac{1}{1-\Phi(z)} \exp \left \{ -\frac{(z-\bar{z}-a_p)_{+}^{2}}{2} \right \}.
\end{equation}
Mill's inequality states that for $z>0$,
\begin{equation*}
z \leq \frac{\phi(z)}{1-\Phi (z)}\leq z \frac{1+z^2}{z^2},
\end{equation*}
and in particular $(1+z^2)/z^2 \leq 2$ when $z>1$. Moreover, $\phi(z)/\{ 1-\Phi(z) \}\leq 1.53 \leq 2 $ on $z \in (-\infty, 1)$. Therefore,
\begin{equation*}
\phi(z)/\{ 1-\Phi(z) \} \leq 2 (z \vee 1), \ \forall z \in \RR.
\end{equation*}
Hence we conclude from this, (\ref{eq: bound Gn}), and (\ref{eq: density form}) that
\begin{equation*}
f_{p}(z) \leq 2(z \vee 1) \exp \left \{ -\frac{(z-\bar{z}-a_p)_{+}^{2}}{2} \right \}, \ \forall z \in \RR.
\end{equation*}
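The hazard-rate bound $\phi(z)/\{ 1-\Phi(z) \} \leq 2(z \vee 1)$ used in Step 2, as well as the constant $1.53$ on $(-\infty, 1]$, can be confirmed numerically on a grid (an illustration only; it plays no role in the proof):

```python
import math

def hazard(z):
    # phi(z) / (1 - Phi(z)) for the standard normal
    phi = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    tail = 0.5 * math.erfc(z / math.sqrt(2.0))
    return phi / tail

for i in range(-800, 801):
    z = i / 100.0                      # grid on [-8, 8]
    assert hazard(z) <= 2.0 * max(z, 1.0)
# hazard is increasing, so its max on (-inf, 1] is hazard(1) ~ 1.525
assert hazard(1.0) <= 1.53
```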
{\bf Step 3}. By Step 2, for every $y \in \RR$ and $t > 0$, we have
\begin{align*}
\Pr\left( | Z - y | \leq t \right) &= \int_{y -t}^{y+t} f_{p}(z) dz
\leq 2 t \max_{z \in [y -t, y+t]}f_{p}(z) \leq 4 t (\bar{z}+ a_p + 1),
\end{align*}
where the last inequality follows from the fact that the map $z \mapsto z e^{-(z-a)^{2}/2}$ (with $a > 0$) is non-increasing on $[a+1,\infty)$.
Combining this inequality with Step 1, for every $x \geq 0$ and $\epsilon > 0$, we have
\begin{equation}
\Pr ( | \max_{1 \leq j \leq p}X_{j} - x | \leq \epsilon ) \leq 4 \epsilon \{ (1/\underline{\sigma}-1/\bar{\sigma}) |x|+a_p + 1 \} /\underline{\sigma}. \label{eq: anticoncentration main}
\end{equation}
This inequality also holds for $x<0$ by a similar argument, and hence it holds
for every $x \in \RR$.
If $\underline{\sigma}=\bar{\sigma} =\sigma$, then we have
\begin{equation*}
\Pr ( | \max_{1 \leq j \leq p}X_{j} - x | \leq \epsilon ) \leq 4 \epsilon (a_p + 1)/\sigma, \ \forall x \in \RR, \ \forall \epsilon > 0,
\end{equation*}
which leads to the first assertion of the theorem.
On the other hand, consider the case where $\underline{\sigma} < \bar{\sigma}$. Suppose first that $0 < \epsilon \leq \underline{\sigma}$.
By the Gaussian concentration inequality (Lemma \ref{thm3}), for $|x|\geq \epsilon+\bar \sigma(a_p+\sqrt{2\log(\underline{\sigma}/\epsilon)})$, we have
\begin{align}
\Pr ( | \max_{1 \leq j \leq p} X_{j}-x|\leq\epsilon ) &\leq \Pr (\max_{1 \leq j \leq p}X_{j} \geq |x|-\epsilon ) \notag \\
&\leq \Pr \left (\max_{1 \leq j \leq p}X_{j}\geq \Ep[\max_{1 \leq j \leq p}X_{j}] +\bar{\sigma}\sqrt{2\log(\underline{\sigma}/\epsilon)} \right ) \notag \\
&\leq \epsilon/\underline{\sigma}. \label{eq: anticoncentration tail}
\end{align}
For $|x| \leq \epsilon+\bar \sigma(a_p+\sqrt{2\log(\underline{\sigma}/\epsilon)})$, by (\ref{eq: anticoncentration main}) and using $\epsilon \leq \underline{\sigma}$, we have
\begin{equation}
\Pr ( | \max_{1 \leq j \leq p}X_{j} - x | \leq \epsilon ) \leq 4 \epsilon \{ (\bar{\sigma}/\underline{\sigma}) a_{p} + (\bar{\sigma}/\underline{\sigma}-1)\sqrt{2\log(\underline{\sigma}/\epsilon)} + 2 - \underline{\sigma}/\bar{\sigma}\}/\underline{\sigma}.
\label{eq: anticoncentration non-tail}
\end{equation}
Combining (\ref{eq: anticoncentration tail}) and (\ref{eq: anticoncentration non-tail}), we obtain the inequality in (ii) for $0 < \epsilon \leq \underline{\sigma}$ (with a suitable choice of $C$). If $\epsilon> \underline{\sigma}$, the inequality in (ii) trivially follows by taking $C \geq 1/\underline{\sigma}$. This completes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma: Y65-1}]
Let $M := \max_{1 \leq j \leq p} W_{j}$.
The absolute continuity of the distribution of $M$ is deduced from the fact that $\Pr(M \in A) \leq \sum_{j=1}^{p} \Pr(W_{j} \in A)$ for every Borel measurable subset $A$ of $\RR$.
Hence, to show that a version of the density of $M$ is given by (\ref{density}), it is enough to show that $\lim_{\epsilon \downarrow 0} \epsilon^{-1} \Pr(x < M \leq x+\epsilon)$ equals the right side of (\ref{density}) for a.e. $x \in \RR$.
For every $x \in \RR$ and $\epsilon > 0$, observe that
\begin{align*}
& \{ x < M \leq x + \epsilon \} \\
& = \{ \exists i_{0}, W_{i_{0}} > x \ \text{and} \ \forall i, W_{i} \leq x +\epsilon \} \\
&= \{ \exists i_{1}, x < W_{i_{1}} \leq x + \epsilon \ \text{and} \ \forall i \neq i_{1}, W_{i} \leq x \} \\
&\quad \cup \{ \exists i_{1}, \exists i_{2}, x < W_{i_{1}} \leq x + \epsilon, x < W_{i_{2}} \leq x + \epsilon \ \text{and} \ \forall i \notin \{ i_{1},i_{2} \}, W_{i} \leq x \} \\
&\qquad \qquad \vdots \\
&\quad \cup \{ \forall i, x < W_{i} \leq x + \epsilon \} \\
&=: A^{x,\epsilon}_{1} \cup A^{x,\epsilon}_{2} \cup \cdots \cup A^{x,\epsilon}_{p}.
\end{align*}
Note that the events $A^{x,\epsilon}_{1},A^{x,\epsilon}_{2},\dots,A^{x,\epsilon}_{p}$ are disjoint. For $A^{x,\epsilon}_{1}$, since
\begin{equation*}
A^{x,\epsilon}_{1} = \cup_{i=1}^{p} \{ x < W_{i} \leq x+\epsilon \ \text{and} \ W_{j} \leq x, \forall j \neq i \},
\end{equation*}
where the events on the right side are disjoint, we have
\begin{align*}
\Pr(A^{x,\epsilon}_{1}) &= \sum_{i=1}^{p} \Pr(x < W_{i} \leq x+\epsilon \ \text{and} \ W_{j} \leq x, \forall j \neq i) \\
&= \sum_{i=1}^{p} \int_{x}^{x+\epsilon} \Pr(W_{j} \leq x, \forall j \neq i \mid W_{i} = u) \phi(u-\mu_{i}) du,
\end{align*}
where $\mu_{i} := \Ep [ W_{i} ]$.
We show that for every $1 \leq i \leq p$ and a.e. $x \in \RR$, the map $u \mapsto \Pr(W_{j} \leq x, \forall j \neq i \mid W_{i} = u)$ is right continuous at $x$.
Let $X_{j}=W_{j}-\mu_{j}$ so that $X_{j}$ are standard Gaussian random variables. Then
\begin{equation*}
\Pr(W_{j} \leq x, \forall j \neq i \mid W_{i} = u)=\Pr(X_{j} \leq x-\mu_{j}, \forall j \neq i \mid X_{i} = u-\mu_{i}).
\end{equation*}
Pick $i=1$. Let $V_{j} = X_{j} - \Ep[ X_{j} X_{1}] X_{1}$ be the residual from the orthogonal projection of $X_{j}$ on $X_{1}$.
Note that the vector $(V_{j})_{2 \leq j \leq p}$ and $X_{1}$ are jointly Gaussian and uncorrelated, and hence independent, by which we have
\begin{align*}
&\Pr(X_{j} \leq x-\mu_{j}, 2 \leq \forall j \leq p \mid X_{1} = u-\mu_{1}) \\
&= \Pr(V_{j} \leq x-\mu_{j}-\Ep[ X_{j} X_{1}] (u-\mu_{1}), 2 \leq \forall j \leq p \mid X_{1} = u-\mu_{1}) \\
&= \Pr(V_{j} \leq x-\mu_{j}-\Ep[ X_{j} X_{1}] (u-\mu_{1}), 2 \leq \forall j \leq p).
\end{align*}
Define $J := \{ j \in \{ 2,\dots,p \} : \Ep[ X_{j} X_{1}] \leq 0 \}$ and $J^{c} := \{ 2,\dots, p\} \backslash J$. Then
\begin{align*}
&\Pr(V_{j} \leq x-\mu_{j}-\Ep[ X_{j} X_{1}] (u-\mu_{1}), 2 \leq \forall j \leq p) \\
&\to \Pr( V_{j} \leq x_j, \forall j \in J, V_{j'} < x_{j'}, \forall j' \in J^{c}), \ \text{as} \ u \downarrow x,
\end{align*}
where $x_j=x-\mu_{j}-\Ep[X_{j}X_1](x-\mu_{1})$.
Here each $V_{j}$ either degenerates to $0$ (which occurs only when $X_{j}$ and $X_{1}$ are perfectly negatively correlated, i.e., $\Ep [ X_{j} X_{1} ] = -1$) or has a non-degenerate Gaussian distribution, and hence for every $x \in \RR$ except for at most $(p-1)$ points $(\mu_{1}+\mu_{j})/2, 2 \leq j \leq p$,
\begin{align*}
\Pr( V_{j} \leq x_j, \forall j \in J, V_{j'} < x_{j'}, \forall j' \in J^{c}) &= \Pr( V_{j} \leq x_{j}, 2 \leq \forall j \leq p ) \\
&= \Pr(W_{j} \leq x, 2 \leq \forall j \leq p \mid W_{1} = x).
\end{align*}
Hence for $i=1$ and a.e. $x \in \RR$, the map $u \mapsto \Pr(W_{j} \leq x, \forall j \neq i \mid W_{i} = u)$ is right continuous at $x$. The same conclusion clearly holds for $2 \leq i \leq p$.
Therefore, we conclude that, for a.e. $x \in \RR$, as $\epsilon \downarrow 0$,
\begin{eqnarray*}
\frac{1}{\epsilon} \Pr(A_{1}^{x,\epsilon}) &\to& \sum_{i=1}^{p} \Pr(W_{j} \leq x, \forall j \neq i \mid W_{i} = x)\phi(x-\mu_{i})\\
&=&\phi(x)\sum_{i=1}^{p} e^{\mu_{i}x-\mu_{i}^2/2} \Pr(W_{j} \leq x, \forall j \neq i \mid W_{i} = x).
\end{eqnarray*}
In the rest of the proof, we show that, for every $2 \leq i \leq p$ and $x \in \RR$, $\Pr(A_{i}^{x,\epsilon}) = o(\epsilon)$ as $\epsilon \downarrow 0$, which leads to the desired conclusion. Fix any $2 \leq i \leq p$. The probability $\Pr(A_{i}^{x,\epsilon})$ is bounded by a sum of terms of the form $\Pr(x < W_{j} \leq x+\epsilon, x < W_{k} \leq x+\epsilon)$ with $j \neq k$. Recall that $\Corr(W_{j},W_{k})<1$. Assume that $\Corr(W_{j},W_{k})=-1$. Then for every $x \in \RR$, $\Pr(x < W_{j} \leq x+\epsilon, x < W_{k} \leq x+\epsilon)$ is zero for sufficiently small $\epsilon$. Otherwise, $(W_{j},W_{k})^{T}$ obeys a two-dimensional, non-degenerate Gaussian distribution and hence $\Pr(x < W_{j} \leq x+\epsilon, x < W_{k} \leq x+\epsilon) = O(\epsilon^{2}) = o(\epsilon)$ as $\epsilon \downarrow 0$ for every $x \in \RR$. This completes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma: Y65-2}]
Since $\Ep [ W_{0} ] \geq 0$, the map $x \mapsto \exp( \Ep[ W_{0} ] x- (\Ep[ W_{0} ])^{2}/2 )$ is non-decreasing. Thus it suffices to show that the map
\begin{equation}\label{eq: reduced probability}
x \mapsto \Pr ( W_{1} \leq x, \dots, W_{p} \leq x \mid W_{0} =x)
\end{equation}
is non-decreasing.
As in the proof of Lemma \ref{lemma: Y65-1}, let $X_{j}=W_{j}-\mu_{j}$ with $\mu_{j} := \Ep [ W_{j} ]$, and let $V_{j} = X_{j} - \Ep[ X_{j} X_{0}] X_{0}$ be the residual from the orthogonal projection of $X_{j}$ on $X_{0}$. Note that the vector $(V_{j})_{1 \leq j \leq p}$ and $X_{0}$ are independent.
Hence the probability in (\ref{eq: reduced probability}) equals
\begin{align*}
&\Pr(V_{j} \leq x-\mu_{j} - \Ep[X_{j}X_{0}](x-\Ep[ W_{0} ] ), 1 \leq \forall j \leq p \mid X_{0}=x-\Ep[ W_{0} ])\\
&=\Pr(V_{j} \leq x-\mu_{j} - \Ep[X_{j}X_{0}](x-\Ep[ W_{0} ]), 1 \leq \forall j \leq p),
\end{align*}
where the latter is non-decreasing in $x$ on $\RR$ since $\Ep[X_{j}X_{0}] \leq 1$.
\end{proof}
\section{Introduction}
\label{sec:intro}
CNNs have achieved unprecedented success in visual recognition tasks. The development of mobile devices drives the increasing demand to deploy these deep networks on mobile platforms such as cell phones and self-driving cars. However, CNNs are usually resource-intensive, making them difficult to deploy on these memory-constrained and energy-limited platforms.
To enable the deployment, one intuitive idea is to reduce the model size, and model compression is the major research direction toward it. Several techniques have been proposed, including pruning~\citep{lecun1990optimal}, quantization~\citep{soudry2014expectation} and low rank approximation~\citep{denton2014exploiting}. Though these approaches can offer a reasonable parameter reduction with minor accuracy degradation, they suffer from three drawbacks: 1) the irregular network structure after compression, which limits performance and throughput on GPU; 2) the increased training complexity due to the additional compression or re-training process; and 3) the heuristic compression ratios depending on networks, which cannot be precisely controlled.
Recently the sparse kernel (SK) approach was proposed to mitigate these problems by directly training networks using structural (large granularity) sparse convolutional kernels with fixed compression ratios. SKs were originally proposed as alternative types of convolution; researchers later explored their use in the context of CNNs, combining some of these SKs to save parameters/computation relative to the standard convolution. For example, MobileNets~\citep{howard2017mobilenets} realize 7x parameter savings with only 1\% accuracy loss by adopting the combination of two SKs, depthwise convolution~\citep{sifre2014rigid} and pointwise convolution~\citep{lin2013network}, to replace the standard convolution in their networks.
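The rough magnitude of these savings can be reproduced with a simple parameter count: a standard $k \times k$ convolution with $C_{in}$ input and $C_{out}$ output channels has $k^2 C_{in} C_{out}$ weights, while a depthwise convolution ($k^2 C_{in}$ weights) followed by a pointwise convolution ($C_{in} C_{out}$ weights) has $k^2 C_{in} + C_{in} C_{out}$. The layer sizes below are illustrative and not taken from the MobileNets paper:

```python
def standard_conv_params(k, c_in, c_out):
    # k x k kernel applied across all input channels, per output channel
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # depthwise: one k x k filter per input channel; pointwise: 1x1 convolution
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 256, 256
std = standard_conv_params(k, c_in, c_out)        # 589824
sep = depthwise_separable_params(k, c_in, c_out)  # 67840
print(round(std / sep, 1))                        # ~8.7x fewer parameters
```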
However, despite the great potential of the SK approach to save parameters/computation while maintaining accuracy, it remains unclear how to craft an SK design with such potential (i.e., an effective SK design). Prior works like MobileNet~\citep{howard2017mobilenets} and Xception~\citep{chollet2016xception} adopt simple combinations of existing SKs without explaining why those particular designs were chosen. Meanwhile, it has been a long-standing question in the field whether there is any other SK design that is more efficient than all state-of-the-art ones while maintaining an accuracy similar to the standard convolution.
To answer this question, a naive idea is to try all possible combinations and measure the final accuracy of each. Unfortunately, the number of combinations grows exponentially with the number of kernels in a design, making it infeasible to train each of them. Specifically, even if we limit the design space to four common types of SKs -- group convolution~\citep{krizhevsky2012imagenet}, depthwise convolution~\citep{sifre2014rigid}, pointwise convolution~\citep{lin2013network} and pointwise group convolution~\citep{zhang2017shufflenet} -- the total number of possible combinations would be $4^k$, given that $k$ is the number of SKs we allow to use in a design (note that each SK can appear more than once in a design).
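The growth of this design space is easy to make concrete: with the four SK types above and $k$ slots in a design (order kept, repetition allowed), the number of ordered combinations is exactly $4^k$. A minimal enumeration sketch (the type names are illustrative):

```python
from itertools import product

sk_types = ["group", "depthwise", "pointwise", "pointwise_group"]

def design_space(k):
    # all ordered combinations of k SKs, with repetition allowed
    return list(product(sk_types, repeat=k))

for k in (1, 2, 3, 5):
    assert len(design_space(k)) == 4 ** k
print(len(design_space(3)))  # 64 candidate designs already at k = 3
```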
In this paper, we craft effective SK designs by efficiently eliminating poor candidates from the large design space. Specifically, we reduce the design space from three aspects: composition, performance and efficiency. First, observing that in normal CNNs it is quite common to have multiple blocks that contain repeated patterns such as layers or structures, we shrink the design space by ignoring combinations that include repeated patterns. Second, realizing that removing designs with large accuracy degradation would significantly reduce the design space, we identify an easily measurable quantity named the \emph{information field} behind various SK designs, which is closely related to the model accuracy. We discard designs that lead to a smaller \emph{information field} compared to the standard convolution model. Last, in order to achieve better parameter efficiency, we remove redundant SKs in a design if the same size of \emph{information field} is already retained by other SKs in the design. With all the aforementioned knowledge, we present an SK scheme that incorporates the final four designs manually reduced from the original design space.
Additionally, in practice researchers would like to select the most parameter- or computation-efficient SK design for their needs, which motivates a study of the efficiency of different SK designs. No prior research has investigated the efficiency of SK designs. In this paper, three questions about efficiency are addressed for each design in our scheme: 1) which factors affect the efficiency of each design? 2) how does each factor affect the efficiency on its own? 3) when is the best efficiency achieved once all these factors are combined, in different situations?
Besides, to verify the correctness of our idea, we also study the relationship between our proposed scheme and existing designs. The comparisons show that all existing methods are either extreme cases of our scheme or suboptimal in terms of parameter efficiency. Further, we show that models composed of the new designs in our scheme achieve better accuracy than all state-of-the-art methods under the same parameter budget, which implies that more efficient designs are constructed by our scheme and again validates the effectiveness of our idea.
The contributions of our paper can be summarized as follows:
\begin{itemize}
\item We are the first to point out that the~\emph{information field} is the key to SK designs. Meanwhile, we observe that model accuracy is positively correlated with the size of the~\emph{information field}.
\item We present an SK scheme that illustrates how to reduce the original design space from three aspects and incorporates the four final types of designs, along with a rigorous mathematical analysis of their efficiency.
\item We discuss the connections between our proposed scheme and other existing designs like MobileNet~\citep{howard2017mobilenets}, Xception~\citep{chollet2016xception} and ShuffleNet~\citep{zhang2017shufflenet} and show that they are all specific instances of our proposed scheme.
\item We provide some potential network designs that lie within the scope of our scheme but have not been explored yet, and show that they can deliver superior performance.
\end{itemize}
\section{Preliminaries}
\vspace{-0.1in}
We first give a brief introduction to the standard convolution and the four common types of SKs.
\vspace{-0.1in}
\subsection{Standard Convolution}
Standard convolution is the basic component of most CNN models. Its kernels can be described as a 4-dimensional tensor: $W\in \mathbb{R}^{C\times X\times Y\times F}$, where $C$ and $F$ are the numbers of input and output channels and $X$ and $Y$ are the spatial dimensions of the kernels. Let $I\in \mathbb{R}^{C\times U\times V}$ be the input tensor, where $U$ and $V$ denote the spatial dimensions of the feature maps. The output activation at output feature map $f$ and spatial location $(x,y)$ can then be expressed as,
\begin{equation*}
T(f,x,y) = \sum_{c=1}^C\sum_{x'=1}^X\sum_{y'=1}^YI(c,x-x',y-y')W(c,x',y',f)
\end{equation*}
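As a sanity check of the indexing, the formula above can be transcribed directly into nested loops. This is a deliberately naive sketch of ours (not from the paper): indices are 0-based here rather than 1-based, and positions falling outside the input are treated as zero.

```python
def std_conv_activation(I, W, f, x, y):
    """One output activation T(f, x, y) of a standard convolution.
    I: input as I[c][u][v]; W: kernels as W[c][xp][yp][f]."""
    C, X, Y = len(W), len(W[0]), len(W[0][0])
    total = 0.0
    for c in range(C):
        for xp in range(X):
            for yp in range(Y):
                xi, yi = x - xp, y - yp  # implicit zero padding outside I
                if 0 <= xi < len(I[c]) and 0 <= yi < len(I[c][0]):
                    total += I[c][xi][yi] * W[c][xp][yp][f]
    return total
```

For instance, with $C=2$ input channels, $1\times 1$ spatial size and a single output channel, the activation is simply the channel-wise dot product of input and kernel values.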
\subsection{Group Convolution}
Group convolution was first used in AlexNet~\citep{krizhevsky2012imagenet} for distributing the model over two GPUs. The idea is to split both input and output channels into disjoint groups, with each output group connected to a single input group and vice versa. By doing so, each output channel depends on only a fraction of the input channels instead of all of them, so a large amount of parameters and computation is saved. With $M$ groups, the output activation $(f, x, y)$ can be calculated as,
\begin{equation*}
T(f,x,y) = \sum_{c'=1}^{C/M}\sum_{x'=1}^X\sum_{y'=1}^YI(\frac{C}{M}\lfloor\frac{f-1}{\frac{F}{M}}\rfloor+c',x-x',y-y')W(c',x',y',f)
\end{equation*}
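The channel indexing in the sum above can be made concrete with a small helper (an illustrative sketch; the function name is ours, and the 1-based indices of the text are kept):

```python
def group_input_channels(f, C, F, M):
    """Input channels (1-based) that output channel f reads under a
    group convolution with M groups, C input and F output channels."""
    offset = (C // M) * ((f - 1) // (F // M))
    return [offset + cp for cp in range(1, C // M + 1)]
```

For $C = F = 8$ and $M = 2$, output channels 1--4 read input channels 1--4 and output channels 5--8 read input channels 5--8, matching the disjoint-group picture.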
\subsection{Depthwise Convolution}
The idea of depthwise convolution is similar to group convolution; both sparsify kernels in the channel extent. In fact, depthwise convolution can be regarded as an extreme case of group convolution where the number of groups equals the number of input channels. Also notice that in practice the number of channels usually does not change after a depthwise convolution is applied. Thus, the group-convolution equation above can be rewritten as,
\begin{equation*}
T(f,x,y) = \sum_{x'=1}^X\sum_{y'=1}^YI(f,x-x',y-y')W(x',y',f)
\end{equation*}
\subsection{Pointwise Convolution}
Pointwise convolution is simply a $1\times 1$ standard convolution. Unlike group convolution, pointwise convolution achieves sparsity over the spatial extent by using kernels of $1\times 1$ spatial size. The equation below shows how one output activation of the pointwise convolution is calculated,
\begin{equation*}
T(f,x,y) = \sum_{c=1}^CI(c,x,y)W(c,f)
\end{equation*}
\subsection{Pointwise Group Convolution}
To sparsify kernels in both the channel and the spatial extents, group convolution can be combined with pointwise convolution, yielding pointwise group convolution. Besides the $1\times 1$ spatial kernel size, in pointwise group convolution each output channel also depends on only a portion of the input channels. One output activation is calculated as follows,
\begin{equation*}
T(f,x,y) = \sum_{c'=1}^{C/M}I(\frac{C}{M}\lfloor\frac{f-1}{\frac{F}{M}}\rfloor+c',x,y)W(c',f)
\end{equation*}
\section{Sparse Kernel Scheme}
\label{sec:decompose}
Recall that the total number of combinations grows exponentially with the number of kernels in a design, resulting in a large design space. In this paper, we craft effective SK designs (i.e., designs that consume fewer parameters yet maintain accuracy comparable to the standard convolution) by efficiently examining this design space.
Specifically, we first fix the initial design space by setting the maximum number of SKs in a design (its length). Two aspects are considered in choosing this number: 1) to allow more efficient designs that have not been explored yet, the maximum length should be greater than the lengths of all state-of-the-art designs; 2) a greater length tends to consume more parameters, which contradicts our goal of finding more efficient designs. Balancing the two, we set the maximum length to 6, which is not only greater than the largest length (i.e., 3) among all current designs, but also small enough that designs of the maximum length can still be more efficient than the standard convolution.
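With four kernel types and lengths from 1 to 6 (order matters, and a type may repeat), the initial space therefore holds $4^1+4^2+\cdots+4^6 = 5460$ candidates; a one-liner confirms the count:

```python
# Number of ordered designs of length 1..6 over 4 kernel types.
space_size = sum(4 ** k for k in range(1, 7))
print(space_size)  # 5460
```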
\subsection{Reduce the Design Space}
We then start to reduce the design space from three aspects: composition, performance and efficiency. In the following paragraphs, we will introduce the three aspects in detail.
\paragraph{Composition.}
The overall layout of CNNs provides a good insight for quickly reducing the design space. Specifically, normal CNNs commonly consist of multiple stages/blocks that contain repeated patterns of layers or structures. For example, both VGG~\citep{simonyan2014very} and ResNet~\citep{he2016deep} have 4 stages, and each stage consists of several identical repeated layers. Inspired by this fact, when replacing the standard convolution with an SK design there is intuitively no need for the design itself to contain repeated patterns. For example, given three types of SKs, A, B and C, the following combinations should be removed for containing repeated patterns: AAAAAA, ABABAB and ABCABC. AAAAAA contains the repeated pattern A, while ABABAB and ABCABC repeat the patterns AB and ABC respectively.
Repeated patterns are also easy to detect, which makes the entire process extremely fast. To find such patterns, we can use regular expression matching. The expression matching such combinations is \verb|(.+?)\1+|, where \verb|(.+?)| denotes the first capturing group, which contains at least one character but as few as possible, and \verb|\1+| matches the text captured by the first group repeated one or more times. As a result, we can efficiently prune the design space with the help of this regular expression.
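The pruning test is a one-line check with Python's \texttt{re} module (Python is used here for illustration; any regex engine with backreferences works the same way):

```python
import re

def has_repeated_pattern(design):
    """True if the whole design string is some shorter pattern repeated
    two or more times, e.g. 'AAAAAA', 'ABABAB', 'ABCABC'."""
    return re.fullmatch(r'(.+?)\1+', design) is not None
```

Designs such as ABCABC are rejected, while ABC or AAB survive because no proper prefix tiles the whole string.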
\begin{figure}[!th]
\centering
\begin{subfigure}{0.2\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{1.ps}
\caption{Standard}
\label{fig:Standard}
\end{subfigure}%
\begin{subfigure}{0.2\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{2.ps}
\caption{DW+PW}
\label{fig:DW+PW}
\end{subfigure}%
\begin{subfigure}{0.2\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{3.ps}
\caption{GC+PWG}
\label{fig:GC+PWG}
\end{subfigure}%
\begin{subfigure}{0.2\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{4.ps}
\caption{PW+DW+PW}
\label{fig:PW+DW+PW}
\end{subfigure}%
\begin{subfigure}{0.2\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{5.ps}
\caption{PWG+DW+PWG}
\label{fig:PWG+DW+PWG}
\end{subfigure}
\caption{Spatial and channel dependency of the standard convolution and four different SK designs. The spatial kernel size is $3\times 3$. Green edges denote the spatial dependency of output activation and blue edges represent the channel dependency.}
\end{figure}
\paragraph{Performance.}
Many SK designs result in large accuracy degradation, which gives us another opportunity to greatly reduce the design space. To get rid of them, we need an easily measurable (i.e., requiring no training) property of a design that directly indicates its final accuracy. Fortunately, after analyzing many prior works and conducting extensive experimental studies, we do find such a property. We name it the~\emph{information field}.
\begin{definition}(Information Field)
Information field is the area in the input tensor that one or more convolutional layers use to generate one output activation. For one output tensor, the sizes of the information fields of all activations are usually the same.
\end{definition}
Figure~\ref{fig:Standard} shows the spatial and channel dependency of the standard convolution, from which we can read off the size of the~\emph{information field}. Assuming the spatial kernel size is $3\times 3$, starting from any output node in the figure we see that along the channel dimension each output channel connects to all input channels, and along the spatial dimensions one output activation depends on activations inside a $3\times 3$ spatial area. Therefore the~\emph{information field} of the standard convolution is (3, 3, C), where C is the number of input channels.
We find that the~\emph{information field} is the key behind all SK designs, and we observe that model accuracy is positively correlated with the size of the~\emph{information field}; this is validated by the experiments in Section~\ref{sec:study}.
With the help of the~\emph{information field}, SK designs that would cause large accuracy degradation can be removed from the original design space without actually training the models. Specifically, for each design we first calculate the size of the~\emph{information field} by accumulating it sequentially from the leftmost kernel to the rightmost one. Concretely, we use a three-dimensional vector, (1,1,1), to represent the initial size of the~\emph{information field} along three dimensions (i.e., two spatial dimensions and one channel dimension); the values of the vector are then updated according to the known properties of each SK encountered. After the rightmost kernel, the resulting vector is the size of the~\emph{information field} of the design. Finally we compare it with that of the standard convolution: if the two sizes are the same we keep the design, otherwise we discard it. For instance, a design consisting of a single depthwise convolution is removed, since its~\emph{information field} covers only one channel instead of the full channel space covered by the standard convolution.
\paragraph{Efficiency.}
To achieve better parameter and computation efficiency, we remove designs that include SKs that do not contribute to the~\emph{information field}. Specifically, two cases worsen efficiency and mark a design as inferior: 1) since the size of the~\emph{information field} never decreases when passing through the SKs in a design, it can happen that after some kernel the size remains unchanged, meaning that kernel does not help the~\emph{information field} even if the final size matches the standard convolution; 2) it is also possible that the full size of the~\emph{information field} of the standard convolution is already retained by a fraction of the SKs in a design, in which case the remaining kernels likewise contribute nothing to the~\emph{information field}. Designs in either case contain non-contributing kernels, so we remove them from the original design space.
To efficiently detect and delete inferior designs of these two kinds, we introduce an~\emph{early-stop mechanism} into the procedure above for checking the size of the~\emph{information field}. Specifically, matching the two cases, we check two things while accumulating the~\emph{information field} from the leftmost kernel of a design: 1) we record the size of the~\emph{information field} before each kernel and compare it with the new size after that kernel; if the two sizes are the same, we immediately mark the design as inferior; 2) we compare the new size with that of the standard convolution; if it is still smaller, we continue accumulating from the next kernel, otherwise (i.e., the full size is reached before the last kernel) the remaining kernels are redundant and we skip to the next design.
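The accumulation plus early-stop procedure can be sketched in a few lines. This is our own simplified model, not the exact bookkeeping of the paper: each kernel is reduced to a (spatial size, group number) pair, pointwise is (1, 1), depthwise is (3, C), we assume a perfect channel permutation between layers (so a layer with $M$ groups multiplies the covered channel count by $C/M$, capped at $C$), and the bottleneck exception discussed for PW+DW+PW is ignored.

```python
def check_design(kernels, C):
    """kernels: list of (spatial_size, groups) pairs; C: channel count.
    Returns 'keep' if the design reaches exactly the (3, 3, C)
    information field of one standard 3x3 convolution, else 'inferior'."""
    field = (1, 1, 1)            # (x, y, channels) covered so far
    goal = (3, 3, C)             # information field of the standard conv
    for i, (k, M) in enumerate(kernels):
        new = (field[0] + k - 1, field[1] + k - 1,
               min(C, field[2] * (C // M)))
        if new == field:         # case 1: this kernel contributes nothing
            return 'inferior'
        field = new
        if field == goal and i < len(kernels) - 1:
            return 'inferior'    # case 2: remaining kernels are redundant
    return 'keep' if field == goal else 'inferior'
```

Under this toy model with $C = 8$, DW+PW (i.e., (3, 8) then (1, 1)) and GC(2)+PWG(4) both come out as 'keep', a lone depthwise convolution fails on the channel dimension, and PW+PW is caught by case 1.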
With all the aforementioned knowledge, we manually reduce the original design space ($4^1+4^2+\cdots+4^6$ candidates) to 4 different types of SK designs\footnote{During the process of reducing the design space, we allow channel permutation within the designs, and when a group convolution is encountered, we try all possible numbers of groups to calculate the size of the~\emph{information field}. As long as one group number passes the entire process, we keep the design. In case multiple group numbers pass the process, we consider them the same design.}. In the next section we present the 4 final designs respectively.
Also notice that other parameter-saving techniques such as the bottleneck structure~\citep{he2016deep} are complementary to our approach and can be combined with it to further improve parameter efficiency while maintaining accuracy. To validate this idea, we also consider the bottleneck structure when reducing the design space.
\subsection{Final Sparse Kernel Designs}
\paragraph{Depthwise Convolution + Pointwise Convolution.}
\label{sec:DW+PW}
Unlike the standard convolution, which combines spatial and channel information to calculate the output, the combination of depthwise convolution (DW) and pointwise convolution (PW) splits the two kinds of information and deals with them separately. The output activation at location $(f,x,y)$ can be written as
\begin{equation*}
T(f,x,y) = \sum_{c=1}^C[\sum_{x'=1}^X\sum_{y'=1}^YI(c,x-x',y-y')W_1(c,x',y')]W_2(c,f),
\end{equation*}
where $W_1$ and $W_2$ correspond to the kernels of the depthwise convolution and the pointwise convolution respectively. The dependency of this design is depicted in Figure~\ref{fig:DW+PW}, from which we can easily verify that the size of the~\emph{information field} is the same as that of the standard convolution.
\paragraph{Group Convolution + Pointwise Group Convolution.}
\label{sec:GC+PWG}
The combination of group convolution (GC) and pointwise group convolution (PWG) can be regarded as an extension of the design above, where grouping is applied to the pointwise convolution. However, simply using pointwise group convolution would reduce the size of the~\emph{information field} along the channel dimension, since depthwise convolution does not mix any channel information. To recover the~\emph{information field}, the depthwise convolution is replaced with a group convolution, and channel permutation is added between the two layers. Assuming the number of channels does not change after the first group convolution, the output activation can be calculated as
\begin{equation*}
T(f,x,y) = \sum_{k'=1}^{C/N}[\sum_{c'=1}^{C/M}\sum_{x'=1}^X\sum_{y'=1}^YI(\frac{C}{M}\lfloor\frac{k-1}{\frac{C}{M}}\rfloor+c',x-x',y-y')W_1(c',x',y',k)]W_2(k',f),
\end{equation*}
where $k = \frac{C}{N}\lfloor\frac{f-1}{\frac{F}{N}}\rfloor+k'$, $M$ and $N$ denote the numbers of groups of the group convolution and the pointwise group convolution, and $W_1$ and $W_2$ correspond to their kernels respectively. Figure~\ref{fig:GC+PWG} clearly shows the~\emph{information field} of this design.
\paragraph{Pointwise Convolution + Depthwise Convolution + Pointwise Convolution.}
\label{sec:PW+DW+PW}
Although two pointwise convolutions do not by themselves ensure better efficiency in our scheme, combining them with the bottleneck structure eases the problem, which is why this combination survives as one of the final designs. Following common practice we set the bottleneck ratio to $1:4$, i.e., the ratio of bottleneck channels to output channels. Also notice that more parameters are saved by placing the depthwise convolution between the two pointwise convolutions, since the depthwise convolution then operates on a reduced number of channels. As a result, the output activation $T(f,x,y)$ is calculated as
\begin{equation*}
T(f,x,y) = \sum_{k=1}^K[\sum_{x'=1}^X\sum_{y'=1}^Y[\sum_{c=1}^CI(c,x-x',y-y')W_1(c,k)]W_2(k,x',y')]W_3(k,f),
\end{equation*}
where $K$ denotes the number of bottleneck channels and $W_1$, $W_2$ and $W_3$ correspond to the kernels of the first pointwise convolution, the depthwise convolution and the second pointwise convolution respectively. Together with the equation, Figure~\ref{fig:PW+DW+PW} shows that the~\emph{information field} of this design is the same as that of the standard convolution.
\paragraph{Pointwise Group Convolution + Depthwise Convolution + Pointwise Group Convolution.}
The combination of two pointwise group convolutions and one depthwise convolution can also ensure the same size of~\emph{information field}. As before, channel permutation is needed, and the bottleneck structure is adopted to achieve better efficiency. The output activation is calculated as
\begin{equation*}
T(f,x,y) = \sum_{k'=1}^{K/N}[\sum_{x'=1}^X\sum_{y'=1}^Y[\sum_{c'=1}^{C/M}I(\frac{C}{M}\lfloor\frac{k-1}{\frac{K}{M}}\rfloor+c',x-x',y-y')W_1(c',k)]W_2(k,x',y')]W_3(k',f),
\end{equation*}
where $k = \frac{K}{N}\lfloor\frac{f-1}{\frac{F}{N}}\rfloor+k'$; $K$, $M$ and $N$ denote the number of bottleneck channels and the numbers of groups of the first and the second pointwise group convolutions; and $W_1$, $W_2$ and $W_3$ correspond to the kernels of the first pointwise group convolution, the depthwise convolution and the second pointwise group convolution respectively. Both the equation and Figure~\ref{fig:PWG+DW+PWG} confirm that the size of the~\emph{information field} is the same as that of the standard convolution.
\subsection{Efficiency Analysis}
In addition, we find that the efficiencies of the different designs in our scheme do not always coincide. Thus, to spare researchers the effort of finding the most parameter/computation-efficient design for their needs, we study the efficiency of each design. Specifically, we consider two situations frequently encountered when applying SK designs (i.e., given the input and the output of a layer, and given the total number of parameters of a layer) and give the exact conditions under which the best efficiency is achieved.
\subsubsection{Depthwise Convolution + Pointwise Convolution.}
\paragraph{Efficiency given the input and the output.}
Given the numbers of input and output channels $C$ and $F$, the total number of parameters after applying this design is $9C + CF$, while the number of parameters of the standard convolution is $9CF$. Therefore the parameter efficiency of this design, expressed as the ratio of parameters after and before applying it, is $1/F + 1/9$. Clearly, given $C$ and $F$, the parameter efficiency is always the same.
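The ratio is easy to check numerically (a small sketch; the function name is ours):

```python
def dw_pw_param_ratio(C, F):
    """Parameters of DW+PW over parameters of a standard 3x3 convolution."""
    return (9 * C + C * F) / (9 * C * F)   # simplifies to 1/F + 1/9
```

For $C = F = 64$, the design keeps roughly 12.7\% of the standard convolution's parameters.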
\paragraph{Efficiency given the total amount of parameters.}
It can be easily verified that, given the total number of parameters, the best efficiency is achieved exactly when the greatest width is reached. Thus the condition for the best efficiency given the total amount of parameters coincides with the condition for the greatest width.
The total number of parameters $P$ for the design can be expressed as
\begin{equation*}
P = 3\cdot 3\cdot C + 1\cdot 1\cdot C \cdot F,
\end{equation*}
when studying the greatest width, we assume the ratio between $C$ and $F$ does not change, so the number of output channels can be written as $F = \alpha \cdot C$ where usually $\alpha \in \mathbb{N}^+$. As a result, from the equation above, once $P$ is fixed the greatest width $G$ (i.e., $\frac{-9+\sqrt{81+4\alpha P}}{2\alpha}$) is also fixed, which indicates that the parameter efficiency is always the same.
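The closed form can be verified by inverting the parameter count numerically (our own check, with an assumed budget):

```python
import math

def dw_pw_greatest_width(P, alpha):
    """Width C solving P = 9*C + alpha*C^2 for a DW+PW layer with F = alpha*C."""
    return (-9 + math.sqrt(81 + 4 * alpha * P)) / (2 * alpha)
```

For example, a budget of $P = 10900$ with $\alpha = 1$ gives back exactly $C = 100$, since $9\cdot 100 + 100^2 = 10900$.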
\subsubsection{Group Convolution + Pointwise Group Convolution.}
\label{sec:proof}
\paragraph{Efficiency given the input and the output.}
Similarly, we use the ratio of parameters to show the parameter efficiency of this design. Given $C$ and $F$, the number of parameters after applying this design is $3\cdot 3\cdot \frac{C}{M}\cdot C + 1\cdot 1\cdot \frac{C}{N}\cdot F = \frac{9C^2}{M} + \frac{CF}{N}$. Since the number of parameters of the standard convolution is $9CF$, the ratio becomes $\frac{C}{MF} + \frac{1}{9N}$. Notice that to ensure the same size of~\emph{information field} as the standard convolution, every input group of the second layer must contain at least one output channel from each output group of the first layer; therefore $M\cdot N$ must not exceed the number of output channels of the first layer, i.e., $M\cdot N\leq C$. To further illustrate the relationship between the best parameter efficiency and the choices of $M$ and $N$, we have the following theorem (the proof is given in the Appendix):
\theoremstyle{plain}
\begin{theorem}
\label{the:eff}
With the same size of~\emph{information field}, the best parameter efficiency is achieved if and only if the product of the two group numbers equals the channel number of the intermediate layer.
\end{theorem}
As per the theorem, the best parameter efficiency can be achieved only when $M\cdot N = C$. Thus the ratio will become $\frac{N}{F} + \frac{1}{9N}$. When $F$ is a fixed number, $N$ is the only variable which could affect the efficiency. Since $\frac{N}{F} + \frac{1}{9N}\geq\frac{2}{3}\sqrt{\frac{1}{F}}$, the best efficiency can be achieved when $\frac{N}{F} = \frac{1}{9N}$, or $N = \frac{\sqrt{F}}{3}$.
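A brute-force scan over $N$ confirms the closed-form optimum (we pick $F = 144$ so that $\sqrt{F}/3 = 4$ is an integer):

```python
F = 144
# ratio N/F + 1/(9N) from the derivation above, minimized over integer N
best_N = min(range(1, F + 1), key=lambda N: N / F + 1 / (9 * N))
print(best_N)  # 4, i.e. sqrt(F) / 3
```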
\paragraph{Efficiency given the total amount of parameters.}
Given the total number of parameters $P$ of one design, both $M$ and $N$ affect the width of the network. As per Theorem~\ref{the:eff}, the greatest $C$ can be reached only when $C = M\cdot N$. When $F = \alpha \cdot C$, $P$ can be written as
\begin{align*}
P & = 3\cdot 3\cdot N\cdot M\cdot N + 1\cdot 1\cdot M \cdot \alpha \cdot M\cdot N = MN(9N+\alpha M) \\
& \geq MN\cdot 2\sqrt{9\alpha MN} = 6\sqrt{\alpha}C^{\frac{3}{2}}
\end{align*}
Given the number of parameters $P$, the width $C$ has an upper bound, attained when $9N = \alpha M$, which is also the condition for the best efficiency. The greatest width $G$ is $(\frac{P}{6\sqrt{\alpha}})^{\frac{2}{3}}$.
\subsubsection{Pointwise Convolution + Depthwise Convolution + Pointwise Convolution.}
\paragraph{Efficiency given the input and the output.}
As before, let $C$, $K$ and $F$ denote the numbers of input, bottleneck and output channels. After applying this design, the total number of parameters is reduced to $1\cdot 1\cdot C\cdot K + 3\cdot 3\cdot K + 1\cdot 1\cdot K\cdot F = K(C + F + 9)$, while the number of parameters of the standard convolution is still $9CF$. Since $K = F/4$, the ratio can be further expressed as $\frac{C + F + 9}{36C}$. Clearly, given $C$, $K$ and $F$, this design also yields a fixed efficiency.
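The simplification can again be checked numerically (sketch; names ours):

```python
def pw_dw_pw_param_ratio(C, F):
    """Parameters of PW+DW+PW with bottleneck K = F/4 over a standard 3x3 conv."""
    K = F // 4
    return K * (C + F + 9) / (9 * C * F)   # equals (C + F + 9) / (36 * C)
```

For $C = F = 64$ this is $137/2304$, about 5.9\% of the standard convolution's parameters.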
\paragraph{Efficiency given the total amount of parameters.}
When $F = \alpha \cdot C$ and $K = F/4$, the total number of parameters $P$ is
\begin{equation*}
P = 1\cdot 1\cdot C\cdot \frac{\alpha C}{4} + 3\cdot 3\cdot \frac{\alpha C}{4} + 1\cdot 1\cdot \frac{\alpha C}{4}\cdot \alpha C,
\end{equation*}
when $P$ is fixed, the greatest width $G$ is also fixed, i.e., $\frac{-9\alpha + \sqrt{81\alpha^2+16\alpha^2P+16\alpha P}}{2(\alpha^2+\alpha)}$.
\subsubsection{Pointwise Group Convolution + Depthwise Convolution + Pointwise Group Convolution}
\paragraph{Efficiency given the input and the output.}
We evaluate the parameter efficiency of this design in the same way. First, the number of parameters after applying it is $1\cdot 1\cdot\frac{C}{M}\cdot K + 3\cdot 3\cdot K + 1\cdot 1\cdot\frac{K}{N} \cdot F = K(\frac{C}{M} + \frac{F}{N} + 9)$, while the standard convolution uses $9CF$. Since $K = F/4$ and, by Theorem~\ref{the:eff}, the best parameter efficiency requires $K = M\cdot N$, the ratio of parameters becomes $\frac{\frac{C}{M} + 4M + 9}{36C}$. Thus, given $C$, $K$ and $F$, the best parameter efficiency is reached by setting $\frac{C}{M} = 4M$, or $M = \frac{\sqrt{C}}{2}$.
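As in the GC+PWG case, a brute-force scan confirms the optimum (we pick $C = 64$ so that $\sqrt{C}/2 = 4$ is an integer):

```python
C = 64
# numerator C/M + 4M + 9 from the ratio above, minimized over integer M
best_M = min(range(1, C + 1), key=lambda M: C / M + 4 * M + 9)
print(best_M)  # 4, i.e. sqrt(C) / 2
```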
\paragraph{Efficiency given the total amount of parameters.}
Similarly, according to Theorem~\ref{the:eff}, the greatest $C$ can be reached only when the number of bottleneck channels satisfies $K = M\cdot N$. Since $F = \alpha \cdot C$ and $K = F/4$, the total number of parameters $P$ of one design can be expressed as
\begin{align*}
P & = 1\cdot 1\cdot \frac{4N}{\alpha}\cdot MN + 3\cdot 3\cdot MN + 1\cdot 1\cdot M\cdot 4MN = MN(\frac{4N}{\alpha}+9+4M) \\
& \geq MN(9+2\sqrt{\frac{16MN}{\alpha}}) = \frac{\alpha}{4}C(9+4\sqrt{C})
\end{align*}
Given the number of parameters $P$, the greatest width $G$ exists when $\alpha M = N$.
\section{Connections to the State-of-the-Arts}
To verify the correctness of our scheme, we further explore the connections between designs in our scheme and other state-of-the-art ones.
\paragraph{GC + PWG.}
Xception and MobileNet are two popular efficient models with the same type of building block. After analyzing them carefully, one can find that they are an extreme case of this design: when $M$ equals the number of input channels and $N$ is 1, the design becomes exactly their building block.
\paragraph{PW + DW + PW.}
Our design can be regarded as an extreme case of ResNeXt where the number of groups equals the number of bottleneck channels. Later experiments in Section~\ref{sec:study} indicate that our design achieves better performance given the same number of parameters, since the saved parameters can be used to increase the width of the network.
\paragraph{PWG + DW + PWG.}
ShuffleNet has a building block similar to this design; both contain two pointwise group convolutions and a depthwise convolution. However, in ShuffleNet the two pointwise group convolutions share the same group number, whereas our design allows two distinct numbers, which provides a better opportunity to reduce parameters. Therefore ShuffleNet can be seen as a special case of our sparse kernel design. Experiments in Section~\ref{sec:sota} also show that, given the same number of parameters, our design constructs a wider model with better performance.
\section{Experiments}
We verify the idea of our scheme via experiments. First, we provide the implementation details for our experiments. Then we study the relationship between the~\emph{information field} and the final accuracy along with the comparisons between different SK designs in our scheme and the state-of-the-art ones.
\subsection{Implementation Details}
\begin{table}[!th]
\caption{Overall network layout. $B$ is the number of blocks at each stage. At the first block of each stage except the first, down-sampling is performed and the channel number is doubled.}
\label{tab:frame}
\centering
\begin{tabular}{lllll}
\toprule
Layer & Output size & KSize & Strides & Repeat \\
\midrule
Image & $224\times 224$ & & & \\
\midrule
Conv1 & $112\times 112$ & $3\times 3$ & 2 & 1\\
\midrule
Max Pool & $56\times 56$ & $3\times 3$ & 2 & 1\\
Stage 1 & $56\times 56$ & & 1 & $B$\\
\midrule
Stage 2 & $28\times 28$ & & 2 & 1\\
& $28\times 28$ & & 1 & $B-1$ \\
\midrule
Stage 3 & $14\times 14$ & & 2 & 1\\
& $14\times 14$ & & 1 & $B-1$ \\
\midrule
Stage 4 & $7\times 7$ & & 2 & 1\\
& $7\times 7$ & & 1 & $B-1$ \\
\midrule
Average Pool & $1\times 1$ & $7\times 7$ & & 1 \\
\midrule
\multicolumn{5}{c}{1000-d FC, Softmax} \\
\bottomrule
\end{tabular}
\end{table}
The overall layout of the network is shown in Table~\ref{tab:frame}. Identity mapping~\citep{he2016identity} is used over each block. When building the models, we simply replace every block in the layout with the standard convolution or one of the SK designs from Section~\ref{sec:decompose}. Batch normalization (BN)~\citep{ioffe2015batch} is applied right after each layer in the block, and as suggested by~\citep{chollet2016xception} the nonlinear activation ReLU is only performed after the summation of the identity shortcut and the output of each block.
We evaluate our models on the ImageNet 2012 dataset~\citep{deng2009imagenet,russakovsky2015imagenet}, which contains 1.2 million training images and 50000 validation images from 1000 categories. We follow the same data augmentation scheme as in~\citep{he2016identity,he2016deep}, which includes randomized cropping, color jittering and horizontal flipping. All models are trained for 100 epochs with batch size 256. The SGD optimizer is used with Nesterov momentum. The weight decay is 0.0001 and the momentum is 0.9. We adopt the same weight initialization method as~\citep{he2015delving,he2016deep,huang2016densely}. The learning rate starts at 0.1 and is divided by 10 every 30 epochs. All reported results are single-center-crop top-1 performances.
\subsection{Empirical Study}
\label{sec:study}
\begin{table}[!th]
\caption{Comparisons to illustrate the relationship between the~\emph{information field} and the model accuracy. We tune the number of groups to achieve different parameter efficiencies. Width here is the number of input channels to the first stage of the network. InfoSize is the size of the~\emph{information field} with respect to the input to the first stage. Numbers within the parentheses represent the number of groups. For example, GConv(1) means group convolution with only 1 group, which is also the standard convolution.}
\label{tab:width}
\centering
\begin{tabular}{llllll}
\toprule
Network Unit & \#Params($\times$M) & Depth & Width & InfoSize & Error (\%)\\
\midrule
PW+GConv(1)+PW & 13.9 & 98 & 128 & (3, 3, 128) & 30.0 \\
PW+GConv(32)+PW & 13.9 & 98 & 256 & (3, 3, 256) & 29.2 \\
\midrule
PW+GConv(1)+PW & 28.4 & 194 & 128 & (3, 3, 128) & 29.7 \\
PW+GConv(1)+PW & 28.4 & 98 & 200 & (3, 3, 200) & 29.3 \\
\midrule
PW+GConv(2)+PW & 28.4 & 98 & 256 & (3, 3, 256) & 28.7 \\
PW+GConv(64)+PW & 28.4 & 98 & 512 & (3, 3, 512) & \textbf{28.4} \\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Relationship between the~\emph{information field} and the model accuracy.}
In Section~\ref{sec:decompose}, we showed that all SK designs generated by our scheme share the same size of~\emph{information field} when the size of the input is fixed. Meanwhile, different SK designs save different amounts of parameters/computation compared to the standard convolution, and the savings can then be used to increase the number of channels, enlarge the~\emph{information field}, and improve the final accuracy. The fundamental idea behind this is that we believe the~\emph{information field} is an essential property of all SK designs and directly affects the final accuracy.
To verify this idea we choose a bottleneck-like design and conduct comparisons by tuning the number of groups. We adopt the same overall network layout as in Table~\ref{tab:frame}. It can be easily verified that, given the same size of the input tensor, changing the number of groups in the bottleneck-like design does not affect the size of the~\emph{information field} in the output. Results are shown in Table~\ref{tab:width}. Specifically, comparing the results in rows 2 and 5, we see that by increasing the number of groups from 2 to 32, more than half of the parameters are saved at the same width, while the model accuracy decreases only slightly. Meanwhile, a further comparison of rows 5 and 6 indicates that if we use the saved parameters to increase the network width, the accuracy can still be improved. Since both networks contain the same amount of parameters, the same overall network layout and the same type of SK design, the performance gain can only come from the increase of network width (\emph{information field}). The same phenomenon can also be observed by comparing rows 1 and 2.
Besides, we investigate different usages of parameters: the results in rows 3 and 4 show that increasing the network width has more potential for improving accuracy than increasing the depth, which also indicates that the size of the~\emph{information field} plays a more important role in the model accuracy. Additionally, the results in Table~\ref{tab:width} further explain the SK design (PW+DW+PW) in Section~\ref{sec:PW+DW+PW}, where we directly apply the most parameter-efficient depthwise convolution in the middle, since it yields the same size of the~\emph{information field} as other group numbers.
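To make the parameter accounting concrete, here is a rough per-block weight count for the PW+GConv($g$)+PW unit; a sketch that ignores biases, batch normalization and the stage-wise layout, so the numbers are per block rather than the whole-network sizes reported in Table~\ref{tab:width}:

```python
def bottleneck_params(c, groups):
    """Approximate weight count of a PW + 3x3 GConv(groups) + PW block
    with c input/output channels (biases and batch norm ignored)."""
    pw = c * c                      # 1x1 pointwise convolution
    gconv = 9 * c * c // groups     # 3x3 group convolution
    return pw + gconv + pw

# More groups -> fewer parameters at the same width, so the savings
# can be spent on widening the block (a larger information field).
narrow = bottleneck_params(128, 1)
wide = bottleneck_params(256, 32)
```

Even at double the width, the heavily grouped block is cheaper than the ungrouped one, which mirrors the trend in the table.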
\paragraph{Comparisons of different SK designs.}
We also compare the different SK designs mentioned in Section~\ref{sec:decompose}. Results are shown in Table~\ref{tab:decompose}. As mentioned in Section~\ref{sec:decompose}, all designs have the same-sized~\emph{information field} given the same input. The results in Table~\ref{tab:decompose} show that, for a similar amount of parameters, choosing different SK designs or group numbers yields models of different widths, and the final accuracy is positively correlated with the model width (the size of the~\emph{information field}), which coincides with our analysis above. Also notice that the results here do not necessarily indicate that one type of SK design is always better than another in terms of parameter efficiency, since, per the analysis in Section~\ref{sec:decompose}, the efficiency also depends on other factors such as the number of groups. For example, given the same number of parameters and the same overall network layout, there could be a combination of group numbers $M$ and $N$ such that the network with the design GConv($M$)+PWGConv($N$) is wider than that with DW+PW.
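The relative cost of the designs in Table~\ref{tab:decompose} can be approximated with simple per-layer weight counts; a sketch assuming equal input/output channel counts within a block and ignoring biases:

```python
def standard(c_in, c_out):
    """Standard 3x3 convolution."""
    return 9 * c_in * c_out

def dw_pw(c_in, c_out):
    """Depthwise 3x3 followed by pointwise 1x1 (the DW+PW design)."""
    return 9 * c_in + c_in * c_out

def gconv_pwgconv(c_in, c_out, m, n):
    """3x3 group conv with m groups followed by a 1x1 group conv
    with n groups (the GConv(m)+PWGConv(n) design)."""
    return 9 * c_in * c_in // m + c_in * c_out // n
```

At a fixed channel count the DW+PW design is the cheapest, so under an equal parameter budget it affords the widest network, consistent with the widths listed in the table.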
\begin{table}[!th]
\caption{Comparisons of different SK designs. All designs share the same network layout.}
\label{tab:decompose}
\centering
\begin{tabular}{lllll}
\toprule
Network Unit & \#Params($\times$M) & Width & InfoSize & Error (\%)\\
\midrule
Standard Convolution & 11.2 & 64 & (3, 3, 64) & 31.1 \\
\midrule
DW+PW & 0.8 & 72 & (3, 3, 72) &31.7 \\
DW+PW & 11.2 & 280 & (3, 3, 280) & 28.5 \\
\midrule
GConv(4)+PWGConv(32) & 11.2 & 128 & (3, 3, 128) & 30.8 \\
GConv(16)+PWGConv(16) & 11.3 & 256 & (3, 3, 256) & 29.4 \\
\midrule
PW+DW+PW & 11.0 & 400 & (3, 3, 400) &26.9\\
\midrule
PWGConv(4)+DW+PWGConv(4) & 11.3 & 560 & (3, 3, 560) & \textbf{25.6}\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Comparisons with the State-of-the-Arts.}
\label{sec:sota}
\begin{table}[!th]
\caption{Comparisons with different state-of-the-art SK designs. All settings are taken from the original papers. Specifically, the bottleneck ratio is $1:4$ for ResNet, ResNeXt adopts a cardinality of 16 with a bottleneck ratio of $1:2$, and 4 groups are used for ShuffleNet.}
\label{tab:sota}
\centering
\begin{tabular}{lllll}
\toprule
Network Unit & \#Params($\times$M) & Width & InfoSize & Error (\%)\\
\midrule
ResNet~\citep{he2016deep} & 11.2 & 64 & (3, 3, 64) & 31.3 \\
ResNet with bottleneck~\citep{he2016deep} & 11.3 & 192 & (3, 3, 192) &29.9 \\
ResNeXt~\citep{xie2017aggregated} & 11.1 & 192 & (3, 3, 192) & 29.8 \\
Xception~\citep{chollet2016xception} & 11.2 & 280 & (3, 3, 280) &28.5 \\
ShuffleNet~\citep{zhang2017shufflenet} & 11.3 & 560 & (3, 3, 560) & 25.6\\
\midrule
GConv(100)+PWGConv(2) & 8.6 & 200 & (3, 3, 200) &27.0 \\
PWGConv(100)+DW+PWGConv(2) & 10.4 & 700 & (3, 3, 700) & \textbf{24.9} \\
\bottomrule
\end{tabular}
\end{table}
Based on the SK scheme, we are also able to construct designs that are more efficient than the state-of-the-art ones. Table~\ref{tab:sota} shows comparisons between the SK designs generated by our scheme and the state-of-the-art ones. For fair comparison, we use the same network layout as shown in Table~\ref{tab:frame} and replace its blocks with the corresponding designs; a model size of around 11.0M is selected because different models (e.g., Xception, ResNeXt and ShuffleNet) can easily be configured to this size. The results in Table~\ref{tab:sota} indicate that SK designs in our scheme can yield better accuracy even with a smaller model size, which further validates the idea of our SK scheme. Also notice that the group numbers used in our designs are chosen to accommodate both a similar model size and the overall network layout, and may not be the most efficient ones, i.e., those that would result in an even wider network with better accuracy under the same parameter budget.
\section{Related Works}
\paragraph{Model Compression.}
Traditional model compression techniques include pruning, quantization and low-rank approximation. Pruning~\citep{wen2016learning,ardakani2016sparsely,liu2017learning,li2016pruning,he2017channel,liu2015sparse} reduces redundant weights, network connections or channels in a pre-trained model. However, it can be difficult to deploy on hardware like GPUs, since some pruning methods are only effective when the weight matrix is sufficiently sparse. Quantization~\citep{zhou2016dorefa,zhou2017incremental,courbariaux2015binaryconnect,courbariaux2016binarized,deng2018gxnor,micikevicius2017mixed} reduces the number of bits required to represent weights; unfortunately, this technique requires specialized hardware support. Low-rank approximation~\citep{lebedev2014speeding,jin2014flattened,wang2016design,xue2014singular,novikov2015tensorizing,garipov2016ultimate} uses two or more matrices to approximate the original matrix values in a pre-trained model. Nevertheless, since the process only approximates the original matrix values, maintaining a similar accuracy always requires additional re-training. The focus of this paper, the SK approach, mitigates all these problems by directly training networks with structurally sparse convolutional kernels.
\section{Conclusion}
In this paper, we present a scheme to craft effective SK designs by eliminating the large design space from three aspects: composition, performance and efficiency. During the process of reducing the design space, we find a unified property named the~\emph{information field} behind various designs, which directly indicates the final accuracy. We present the final 4 designs in our scheme along with a detailed efficiency analysis. Experimental results further validate the idea of our scheme.
\section{Introduction}
The model selection task commonly appears in many branches of science. Investigators are often interested in finding the best model for the dependent variable, one that achieves both a good quality of fit and parsimony. A compromise must be made between fitness and parsimony, as the inclusion of too many independent variables leads to a loss in precision of the regression coefficients, while omitting important factors leads to mis-estimation of the regression coefficients and biased prediction (\cite{ms-murtaugh}). This trade-off makes the model selection task a two-objective problem.
However, most of the existing approaches have handled the model selection task as a single-objective problem by using various penalized model selection criteria (such as AIC and BIC); see e.g. \cite{ms-jeffreys, miller02, ms-burnham, ms-mackay, ms-gregory, zhu06} and references therein. Despite much work on model selection techniques, there is no single method which can be utilized for all problems. This is explained by the fact that the model selection task is inherently not a single-objective problem with a uniquely defined solution. Instead, each selection criterion or single-objective method is bound to produce different results, because each works by giving higher or lower importance to either fitness or parsimony. In recent years, ideas from the fields of computer science and machine learning have been applied to statistics, particularly for problems with a large number of independent variables. Some examples are random forests (\cite{breiman01}), support vector machines (\cite{vapnik95}), and boosting (\cite{freund96,hofner11}). In this paper, we utilize principles from the field of evolutionary computation to handle the model selection problem through a bi-objective approach.
We propose a multi-objective genetic algorithm for variable selection (MOGA-VS), which draws insights from the advances in the field of evolutionary computation (\cite{deb-book-01, carlos-book}). In MOGA-VS, the model selection task is considered as a multi-objective optimization problem, where the first objective is to reduce the complexity of the model (or reduce the number of coefficients) and the second objective is to maximize the goodness-of-fit (or minimize mean squared error). By doing so, the suggested approach differs from the existing methods in two important ways. First, instead of attempting to arrive at a single model candidate, the method produces a collection of Pareto-optimal\footnote{The notion of Pareto-optimality is synonymous to the best-subset.} regression models from which the most preferred model can be chosen. The second difference follows from the separation of optimization process from choosing a particular trade-off between goodness-of-fit and model parsimony. The problem of finding all optimal trade-offs is performed without any user-intervention, whereas the task of selecting an optimal balance between the two objectives is best left as a user's preference-based decision. The proposed algorithm can also be viewed as a method for exploring the best-subset, which is a tedious task especially for large scale problems. The algorithm can also be easily extended to minimize the generalization error in order to obtain a Pareto-optimal frontier of models based on their generalization performance. The multi-objective optimization task using MOGA-VS is followed by the decision making process, which may be performed using a combination of visual tools and metrics.
The rest of this paper is organized as follows. Section~\ref{sec:mop} provides a summary of the central definitions and an overview of the model selection problem within multi-objective framework.
Section~\ref{sec:overview} gives a literature review on commonly applied model selection methods and discusses their differences to multi-objective optimization framework. The proposed MOGA-VS algorithm is presented in Section~\ref{sec:moga}. Section~\ref{sec:results} presents the results from experiments with three different datasets. One of the datasets is a recently published real dataset on Communities and Crime within United States. Comparisons with respect to well known variable selection techniques are included in the study. Section~\ref{sec:genError} extends the MOGA-VS algorithm for generalization error minimization, and provides results on one of the datasets. Finally, we provide the conclusions in Section~\ref{sec:conclusions}.
\section{Model Selection as a Multi-objective Problem}\label{sec:mop}
In this section, we formulate the regression problem as a multi-objective task, and introduce the main concepts and the notation used in multi-objective problems.
\subsection{Trade-off between complexity and fit}
The regression modeling task can be viewed as a special example of supervised learning. Let $\mathcal{Y}$ be the output space and let $\mathcal{X}=\prod_{i=1}^p\mathcal{X}_i$ denote the input space, where $\mathcal{X}_i$ is domain of the $i$-th explanatory variable and $p$ is the total number of variables. Given a collection of data $(\mathbf{X}_j,Y_j)\in\mathcal{X}\times\mathcal{Y}$, $j=1,\dots,n$, with an unknown probability distribution $\mathcal{D}$, the purpose is to find a model $f:\mathcal{X}\to\mathcal{Y}$ with minimal error on the training set with respect to $\mathcal{D}$. To restrict the search space, the model is assumed to belong to a pre-defined {\it hypothesis space} $\mathcal{H}$. For linear regression, the hypothesis space can be written as the set of all linear functions that can be formed using some subset of the variables contained in the input space,
\begin{equation}
\mathcal{H}=\left\{x \to \sum_{k\in J}\beta_kx_k \ | \ J\subset \{1,\dots,p\}, x_k\in\mathcal{X}_k\right\}.
\end{equation}
The model selection problem follows from the fact that the hypothesis space consists of models with varying complexity. In the case of regression modeling, the hypothesis space $\mathcal{H}$ forms a nested structure $\mathcal{H}_1\subset\mathcal{H}_2\subset\cdots\subset\mathcal{H}_d\subset\cdots\subset\mathcal{H}$, where $\mathcal{H}_d$ represents the subset of models with $d$ many variables. This means that in order to find a preferred model, we need to choose the size of the hypothesis space that provides a good balance between the complexity and fit. It is well known that the larger the hypothesis space (high complexity) the better is the fit, and the smaller the hypothesis space (low complexity) the poorer is the fit. Hence, solving the model selection task is equivalent to considering a multi-objective optimization problem with two conflicting objectives.
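The nested structure also makes the size of the search space explicit: with $p$ candidate variables there are $\binom{p}{d}$ models of size $d$ and $2^p$ models in total, which is why exhaustive search quickly becomes infeasible. A short Python sketch:

```python
from math import comb

def models_of_size(p, d):
    """Number of distinct linear models using exactly d of p variables,
    i.e. the size of the layer H_d of the nested hypothesis space."""
    return comb(p, d)

def total_models(p):
    """Size of the full hypothesis space H: all variable subsets."""
    return 2 ** p
```

For instance, already at $p = 20$ there are over a million candidate models, and the count doubles with every additional variable.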
\subsection{Multi-objective formulation and optimality}
A multi-objective optimization problem has two or more objectives which are conflicting. The objectives are supposed to be simultaneously optimized subject to a given set of constraints. These problems are commonly found in the fields of science, engineering, economics or any other field where optimal decisions are to be taken in the presence of trade-offs between two or more conflicting objectives.
By interpreting the model selection problem as finding a trade-off between complexity and fit, we can formulate the following two objective problem where the objectives are jointly minimized.
\begin{definition}[Multi-objective problem]\label{def:mop}
Let $\boldphi:\mathcal{H}\to\mathbb{N}\times\mathbb{R}$, $\boldphi=(\varphi_1,\varphi_2)$ denotes an objective vector, where
\begin{itemize}
\item[(i)]the first objective $\varphi_1:\mathcal{H}\to\mathbb{N}$, $\varphi_1(f)=\min\{d\in \mathbb{N}:f\in\mathcal{H}_d\}$ represents the complexity of a model in terms of the number of variables; and
\item[(ii)]the second objective $\varphi_2:\mathcal{H}\to\mathbb{R}$ is the empirical risk $\varphi_2(f)=\frac{1}{n}\sum_{i=1}^nL(f(\mathbf{X}_i),Y_i)$, with quadratic loss function $L(f(\mathbf{X}_i),Y_i)=(Y_i-f(\mathbf{X}_i))^2$. This is the same as the mean squared error. Other suitable objective functions may also be considered; see, for instance, Section \ref{sec:genError}.
\end{itemize}
Then the optimization problem is given by
\begin{equation}
\begin{array}{ll}
\minimize_{f\in\mathcal{H}} & \boldphi(f) =
\left(\varphi_1(f),\varphi_2(f)\right), \\
\mbox{subject to}
& f\in C.
\end{array}
\label{eq:bilevel_multi_obj}
\end{equation}
where $C\subset \mathcal{H}$ is a constraint set.
\end{definition}
Usually, multi-objective problems do not have a single optimal solution which simultaneously maximizes or minimizes all of the objectives together; instead there is a set of solutions which are optimal in the sense that they are not dominated by any other solution. Once the models with best-fit corresponding to different complexities are available, the user could make the choice for the most preferred model.
\begin{definition}[Dominance]
A model $f^{(1)}$ is said to dominate the other model $f^{(2)}$, denoted as $f^{(1)}\succ f^{(2)}$, if both conditions 1 and 2 are true:
\begin{enumerate}
\item The model $f^{(1)}$ is no worse than $f^{(2)}$ in both objectives, or $\varphi_j(f^{(1)})\leq\varphi_j(f^{(2)})$, $j=1,2$.
\item The model $f^{(1)}$ is strictly better than $f^{(2)}$ in at least one objective, or $\varphi_j(f^{(1)})<\varphi_j(f^{(2)})$ for at least one $j\in\{1,2\}$.
\end{enumerate}
\end{definition}
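The dominance check is straightforward to implement; a minimal Python sketch for objective vectors $(\varphi_1(f),\varphi_2(f))$ under minimization:

```python
def dominates(f1, f2):
    """Return True if objective vector f1 dominates f2 (minimization).

    f1 and f2 are tuples (phi_1, phi_2): model size and empirical risk.
    Condition 1: f1 is no worse than f2 in every objective.
    Condition 2: f1 is strictly better in at least one objective.
    """
    no_worse = all(a <= b for a, b in zip(f1, f2))
    strictly_better = any(a < b for a, b in zip(f1, f2))
    return no_worse and strictly_better
```

Note that a vector never dominates itself, since condition 2 fails for identical vectors.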
The idea is illustrated in Figure~\ref{fig:dominance} for a two objective minimization case. Let us consider the point A as a reference point. The dominance relationship of different regions with respect to A are marked by different shadings. The shaded region in the south-west corner marks the area which dominates point A. The area with a lighter shading in the north-east corner is the region dominated by the reference point A. The remaining unshaded area represents the non-dominated region.
\begin{figure*}[hbt]
\begin{minipage}{0.48\linewidth}
\begin{center}
\epsfig{file=dom.eps,width=0.7\linewidth}
\end{center}
\caption{Explanation of the domination concept for a minimization problem, where the reference point A dominates B.}
\label{fig:dominance}
\end{minipage}\hfill
\begin{minipage}{0.48\linewidth}
\begin{center}
\epsfig{file=expl.eps,width=0.7\linewidth}
\end{center}
\caption{Explanation of the concept of a non-dominated set and a Pareto-optimal front.}
\label{fig:pareto}
\end{minipage}
\end{figure*}
The concept of dominance gives a natural interpretation for optimality in multi-objective problems, because the quality of any two solutions can be compared on the basis of whether one point (model) dominates the other point (model) or not.
\begin{definition}[Non-dominated set and Pareto-optimality]\label{def:optimality}
Among a set of solutions $\mathcal{P}\subset\mathcal{H}$, the non-dominated set of solutions $\mathcal{P}^{\star}$ are those that are not dominated by any member of the set $\mathcal{P}$, i.e.
$$
\mathcal{P}^{\star}=\{f\in\mathcal{P}\ | \ \nexists g\in \mathcal{P} \ : \ g\succ f \}.
$$
When the set $\mathcal{P}$ is the entire search space, i.e. $\mathcal{P}=\mathcal{H}$, the resulting non-dominated collection of models $\mathcal{P}^{\star}$ is called the Pareto-optimal set $\mathcal{H}^{\star}$.
\end{definition}
To visualize the idea of Pareto-optimality, Figure~\ref{fig:pareto} shows an example of a minimization problem with two objective functions. The shaded region in the figure represents the image of the feasible region in the search space, i.e. $\boldphi(\mathcal{H})=\{\boldphi(f):f\in\mathcal{H} \}$. The bold curve marks the Pareto-optimal set, $\boldphi(\mathcal{H}^{\star})$, which represents all the optimal points of the two-objective minimization problem. To understand the difference between Pareto-optimality and non-dominance, the figure also shows a set of points corresponding to the objective function values of a finite collection of other solutions. Let us denote this group by $\boldphi(\mathcal{P})$. Among these points, the ones connected by the broken line are the values of the solutions in $\mathcal{P}^{\star}$, which are not dominated by any point in the given finite set displayed in the figure. Hence, although none of these points is Pareto-optimal (because $\mathcal{P}^{\star}\cap\mathcal{H}^{\star}=\emptyset$), they still constitute a non-dominated set with respect to the finite set $\mathcal{P}$. The other points, which do not belong to the non-dominated set, are dominated by at least one of the points in the non-dominated set. Therefore, the difference between an arbitrary non-dominated set, such as $\mathcal{P}^{\star}$, and the Pareto-optimal set $\mathcal{H}^{\star}$ is that, for a solution to be considered Pareto-optimal, it must be globally non-dominated in the entire search space.
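Extracting the non-dominated subset $\mathcal{P}^{\star}$ from a finite collection, as in Definition~\ref{def:optimality}, can be sketched as a pairwise filter (the objective vectors below are illustrative):

```python
def nondominated(points):
    """Return the non-dominated subset of a finite list of objective
    vectors (componentwise minimization), preserving input order."""
    def dominates(a, b):
        # no worse in all objectives and not identical implies
        # strictly better in at least one objective
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (model size, mean squared error) pairs.
front = nondominated([(1, 9.0), (2, 5.0), (3, 5.0), (3, 2.0), (4, 2.0)])
```

Here $(3, 5.0)$ is removed because $(2, 5.0)$ dominates it, and $(4, 2.0)$ because $(3, 2.0)$ does.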
\section{Review on Model Selection Methods}\label{sec:overview}
A number of model selection criteria and methods have been suggested in the recent literature on statistical modeling and machine learning. However, given the lack of any clear standard, none of the methods has become dominant, which leaves the user puzzled as to which approach to use. Most of the time, each of these selection criteria or methods leads to a different solution, which makes it difficult for the user to pick a model.
In this section, the existing methods are roughly categorized into different groups. The section concludes with a summary of the central differences between these methods and the multi-objective framework suggested in this paper.
\subsection{Selection by Complexity Regularization}
A number of penalized model selection criteria can be found in the literature, of which two are very commonly used: an information-theoretic criterion pioneered by \cite{ms-aic}, known as the Akaike Information Criterion (AIC), and a criterion based on Bayesian evidence, known as the Bayesian Information Criterion (BIC) (\cite{ms-bic}). The model giving the least value of the criterion is the most preferred one. Many other, less commonly used, information criteria have been derived using principles similar to AIC and BIC: the Deviance Information Criterion (DIC) (\cite{ms-dic}), the Expected Akaike Information Criterion (EAIC), the Fisher Information Criterion (FIC)~(\cite{ms-fic}), the Generalized Information Criterion (GIC) (\cite{ms-gic}), the Network Information Criterion (NIC) (\cite{ms-nic}), the Takeuchi Information Criterion (TIC) (\cite{ms-tic}) and adaptive model selection (\cite{jasa-ams}).
The classical information criteria can be essentially viewed as various forms of complexity regularization scheme, where the purpose is to penalize complex models based on their information content or using prior knowledge. In general, the choice of model by complexity regularization can be understood as solving a single objective minimization problem,
\begin{equation}\label{eq:penalty}
\hat{f}_n=\argmin_{f\in\mathcal{H}}\left\{J_{\lambda}(f):=\hat{R}_n(f)+ \lambda C(f)\right\}
\end{equation}
where $\hat{R}_n:\mathcal{H}\to\mathbb{R}$ denotes the empirical risk (e.g. the function $\varphi_2$ in Definition~\ref{def:mop}), and $C:\mathcal{H}\to\mathbb{R}$ represents the cost of the model, which is commonly expressed in terms of the model size and sample size.
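The sensitivity of the penalized rule to the choice of $\lambda$ can be illustrated with a toy example: for hypothetical (size, risk) candidates, different penalty weights select different models, which is precisely the trade-off the multi-objective formulation makes explicit.

```python
def penalized_choice(candidates, lam):
    """Single-objective complexity regularization: among (size, risk)
    pairs, minimize risk + lam * size, as in the rule above."""
    return min(candidates, key=lambda m: m[1] + lam * m[0])

# Hypothetical Pareto-optimal candidates: (model size, empirical risk).
frontier = [(1, 9.0), (2, 5.0), (3, 2.0), (4, 1.8)]
```

A small penalty favors the largest model, a moderate one the three-variable model, and a heavy one the single-variable model; the "right" $\lambda$ is exactly the a priori choice the multi-objective approach defers to the user.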
\subsection{Stepwise Selection Methods}
Stepwise methods are commonly used to select the variables in a regression model; the common variants are forward selection, backward elimination and stepwise regression. Forward selection starts with no variables in the model and adds variables until no remaining variable (outside the model) can add anything significant for the dependent variable. Backward elimination is the opposite of forward selection: it begins with a model which includes all the variables and deletes them one by one until all remaining variables contribute something significant to the dependent variable. Stepwise regression is a modification of forward selection in which variables, once included in the model, are not guaranteed to stay. Stepwise methods have a number of weaknesses: they usually result in models having too many variables, they suffer under collinearity, they are based on methods intended to test pre-specified hypotheses, etc. A detailed discussion of these approaches and their weaknesses can be found in a recent study by \cite{ms-ratner}.
\subsection{Genetic algorithms and other heuristics}
A number of studies use genetic algorithms (GA) and other heuristic algorithms to choose regressors in a regression problem; those known to the authors include \cite{reg-heuristic1,reg-heuristic2, reg-heuristic3}. However, they differ from the method proposed in this paper, as they assume a single objective function (usually an information criterion) and then use the heuristic algorithm to find an optimal regression model which optimizes the chosen objective. The Parallel Genetic Algorithm (PGA) framework suggested by \cite{zhu06} searches for an ensemble of good models, and then uses the entire set for subsequent model selection. A recent heuristic by \cite{wolters11} proposes a non-convergent approach for generating a large number of models for a fixed model size; thereafter, a feature extraction problem is solved to choose the most appropriate model. These two studies are similar to ours, as they search for multiple models before accepting a particular model. However, they differ from MOGA-VS in that they do not target the entire range of Pareto-optimal models. In the process of converging towards the Pareto-optimal models, MOGA-VS also provides a large number of dominated models close to the Pareto frontier as a by-product of the optimization scheme.
\subsection{Best Subset Method}
In the best subset method, usually an exhaustive or a branch-and-bound algorithm (\cite{bnb-algo}) is used to find the best models corresponding to a fixed number of variables. Best subset selection finds the model with the greatest goodness-of-fit for a fixed number of variables; when repeated for different numbers of variables, this procedure yields a set of Pareto-optimal models similar to what we are aiming for. However, finding the best subset becomes computationally very expensive with an increasing number of variables, and is not a viable technique when the number of variables is very high. The MOGA-VS approach can be a useful strategy for producing the best subset for large-scale problems, when the conventional method fails or becomes very expensive. Many of the existing best subset implementations contain an upper bound on how many variables they can handle.
\subsection{Bayesian Model Averaging}
An alternative to frequentist approaches for model selection is the use of techniques developed for Bayesian model averaging (BMA)~(\cite{bma-leamer,madigan94,bma-chatfield,hoeting99,montgomery10,clyde10}). In our experiments, we consider one such method where BMA is used to rank models and the best subset method is employed. The BMA technique computes the full joint posterior distribution over models, which allows the incorporation of model uncertainty into posterior inferences.
A common strategy in BMA is to select the highest posterior probability model. As discussed by \cite{clyde10}, there are several other strategies to perform optimal model selection e.g. based on maximization of posterior expected utility. However, the difficulty in BMA is that when a large number of variables is involved, enumeration of the models in the hypothesis space $\mathcal{H}$ becomes a heavy task. Therefore, the use of Markov Chain Monte Carlo techniques or adaptive sampling is necessary even for problems of moderate size. Bayesian model averaging technique could also be used with our algorithm for selecting the best model from the non-dominated set of models.
\subsection{Central differences and motivation for MOGA-VS}
Both classical and multi-objective approaches have their pros and cons. The classical scheme is optimal if the chosen penalty scheme is a good representation of the user's preferences for trade-off between empirical risk and model complexity. However, many times, the model selection can turn out to be quite sensitive to the choice of complexity penalty. Furthermore, as discussed by~\cite{montgomery10}, uncertainty about the correct model specification can be very high in practical applications. For instance, in social sciences such as political research, where large sets of control variables are involved, an attempt to find a single best model is often poorly justified.
The multi-objective framework proposed in the present paper differs from the classical model selection techniques in the following respects:
\begin{itemize}
\item[(i)] {\it Multiple optimal solutions:} By treating the model selection task as a multi-objective optimization problem, we are always looking for a collection of Pareto-optimal solutions instead of attempting to choose one single optimal point directly. The Pareto set contains the best solution in terms of goodness-of-fit for each complexity. Therefore, these set of optimal solutions guarantee that for a given number of variables, there cannot exist a model which can provide a better fit for the training data.
\item[(ii)] {\it Separation of concerns:} The purpose in multi-objective approach is to avoid making an a priori choice of a complexity penalty. To accomplish this, a distinction is made between stages which can be objectively decided and those which are more dependent on the user's preferences and the particular application at hand. Finding the Pareto-optimal frontier is an optimization problem that can be solved without any a priori assumptions, whereas the choice of the preferred point(s) from the Pareto-optimal set is both preference as well as application dependent question. Therefore, in the proposed approach, the optimization stage, and decision-making stage are treated separately. By doing so, the multi-objective technique enhances understanding of the trade-off and what separates the alternative models.
\end{itemize}
The remaining question is how to find the Pareto-optimal solutions. Of course, for a finite search space $\mathcal{H}$, it is always possible to use brute force to find the Pareto-optimal set. However, such a naive approach would be intractable in practice. To solve the optimization problem in an efficient manner, our approach introduces a specialized multi-objective optimization framework that is based on evolutionary computation.
\section{The MOGA-VS Framework}\label{sec:moga}
In this section, we present the Multi-objective Genetic Algorithm for Variable Selection (MOGA-VS). The framework of this algorithm is inspired by existing evolutionary multi-objective (EMO) procedures (\cite{nsga2, spea2}), and is specialized to handle the variable selection problem of Definition~\ref{def:mop} efficiently. We first provide a step-by-step procedure for the proposed algorithm, and then discuss the techniques used for visualizing the Pareto-optimal frontier and the selection criteria.
\subsection{Step-by-Step Procedure for MOGA-VS}
Using the basic genetic algorithm framework, we suggest a specialized algorithm for producing the Pareto-optimal set of regression models, where one objective is minimization of the number of variables and the other is minimization of the in-sample mean squared error (other error measures may also be used; minimization of the generalization error is discussed in Section \ref{sec:genError}). It should be noted that whenever we refer to a population member, we are referring to a regression model. Each member (regression model) is represented by a binary string whose length equals the maximum number of variables: if a particular variable is present, the bit value is $1$; otherwise it is $0$. For example, if there are at most $K$ variables $(\boldx_1, \boldx_2, \ldots, \boldx_K)$, then the string $(1, 0, 0, 1, \ldots, 1)_K$ represents a regression model where the first variable is present, the second is absent, the third is absent, the fourth is present, and so on. The sum of the bits (the number of variables present in the model) gives the first objective, and the mean squared error of the regression model gives the second objective.
A step-by-step procedure for the Multi-objective Genetic Algorithm for Variable Selection (MOGA-VS) is described as follows:
\begin{enumerate}
\item[1.] Initialize a parent population, $\mathcal{P}$, of size $N$ by including each regression variable in each member with probability 0.5.
\item[2.] Find the non-dominated set of solutions in the population, i.e. $\mathcal{P}^{\star}$.\footnote{Non-dominated members of a set can be identified by performing pairwise comparisons between all the members and selecting those which are not dominated by any other member.}
\item[3.] Pick one member from the non-dominated set $\mathcal{P}^{\star}$ and another member randomly from $\mathcal{P}$, and perform a single-point crossover (\cite{goldberg-book}) of the binary strings, producing two offspring. Repeat the process with different parents until $\lambda$ offspring members are produced. Add the offspring members to the set $\mathcal{O}$.
\item[4.] Perform a binary mutation (\cite{goldberg-book}) on each of the offspring members in the set $\mathcal{O}$ by flipping the bits with a specified probability.
\item[5.] Add all the offspring members from the set $\mathcal{O}$ to $\mathcal{P}$. The size of $\mathcal{P}$ now exceeds $N$; therefore, delete the dominated members with the highest number of variables until the size of $\mathcal{P}$ equals $N$. If all the members are non-dominated, delete the members with the highest number of variables.
\item[6.] If the specified number of iterations, $i$, has been completed, terminate the process and report the non-dominated set from $\mathcal{P}$; otherwise go to step 3.
\end{enumerate}
Choosing non-dominated parents for crossover helps the algorithm explore members closer to the Pareto-optimal front. The output of the above algorithm is a non-dominated set of regression models $\mathcal{P}^{\star}$, which provides an approximation of the Pareto-optimal frontier $\mathcal{H}^{\star}$ of the entire hypothesis space. MOGA-VS does not require any parameters beyond the common genetic algorithm parameters: population size, number of iterations, number of offspring, probability of crossover and probability of mutation. For these parameters, one can follow these guidelines:
Population size: $N=K$,
Crossover probability: $p_c=0.9$,
Mutation probability: $p_m=1/K$,
No. of offspring: $\lambda=N$.
The algorithm should be executed for a sufficient number of iterations, $i$, such that the non-dominated frontier achieved by the algorithm no longer improves. The population size should not be chosen smaller than $K$; otherwise the algorithm will not be able to approximate the entire Pareto-optimal frontier. For the crossover and mutation probabilities, commonly used values have been chosen, and the performance of the algorithm is not sensitive to variation in these parameters. Once a set of trade-off models is obtained using the MOGA-VS procedure, the frontier needs to be examined to find the most preferred model. This can be done using a combination of graphics (Section~\ref{sec:moga-graphics}) and simple selection metrics (Section~\ref{sec:moga-select}).
\subsection{Visualizing the Pareto-optimal frontier}\label{sec:moga-graphics}
In order to get a quick overview of the obtained solutions, a commonly applied strategy is to construct an illustration of the Pareto-optimal set in the objective space. In a bi-objective framework two types of graphs can be considered:
\begin{itemize}
\item[(i)] {\it Objective Space (OS)-plot:} The Pareto-optimal frontier obtained as a solution to Problem~\ref{def:mop} is a plot in which the empirical risk $\varphi_2$ of the optimal models is presented as a decreasing function of model complexity $\varphi_1$, i.e. $\{(\varphi_1(f),\varphi_2(f))\in\mathbb{N}\times\mathbb{R} : f\in\mathcal{H}^{\star}\}$. The plot can be used for analysing the trade-off between empirical risk and complexity before choosing one of the models (see Section~\ref{sec:moga-select}).
\item[(ii)] {\it Hypothesis Space (HS)-plot:} To get an idea of the structure of the Pareto-optimal models, i.e. which variables and how many are contained in them, one can consider an HS-plot, which is reminiscent of a Gantt chart in the hypothesis space. In the HS-plot, the y-axis shows the variables contained in the Pareto-optimal models, and the x-axis shows the optimal models ordered according to their complexity; i.e., if the input space $\mathcal{X}$ has $p$ variables, the HS-plot corresponds to the set $\{(\varphi_1(f), \mathbf{x}_{k,f}):k\in\{1,\dots,p\}, f\in\mathcal{H}^{\star}\}$, where $\mathbf{x}_{k,f}\in\{0,1\}$ indicates whether $f$ contains the $k$-th variable. Gray colour in the chart indicates the presence of a variable.
\end{itemize}
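As a small illustration (our own helper, not part of the paper's implementation), the variable-by-model indicator table behind an HS-plot can be assembled from the Pareto-optimal masks as follows:

```python
def hs_matrix(masks):
    """Indicator table for an HS-plot: rows are variables, columns are
    Pareto-optimal models ordered by complexity (number of selected variables).

    `masks` is a list of 0/1 tuples, one per Pareto-optimal model.
    """
    cols = sorted(masks, key=sum)                     # order models by size
    return [[int(m[i]) for m in cols] for i in range(len(cols[0]))]
```

Each row of the result can then be rendered as the gray/white cells of the chart.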
Illustrations of the graph-based tools and their use are discussed in the light of the experimental studies in Section~\ref{sec:results}.
\subsection{Selecting preferred models}\label{sec:moga-select}
The graphical representations of the Pareto-optimal frontier can be used in conjunction with other criteria to decide which of the optimal models to choose for further examination. Some of these strategies are discussed below.
\begin{itemize}
\item[(i)] {\it Knee-point strategy:} Observing a knee-point (\cite{knee-bechikh,knee-das,knee-branke,knee-deb-gupta}) in the OS-plot can be considered as an indicator for an optimal degree of model complexity. A ``knee'' is interpreted as a saturation point in terms of goodness-of-fit vs complexity, where further increase in model complexity yields only minor improvement in fit. This strategy usually works quite well in many practical problems despite its simplicity.
\item[(ii)] {\it Bayesian statistics:} Another strategy is to consider the use of Bayesian Model Averaging approach along the Pareto-optimal frontier only. This would allow the user to select more than one optimal model to perform statistical inference. For example, if $\mathcal{B}^{\star}\subset\mathcal{H}^{\star}$ is a neighborhood of models surrounding the knee-point of the optimal frontier, the user might want to combine several models to perform posterior inferences on a given quantity of interest $\Delta$, i.e. $p(\Delta|Y)=\sum_{f\in \mathcal{B}^{\star}}p(\Delta|f,Y)p(f|Y)$. This is an appropriate strategy in particular when the user has prior information.
\item[(iii)] {\it Information criteria:} The models along the optimal frontier can also be analyzed using the various information criteria discussed in Section~\ref{sec:overview}. Applying several information criteria to these optimal models allows the user to ascertain the extent of agreement among them.
\item[(iv)] {\it F-tests:} In case the user finds that several of the Pareto-optimal models are worthy candidates for further evaluation, non-nested F-tests or encompassing F-tests between the competing specifications can be considered. More details on non-nested testing can be found, e.g., in \cite{davidson04}.
\end{itemize}
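One simple way to automate strategy (i) is the common chord-distance heuristic: pick the frontier point farthest from the line joining the frontier's extremes. This is only one of several knee definitions in the cited literature; the sketch below (our own helper) assumes min-max normalized axes so the distance is scale-free.

```python
def knee_index(ks, errs):
    """Index of the frontier point farthest from the chord joining the extremes.

    ks: model complexities in increasing order; errs: corresponding errors
    (decreasing). Both axes are min-max normalized, so the chord runs from
    (0, 1) to (1, 0) and the distance to it is proportional to |k + e - 1|.
    """
    kn = [(k - ks[0]) / (ks[-1] - ks[0]) for k in ks]
    en = [(e - errs[-1]) / (errs[0] - errs[-1]) for e in errs]
    d = [abs(k + e - 1.0) for k, e in zip(kn, en)]
    return max(range(len(d)), key=d.__getitem__)
```

On a frontier where the error drops sharply and then flattens, the returned index sits at the elbow of the curve.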
\section{Results}\label{sec:results}
We provide results on three different datasets in this section. The evaluation of MOGA-VS is first performed on two simulated datasets for which the true models are known, and then on a real dataset. In the first example, we demonstrate how the commonly used information criteria act as value functions in a two-objective space. In the second example, which involves a more difficult dataset, we compare the MOGA-VS approach against state-of-the-art model selection approaches. Thereafter, the procedure is evaluated on a recently published communities and crime dataset\footnote{http://archive.ics.uci.edu/ml/datasets/Communities+and+Crime} for the United States, where the purpose is to find the attributes that best explain the total number of violent crimes per 100K population. Throughout, the section compares the MOGA-VS framework with other state-of-the-art techniques.
\subsection{Simulated Example 1}
This example is a function selection problem in additive regression. It evaluates the MOGA-VS algorithm on a simple problem for which the true model is known. To begin with, we provide the procedure, which we used to construct the dataset. Thereafter, we discuss the results obtained from MOGA-VS in the light of a ``knee-point-analysis'' of the Pareto-optimal frontier, where the mean squared error of the models is plotted against the number of coefficients. Next, the performance of the Pareto-models is analyzed in terms of information criteria to find out which models would be chosen, had we used a single objective procedure of minimizing AIC or BIC.
In all simulations, we have used the following parameter values for MOGA-VS: population size $N=26$, number of iterations $i=200$, crossover probability $p_c=0.9$, mutation probability $p_m=1/K$, and number of offspring $\lambda=N$.
We create a dataset with five independent variables, $(x_1, x_2, x_3, x_4, x_5)$, and one dependent variable, $y$. The true regression model has linear coefficients, but the dependent variable $y$ can be a nonlinear function of the independent variables. A row of independent variables and the dependent variable is generated as described below:
\begin{equation}
\begin{array}{c}
x_1 \sim \mathrm{rand}(0,1), \quad
x_2 \sim \mathrm{rand}(0,2), \quad
x_3 \sim \mathrm{rand}(0,1), \quad
x_4 \sim \mathrm{rand}(0,4), \quad
x_5 \sim \mathrm{rand}(0,5),\\
y = 10 + 5 x_1 + 2 e^{x_2} + 5 x_3 + 3 x_{3}^3 + 0.1 x_{4}^3 + 0.2\, \mathrm{norm}(0,1).\\
\end{array}
\label{eq:generate}
\end{equation}
Here, $\mathrm{rand}(a,b)$ represents a random number drawn uniformly between $a$ and $b$, and $\mathrm{norm}(0,1)$ is a normally distributed random number with zero mean and unit standard deviation.
This operation gives us a single row of the dataset $\{y, x_1, x_2, x_3, x_4, x_5\}$. Repeating the operation $n$ times, we generate a dataset with $n$ rows. For this example, we have taken $n=1000$.
This dataset is given as input to the algorithm along with information about the possible functional forms. The possible functional forms for the independent variables are $\{x_{i}, x_{i}^2, x_{i}^3, \log(x_{i}), e^{x_{i}}\}$ for $i \in \{1, 2, 3, 4, 5\}$. Using this information, the algorithm creates a new dataset with $25$ columns, in which each functional form of each variable is treated as a separate variable.
The largest regression model will therefore have $26$ coefficients: one for each of the functional forms and one for the constant term. When the algorithm is executed, we obtain the regression model with minimum mean squared error for each fixed number of regression coefficients. In other words, $26$ different models are produced as output, with $k \in \{1, 2, 3, \ldots, 26\}$ regression coefficients.
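For concreteness, the data-generating step of Eq.~(\ref{eq:generate}) and the 25-column basis expansion might be coded as follows. This is a sketch only; \texttt{make\_row} and \texttt{expand} are illustrative names, not the paper's code.

```python
import math
import random

def make_row(rng):
    """One observation generated according to Eq. (generate)."""
    x = [rng.uniform(0, b) for b in (1, 2, 1, 4, 5)]
    y = (10 + 5 * x[0] + 2 * math.exp(x[1]) + 5 * x[2]
         + 3 * x[2] ** 3 + 0.1 * x[3] ** 3 + 0.2 * rng.gauss(0, 1))
    return x, y

def expand(x):
    """The 25 candidate columns: {x, x^2, x^3, log x, e^x} for each input."""
    forms = (lambda t: t, lambda t: t ** 2, lambda t: t ** 3,
             math.log, math.exp)
    return [f(v) for v in x for f in forms]
```

Each expanded row, together with a constant term, yields the 26 candidate coefficients described above.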
\subsubsection{Knee Point Analysis}\label{sec:knee-point-experiment1}
The MOGA-VS procedure is executed to generate the frontier of models shown in Figure~\ref{fig:front-initial}. The figure shows the initial random population (models) created by the algorithm and the final front it achieved. It can be seen how, over the generations, the algorithm has progressed towards the front and produced a diverse set of solutions. In this figure, the model with $6$ coefficients contains exactly the terms present in the true model. The regression model fitted on the true variables is as follows:
\begin{equation}
\begin{array}{c}
y = 10.0243 + 4.9926 x_1 + 2.0002 e^{x_2} + 4.9753 x_3 + 2.9963 x_{3}^3 + 0.0999 x_{4}^3\\
\end{array}
\label{eq:regress6}
\end{equation}
It is interesting to note that the mean squared error falls sharply when moving from the model with $1$ coefficient to the model with $5$ coefficients, and does not change significantly thereafter. The model with $5$ coefficients visually appears to be a ``knee point'' and represents a region of interest. The model containing the true variables is expected to lie close to this region. A closer look reveals that the Pareto-optimal model with $5$ coefficients is:
\begin{equation}
\begin{array}{c}
y = 5.4425 + 4.9951 x_1 + 2.0001 e^{x_2} + 4.5475 e^{x_3} + 0.0999 x_{4}^3\\
\end{array}
\label{eq:regress5a}
\end{equation}
The Pareto-optimal models with $6$ and $5$ coefficients differ primarily in the constant term and the terms involving $x_3$. The differing terms are $10.0243 + 4.9753 x_3 + 2.9963 x_{3}^3$ in the model with $6$ coefficients and $5.4425 + 4.5475 e^{x_3}$ in the model with $5$ coefficients; the remaining terms are nearly equal in the two models. Figure~\ref{fig:close} plots the sum of the differing terms for both models over the range $[0, 1]$ (the range of $x_3$). The two functions are very close in this range, and therefore the model with five coefficients also offers an acceptable fit.
\begin{figure*}[hbt]
\begin{minipage}[t]{0.45\linewidth}
\begin{center}
\epsfig{file=front-initial.eps,width=0.80\linewidth}
\end{center}
\vspace{-4mm}
\caption{Initial random models and the Pareto-optimal models.}
\label{fig:front-initial}
\end{minipage}\hfill
\begin{minipage}[t]{0.45\linewidth}
\begin{center}
\epsfig{file=close.eps,width=0.75\linewidth}
\end{center}
\vspace{-4mm}
\caption{Plot of the differing terms in the Pareto-optimal models with $6$ coefficients and $5$ coefficients.}
\label{fig:close}
\end{minipage}
\end{figure*}
\vspace{-3mm}
\begin{figure*}[hbt]
\begin{minipage}{0.48\linewidth}
\begin{center}
\epsfig{file=aic_vf.eps,width=0.8\linewidth}
\end{center}
\caption{AIC value function (minimization) on the two objective plane.}
\label{fig:aic_vf}
\end{minipage}\hfill
\begin{minipage}{0.48\linewidth}
\begin{center}
\epsfig{file=bic_vf.eps,width=0.8\linewidth}
\end{center}
\caption{BIC value function (minimization) on the two objective plane.}
\label{fig:bic_vf}
\end{minipage}
\end{figure*}
\subsubsection{Analysing Pareto-Optimal Regression Models Using Information Criteria}
Next, we compute the AIC and BIC values for all the frontier models obtained using MOGA-VS.
We observe that the minimum AIC and BIC values correspond to the models with $12$ (AIC: $-3.2962$) and $7$ (BIC: $-3.2550$) coefficients, respectively. This leads to the conclusion that an optimization using AIC or BIC as the objective function would select the model with $12$ or $7$ coefficients, respectively; AIC and BIC act as value functions in the two-objective case. For normally distributed errors (up to an additive constant, after dividing by $n$):
\begin{equation}
\begin{array}{c}
AIC=\frac{2 k}{n}+\log(MSE), \\
BIC=\frac{\log(n)\, k}{n}+\log(MSE).
\end{array}
\label{eq:aicbic}
\end{equation}
In our case, the first objective for minimization, $\varphi_1$, is the number of variables, and the second objective for minimization, $\varphi_2$, is the mean squared error. This implies that AIC and BIC can be written as:
\begin{equation}
\begin{array}{c}
AIC=\frac{2 \varphi_1}{n}+\log(\varphi_2), \\
BIC=\frac{\log(n)\, \varphi_1}{n}+\log(\varphi_2).
\end{array}
\label{eq:aicbic_2obj}
\end{equation}
For two objectives, these equations represent value functions which lead to a single solution. Figures~\ref{fig:aic_vf} and \ref{fig:bic_vf} show these value functions in the objective space $(\varphi_1, \varphi_2)$. The y-axis is plotted on a log scale because the mean squared error values are very close to each other beyond $\varphi_1=5$.
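Treating AIC and BIC as value functions over a frontier can be mimicked directly: given the frontier's $(\varphi_1,\varphi_2)$ pairs, pick the index minimizing $c\,\varphi_1/n + \log\varphi_2$, with $c=2$ for AIC and $c=\log n$ for BIC. The sketch below (an illustrative helper) assumes natural logarithms and the per-observation Gaussian form of the criteria.

```python
import math

def ic_argmin(ks, mses, n, penalty):
    """Index of the frontier model minimizing penalty*k/n + log(MSE).

    ks: numbers of coefficients along the frontier; mses: the matching
    mean squared errors; n: sample size; penalty: 2 for AIC, log(n) for BIC.
    """
    vals = [penalty * k / n + math.log(e) for k, e in zip(ks, mses)]
    return min(range(len(vals)), key=vals.__getitem__)
```

Because BIC penalizes each extra coefficient more heavily than AIC whenever $\log n > 2$, BIC selects a model no larger than the one AIC selects on the same frontier.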
\subsection{Simulated Example 2}\label{sec:sim2}
We now provide the results obtained from a simulated example with 100 variables and 500 observations. To increase the difficulty of the problem, we make all 100 variables highly correlated using the following mechanism:
\begin{equation}
x_i = 2z + \delta_i; \quad\quad i=1,2,\ldots,100, \quad\quad \delta_i,z \stackrel{iid}{\sim} N(0,1).
\end{equation}
This induces a pairwise correlation of $0.80$ among all the variables. The response variable is then constructed as follows:
\begin{equation}
y = 0.1x_1 + 0.2x_2 + 0.3x_3 + \ldots + 1.0x_{10} + \epsilon; \quad\quad \epsilon \sim N(0,\sigma^2), \quad \sigma=1.
\end{equation}
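A minimal sketch of this data-generating mechanism in pure Python (the function name \texttt{make\_sim2} is ours): each variable is $2z$ plus independent noise, giving the stated pairwise correlation of $4/5 = 0.8$.

```python
import random

def make_sim2(n=500, p=100, seed=0):
    """Correlated design and response of the second simulated example.

    x_i = 2z + delta_i with z, delta_i iid N(0,1), so corr(x_i, x_j) = 4/5;
    y depends on the first 10 variables with coefficients 0.1, ..., 1.0.
    """
    rng = random.Random(seed)
    rows, ys = [], []
    for _ in range(n):
        z = rng.gauss(0, 1)
        x = [2 * z + rng.gauss(0, 1) for _ in range(p)]
        y = sum(0.1 * (j + 1) * x[j] for j in range(10)) + rng.gauss(0, 1)
        rows.append(x)
        ys.append(y)
    return rows, ys
```

With the common factor $z$ carrying variance 4 out of a total of 5, the empirical pairwise correlations concentrate around 0.8 as intended.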
Once the response and predictor variables are generated, they are fed into the MOGA-VS algorithm with the following parameter values:
Population size: $N=100$,
Maximum number of iterations: $i=500$,
Crossover probability: $p_c=0.9$,
Mutation probability: $p_m=1/K$,
No. of offspring: $\lambda=N$.
\begin{figure*}
\begin{minipage}[t]{0.49\linewidth}
\begin{center}
\epsfig{file=ex1-figure1.eps,width=0.99\linewidth}
\end{center}
\caption{A part of the MOGA-VS frontier and a part of the Lasso frontier obtained using the simulated dataset from a sample run. The true model is also plotted.}
\label{fig:ex1-figure1}
\end{minipage}\hfill
\begin{minipage}[t]{0.49\linewidth}
\begin{center}
\epsfig{file=ex1-figure3.eps,width=0.99\linewidth}
\end{center}
\caption{The difference between the number of correct variables and the number of incorrect variables contained in the models produced by MOGA-VS and Lasso.}
\label{fig:ex1-figure3}
\end{minipage}
\end{figure*}
The algorithm produces a Pareto-frontier of models with complexities varying from 1 to 100. Part of the frontier produced by the algorithm for one dataset is shown in Figure \ref{fig:ex1-figure1}. We have performed a comparative study in which we examine the performance of our method against the Lasso (\cite{lasso}) scheme.
The Lasso frontier is generated by solving a number of single-objective optimization problems with different parameter values\footnote{The Lasso parameter was incremented from 0 in steps of $0.01$, and a single-objective optimization problem was solved for each parameter value until a model was obtained which includes all the variables.}. Figure \ref{fig:ex1-figure1} also shows the frontier obtained from the Lasso scheme on the same dataset. To evaluate the validity of the Pareto-optimal frontier obtained using MOGA-VS, we compare it with an exhaustive branch-and-bound search, performed for complexities 1 to 25; the results are shown in the same figure. The models obtained using MOGA-VS match those obtained using the exhaustive search.
Next, we evaluate the performance of the approaches in terms of the number of correct and incorrect variables included in the suggested models. We assign a value to each model equal to the difference between the number of correct and incorrect variables included. For instance, the true model is assigned a value of 10, as it contains 10 correct and 0 incorrect variables; any other model has a value less than 10. Based on this, Figure \ref{fig:ex1-figure3} plots the values assigned to the models along the MOGA-VS and Lasso frontiers.
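The value assigned to each model can be computed as below (an illustrative helper, not the paper's code): count selected variables that are in the true support, subtract those that are not.

```python
def model_value(mask, true_vars):
    """(# correct variables included) - (# incorrect variables included).

    mask: 0/1 sequence encoding the model; true_vars: set of indices of
    the variables present in the true model.
    """
    sel = {i for i, b in enumerate(mask) if b}
    return len(sel & true_vars) - len(sel - true_vars)
```

The true model scores 10 in this example; adding spurious variables or dropping true ones both lower the score.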
\begin{figure}
\begin{center}
\epsfig{file=ex1-figure2-10runs.eps,width=0.55\linewidth}
\end{center}
\caption{The models obtained from 10 different runs of MOGA-VS and Lasso for the simulated dataset. The true model is also plotted.}
\label{fig:ex1-figure2}
\end{figure}
We have performed a simulation study in which we execute each of the methods (MOGA-VS and Lasso) on 10 different datasets to assess precision and accuracy. Figure \ref{fig:ex1-figure2} shows the results obtained from the 10 sample runs of both methods. It is easy to observe that the MOGA-VS scheme offers high accuracy and precision, as its frontiers always pass close to the true model, while most of the models produced by Lasso lie far from the frontier. However, it should be noted that Lasso is not expected to produce models on the Pareto-optimal front. We also execute stepwise regression on the simulated data; the average number of predictor variables chosen by the method across the 10 datasets is 14.70, whereas the true model has 10 variables.
The results produced by MOGA-VS on this simulated example demonstrate its ability to explore the Pareto-optimal frontier of trade-off models. It is noteworthy that in this example with highly correlated variables, the true model does not lie in the knee region of the MOGA-VS frontier. This illustrates the inherent difficulty of the model selection task and cautions against relying entirely on any one particular model selection scheme. Some of the existing methods, such as the information criteria, stepwise regression and the Lasso, have strong theoretical foundations; however, the different models proposed by these methods highlight the subjective nature of the model selection task.
\subsection{Communities and crime}
The communities and crimes dataset (\cite{redmond09}) is formed as a combination of the socio-economic and law enforcement data from the 1990 US Census. The data also includes crime statistics from the 1995 FBI Uniform Crime Report. As discussed by \cite{redmond02}, the data set was originally collected to create a data-driven software tool called Crime Similarity System (CSS) for enabling cooperative information sharing among police departments. The idea in CSS is to utilize a variety of context variables ranging from socioeconomic, crime and enforcement profiles of cities to generate a list of communities that should be good candidates to co-operate due to their similar crime profiles.
To demonstrate the performance of the MOGA-VS framework, we consider the data-mining task of finding variables that best predict how many violent crimes are committed per 100K people. The number of candidate variables is 122, which corresponds to a hypothesis space $\mathcal{H}$ of size $2^{122}$. All of the variables have been normalized to the $[0,1]$ interval to put all the data on the same relative scale. The number of observations (or cases) is 1994, and each observation represents a single city or community. According to \cite{redmond02}, the variables have been chosen in close co-operation with police departments to find a collection of factors that provide good coverage of the different aspects related to the community environment, law enforcement and crime. However, some of the variables included in the dataset could not be used directly ``as is'' due to a large number of missing values. To alleviate this, an imputation technique\footnote{The imputation was performed using the method {\em imputeData} (\cite{schafer97}) in the mclust library in R.} was used to replace the missing values on 20 attributes.
The MOGA-VS algorithm used the following parameter values:
Population size: $N=122$,
Maximum number of iterations: $i=500$,
Crossover probability: $p_c=0.9$,
Mutation probability: $p_m=1/K$,
No. of offspring: $\lambda=N$.
\subsubsection{Analysing the Pareto-optimal frontier}
\begin{figure}[t]
\begin{center}
\epsfig{file=convergence_plot.eps,width=0.55\linewidth}
\end{center}
\caption{Pareto-optimal regression models: Mean squared error of the models has been shown on the y-axis and number of coefficients on the x-axis.}
\label{fig:front-crime}
\end{figure}
A description of the Pareto-optimal frontier is provided in Figure~\ref{fig:front-crime}. The plot shows the progress of the MOGA-VS algorithm when all 1994 observations are considered. In addition to the final frontier, snapshots of intermediate generations are shown to illustrate the convergence towards the optimal front. The algorithm is able to provide a good approximation of the Pareto-frontier already by the 100th generation; however, more generations are needed to ensure convergence to the true frontier. The final result is the set of non-dominated solutions obtained after executing the algorithm for 500 generations. The plot for generation $1$ denotes the initial random models. It can be seen from the graph that the initial random models lie in the region close to 61 variables; the reason is that each initial bit is chosen to be 0 or 1 with a $50\%$ probability, so the 122-bit chromosome contains 61 variables in expectation. The algorithm is implemented in MATLAB and required a total execution time of $37.27$ minutes on a Linux machine with a 2.5 GHz Intel dual-core processor and 2 GB of RAM. A total of 61,000 regression models were solved to arrive at the final frontier.
Most of the time we are interested in parsimonious models, so initializing the population with fewer 1s would speed up convergence. Convergence can be further enhanced by specifying constraints in the algorithm to perform a restricted search, producing only models within a specified range of sizes. Given a variable collection with 122 candidates, we would hardly want the final models to contain more than 20 variables; therefore, a faster approximation of the interesting region of the Pareto-optimal frontier can be obtained by restricting the search to models with 1 to 20 variables. Instead of starting the MOGA-VS algorithm with a random population, it is also possible to seed the initial population with near-Pareto-optimal solutions: one of the stepwise selection techniques could first be executed on the dataset, and the models along its trajectory could then be used to generate the starting population for MOGA-VS. Trajectory models could be a much better starting guess than a random population. However, in this paper we do not use any such starting guesses, in order to show that MOGA-VS alone can reach the set of Pareto-optimal solutions.
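The biased initialization suggested above amounts to drawing each bit as Bernoulli($p$) with a small $p$ instead of $0.5$; a sketch, with parameter names of our own choosing:

```python
import random

def biased_init(N, K, p_one=0.1, seed=0):
    """Initial population biased toward parsimonious models.

    Each of the N members is a K-bit mask whose bits are 1 with
    probability p_one, so each model contains about K * p_one variables.
    """
    rng = random.Random(seed)
    return [tuple(rng.random() < p_one for _ in range(K)) for _ in range(N)]
```

With $K=122$ and $p=0.1$, the initial models contain roughly 12 variables instead of 61, starting the search near the region of interest.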
Based on visualization of the frontiers, we find that the knee of the curve lies in the region of 5 to 15 variables. The models which explain most of the variation in the response variable are the ones in the knee region; the incremental contribution of the remaining variables is relatively small. This means that incorporating more explanatory variables would yield only a minor additional explanation of the variation, so choosing one of the models from the knee region offers a good compromise between goodness of fit and complexity. In Table~\ref{tab:gant-chart} we provide the HS-plot, which shows all non-dominated models with $5$ to $15$ variables produced by the MOGA-VS algorithm. The variables present in a model are marked as $1$ and the others as $0$. This chart provides useful information about which variable(s) enter and which leave the model when its size is increased by $1$. Consider a scenario where increasing the model size from $k$ to $k+1$ causes one variable to leave the model and two variables to enter it. This suggests that the explanatory power of the two entering variables exceeds that of the leaving variable when the remaining $k-1$ variables are kept intact. The chart helps a user build insight about the problem and enhances their understanding so as to choose a regression model wisely. With the background information provided by the MOGA-VS algorithm, one can proceed to a strategy for model selection. In the next subsection we discuss the results obtained by other variable selection strategies.
\begin{table}[htp]
\begin{center}
\vspace{4mm}
\caption{Models in the knee region of the Pareto-optimal frontier}
\vspace{3mm}
\resizebox{.75\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|}
\hline
racepctblack & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
PctIlleg & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
PctPersDenseHous & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
HousVacant & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
MalePctDivorce & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
pctWWage & 0 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & 0 & 0 \\ \hline
pctUrban & 0 & 0 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
NumStreet & 0 & 0 & 0 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
numbUrban & 0 & 0 & 0 & 0 & \cellcolor[gray]{0.9}1 & 0 & 0 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
RentLowQ & 0 & 0 & 0 & 0 & 0 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
MedRent & 0 & 0 & 0 & 0 & 0 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
MedOwnCost...Mtg & 0 & 0 & 0 & 0 & 0 & 0 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
PctWorkMom & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
pctWSocSec & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
PctKids2Par & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cellcolor[gray]{0.9}1 & \cellcolor[gray]{0.9}1 \\ \hline
LemasSwFTFieldOps & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cellcolor[gray]{0.9}1 \\ \hline
\multicolumn{ 12}{|c|}{} \\ \hline
No. of Variables & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline
MSE $\times$ 100 & 2.00 & 1.92 & 1.89 & 1.88 & 1.86 & 1.85 & 1.84 & 1.83 & 1.82 & 1.81 & 1.80 \\ \hline
\end{tabular}
}
\label{tab:gant-chart}
\end{center}
\vspace{-5mm}
\end{table}
\subsubsection{Results obtained from other techniques}
In this section, we present the results from other state-of-the-art techniques used for variable selection. Figure~\ref{fig:figure2} shows the frontier obtained using MOGA-VS against the frontier produced by the Lasso scheme of \cite{lasso}. The Lasso frontier is obtained by solving single objective optimization problems with different parameter values\footnote{The Lasso parameter was incremented from 0 in steps of $0.01$, and a single objective optimization problem was solved for each parameter until a model was obtained which includes all the variables.}. Along with the two frontiers, the figure shows the trajectory of a stepwise regression scheme, which is found to be close to the frontiers. The model shown with a cross is the final model chosen by the stepwise regression method. The initial points for the Lasso and stepwise methods are not visible in the figure as they have a high MSE value.
Figure~\ref{fig:figure3} shows the models obtained using two different parameter values for the Leaps algorithm\footnote{http://cran.r-project.org/web/packages/BMA/BMA.pdf}, i.e. $nbest=1$ and $nbest=10$. The parameter $nbest$ represents the number of models for each variable size to be generated by the leaps algorithm. The results produced in the first two figures are obtained by utilizing the entire data-set for training. In both figures, we observe that the models produced by Lasso, Stepwise and Leaps are dominated by the MOGA-VS frontier.
To examine the sensitivity of the model selection techniques to the choice of training and evaluation data, we proceed with another experiment, where the original data-set is divided into a training and an evaluation set. To obtain average results, we create multiple test-sets of training and evaluation data by randomly choosing $50\%$ of the rows from the original data-set as the training set and the remaining rows as the evaluation set. Aggregated results of the randomization experiment are furnished in Tables~\ref{tab:simulation1} and~\ref{tab:simulation2}, which provide a performance metric for all the methods across 20 different test sets of training and evaluation data. For the $i^{th}$ test-set we generate the frontiers using one of the methods, and calculate the average MSE (say $\kappa_{i}^{method}$) for a part of the frontier models\footnote{The reason for considering only a part of the frontier is that not all the methods produce models across the entire frontier. The stepwise trajectory contains models with 1 to 25 variables and Leaps contains models with 6 to 20 variables.}. The comparison metric is computed by taking the average of $\kappa_{i}^{method}$ across the 20 test sets (say $\kappa^{method}$) for each of the methods. A lower value of $\kappa^{method}$ denotes a better performance. We conclude from the results that the best-fit models for the training set tend to perform better on the evaluation set as well, although this need not always be true. The performance metric indicates a slightly better performance of the MOGA-VS algorithm on the training sets as well as the evaluation sets.
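Since the comparisons above repeatedly reduce to extracting the non-dominated (complexity, MSE) pairs from a set of candidate models, the following sketch of that filtering step may be helpful. It is only an illustration: the function name and the numbers are ours, not part of MOGA-VS or of the data-set.

```python
def pareto_frontier(models):
    """Return the non-dominated (n_vars, mse) pairs, sorted by n_vars.

    A model dominates another if it has fewer-or-equal variables and
    lower-or-equal MSE, being strictly better in at least one objective.
    After sorting by size, a model is non-dominated exactly when its MSE
    strictly improves on every smaller model kept so far.
    """
    frontier, best_mse = [], float("inf")
    for n_vars, mse in sorted(models):
        if mse < best_mse:
            frontier.append((n_vars, mse))
            best_mse = mse
    return frontier

# Hypothetical (n_vars, MSE) pairs: the 8-variable model is dominated
# by the 7-variable one, so it is filtered out.
candidates = [(5, 0.0200), (6, 0.0192), (7, 0.0189), (8, 0.0195), (9, 0.0186)]
frontier = pareto_frontier(candidates)
```

Applied to each of the randomized test-sets, averaging the MSE of the retained models over a fixed range of model sizes yields the $\kappa^{method}$ metric described above.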
\begin{table}[htp]
\caption{Values for $\kappa^{MOGAVS}$, $\kappa^{Lasso}$ and $\kappa^{Stepwise}$ computed from 20 frontiers. Models containing 1 to 25 variables were considered when taking the average.}
\label{tab:simulation1}
\begin{center}
\resizebox{.5\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
& MOGA-VS & Lasso & Stepwise \\ \hline
Training Set & 0.0184 & 0.0244 & 0.0218 \\
Evaluation Set & 0.0204 & 0.0277 & 0.0233 \\
\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[htp]
\caption{Values for $\kappa^{MOGAVS}$, $\kappa^{Leaps(nbest=1)}$ and $\kappa^{Leaps(nbest=10)}$ computed from 20 frontiers. Models containing 6 to 20 variables were considered when taking the average.}
\label{tab:simulation2}
\begin{center}
\resizebox{.7\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
& MOGA-VS & Leaps (nbest=1) & Leaps (nbest=10) \\ \hline
Training Set & 0.0181 & 0.0208 & 0.0195 \\
Evaluation Set & 0.0195 & 0.0217 & 0.0206 \\
\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{figure*}
\begin{minipage}[t]{0.49\linewidth}
\begin{center}
\epsfig{file=figure2.eps,width=0.99\linewidth}
\end{center}
\caption{A part of the MOGA-VS frontier, a part of the Lasso frontier and Stepwise trajectory obtained using the entire communities and crime data as training set.}
\label{fig:figure2}
\end{minipage}\hfill
\begin{minipage}[t]{0.49\linewidth}
\begin{center}
\epsfig{file=figure3.eps,width=0.99\linewidth}
\end{center}
\caption{A part of the MOGA-VS frontier and the Leaps results for two different parameter values, obtained using the entire communities and crime data as training set.}
\label{fig:figure3}
\end{minipage}
\end{figure*}
\begin{figure*}
\begin{minipage}[t]{0.49\linewidth}
\begin{center}
\epsfig{file=figure4.eps,width=0.99\linewidth}
\end{center}
\caption{A part of the MOGA-VS frontier, Lasso frontier, and Stepwise trajectory on the evaluation set when $50\%$ of the communities and crime data is used as training set and the remaining $50\%$ as evaluation set.}
\label{fig:figure4}
\end{minipage}\hfill
\begin{minipage}[t]{0.49\linewidth}
\begin{center}
\epsfig{file=figure5.eps,width=0.99\linewidth}
\end{center}
\caption{A part of the MOGA-VS frontier, and the Leaps results for two different parameter values, on the evaluation set when $50\%$ of the communities and crime data is used as training set and the remaining $50\%$ as evaluation set.}
\label{fig:figure5}
\end{minipage}
\end{figure*}
Figures~\ref{fig:figure4} and~\ref{fig:figure5} provide the results on the evaluation data-set for MOGA-VS, stepwise regression, Lasso and Leaps for a particular test-set out of the 20 randomly generated test-sets. The cross-mark in Figure~\ref{fig:figure4} is the model suggested by the stepwise regression method. We can observe from the graphs that the frontier for MOGA-VS is slightly ahead of the other frontiers, particularly towards models with a smaller number of variables.
\begin{table}[htp]
\vspace{5mm}
\caption{Experiment Results: Model variables, + denotes a positive coefficient, - denotes a negative coefficient and no sign denotes that the variable is absent in the model}
\label{tab:experiment2-pga}
\begin{center}
\resizebox{.75\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline
Variable & BMA-1 & BMA-2 & BMA-3 & BMA-4 & BMA-5 & Stepwise & PGA \\
\hline
racepctblack & $+$ & $+$ & $+$ & $+$ & $+$ & $+$ & $+$ \\
PctIlleg & $+$ & $+$ & $+$ & $+$ & $+$ & $+$ & $+$ \\
PctPersDenseHous & $+$ & $+$ & $+$ & $+$ & $+$ & $+$ & $+$ \\
HousVacant & $+$ & $+$ & $+$ & $+$ & $+$ & $+$ & $+$ \\
pctWWage & & $-$ & $-$ & & & & \\
pctUrban & $+$ & $+$ & $+$ & $+$ & $+$ & $+$ & $+$ \\
NumStreet & $+$ & $+$ & $+$ & $+$ & $+$ & $+$ & $+$ \\
numbUrban & $-$ & $-$ & $-$ & $-$ & $-$ & & \\
RentLowQ & & $-$ & $-$ & $-$ & & & $-$ \\
MedRent & & $+$ & $+$ & $+$ & & & \\
MedOwnCostPctIncNoMtg & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\
PctWorkMom & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\
PctKids2Par & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\
agePct12t29 & $-$ & & $-$ & $-$ & $-$ & $-$ & $-$ \\
pctWInvInc & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\
PctEmploy & $+$ & $+$ & $+$ & $+$ & & & \\
MalePctNevMarr & $+$ & & $+$ & $+$ & $+$ & $+$ & \\
& & & & & & $+11$ other & \\
No. of Variables & 14 & 15 & 17 & 16 & 13 & 23 & 12 \\
MSE$\times100$ & 2.03 & 1.98 & 1.97 & 2.02 & 2.03 & 1.83 & 2.08 \\
Post. Prob. & 0.295 & 0.2 & 0.168 & 0.141 & 0.074 & & \\
\hline
\end{tabular}
}
\end{center}
\end{table}
In Table~\ref{tab:experiment2-pga} we provide the results for the BMA, PGA and stepwise regression methods. A direct comparison with the MOGA-VS results can be obtained by comparing Table~\ref{tab:experiment2-pga} against Table~\ref{tab:gant-chart}. The table shows the variables which are present in the different models along with the coefficient sign patterns. A plus sign indicates a positive coefficient for the variable, a minus sign indicates a negative coefficient, and no sign indicates that the variable is absent from the model. For BMA we have presented the top $5$ models ranked by posterior probabilities. The best model proposed by BMA and the model proposed by PGA have 14 and 12 variables, respectively. Both of these models lie close to the knee region of the Pareto-frontier. On the other hand, stepwise regression proposes a model with $23$ variables, which could be rejected. A closer look at the table shows that the models proposed by BMA and PGA agree with each other and contain mostly common variables. If the user wants a less complex model with fewer than 12 variables, then the models in the knee region of the Pareto-frontier offer relevant alternatives. Finally, we would like to end the discussion without suggesting one particular model for the Communities and Crime example, as it is not possible to suggest one best solution in the presence of trade-offs. It is ultimately the user who needs to choose a compromise solution which is most suitable for his or her purposes.
\section{Minimization of Generalization Error}\label{sec:genError}
Until now, the focus of the paper has been primarily on the minimization of in-sample error. However, the problem with evaluating models using in-sample error is that a model may demonstrate adequate prediction capabilities on the training sample, but fail drastically on unseen data. Therefore, we are often interested in minimizing the generalization error rather than the in-sample error. In order to minimize the generalization error using MOGA-VS, one needs to choose an appropriate generalization error estimator which can be used to compare two models of similar complexity. There are a number of techniques available in the literature that could be used as estimators of the generalization error. In our illustration and experiments, we have used the cross-validation technique to assess the generalization properties of a model.
Once we have chosen an appropriate estimator, it is straightforward to extend the multi-objective approach proposed in this paper to generalization error minimization. If one wishes to minimize the generalization error, then the in-sample error minimization objective can simply be replaced with the generalization error minimization objective in MOGA-VS. The generalization error can be estimated using the multi-fold cross-validation estimator. Minimizing this error leads to a Pareto-optimal frontier of models which are expected to be less susceptible to overfitting. In the following steps, we briefly describe the computation of the multi-fold cross-validation estimator when the number of folds is chosen to be $k$:
\begin{enumerate}
\item Randomly partition the sample into $k$ equal subsamples
\item Choose one of the subsamples for validation and the remaining $k-1$ subsamples for training the model. Record the MSE value obtained from the validation subsample.
\item Repeat the cross validation task $k$ times, choosing one of the subsamples for validation and the remaining subsamples for training. Take the average of the MSE values obtained from $k$ different validation tasks.
\item The average MSE ($\mbox{MSE}_{cv}$) can be used as an estimator for generalization error.
\end{enumerate}
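The four steps above can be sketched as follows; this is a minimal illustration in which `fit` and `predict` are placeholders for whatever regression routine is plugged into MOGA-VS, not part of the algorithm itself.

```python
import random

def kfold_cv_mse(xs, ys, fit, predict, k=10, seed=0):
    """Multi-fold cross-validation estimate of the generalization error.

    fit(train_xs, train_ys) -> model; predict(model, x) -> prediction.
    Returns the average validation MSE over the k folds (MSE_cv).
    """
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)          # step 1: random partition
    folds = [idx[i::k] for i in range(k)]     # k (nearly) equal subsamples
    fold_mses = []
    for val in folds:                         # steps 2-3: rotate the folds
        held = set(val)
        train = [i for i in idx if i not in held]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        errs = [(predict(model, xs[i]) - ys[i]) ** 2 for i in val]
        fold_mses.append(sum(errs) / len(errs))
    return sum(fold_mses) / k                 # step 4: MSE_cv

# Toy usage with a constant (mean) predictor on noiseless data.
xs = list(range(40))
ys = [2.0 for _ in xs]
mse_cv = kfold_cv_mse(xs, ys, fit=lambda X, Y: sum(Y) / len(Y),
                      predict=lambda m, x: m)
```

In MOGA-VS, the value returned for a candidate subset of variables simply replaces the in-sample MSE when comparing two models of the same complexity.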
\begin{figure*}
\begin{minipage}[t]{0.49\linewidth}
\begin{center}
\epsfig{file=genError.eps,width=0.99\linewidth}
\end{center}
\caption{Models obtained by in-sample error minimization and generalization error minimization using MOGA-VS.}
\label{fig:genError}
\end{minipage}\hfill
\begin{minipage}[t]{0.49\linewidth}
\begin{center}
\epsfig{file=generalizationErrorCorrectVars.eps,width=0.99\linewidth}
\end{center}
\caption{The difference between the number of correct and the number of incorrect variables contained in the models produced using generalization error minimization and in-sample error minimization.}
\label{fig:genError2}
\end{minipage}
\end{figure*}
Next, in order to evaluate the idea of minimizing the generalization error obtained using cross validation along with complexity, we execute the MOGA-VS algorithm on simulated example 2 presented in Sub-section \ref{sec:sim2}. The sample size in this experiment is restricted to 200 observations to make the dataset more susceptible to over-fitting. This helps to compare results obtained through in-sample error (MSE) minimization and generalization error ($\mbox{MSE}_{cv}$ obtained from 10-fold cross validation) minimization.
We execute the MOGA-VS algorithm separately for both error minimization criteria to obtain the Pareto-optimal sets of variables corresponding to each complexity level.
The optimal models for both error minimization measures are presented in Figure \ref{fig:genError}, where the y-axis (log scale) represents the MSE values and the x-axis represents the complexity. It is clear that the models obtained using in-sample error minimization dominate the true model with respect to the MSE values and complexity as objectives. The most likely explanation for this is over-fitting.
Further, Figure \ref{fig:genError2} shows the plot of the difference between the number of correct variables and the number of incorrect variables contained in the models. The variables contained in the true model are considered correct variables and the rest of the variables are considered incorrect. It can be seen that the frontier corresponding to generalization error minimization contains a model which has all the correct variables and none of the incorrect variables. However, the same is not true for the frontier corresponding to the models obtained using in-sample error minimization. Thus, we observe that for datasets which are more susceptible to over-fitting, it is advisable to perform generalization error minimization instead of in-sample error minimization.
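The over-fitting mechanism described above can be reproduced in a few lines. The sketch below is ours and assumes NumPy; it uses a hypothetical design with 3 informative and 12 pure-noise predictors (not the exact simulated example of Sub-section \ref{sec:sim2}) and illustrates why in-sample MSE alone always rewards adding variables, whereas a held-out criterion need not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_true, p_noise = 200, 3, 12        # small sample, many spurious predictors
X = rng.standard_normal((n, p_true + p_noise))
beta = np.zeros(p_true + p_noise)
beta[:p_true] = [2.0, -1.5, 1.0]       # only the first 3 variables matter
y = X @ beta + rng.standard_normal(n)

def insample_mse(cols):
    """In-sample MSE of the least-squares fit on the given columns."""
    Xc = X[:, list(cols)]
    coef, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ coef
    return float(resid @ resid) / n

mse_true = insample_mse(range(p_true))            # the correct model
mse_full = insample_mse(range(p_true + p_noise))  # true + noise variables
# For nested least-squares models, adding columns can never increase the
# in-sample MSE, so the over-specified model always looks at least as good.
```

This is exactly why, on the in-sample frontier, models containing spurious variables can dominate the true model, while the cross-validation frontier penalizes them.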
\section{Conclusions}\label{sec:conclusions}
In this paper, we have proposed a Multi-objective Genetic Algorithm for Variable Selection (MOGA-VS) which can be used to produce the entire set of Pareto-optimal regression models. Once the optimal set of models is known, the most preferred model can be chosen by assessing these models. The proposed algorithm has been tested on simulated as well as real datasets, and the results have been presented. Comparison studies have been performed with state-of-the-art techniques like Lasso, BMA, stepwise regression and PGA. It has also been shown that the proposed method can easily be used for in-sample as well as out-of-sample error minimization, as desired. To conclude, the MOGA-VS algorithm can prove to be a useful tool when there are many predictor variables and a choice of a model with acceptable quality of fit and complexity is to be made. The frontier of solutions produced by the MOGA-VS scheme gives a visual overview of the entire model selection task and helps the user to make decisions efficiently.
\bibliographystyle{natbib}
\section{Introduction and Statement of the Main Results}\label{intro}
Throughout this paper, unless otherwise specified, $M$ denotes a closed surface and $\mathrm{End}^1(M)$ the space of all $C^1$-maps from $M$ into itself endowed with the usual $C^1$-topology. The elements of $\mathrm{End}^1(M)$ will be called \textit{endomorphisms}. Some of them display \textit{critical points}, that is, points at which the derivative is not an isomorphism; the remaining ones, without critical points, are the local diffeomorphisms and diffeomorphisms. An endomorphism $f \in \mathrm{End}^1(M)$ is said to be \textit{robustly transitive} if there exists a neighborhood $\mathcal{U}$ of $f$ in $\mathrm{End}^1(M)$ such that every $g \in \mathcal{U}$ is transitive, recalling that \textit{transitive} means that such a map has a dense forward orbit in the whole surface.
The main aim of our paper is to give necessary conditions and some topological obstructions for the existence of robustly transitive surface endomorphisms displaying critical points.
The first result we present concerns necessary conditions: we show that, in the case of surface endomorphisms displaying critical points, a weak form of hyperbolicity is necessary for robust transitivity. Concretely,
\begin{theorem}\label{thm-A}
Every robustly transitive surface endomorphism displaying critical points is a partially hyperbolic endomorphism.
\end{theorem}
The definition of partial hyperbolicity for endomorphisms requires some preliminary notions about cone-fields.
Recall that a \textit{cone-field} $\mathscr{C}$ on $M$ is a family of closed convex non-vanishing cones $\mathscr{C}(x) \subseteq T_xM$, one at each point $x$ of $M$. A cone-field $\mathscr{C}$ is said to be \textit{invariant} (or \textit{$k$-invariant}, if we want to emphasize the role of $k$) if $Df_x^k(\mathscr{C}(x))$ is contained in the interior $\mathrm{int}(\mathscr{C}(f^k(x)))$ of $\mathscr{C}(f^k(x))$ for all $x$ in $M$. Finally, an endomorphism $f$ is a \textit{partially hyperbolic endomorphism} if there exist $\lambda >1$, $\ell \geq 1$, and an invariant cone-field $\mathscr{C}$ satisfying:
\begin{itemize}
\item[{[PH1]}] \textit{Transversal to the kernel:} for all $x\in M$ and $n\geq 1$,
$$\ker(Df_x^n)\cap \mathscr{C}(x)=\{0\};$$
\item[{[PH2]}] \textit{Unstable property:} for all $x\in M$ and $v \in \mathscr{C}(x)$, $$ \|Df^{\ell}_x(v)\|\geq \lambda\|v\|.$$
\end{itemize}
Let us call this cone-field an \textit{unstable cone-field} for $f$ and denote it by $\mathscr{C}^u$. It should be noted that, up to taking an iterate, we can assume that the unstable cone-field $\mathscr{C}^u$ is also $\ell$-invariant. If an invariant cone-field satisfies only the transversality property ([PH1]), then we say that $f$ \textit{admits a dominated splitting}.
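To fix ideas, here is a standard linear illustration of the definition (it is ours, not taken from the text; the map below is partially hyperbolic but not transitive):

```latex
Consider the (non-invertible) linear endomorphism of $\mathbb{T}^2$ induced by
$A=\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}$, together with the constant
horizontal cone-field
\[
  \mathscr{C}(x)=\{(u,v)\in T_x\mathbb{T}^2 : |v|\leq |u|\}.
\]
Since $A$ is a local diffeomorphism, $\ker(DA^n_x)=\{0\}$ for every $n\geq 1$
and [PH1] holds trivially. Moreover, $DA_x(u,v)=(3u,v)$, so
$DA_x(\mathscr{C}(x))=\{(u,v) : 3|v|\leq |u|\}$ is contained in
$\mathrm{int}(\mathscr{C}(A(x)))$, and for $|v|\leq |u|$ one has
\[
  \|DA_x(u,v)\|^2 = 9u^2+v^2 \geq 5\,(u^2+v^2),
\]
so [PH2] holds with $\ell=1$ and $\lambda=\sqrt{5}$.
```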
\medskip
Let us briefly comment on the present state of the art.
Ma\~{n}\'{e} proved
in \cite{Mane-Closinglemma} that every robustly transitive surface diffeomorphism admits a hyperbolic structure on the whole surface. Ma\~{n}\'{e}'s result has no direct generalization to higher dimensions. D\'{i}az-Pujals-Ures and Bonatti-D\'{i}az-Pujals proved in \cite{DPU} and \cite{BDP}, for manifolds of dimension three and larger than four, respectively, that robust transitivity implies a weak form of hyperbolicity, called
\textit{volume hyperbolicity}\footnote{A diffeomorphism is volume hyperbolic if the invariant splitting $TM=E_1\oplus \cdots \oplus E_k$ is dominated and $Df$ restricted to $E_1$ and to $E_k$ is volume contracting and volume expanding, respectively. In particular, it implies partial hyperbolicity in dimension three.}. For local diffeomorphisms, Lizana-Pujals gave in \cite{LP} necessary and sufficient conditions for the existence of robustly transitive local diffeomorphisms, and, in particular, it is shown there that no form of weak hyperbolicity is necessary for robust transitivity. However, for endomorphisms admitting critical points, robust transitivity requires, as for diffeomorphisms, a weak form of hyperbolicity, as stated in Theorem \ref{thm-A} above.
The existence of an invariant cone-field on a surface imposes some topological obstructions. The next result gives a classification answering a natural question: which surfaces support robustly transitive endomorphisms?
\begin{theorem} \label{thm-B}
The only surfaces that might admit robustly transitive endomorphisms are either the torus $\mathbb{T}^2$ or the Klein bottle $\mathbb{K}^2$.
\end{theorem}
Some comments are in order. For diffeomorphisms, it follows from \cite{Mane-Closinglemma} that the only surface admitting a robustly transitive diffeomorphism is the 2-torus.
For local diffeomorphisms, it is well known that these maps are covering maps and that the only surfaces that can cover themselves are either the torus $\mathbb{T}^2$ or the Klein bottle $\mathbb{K}^2$. However, it was not known, until now, whether the existence of robustly transitive endomorphisms displaying critical points imposes topological obstructions.
Although some examples of robustly transitive local diffeomorphisms on the torus $\mathbb{T}^2$ and the Klein bottle $\mathbb{K}^2$ (such as expanding endomorphisms\footnote{$f \in \mathrm{End}^1(M)$ is an expanding endomorphism if $\|Df_x(v)\|> \|v\|$ for all $x \in M$ and $v \in T_xM$.}) are well known, examples of robustly transitive endomorphisms admitting critical points first appeared in \cite{Berger-Rovella} and \cite{ILP} on the torus $\mathbb{T}^2$; recently, in \cite{CW2}, we built a new class of robustly transitive endomorphisms exhibiting persistent critical points on the 2-torus and on the Klein bottle, based on the geometric construction developed by Bonatti-D\'{i}az in \cite{BD} to produce robustly transitive diffeomorphisms. The proof of Theorem \ref{thm-B} for endomorphisms displaying critical points uses a classical argument based on the existence of continuous subbundles over $M$, which follows from Theorem \ref{thm-A}, and it will be presented in Section \cref{sec:main-thm-consequences}.
An immediate and interesting consequence of Theorem \ref{thm-B} is the following.
\begin{cor-thm}
There are no robustly transitive endomorphisms on the sphere $\mathbb{S}^2$.
\end{cor-thm}
The third result we present concerns topological obstructions as well. More precisely,
\begin{theorem}\label{thm-C}
The action induced on the first homology group of $M$ by a transitive endomorphism admitting a dominated splitting has at least one eigenvalue with modulus larger than one.
\end{theorem}
In particular, the proof of Theorem \ref{thm-C} will allow us to conclude the following.
\begin{cor-thm}\label{cor1}
The action induced on the first homology group of $M$ by a partially hyperbolic endomorphism has at least one eigenvalue with modulus larger than one.
\end{cor-thm}
From Theorem \ref{thm-C} we get the following corollary.
\begin{cor-thm}\label{cor2}
The action induced on the first homology group of $M$ by a robustly transitive endomorphism has at least one eigenvalue with modulus larger than one.
\end{cor-thm}
Consequently, since an endomorphism homotopic to the identity induces the identity on the first homology group, whose eigenvalues all have modulus one, one obtains the following.
\begin{cor-thm}\label{cor3}
There are no robustly transitive surface endomorphisms homotopic to the identity.
\end{cor-thm}
Let us comment on the main ideas of the proof of Theorem \ref{thm-C}. First, we prove that the length of the iterates of any arc tangent to the cone-field grows exponentially; later, based on \cite{BBI04}, where it is proved that the action induced on the first homology group by a partially hyperbolic diffeomorphism of a three-manifold is partially hyperbolic as well, we prove that if the action has no eigenvalue of modulus greater than one, then there exists an arc whose iterates have sub-exponentially growing length, a contradiction. In order to prove the first part, we adapt the arguments of \cite{Pujals-Sambarino} to our setting.
For further details and the proof of Corollaries \ref{cor1} and \ref{cor2} see Section~\cref{homotopy}.
\medskip
Finally, we introduce the main result of this work, giving a dichotomy that will be used to prove Theorem \ref{thm-A}. Before continuing, let us fix the following notation and introduce some preliminary results. We denote the set of all the critical points of $f$ by $\mathrm{Cr}(f)$ and by $\mathrm{int}(\mathrm{Cr}(f))$ its interior; further, $|\ker(Df)|$ denotes the maximum of $|\ker(Df_x)|$ over $M$, where $|\ker(Df_x)|$ is the dimension of $\ker(Df_x)$. An endomorphism
$f \in \mathrm{End}^1(M)$ is said to have \textit{full-dimensional kernel} if there exist an integer $n \geq 1$ and $x \in \mathrm{Cr}(f)$ such that
\begin{align}
|\ker(Df^n_x)|=\dim M.
\end{align}
This property is an obstruction for robust transitivity and will play an important role in our approach. More precisely,
\begin{keylemma*}[Full-dimensional kernel obstruction]\label{key obstruction}
Let $f: M \to M$ be an endomorphism having full-dimensional kernel. Then $f$ cannot be a robustly transitive endomorphism.
\end{keylemma*}
Since the proof is not difficult, we present it right away.
\begin{proof}[Proof of Key Lemma]
Since $|\ker(Df_x^n)|=2$, one has that $Df_x^n=0$. Then, by \hyperref[Franks-lemma]{Franks' Lemma}, there exists $\tilde{f}$ $C^1$-close to $f$ such that $\tilde{f}^n$ behaves like $Df_x^n$ in a neighborhood of $x$, and so $\tilde{f}^n$ is constant in a neighborhood of $x$, implying that $\tilde{f}$ cannot be transitive.
\end{proof}
An interesting fact, which will be shown in Section \cref{sec:main-thm}, is the following dichotomy, which appears ``naturally'' as an obstruction for the domination property.
\begin{mainthm*}\label{main-theo}
Let $f: M \to M$ be a transitive endomorphism such that $\mathrm{int}(\mathrm{Cr}(f))$ is nonempty and $|\ker (Df^n)|=1$ for every $n\geq 1$. Then,
\begin{itemize}
\item either $f$ admits a dominated splitting;
\item or $f$ can be approximated by endomorphisms having full-dimensional kernel.
\end{itemize}
\end{mainthm*}
Some comments related to the \hyperref[main-theo]{Main Theorem} are in order. Some similarities with the diffeomorphism setting can be observed in this result. For diffeomorphisms, the dichotomy guarantees that either one has the domination property or one can create, up to a perturbation, some obstructions (sinks and sources) for robust transitivity. In both settings, one starts by proving the existence of a dominated splitting over a proper subset, which can then be extended to the whole manifold. The choices of the obstructions for robust transitivity and of the proper subset play a fundamental role in the proof; the choices are related in a way that allows us to define an invariant splitting over a proper subset for which the absence of the domination property (see
Definition~\ref{def-DS}) implies an obstruction for robust transitivity. Let us make a brief discussion of previous results.
\medskip
$\bullet$ For diffeomorphisms, the obstructions used for robust transitivity are sinks and sources\footnote{A sink (source) is a periodic point (e.g., $f^n(p)=p$) whose linear map $L=Df^n_p$ has all its eigenvalues with modulus less (greater) than one. The existence of a sink (source) implies the existence of a neighborhood $U$ of $p$ such that $f^n(\overline{U})\subseteq U \, (f^{-n}(\overline{U})\subseteq U$), which contradicts transitivity.}; the proper subset is the set of all periodic points in \cite{Mane-Closinglemma}, and the homoclinic class of a saddle point in \cite{DPU,BDP}.
\medskip
\begin{itemize}
\item[-] In \cite{Mane-Closinglemma}, the obstructions mentioned above imply that the periodic points of a robustly transitive diffeomorphism are hyperbolic. Then, one can define a ``natural'' invariant splitting over the periodic points. It is shown that, in the absence of the domination property, sinks and sources can be created up to a perturbation, which is a contradiction. Therefore, the hyperbolic splitting over the periodic points is dominated and can be extended to the closure of the periodic points, which generically is the whole surface.
\medskip
\item[-] In \cite{DPU} and \cite{BDP}, the proper subset is the homoclinic class of a hyperbolic saddle point. It carries a ``natural'' splitting given by the transversal intersection of the stable and unstable manifolds of the saddle. They prove that the absence of domination of such a splitting implies the existence of sinks and sources for a perturbation, which is an obstruction for robust transitivity. Finally, they use classical results, such as the Closing Lemma and the Connecting Lemma, to extend the splitting to the whole manifold.
\end{itemize}
$\bullet$ For non-invertible endomorphisms, observe that only sinks remain obstructions for robust transitivity (e.g., expanding endomorphisms are robustly transitive and have sources). However, in the setting of the \hyperref[main-theo]{Main Theorem}, the alternative obstruction for robust transitivity, the full-dimensional kernel, allows us to define a proper subset as the set of all full orbits (see Section~\cref{section-ph}) that enter the set of critical points infinitely many times in the past/future. Concretely,
\begin{align}\label{Lambda-set}\tag{$\ast$}
\Lambda_f=\left\{(x_i)_i \subseteq M_f \left| \begin{array}{ll}
f(x_i)=x_{i+1}; \ \ \text{and} \ \ x_i \in \mathrm{Cr}(f)\ \
\text{for}\\ \text{infinitely many negative/positive} \ \ i \in \mathbb{Z}
\end{array} \right.\right\}.
\end{align}
In general, the authors do not know if this set is ``typically'' nonempty. However, it will be shown in Section \cref{sec:main-thm} that if $f$ satisfies the hypotheses of the \hyperref[main-theo]{Main Theorem}, then $\Lambda_f$ is a dense subset; further details are given in Section \cref{construction-lambda}. More generally, in Section \cref{sec:main-thm-consequences} (see Lemma \ref{typically}), it will be shown that every robustly transitive endomorphism $f$ displaying critical points can be approximated by endomorphisms $g$ whose corresponding set $\Lambda_g$, given as in \eqref{Lambda-set}, is dense.
Hence, we first assume that $\Lambda_f$ is a nonempty set and define the following subbundles,
\begin{align}\label{def:EF}\tag{$\ast \ast$}
E(x_j)=\ker\bigl(Df^{\tau^{+}_j+1}_{x_j}\bigr) \ \ \text{and} \ \ F(x_j)=\mathrm{Im}\bigl(Df^{|\tau^{-}_j|}_{x_{\tau_j^{-}}}\bigr),
\end{align}
where $\tau^{+}_j$ is the first time the orbit $(x_{j+i})_i$ enters the set of critical points and $\tau^{-}_j$ is the first time in the past at which the orbit $(x_{j+i})_i$ leaves the set of critical points. In Section \cref{sec:main-thm}, we will prove that either $E\oplus F$ over $\Lambda_f$ is dominated in the sense of Definition \ref{def-DS}, or $f$ can be approximated by an endomorphism having full-dimensional kernel. Finally, we use the equivalence between Definition \ref{def-DS} and the definition of dominated splitting given at the beginning of this section ([PH1]), see Section \cref{section-ph} for details, to conclude the proof of the \hyperref[main-theo]{Main Theorem}.
The novelties in our approach are the full-dimensional kernel obstruction for robust transitivity, the choice of $\Lambda_f$, and the construction of the splitting $E\oplus F$ over $\Lambda_f$. Furthermore, no classical result, such as the Closing Lemma or the Connecting Lemma, will be used. These are the most important differences between our approach and the approaches in \cite{Mane-Closinglemma,DPU} and \cite{BDP}.
Finally, Section~\cref{sec:exp direction} is devoted to proving that the dominated splitting obtained in Section~\cref{sec:main-thm-consequences} is in fact partially hyperbolic, that is, the extremal dominating bundle is expanding, finishing the proof of Theorem \ref{thm-A}.
\subsection{Sketch of the proof} The strategy for proving the results stated above is as follows. Consider a robustly transitive endomorphism displaying critical points.
\begin{enumerate}
\item The existence of critical points and transitivity allow us to construct a ``proper subset'' $\Lambda_f$ defined by \eqref{Lambda-set} (Section \cref{construction-lambda}).
\item Observe that the kernel of the differential is at most one-dimensional. Otherwise, we are able to find a perturbation having an attracting periodic point, which contradicts the transitivity of nearby maps (Key Lemma).
\item The previous observation provides us with a ``natural'' candidate for an invariant splitting over the critical points, since one of the subbundles is determined by the kernel of the differential over the critical points and the other subbundle is given by a ``transversal'' direction to the kernel. Then, this splitting is extended to the iterates and can be defined over the ``proper subset'' as in \eqref{def:EF} (Section \cref{construction-lambda} and \cref{pf-key-thm}).
\item For every map near the initial one, the angle between the invariant subbundles over the ``proper subset'' is uniformly bounded away from zero.
Our approach to proving this differs slightly from the diffeomorphism-setting approach in \cite{Mane-Closinglemma,DPU, BDP}. For invertible maps, if the angle goes to zero, sinks and sources can be created, and both are obstructions to transitivity. However, in our setting sources are not an obstruction to transitivity; instead we produce a full-dimensional kernel, contradicting the fact that the initial map is robustly transitive (Section \cref{pf-key-thm}).
\item Then, the splitting is, in fact, a dominated splitting over the ``proper subset''. Furthermore, this dominated splitting extends to the whole surface, and its existence is an open property. The strategy for proving the domination property is to show, first, that it holds in a neighborhood of the critical points; second, that it extends to the ``proper subset''; and, finally, that it holds on the whole surface.
For this, roughly speaking, if the domination property fails, we can find a perturbation having full-dimensional kernel, contradicting the fact that the initial map is robustly transitive (Sections \cref{uniform-DP} and \cref{sec:main-thm-consequences}).
\item Once we have the existence of the dominated splitting, we obtain topological obstructions determining which surfaces support robustly transitive endomorphisms (Section \cref{sec:main-thm-consequences}).
\item Next, we prove that this splitting has a topologically expanding behavior in the direction transversal to the kernel. Indeed, we show
that the length of the iterates of arcs tangent to the extremal dominating bundle grows exponentially
(Section \cref{homotopy}).
\item Finally, we show that the extremal dominating bundle is, in fact, uniformly expanding, so that the initial map is partially hyperbolic, as desired (Section \cref{sec:exp direction}). For this,
we assume by contradiction that the extremal dominating subbundle is not expanding. Then, the domination property implies that the dominated subbundle is contracting and, up to a perturbation, there is a sink, contradicting the fact that the initial map is robustly transitive.
\end{enumerate}
\section{Preliminaries}\label{section-ph}
This section introduces the basic notions that will be used throughout this paper, such as the notion of partial hyperbolicity for endomorphisms displaying critical points in terms of an invariant splitting. It also presents some fundamental properties of dominated splittings, such as uniqueness and continuity of the splitting, and the equivalence between the two notions of partial hyperbolicity for endomorphisms, in terms of cone-fields and of invariant splittings. For the sake of clarity, we prove that both definitions are equivalent and then use either of them according to the situation. It is important to recall that, since we are in a non-invertible setting, each point may have several pre-images, so one has to be careful when considering backward iterates. Readers who are familiar with these notions can skip this section and come back to it if necessary.
\subsection{Dominated splitting and partial hyperbolicity for endomorphisms}\label{subsection-DS}
Due to the fact that an endomorphism might have several pre-images, it is appropriate to define a splitting over the space of full orbits (or simply, orbits). Let us denote by $M^{\mathbb{Z}}$ the product space and, for every $f \in \mathrm{End}^1(M)$, define the space of all orbits of $f$ by
\begin{align}
M_f=\{(x_i)_i \in M^{\mathbb{Z}}: f(x_i)=x_{i+1}, \forall i \in \mathbb{Z}\}.
\end{align}
Denote by $TM_f$ the vector bundle over $M_f$ defined as the pullback of $TM$ by the projection $\pi_0:M_f \to M$ given by $\pi_0((x_i)_i)=x_0$. In particular, it is well known that $TM_f$ and $TM$ are isomorphic. Unless otherwise specified, we use $T_{x}M$ to denote both the fiber of $TM$ at $x \in M$ and the fiber of $TM_f$ at $(x_i)_i \in M_f$ with $x_0=x$. Furthermore, it should be noted that $f$ acts on $M_f$ as a shift map $(x_i)_i\mapsto (x_{i+1})_i$ and the derivative acts from $T_{x_i}M$ to $T_{x_{i+1}}M$. Then, we say that a subset $\Lambda$ of $M_f$ is \textit{$f$-invariant} if it is invariant under the shift map. Before defining dominated splitting (in terms of an invariant splitting) for endomorphisms displaying critical points, we recall the definition for local diffeomorphisms (or diffeomorphisms).
\medskip
An $f$-invariant subset $\Lambda$ of $M_f$ is said to admit a dominated splitting for $f$ if there exist $\alpha>0,\, \ell \geq 1,$ and two one-dimensional bundles $E$ and $F$ such that for all $(x_i)_i \in \Lambda$ and $i \in \mathbb{Z}$ the following hold:
\begin{itemize}
\item[-] \textit{Invariance of the splitting:}
\begin{align*}
& Df(E(x_i))= E(f(x_i)) \ \ \text{and} \ \ Df(F(x_i))=F(f(x_i));\\ & \text{and}, \ \ T_{x_i}M=E(x_i)\oplus F(x_i);
\end{align*}
\smallskip
\item[-] \textit{Uniform angle:} $\varangle(E(x_i),F(x_i))\geq \alpha$;
\smallskip
\item[-] \textit{Domination Property:}
\begin{align*}
\|Df^{\ell} \mid_{E(x_i)}\|\leq \frac{1}{2}\|Df^{\ell}\mid_{F(x_i)}\|.
\end{align*}
\end{itemize}
$Df^{\ell}\mid_{E(x)}$ and $Df^{\ell}\mid_{F(x)}$ denote the restriction of $Df^{\ell}_x$ to the linear subspaces $E(x)$ and $F(x)$, respectively.
\begin{rmk}\label{rmk:angles}
\begin{itemize}
\item[]
\item[-] In this setting, since $T_xM=E(x)\oplus F(x)$, there exists a linear map $\phi_x:E(x) \to E^{\perp}(x)$ whose graph is the subspace $F(x)$. Then, we define the angle between $E(x)$ and $F(x)$ by
$$\|\phi_x\|=\tan \varangle(E(x),F(x)).$$
\smallskip
\item[-] Although we denote the subbundles by $E(x)$ and $F(x)$, we would like to emphasize that, in general, $E$ depends on the forward orbit $($i.e., $(x_i)_{i\geq 0}$ with $x_0=x$$)$ and $F$ depends on the backward orbit $($i.e., $(x_i)_{i\leq 0}$ with $x_0=x$$)$.
\end{itemize}
\end{rmk}
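To fix ideas, the three conditions above can be checked numerically for a single linear map (a toy sketch; the diagonal map, the slope $0.3$ defining $F$, and $\ell=1$ are hypothetical choices, not objects from this paper):

```python
import math

def apply(A, v):
    # apply a 2x2 matrix (nested tuples) to a plane vector
    return (A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1])

def norm(v):
    return math.hypot(v[0], v[1])

# Hypothetical linear model: Df = diag(1/4, 4), E the horizontal axis,
# F the graph of phi_x : E -> E^perp with ||phi_x|| = 0.3.
A = ((0.25, 0.0), (0.0, 4.0))
E = (1.0, 0.0)
phi = 0.3
F = (1.0 / norm((1.0, phi)), phi / norm((1.0, phi)))

# Domination property with ell = 1: ||Df|_E|| <= (1/2) ||Df|_F||
rate_E = norm(apply(A, E))
rate_F = norm(apply(A, F))
assert rate_E <= 0.5 * rate_F

# Uniform angle via ||phi_x|| = tan(angle(E, F)) from the remark above
angle = math.atan(phi)
assert angle > 0.2
```

Here $E$ is the horizontal axis, so the angle condition reduces to the graph description of Remark \ref{rmk:angles}.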
However, when critical points appear, it is natural to expect that the invariance property is affected. Indeed, let $T_xM=E(x)\oplus F(x)$ be a splitting satisfying the properties above. It should be noted that if $x \in \mathrm{Cr}(f)$, then $E(x)$ must be the kernel of $Df$ at $x$. Otherwise, either $Df_x(F(x))=\{0\}$ or $E(f(x))=F(f(x))$, contradicting the definition above. Thus, a natural extension of the dominated splitting definition to endomorphisms displaying critical points is as follows.
\begin{defi}[Dominated splitting]\label{def-DS}
Let $\Lambda$ be an $f$-invariant subset of $M_f$. We say that $\Lambda$ admits a dominated splitting for $f$ if there exist $\alpha>0,\, \ell \geq 1,$ and two one-dimensional bundles $E$ and $F$ such that for every $(x_i)_i \in \Lambda$ and $i \in \mathbb{Z}$ the following hold:
\begin{itemize}
\item[-] \textbf{Invariance of the splitting:}
\begin{align*}
& Df(E(x_i))\subseteq E(f(x_i)) \ \ \text{and} \ \ Df(F(x_i))=F(f(x_i));\\ & \text{and} \ \ T_{x_i}M=E(x_i)\oplus F(x_i);
\end{align*}
\smallskip
\item[-] \textbf{Uniform angle:} $\varangle(E(x_i),F(x_i))\geq \alpha$;
\smallskip
\item[-] \textbf{Domination Property:}
\begin{align}\label{eq-DS-property}
\|Df^{\ell} \mid_{E(x_i)}\|\leq \frac{1}{2}\|Df^{\ell}\mid_{F(x_i)}\|.
\end{align}
We denote the domination property by $E \prec_{\ell} F$.
\end{itemize}
\end{defi}
If $\Lambda=M_f$, we say $M_f$ admits a dominated splitting or simply $f$ has a dominated splitting. Unless specified, we use $T_{\Lambda}M=E\oplus F$ to denote the splitting over $\Lambda$ and for each point $(x_i)_i \in \Lambda$ the splitting $E\oplus F$ over the orbit $(x_i)_i$ means that
$$\bigsqcup_{i=-\infty}^{\infty} T_{x_i}M= \bigsqcup_{i=-\infty}^{\infty} E(x_i)\oplus F(x_i).$$
\begin{defi}[Partially hyperbolic]
We say that an endomorphism $f$ displaying critical points is partially hyperbolic if there exist two
one-dimensional bundles $E$ and $F$ satisfying:
\medskip
\begin{itemize}
\item[-]\textbf{Dominated splitting:} $TM_f=E\oplus F$ is a dominated splitting for $f$;
\medskip
\item[-]\textbf{Unstable direction:} there exist $k \geq 1$ and $\lambda>1$ such that for all $(x_i)_i \in M_f$ and $i \in \mathbb{Z}$,
\begin{align}\label{ph-eq}
\|Df^{k}\mid_{F(x_i)}\|\geq \lambda.
\end{align}
\end{itemize}
\end{defi}
It should be noted that, up to taking an iterate, we can assume that $k\geq 1$ in \eqref{ph-eq} and $\ell\geq 1$ in \eqref{eq-DS-property} coincide.
\subsection{Fundamental properties of dominated splitting}
Here we present some fundamental properties of dominated splitting.
The uniqueness of dominated splittings is known for endomorphisms without critical points; for details, see \cite{Crovisier-Potrie}. The next proposition guarantees the uniqueness of the splitting even in the setting with critical points.
\begin{prop}[Uniqueness]\label{uniqueness}
The dominated splitting $T_{\Lambda} M=E\oplus F$ is unique. That is, if $\Lambda$ admits two dominated splittings $E\oplus F$ and $G\oplus H$ for $f$, we must have $E(x_i)=G(x_i)$ and $F(x_i)=H(x_i)$ for all $(x_i)_i \in \Lambda$ and $i \in \mathbb{Z}$.
\end{prop}
\begin{proof}
For every $(x_i)_i$ in $\Lambda$, we can consider two cases.
\medskip
\textit{Case I:} the orbit $(x_i)_i$ satisfies $x_i \notin \mathrm{Cr}(f)$ for every $i \in \mathbb{Z}$.
\smallskip
In this situation, the proof follows from the same arguments as in the invertible case and can be found in \cite{Crovisier-Potrie}.
\smallskip
\textit{Case II:} the orbit $(x_i)_i$ satisfies $x_i \in \mathrm{Cr}(f)$ for some $i \in \mathbb{Z}$.
\smallskip
Here, without loss of generality, we can assume that $x_0 \in \mathrm{Cr}(f)$ and $x_i \notin \mathrm{Cr}(f)$ for every $i\neq 0$. Thus, one has that $E(x_0)=\ker(Df_{x_0})=G(x_0)$ and that $Df_{x_i}$ is an isomorphism for $i\neq 0$, which implies $E(x_i)=G(x_i)$ for $i\leq 0$ and $F(x_i)=H(x_i)$ for $i \geq 1$. In particular, it should be noted that $T_{x_i}M$ admits the two dominated splittings $E(x_i)\oplus F(x_i)$ and $E(x_i)\oplus H(x_i)$ for the invertible map $Df$ along $(x_i)_{i\leq 0}$, and the two dominated splittings $E(x_i)\oplus F(x_i)$ and $G(x_i)\oplus F(x_i)$ for the isomorphism $Df$ along $(x_i)_{i\geq 1}$. Therefore, repeating the argument of the invertible case, one concludes that $E(x_i)=G(x_i)$ and $F(x_i)=H(x_i)$ for every $i \in \mathbb{Z}$.
\end{proof}
The following result shows the continuity of the dominated splitting. Moreover, it proves that a dominated splitting can be extended to the closure.
\begin{prop}[Continuity and extension to the closure]\label{continuity&extension}
The map
\begin{align}\label{EF-map}
\Lambda \ni (x_i)_i \longmapsto \, E(x_0)\oplus F(x_0)
\end{align}
is continuous. Moreover, it can be extended to the closure of $\Lambda$ continuously.
\end{prop}
\begin{proof}
Let $(x_i^n)_i \to (x_i)_i$ as $n \to \infty$ with $(x_i^n)_i$ and $(x_i)_i$ in $\Lambda$. For each $i$, let $v_i^n$ and $w_i^n$ be unit vectors in $E(x_i^n)$ and $F(x_i^n)$, respectively. Up to a diagonal subsequence, we can assume that $v_i^n$ and $w_i^n$ converge to vectors $v_i$ and $w_i$ spanning subspaces $\widetilde{E}(x_i)$ and $\widetilde{F}(x_i)$, respectively. In particular, using that $f$ is a $C^1$-map and that the angle between $E$ and $F$ is uniformly bounded away from zero, we obtain that $\widetilde{E}\oplus \widetilde{F}$ is a dominated splitting over $(x_i)_i$. Thus, by the uniqueness of the dominated splitting (Proposition \ref{uniqueness}), we get that
$\widetilde{E}(x_i)=E(x_i)$ and $\widetilde{F}(x_i)=F(x_i)$, so the limit does not depend on the choice of the subsequences of $(v_i^n)_n$ and $(w_i^n)_n$. Therefore, the map in \eqref{EF-map} is continuous.
By the continuity of the map in \eqref{EF-map} and the uniqueness of the dominated splitting, we can extend $E\oplus F$ to the closure of $\Lambda$ by taking limits of $E$ and $F$.
\end{proof}
\begin{rmk}\label{proj-E}
It is well-known and follows from the proof of uniqueness that if $E\oplus F$ is a dominated splitting, then the subbundle $E$ only depends on the forward orbits. That is, for every $(x_i)_i$ and $(y_i)_i$ in $\Lambda$, one has for every $i \geq 0$ that
$$E(x_i)=E(y_i), \ \ \text{whenever} \ \ x_0=y_0.$$
In particular, by Proposition \ref{continuity&extension}, the subbundle induced by $E$ on $M$ is continuous which, by slight abuse of notation, we also denote by $E$.
\end{rmk}
The following is a characterization of partial hyperbolicity which will be useful for proving Theorem \ref{thm-A}. The proof is a standard argument.
\begin{prop} \label{ph-end}
Let $TM_f=E\oplus F$ be a dominated splitting for $f$. There exist $\ell_0\geq 1$ and $\lambda>1$ such that for every $(x_i)_i \in M_f$ with $x_0=x$, there is $1\leq k \leq {\ell}_0$, with $k$ depending on $(x_i)_i$, so that
\begin{align}
\|Df^k\mid_{F(x)}\|\geq \lambda
\end{align}
if, and only if, $f$ is a partially hyperbolic endomorphism.
\end{prop}
\begin{proof} Since $M_f \ni (x_i)_i \mapsto F(x_0)$ is continuous and $M_f$ is compact, we can take $C=\min\{\|Df^j\mid_{F(x_0)}\|:1\leq j \leq \ell_0-1 \,\, \text{and} \,\, (x_i)_i \in M_f\}>0$.
Given $\ell \geq 1$, decompose the orbit segment of length $\ell$ into $k$ consecutive blocks, each of length at most $\ell_0$, on which the hypothesis provides expansion at least $\lambda$, followed by a remainder of length $r$ with $0 \leq r \leq \ell_0-1$. Since $F$ is one-dimensional, the norms multiply along the orbit, so
\begin{align*}
\|Df^{\ell}\mid_{F(x_0)}\|\geq C\lambda^k.
\end{align*}
Hence, taking $\ell$ large enough that $\lambda_0:=C\lambda^{k}>1$, we have that $\|Df^{\ell}\mid_{F(x_0)}\|\geq \lambda_0$. The converse is clear.
\end{proof}
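The compounding mechanism in this proof can be illustrated numerically (a toy sketch; the rates, $\ell_0=2$, and $\lambda=1.2$ are hypothetical): although single iterates may contract, every window of $\ell_0$ iterates expands, and since $F$ is one-dimensional the norms multiply along the orbit.

```python
import math

# Hypothetical per-iterate rates ||Df|_{F(x_i)}|| along one orbit; some single
# iterates contract, but every window of l0 = 2 iterates expands by lam = 1.2.
rates = [0.9, 1.5, 0.8, 1.6, 0.9, 1.5, 0.8, 1.6]
l0, lam = 2, 1.2

# every block of l0 consecutive iterates expands by at least lam
for i in range(0, len(rates), l0):
    assert math.prod(rates[i:i + l0]) >= lam

# hence Df^l restricted to F expands uniformly once l is large
total = math.prod(rates)
assert total >= lam ** (len(rates) // l0)
```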
\subsection{Cone-criterion}
Here, we present the equivalence between the definitions of dominated splitting and partial hyperbolicity in terms of cone-field and invariant splitting.
Let $E$ be a $Df$-invariant continuous subbundle of $TM$. For every $x \in M$, we define the following cone-field $\mathscr{C}_{E}$ on $M$ with core $E$ of length (angle) $\eta >0$,
\begin{align}
\mathscr{C}_{E}(x,\eta)=\{u_1+u_2 \in E(x)\oplus E(x)^{\perp}:\|u_2\|\leq \eta \|u_1\|\}.
\end{align}
By Remark \ref{rmk:angles} it can be rewritten as:
$$\mathscr{C}_{E}(x,\eta)=\{v \in E(x)\oplus E(x)^{\perp}:\varangle(E(x),\mathbb{R}\langle v\rangle)\leq \eta\},$$
where $\mathbb{R}\langle v\rangle$ denotes the subspace generated by $v$. The \textit{dual cone} of $\mathscr{C}_{E}(x,\eta)$ is the cone-field given by
\begin{align}
\mathscr{C}_{E}^{\ast}(x,\eta)=\{u_1+u_2 \in E(x)\oplus E(x)^{\perp}:\|u_1\|\leq \eta^{-1} \|u_2\|\}.
\end{align}
In other words, $\mathscr{C}_{E}^{\ast}(x,\eta)$ is the closure of $T_xM\backslash \mathscr{C}_{E}(x,\eta)$. The interior of $\mathscr{C}_{E}$ is given by $$\mathrm{int}(\mathscr{C}_{E}(x,\eta))=\{u_1+u_2 \in E(x)\oplus E(x)^{\perp}:\|u_2\| < \eta \|u_1\|\}\cup \{0\},$$ and the interior of the dual cone is defined analogously.
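As a quick sanity check of these notions (a sketch with a hypothetical diagonal derivative; $E$ is taken to be the horizontal axis of the plane), a map that expands the direction transversal to $E$ more than $E$ itself sends the dual cone strictly inside itself, which is the mechanism behind the cone-criterion:

```python
def in_dual_cone(v, eta):
    # C*_E(x, eta) = { u1 + u2 : ||u1|| <= eta^{-1} ||u2|| }, E horizontal
    u1, u2 = v
    return abs(u1) <= (1.0 / eta) * abs(u2)

eta = 0.5
A = (0.5, 2.0)                 # hypothetical Df = diag(1/2, 2): F dominates E

images = []
for s in (1.0, -1.0):
    v = (s / eta, 1.0)         # boundary vectors of the dual cone
    w = (A[0] * v[0], A[1] * v[1])
    images.append(w)
    assert in_dual_cone(w, eta)                    # invariance of the dual cone
    assert abs(w[0]) < (1.0 / eta) * abs(w[1])     # strictly inside the interior
```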
By Remark \ref{proj-E}, the subbundle induced on $M$ by the subbundle $E$ of a dominated splitting $E\oplus F$ for $f$ over $M_f$ is well-defined. Then, we consider the cone-field $\mathscr{C}_E$ on $M$ and state the following equivalence between the definitions of dominated splitting in terms of cone-fields and of invariant splittings.
\begin{prop}\label{cone-criterion}
$E\oplus F$ is a dominated splitting for $f$ if, and only if,
the cone-field $\mathscr{C}_E$ on $M$ is invariant and transversal to the kernel.
\end{prop}
\begin{proof}
Since the angle between $E$ and $F$ is bounded away from zero, there is a number $\alpha >0$ such that for every $x \in M$, the direction $F$ is not contained in $\mathscr{C}_E(x,\alpha)$. In particular, we have that $F(x_i) \subseteq \mathscr{C}_E^{\ast}(x_i,\alpha)$ for every $(x_i)_i\in M_f$. The domination property implies that, for $k\geq 1$ large enough, the direction $Df^k(E^{\perp}(x_i))$ gets close to $F(f^k(x_i))$ and $\|Df^k\mid_{E^{\perp}(x_i)}\|\approx \|Df^k\mid_{F(x_i)}\|$. Hence, one can fix $k\geq 1$ so that $Df^k(\mathscr{C}_E^{\ast}(x_i,\alpha)) \subseteq \mathrm{int}(\mathscr{C}_E^{\ast}(f^k(x_i),\alpha))$. This proves one implication; the proof of the converse can be found in \cite[Section 2]{Crovisier-Potrie}.
\end{proof}
\begin{rmk}
It should be emphasized that the existence of a dominated splitting is an open property in the $C^1$ topology. That is, if $f$ admits an invariant cone-field $\mathscr{C}$ transversal to the kernel, then there exists a neighborhood $\mathcal{U}$ of $f$ in $\mathrm{End}^1(M)$ such that the cone-field $\mathscr{C}$ is invariant and transversal to the kernel for each $g \in \mathcal{U}$ as well.
\end{rmk}
The following result shows the equivalence between the definitions of partially hyperbolic endomorphisms in terms of cone-field and invariant splitting. The proof can be found in \cite{Crovisier-Potrie}.
\begin{prop}\label{ph-cone-criterion}
$f$ is a partially hyperbolic endomorphism if, and only if,
there exists an unstable cone-field $\mathscr{C}^u$ on $M$.
\end{prop}
\begin{rmk}
Partial hyperbolicity, as well as dominated splitting, is an open property. In other words, the existence of unstable cone-fields is a property shared by all nearby endomorphisms. Recall that $\mathscr{C}^u$ is an unstable cone-field for $f$ if it
is invariant, transversal to the kernel, and satisfies the unstable property; that is, there are $\ell\geq 1$ and $\lambda>1$ such that $ \|Df^{\ell}_x(v)\|\geq \lambda\|v\|$ for all $x\in M$ and $v \in \mathscr{C}^u(x)$; see
properties $[\mathrm{PH1}]$ and $[\mathrm{PH2}]$ in Section \cref{intro}.
\end{rmk}
\section{Proof of Main Theorem}\label{sec:main-thm}
In this section we prove the \hyperref[main-theo]{Main Theorem} stated in Section \Cref{intro}.
Since the existence of an invariant cone-field and that of a dominated splitting are equivalent (see Proposition \ref{cone-criterion} in Section \Cref{section-ph}),
the \hyperref[main-theo]{Main Theorem} can be proved using the notion of dominated splitting in terms of an invariant splitting. Before starting the proof, let us give the precise construction of the invariant splitting over the set $\Lambda_f$ in \eqref{Lambda-set}, previously defined
in Section \cref{intro}.
\subsection{Construction of the invariant splitting}\label{construction-lambda}
Let $f$ be a transitive endomorphism with $\mathrm{int}(\mathrm{Cr}(f))\neq \emptyset$ and $|\ker Df^n|= 1$ for every $n\geq 1$. Let $M_f$ be the inverse limit space for $f$, recalling that $M_f=\{(x_i)_i \in M^{\mathbb{Z}}: f(x_i)=x_{i+1}, \, \forall i\in \mathbb{Z} \}$. It is known that the action of $f$ on $M_f$ (the shift map on $M_f$) is a transitive homeomorphism (for further details, see \cite[Theorem 3.5.3]{AH}) and that $\pi_0^{-1}(\mathrm{Cr}(f))$ has nonempty interior. Consequently, there is a residual set of points in $M_f$ whose backward and forward orbits are dense and intersect $\pi_0^{-1}(\mathrm{Cr}(f))$ infinitely many times in the past and in the future. Therefore, $\Lambda_f$ is a dense subset of $M_f$, where $\Lambda_f$ is defined by \eqref{Lambda-set} in Section \cref{intro}. From now on, we refer to this set as the ``lambda-set'' to avoid explicit reference to the endomorphism.
Let us recall the candidate for the dominated splitting for $f$ over $\Lambda_f$ introduced in Section~\cref{intro}. First, given $(x_i)_i \in \Lambda_f$, we denote by $\tau^{-}_{j}$ and $\tau^{+}_j$ the last time in the past at which the orbit leaves the set of critical points and the first time at which it enters the set of critical points, respectively. More precisely, for every $(x_i)_i \in \Lambda_f$,
\begin{align}\label{n-past-future}
\begin{split}
\tau^{-}_{j}&=\max\{i<0: x_{j+i} \in \mathrm{Cr}(f) \,\, \text{and} \ \ f(x_{j+i}) \notin \mathrm{Cr}(f)\}; \ \ \text{and} \\
\tau^{+}_{j}&=\min\{i\geq 0:x_{j+i} \in \mathrm{Cr}(f)\}.
\end{split}
\end{align}
Thus, we define the subbundles of $TM_f$ over $\Lambda_f$ as:
\begin{align}\label{EF-splitting}\tag{$\ast \ast$}
E(x_j)=\ker \Big(Df^{^{\tau^{+}_j+1}}_{_{x_j}}\Big)
\ \ \text{and} \ \ F(x_j)=
\mathrm{Im}\Big(Df^{^{|\tau^{-}_j|}}_{_{x_{\tiny{\tau^{-}_j}}}}\Big).
\end{align}
Since $|\ker Df^n|= 1$ for all $n\geq 1$, we have that $E$ and $F$ are one-dimensional subbundles. Furthermore, it should be noted that if $x_j$ in $(x_i)_i$ is a critical point, then $E(x_j)=\ker(Df_{x_j})$. In general, $E$ and $F$ at $x_j$ can be thought of as the kernel of $Df^{\tau^{+}_j+1}_{x_j}$, where $\tau^{+}_j$ is the time the forward orbit of $x_j$ takes to enter $\mathrm{Cr}(f)$, and the image of $Df^{|\tau^{-}_j|}_{x_{\tau^{-}_j}}$, where $|\tau^{-}_j|$ is the time the backward orbit of $x_j$
takes to return to $\mathrm{Cr}(f)$, respectively.
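The return times in \eqref{n-past-future} can be computed directly on any finite window of an orbit. The following sketch (with a hypothetical pattern of critical points stored as boolean flags) evaluates $\tau^{+}_0$ and $\tau^{-}_0$; the subbundles would then be $E(x_0)=\ker Df^{\tau^{+}_0+1}_{x_0}$ and $F(x_0)=\mathrm{Im}\, Df^{|\tau^{-}_0|}_{x_{\tau^{-}_0}}$ as in \eqref{EF-splitting}.

```python
# Hypothetical finite orbit window: crit[i] is True iff x_i lies in Cr(f).
crit = {-5: False, -4: True, -3: True, -2: False, -1: False,
         0: False,  1: False,  2: True,  3: False}

def tau_plus(j):
    # tau^+_j = min{ i >= 0 : x_{j+i} in Cr(f) }
    return min(i for i in range(0, 9) if crit.get(j + i, False))

def tau_minus(j):
    # tau^-_j = max{ i < 0 : x_{j+i} in Cr(f) and f(x_{j+i}) = x_{j+i+1} not in Cr(f) }
    return max(i for i in range(-9, 0)
               if crit.get(j + i, False) and not crit.get(j + i + 1, True))

assert tau_plus(0) == 2      # x_2 is the first critical point in the future
assert tau_minus(0) == -3    # x_{-3} in Cr(f) while x_{-2} is not
```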
\begin{figure}[!h]
\centering
\includegraphics[scale=0.7]{EF-splitting}
\caption{$E,F$ subbundles over $\Lambda_f$.} \label{EF-splitting-picture}
\end{figure}
\begin{rmk}
It is interesting to notice that, by its definition, $E(x_j)$ depends only on the forward orbit $(x_{j+i})_{i\geq 0}$, while $F(x_j)$ depends on the backward orbit $(x_{j+i})_{i\leq 0}$.
\end{rmk}
We would like to emphasize that no assumption is made on the periodic points. The assumptions that $\mathrm{int}(\mathrm{Cr}(f))\neq \emptyset$ and that $f$ is transitive serve to ensure that $\Lambda_f$ is a nonempty dense subset of $M_f$. We can now state the following result.
\begin{thm}\label{key-thm}
Let $f \in \mathrm{End}^1(M)$ be an endomorphism such that $|\ker(Df^n)|=1$ for every $n\geq 1$ and $\Lambda_f \neq \emptyset$. Then
\begin{itemize}
\item either $E\oplus F$ is a dominated splitting for $f$ over $\Lambda_f$;
\item or $f$ can be approximated by endomorphisms having full-dimensional kernel.
\end{itemize}
\end{thm}
Assuming the previous theorem, we can prove the \hyperref[main-theo]{Main Theorem}.
\begin{proof}[Proof of Main Theorem]
Let us assume that $f$ cannot be approximated by endomorphisms having full-dimensional kernel. That is, there exists a neighborhood $\mathcal{U}_f$ of $f$ satisfying
\begin{align}\label{eq-assumption}\tag{$\star$}
\forall g \in \mathcal{U}_f,\, n\geq 1, \ \ |\ker(Dg^n)|=\max_{x \in M}|\ker(Dg^n_x)|\leq 1.
\end{align}
Then, by Theorem \ref{key-thm}, we have that $T_{\Lambda_f}M_f=E\oplus F$ is a dominated splitting for $f$. Moreover, using that $\Lambda_f$ is dense in $M_f$ together with Proposition \ref{continuity&extension}, we can extend the dominated splitting to the whole of $M_f$, concluding the proof of the \hyperref[main-theo]{Main Theorem}.
\end{proof}
\subsection{Proof of Theorem \ref{key-thm}} \label{pf-key-thm}
We start by proving the invariance of the splitting.
\begin{lemma}[Invariance of the splitting]\label{invariance}
For every $(x_i)_i \in \Lambda_f$, the following holds:
\begin{align}\label{EF-invariant}\tag{\,I\,}
\begin{split}
& Df(E(x_i))\subseteq E(x_{i+1}) \ \ \text{and} \ \ Df(F(x_i))=F(x_{i+1});\\
& \text{and} \ \ \bigsqcup_{i=-\infty}^{\infty} T_{x_i}M=\bigsqcup_{i=-\infty}^{\infty} E(x_i)\oplus F(x_i).
\end{split}
\end{align}
\end{lemma}
\begin{proof}
The proof follows immediately from the definition of $E$ and $F$ together with the fact that $Df_{x_j}$ is an isomorphism for $x_j \notin \mathrm{Cr}(f)$ and $\tau^\pm_{j+1}=\tau^{\pm}_j-1$.
\end{proof}
We would like to emphasize that Lemma \ref{invariance} above holds for any endomorphism $g \in \mathrm{End}^1(M)$ such that $|\ker(Dg^n)|=1$ for all $n \geq 1$ and whose associated set $\Lambda_g$, given as in \eqref{Lambda-set}, is nonempty.
\smallskip
Before proceeding, let us recall the following classical tool in $C^1$-perturbative arguments due to Franks in \cite{Franks}.
\begin{lemma*}[Franks]\label{Franks-lemma}
Let $f \in \mathrm{End}^1(M)$. Given a neighborhood $\mathcal{U}$ of $f$, there exist $\varepsilon >0$ and a neighborhood $\mathcal{U}'$ of $f$ contained in $\mathcal{U}$ so that, given $g \in \mathcal{U}'$, a finite collection of points $\Sigma=\{x_0,...,x_n\}$ in $M$, and a family of linear maps $L_i:T_{x_i}M \to T_{g(x_i)}M$ such that $\|L_i-Dg_{x_i}\|<\varepsilon$ for each $0\leq i \leq n$, there exist an endomorphism $\tilde{g} \in \mathcal{U}$ and a family of balls $B_i$ centered at $x_i$ contained in a neighborhood $B$ of $\Sigma$ satisfying:
$\tilde{g}(x)=g(x)$ for $x \in M\backslash B$; $\tilde{g}(x_i)=g(x_i)$; and $\tilde{g}\mid_{B_i}=L_i$, for each $0\leq i \leq n$.
\end{lemma*}
Recall that $\tilde{g}\mid_{B_i}=L_i$ means that the action of $\tilde{g}$ on $B_i$ coincides with the linear map $L_i$.
The following result will be an important tool throughout this paper and will be used several times. Before stating it, let us denote by $E_g$ and $F_g$
the subbundles associated to $g$.
\begin{lemma}\label{Franks-conseq}
For every neighborhood $\mathcal{U}$ of $f$, there exist $\varepsilon>0$ and a neighborhood $\mathcal{U}' \subseteq \mathcal{U}$ of $f$ such that for any $g \in \mathcal{U}'$ with $\Lambda_g \neq \emptyset$ the following holds: suppose that for some $(x_i)_i \in \Lambda_g$ there exist a subset $\Gamma=\{x_0,...,x_{n-1}\}$ and linear maps $L_i: T_{x_{i-1}}M \to T_{x_i}M$ satisfying
$$\|L_i-Dg_{x_{i-1}}\| <\varepsilon, \ \ \text{for each} \ \ 1\leq i \leq n; \ \ \text{and} \ \ L_n \cdots L_1(F_g(x_0))=E_g(x_n).$$
Then, there exist $\tilde{g} \in \mathcal{U}$ and a neighborhood $W$ of $\Gamma$ such that:
\begin{enumerate}[label=$(\roman*)$]
\item $\tilde{g}(x_{i-1})=x_i$ and $D\tilde{g}_{x_{i-1}}=L_i$, for each $1\leq i \leq n$; and $\tilde{g}\mid_{W^c}=g\mid_{W^c}$;
\item there exists $m \geq 1$ such that $|\ker(D\tilde{g}^m)|=2$.
\end{enumerate}
\end{lemma}
\begin{proof}
The existence of $\tilde{g}$ is guaranteed by \hyperref[Franks-lemma]{Franks' Lemma}. Assume, without loss of generality, that $\tau^{-}_0<n \leq \tau^{+}_0$ where $x_{\tau^{-}_0}$ and $x_{\tau^{+}_0}$ belong to $\mathrm{Cr}(g)$ and $$W\cap\{x_{\tau^{-}_0},\dots,x_0,\dots,x_{n-1},\dots,x_{\tau^{+}_0}\}=\Gamma.$$
Then, taking $m=1+|\tau^{-}_0|+\tau^{+}_0$, one has
\begin{align*}
D\tilde{g}^m(T_{x_{\tau^{-}_0}}M)&=Dg^{1+\tau^{+}_0-n} L_n \cdots L_1 Dg^{|\tau^{-}_0|}(F(x_{\tau^{-}_0}))\\
&=Dg^{1+\tau^{+}_0-n} L_n \cdots L_1(F(x_0))=Dg^{1+\tau^{+}_0-n}(E(x_n))=\{0\}.
\end{align*}
In other words, we obtain that $|\ker(D\tilde{g}^m)|=2$.
\end{proof}
It remains to prove that the angle between $E$ and $F$ is uniformly bounded away from zero and that there exists $\ell\geq 1$ such that $E \prec_{\ell} F$ (recall Definition \ref{def-DS}). In order to prove this, we assume that $f$ cannot be approximated by endomorphisms having full-dimensional kernel,
that is, there exists a neighborhood $\mathcal{U}_f$ of $f$ satisfying \eqref{eq-assumption}.
Throughout this section, we fix $\varepsilon>0$ and a neighborhood $\mathcal{U}$ of $f$ contained in $\mathcal{U}_f$ such that the hypotheses of
Lemma \ref{Franks-conseq} are never satisfied.
Further, let us fix an angle $\alpha>0$ small enough that any rotation map $R$ of angle less than $\alpha$ satisfies
$\|R \circ Dg - Dg\|<\varepsilon$ for all $g \in \mathcal{U}$. By slight abuse of notation,
we continue denoting $\mathcal{U}$ by $\mathcal{U}_f$.
The following lemma ensures that the angle between $E$ and $F$ is uniformly bounded away from zero.
\begin{lemma}[Uniform angle]\label{unif-angle}
For every $g \in \mathcal{U}_f$ with $\Lambda_g \neq \emptyset$, we have for every $(x_i)_i \in \Lambda_g$ that
\begin{align}\tag{II}
\varangle (E_g(x_i),F_g(x_i)) \geq \alpha,\ \ \text{for all} \ \ i \in \mathbb{Z}.
\end{align}
\end{lemma}
\begin{proof}
Suppose, by contradiction, that there exist $(x_i)_i \in \Lambda_g$ and $i \in \mathbb{Z}$ such that the angle between $E_g(x_i)$ and $F_g(x_i)$ is less than $\alpha$. Let $R:T_{x_i}M \to T_{x_i}M$ be a rotation of angle less than $\alpha$ such that $R(F_g(x_i))=E_g(x_i)$. Then, $L_i=R \circ Dg_{x_{i-1}}$ satisfies
$$\|Dg_{x_{i-1}} - L_i \|< \varepsilon \ \ \text{and} \ \ L_i(F_g(x_{i-1}))=E_g(x_i),$$
which contradicts the choice of $\mathcal{U}_f$, since the hypotheses of Lemma \ref{Franks-conseq} are never satisfied in $\mathcal{U}_f$. Therefore, for every $(x_i)_i \in \Lambda_g$ and $i \in \mathbb{Z}$, $$\varangle (E_g(x_i),F_g(x_i)) \geq \alpha.$$
\end{proof}
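The rotation estimate behind the choice of $\alpha$ above can be checked numerically: for a planar rotation $R$ of angle $\alpha$ one has $\|R-\mathrm{Id}\|=2\sin(\alpha/2)$, hence $\|R\circ Dg-Dg\|\leq 2\sin(\alpha/2)\|Dg\|$, which is small once $\alpha$ is. A sketch (the matrix standing in for $Dg$ is hypothetical):

```python
import math

def rot(t):
    # rotation of the plane by angle t
    return ((math.cos(t), -math.sin(t)), (math.sin(t), math.cos(t)))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def op_norm(A, samples=720):
    # operator norm estimated over a grid of unit vectors (enough for a sketch)
    best = 0.0
    for n in range(samples):
        t = 2 * math.pi * n / samples
        x, y = math.cos(t), math.sin(t)
        best = max(best, math.hypot(A[0][0]*x + A[0][1]*y, A[1][0]*x + A[1][1]*y))
    return best

A = ((2.0, 1.0), (0.0, 0.5))        # hypothetical derivative Dg at a point
alpha = 0.01                         # small rotation angle
RA = matmul(rot(alpha), A)
diff = tuple(tuple(RA[i][j] - A[i][j] for j in range(2)) for i in range(2))

# ||R o Dg - Dg|| = ||(R - Id) Dg|| <= ||R - Id|| ||Dg|| = 2 sin(alpha/2) ||Dg||
assert op_norm(diff) <= 2 * math.sin(alpha / 2) * op_norm(A) + 1e-9
```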
Note that so far we have proved that for all $g \in \mathcal{U}_f$ with $\Lambda_g$ nonempty, the splitting $E_g\oplus F_g$ over $\Lambda_g$ is invariant and the angle between its subbundles is uniformly bounded away from zero. Finally, let us prove that the invariant splitting constructed above has the domination property.
\begin{lemma}[Uniform domination property] \label{domination-openess}
There exist a neighborhood $\mathcal{U}$ of $f$ contained in $\mathcal{U}_f$ and an integer $\ell >0$ such that for every $g \in \mathcal{U}$ for which $\Lambda_g$ is nonempty, we have for every $(x_i)_i \in \Lambda_g$ that:
\begin{align}\tag{III}
\|Dg^{\ell}\mid_{E_g(x_i)}\|\leq \frac{1}{2}\|Dg^{\ell}\mid_{F_g(x_i)}\|, \ \ \text{for all} \ \ i\in \mathbb{Z}.
\end{align}
\end{lemma}
The proof of the uniform domination property will be given in the next subsection. The lemma above is slightly more general than the domination property alone: it claims that the lower bound on the angle and the domination property are uniform over an open set of endomorphisms whose ``lambda-set'' is nonempty. This will be useful when we prove that any accumulation point of endomorphisms $g \in \mathcal{U}$ with $\Lambda_g$ nonempty satisfies the domination property as well. This concludes the proof of Theorem \ref{key-thm}. \qed
\subsection{Uniform domination property}\label{uniform-DP}
This section is devoted to proving Lemma \ref{domination-openess} (Uniform domination property) stated in the previous section.
Let $f$ be an endomorphism displaying critical points and satisfying the assumption \eqref{eq-assumption}.
Let us highlight that the proof of this lemma does not assume that $\Lambda_f$ is nonempty;
that is, the nonemptiness of $\Lambda_f$ is not a necessary condition for the uniform domination property.
The proof of the uniform domination property is divided into two steps.
The first one establishes the domination property in a neighborhood of the critical points.
In the second step, we extend the domination property obtained in the first step to the ``lambda-set'', concluding the lemma.
\subsubsection*{\uline{First step}}\label{pf-domination-critic}
Domination property near the set of critical points.
\begin{lemma}\label{domination-open}
There exist two neighborhoods $\mathcal{U}'\subseteq \mathcal{U}_f$ of $f$ and $\mathrm{U}$ of $\mathrm{Cr}(f)$ such that for every $g \in \mathcal{U}'$ for which $\Lambda_g$ is nonempty, the following holds: if $(x_i)_i \in \Lambda_g$ satisfies
\begin{align}\label{eq-lemma}
\|Dg^j\mid_{E_g(x_0)}\|\geq \frac{1}{2}\|Dg^j\mid_{F_g(x_0)}\|
\end{align}
for every $1\leq j \leq l$, then $x_0,x_1,\dots, x_{l-1} \notin \mathrm{U}$.
\end{lemma}
The proof requires some preliminary notation and results.
\medskip
Since $\min_{\|v\|=1}\|Df_x(v)\|< \max_{\|w\|=1}\|Df_x(w)\|$ for every $x$ near $\mathrm{Cr}(f)$, we define $V_f(x)$ and $W_f(x)$ as the subspaces of $T_xM$, for $x$ in a neighborhood $\mathrm{U}$ of $\mathrm{Cr}(f)$, satisfying:
\begin{align}\label{new-split}
\|Df\mid_{V_f(x)}\|=\min_{\|u\|=1}\|Df_x(u)\| \ \ \text{and} \ \
\|Df\mid_{W_f(x)}\|=\max_{\|w\|=1}\|Df_x(w)\|.
\end{align}
More generally, we can assume that for every $g \in \mathcal{U}_f$ one has, in $\mathrm{U}$,
$$\min_{\|v\|=1}\|Dg_x(v)\|< \max_{\|w\|=1}\|Dg_x(w)\|;$$
and define $V_g(x)$ and $W_g(x)$ for $x \in \mathrm{U}$, similarly. Furthermore, whenever $\mathrm{Cr}(g) \neq \emptyset,$ $\mathrm{Cr}(g)$ is contained in $\mathrm{U}$, that is, $\mathrm{U}$ is a neighborhood of the critical points for every $g \in \mathcal{U}_f$, and $V_g(x)=\ker(Dg_x)$ for all $x \in \mathrm{Cr}(g)$. On the other hand, note that the following functions:
\begin{align}\label{continuity-VW}
\mathcal{U}_f \ni g \mapsto V_g, \, W_g, \ \ \text{and} \ \ \mathrm{U} \ni x \mapsto V_g(x), \, W_g(x)
\end{align}
are continuous. Moreover, it is well known that $V_g(x)$ and $W_g(x)$ are orthogonal for all $x \in \mathrm{U}$.
Now consider the cone-field $\mathscr{C}_{V_g}$ with core $V_g$ and angle $\eta>0$,
$$\mathscr{C}_{V_g}: \mathrm{U} \ni x \mapsto \mathscr{C}_{V_g}(x,\eta)=\{v+w \in V_g(x)\oplus W_g(x): \|w\|\leq \eta \|v\|\};$$
and recall that $\mathscr{C}_{V_g}^{\ast}(x,\eta)$ is the dual cone-field of $\mathscr{C}_{V_g}(x,\eta)$ defined as the closure of $T_xM\backslash\mathscr{C}_{V_g}(x,\eta)$.
The next result gives an interesting property of the splitting $V_g\oplus W_g$ over $\mathrm{U}$.
\begin{prop}\label{new-splitting}
Given $\eta>0$ and $\theta>0$, there are a neighborhood $\mathcal{U} \subseteq \mathcal{U}_f$ of $f$ and a neighborhood $\mathrm{U}$ of the critical points of every $g \in \mathcal{U}$ such that, for every $g \in \mathcal{U}$ and $x \in \mathrm{U}$,
\begin{align}\label{VW-property}
\varangle(Dg(W_g(x)),\mathbb{R}\langle Dg_x(u)\rangle)<\theta, \, \forall\, u \in \mathscr{C}^{*}_{V_g}(x,\eta).
\end{align}
\end{prop}
\begin{proof}
Since $Df(\mathscr{C}^{*}_{V_f}(x,\eta))=Df(W_f(x))$ for every $x \in \mathrm{Cr}(f)$, by the continuity stated in \eqref{continuity-VW} we can find a neighborhood $\mathcal{U}\subseteq \mathcal{U}_f$ of $f$ and a neighborhood $\mathrm{U}$ of the critical points of every $g \in \mathcal{U}$ verifying \eqref{VW-property}, as desired.
\end{proof}
The next lemma provides the relation between the splittings $E_g\oplus F_g$ and $V_g\oplus W_g$.
Given $\eta>0$ and $\theta >0$ small enough,
consider the neighborhoods $\mathcal{U}$ and $\mathrm{U}$ given in Proposition \ref{new-splitting}.
\begin{lemma}\label{cont-nearby-sing}
For every $g \in \mathcal{U}$ with $\Lambda_g$ well-defined and every $(x_i)_i \in \Lambda_g$ with $x_0=x \in \mathrm{U}$, one has
\begin{align}\label{eq-cone}
F_g(x) \subseteq \mathscr{C}_{V_g}^{\ast}(x,\eta) \ \ \text{and} \ \ E_g(x) \subseteq \mathscr{C}_{V_g}(x,\eta).
\end{align}
In particular, it follows that $\varangle(Dg_x(W_g(x)),F_g(g(x)))<\theta$.
\end{lemma}
\begin{proof}
Without loss of generality, we can assume that $\eta >0$ and $\theta>0$ are small enough that $0<\eta< \alpha$, where $\alpha$ is given by Lemma \ref{unif-angle} (Uniform angle), so that for every $g \in \mathcal{U}_f$ with $\Lambda_g$ nonempty the angle between $E_g$ and $F_g$ is greater than $\alpha$, and so that every rotation $R$ of angle less than $2\eta$ or $2\theta$ satisfies, for every $g \in \mathcal{U}$,
\begin{align}\label{ineq:rotation}
\|R\circ Dg -Dg \|< \varepsilon.
\end{align}
\smallskip
In order to prove \eqref{eq-cone} we will show that:
\begin{enumerate}[label=(\alph*)]
\item $E_g(x)$ and $F_g(x)$ cannot both be contained in $\mathscr{C}_{V_g}(x,\eta)$, nor both in $\mathscr{C}_{V_g}^{\ast}(x,\eta)$;
\item $F_g(x)$ and $E_g(x)$ cannot be contained in $\mathscr{C}_{V_g}(x,\eta)$ and $\mathscr{C}_{V_g}^{\ast}(x,\eta)$, respectively.
\end{enumerate}
\medskip
Let us prove item (a). As the angle between $E_g$ and $F_g$ is greater than $\alpha$, one concludes that both $E_g(x)$ and $F_g(x)$ cannot be contained in $\mathscr{C}_{V_g}(x,\eta)$. On the other hand, if $E_g(x)$ and $F_g(x)$ are contained in $\mathscr{C}_{V_g}^{\ast}(x,\eta)$,
by \eqref{VW-property} we have that
\begin{align*}
\varangle(E_g(g(x)),F_g(g(x)))\leq \varangle(E_g(g(x)),Dg(W_g(x))) +\varangle(Dg(W_g(x)), F_g(g(x)))\leq 2\theta.
\end{align*}
Then, we can take a rotation $R:T_{g(x)}M \to T_{g(x)}M$ such that $$R(F_g(g(x)))=E_g(g(x)) \ \ \text{and} \ \ \|R\circ Dg_x-Dg_x\|<\varepsilon$$
which implies, by Lemma \ref{Franks-conseq}, that some perturbation of $g$ has full-dimensional kernel which is a contradiction. This proves (a).
To prove (b), we suppose that $F_g(x)\subseteq \mathscr{C}_{V_g}(x,\eta)$ and $E_g(x) \subseteq \mathscr{C}_{V_g}^{\ast}(x,\eta)$. Then, we can take $v \in \mathscr{C}_{V_g}^{\ast}(x,\eta)$ such that $\varangle(\mathbb{R}\langle v\rangle, F_g(x))<2\eta$ and
\begin{align*}
\varangle(\mathbb{R}\langle Dg_x(v)\rangle, E_g(g(x)))&\leq \varangle(\mathbb{R}\langle Dg_x(v)\rangle, Dg(W_g(x)))\\ &+\varangle(Dg(W_g(x)), E_g(g(x))) \leq 2\theta,
\end{align*}
and so, there exist two rotations $R_0:T_xM \to T_xM$ and $R_1:T_{g(x)}M \to T_{g(x)}M$ such that
$R_0(F_g(x))=\mathbb{R}\langle v\rangle$ and $R_1(\mathbb{R}\langle Dg_x(v)\rangle)=E_g(g(x))$ with $\|R_{\ast}\circ Dg-Dg\|<\varepsilon$, for $\ast=0,1$, which contradicts the assumption (\ref{eq-assumption}). Therefore, we conclude the proof of (b), and in consequence, of the lemma.
\end{proof}
An immediate consequence of Lemma \ref{cont-nearby-sing} is the following.
\begin{lemma}\label{constant-c0}
Given $\nu >0$ and $\eta >0$ small enough, there exist neighborhoods $\mathcal{U}_f$ of $f$ and $\mathrm{U}$ of $\mathrm{Cr}(f)$ such that for every $g \in \mathcal{U}_f$ for which $\Lambda_g$ is nonempty, one has that for $(x_i)_i \in \Lambda_g$ with $x_0=x \in \mathrm{U}$,
\begin{align}
\|Dg\mid_{E_g(x)}\|< \nu \ \ \text{and} \ \ \|Dg_x(v)\|\geq 2\nu,\, \forall\, v \in \mathscr{C}_{V_g}^{\ast}(x,\eta).
\end{align}
\end{lemma}
Finally, we can prove Lemma \ref{domination-open}.
\begin{proof}[Proof of Lemma \ref{domination-open}]
It follows from Lemma \ref{cont-nearby-sing} that given $\eta>0$ and $\theta>0$, there are neighborhoods $\mathcal{U}$ of $f$ contained in $\mathcal{U}_f$ and $\mathrm{U}$ of $\mathrm{Cr}(f)$ so that for every $g \in \mathcal{U}$ for which $\Lambda_g$ is nonempty and every $(x_i)_i \in \Lambda_g$ with $x_0=x \in \mathrm{U}$, one has that:
\begin{align*}
E_g(x) \subseteq \mathscr{C}_{V_g}(x,\eta) \ \ \text{and} \ \ F_g(x) \subseteq \mathscr{C}_{V_g}^{\ast}(x,\eta).
\end{align*}
In particular, for all $u \in \mathscr{C}_{V_g}^{\ast}(x,\eta)$ we have that $\varangle(\mathbb{R}\langle Dg(u)\rangle, Dg(W_g(x)))<\theta$.
For $(x_i)_i \in \Lambda_g$, we define the cone-field $\mathscr{C}_{E_g,F_g}(x_i,\beta)$ with core $F_g$ and length $\beta>0$ in coordinates $E_g\oplus F_g$ over $(x_i)_i$ by:
$$\mathscr{C}_{E_g,F_g}(x_i,\beta)=\{u_1+u_2\in E_g(x_i)\oplus F_g(x_i):\|u_1\|\leq \beta \|u_2\|\}.$$
Note that the cone-field above is in $E_g\oplus F_g$ coordinates, which may not be an orthogonal splitting. However, since the angle between $E_g$ and $F_g$ is uniformly bounded away from zero, we may suppose that $\eta, \theta >0$ are chosen small enough so that for every $(x_i)_i\in \Lambda_g$ with $x_0 \in \mathrm{U}$ the following hold:
\begin{itemize}
\item[-] for every $v \in \mathscr{C}_{V_g}(x_0,\eta)$ holds $\varangle(\mathbb{R}\langle v \rangle, E_g(x_0))<\beta$;\smallskip
\item[-] for all $u \in T_{x_0}M$ holds:
$$\varangle(\mathbb{R}\langle Dg_{x_0}(u)\rangle,F_g(g(x_0)))<2\theta \Longrightarrow Dg_{x_0}(u) \in \mathscr{C}_{E_g,F_g}(g(x_0),\beta/2).$$
\end{itemize}
Now, assume that $(x_i)_i \in \Lambda_g$ is any point satisfying the equation \eqref{eq-lemma}. That is, $(x_i)_i$ satisfies:
$$\|Dg^j\mid_{E_g(x_0)}\| \geq \frac{1}{2} \|Dg^j\mid_{F_g(x_0)}\|, \, \text{for each} \ \ 1 \leq j \leq l.$$
Hence, Lemma \ref{constant-c0} implies $x_0 \notin \mathrm{U}$. Thus, it remains to show that $x_1,\dots,x_{l-1} \notin \mathrm{U}$. Suppose, by contradiction and without loss of generality, that $x_{l-1} \in \mathrm{U}$. Let $u=u_1+u_2 \in E_g(x_0)\oplus F_g(x_0)$ be a vector in the closure of $TM\backslash \mathscr{C}_{E_g,F_g}(x_0,\beta)$, denoted by $\mathscr{C}_{E_g,F_g}^{\ast}(x_{0},\beta)$; that is, $\|u_1\|\geq \beta \|u_2\|$. Then, $Dg^j(u)=Dg^j(u_1)+Dg^j(u_2) \in E_g(x_j)\oplus F_g(x_j)$ satisfies
\begin{align}\label{eq-final}
\|Dg^j(u_1)\|=\|Dg^j\mid_{E_g(x_0)}\|\|u_1\|\geq \frac{\beta}{2} \|Dg^j\mid_{F_g(x_0)}\|\|u_2\|=\frac{\beta}{2}\|Dg^j(u_2)\|.
\end{align}
In other words, $Dg^j(u)$ does not belong to $\mathscr{C}_{E_g,F_g}(x_j,\beta/2)$ for $1\leq j \leq l$. On the other hand, for every $w \in \mathscr{C}_{V_g}^{\ast}(x_{l-1},\eta)$,
\begin{align*}
\varangle(\mathbb{R}\langle Dg(w)\rangle, F_g(x_{l}))&\leq \varangle(\mathbb{R}\langle Dg(w)\rangle, Dg(W_g(x_{l-1})))\\& + \varangle(Dg(W_g(x_{l-1})), F_g(x_{l}))\leq 2\theta,
\end{align*}
and so $Dg^{l-1}(u)$ does not belong to $\mathscr{C}_{V_g}^{\ast}(x_{l-1},\eta)$, which implies that $Dg^{l-1}(u) \in \mathscr{C}_{V_g}(x_{l-1},\eta)$ and, consequently, $\varangle(\mathbb{R}\langle Dg^{l-1}(u) \rangle, E_g(x_{l-1}))<\beta$. Finally, choosing $u$ on the boundary of the cone, that is, with $\|u_1\|=\beta\|u_2\|$, and assuming that $\beta >0$ is small enough that $\varangle(\mathbb{R}\langle u\rangle,F_g(x_0))$ is small,
there are two rotations $R_1$ and $R_2$ on $T_{x_0}M$ and $T_{x_{l-1}}M$, respectively, such that
$R_1(F_g(x_0))=\mathbb{R}\langle u \rangle,$ $R_2(\mathbb{R}\langle Dg^{l-1}(u) \rangle)=E(x_{l-1})$, and
$\|R_{\ast}\,\circ Dg-Dg\|<\varepsilon$, for $\ast=1,2$.
However, by Lemma \ref{Franks-conseq} and assumption (\ref{eq-assumption}), we get a contradiction. This concludes the proof.
\end{proof}
\subsubsection*{\uline{Second step}} Domination property on the ``lambda-set''.
\smallskip
In order to finish the proof of Lemma \ref{domination-openess} (Uniform domination property), we use the first step to extend the domination property to the corresponding ``lambda-set'' of every nearby endomorphism. First, let us introduce the following auxiliary result, whose proof is omitted and can be found in \cite[Appendix A]{Potrie}.
\begin{lemma}\label{Potriethesis}
Given $\kappa>0$ and $K>0$, there exists $l>0$ such that if $A_1,\dots,A_l$ is a sequence in $GL(2,\mathbb{R})$ and $v, w$ are unit vectors in $\mathbb{R}^2$ verifying
\begin{align}
\max_{1\leq i \leq l}\{\|A_i\|,\|A^{-1}_i\|\}\leq K \ \ \text{and} \ \ \|A_l\cdots A_1(v)\| \geq \frac{1}{2} \|A_l\cdots A_1(w)\|,
\end{align}
then there exist rotations $R_1,\dots,R_l$ of angles less than $\kappa$ such that $$R_lA_l\cdots R_1A_1(\mathbb{R}\langle w \rangle)=A_l\cdots A_1(\mathbb{R}\langle v \rangle).$$
\end{lemma}
Let $\mathcal{U}$ and $\mathrm{U}$ be given by Lemma \ref{domination-open} in the first step. We now prove the following.
\medskip
\noindent
\textbf{Claim 1}: There exists $l\geq 1$ such that for every $(x_i)_i \in \Lambda_g$ there exists an integer $j:=j((x_i)_i), 1 \leq j \leq l,$ such that,
\begin{align}\label{eqclaim1}
\|Dg^j\mid_{E_g(x_0)}\|\leq \frac{1}{2}\|Dg^j\mid_{F_g(x_0)}\|.
\end{align}
\begin{proof}[Proof of Claim 1]
For every $g \in \mathcal{U}$, either $\mathrm{Cr}(g)$ is empty or $\mathrm{Cr}(g)$ is contained in $\mathrm{U}$. Then, there exist $K>0$ with $K\geq \max\{\|Dg_y\|,\|Dg^{-1}_y\|\}$ for every $y \in M\backslash \mathrm{U}$ and every $g \in \mathcal{U}$, and $\kappa>0$ small enough such that any rotation $R$ of angle less than $\kappa$ satisfies: $$\|R\circ Dg -Dg\|<\varepsilon, \ \ \text{for all} \ \ g \in \mathcal{U}.$$
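Such a $\kappa$ can be obtained, for instance, from the elementary estimate for a planar rotation $R_{\kappa}$ of angle $\kappa$, namely $\|R_{\kappa}-\mathrm{Id}\|=2\sin(\kappa/2)$, so that at every $y \in M\backslash\mathrm{U}$ (where the rotations are applied),
\begin{align*}
\|R_{\kappa}\circ Dg_y-Dg_y\|\leq \|R_{\kappa}-\mathrm{Id}\|\,\|Dg_y\|\leq 2\sin(\kappa/2)\,K;
\end{align*}
it then suffices to take $\kappa<2\arcsin(\varepsilon/(2K))$.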
Fix $l_1\geq 1$ as in Lemma \ref{Potriethesis}. Suppose now that (\ref{eqclaim1}) does not hold. In particular, for $l \geq l_1$ there exists $(x_i)_i \in \Lambda_g$ such that
\begin{align}
\|Dg^j\mid_{E_g(x_0)}\|\geq \frac{1}{2}\|Dg^j\mid_{F_g(x_0)}\|,
\end{align}
for every $1\leq j \leq l$. By Lemma \ref{domination-open}, we have that $x_0, x_1, \dots, x_{l-1} \notin \mathrm{U}$, and so, for each $1 \leq j \leq l$, $A_j=Dg_{x_{j-1}}$ verifies the hypotheses of Lemma \ref{Potriethesis}. Consequently, there exist rotations $R_1,\dots,R_{l}$ so that:
$$ \|R_jA_j-A_j\|<\varepsilon, \, 1 \leq j \leq l; \ \ \text{and} \ \ R_lA_l\cdots R_1A_1(F_g(x))=E_g(g^l(x)).$$
By Lemma \ref{Franks-conseq} and the assumption \eqref{eq-assumption}, we get a contradiction. This concludes the proof of Claim 1.
\end{proof}
Fix $l_1\geq 1$ given by Lemma \ref{Potriethesis}; it satisfies Claim 1, that is, for every $g \in \mathcal{U}$ for which $\Lambda_g$ is nonempty and each $(x_i)_i \in \Lambda_g$, there exists an integer $j_0:=j((x_i)_i)$, $1\leq j_0 \leq l_1$, verifying:
\begin{align}\label{eq-key}
\|Dg^{j_0}\mid_{E_g(x_0)}\|\leq \frac{1}{2}\|Dg^{j_0}\mid_{F_g(x_0)}\|.
\end{align}
Finally, in order to conclude the proof of Lemma \ref{domination-openess} (Uniform domination property), it remains to show the following assertion.
\medskip
\noindent
\textbf{Claim 2}: There exists $\ell \geq 1$ such that for every $(x_i)_i \in \Lambda_g$,
\begin{align*}
\|Dg^{\ell}\mid_{E_g(x_0)}\|\leq \frac{1}{2}\|Dg^{\ell}\mid_{F_g(x_0)}\|.
\end{align*}
We emphasize that $\ell$ is uniform in $\mathcal{U}$.
\begin{proof}[Proof of Claim 2]
Assume $\ell$ is greater than $l_1$. Let $j_0=j((x_i)_i)$, $1 \leq j_0 \leq l_1$, be such that:
\begin{align*}
\|Dg^{\ell}\mid_{E_g(x_0)}\|&=\|Dg^{\ell-j_0}\mid_{E_g(x_{j_0})}\|\|Dg^{j_0}\mid_{E_g(x_0)}\|\\
&\leq \frac{1}{2}\|Dg^{\ell-j_0}\mid_{E_g(x_{j_0})}\|
\|Dg^{j_0}\mid_{F_g(x_0)}\|.
\end{align*}
Then, repeating the process $r$ times, we obtain $L_{r}=\ell-\sum_{i=0}^{r-1} j_i$ with $1\leq L_r\leq l_1$ such that:
\begin{align*}
\|Dg^{\ell}\mid_{E_g(x_0)}\| &\leq \biggl(\frac{1}{2}\biggr)^r\|Dg^{L_r}\mid_{E_g(x_{j_{r-1}})}\|
\|Dg^{\ell-L_r}\mid_{F_g(x_0)}\| \\
& \leq \biggl(\frac{1}{2}\biggr)^r C_0\|Dg^{\ell}\mid_{F_g(x_0)}\|,
\end{align*}
where $C_0$ is such that:
\begin{align}\label{c0}
C_0\geq \frac{\max_{x \in M}\{\|Dg^i_x\|:i=1,2, \dots ,l_1\}}
{\min\{2\nu, \min_{\|v\|=1}\{\|Dg(v)\|^{l_1}:x\in M\backslash \mathrm{U}\} \}}
\end{align}
with $\nu>0$ given by Lemma \ref{constant-c0}.
\begin{rmk}\label{contstan-rmk}
The constant $C_0$ is well-defined, since $F_g(x) \subseteq \mathscr{C}_{V_g}^{\ast}(x,\eta)$ in $\mathrm{U}$ and by the fact that $\min_{\|v\|=1}\|Dg_x(v)\|$ is positive in $M\backslash \mathrm{U}$. In particular, we can take $C_0$ uniform for the neighborhood $\mathcal{U}$.
\end{rmk}
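The choice of $\ell$ below can be made explicit. Since $1\leq j_i\leq l_1$ for every $i$ and $1\leq L_r\leq l_1$, we have $\ell\leq (r+1)\,l_1$, that is, $r\geq \ell/l_1-1$; therefore
\begin{align*}
\biggl(\frac{1}{2}\biggr)^{r}C_0\leq \frac{1}{2} \qquad \text{whenever} \qquad \ell\geq l_1\bigl(\log_2 C_0+2\bigr),
\end{align*}
a bound depending only on $l_1$ and $C_0$, which are uniform in $\mathcal{U}$.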
Therefore, taking $\ell\geq 1$ large enough such that $(1/2)^r C_0 \leq 1/2$, we have for each $(x_i)_i \in \Lambda_g$ that:
\smallskip
$\bullet$ for $\tau^{+}_0 \geq \ell$,
$$\|Dg^{\ell}\mid_{E_g(x_0)}\|\leq \frac{1}{2}\|Dg^{\ell}\mid_{F_g(x_0)}\|;$$
$\bullet$ for $\tau^{+}_0 < \ell, \ \ \|Dg^{\ell}\mid_{E_g(x_0)}\|=0$. In particular,
$$\|Dg^{\ell}\mid_{E_g(x_0)}\|=0 \leq \frac{1}{2}\|Dg^{\ell}\mid_{F_g(x_0)}\|.$$
This concludes the proof of Claim 2 and, consequently, of Lemma \ref{domination-openess} (Uniform domination property).
\end{proof}
\section{Consequences of Main Theorem}\label{sec:main-thm-consequences}
In this section we introduce an auxiliary result that will be used to prove Theorem \ref{thm-B} stated in Section \cref{intro}. Both results are proved in this section. Recall that Theorem \ref{thm-B} asserts that the torus and the Klein bottle are the only surfaces that admit robustly transitive endomorphisms. Concretely,
\begin{thm}\label{thm-DS}
If $f \in \mathrm{End}^1(M)$ is a robustly transitive endomorphism displaying critical points, then $M_f$ admits a dominated splitting for $f$.
\end{thm}
Before proving the theorem above, we prove Theorem \ref{thm-B}.
\subsection{Proof of Theorem B}\label{pf-thm-B}
Assuming Theorem \ref{thm-DS}, denote by $E\oplus F$ the dominated splitting over $M_f$ for $f$. Thus, by Remark \ref{proj-E}, we have that $E$ induces a continuous subbundle of $TM$ over $M$, which, by a slight abuse of notation, is denoted by $E$ as well. Let $(\widetilde{M}, p, p^{\ast}(E))$ be the double covering of $E$ over $M$. Hence, since the subbundle $p^{\ast}(E)$ of $T\widetilde{M}$ is orientable, we can define a vector field $X:\widetilde{M} \to T\widetilde{M}$ with $X(x)\in p^{\ast}(E)$ and $X(x)\neq 0$ for every $x$. Therefore, one gets that $\chi(\widetilde{M})=0$, and so, $\chi(M)=0$. Thus, $M$ is either the torus $\mathbb{T}^2$ or the Klein bottle $\mathbb{K}^2$. This completes the proof of Theorem B.\qed
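Let us spell out the Euler characteristic step: a closed surface admitting a nowhere vanishing vector field has zero Euler characteristic, by the Poincar\'e--Hopf theorem, and the Euler characteristic is multiplicative under finite coverings, so
\begin{align*}
0=\chi(\widetilde{M})=2\,\chi(M) \quad \Longrightarrow \quad \chi(M)=0,
\end{align*}
and the torus and the Klein bottle are the only closed surfaces with vanishing Euler characteristic.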
\medskip
We now prove Theorem \ref{thm-DS}.
\subsection{Proof of Theorem $\ref{thm-DS}$}
Let $\mathcal{U}_f \subseteq \mathrm{End}^1(M)$ be a neighborhood of $f$ such that every $g \in \mathcal{U}_f$ is transitive. By
the \hyperref[key obstruction]{Key Lemma} stated in Section~\cref{intro}, we have that every $g \in \mathcal{U}_f$ satisfies the assumption \eqref{eq-assumption}, recalling that this assumption says that $|\ker(Dg^n)|\leq 1$, for all $g \in \mathcal{U}_f$ and $n\geq 1$.
Define $\mathcal{D}$ as the subset of $\mathcal{U}_f$ given by,
\begin{align}
\mathcal{D}=\{g \in \mathcal{U}_f: \mathrm{int}(\mathrm{Cr}(g))\neq \emptyset\}.
\end{align}
Observe that since every $g \in \mathcal{D}$ is transitive and $\mathrm{int}(\mathrm{Cr}(g))\neq \emptyset$,
then $\Lambda_g$ is dense in $M_g$, recall \eqref{Lambda-set} in Section \cref{intro} and \cref{construction-lambda}.
\begin{lemma}\label{typically}
For any $g \in \mathcal{U}_f$ with $\mathrm{Cr}(g)\neq \emptyset$ and any neighborhood $\mathcal{U} \subseteq \mathcal{U}_f$ of $g$, it holds that $\mathcal{D}\cap \mathcal{U}\neq \emptyset$. In particular, $\mathcal{D}$ contains a family of endomorphisms converging to $f$.
\end{lemma}
\begin{proof}
Suppose, without loss of generality, that $\mathrm{int}(\mathrm{Cr}(g))$ is empty and let $p \in \mathrm{Cr}(g)$. By \hyperref[Franks-lemma]{Franks' Lemma}, we can consider a sequence $(B_n)_n$ of neighborhoods of $p$ and a sequence $(g_n)_n$ in $\mathcal{U}$ such that for all $n\geq 1$ the following hold:
\begin{enumerate}
\item[-] $\mathrm{int}(\mathrm{Cr}(g_n))\subseteq B_n$, $D(g_n)_x= Dg_p$ for $x\in \mathrm{int}(\mathrm{Cr}(g_n))$, and the diameter of $B_n$ goes to zero as $n$ goes to infinity;
\item[-] $g_n$ converges to $g$ in ${\rm{End}}^1(M)$, and $\mathrm{int}(\mathrm{Cr}(g_n))\neq \emptyset$ for every $n$;
\item[-] $g_n \mid_{M\backslash B_n}=g$ and $g_n(p)=g(p)$.
\end{enumerate}
This proves the lemma.
\end{proof}
Assume, up to shrinking the neighborhood of $f$, that $\mathcal{U}_f$ satisfies Lemma \ref{domination-openess} (Uniform domination property); recall Section~\cref{sec:main-thm}. Considering the family $(f_n)_n$ in $\mathcal{D}$ converging to $f$
given by Lemma \ref{typically}, the \hyperref[main-theo]{Main Theorem} implies that each $f_n$ admits a dominated splitting $E_n\oplus F_n$ verifying:
\begin{align}
\begin{split}
\exists\, \alpha >0, \,& \ell \geq 1 \ \ \text{such that} \ \ \forall (x_i)_i \in M_{f_n}, i \in \mathbb{Z},\, \text{and} \, \, n \geq 1, \\ &
\varangle(E_n(x_i),F_n(x_i))>\alpha \ \ \text{and} \ \ E_n\prec_{\ell} F_n.
\end{split}
\end{align}
On the other hand, by Lemma \ref{cone-criterion}, we can find a family of cone-fields $\mathscr{C}_{E_n}$ of uniform angle (length) such that $$Df_n^{\ell}(\mathscr{C}_{E_n}^{\ast}(x))\subseteq \mathrm{int}(\mathscr{C}_{E_n}^{\ast}(x)) \ \ \text{and} \ \ T_xM=E_n(x) \oplus \mathscr{C}_{E_n}^{\ast}(x),$$
for all $x \in M$. Furthermore, the uniqueness of the domination property implies that the following limits are well-defined,
\begin{align}
E(x)=\lim E_n(x) \ \ \text{and} \ \ \mathscr{C}_{E}^{\ast}(x)=\lim \mathscr{C}_{E_n}^{\ast}(x).
\end{align}
And so, we have that for every $x \in M$,
$$Df^{\ell}(\mathscr{C}_{E}^{\ast}(x))\subseteq \mathrm{int}(\mathscr{C}_{E}^{\ast}(x)) \ \ \text{and} \ \ T_xM=E(x) \oplus \mathscr{C}_{E}^{\ast}(x).$$
This completes the proof of Theorem \ref{thm-DS}.
\qed
\section{Homotopy classes}\label{homotopy}
This section is devoted to the proofs of Theorem \ref{thm-C} and Corollaries \ref{cor1} and \ref{cor2}, stated in Section \cref{intro}. Let us recall a restated version of Theorem \ref{thm-C}.
\begin{thm-C}
If $f \in \mathrm{End}^1(M)$ is a transitive endomorphism admitting a dominated splitting, then the action of $f$ on the first homology group has at least one eigenvalue with modulus larger than one.
Throughout this section, we denote by $E$ a $Df$-invariant continuous subbundle of $TM$ and $\mathscr{C}$ a continuous invariant cone-field on $M$ transverse to $E$, recalling that if $x \in \mathrm{Cr}(f)$ then $E(x)=\ker(Df_x)$.
To prove Theorem \ref{thm-C}, we first show that the length of the iterates under $f$ of an arc $\gamma$ tangent to the cone-field $\mathscr{C}$ grows exponentially. Afterward, we relate the growth of the iterates of $\gamma$ to the volume growth of the balls containing them, allowing us to conclude the proof of Theorem \ref{thm-C}. Furthermore, the exponential growth of the iterates by $f$ of an arc tangent to the cone-field is the first step toward finding an expanding direction for $f$. This is proved in Section \cref{sec:exp direction}; recall that one of our goals is to prove that $f$ is partially hyperbolic, hence completing the proof of Theorem \ref{thm-A}.
\subsection{Topological expanding direction}
Let us fix some notions that will be used throughout this section.
A {\it{u-arc}} is an injective Lipschitz curve $\gamma:[0,1]\to M$ such that $\gamma' \subseteq \mathscr{C}$, where $\gamma'$ denotes the set of tangent vectors to $\gamma$. In other words, all tangent vectors to $\gamma$ are contained in the cone, and the Lipschitz constant of $\gamma$ is uniformly bounded by the length of the cone.
We denote by $\ell(\gamma)$ the length of $\gamma$.
\begin{thm}\label{thm-u-arc}
For every u-arc $\gamma$, the length of $f^n(\gamma)$ grows exponentially.
\end{thm}
The proof of the theorem above is an adaptation of \cite{Pujals-Sambarino} in our context. For completeness we present here the details.
\begin{defi}
We say that a u-arc $\gamma$ is a $\delta$-$u$-arc provided the following condition holds:
\begin{align}
\ell(f^n(\gamma))\leq \delta,\ \ \text{for every} \ \ n\geq 0.
\end{align}
\end{defi}
In other words, a $\delta$-$u$-arc is a {\it{u-arc}} whose forward iterates have uniformly bounded length.
\medskip
The main idea for proving Theorem \ref{thm-u-arc} is to ensure that there is no $\delta$-$u$-arc. Hence, we will study some consequences in case such arcs exist. The first one is the following.
\begin{lemma}\label{lema-PS}
There exist $ 0<\lambda<1,\, \delta>0, \, C>0$, and $n_0\geq 1$ such that given a $\delta$-$u$-arc $\gamma$ and $x \in f^{n_0}(\gamma)$ one has that:
\begin{align}\label{stable-direction}
\|Df^j\mid_{E(x)}\|<C\lambda^j, \ \
\text{for every} \ \ j\geq 1.
\end{align}
\end{lemma}
\begin{proof}
By the domination property, there exists $\ell \geq 1$ such that
\begin{align}\label{cone-u-arc}
\|Df^{\ell}\mid_{E(x)}\|\leq \frac{1}{2}\|Df^{\ell}(v)\|,
\end{align}
for all $v \in \mathscr{C}(x)$ with $ \|v\|=1,$ and $x \in M$. By continuity, given a small $a>0$ there are $\delta_0, \theta_1 >0$ such that for every $x, y$ with $d(x,y)<2\delta_0$ and $v \in \mathscr{C}(x)$, $w \in \mathscr{C}(y)$ with $\varangle(w,v)<\theta_1$\footnote{The angle between $v$ and $w$ is calculated using the local identification $TM\mid_U=U\times \mathbb{R}^2$.}, the following hold:
\begin{itemize}
\item[-] $\|Df_x(v)\|\geq (1-a)\|Df_y(w)\|$;
\item[-] $\|Df\mid_{E(y)}\|,\|Df\mid_{E(x)}\|< a$, if $x,y \in B(\mathrm{Cr}(f),\delta_0)$; and,
\item[-] $\|Df\mid_{E(x)}\|\leq (1+a) \|Df\mid_{E(y)}\|$, if $y \notin B(\mathrm{Cr}(f),\delta_0)$,
\end{itemize}
where $B(\mathrm{Cr}(f),\delta_0)=\{x \in M: d(x,\mathrm{Cr}(f))<\delta_0\}$, recalling that $\mathrm{Cr}(f)$ is the set of the critical points.
\medskip
Since $\mathscr{C}$ is a continuous $Df$-invariant cone-field, we fix $0<\delta<\delta_0$ and $n_0\geq 1$ large enough so that for every $x,y \in M$ with $d(x,y)\leq \delta$ and $v \in f^{n_0}(\mathscr{C}(x)),\, w \in f^{n_0}(\mathscr{C}(y))$, we have that $\varangle(w,v)<\theta_1$. Thus, taking $\beta>0$ so that $1<(1-a)(1+\beta)<2$, one can obtain for every $\delta$-$u$-arc $\gamma$ and $t \in [0,1]$ that:
\begin{align}\label{u-arc}
\|Df^k\mid_{\mathbb{R}\langle \gamma'_{n_0}(t)\rangle}\|\leq (1+\beta)^k
\end{align}
for $k$ sufficiently large, where $\gamma_n(t)=f^n(\gamma(t))$.
Indeed, assume that $\gamma:=\gamma_{n_0}$ is parametrized by the arc length. Suppose, by contradiction, that there exists a
sequence $(k_j)_{j}$ going to infinity as $j$ goes to infinity such that:
\begin{align*}
\|Df^{k_j}\mid_{\mathbb{R}\langle \gamma'(t_j)\rangle}\|>(1+\beta)^{k_j}.
\end{align*}
Since $\ell(\gamma_n)\leq \delta$, one has that $d(\gamma_n(t),\gamma_n(s))\leq \delta$ for every $t,s\in [0,1]$. Thus, for every $t\in [0,1]$,
\begin{align*}
\|Df^{k_j}(\gamma'(t))\|\geq (1-a)^{k_j}\|Df^{k_j}(\gamma'(t_{j}))\|\geq ((1-a)(1+\beta))^{k_j}.
\end{align*}
In particular, we would have that $\ell(f^{k_j}(\gamma))\geq ((1-a)(1+\beta))^{k_j}\ell(\gamma)$, contradicting that $\gamma$ is a $\delta$-$u$-arc. Hence, the equation \eqref{u-arc} holds.
Finally, choosing $\beta > 0$ such that $1<((1-a)(1+\beta))^{\ell}<2$, it follows that for $k\geq 1$ large enough,
\begin{align}
\|Df^{k\ell}\mid_{E(x)}\|\leq \lambda^{k\ell}, \ \ \text{for all} \ \ x \in \gamma,
\end{align}
where $\lambda \in (0,1)$ is chosen so that $((1-a)(1+\beta))^{\ell}/2<\lambda^{\ell} <1$.
This shows that equation \eqref{stable-direction}
holds.
\end{proof}
Note that $\lambda \in (0,1)$ above can be taken uniformly over every $\delta$-$u$-arc with $0<\delta<\delta_0$.
From now on, we fix $0<\lambda<1$ and $0<\delta<\delta_0$ as in Lemma \ref{lema-PS} and $\lambda'>0$ such that $(1+a)\lambda<\lambda'<1$. Up to replacing $\gamma$ by an iterate, we assume for the next result that $\gamma$ satisfies Lemma \ref{lema-PS}.
The following result provides the existence of the local stable manifold for each point belonging to a $\delta$-$u$-arc $\gamma$.
\begin{lemma}\label{lemma-stable-mfld}
There exists $\alpha>0$ such that for every $x \in \gamma$, there is a unique orientation-preserving curve $\xi_x:(-\alpha,\alpha)\to M$ satisfying:
\begin{equation}\label{eq-solution}
\left\{\begin{array}{ll}
\xi_x'(t) \in E(\xi_x(t)) \ \ \text{with} \ \ \|\xi_x'(t)\|=1;\\
\xi_x(0)=x.
\end{array}\right.
\end{equation}
\end{lemma}
\begin{proof}
Peano's Theorem guarantees the existence of an interval $I=(-\alpha_0,\alpha_0)$ on which at least one solution is defined. Suppose by contradiction that $\xi_0,\xi_1:I \to M$ are two different solutions of \eqref{eq-solution}. For $0<\alpha<\alpha_0$, consider
the set $D_{\alpha}$
of all points $y \in M$ such that there exists a solution $\xi_x$ of \eqref{eq-solution} with $\xi_x(\alpha)=y$. It is easy to see that
$D_{\alpha}$
is a connected compact set in $M$.
Then, by the proof of Lemma \ref{lema-PS},
\begin{align*}
&\|Df\mid_{E(y)}\|,\|Df\mid_{E(x)}\|< a,\, \text{if} \ \ x,y \in B(\mathrm{Cr}(f),\delta_0); \ \ \text{and},\\
&\|Df\mid_{E(x)}\|\leq (1+a) \|Df\mid_{E(y)}\|,\, \text{if} \ y \notin B(\mathrm{Cr}(f),\delta_0).
\end{align*}
Denote by $\mathcal{Q}$ the region bounded by $\xi_0,\xi_1$ and
$D_{\alpha}$
as in Figure \ref{curve-cone}.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.9]{curve-cone}
\caption{Bounded region $\mathcal{Q}$.}\label{curve-cone}
\end{figure}
Since for all $y \in \mathcal{Q}$ there exists a solution $\xi_x$ of \eqref{eq-solution} with $y=\xi_x(t)$ for some $0<t<\alpha$, one has that
\begin{align}\label{diam-Q}
\begin{split}
d(f^n(y),f^n(x))\leq \ell(f^n\circ\xi_x)\mid_{0}^t &\leq \|Df^n\mid_{E(x)}\|(1+a)^n\ell(\xi_x)\\
&\leq C\lambda^n(1+a)^n \alpha \leq C \lambda'^n\alpha.
\end{split}
\end{align}
In other words, the diameter of the set $f^{n}(\mathcal{Q})$ goes to zero as $n$ goes to infinity. Thus, we can choose $n\geq 1$ such that the diameter of $f^n(\mathrm{int}(\mathcal{Q}))$ is small enough so that $f^n(\mathrm{int}(\mathcal{Q}))\subseteq \mathcal{Q}$, which contradicts the transitivity of $f$.
\end{proof}
Finally, for $\varepsilon >0$ small enough, denote the set $\{\xi_x(t): t \in (-\varepsilon,\varepsilon)\}$ by $W^{s}_{\varepsilon}(x)$.
In particular,
\begin{align*}
y \in W^{s}_{\varepsilon}(x) \ \ \Longrightarrow \ \
d(f^n(x),f^n(y)) \to 0, \,\, \text{as} \,\, n \to +\infty.
\end{align*}
\begin{rmk}
It should be noted that $\alpha_0>0$ can be taken uniform in Peano's Theorem. In particular, fixing $\delta_0>0$ and $0<\alpha<\alpha_0$, we have that for every $\delta$-$u$-arc $\gamma$ with $0<\delta<\delta_0$, the size of the local stable manifold can be taken uniform.
By uniqueness of the solution of the equation \eqref{eq-solution} and the invariance of $E$ by $Df$, we have that for $n\geq 1$ large enough,
\begin{align}\label{eq-stable}
f^n(W^s_{\varepsilon}(x))\subseteq W^s_{\varepsilon}(f^n(x)), \forall x \in \gamma.
\end{align}
\end{rmk}
Let us call the following open set a {\it box},
\begin{align}
W^{s}_{\varepsilon}(\gamma)=\bigcup_{x \in \gamma} W^s_{\varepsilon}(x).
\end{align}
The next result characterizes the dynamics of a $\delta$-$u$-arc and ensures that its existence is an obstruction to transitivity. The proof is inspired by \cite[Theorem 3.1]{Pujals-Sambarino} and gives a characterization of the $\omega$-limit of a $\delta$-$u$-arc $\gamma$, denoted by $\omega(\gamma)$. For completeness, we present the proof adapted to our setting.
\begin{thm}\label{PS}
If $\gamma$ is a $\delta$-$u$-arc with $0<\delta \leq \delta_0$, then one of the following properties holds:
\begin{enumerate}[label=$\arabic*.$]
\item $\omega(\gamma) \subseteq \tilde{\beta}$, where $\tilde{\beta}$ is a normally attracting periodic simple closed curve.\\
\item There exists a normally attracting periodic arc $\tilde{\beta}$ such that $\gamma \subseteq W^s_{\varepsilon}(\tilde{\beta})$.\\
\item $\omega(\gamma) \subseteq {\rm{Per}}(f)$, where ${\rm{Per}}(f)$ is the set of periodic points of $f$. Moreover, one of the periodic points is either semi-attracting or attracting (i.e., the set of points $y \in M$ such that $d(f^n(p),f^n(y)) \to 0$ contains an open set of $M$).
\end{enumerate}
\end{thm}
\begin{proof}
Let $\gamma_n:=f^n(\gamma)$. By transitivity of $f$, there exists $n_0\geq 1$ large enough verifying the equation \eqref{eq-stable} and
\begin{align}\label{intesection}
W^{s}_{\varepsilon}(\gamma)\cap W^s_{\varepsilon}(\gamma_{n_0})\not=\emptyset.
\end{align}
Consequently, $W^{s}_{\varepsilon}(\gamma_{_{(k-1)n_0}})\cap W^s_{\varepsilon}(\gamma_{_{kn_0}})\not=\emptyset$.
\medskip
If $\ell(\gamma_{_{kn_0}})$ goes to zero as $k$ goes to infinity, then $\omega(\gamma)$ consists of a periodic orbit.
\medskip
Indeed, if $\ell(\gamma_{kn_0}) \to 0$, then $\ell(\gamma_n) \to 0$ as $n \to \infty$. Let $p$ be an accumulation point of $\gamma_{kn_0}$. That is, there exist a subsequence $(k_j)_j$
and $x \in \gamma$ such that $f^{k_jn_0}(x)\to p$. In particular, as $\ell(\gamma_n)\to 0$, one has
$\gamma_{_{k_jn_0}} \to p$ as $j \to \infty$, and by
\eqref{intesection}, it follows that the limit is independent of the subsequence $(k_j)_j$, and so we have $\gamma_{_{kn_0}} \to p$ as $k \to \infty$. Hence, $\gamma_{_{kn_0+r}} \to f^{r}(p)$ for $0\leq r \leq n_0-1$, implying that $p$ is a periodic point. Thus, for every $x \in \gamma$, we have that $\omega(x)$ consists only of the periodic orbit of $p$. This proves item ($3$).
\medskip
On the other hand, if $\lim_{k\to +\infty}\ell(\gamma_{_{kn_0}})\geq c> 0$, then there exists a subsequence $(k_j)_j$ such that $\gamma_{_{k_j n_0}} \to \beta$, where $\beta$ is an arc which is at least $C^1$ and
tangent to $\mathscr{C}^{\ast}_E$, since
\begin{equation*}
\gamma'(t^-)=\displaystyle\lim_{s \to 0 (s<0)}\frac{\gamma(t+s)-\gamma(t)}{s} \ \ \text{and} \ \
\gamma'(t^+)=\displaystyle\lim_{s\to 0 (s>0)}\frac{\gamma(t+s)-\gamma(t)}{s}
\end{equation*}
belong to $\mathscr{C}^{\ast}_E$, and so
$\displaystyle\lim_{j\to \infty}Df^{k_jn_0}(\gamma'(t^-)) =\displaystyle\lim_{j\to \infty}Df^{k_jn_0}(\gamma'(t^+))$.
Observe that $f^{n_0}(\beta)$ is the limit of $f^{k_jn_0}(\gamma_{n_0})$ and, by \eqref{intesection}, $\beta \cup f^{n_0}(\beta)$ is a $C^1$-curve.
Let
\begin{align}\label{beta}
\tilde{\beta}=\bigcup_{k\geq 0}f^{kn_0}(\beta).
\end{align}
Let us prove that there are two possibilities:
either $\tilde{\beta}$ is an arc or a simple closed curve.
\medskip
In order to prove it, first note that for every $k\geq 0$ the curve $f^{kn_0}(\beta)$ is a $\delta$-$u$-arc. In particular, for each $x \in \tilde{\beta}$
there exists $\varepsilon(x)>0$ such that $W^s_{\varepsilon(x)}(x)$ is the local stable manifold for $x$. Thus, the set
\begin{align*}
W^s(\tilde{\beta})=\bigcup_{x \in \tilde{\beta}} W^s_{\varepsilon(x)}(x)
\end{align*}
is a neighborhood of $\tilde{\beta}$.
Finally, we show that given $x \in \tilde{\beta}$ there exists a neighborhood $B(x)$ of $x$ in $M$ such that $B(x)\cap \tilde{\beta}$ is an arc which implies that $\tilde{\beta}$ is a simple closed curve or an interval.
Let $x \in \tilde{\beta}$. By \eqref{beta}, $x \in f^{k_1n_0}(\beta)$ for some $k_1\geq 0$. Take an open interval $I$ in $f^{k_1n_0}(\beta)$ containing $x$ and a neighborhood $B(x)$ of $x$ with $\mathrm{diam}(B(x))<c/2$ such that
\begin{align*}
B(x) \subseteq W^s(\tilde{\beta}) \ \ \text{and} \ \ B(x)\cap \beta_1 \subseteq I,
\end{align*}
where $\beta_1$ is an interval containing $f^{k_1n_0}(\beta)$ with $\ell(\beta_1)\geq c/2$.
\medskip
Now, we claim that for every $y \in \tilde{\beta}\cap B(x)$, one has that $y \in I$.
\medskip
Indeed, assume, without loss of generality, that $y \in f^{k_2n_0}(\beta)$. Since
\begin{align*}
f^{k_{\ast}n_0}(\beta)=\displaystyle\lim_{j\to \infty} f^{k_jn_0+k_{\ast}n_0}(\gamma) \ \ \text{for} \ \ \ast=1,2,
\end{align*}
and both curves have nonempty intersection with $B(x)$, we conclude that, for some $j$, $f^{k_jn_0+k_1n_0}(\gamma)$ and $f^{k_jn_0+k_2n_0}(\gamma)$ are linked by a local stable manifold. Hence $f^{k_1n_0}(\beta)\cap f^{k_2n_0}(\beta)$ is an arc $\beta'$ tangent to $\mathscr{C}^{\ast}_E$. Therefore, $y \in B(x)\cap \beta' \subseteq \beta_1$. This completes the proof that $\tilde{\beta}$ is an arc or a simple closed curve. Furthermore, since $f^{n_0}(\tilde{\beta})\subseteq \tilde{\beta}$, it follows that for every $x \in \gamma,\, \omega(x)$ is the $\omega$-limit of a point in $\tilde{\beta}$, hence item (1) or item (2) holds, completing the proof of the theorem.
\end{proof}
As a consequence we have the following result.
\begin{cor}\label{corPS}
There is no $\delta$-$u$-arc provided $\delta$ is small enough.
\end{cor}
\begin{proof}
From Theorem \ref{PS} it follows that the $\omega$-limit of a $\delta$-$u$-arc is either a normally attracting periodic simple closed curve, a semi-attracting periodic point, or
a normally attracting periodic arc. In any case, this contradicts the fact that $f$ is transitive.
\end{proof}
\begin{lemma}\label{growing-curve}
For $\delta>0$ small enough, there exists $n_0 \geq 1$ such that for every $u$-arc $\gamma$ with $\delta/2 \leq \ell(\gamma) \leq \delta$, one has that $\ell(f^{n}(\gamma))\geq 2\delta$ for some $0\leq n \leq n_0$.
\end{lemma}
\begin{proof}
Otherwise, there would be a sequence $(\gamma_n)_n$ of $u$-arcs such that for each $n\geq 1$: $$\ell(f^j(\gamma_n))\leq 2\delta \ \ \text{for every} \ \ 1\leq j \leq n.$$
Since $\gamma_n' \subseteq \mathscr{C}$, we have that the Lipschitz constant of $\gamma_n$ is uniformly bounded. In particular, the family $\{\gamma_n\}_n$ is uniformly bounded and equicontinuous. That is,
\begin{itemize}
\item[-] $d(\gamma_n(t),\gamma_n(0))\leq \delta$ for every $t \in [0,1]$ and $n \geq 1$;
\item[-] $\forall \varepsilon>0, \exists \nu>0$ such that for every $n\geq 1$,
$$\forall \,t,s \in [0,1],|t-s|<\nu \Longrightarrow d(\gamma_n(t),\gamma_n(s))<\varepsilon.$$
\end{itemize}
Then, by Arzel{\`a}-Ascoli's Theorem, up to taking a subsequence, $\gamma_n$ converges uniformly to
a limit $\gamma$, which is a $2\delta$-$u$-arc, since $\gamma$ is a Lipschitz curve with $\ell(f^k(\gamma))\leq 2\delta$ and $\gamma' \subseteq \mathscr{C}$, contradicting Corollary \ref{corPS}.
\end{proof}
Finally, we prove the main result of this section.
\begin{proof}[Proof of Theorem $\ref{thm-u-arc}$]
Fix $\delta>0$ small enough and $n_0\geq 1$ as in Lemma \ref{growing-curve}.
Note that there exists $\rho > 0$ such that every $u$-arc $\gamma$ with $\ell(\gamma)$ larger than
$\delta/2$ verifies that $\ell(f^j(\gamma))\geq \rho$, for every $1\leq j\leq n_0$. Otherwise, there exists a sequence of $u$-arcs $\gamma_n$ satisfying $\ell(\gamma_n)\geq \delta/2$ and $\ell(f^{j}(\gamma_n))\!\to\! 0$ as $n \to +\infty$, for some $1\leq j \leq n_0$. Then, by Arzel{\`a}-Ascoli's Theorem, up to taking a subsequence, there exists a $u$-arc $\gamma$ satisfying $\gamma=\lim \gamma_n$ with $\ell(\gamma)\geq \delta/2$ and $\ell(f^j(\gamma))=0$. Therefore, there exists $t \in (0,1)$ such that $\gamma'(t)\in \ker(Df^j)$, which contradicts the fact that $\mathscr{C}$ is transversal to the kernel.
\medskip
Now let us prove that if $\ell(\gamma)\geq \delta$ then $\ell(f^j(\gamma))$ grows exponentially.
\smallskip
By the observation above and Lemma \ref{growing-curve}, there exists $1\leq j_1 \leq n_0$ such that
$\ell(f^{j_1}(\gamma))\geq 2\delta$. Thus, $f^{j_1}(\gamma)$ can be divided into two $u$-arcs, $\gamma_1$ and $\gamma_2,$ each with length larger than $\delta$.
Repeating the process, we get that
$\ell(f^{jn_0}(\gamma))\geq 2^j\rho,$ for every $ j\geq 1$, finishing the proof.
\end{proof}
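As a remark (a restatement, not an addition to the argument), the doubling estimate at the end of the proof can be read as an explicit exponential rate, with $\rho$ and $n_0$ the constants fixed above:

```latex
% The doubling estimate yields growth at exponential rate at least
% (log 2)/n_0 along the subsequence of times jn_0.
\begin{align*}
\ell(f^{jn_0}(\gamma)) \;\geq\; 2^{j}\rho \;=\; \rho\, e^{\lambda\, jn_0},
\qquad \text{with } \lambda \;=\; \frac{\log 2}{n_0}.
\end{align*}
```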
\subsection{Proof of Theorem \ref{thm-C}}
Let us prove the main goal of Section~\ref{homotopy}.
Consider the lift of the structure we have on $M$ to its universal cover $\mathbb{R}^2$. Denote by $\widetilde{E}$ and $\widetilde{\mathscr{C}}$, the lifts of the subbundle $E$ and the cone-field $\mathscr{C},$ respectively.
Assume that $\widetilde{E}$ is orientable.
Let $\tilde{f}:\mathbb{R}^2 \to \mathbb{R}^2$ be the lift of $f$. We use tilde to denote the tangent curves to $\widetilde{\mathscr{C}}$ and points in $\mathbb{R}^2$ as well.
Consider the following balls centered at a curve $\tilde{\gamma}$ and $\tilde{x}$, respectively,
$$B(\tilde{\gamma},\varepsilon)=\{\tilde{y} \in \mathbb{R}^2: d(\tilde{y},\tilde{\gamma})<\varepsilon\}\ \ \text{and} \ \ B(\tilde{x},\varepsilon)=\{\tilde{y} \in \mathbb{R}^2: d(\tilde{x},\tilde{y})<\varepsilon\}.$$
\begin{lemma}\label{area-curve}
There exist $\varepsilon>0$ and $C>0$ such that for every $C^1$ curve $\tilde{\gamma}:[0,1] \to \mathbb{R}^2$ with $\tilde{\gamma}'\subseteq \mathscr{C}^{\ast}_{\widetilde{E}}$,
one has
\begin{align}\label{eq-volume}
area(B(\tilde{\gamma},\varepsilon))\geq C \ell(\tilde{\gamma}).
\end{align}
\end{lemma}
\begin{proof}
First, if $\tilde{\gamma}$ is such that $\tilde{\gamma}'\subseteq \mathscr{C}^{\ast}_{\widetilde{E}}$, then
$\tilde{\gamma}$ is a simple curve. That is, $\tilde{\gamma}:[0,1] \to \mathbb{R}^2$ is injective.
In fact, suppose, without loss of generality, that $\tilde{\gamma}(0)=\tilde{\gamma}(1)$. Let $D$ be a disk such that its boundary $\partial D$ is the curve $\tilde{\gamma}$. Since $\widetilde{E}$ is orientable and transverse to $\partial D$, we may define a non-vanishing vector field on $D$. However, by Poincar{\'e}-Bendixson Theorem, every continuous vector field on $D$ transversal to $\partial D$ has a singularity. Therefore, we cannot have $\tilde{\gamma}(t)=\tilde{\gamma}(s)$ with $t\neq s$ in $[0,1]$.
\smallskip
Second, let us prove that there exists $\varepsilon >0$ so that for any ball $B(\tilde{x},\varepsilon)$ the intersection $B(\tilde{x},\varepsilon)\cap \tilde{\gamma}$ has at most one connected component.
Fix $\varepsilon >0$ small enough such that the tangent curve to $\widetilde{E}$ passing through the
point $\tilde{x}$ divides $B(\tilde{x},\varepsilon)$ into two connected components. This is possible because
$\widetilde{E}$ induces a continuous and bounded vector field on $\mathbb{R}^2$. Now, suppose that
$\tilde{\gamma}(t_1) \in B(\tilde{\gamma}(t_0),\varepsilon)$ for some $0 \leq t_0 < t_1\leq 1$.
Since $\tilde{\gamma}' \subseteq \mathscr{C}^{\ast}_{\widetilde{E}}$, we can take a disk $D$ such that the
distribution $\widetilde{E}$ induces a continuous vector field on $D$ ($D$ is the disk whose boundary
is the union of a tangent curve to $\widetilde{E}$ from $\tilde{\gamma}(t_1)$ to $\tilde{\gamma}(t_0)$ and $\tilde{\gamma}$).
\begin{figure}[!h]
\centering
\includegraphics[scale=0.6]{distrib-E}
\caption{The distribution $\widetilde{E}$ and disk $D$.}\label{distribuicao}
\end{figure}
Then, repeating the same arguments, one gets, by the Poincar{\'e}-Bendixson Theorem, that such a vector field has a singularity on $D,$
which is a contradiction; hence the assertion follows. Therefore, we conclude that there exists $\varepsilon>0$ such that $B(\tilde{x},\varepsilon)\cap \tilde{\gamma}$ has at most one connected component.
\smallskip
Finally, given $L_0\geq 1$ large, up to changing $\varepsilon$, we can assume that every $C^1$ curve tangent to the cone-field with length larger than $L_0$ is not contained in a ball of radius $\varepsilon$. Thus, assume $\ell(\tilde{\gamma})\gg L_0$. Then, consider $k\geq 1$ the largest integer less than or equal to $\ell(\tilde{\gamma})/L_0$ and the set
$\{\tilde{x}_1,...,\tilde{x}_k\}$ contained in $\tilde{\gamma}$ such that the sub-curve $\tilde{\gamma}_j$ of $\tilde{\gamma}$ passing through $\tilde{x}_{j}$ has length $L_0$ and the balls $\{B(\tilde{x}_j,\varepsilon/2)\}_j$ are pairwise disjoint. Thus, we have that:
\begin{align*}
area(B(\tilde{\gamma},\varepsilon))\geq \displaystyle\sum_{1 \leq j\leq k} area(B(\tilde{x}_j,\varepsilon/2))
\geq C_0 \frac{\ell(\tilde{\gamma})}{2L_0},
\end{align*}
where $C_0$ is the area of the ball of radius $\varepsilon/2$. Therefore, taking $C=C_0/(2L_0)$, equation \eqref{eq-volume} follows.
\end{proof}
Now we are able to prove Theorem \ref{thm-C}, which is inspired by \cite[Theorem 1.1]{BBI04}.
\begin{proof}[Proof of Theorem \rm{\ref{thm-C}}]
It is well known that there exists a unique square matrix $A$ with integer entries such that $\tilde{f}=A+\phi$, where $\phi$ is a $\pi_1(M)$-periodic map, that is, $\phi(\tilde{x}+v)=\phi(\tilde{x})$
for every $v \in \pi_1(M)$ and $\tilde{x} \in \mathbb{R}^2$. Assume by contradiction that the absolute values of all the eigenvalues of $A$ are less than or equal to one. Thus, the diameters of the images of every compact set under the iterates of $\tilde{f}$ grow sub-exponentially.
Let $B_n$ be a ball centered at $\tilde{x}_n \in \tilde{\gamma}_n$ with radius
equal to the length of $\tilde{\gamma}_n$ plus $\varepsilon$, where $\tilde{\gamma}_n$ is the image by $\tilde{f}^n$ of a $C^1$-curve $\tilde{\gamma}$ with
$\tilde{\gamma}'\subseteq \mathscr{C}^{\ast}_{\widetilde{E}}$, and $\varepsilon>0$ is given by Lemma \ref{area-curve}.
Since the length
of $\tilde{\gamma}_n$ grows sub-exponentially, the area of $B_n$ grows sub-exponentially. Since $B_n$ contains the neighborhood $B(\tilde{\gamma}_n,\varepsilon)$ of $\tilde{\gamma}_n$, Lemma \ref{area-curve} gives:
$$area(B_n)\geq area(B(\tilde{\gamma}_n,\varepsilon))\geq C\ell(\tilde{\gamma}_n).$$
However, Theorem \ref{thm-u-arc} implies that the length of $\tilde{\gamma}_n$ grows exponentially, a contradiction. Thus, $A$ must have at least one eigenvalue with modulus larger than one.
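Schematically (a sketch, assuming $\ell(\tilde{\gamma})\geq \delta$; here $C$ is the constant from Lemma \ref{area-curve} and $\rho$, $n_0$ are as in the proof of Theorem \ref{thm-u-arc}), the two bounds being played against each other are:

```latex
% Exponential lower bound versus sub-exponential upper bound on area(B_n),
% along the times n = jn_0 (lengths in the universal cover agree with
% lengths in M, since the covering map is a local isometry):
\begin{align*}
C\rho\, 2^{j} \;\le\; C\,\ell(\tilde{\gamma}_{jn_0})
\;\le\; area(B(\tilde{\gamma}_{jn_0},\varepsilon))
\;\le\; area(B_{jn_0}),
\end{align*}
```

while, under the contradiction hypothesis, $area(B_{jn_0})$ grows sub-exponentially in $j$, which is incompatible with the left-hand side.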
\end{proof}
Finally, we prove Corollaries \ref{cor1} and \ref{cor2} stated in Section~\ref{intro}.
\begin{proof}[Proof of Corollary \rm{\ref{cor1}}]
The proof follows immediately from the fact that partially hyperbolic endomorphisms admit an unstable cone-field, which implies that the lengths of the iterates of any $C^1$ arc tangent to the unstable cone-field grow exponentially. Thus, we can repeat the same argument as in the proof of Theorem \ref{thm-C}.
\end{proof}
\begin{proof}[Proof of Corollary \rm{\ref{cor2}}]
Let $f \in \mathrm{End}^1(M)$ be a robustly transitive endomorphism.
Note that $f$ either admits a dominated splitting or not.
Assuming $f$ admits a dominated splitting, then Theorem \ref{thm-C} implies that $f$ is homotopic to a linear map having at least one eigenvalue with modulus larger than one, proving our assertion.
On the other hand, if $f$ does not admit a dominated splitting, Theorem \ref{thm-DS} implies that the set of its critical points is empty, and then $f$ is a local diffeomorphism. Observe that if $f$ is approximated by an endomorphism which admits a dominated splitting, then Theorem \ref{thm-C} implies that $f$ is homotopic to a linear map having at least one eigenvalue with modulus larger than one. Hence, it can be assumed that $f$ is a robustly transitive endomorphism (a local diffeomorphism) which has no dominated splitting in a robust way, that is, there exists a neighborhood $\mathcal{W}$ of $f$ in $\mathrm{End}^1(M)$ such that no $g \in \mathcal{W}$ admits a dominated splitting. By \cite[Theorem 4.3]{LP}, $f$ is volume expanding. Therefore, using the same arguments as in the proof of Theorem \ref{thm-C}, if the absolute values of all the eigenvalues of $A$, as in Theorem \ref{thm-C}, are less than or equal to one, then the area of the ball grows sub-exponentially, contradicting that $f$ is volume expanding and finishing the proof.
\end{proof}
\section{Expanding direction}\label{sec:exp direction}
Let $f \in \mathrm{End}^1(M)$ be a robustly transitive endomorphism displaying critical points. By Theorem \ref{thm-DS}, $f$ admits a dominated splitting. That is, there exist $\ell \geq 1$ and a splitting $E\oplus F$ of $TM$ so that for all $(x_i)_i \in M_f$ and $i \in \mathbb{Z}$,
\begin{itemize}
\item[-] $Df(E(x_i))\subseteq E(f(x_i)) \ \ \text{and} \ \ Df(F(x_i))=F(f(x_i));$
\item[-] the angle between $E$ and $F$ is uniformly bounded away from zero; and,
\item[-] $\|Df^{\ell}\mid_{E(x_i)}\|\leq \frac{1}{2}\|Df^{\ell}\mid_{F(x_i)}\|.$
\end{itemize}
Or equivalently, there exist a $Df$-invariant continuous subbundle $E$ of $TM$ and a $Df$-invariant continuous cone-field $\mathscr{C}: x \in M \mapsto \mathscr{C}(x,\eta)$ transverse to $E$. Recall that Proposition \ref{cone-criterion} shows that both notions of dominated splitting, ([PH1]) in Section~\ref{intro} and Definition \ref{def-DS} in Section~\ref{section-ph}, are equivalent.
This section is devoted to finishing the proof of Theorem \ref{thm-A}.
Recalling Theorem \ref{thm-u-arc}, we have proved so far that the lengths of the iterates of any arc $\gamma$ tangent to $\mathscr{C}$ grow exponentially.
Thus, it remains to prove the existence of a real number $\lambda>1$ such that $\|Df^{\ell}\mid_{F(x_i)}\|\geq \lambda$ for all $(x_i)_i \in M_f$ and $i \in \mathbb{Z}$, that is, to show that $F$ is a uniformly expanding subbundle.
In order to prove the previous assertion, we assume by contradiction that $F$ is not expanding and use the domination property to prove that for every $u$-arc $\gamma$ the subbundle $E$ along $\gamma$ is contracting (recall that $\gamma$ is an arc tangent to $\mathscr{C}(x,\eta)$). Then, taking a small box $W(\gamma)$, we have that its iterates expand in the cone direction and contract in the $E$ direction.
Finally, we use the fact that the iterates of the box intersect it infinitely many times to create, up to a perturbation, a sink, contradicting that the map is robustly transitive.
\smallskip
Let us denote by $\gamma_x:[-1,1] \to M$ a $u$-arc of $C^1$ class with $\gamma_x(0)=x \in M$, endowed with the order induced by $[-1,1]$. It follows directly from Peano's Theorem and the fact that $E$ locally induces a non-vanishing vector field transverse to the cone-field $\mathscr{C}$ that, for all $y \in \gamma_x$, there exists $\xi_y:(-\alpha,\alpha)\to M$ such that $\xi_y'(t)\in E(\xi_y(t))$ with $\|\xi_y'(t)\|=1$ for all $t \in (-\alpha,\alpha)$ and $\xi_y(0)=y$. Let $\ubar{\xi},\bar{\xi}:(-\alpha,\alpha) \to M$ be two tangent curves to the subbundle $E$ with $\ubar{\xi}(0)=\gamma_x(-1)$ and $\bar{\xi}(0)=\gamma_x(1)$.
Define a \textit{$\nu$-box} centered at $\gamma_x$ with $\ubar{\xi}$ and $\bar{\xi}$ as the bottom and top of the box, respectively, by:
\begin{align}
W_{\nu}(\gamma_x,\ubar{\xi},\bar{\xi})=\left\{\xi_y(t) \in M \left|
\begin{array}{ccc} \ubar{\xi}\leq \xi_y(t) \leq \bar{\xi} \,\, \text{for all} \, \, |t|\leq \nu
\end{array}\right.\right\},
\end{align}
where $\ell(\xi_y\mid_{[0,\pm\nu]})=\nu$ since $\xi_y$ is parameterized by arc length, and $\ubar{\xi}\leq z \leq \bar{\xi}$ means that for every $u$-arc $\gamma_z$ with length larger than $\ell(\gamma_x)$, one has that $\ubar{\xi}(t),\, \bar{\xi}(t') \in \gamma_z$ for some $t, \,t' \in (-\alpha,\alpha)$ and $\ubar{\xi}(t)\leq z \leq \bar{\xi}(t')$ in the order induced by $\gamma_z$. For simplicity, we denote by $\partial^{-}W_{\nu}$ and $\partial^{+}W_{\nu}$ the bottom and the top of the $\nu$-box, respectively, whenever there is no confusion about the center, bottom and top of the $\nu$-box.
\subsection{Existence of a periodic point}
The following result will be used to create a box which is expanding on the cone direction and contracting on the $E$ direction.
\begin{lemma}\label{Existence-pp}
Suppose that there exist $0<\lambda <1$ and $C>0$
such that, for some $x \in M$,
\begin{align}\label{bat-eq}
\|Df^n\mid_{E(x)}\|\leq C\lambda^n,\, \forall n \geq 1.
\end{align}
Then, for every $\nu>0$ small and $N\geq 1$ large, there is a periodic point $p$ of period $l\geq N$ so that $d(f^j(p),f^j(x))<\nu$ for each $0\leq j \leq l-1$.
\end{lemma}
\begin{proof}
Fix $a>0$ so that $(1+a)\lambda < 1$. By continuity of $y \mapsto \|Df\mid_{E(y)}\|$, there is $\nu_0 > 0$ such that:
\begin{itemize}
\item[-] $\|Df\mid_{E(z)}\|<a$ for every $z \in B(\mathrm{Cr}(f),\nu_0)$;
\item[-] $\|Df\mid_{E(y)}\|\leq (1+a)\|Df\mid_{E(z)}\|, \, \forall y,z \in M$ with $d(y,z)<\nu_0$.
\end{itemize}
Thus, whenever $d(f^j(y),f^j(x))\leq \nu_0$ for each $0\leq j \leq n-1$, we have that: $$\|Df^n\mid_{E(y)}\|\leq C((1+a)\lambda)^n.$$
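Spelling out the chain behind this bound (assuming, as in the two-dimensional setting of this paper, that $E$ is a line bundle, so that norms along $E$ multiply over iterates):

```latex
% Chain of estimates behind the displayed bound; the last step uses the
% hypothesis \eqref{bat-eq} at the point x.
\begin{align*}
\|Df^n\mid_{E(y)}\|
&= \prod_{j=0}^{n-1}\|Df\mid_{E(f^j(y))}\|
\;\le\; \prod_{j=0}^{n-1}(1+a)\,\|Df\mid_{E(f^j(x))}\| \\
&= (1+a)^n\,\|Df^n\mid_{E(x)}\|
\;\le\; C\big((1+a)\lambda\big)^n.
\end{align*}
```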
Fix $0<\nu<\nu_0$ small enough and $N\geq 1$ large enough so that for every $u$-arc $\gamma$ with $\ell(\gamma)\geq \nu/2$ one has $\ell(f^n(\gamma))\geq 2^n\rho$, for $n\leq N,$ where $\rho > 0$ is given in the proof of Theorem \ref{thm-u-arc}. Moreover, choose $n_0\geq 1 $ so that $2^{n_0}\rho\geq 2\nu$ and $N\geq n_0$. Since the connected components $\gamma^{-}_x=\gamma_x\mid_{[-1,0]}$ and $\gamma^{+}_x=\gamma_x\mid_{[0,1]}$ of a $u$-arc $\gamma_x$ are $u$-arcs as well, we can assume, without loss of generality, that the lengths of $f^n(\gamma^{\pm}_x)$ are larger than $2\nu_0$ for every $n\geq n_0$. On the other hand, if the $\nu$-box $W_{\nu}(\gamma_x,\ubar{\xi},\bar{\xi})$ is contained in $B(x,\nu_0),$ then for all $\xi_y(t) \in W_{\nu}(\gamma_x,\ubar{\xi},\bar{\xi})$ and all $t\in [0,\nu]$, one verifies that:
\begin{align}\label{eq:xi}
\begin{split}
\ell((f\circ \xi_y)\mid_{[0,t]})& =\int_{0}^{t}\|(f\circ \xi_y)'(s)\|ds \\ & \leq\int_{0}^{t}\|Df\mid_{E(\xi_y(s))}\|\|\xi_y'(s)\|ds\leq C((1+a)\lambda)t, \, \forall t \in [0,\nu].
\end{split}
\end{align}
Denote by $\gamma_n$ the connected component of $f^n(\gamma_x)$ in the ball $B(f^n(x),\nu_0)$ containing $f^n(x)$, and by $W_n$ the $\nu$-box centered at $\gamma_n$, setting $W_0=W_{\nu}$. Moreover, suppose that $0<\nu< \nu_0$ is small enough so that for every $y \in W_n$ and every $u$-arc $\gamma_y$ with $\ell(\gamma_y)\geq 2\nu$ one has $\gamma_y \pitchfork \partial^{\pm}W_n\neq \emptyset$.
We define by induction the following strips:
\begin{align}
D_0=W_{\nu} \ \ \text{and} \ \ D_n=f(D_{n-1})\cap W_n.
\end{align}
Observe that as $E$ is $Df$-invariant, one has that $f\circ \xi_y$, up to parametrizing by arc length, is an arc of the form $\xi_{f(y)}$. Therefore, for every $y \in D_n$ there exists $\xi_0:[0,t_0]\to B(x,\nu_0)$ such that $\xi_0(0) \in \gamma_x$ and $\xi_0(t_0)=y_0$ verifying $f^n(y_0)=y$ and $\xi_i(t)=(f^i\circ \xi_0)(t)$ belongs to $D_i$, for all $t \in [0,t_0]$ and $i=1,...,n$. In particular, $\xi_i(t) \in B(f^i(\xi_0(0)),\nu)$ for all $t \in [0,t_0]$, and so, one has that
\begin{align*}
\ell(\xi_i\mid_{[0,t]}) \leq \int_{0}^{t}\|(f^i\circ \xi_0)'(s)\|ds\leq C((1+a)\lambda)^i t, \, \forall t \in [0,t_0].
\end{align*}
Thus, $\mathrm{diam}(D_n)$ goes to zero as $n$ goes to infinity.
Since $f$ is transitive, we can find $(z_n)_n$ with $z_n$ and $f^{l_n}(z_n)$ converging to $x$ as $n$ goes to infinity. In particular, the $u$-arc $\gamma_{l_n}$ in $D_{l_n}$ verifies $\gamma_{l_n}\pitchfork \partial^{\pm}D_0\neq \emptyset$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{box-recurrence}
\caption{The action of $f$.}
\end{figure}
Roughly speaking, for every $l\geq N$ we have that $f$ acts on $D_l$ expanding in the ``vertical'' direction (cone-field $\mathscr{C}$) and contracting in the ``horizontal'' direction (tangent to $E$).
\smallskip
Therefore, fixing $l_n$ as above, there exists $D_0'\subseteq D_0$ such that $f^{l_n}:D_0' \longrightarrow D_{l_n}$. Then, we can find a periodic point of period $l_n$ as follows:
\begin{itemize}
\item [Step 1.] Repeating the process with $D_0'\cap D_{l_n}$ by $f^{kl_n}$ we obtain a sequence of ``vertical'' boxes $(\Gamma_k)_k$ such that:
$$\Gamma_{1}\supseteq \Gamma_{2}\supseteq \cdots \supseteq \Gamma_{k}\supseteq \cdots$$
where $\bigcap_k \Gamma_k=\Gamma$ is a ``vertical'' curve transverse to the box.
\item[Step 2.] Similarly, we have a sequence of ``horizontal'' boxes $(\Sigma_k)_k$ such that:
$$\Sigma_1 \supseteq \Sigma_{2} \supseteq \cdots \supseteq \Sigma_k \supseteq \cdots$$
where $\bigcap_k \Sigma_k=\Sigma$ is a ``horizontal'' curve.
\item[Step 3.] Therefore, $\Gamma \cap \Sigma=\{p\}$ is a periodic point of period $l_n$.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{periodicpoint-box}
\caption{Existence of a periodic point.}
\end{figure}
Observe that $d(x,p)<\nu$ and $f^i(p)\in D_i$ for each $i=0,..., l_{n}-1$. In particular, $d(f^i(p),f^i(x))<\nu$ for $i=0,\dots,l_{n}-1$, finishing the proof.
\end{proof}
Finally, we prove Theorem \ref{thm-A}.
\subsection{Proof of Theorem A}
In Theorem \ref{thm-DS}, we proved that every robustly transitive endomorphism $f$ displaying critical points
admits a dominated splitting.
Let us denote it by $E\oplus F$. Recall that for any $(x_i)_i \in M_f$, the subbundles $E$ and $F$ at $(x_i)_i$ are denoted by $E(x_0)$ and $F(x_0)$, and the action of $f$ on $M_f$ is the \textit{shift map} on $M_f$, that is, $f^n((x_i)_i)=(x_{n+i})_i$ for $n \in \mathbb{Z}$.
From now on, we will use a classical idea to create sinks using the domination property and vanishing Lyapunov exponent.
Recalling that we wish to prove that $f$ is partially hyperbolic, suppose instead that $f$ is not a partially hyperbolic endomorphism. Then, Proposition \ref{ph-end} implies the existence of $\tau >0$ such that for every $k \in \mathbb{N}$, there exists $(x_i^k)_i \in M_f$ such that for every $1\leq j \leq k$ holds:
\begin{align*}
\|Df^j\mid_{F(x_0^k)}\|\leq 1+\tau.
\end{align*}
For simplicity, let us denote $(x_i^k)_i$ by $x^k$.
Thus, for every $k \in \mathbb{N}$, we define a measure $\mu_k$ on $M_f$ as follows:
\begin{align*}
\mu_k=\dfrac{1}{k}\sum_{j=0}^{k-1} \delta_{f^j(x^k)}
\end{align*}
where $\delta_{f^j(x^k)}$ denotes the Dirac measure at $(x_{j+i}^k)_i$ in $M_f$. Let $\mu$ be an $f$-invariant measure on $M_f$ obtained as an accumulation point of $(\mu_k)_k$. Thus, up to taking a subsequence, it follows that:
$$\int{\Phi d\mu_k} \to \int{\Phi d\mu}, \ \ \text{for all}\; \Phi \; \text{continuous}.
$$
In particular, $\Phi(\cdot)=\log \|Df\mid_{F(\cdot)}\|$ satisfies:
\begin{align*}
\int{\log \|Df\mid_{F}\|d\mu}&=\lim_{k\to \infty} \int{\log \|Df\mid_{F}\|d\mu_{k}}\\
&=\lim_{k\to \infty} \frac{1}{k}\sum_{j=0}^{k-1} \log \|Df\mid_{F(f^j(x^k))}\|\\
&=\lim_{k\to \infty} \frac{1}{k}\log \|Df^{k}\mid_{F(x_0^{k})}\|\\
&\leq \lim_{k\to \infty} \frac{1}{k} \log (1+\tau) =0.
\end{align*}
On the other hand, using Birkhoff's Ergodic Theorem and Poincar{\'e}'s Recurrence Theorem, there is a recurrent point $(x_i)_i \in M_f$ such that:
\begin{align*}
\lim_{k\to \infty} \frac{1}{k}\sum_{i=0}^{k-1} \log \|Df\mid_{F(x_i)}\|\leq 0.
\end{align*}
Therefore, for every $\varepsilon >0$ there exists $k_0\geq 1$ such that:
\begin{align}\label{eq-lyap-1}
\|Df^k\mid_{F(x_0)}\|=\prod_{i=0}^{k-1}\|Df\mid_{F(x_i)}\|\leq e^{k\varepsilon},\, \text{for all}\; k \geq k_0.
\end{align}
Since $E\oplus F$ is the dominated splitting for $f$, we have that there exists $C>0$ such that
for every $ (x_i)_i \in M_f$ and $i \in \mathbb{Z}$ hold that:
\begin{align}\label{eq-lyap-2}
\|Df^k\mid_{E(x_i)}\|\leq C\left(\frac{1}{2}\right)^{k}\|Df^k\mid_{F(x_i)}\|, \, \text{for all}\; k\geq 1.
\end{align}
In particular, choosing $\varepsilon>0$ small enough so that $\lambda_0=e^{\varepsilon}/2<1$, we get by equations \eqref{eq-lyap-1} and \eqref{eq-lyap-2} that:
\begin{align*}
\|Df^k\mid_{E(x_0)}\|\leq C\lambda_0^k, \, \text{for all}\; k \geq k_0.
\end{align*}
In other words, up to changing the constant $C>0$, we have that,
\begin{align}
\|Df^k\mid_{E(x_0)}\|\leq C\lambda_0^k, \, \text{for all}\; k \geq 1.
\end{align}
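Explicitly, one admissible choice of the enlarged constant is:

```latex
% Enlarging C so that the estimate valid for k >= k_0 extends to all k >= 1:
\begin{align*}
C' \;=\; \max\Big\{\, C,\ \max_{1\leq k < k_0} \|Df^{k}\mid_{E(x_0)}\|\, \lambda_0^{-k} \Big\},
\end{align*}
```

so that $\|Df^{k}\mid_{E(x_0)}\|\leq C'\lambda_0^{k}$ holds for every $k\geq 1$.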
Therefore, applying Lemma \ref{Existence-pp}, there exists a periodic point $p$ of period $k$ large enough so that the eigenvalues of $Df^k$ at $p$ in modulus are at most $e^{k\varepsilon}\approx (1+\tau)$.
On the other hand, if we consider $L_p:\oplus_{i=0}^{k-1}T_{f^i(p)}M \to \oplus_{i=0}^{k-1}T_{f^i(p)}M$ defined by
$$L_p(v_0,v_1,\dots,v_{k-1})=(Df(v_{k-1}),Df(v_0),\dots, Df(v_{k-2})),$$
we have that
$$\omega \ \ \text{is an eigenvalue of} \ \ L_p \Longleftrightarrow \omega^k \ \ \text{is an eigenvalue of} \ \ Df^k_p.$$
Suppose, without loss of generality, that $\omega$ is the eigenvalue with maximum modulus and it satisfies $1-|\omega|^{-1}<\varepsilon$, where $\varepsilon$ is small enough. Then, by \hyperref[Franks-lemma]{Franks' Lemma}, there exists
a perturbation $h$ of $f$ such that $h^i(p)=f^i(p)$ and $Dh=(|\omega|^{-1}-\varepsilon)Df$ at $f^i(p)$. Hence, $p$ is a sink for $h$, which contradicts the transitivity of $h$ (recall that $f$ is robustly transitive). This proves Theorem \ref{thm-A}. \qed
\subsection*{Acknowledgments:}
The authors are grateful to E. Pujals, L. Mora, R. Potrie, and S. Luzzatto for
useful and encouraging conversations and suggestions. The authors
are also grateful for the nice environment provided by IMPA, PUC-Rio, UdelaR, UFBA, UFAL and ICTP
during the preparation of this paper. The first author was partially supported with CNPq-IMPA funds and ULA-Venezuela, and the second author by CNPq-IMPA, UFAL and ICTP.
\bibliographystyle{alpha}
\section{Introduction}
Wyner \cite{wyner1975} introduced the notion of the wire-tap channel
(Fig.~\ref{fig:wiretapChannel}) in 1975:
Alice wants to communicate a message $W \in \{1,\dots,M\}$ to Bob through a
communication channel $\mathsf{V}: \mathcal{X} \rightarrow \mathcal{Y}$. Eve also has access to
what Alice transmits via a \emph{wire-tapper}'s channel $\mathsf{W}: \mathcal{X} \rightarrow
\mathcal{Z}$ and the aim of Alice is to keep the message hidden from her while
maximizing the rate of information transmitted to Bob, $R \triangleq \frac1n
\log M$.
\begin{figure}[htb]
\centering
\scalebox{0.75}{\input{figures/wiretapChannel}}
\caption{The Wire-Tap Channel}
\label{fig:wiretapChannel}
\end{figure}
To this end, Alice encodes $W$ as a codeword $\mathbf{X} \in \mathcal{X}^n$ and sends it via
$n$ consecutive uses of the channel. Bob observes the output sequence of $\mathsf{V}$,
$\mathbf{Y} \in \mathcal{Y}^n$, and estimates $W$ given $\mathbf{Y}$. On the other
side, Eve has access to $\mathbf{Z} \in \mathcal{Z}^n$ (the output sequence of $\mathsf{W}$), and
attempts to make an inference about $W$.
Wyner (in the case when $\mathsf{W}$ is degraded with respect to $\mathsf{V}$) \cite{wyner1975}
and later Csisz\'ar{} and K\"orner{} (in the more general context of $\mathsf{V}$ being
more capable than $\mathsf{W}$) \cite{csiszar1978} showed that, given any input
distribution $P_X$, Alice can communicate reliably to Bob at any rate $R$ up to
\begin{equation}
I(X;Y) - I(X;Z), \label{eq:highestRate}
\end{equation}
(when $(X,Y) \sim P_X(x) \mathsf{V}(y|x)$ and $(X,Z) \sim P_X(x) \mathsf{W}(z|x)$) while
keeping the rate of information leaked to Eve about $W$ as small as desired;
i.e., guaranteeing
\begin{equation}
\frac1n I(W;\mathbf{Z}) \le \epsilon, \label{eq:weakSecrecy}
\end{equation}
for any $\epsilon > 0$, using sufficiently large $n$.
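As a concrete numerical illustration (an aside, not part of the paper's development): for the standard degraded example in which $\mathsf{V}$ and $\mathsf{W}$ are binary symmetric channels with crossover probabilities $p$ (Bob) and $q$ (Eve), $q > p$, and the input $X$ is uniform, the rate \eqref{eq:highestRate} evaluates to $h(q)-h(p)$, with $h$ the binary entropy function:

```python
# Illustration (assumption: uniform input X over a pair of binary symmetric
# channels): I(X;Y) = 1 - h2(p) and I(X;Z) = 1 - h2(q), so the achievable
# secrecy rate I(X;Y) - I(X;Z) equals h2(q) - h2(p), positive whenever q > p.
from math import log2

def h2(p: float) -> float:
    """Binary entropy function, in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)

def bsc_secrecy_rate(p: float, q: float) -> float:
    """I(X;Y) - I(X;Z) = (1 - h2(p)) - (1 - h2(q)) = h2(q) - h2(p)."""
    return h2(q) - h2(p)

if __name__ == "__main__":
    rate = bsc_secrecy_rate(0.1, 0.2)
    print(f"secrecy rate: {rate:.3f} bits per channel use")
```

Here the noisier Eve's channel is relative to Bob's, the larger the rate at which Alice can communicate secretly.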
Wyner's measure of secrecy allows one to investigate the trade-off between
the message rate and the information leakage rate but is too weak from the
security point of view; even if the amount of information Eve learns about the
message $W$ normalized to the number of channel uses vanishes asymptotically,
the amount itself can grow unboundedly as the block-length increases. Therefore,
it is natural to remove the normalization factor in \eqref{eq:weakSecrecy} and
ask for \emph{strong secrecy}:
\begin{equation}
I(W; \mathbf{Z}) \le \epsilon.
\end{equation}
Maurer and Wolf showed that the highest achievable rate \eqref{eq:highestRate}
under \emph{strong secrecy} requirement does not change \cite{maurer2000}.
Classical achievability constructions \cite{wyner1975,massey1983} are based on
associating each message $w \in \{1,\dots,M\}$ with a sub-code of size $M' =
\exp(nR')$ and transmitting a randomly chosen codeword from that sub-code to
communicate $w$. The reliability of the code is ensured by keeping the total
rate $R' + R$ below $I(X;Y)$. Furthermore, by varying the rate $R'$
from $0$ to $I(X;Z)$, the upper-bound on the information leakage rate,
$\frac1nI(W;\mathbf{Z})$, is controlled. Particularly, by choosing the rate $R'$ just
\emph{below} $I(X;Z)$, weak secrecy is established.
An alternative way to approach the secrecy problem is to establish secrecy
through \emph{channel resolvability} \cite{bloch2013,hayashi2006,hou2014}.
Given an input distribution $P_X$ that induces the distribution $P_Z$ at the
output of a channel $\mathsf{W}: \mathcal{X} \to \mathcal{Z}$, a code of rate $I(X;Z)$ or larger
chosen from the i.i.d.\ $P_X$ random coding ensemble will, with high
probability, induce an output distribution that approximates $P_Z^n$ when
the index of the transmitted codeword is chosen uniformly at random
\cite{hayashi2006,hou2013,wyner1975CI,han1993,pierrot2013}.
For any fixed message $w \in \{1,\dots,M\}$ the output of Eve's channel has
distribution $P_{\mathbf{Z}|W=w}$. It is not difficult to see that the secrecy is
guaranteed if $P_{\mathbf{Z}|W=w}$ `well approximates' the product distribution
$P_Z^n$ by setting the sub-codes' rate $R'$ just \emph{above}
$I(X;Z)$. In particular, if we measure the quality of
approximation by asking the unnormalized Kullback-Leibler divergence between
$P_{\mathbf{Z}|W=w}$ and $P_Z^n$ to be small, \emph{strong secrecy} will be
established. Indeed, in \cite{hayashi2006,hou2014} it has been
shown that the information leakage, $I(W;\mathbf{Z})$ will be exponentially small in
$n$ provided that $R'$ is above $I(X;Z)$.
\begin{definition}
Given $R$, $R'$ and $\mathsf{W}$, a number $E$
is a \emph{secrecy exponent} for the wire-tapper channel
$\mathsf{W}$, if there exists a sequence of reliable coding schemes of rate
$R$, requiring the entropy rate $R'$ at the encoder, for which
$\displaystyle\liminf_{n\to\infty}
\textstyle -\frac{1}{n} \log[I(W;\mathbf{Z})] \ge E$.
\end{definition}
In \cite{hayashi2006,hou2014} the secrecy exponent is derived using i.i.d.\
random coding ensemble. More specifically, each message $w
\in \{1,\dots,M\}$ is associated with a sub-code whose codewords are
independently (and independent of the codewords of the other sub-codes) sampled
from the i.i.d.\ random coding ensemble. The exponent is derived by
upper-bounding the ensemble-expectation of $D(P_{\mathbf{Z}|W} \Vert P_Z^n | P_W)$
and then concluding that there exists a sequence of codes in the ensemble using
which the information leakage decays at least as fast as $\mathbb{E}[D(P_{\mathbf{Z}|W}
\Vert P_Z^n | P_W)]$ does. The secrecy exponent of Hou and Kramer in
\cite{hou2014} is derived based on their resolvability proof of
\cite[Section~III-A]{hou2013} which is simple but results in a small exponent.
However, by applying the method described in \cite[Section~III-B]{hou2013} to
the wire-tap channel setting a larger exponent can be obtained which is equal to
that of Hayashi in \cite{hayashi2006}.
In \cite{hayashi2011}, Hayashi uses \emph{privacy amplification} to improve the
secrecy exponent based on a different construction than those of
\cite{hayashi2006,hou2013,hou2014}. In addition to a code of size $M M'$, whose
codewords are sampled independently from the i.i.d.\ random coding ensemble, a
hash function is sampled from the ensemble of universal hash functions from
$\{1,\dots,M M'\}$ to $\{1,\dots,M\}$ and revealed to Alice, Bob, and Eve. A
message $m \in \{1,\dots,M\}$ is communicated by sending a randomly chosen
codeword from the code and, then, mapping the index of the sent codeword, using
the hash function, to an element of $\{1,\dots,M\}$. The expected information
leakage (where the expectation is taken over both i.i.d.\ random coding
\emph{and} universal hash functions ensembles) is then upper-bounded to show
that the exponent of the bound is a secrecy exponent.
In this paper, we derive an exponentially decaying upper-bound on
$\mathbb{E}[D(P_{\mathbf{Z}|W=w} \Vert P_Z^n)]$, where the expectation is taken over the
i.i.d.\ random coding ensemble (i.e., the construction used in
\cite{hayashi2006,hou2013,hou2014}), by analyzing the deviations of
$P_{\mathbf{Z}|W=w}$ from its mean. It then follows (by standard expurgation
arguments) that for every $\epsilon > 0$ there exists a code of essentially
the same rate $R$, using which $\max_{w} D(P_{\mathbf{Z}|W=w} \Vert P_Z^n) \le
(1+\epsilon) \mathbb{E}[D(P_{\mathbf{Z}|W=w} \Vert P_Z^n)]$. As already noted in
\cite{hou2014}, this is a \emph{worst-case} measure of secrecy in contrast to
$I(W;\mathbf{Z})$ which is an
average-case measure of secrecy. In addition, this shows that our lower-bound on
$\lim_{n \to \infty} -\frac1n \log \mathbb{E}[D(P_{\mathbf{Z}|W=w} \Vert P_Z^n)]$ is a
secrecy exponent. This exponent matches that of \cite{hayashi2011} which is
larger than those of \cite{hayashi2006,hou2013,hou2014}.
\section{Notation}
We use uppercase letters (like $X$) to denote a
random variable and corresponding lowercase version ($x$) for a realization of
that random variable. The boldface letters denote sequences of length $n$. The
$i$-th element of a sequence $\bx$ is denoted as $x_i$.
We denote finite sets by script-style uppercase letters like $\mathcal{S}$. The
cardinality of set $\mathcal{S}$ is denoted by $\abs{\mathcal{S}}$. For a positive integer
$m$, $ \IndexSet{m} \triangleq \{1,2,\dots,m\}$. $\mathbb{R}$ denotes the set of real
numbers and $\bar{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\}$ is the set of \emph{extended}
real numbers. We write $f(n) \doteq g(n)$ (resp.\ $f(n) \mathrel{\dot{\le}} g(n)$) if
$\lim_{n \to \infty} \frac1n \log \frac{f(n)}{g(n)} = 0$ (resp.\ $\le 0$).
We denote the set of distributions on alphabet $\mathcal{X}$ as $\mathcal{P}(\mathcal{X})$.
If $P \in \mathcal{P}(\mathcal{X})$, $P^n \in \mathcal{P}(\mathcal{X}^n)$ denotes the
product distribution $P^n(\bx) \triangleq \prod_{i=1}^n P(x_i)$.
Likewise, if $\mathsf{V}: \mathcal{X} \to \mathcal{Y}$ is a conditional distribution,
$\mathsf{V}^n:\mathcal{X}^n \to \mathcal{Y}^n$ denotes the conditional distribution
$\mathsf{V}^n(\by|\bx) = \prod_{i=1}^n \mathsf{V}(y_i|x_i)$.
We denote the \emph{type} of a sequence $\bx \in \mathcal{X}^n$ by $\type{\bx} \in
\mathcal{P}(\mathcal{X})$ and the \emph{conditional type} of $\by \in \mathcal{Y}^n$ given $\bx
\in \mathcal{X}^n$ by $\condtype{\by|\bx}: \mathcal{X} \to \mathcal{Y}$ (see
\cite[Chapter~2]{csiszar2011IT} for formal definitions).
A distribution $\hat{P} \in \mathcal{P}(\mathcal{X})$ is an \emph{$n$-type} if $n
\hat{P}(x) \in \mathbb{N}_{\ge 0}$ for every $x \in \mathcal{X}$.
We denote the set of $n$-types on $\mathcal{X}$ as $\hat{\mathcal{P}}_n(\mathcal{X}) \subsetneq
\mathcal{P}(\mathcal{X})$ and use the fact that $|\hat{\mathcal{P}}_n(\mathcal{X})| =
O(n^{\abs{\mathcal{X}}})$ \cite[Lemma~2.2]{csiszar2011IT} repeatedly.
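As a small side illustration of this polynomial growth (a sketch, not part of the paper's formal development), the number of $n$-types on an alphabet of size $k$ equals the number of weak compositions of $n$ into $k$ non-negative parts, $\binom{n+k-1}{k-1}$:

```python
from math import comb

# Number of n-types on an alphabet of size k: distributions P with
# n*P(x) a non-negative integer, i.e. weak compositions of n into k parts.
def num_n_types(n: int, k: int) -> int:
    return comb(n + k - 1, k - 1)

# Polynomial growth: |P_n(X)| <= (n + 1)^{|X|}, i.e. O(n^{|X|}).
for n in (10, 100, 1000):
    assert num_n_types(n, 3) <= (n + 1) ** 3
print(num_n_types(10, 3))   # 66 n-types on a ternary alphabet for n = 10
```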
If $\hat P \in \hat{\mathcal{P}}_n(\mathcal{X})$, we denote the set of all sequences of type
$\hat P$ as $\mathcal{T}_{\hat{P}} \subset \mathcal{X}^n$. If ${\hat \mathsf{V}}: \mathcal{X} \to \mathcal{Y}$
is a conditional distribution, the \emph{${\hat \mathsf{V}}$-shell} of $\bx \in
\mathcal{X}^n$ is denoted as $\mathcal{T}_{\hat \mathsf{V}}(\bx) \subset \mathcal{Y}^n$.
\section{Result}
In the rest of the paper $(X,Z) \in \mathcal{X} \times \mathcal{Z}$ denotes the pair of
random variables whose joint distribution is $P_{X,Z}(x,z) = P_X(x) \mathsf{W}(z|x)$
where $P_X$ is a fixed input distribution. For simplicity (and with no
essential loss of generality) we assume that $\supp(P_X) = \mathcal{X}$ and
$\supp(P_Z) = \mathcal{Z}$.\footnote{The second assumption follows from the first
together with the assumption that for every $z \in \mathcal{Z}$ there exists at
least one $x$ such that $\mathsf{W}(z|x) > 0$.}
Following \cite{massey1983} we consider the following random code construction:
for every message $w \in \IndexSet{M}$, a codebook of size $M' \triangleq \exp(n
R')$, denoted by $\mathcal{C}_w$, is constructed by sampling $M'$ codewords,
$\mathbf{X}_{w,w'}, w' \in \IndexSet{M'}$ independently from the product distribution
$P_X^n$. In order to communicate the message $w$, Alice picks $w' \in
\IndexSet{M'}$ uniformly at random and transmits $\mathbf{X}_{w,w'}$. Given such a
construction, for every $w \in \IndexSet{M}$ and $\bz \in \mathcal{Z}^n$, the
conditional output distribution of $\mathsf{W}$ is
\begin{equation}
P_{\mathbf{Z}|W}(\bz|w)= \frac{1}{M'} \sum_{w'=1}^{M'}
\mathsf{W}^n\bigl(\bz|\mathbf{X}_{w,w'}\bigr),
\label{eq:condProbZ}
\end{equation}
which is an average of i.i.d.\ random variables and
\begin{equation}
\mathbb{E} \bigl[P_{\mathbf{Z}|W}(\bz|w)\bigr] = P_Z^n(\bz), \qquad \forall w \in
\IndexSet{M}.
\label{eq:expectedPZW}
\end{equation}
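This unbiasedness is easy to verify by simulation. The sketch below uses a hypothetical binary symmetric wire-tapper channel with small $n$ and $M'$ (all numerical values chosen purely for illustration, not taken from the paper) and averages $P_{\mathbf{Z}|W}(\bz|w)$ over many independently drawn sub-codes, recovering $P_Z^n(\bz)$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.2                                    # hypothetical crossover probability
P_X = np.array([0.5, 0.5])                 # input distribution
W = np.array([[1 - p, p], [p, 1 - p]])     # W[x, z] = W(z|x)
P_Z = P_X @ W                              # output distribution
n, M_prime, trials = 3, 4, 100_000
z = np.array([0, 1, 1])                    # a fixed output sequence

# Sample `trials` independent sub-codes of M' codewords, each drawn i.i.d.
# from P_X^n, and form P_{Z|W}(z|w) = (1/M') sum_{w'} W^n(z | X_{w,w'}).
X = rng.choice(2, size=(trials, M_prime, n), p=P_X)
P_ZgW = np.prod(W[X, z], axis=2).mean(axis=1)

print(P_ZgW.mean(), np.prod(P_Z[z]))       # the two values should agree
```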
\begin{theorem} \label{thm:main}
Using the aforementioned construction, for every $w \in \IndexSet{M}$,
\begin{equation*}
\mathbb{E} \bigl[ D(P_{\mathbf{Z}|W = w} \Vert P_Z^n) \bigr]
\mathrel{\dot{\le}} \exp[-n E_{\rm s}(P_X,\mathsf{W},R')],
\end{equation*}
with
\begin{equation}
E_{\rm s}(P_X,\mathsf{W}, R') = \max_{0 \le \lambda \le 1} \bigl\{
\lambda R' - F_0(P_X,\mathsf{W},\lambda) \bigr\},
\label{eq:generalEs}
\end{equation}
where
\begin{equation*}
F_0(P_X,\mathsf{W},\lambda)
\triangleq
\log \biggl[ \sum_{z \in \mathcal{Z}} P_Z(z)
\sum_{x \in \mathcal{X}} P_{X|Z}(x|z)^{1+\lambda} P_X(x)^{-\lambda} \biggr].
\end{equation*}
\end{theorem}
\begin{remark} $F_0(P_X,\mathsf{W},\lambda)$ is a convex function of $\lambda$ (cf.
Appendix~\ref{app:f0convex}) passing through the origin with the slope
\begin{equation*}
\frac{\partial}{\partial \lambda} F_0(P_X,\mathsf{W},\lambda) \Big|_{\lambda = 0}
= I(X;Z).
\end{equation*}
Hence $E_{\rm s}(P_X,\mathsf{W},R') \ge 0$ with equality iff $R' \le I(X;Z)$.
\end{remark}
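For concreteness, $E_{\rm s}$ in \eqref{eq:generalEs} is straightforward to evaluate numerically. The sketch below (using a hypothetical binary symmetric wire-tapper channel with uniform input, chosen only for illustration) computes $F_0$ and maximises over $\lambda \in [0,1]$ by grid search; it also exhibits the behaviour noted in the remark, namely $E_{\rm s} = 0$ for $R' \le I(X;Z)$ and $E_{\rm s} > 0$ otherwise:

```python
import numpy as np

p = 0.1                                     # hypothetical crossover probability
P_X = np.array([0.5, 0.5])                  # fixed input distribution
W = np.array([[1 - p, p], [p, 1 - p]])      # W[x, z] = W(z|x)

P_XZ = P_X[:, None] * W                     # joint distribution P_{X,Z}
P_Z = P_XZ.sum(axis=0)
P_XgZ = P_XZ / P_Z[None, :]                 # conditional P_{X|Z}

def F0(lam: float) -> float:
    """F_0(P_X, W, lambda) from Theorem 1 (natural logarithms)."""
    inner = (P_XgZ ** (1 + lam) * P_X[:, None] ** (-lam)).sum(axis=0)
    return float(np.log((P_Z * inner).sum()))

def E_s(R_prime: float, grid=np.linspace(0.0, 1.0, 2001)) -> float:
    """Secrecy exponent: max over lambda in [0, 1] of lambda*R' - F_0."""
    return max(lam * R_prime - F0(lam) for lam in grid)

I_XZ = float((P_XZ * np.log(P_XZ / (P_X[:, None] * P_Z[None, :]))).sum())
print(E_s(0.5 * I_XZ))   # ~0: the exponent vanishes for R' <= I(X;Z)
print(E_s(2.0 * I_XZ))   # positive for R' > I(X;Z)
```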
The only random quantity involved in the divergence $D(P_{\mathbf{Z}|W = w} \Vert
P_Z^n)$ is the conditional distribution $P_{\mathbf{Z}|W=w}$ whose expectation is
$P_Z^n$ as shown in \eqref{eq:expectedPZW}. To prove Theorem~\ref{thm:main} we
shall analyze the deviations of the random variables $P_{\mathbf{Z}|W}(\bz|w)$ from
their mean, $P_Z^n(\bz)$.
As an immediate corollary to Theorem~\ref{thm:main} we have:
\begin{corollary} \label{cor:existence}
For any input distribution $P_X$ and a pair of rates $R$ and $R'$,
there exists a reliable code of rate $R$ using which, for any message
distribution $P_W$,
\begin{align*}
P_{\rm e} &\mathrel{\dot{\le}} \exp[-n E_{\rm r}(P_X, \mathsf{V}, R+R')], \\
I(W;\mathbf{Z}) &\mathrel{\dot{\le}} \exp[-n E_{\rm s}(P_X, \mathsf{W},R')],
\end{align*}
where $P_{\rm e}$ denotes the decoding error probability of Bob and $E_{\rm
r}$ is Gallager's random coding exponent \cite[Chapter~5]{gallager1968}.
Hence, for $(R,R')$ such that $R+R' < I(X;Y)$, the $E_{\rm s}$ in
Theorem~\ref{thm:main} is a secrecy exponent.
\end{corollary}
Corollary~\ref{cor:existence} is proved in Appendix~\ref{app:existence}.
\section{Proof of Theorem~\ref{thm:main}} \label{sec:proof}
For every $w \in \IndexSet{M}$ and every $\bz \in \mathcal{Z}^n$, let
\begin{equation}
U_n(\bz|w) \triangleq \frac{P_{\mathbf{Z}|W}(\bz|w)}{P_Z^n(\bz)}.
\label{eq:udef}
\end{equation}
Using \eqref{eq:expectedPZW}, it is easy to see that $\mathbb{E}[U_n(\bz|w)] = 1$.
Using the linearity of expectation, we have:
\begin{align}
& \mathbb{E}\bigl[D(P_{\mathbf{Z}|W=w} \Vert P_Z^n)\bigr] \nonumber \\
& \quad = \sum_{\bz \in \mathcal{Z}^n} \mathbb{E}\Bigl[P_{\mathbf{Z}|W}(\bz|w) \log \Bigl(
\frac{P_{\mathbf{Z}|W}(\bz|w)}{P_Z^n(\bz)} \Bigr)
\Bigr]
\nonumber \\
& \quad = \sum_{\bz \in \mathcal{Z}^n} P_Z^{n}(\bz) \mathbb{E}\bigl[ U_n(\bz|w)
\log \bigl( U_n(\bz|w) \bigr) \bigr] \nonumber \\
& \quad = \sum_{\hat P \in \hat\mathcal{P}_n(\mathcal{Z})}
\sum_{\bz \in \mathcal{T}_{\hat P}} P_Z^n(\bz) \mathbb{E}\bigl[U_n(\bz|w) \log \bigl(
U_n(\bz|w) \bigr) \bigr].
\label{eq:typeSum}
\end{align}
To prove Theorem~\ref{thm:main}, we shall use the following result.
\begin{lemma} \label{lem:typeBound}
For $P \in \mathcal{P}(\mathcal{Z})$, let
\begin{align}
& G_0(P_{X,Z}, P, \lambda) \nonumber \\
& \quad \triangleq \sum_{z \in \mathcal{Z}} P(z) \log\Bigl[\sum_{x\in\mathcal{X}}
P_{X|Z}(x|z)^{1+\lambda} P_X(x)^{-\lambda} \Bigr], \label{eq:g0def}
\end{align}
and
\begin{equation}
E_t(P_{X,Z}, R', P) \triangleq \max_{0 \le \lambda \le 1} \bigl\{ \lambda R'
- G_0(P_{X,Z}, P, \lambda)\bigr\}. \label{eq:EtVal}
\end{equation}
Then, for every $w \in \IndexSet{M}$,
\begin{align}
& \mathbb{E}\bigl[U_n(\bz|w) \log \bigl(U_n(\bz | w) \bigr) \bigr] \nonumber \\
& \qquad \mathrel{\dot{\le}}
\exp[-n E_t(P_{X,Z}, R', \type{\bz})]. \label{eq:typeBound}
\end{align}
\end{lemma}
Having proved Lemma~\ref{lem:typeBound}, Theorem~\ref{thm:main} follows by using
\eqref{eq:typeBound} in \eqref{eq:typeSum} and \cite[Lemma~2.6]{csiszar2011IT}
to conclude
\begin{equation*}
\mathbb{E}\bigl[ D(P_{\mathbf{Z}|W=w} \Vert P_Z^n) \bigr] \mathrel{\dot{\le}}
\exp\bigl[ -n E_{\rm s}(P_X,\mathsf{W},R') \bigr],
\end{equation*}
where
\begin{align}
& E_{\rm s}(P_X, \mathsf{W}, R') \nonumber \\
& \quad \triangleq \min_{P \in \mathcal{P}(\mathcal{Z})} \{ D(P
\Vert P_Z) + E_t(P_{X,Z} ,R', P) \}.
\label{eq:exponentDef}
\end{align}
Using \eqref{eq:EtVal}, the equivalence of \eqref{eq:exponentDef} and
\eqref{eq:generalEs} is shown in Appendix~\ref{app:secExp}. This completes the
proof of Theorem~\ref{thm:main}. \hfill \IEEEQED
\begin{IEEEproof}[Proof of Lemma~\ref{lem:typeBound}]
Pick any $\hat P \in \hat\mathcal{P}_n(\mathcal{Z})$ and observe that for $\bz \in
\mathcal{T}_{\hat{P}}$,
\begin{equation*}
\frac{\mathsf{W}^n(\bz|\bx)}{P_Z^n(\bz)}
=
\exp\bigl[n \bigl(D(\condtype{\bx | \bz} \Vert P_X | \hat P) -
D(\condtype{\bx | \bz} \Vert P_{X|Z} | \hat P)\bigr)\bigr].
\end{equation*}
For every $P \in \mathcal{P}(\mathcal{Z})$ and stochastic matrix $\mathsf{Q}:
\mathcal{Z} \to \mathcal{X}$ define
\begin{equation}
A_{X,Z}(P;\mathsf{Q}) \triangleq
D(\mathsf{Q} \Vert P_X | P) - D(\mathsf{Q} \Vert P_{X|Z} | P).
\label{eq:apqdef}
\end{equation}
Thus, using \eqref{eq:condProbZ},
\begin{equation}
U_n(\bz|w) = \frac{1}{M'} \sum_{w'=1}^{M'} \exp\bigl[n A_{X,Z}(\hat P;
\condtype{\mathbf{X}_{w,w'} | \bz}) \bigr]. \label{eq:usum}
\end{equation}
Let
\begin{equation}
\tilde\mathcal{A} \triangleq \bigl\{%
A_{X,Z}(\hat{P}; \hat\mathsf{Q}) : \hat\mathsf{Q} \text{ a conditional type}
\bigr\} \subset \bar{\mathbb{R}}, \label{eq:atildedef}
\end{equation}
and observe that $|\tilde\mathcal{A}| = O(n^{\abs{\mathcal{X}} \abs{\mathcal{Z}}})$.
Set $\mathcal{A} \triangleq \{a \in \tilde\mathcal{A}: a > -\infty\}$
and for each $a \in \mathcal{A}$ define
\begin{equation}
\mathcal{T}_{a}(\bz) \triangleq \bigcup_{\hat\mathsf{Q}: A_{X,Z}(\hat{P};\hat\mathsf{Q}) = a}
\mathcal{T}_{\hat\mathsf{Q}}(\bz) \subseteq \mathcal{X}^n,
\label{eq:tdef}
\end{equation}
where $\mathcal{T}_{\hat\mathsf{Q}}(\bz)$ is the $\hat\mathsf{Q}$-shell of $\bz$ and the union
is over conditional types $\hat\mathsf{Q} : \mathcal{Z} \to \mathcal{X}$ (and thus contains
$O(n^{\abs{\mathcal{X}}\abs{\mathcal{Z}}})$ shells).
Now we can rewrite \eqref{eq:usum} as\footnote{Since $\bz$ and $w$ are
assumed to be fixed throughout the proof, we drop them from the argument of
$U_n$ for the sake of brevity.}
\begin{equation}
U_n \triangleq
U_n(\bz|w) = \frac1{M'} \sum_{a \in \mathcal{A}} N_a \exp(n a),
\label{eq:usuma}
\end{equation}
where $N_a \triangleq \abs{\left\{w': \mathbf{X}_{w,w'} \in \mathcal{T}_a(\bz)\right\}}$
denotes the number of codewords of $\mathcal{C}_w$ in $\mathcal{T}_a(\bz)$. Since
the codewords are independent, $N_a$ is a $\mathrm{Binomial}(M',p_a)$ random
variable where,
\begin{align}
p_a & = P_X^n\bigl(\mathcal{T}_a(\bz)\bigr)
= \sum_{\hat\mathsf{Q}: A_{X,Z} (\hat{P};\hat\mathsf{Q}) = a} P_X^n\bigl(
\mathcal{T}_{\hat\mathsf{Q}}(\bz)\bigr) \nonumber \\
& \doteq \exp\Bigl[ - n \min_{\hat\mathsf{Q}: A_{X,Z}(\hat{P}; \hat\mathsf{Q}) =
a} D(\hat\mathsf{Q} \Vert P_X | \hat{P}) \Bigr]. \label{eq:pa}
\end{align}
In the above, the second equality follows since the $\hat\mathsf{Q}$-shells are
disjoint, and the exponential equality follows from
\cite[Lemma 2.6]{csiszar2011IT} (a similar approach is used in
\cite{merhav2014} to express a quantity of interest as a weighted sum of
Binomial random variables).
In Appendix~\ref{app:eb} we compute the value of
\begin{equation}
E_b(P_{X,Z}, P, a) \triangleq \min_{\hat\mathsf{Q}: A_{X,Z}(P;\hat\mathsf{Q}) = a}
D(\hat\mathsf{Q} \Vert P_X | P) \label{eq:EbDef}
\end{equation}
and, in particular, show that
\begin{equation}
E_b(P_{X,Z}, P, a) \ge a, \label{eq:EbBound}
\end{equation}
with equality iff $a = D(P_{X|Z} \Vert P_X | P)$.
Partition $\mathcal{A} = \mathcal{A}_1 \cup \mathcal{A}_2$ as
\begin{equation*}
\mathcal{A}_1 = \{a \in \mathcal{A}: a \le R'\}, \qquad
\mathcal{A}_2 = \{a \in \mathcal{A}: a > R'\},
\end{equation*}
and split \eqref{eq:usuma} as
\begin{equation*}
U_n = \underbrace{%
\frac{1}{M'} \sum_{a \in \mathcal{A}_1} N_a \exp(na)
}_{\triangleq S_n}
+
\underbrace{%
\frac{1}{M'} \sum_{a \in \mathcal{A}_2} N_a \exp(na)
}_{\triangleq T_n}.
\end{equation*}
For non-negative $s$ and $t$ and $u \triangleq s+t$ we have
\begin{align*}
u \ln (u) & = s \ln (u) + t \ln (u) \\
& = s \ln (s) + s \ln (1 + t/s) + t \ln (u) \\
& \le s \ln (s) + t (1 + \ln(u))
\end{align*}
where the inequality follows since $\ln(1+t/s) \le t/s$.
Hence,
\begin{align}
& \mathbb{E}[U_n \log(U_n) ] \doteq \mathbb{E}[U_n \ln(U_n)]
\nonumber \\
& \quad \le \mathbb{E}[S_n
\ln(S_n)]+ \mathbb{E}\bigl[T_n \bigl(1 + \ln(U_n) \bigr)\bigr].
\label{eq:ebound1}
\end{align}
Moreover, since $U_n \le 1/ P_Z^n(\bz)$, we have
\begin{equation*}
\ln(U_n) \le \ln \bigl(1/P_Z^n(\bz)\bigr) \le n \ln
(1/p_0)
\end{equation*}
where $p_0 \triangleq \min_{z \in \mathcal{Z}} P_Z(z) > 0$. Thus, from
\eqref{eq:ebound1} we have
\begin{align}
\mathbb{E}\bigl[U_n \ln(U_n) \bigr] & \le \mathbb{E}[S_n \ln(S_n)]
+ (n \ln(1/p_0) + 1 ) \mathbb{E}[T_n] \nonumber \\
& \doteq \mathbb{E}[S_n \ln(S_n)] + \mathbb{E}[T_n].
\label{eq:ebound2}
\end{align}
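The elementary inequality $u \ln(u) \le s \ln(s) + t(1+\ln(u))$ used above can be sanity-checked numerically; the following throwaway sketch (not part of the proof) tests it on random positive pairs:

```python
import math
import random

# Check: for s, t > 0 and u = s + t, u*ln(u) <= s*ln(s) + t*(1 + ln(u)),
# which follows from ln(1 + t/s) <= t/s.
random.seed(1)
for _ in range(10_000):
    s = random.uniform(1e-6, 10.0)
    t = random.uniform(1e-6, 10.0)
    u = s + t
    lhs = u * math.log(u)
    rhs = s * math.log(s) + t * (1.0 + math.log(u))
    assert lhs <= rhs + 1e-9
```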
We now upper-bound each of the above expectations to complete the proof.
First we note that for any constant $c \in \mathbb{R}$,
\begin{equation}
\mathbb{E}[S_n \ln (S_n)] = \mathbb{E}\bigl[S_n \ln (S_n) + c(S_n - \mathbb{E}[S_n])\bigr].
\label{eq:tilte}
\end{equation}
In particular,
\begin{equation*}
\mathbb{E}[S_n \ln(S_n)] = \mathbb{E}[\psi(S_n)]
\end{equation*}
where
\begin{equation}
\psi(s) \triangleq s \ln(s) - \bigl( \ln \bigl( \mathbb{E}[S_n] \bigr) + 1
\bigr)(s-\mathbb{E}[S_n]).
\label{eq:psiDef}
\end{equation}
One can check that (see Fig.~\ref{fig:psi})
\begin{equation}
\psi(s) \le \frac{(s - \mathbb{E}[S_n])^2}{\mathbb{E}[S_n]} + \mathbb{E}[S_n] \ln(\mathbb{E}[S_n])
\le \frac{(s - \mathbb{E}[S_n])^2}{\mathbb{E}[S_n]},
\label{eq:psiBound}
\end{equation}
where the last inequality follows since $\mathbb{E}[S_n] = 1 - \mathbb{E}[T_n] \le 1$ as
$S_n$ and $T_n$ are both non-negative random variables.
\begin{figure}[t]
\centering
\input{figures/psi}
\caption{The function $\psi(s)$ defined in \eqref{eq:psiDef} and the
upper-bound in \eqref{eq:psiBound}. In the figure $\overline{S_n} \triangleq
\mathbb{E}[S_n]$.}
\label{fig:psi}
\end{figure}
Using \eqref{eq:psiBound} in \eqref{eq:tilte} we conclude that
\begin{equation}
\mathbb{E}[S_n \ln(S_n)] \le \frac{\var(S_n)}{\mathbb{E}[S_n]}.
\label{eq:ESlogSBound}
\end{equation}
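The quadratic bound \eqref{eq:psiBound} on $\psi$ is elementary (it follows from $\ln x \le x - 1$); the following throwaway sketch (not part of the proof) verifies both inequalities numerically for several values of $m = \mathbb{E}[S_n] \le 1$:

```python
import math
import numpy as np

# psi(s) = s*ln(s) - (ln(m) + 1)*(s - m) with m = E[S_n] <= 1; verify
# psi(s) <= (s - m)^2 / m + m*ln(m) <= (s - m)^2 / m on a grid of s.
for m in (0.05, 0.3, 0.9, 1.0):
    for s in np.linspace(1e-6, 5.0, 2000):
        psi = s * math.log(s) - (math.log(m) + 1.0) * (s - m)
        mid = (s - m) ** 2 / m + m * math.log(m)
        assert psi <= mid + 1e-9
        assert mid <= (s - m) ** 2 / m + 1e-12  # m*ln(m) <= 0 for m <= 1
```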
We now have,
\begin{align}
\mathbb{E}[S_n] & = \sum_{a \in \mathcal{A}_1} p_a \exp(n a) \nonumber \\
& \doteq \exp\Bigl[-n \min_{a \in \mathcal{A}_1} \bigl\{E_b(P_{X,Z}, \hat P, a) -
a \bigr\} \Bigr], \label{eq:ESBound}
\end{align}
where the last equality follows since $|\mathcal{A}_1| =
O(n^{\abs{\mathcal{X}}\abs{\mathcal{Z}}})$. Furthermore,
\begin{align}
& \var(S_n) = \frac{1}{{M'}^2} \sum_{(a,a') \in \mathcal{A}_1^2}
\exp[n (a+a')] \cov(N_a, N_{a'}) \nonumber \\
& \quad \stackrel{\text{(a)}}{\le} \frac{1}{{M'}^2} \sum_{(a,a') \in
\mathcal{A}_1^2} \exp[n(a+a')] \sqrt{\var(N_a)} \sqrt{\var(N_{a'})} \nonumber \\
& \quad = \frac{1}{{M'}^2} \left(\sum_{a \in \mathcal{A}_1}
\exp[n a] \sqrt{\var(N_a)} \right)^2 \nonumber \\
& \quad \stackrel{\text{(b)}}{\doteq} \frac{1}{{M'}^2}
\left(\max_{a \in \mathcal{A}_1} \left\{ \exp[n a] \sqrt{\var(N_a)}\right\}
\right)^2 \nonumber \\
& \quad = \max_{a \in \mathcal{A}_1}\Bigl\{ \frac{1}{ {M'}^2 } \exp[2 n a]
\var(N_a) \Bigr\} \nonumber \\
& \quad \stackrel{\text{(c)}}{\le}
\max_{a \in \mathcal{A}_1} \Bigl\{ \frac{1}{M'} \exp[2 n a] p_a \Bigr\}
\nonumber \\
& \quad \doteq \exp\Bigl[ -n \min_{a \in \mathcal{A}_1} \bigl\{R' + E_b(P_{X,Z},
\hat P, a) - 2 a \bigr\} \Bigr]. \label{eq:varSBound}
\end{align}
In the above,
\begin{enumerate}[(a)]
\item follows by the Cauchy--Schwarz inequality,
\item follows since $|\mathcal{A}_1| = O(n^{\abs{\mathcal{X}}\abs{\mathcal{Z}}})$,
\item follows since $\var(N_a) = M' p_a (1-p_a) \le M' p_a$,
\end{enumerate}
and finally \eqref{eq:varSBound} follows from \eqref{eq:pa} and
\eqref{eq:EbDef}.
Similar to \eqref{eq:ESBound},
\begin{equation}
\mathbb{E}[T_n] \doteq \exp\Bigl[ - n \min_{a \in \mathcal{A}_2} \bigl\{
E_b(P_{X,Z},\hat P, a) - a \bigr\} \Bigr]. \label{eq:ETBound}
\end{equation}
Using \eqref{eq:ESBound} and \eqref{eq:varSBound} in \eqref{eq:ESlogSBound},
together with \eqref{eq:ETBound} in \eqref{eq:ebound2}, we conclude that
\begin{align}
E_t(P_{X,Z}, R', \hat P) = &\min\{E_1(P_{X,Z}, R', \hat P) -
\bar{E}_2(P_{X,Z}, R', \hat P), \nonumber \\
& \qquad E_2(P_{X,Z}, R', \hat P)\}, \label{eq:typeExpMin1}
\end{align}
where
\begin{align}
E_1(P_{X,Z}, R', \hat P) & \triangleq \min_{a \le R'} \bigl\{
R' + E_b(P_{X,Z}, \hat P, a) - 2 a \bigr\}, \label{eq:e1def} \\
E_2(P_{X,Z}, R', \hat P) & \triangleq \min_{a > R'} \bigl\{
E_b(P_{X,Z}, \hat P, a) - a \bigr\}, \label{eq:e2def} \\
\bar{E}_2(P_{X,Z}, R', \hat P) & \triangleq \min_{a \le R'} \bigl\{
E_b(P_{X,Z}, \hat P, a) - a \bigr\}. \label{eq:e2bardef}
\end{align}
We now observe that:
\begin{enumerate}[i.]
\item lower-bounding $R'$ by $a$ in \eqref{eq:e1def} shows $E_1(P_{X,Z},
R',\hat P) - \bar{E}_2(P_{X,Z}, R', \hat P) \ge 0$.
\item by \eqref{eq:EbBound}, one and only one of $E_2(P_{X,Z}, R', \hat P)$
or $\bar{E}_2(P_{X,Z}, R', \hat P)$ is zero.
\end{enumerate}
Thus \eqref{eq:typeExpMin1} simplifies to
\begin{equation}
E_t(P_{X,Z}, R', \hat P) =
\min \bigl\{E_1(P_{X,Z}, R', \hat P), E_2(P_{X,Z}, R', \hat P)\bigr\}.
\label{eq:typeExpMin}
\end{equation}
In Appendix~\ref{app:e1e2} we show that
\begin{subequations}
\begin{align}
E_1(P_{X,Z},R', \hat P) &= \max_{\lambda \le 1} \bigl\{ \lambda R' -
G_0(P_{X,Z}, \hat P, \lambda) \bigr\}, \label{eq:e1val} \\
E_2(P_{X,Z},R', \hat P) &= \max_{\lambda \ge 0} \bigl\{ \lambda R'-
G_0(P_{X,Z}, \hat P, \lambda) \bigr\}. \label{eq:e2val}
\end{align}
\end{subequations}
Using the above in \eqref{eq:typeExpMin} concludes the proof.
\end{IEEEproof}
\section{Discussion}
We derived a lower-bound on the secrecy exponent of the wire-tap
channel using i.i.d.\ random codes. Comparing \eqref{eq:generalEs} with
\cite[Equation~(12)]{hayashi2011}, we see that our exponent is equal to that of
\cite{hayashi2011} which is the best lower-bound on the secrecy exponent among
those reported in \cite{hayashi2006,hayashi2011,hou2014}. However, our
proof is based on a pure i.i.d.\ random coding construction and does not require
the ensemble of universal hash functions as an additional tool. While this
manuscript was in review, it came to our attention that alternative
derivations of the same lower-bound, likewise based on pure i.i.d.\ random
coding constructions, are given in \cite{han2014,hayashi2015}.
Our proof is a generalization of that of \cite[Section~III-A]{hou2013}; instead
of partitioning the set of output sequences $\mathcal{Z}^n$ into two classes of
typical and atypical sequences, we partition it into $O(n^{|\mathcal{Z}|})$
type-classes to upper-bound the expected unnormalized Kullback--Leibler
divergence between the output distribution and the desired product distribution
$P_Z^n$. In addition, in Lemma~\ref{lem:typeBound}, we bound the point-wise
difference between those distributions at each $\bz \in \mathcal{Z}^n$.
Furthermore, we believe that the method described here has merit in showing the
doubly exponential nature of the concentration of the output distribution; as we
see in \eqref{eq:condProbZ}, the output distribution $P_{\mathbf{Z}|W}(\bz|w)$ is an
average of $M'$ i.i.d.\ random variables. If the distribution of the summands
were independent of $M'$, the average would concentrate around its mean
exponentially fast in $M'$, that is, \emph{doubly exponentially fast} in $n$.
Although this is not the case, we see in the proof of Lemma~\ref{lem:typeBound}
that among polynomially many summands in \eqref{eq:usuma}, only the one
corresponding to $a = D(P_{X|Z} \Vert P_X | \type{\bz})$ has a
significant contribution to the mean of $U_n(\bz|w)$ (which is a normalized
version of $P_{\mathbf{Z}|W}(\bz|w)$); the rest all have exponentially small means.
Applying the Chernoff bound to this particular term,
we see that if $R' > D(P_{X|Z}\Vert
P_X | \type{\bz})$ the dominant term concentrates around its mean doubly
exponentially fast in $n$. In particular, there exists a class of wire-tapper
channels for which $U_n(\bz|w)$ consists only of this dominant
term.\footnote{This happens if for every $z \in \mathcal{Z}$ and every $x \in
\mathcal{X}$, either $\mathsf{W}(z|x) = 0$ or $\mathsf{W}(z|x) = \epsilon_z$ for some constant
$\epsilon_z < 1$ independent of $x$.}
The achievability constructions of
\cite{hayashi2006,hayashi2011,hou2013,hou2014,han2014,hayashi2015} are based on
i.i.d.\ random codes. It is an open question whether random
constant-composition codes \cite{csiszar2011IT} will lead to a better secrecy
exponent. We believe that our method is easily adaptable to other types of
random coding (some ideas presented in \cite{hayashi2011allerton} can also be
useful in this direction). Another important subject in the context of the
wire-tap channel is to derive non-trivial upper-bounds on the secrecy exponent.
The performance of a wire-tap code is measured via two quantities, the error
probability and the information leakage, which are both shown to be
exponentially decaying as a function of the block-length $n$. The trade-off
between these exponents has been recently studied in \cite{Tan2015}.
We conclude our discussion by remarking that, as shown in \cite{csiszar1978},
for general channels $\mathsf{V}$ and $\mathsf{W}$, any message rate up to
\begin{equation*}
I(V; Y) - I(V; Z),
\end{equation*}
where $V \markov X \markov (Y,Z)$ form a Markov chain, is achievable. Our
results (and also those of others cited) extend straightforwardly to the
case when the channels are prefixed with a channel $P_{X|V}$ and an auxiliary
random variable $V$ is used.
\section*{Acknowledgment}
The authors would like to thank Prof. Neri Merhav, Prof. Vincent Y. F. Tan, and
Mohammad Hossein Yassaee for their helpful comments on an earlier version of
this work.
This work was supported by the Swiss NSF under grant number 200020\_146832.
\bibliographystyle{IEEEtran}
\section{Introduction \label{sec:intro}}
Subluminous B stars (sdBs) show
colours and spectral characteristics similar to those of main sequence stars of
spectral type B, but are much less luminous.
Compared to main sequence B stars, the hydrogen Balmer lines in the spectra
of sdBs are stronger while the helium lines are much weaker (if present at
all) for their colour. The strong line broadening and the early confluence of the
Balmer series is caused by the
high surface gravities ($\log\,g\simeq5.0-6.0$) of these compact stars
($R_{\rm sdB}\simeq0.1-0.3\,R_{\rm \odot}$).
Subluminous B stars are considered to be helium core burning stars with
very thin hydrogen envelopes and masses of about half a solar mass (Heber \cite{heber1})
located at the extreme end of the horizontal branch (EHB).
Subdwarf B stars are
found in all Galactic stellar populations and are sufficiently common to
account
for the UV-upturn of early-type galaxies.
Understanding the origin of the UV-upturn phenomenon hence has to await a
proper understanding of
the origin of the sdB stars themselves.
The discovery of short-period multi-periodic pulsations
in some sdBs provided an excellent opportunity to probe the
interiors of these stars using the tools of asteroseismology.
They were theoretically predicted by
Charpinet et al. (\cite{charpinet96}) at around the same time as they were
observed by
Kilkenny et al. (\cite{kilkenny}).
They are
characterised by low-amplitude, multi-periodic, short-period ($80-600\,{\rm s}$)
light variations that are due to pressure ($p$)-mode
oscillations.
A second family of pulsating sdB stars was discovered by Green et al. (\cite{green2}),
again showing low-amplitude, multi-periodic pulsations, but with longer periods ($2000-9000\,{\rm s}$) identified as
gravity ($g$) modes. An important recent achievement of sdB asteroseismology
is the determination of the most
fundamental parameter of a star,
i.e. its mass (for a review see Fontaine et al. \cite{fontaine}).
The origin of EHB stars, however, is wrapped in mystery (see Heber
\cite{heber09} for a review).
The problem is how some kind of mass loss mechanism in the
progenitor manages
to remove all but a tiny fraction of the hydrogen envelope at about
the same time as
the helium core has attained the mass ($\sim0.5$
M$_\odot$) required for the helium flash.
This requires enhanced mass loss, e.g. due to helium mixing driven by
internal rotation (Sweigart \cite{sweigart97}) or at the helium flash itself.
Mengel et al. (\cite{mengel76}) demonstrated that the required strong mass loss
can occur in a close-binary system. The progenitor of the sdB star has
to fill its Roche lobe near the tip of the red-giant branch (RGB)
to lose most of its hydrogen-rich envelope. The merger of binary white dwarfs was investigated
by Webbink (\cite{webbink}) who showed that an EHB star can form when two helium core
white dwarfs merge and the product is sufficiently massive to ignite helium.
Interest in the binary scenario was revived,
when Maxted et al. (\cite{maxted2}) determined a
very high fraction of radial velocity variable sdB stars, indicating that
about two thirds of the sdB stars in the field are in close binaries with periods of less
than 30 days (see also Morales-Rueda et al. \cite{morales}; Napiwotzki et al. \cite{napiwotzki8}). The companions, as far as their nature could be clarified, are mostly M dwarfs or white dwarfs. If the white dwarf companion is sufficiently massive, the merger of the binary
system might exceed the Chandrasekhar mass and explode as a type Ia supernova. Indeed, Bill\`{e}res et al. (\cite{billeres00}) and Maxted et al. (\cite{maxted}) discovered KPD~1930$+$2752, a system that might qualify as an SN~Ia progenitor (see also Geier et al. \cite{geier}).
These discoveries triggered new theoretical evolutionary calculations in the
context of binary population-synthesis to identify the importance of
various channels of close-binary evolution (Han et al. \cite{han1,han2}), i.e. two phases of
common-envelope ejection, stable Roche-lobe overflow and white dwarf merger.
\subsection{Outline of the paper}
The purpose of this paper is to clarify the nature of the unseen companions
for 40 short-period sdB binaries, which comprise about half of the sdB stars in single-lined close
binary systems with known periods and radial velocity amplitudes.
We assumed tidally locked rotation and made use of the sdBs' gravities and projected rotational velocities.
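A key ingredient of such an analysis is the binary mass function, which relates the measured period $P$ and RV semi-amplitude $K$ to the companion mass at a given inclination. The sketch below (with hypothetical numerical values and the canonical sdB mass of $0.47\,M_{\rm \odot}$; a simplified illustration, not the actual analysis of this paper) solves $f(M) = M_2^3 \sin^3 i / (M_1 + M_2)^2 = P K^3 / (2\pi G)$ for $M_2$ by bisection:

```python
import math

G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30         # solar mass [kg]

def companion_mass(P_days, K_kms, M1_sun=0.47, incl_deg=90.0):
    """Companion mass M2 (solar masses) from the binary mass function."""
    P = P_days * 86400.0
    K = K_kms * 1.0e3
    M1 = M1_sun * M_SUN
    sin_i = math.sin(math.radians(incl_deg))
    f_M = P * K**3 / (2.0 * math.pi * G)       # mass function [kg]
    lo, hi = 0.0, 100.0 * M_SUN
    for _ in range(200):                        # bisection: LHS grows with M2
        mid = 0.5 * (lo + hi)
        if (mid * sin_i) ** 3 / (M1 + mid) ** 2 < f_M:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / M_SUN

# Hypothetical system: P = 0.5 d, K = 150 km/s. An edge-on orbit gives the
# minimum companion mass; lower inclinations imply a heavier companion.
print(companion_mass(0.5, 150.0))               # minimum mass, i = 90 deg
print(companion_mass(0.5, 150.0, incl_deg=45))  # heavier at lower inclination
```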
The paper is structured in two parts. After a short review on close binary
sdB stars (Sect.~\ref{sec:binaries}), part I (Sects.~\ref{sec:obs} to
\ref{sec:form}) describes the analysis of the sample.
Besides constraining the mass of the companions and unravelling the
nature of most companions as M dwarfs or typical white dwarfs, it reports the
discovery of a population of eight unseen compact companions with masses exceeding $0.9\,M_{\rm \odot}$ (in addition to
KPD~1930$+$2752), some of which even exceed
the Chandrasekhar limit. Accordingly, the latter should be neutron
stars (NS) or black holes (BH). Even if they were massive white dwarfs, it would be surprising to find
such a large fraction, as massive white dwarfs are rare.
As no binary system containing an sdB plus a NS/BH is known today,
we investigate
potential formation scenarios in Sect.~\ref{sec:form} and find it indeed
possible to create such systems through two phases of common envelope evolution.
Our results rest on the assumption of tidally locked rotation. Therefore, part
II of the paper (Sects.~\ref{sec:tidal} and \ref{sec:challenge}) deals with the synchronisation time
scales of sdB stars in close binaries both from a theoretical point of view
and from the perspective of empirical constraints.
The general result is that systems with periods
shorter than $1.2\,{\rm d}$ should be synchronised. Empirical evidence is available that
systems with periods below $0.6\,{\rm d}$ are synchronised as is indeed the case for
the systems with massive companions.
Although selection effects would favour detection of highly inclined systems,
no such system was found among those binaries with massive companions
in our sample. This calls for a careful
inspection of alternative explanations (Sect.~\ref{sec:challenge}). There are two aspects to be discussed.
First, the sdB may not burn helium at all and thus be spun up by ongoing contraction.
Alternatively, the actual evolutionary age of
individual stars may be smaller than appreciated, i.e. the EHB star may have formed only
recently and the systems would therefore not yet be
synchronised. In Sect.~\ref{sec:summary} we summarise and discuss the results.
\section{Hot subdwarf binaries}\label{sec:binaries}
Several studies aimed at determining the fraction of hot subdwarfs
residing in close binary systems. Samples of hot subdwarfs have been checked for
RV variations. The resulting fractions range from
$39\,\%$ to $78\,\%$ (Green et al. \cite{green4}; Maxted et al. \cite{maxted2};
Napiwotzki et al. \cite{napiwotzki8}). Several studies were undertaken to
determine the orbital parameters of subdwarf binaries
(Edelmann et al. \cite{edelmann}; Green et al. \cite{green3};
Morales-Rueda et al. \cite{morales,morales2}; Karl et al. \cite{karl3}).
The orbital periods range from $0.07-30\,{\rm d}$ with a peak at
$0.5-1.0\,{\rm d}$ (see Fig. \ref{fig:ritter}).
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg1.eps}}
\caption{Period distributions of the 40 binaries in our sample with known orbital parameters (dashed histogram) and all known 81 sdB binaries in the Ritter \& Kolb (\cite{ritter}) catalogue (blank histogram). }
\label{fig:ritter}
\end{center}
\end{figure}
\subsection{Binary evolution}
For close binary sdBs common envelope ejection is the most probable
formation channel. In this scenario two main sequence stars of
different masses evolve in a binary system. The heavier one will first
reach the red giant phase and fill its Roche lobe. If the mass transfer to the
companion is dynamically unstable, a common envelope (CE) is formed.
Due to friction the two stellar cores lose orbital energy, which is deposited
within the envelope and leads to a shortening of the binary period.
Eventually the common envelope is ejected and a close binary system is formed,
which contains a core helium-burning sdB and a main sequence companion. If this star reaches the red giant branch, another common envelope phase is possible
and can lead to a close binary with a white dwarf companion and an sdB.
If the mass transfer to the companion is dynamically stable, no common envelope
is formed and the primary slowly accretes matter from the secondary. The
companion eventually loses most of its envelope and evolves to an sdB. This
leads to sdB binaries with much larger separation and therefore much longer
orbital periods. Although many sdBs have spectroscopically visible main
sequence companions, no radial velocity variable system among them has been detected so far. Therefore the so-called stable Roche lobe overflow (RLOF) channel remains unconfirmed.
Binary evolution also provides a possibility to form single sdB stars via the
merger of two helium white dwarfs (Webbink \cite{webbink}; Iben \& Tutukov
\cite{iben}). Close He white dwarf binaries are formed as a result of two CE-phases.
Loss of angular momentum through emission of gravitational radiation will
cause the system to shrink. Provided the initial separation is short enough, the two white dwarfs eventually merge, and if the mass of the merger is high enough,
core helium burning is ignited and an sdB with very thin hydrogen envelope is
formed. Recently Politano et al. (\cite{politano}) proposed a new
evolutionary channel. The merger of a red giant and a low mass main-sequence star during
a common envelope phase may lead to the formation of a rapidly rotating hot
subdwarf star. Soker (\cite{soker}) proposed similar scenarios with planetary companions.
A candidate substellar companion to the sdB star HD\,149382 has been discovered
recently (Geier et al. \cite{geier5}).
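The merger timescale set by gravitational wave emission can be sketched with the classical Peters (1964) formula for a circular orbit; the masses and separation below are illustrative assumptions, not values derived in this work:

```python
# Sketch: gravitational-wave merger time of a circular double white dwarf
# binary (Peters 1964). Masses and separation are illustrative assumptions.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m s^-1
M_sun = 1.989e30   # kg
R_sun = 6.957e8    # m

def merger_time(m1, m2, a):
    """Merger time [s] of a circular binary with separation a [m]."""
    return (5.0 / 256.0) * c**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2))

# Two 0.4 M_sun helium white dwarfs at a separation of 1 R_sun:
t = merger_time(0.4 * M_sun, 0.4 * M_sun, R_sun)
print(f"merger time ~ {t / 3.156e16:.1f} Gyr")   # ~1 Gyr, well below a Hubble time
```

The steep $a^{4}$ dependence is why only the closest post-CE white dwarf pairs can merge within a Hubble time.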
\subsection{SN Ia progenitors}
Double degenerate systems in close orbits
are viable candidates for progenitors of type Ia supernovae (SN~Ia), which play
an important role as standard candles for the study of cosmic evolution
(e.g. Riess et al. \cite{riess}; Leibundgut \cite{leibundgut}; Perlmutter et
al. \cite{perlmutter}). The nature of their progenitors is still under debate
(Livio \cite{livio}). The progenitor population provides crucial information
for backing the assumption that distant SN~Ia can be used as standard
candles like the ones in the local universe.
There is general consensus that only the thermonuclear explosion of a white
dwarf (WD) is compatible with the observed features of SN~Ia. For this a white
dwarf has to accrete mass from a close companion to reach the Chandrasekhar
limit of $1.4 \,M_{\rm \odot}$ (Hamada \& Salpeter \cite{hamada}). According
to the so-called double degenerate scenario (Iben \& Tutukov \cite{iben}), the
mass-donating companion is a white dwarf, which eventually merges with the
primary due to orbital shrinkage caused by gravitational wave radiation.
A progenitor candidate for the double degenerate scenario must have a total
mass near or above the Chandrasekhar limit and has to merge in less than a
Hubble time. Systematic radial velocity (RV) searches for double degenerates
have been undertaken (e.g. Napiwotzki \cite{napiwotzki2} and references therein). The largest of these projects was the ESO SN Ia Progenitor Survey
(SPY, Napiwotzki et al. \cite{napiwotzki9}). The best known double degenerate
SN\,Ia progenitor candidate system KPD\,1930$+$2752 has an sdB primary\footnote{The more massive component of a binary is usually defined as the primary. But in most close sdB binaries with unseen companions the masses are unknown and it is not possible to decide a priori which component is the most massive one. For this reason we call the visible sdB component of the binaries the primary throughout this paper.}, which
will become a white dwarf within about $10^{8}\,{\rm yr}$ before the merger occurs in
about $2\times10^{8}\,{\rm yr}$ (Maxted et al. \cite{maxted}; Geier et al. \cite{geier}).
Another sdB+WD binary with massive companion has been found recently (Geier
et al. \cite{geier4}).
Most recently Mereghetti et al. (\cite{mereghetti}) showed that in the X-ray
binary HD\,49798 a very massive ($>1.2\,M_{\rm \odot}$) white dwarf accretes
matter from the wind of its closely orbiting subdwarf O companion.
Iben \& Tutukov (\cite{iben94}) predicted that such a system will evolve into a SN Ia when the
primary fills its Roche lobe and transfers mass to the white dwarf to reach the
Chandrasekhar limit. This makes HD\,49798 a candidate for SN\,Ia progenitor
for this so called single degenerate scenario.
\subsection{Nature of the companions}
An up-to-date compilation of hot subdwarf binaries with known orbital
parameters is presented by Ritter \& Kolb (\cite{ritter}) which lists 81 such
systems.
In general it is difficult to put constraints on the nature of the close
companions of sdB stars. Since most of these binaries are single-lined, only
lower limits to the companion masses can be derived from the stellar mass
functions, which are in general compatible with late main sequence stars of
spectral type M or compact objects like white dwarfs. For single-lined binaries
with longer orbital periods the stellar mass function can help to further
constrain the nature of the unseen companion. Assuming the canonical mass
($0.47\,M_{\rm \odot}$; Han et al. \cite{han1,han2}) for the subdwarf, the
minimum mass of the companion may be high enough to exclude main sequence
stars,
because they would contribute significantly to the flux and therefore appear
in the spectra. This mass limit lies near $0.45\,M_{\rm \odot}$
(Lisker et al. \cite{lisker}).
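The minimum companion mass follows from solving the binary mass function $f_{\rm m} = M_{\rm comp}^{3}\sin^{3}i/(M_{\rm sdB}+M_{\rm comp})^{2} = PK^{3}/2\pi G$ for $\sin i = 1$. A minimal numerical sketch with the canonical sdB mass; the values of $P$ and $K$ are illustrative, not those of a particular programme star:

```python
# Sketch: minimum companion mass from the binary mass function,
# f_m = M_comp^3 sin^3(i) / (M_sdB + M_comp)^2 = P K^3 / (2 pi G),
# evaluated for sin(i) = 1 and the canonical sdB mass of 0.47 M_sun.
from math import pi

G = 6.674e-11      # m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
day = 86400.0      # s

def min_companion_mass(P_days, K_kms, M_sdB=0.47):
    """Minimum companion mass [M_sun], i.e. the solution for sin(i) = 1."""
    f_m = (P_days * day) * (K_kms * 1e3)**3 / (2.0 * pi * G) / M_sun
    lo, hi = 1e-4, 100.0            # bisection; the LHS is monotonic in m
    for _ in range(200):
        m = 0.5 * (lo + hi)
        if m**3 / (M_sdB + m)**2 < f_m:
            lo = m
        else:
            hi = m
    return 0.5 * (lo + hi)

# Example: P = 0.5 d, K = 100 km/s gives M_comp,min ~ 0.32 M_sun,
# still below the ~0.45 M_sun visibility limit for main sequence companions.
print(f"{min_companion_mass(0.5, 100.0):.3f} M_sun")
```

Only when the minimum mass exceeds $\simeq0.45\,M_{\rm \odot}$ can a main sequence companion be excluded on these grounds alone.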
Twelve sdB binaries have been reported to show eclipses. A combined analysis of the
light curves and time-resolved spectra of these stars makes it possible to derive the system parameters as well as the companion types.
Eight of them have late M companions (see For et al. \cite{for} for a review), while four show shallow variations caused by the eclipse of a white dwarf (Orosz \& Wade \cite{orosz}; Green et al. \cite{green}; Bloemen et al. \cite{bloemen}).
If close binary stars are double-lined, the mass ratio of the systems can be
derived from the RV semi-amplitudes of the two components. Until recently, only
one double-lined He-sdB+He-sdB binary could be analysed
(Ahmad \& Jeffery \cite{ahmad4}).
Light variations can help to unravel the nature of the companion by means of the reflection effect and by ellipsoidal
variations, even if there are no eclipses.
In short period sdB binaries with orbital periods up to about half a day and
high inclination the hemisphere of a cool main sequence or substellar
companion directed towards the subdwarf is significantly heated up by the hot
primary. This leads to a characteristic modulation of the light curve with the
orbital period, which is a clear indication for an M-star or substellar
companion. Such light variations are easily measured in short period binaries
with high orbital inclinations. Fourteen sdB+M binaries with
this so-called reflection effect are known so far. Since detailed physical
models of the reflection effect are not available yet, several free parameters
have
to be adjusted to fit the observed light curves. Only very limited constraints
can therefore be put on the companion masses and radii from an observed
reflection effect alone. The absence of a reflection effect can also help to
constrain the nature of the unseen companions (Maxted et al. \cite{maxted5}; Shimanskii et al. \cite{shimanskii}). This method works best for binaries with periods of less than $0.5\,{\rm d}$ because otherwise
the expected reflection effect from an M dwarf companion is hard to detect (Drechsel priv. comm.; Napiwotzki et al. in prep.). The binary JL\,82 must show a very strong reflection effect, since it is clearly detectable despite the long orbital period of $0.74\,{\rm d}$. What causes this strong variation is not yet understood (Koen \cite{koen2}, see also Sect. \ref{sec:lowmassm}).
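The period dependence of the reflection effect can be illustrated with a crude geometric estimate: the fraction of the sdB flux intercepted (and, optimistically, fully reprocessed) by the companion scales as $(R_{\rm comp}/2a)^{2}$, with the separation $a$ from Kepler's third law. This toy scaling is our own illustration, not one of the detailed irradiation models whose absence is noted above; all numbers are assumptions:

```python
# Toy geometric estimate of the reflection-effect amplitude: fraction of the
# sdB flux intercepted by the companion, ~ (R_comp / 2a)^2. Total mass and
# companion radius are illustrative assumptions for an sdB + M dwarf pair.
from math import pi

G = 6.674e-11      # m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
R_sun = 6.957e8    # m
day = 86400.0      # s

def reflection_amplitude(P_days, M_tot=0.6, R_comp=0.15):
    """Rough fractional amplitude; M_tot in M_sun, R_comp in R_sun."""
    a = (G * M_tot * M_sun * (P_days * day)**2 / (4.0 * pi**2))**(1.0 / 3.0)
    return (R_comp * R_sun / (2.0 * a))**2

for P in (0.1, 0.5, 0.74):
    print(f"P = {P:4.2f} d -> amplitude ~ {reflection_amplitude(P):.5f}")
```

The amplitude drops roughly as $P^{-4/3}$ (from $a\propto P^{2/3}$), from the per cent level at $P\simeq0.1\,{\rm d}$ to the mmag level beyond half a day, consistent with the detection limits quoted above.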
A massive white dwarf was identified as the companion of an sdB (Bill\`{e}res et al. \cite{billeres00}; Maxted et al. \cite{maxted2}; Geier et al. \cite{geier}), which shows a variation in its light curve caused by the tidal distortion of the sdB. Similar signs of ellipsoidal
deformation could be detected in five other cases (Orosz \& Wade \cite{orosz};
O'Toole et al. \cite{otoole}; Geier et al. \cite{geier2}; Koen et al. \cite{koen3}; Bloemen et al. \cite{bloemen}). These stars must have white dwarf companions, because the effect of tidal distortion in the
light curve is much weaker than a reflection effect, if present.
Of the 81 close binary subdwarfs with known orbital parameters (Ritter \& Kolb \cite{ritter}), 13 have bona fide M dwarf companions, while 7 must have white dwarf companions. In another 11 binaries compact
companions are most likely. One of the binaries has a subdwarf companion. The
nature of the unseen companions in the remaining 50 binaries could not be
clarified with the methods described so far.
Some hot subluminous stars may not be connected to EHB-evolution at all, as
exemplified by HD\,188112 (Heber et al. \cite{heber5}), which was found to be of
too low mass to sustain helium burning in the core. Its atmospheric parameters
place the star below the EHB. An object like HD\,188112 is considered to be a direct
progenitor of low-mass white dwarfs (Liebert et al. \cite{liebert}),
which descend from the red giant branch
and cool down.
\subsection{Rotational properties}
While the rotational properties of blue horizontal branch (BHB) stars both in
globular clusters and in the field are thoroughly examined
(see e.g. Behr \cite{behr}), there is no systematic study for EHB stars yet.
Most of the sdB stars for which $v_{\rm rot}\sin{i}$ measurements are available are slow rotators (Heber et al. \cite{heber2}; Napiwotzki et al. \cite{napiwotzki3}; Edelmann \cite{edelmann}).
The knowledge of the projected rotational velocity, combined with the gravity determination, allows one to derive the mass of single-lined binaries if the rotation is tidally locked to the orbit. A similar technique has been applied to low-mass X-ray binaries. Kudritzki \& Simon (\cite{kud1}) made use of this method for the first time in the field of hot subdwarfs to constrain the parameters of the sdO binary HD\,49798. Recently, a few sdB systems have also been studied in this way (e.g. Napiwotzki et al. \cite{napiwotzki3}; O'Toole et al. \cite{otoole3}; Geier et al. \cite{geier}, \cite{geier2}, \cite{geier4}). Here we apply this technique to a much larger sample.
\section*{Part I: Quantitative spectral analysis and binary evolution}
Here we present our measurements of projected rotational velocities for
a sample of 51 radial velocity variable sdB stars in total. Forty of them are drawn from the Ritter \& Kolb (\cite{ritter}) catalogue (including GD\,687, a system published more recently, Geier et al. \cite{geier4}) and have well determined orbital parameters.
Eleven additional radial velocity variable sdB stars have also been analysed,
but orbital parameters have not yet been determined.
The main aim is to constrain the masses of the companions under the assumption of tidally locked rotation.
Observations and analysis method are described in
Sects.~\ref{sec:obs} and \ref{sec:ana}. Surface gravity (Sect.~\ref{sec:atmo})
and projected rotational velocities (Sect. \ref{sec:rot}) will be combined with
the mass function to derive companion masses and inclinations. The nature of the companions is discussed in Sect.~\ref{sec:masses}. An evolutionary scenario for the formation of neutron star or black hole companions to sdB stars is proposed in Sect.~\ref{sec:form}.
\section{Observations and Data Reduction \label{sec:obs}}
The first set of UVES spectra were obtained in the course of the
ESO Supernovae Ia
Progenitor Survey (SPY, Napiwotzki et al. \cite{napiwotzki9,napiwotzki2})
at spectral resolution $R\simeq20\,000-40\,000$ covering
$3200-6650\,{\rm \AA}$ with two small gaps at $4580\,{\rm \AA}$ and
$5640\,{\rm \AA}$. Each of the 19 stars was observed at least twice.
The data reduction is described in Lisker et al.
(\cite{lisker}). For some of the systems follow-up
observations with UVES in the same setup were undertaken to derive the orbital
parameters. These were taken through a narrow slit for better accuracy.
For the high priority target PG\,1232$-$136 we obtained 60 short
exposures ($2\,{\rm min}$) with UVES through a very narrow slit ($0.4"$)
to achieve higher resolution ($R=80\,000$) covering $3770-4980\,{\rm \AA}$ and $5690-7500\,{\rm \AA}$.
High resolution spectra ($R=30\,000$, $4260-6290\,{\rm \AA}$) of 12 known
close binary subdwarfs have been taken with the HRS fiber spectrograph at the
Hobby Eberly Telescope (HET) in the second and third trimester 2007.
The spectra were reduced using standard ESO MIDAS routines.
Another sample of 11 known bright subdwarf binaries was observed with the
FEROS spectrograph ($R=48\,000$, $3750-9200\,{\rm \AA}$) mounted at the ESO/MPG
2.2m telescope. The spectra were downloaded from the ESO science archive and
reduced with the FEROS-DRS pipeline under the ESO MIDAS context in optimum
extraction mode.
Three spectra of subdwarf binaries were obtained with the FOCES spectrograph
($R=30\,000$, $3800-7000\,{\rm \AA}$) mounted at the CAHA 2.2m telescope.
Three spectra were taken with the HIRES instrument ($R=45\,000$,
$3600-5120\,{\rm \AA}$) at the Keck telescope. Two spectra taken with the
echelle spectrograph ($R=20\,000$, $3900-8060\,{\rm \AA}$) at the 1.5m
Palomar telescope were provided by N. Reid (priv. comm.).
Because a wide slit was used in the SPY survey and the seeing
disk did not always fill the slit, the instrumental profile of some of the UVES spectra was seeing dependent.
This has to be accounted for to estimate the instrumental resolution.
The seeing of all single exposures was measured with the DIMM seeing monitor
at Paranal Observatory and taken from the ESO science archive
(Sarazin \& Roddier \cite{sarazin}). As a test the seeing was also estimated from
the width of the echelle orders perpendicular to the direction of dispersion in some cases
and found to be consistent with the DIMM measurements.
The errors are considered to be smaller than the change of seeing during the exposures (up to $0.2''$).
The resolution of the spectra taken with the fiber spectrographs
FEROS, FOCES and HRS was assumed to be constant. Changes in the instrumental
resolution because of temperature variations and for other reasons were
considered as negligible.
The single spectra of all programme stars were RV-corrected and co-added in
order to achieve higher signal-to-noise.
\section{Analysis method \label{sec:ana}}
Since the programme stars are single-lined spectroscopic binaries, no information about the orbital motion of the sdBs' companions is available, and thus only their mass functions can be calculated:
\begin{equation}
\label{equation-mass-function}
f_{\rm m} = \frac{M_{\rm comp}^3 \sin^3i}{(M_{\rm comp} +
M_{\rm sdB})^2} = \frac{P K^3}{2 \pi G}
\end{equation}
Although the RV semi-amplitude $K$ and the period $P$ are determined by the RV
curve, the sdB mass $M_{\rm sdB}$, the companion mass $M_{\rm comp}$ and the
inclination angle $i$ remain free parameters.
In the following analysis we adopt the mass range given by Han et al. (\cite{han1}, \cite{han2}) for sdBs in binaries which underwent the common envelope channel, if no independent mass determinations are available (see Sect.~\ref{sec:masses} for details).
In close binary systems, the rotation of the stars becomes synchronised to their
orbital motion by tidal forces (see Sect.~\ref{sec:tidal} for a detailed
discussion). In this case their rotational periods equal the orbital periods
of the binaries. If the sdB primary is synchronised in this way, its rotational velocity $v_{\rm rot}$ can be calculated:
\begin{equation}
v_{\rm rot} = \frac{2 \pi R_{\rm sdB}}{P}
\end{equation}
The stellar radius $R$ is given by the mass-radius relation and can be derived if the surface gravity $g$ has been determined:
\begin{equation}
R = \sqrt{\frac{M_{\rm sdB}G}{g}}
\end{equation}
The measurement of the projected rotational velocities $v_{\rm obs}=v_{\rm rot}\,\sin\,i$
and the surface gravities $g$ therefore allows us to constrain the systems'
inclination angles $i$. With $M_{\rm sdB}$ as free parameter the mass function
can be solved and the inclination angle as well as the companion mass can be
derived. Since $\sin{i} \leq 1$, a lower limit for the sdB mass is given by
\begin{equation}
\label{eq:minmass}
M_{\rm sdB} \geq \frac{v_{\rm obs}^{2} P^{2}g}{4 \pi^{2}G}
\end{equation}
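The chain of relations above can be sketched numerically: for an assumed sdB mass the radius follows from $\log{g}$, the synchronised rotational velocity from $P$, the inclination from the measured $v_{\rm rot}\sin{i}$, and the companion mass from the mass function. The input values below are illustrative, not results for any programme star:

```python
# Sketch: companion mass and inclination from P, K, log(g) and v sin(i),
# assuming tidally locked rotation. All input values are illustrative.
from math import pi, asin, degrees

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
day = 86400.0          # s

def solve_companion(P_days, K_kms, logg_cgs, vsini_kms, M_sdB=0.47):
    """Return (inclination [deg], companion mass [M_sun])."""
    P = P_days * day
    g = 10.0**logg_cgs * 1e-2                    # cgs -> m s^-2
    R = (G * M_sdB * M_sun / g)**0.5             # R = sqrt(G M / g)
    v_rot = 2.0 * pi * R / P                     # synchronised rotation
    sini = vsini_kms * 1e3 / v_rot
    if sini > 1.0:
        raise ValueError("v sin(i) too large for this sdB mass")
    f_m = P * (K_kms * 1e3)**3 / (2.0 * pi * G)  # mass function [kg]
    lo, hi = 1e-4, 100.0                         # bisect for M_comp [M_sun]
    for _ in range(200):
        m = 0.5 * (lo + hi)
        if (m * M_sun)**3 * sini**3 / ((M_sdB + m) * M_sun)**2 < f_m:
            lo = m
        else:
            hi = m
    return degrees(asin(sini)), 0.5 * (lo + hi)

incl, M_comp = solve_companion(P_days=0.3, K_kms=80.0,
                               logg_cgs=5.6, vsini_kms=8.0)
print(f"i ~ {incl:.1f} deg, M_comp ~ {M_comp:.2f} M_sun")
```

Note how a small measured $v_{\rm rot}\sin{i}$ at a given $P$ and $\log{g}$ forces a low inclination and hence a high companion mass.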
This method has already been applied to the sdB+WD binaries HE\,1047$-$0436
(Napiwotzki et al. \cite{napiwotzki3}), Feige\,48
(O'Toole et al. \cite{otoole3}), KPD\,1930$+$2752
(Geier et al. \cite{geier}), PG\,0101$+$039 (Geier et al. \cite{geier2}) and
GD\,687 (Geier et al. \cite{geier4}).
There are no signatures of companions visible in the optical spectra of our
programme stars. Main sequence stars with masses higher than
$0.45\,M_{\rm \odot}$ can therefore be excluded because
otherwise spectral features of
the cool secondary (e.g. Mg\,{\sc i} lines at $\simeq5170\,{\rm \AA}$)
would appear in the spectra (Lisker et al. \cite{lisker}) and a flux excess in the
infrared would become visible in the spectral energy distribution (Stark \& Wade
\cite{stark}; Reed \& Stiening \cite{reed}).
Another possibility to detect M dwarf or brown dwarf companions is via reflection effects in the
binary light curves. The detection of a reflection effect provides solid
evidence for the presence of an M dwarf or brown dwarf companion. The non-detection of such a
modulation can be used to constrain the nature of the companion as well, since a compact object like
a white dwarf would be too small to contribute significantly to the total flux and cause a detectable reflection effect. But constraining the companion type in this way is problematic for several reasons. First of all, the amplitude of the reflection becomes very small (a few mmag) unless the binary has a short period ($<0.5\,{\rm d}$, Drechsel priv. comm.; Napiwotzki et al. in prep.). Unless the photometry is excellent, such shallow variations over long timescales are not detectable from the ground. Furthermore, the amplitude of the modulation depends on the binary inclination, which is not known in general. An sdB+M binary seen at very low inclination does not show a detectable reflection effect. But most importantly the physics behind the reflection effect itself is poorly understood and one has to use rather crude approximations to derive its amplitude. The most recent detection of a surprisingly strong reflection effect in the long period system JL\,82 (Koen \cite{koen2}) illustrates this.
Some of our programme stars have already been checked for modulations in their light curves. We consider the lack of a reflection effect as significant constraint, if the orbital period of the binary is shorter than $0.5\,{\rm d}$. In this case the companion should be a compact object. In the case of binaries with longer periods the non-detection of a reflection effect is used as consistency check.
The atmospheric parameters effective temperature and surface gravity of most
of our programme stars have been derived from low resolution spectra with
sufficient accuracy and can be taken from the literature in most cases. In order
to measure projected rotational velocities of sdB stars however, high spectral
resolution is necessary, because the $v_{\rm rot}\sin{i}$ values are small in most cases.
\section{Determination of the surface gravity and systematic errors
\label{sec:atmo}}
Since the precise determination of the atmospheric parameters, especially the
surface gravity, is of utmost importance for our analysis, this section is
devoted to the systematic uncertainties dominating the determination of these
parameters. Spectra of sdB stars in the literature were analysed either with metal
line-blanketed LTE model atmospheres or with NLTE model atmospheres neglecting
metal line blanketing altogether. As pointed out by Heber et al.
(\cite{heber2}), Heber \& Edelmann (\cite{heber4}) and Geier et al.
(\cite{geier}), systematic differences between these two approaches are
present. Most importantly the gravity scale differs by about $0.05\,{\rm dex}$.
Most of the atmospheric parameters of our programme stars are taken from the literature and were derived by fitting LTE or NLTE models (Table \ref{orbit}). The
adopted errors in $\log{g}$ range from $0.05$ to $0.15$. It is important to
note that all stars except PG\,1336$-$018, HW\,Vir, PG\,1432$+$159 and PG\,2345$+$318 have been analysed with the same
grids of LTE and NLTE atmospheres and the same fitting procedure. The error
in surface gravity starts to dominate the error budget of the derived
parameters as soon as the error in $v_{\rm rot}\sin{i}$ drops below about
$1.0\,{\rm km\,s^{-1}}$ (see Sect.~\ref{sec:rot}).
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg2.eps}}
\caption{$T_{\rm eff}-\log{g}$-diagram for the entire sample under study.
The helium main sequence (HeMS) and the EHB band (limited by the zero-age
EHB, ZAEHB, and the terminal-age EHB, TAEHB) are superimposed with EHB evolutionary tracks for solar metallicity taken from
Dorman et al. (\cite{dorman}) labelled with their masses. Average error bars ($\Delta T_{\rm eff}=500-1000\,{\rm K}$, $\Delta \log{g}=0.05-0.10$) are given in the lower right corner. The filled symbols mark binaries with known orbital parameters (see Table~\ref{orbit}), the open symbols radial velocity variable systems for which orbital parameters are unavailable or uncertain (see Table~\ref{tab:vrotnosol}).}
\label{fig:tefflogg}
\end{center}
\end{figure}
In cases where no reliable atmospheric parameters could be found in the literature, we determined them by fitting LTE models.
Since the accuracy of the parameters is very much dependent on the higher Balmer
lines, a high S/N in this region is necessary. The quality of high resolution
spectra obtained with FEROS or FOCES declines toward the blue end. This can cause systematic shifts in the
parameter determination (up to $\Delta T_{\rm eff}\simeq2000\,{\rm K}$ and
$\Delta \log{g}=0.2$). That is why we chose UVES, HIRES or low resolution
spectra to determine the atmospheric parameters if possible. In order to
improve the atmospheric parameter determination of TON\,S\,183,
BPS\,CS\,22169$-$0001 and $[$CW83$]$\,1735$+$22 we obtained additional
medium resolution spectra with WHT/ISIS in August 2009. A medium resolution spectrum of KPD\,1946$+$4340
taken with ISIS (Morales-Rueda et al. \cite{morales}) and a low resolution spectrum taken with the B\&C spectrograph
mounted at the $2.3\,{\rm m}$ Bok telescope on Kitt Peak (Green priv. comm.) have been fitted with metal-enriched
models.
For the hot stars BPS\,CS\,22169$-$0001, $[$CW83$]$\,1735$+$22 and KPD 1946$+$4340 the NLTE models usually applied gave a strong mismatch for the He\,{\sc ii} line at $4686\,{\rm \AA}$. Using metal line blanketed LTE models of solar composition did not improve the fit. A similar problem was found by O'Toole \& Heber (\cite{otoole2}) in their analysis of our programme star CD\,$-$24\,731 (and two other hot sdBs), which is of similarly high temperature. The problem was remedied by using metal enhanced models. Later, the same indication was found for KPD\,1930$+$2752 (Geier et al. \cite{geier}) and AA\,Dor (M\"uller et al. \cite{mueller}). For this reason we used model atmospheres of ten times solar metallicity. Although the atmospheric parameters did not change much, the He\,{\sc ii} line at $4686\,{\rm \AA}$ was matched well in concert with the He\,{\sc i} and hydrogen Balmer lines.
Only in the case of JL\,82 did we have to rely on FEROS spectra. Since the parameters derived from
these spectra ($T_{\rm eff}=25\,000\,{\rm K}$, $\log{g}=5.20$) turned out to
be very similar to the ones derived from the FEROS spectra of TON\,S\,183
($T_{\rm eff}=26\,000\,{\rm K}$, $\log{g}=5.00$), the systematic shifts
($\Delta T_{\rm eff}=+1500\,{\rm K}$, $\Delta \log{g}=+0.2$) should be
similar as well. The parameters of JL\,82 have therefore been corrected for
these shifts.
Results are summarised in Table~\ref{tab:atm} and plotted in
Fig.~\ref{fig:tefflogg}, where they are compared to canonical models for the EHB band.
The programme stars populate the EHB band between the zero-age
(ZAEHB) and the terminal-age EHB (TAEHB). Most of the hottest stars
($>33\,000\,{\rm K}$) are located above the TAEHB and probably have evolved off the
EHB already.
\begin{table*}[t!]
\caption{Atmospheric and orbital parameters}\label{tab:atm}
\label{orbit}
\begin{center}
\begin{tabular}{lllllll}
\hline
\noalign{\smallskip}
System & $T_{\rm eff}$ & $\log{g}$ & $P$ & $K$ & $\gamma$ & References\\
& [K] & & [d] & [${\rm km\,s^{-1}}$] & [${\rm km\,s^{-1}}$] & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
PG\,1017$-$086 & 30300 $\pm$ 500 & 5.61 $\pm$ 0.10 & 0.0729938 $\pm$ 0.0000003 & 51.0 $\pm$ 1.7 & -9.1 $\pm$ 1.3 & 14\\
KPD\,1930$+$2752 & 35200 $\pm$ 500 & 5.61 $\pm$ 0.06 & 0.0950933 $\pm$ 0.0000015 & 341.0 $\pm$ 1.0 & 5.0 $\pm$ 1.0 & 7\\
HS\,0705$+$6700 & 28800 $\pm$ 900 & 5.40 $\pm$ 0.10 & 0.09564665 $\pm$ 0.00000039 & 85.8 $\pm$ 3.7 & -36.4 $\pm$ 2.9 & 2\\
PG\,1336$-$018 & 32800 $\pm$ 500 & 5.76 $\pm$ 0.05 & 0.101015999 $\pm$ 0.00000001 & 78.7 $\pm$ 0.6 & -25 & 1,23\\
HW\,Vir & 28500 $\pm$ 500 & 5.63 $\pm$ 0.05 & 0.115 $\pm$ 0.0008 & 84.6 $\pm$ 1.1 & -13.0 $\pm$ 0.8 & 24,3\\
PG\,1043+760 & 27600 $\pm$ 800 & 5.39 $\pm$ 0.10 & 0.1201506 $\pm$ 0.00000003 & 63.6 $\pm$ 1.4 & 24.8 $\pm$ 1.4 & 13,15\\
BPS\,CS\,22169$-$0001\dag & 39300 $\pm$ 500 & 5.60 $\pm$ 0.05 & 0.1780 $\pm$ 0.00003 & 14.9 $\pm$ 0.4 & 2.8 $\pm$ 0.3 & 25,5\\
PG\,1432$+$159 & 26900 $\pm$ 1000 & 5.75 $\pm$ 0.15 & 0.22489 $\pm$ 0.00032 & 120.0 $\pm$ 1.4 & -16.0 $\pm$ 1.1 & 21,16\\
PG\,2345$+$318 & 27500 $\pm$ 1000 & 5.70 $\pm$ 0.15 & 0.2409458 $\pm$ 0.000008 & 141.2 $\pm$ 1.1 & -10.6 $\pm$ 1.4 & 22,16\\
PG\,1329$+$159 & 29100 $\pm$ 900 & 5.62 $\pm$ 0.10 & 0.249699 $\pm$ 0.0000002 & 40.2 $\pm$ 1.1 & -22.0 $\pm$ 1.2 & 13,15\\
HE\,0532$-$4503 & 25400 $\pm$ 500 & 5.32 $\pm$ 0.05 & 0.2656 $\pm$ 0.0001 & 101.5 $\pm$ 0.2 & 8.5 $\pm$ 0.1 & 10,19\\
CPD\,$-$64\,481 & 27500 $\pm$ 500 & 5.60 $\pm$ 0.05 & 0.2772 $\pm$ 0.0005 & 23.8 $\pm$ 0.4 & 94.1 $\pm$ 0.3 & 19,5\\
PG\,1101$+$249 & 29700 $\pm$ 500 & 5.90 $\pm$ 0.07 & 0.35386 $\pm$ 0.00006 & 134.6 $\pm$ 1.3 & -0.8 $\pm$ 0.9 & 4,16\\
PG\,1232$-$136 & 26900 $\pm$ 500 & 5.71 $\pm$ 0.05 & 0.3630 $\pm$ 0.0003 & 129.6 $\pm$ 0.04 & 4.1 $\pm$ 0.3 & 25,5\\
Feige\,48 & 29500 $\pm$ 500 & 5.54 $\pm$ 0.05 & 0.376 $\pm$ 0.003 & 28.0 $\pm$ 0.2 & -47.9 $\pm$ 0.1 & 19,20\\
GD\,687 & 24300 $\pm$ 500 & 5.32 $\pm$ 0.07 & 0.37765 $\pm$ 0.00002 & 118.3 $\pm$ 3.4 & 32.3 $\pm$ 3.0 & 11,9\\
KPD\,1946$+$4340 & 34200 $\pm$ 500 & 5.43 $\pm$ 0.10 & 0.403739 $\pm$ 0.0000008 & 167.0 $\pm$ 2.4 & -5.5 $\pm$ 1.0 & 25,15\\
HE\,0929$-$0424 & 29500 $\pm$ 500 & 5.71 $\pm$ 0.05 & 0.4400 $\pm$ 0.0002 & 114.3 $\pm$ 1.4 & 41.4 $\pm$ 1.0 & 10,18\\
HE\,0230$-$4323 & 31100 $\pm$ 500 & 5.60 $\pm$ 0.07 & 0.45152 $\pm$ 0.00002 & 62.4 $\pm$ 1.6 & 16.6 $\pm$ 1.0 & 11,5\\
PG\,1743$+$477 & 27600 $\pm$ 800 & 5.57 $\pm$ 0.10 & 0.515561 $\pm$ 0.0000001 & 121.4 $\pm$ 1.0 & -65.8 $\pm$ 0.8 & 15\\
PG\,0001$+$275 & 25400 $\pm$ 500 & 5.30 $\pm$ 0.10 & 0.529842 $\pm$ 0.0000005 & 92.8 $\pm$ 0.7 & -44.7 $\pm$ 0.5 & 25,5\\
PG\,0101$+$039 & 27500 $\pm$ 500 & 5.53 $\pm$ 0.07 & 0.569899 $\pm$ 0.000001 & 104.7 $\pm$ 0.4 & 7.3 $\pm$ 0.2 & 8\\
PG\,1248$+$164 & 26600 $\pm$ 800 & 5.68 $\pm$ 0.10 & 0.73232 $\pm$ 0.000002 & 61.8 $\pm$ 1.1 & 16.2 $\pm$ 1.3 & 13,15\\
JL\,82 & 26500 $\pm$ 500 & 5.22 $\pm$ 0.10 & 0.73710 $\pm$ 0.00005 & 34.6 $\pm$ 1.0 & -1.6 $\pm$ 0.8 & 25,5\\
TON\,S\,183 & 27600 $\pm$ 500 & 5.43 $\pm$ 0.05 & 0.8277 $\pm$ 0.0002 & 84.8 $\pm$ 1.0 & 50.5 $\pm$ 0.8 & 25,5\\
PG\,1627$+$017 & 23500 $\pm$ 500 & 5.40 $\pm$ 0.10 & 0.8292056 $\pm$ 0.0000014 & 70.10 $\pm$ 0.13 & -54.16 $\pm$ 0.27 & 25,6\\
PG\,1116+301 & 32500 $\pm$ 1000 & 5.85 $\pm$ 0.10 & 0.85621 $\pm$ 0.000003 & 88.5 $\pm$ 2.1 & -0.2 $\pm$ 1.1 & 13,15\\
HE\,2135$-$3749 & 30000 $\pm$ 500 & 5.84 $\pm$ 0.05 & 0.9240 $\pm$ 0.0003 & 90.5 $\pm$ 0.6 & 45.0 $\pm$ 0.5 & 10,18\\
HE\,1421$-$1206 & 29600 $\pm$ 500 & 5.55 $\pm$ 0.07 & 1.188 $\pm$ 0.001 & 55.5 $\pm$ 2.0 & -86.2 $\pm$ 1.1 & 11,18\\
HE\,1047$-$0436 & 30200 $\pm$ 500 & 5.66 $\pm$ 0.05 & 1.21325 $\pm$ 0.00001 & 94.0 $\pm$ 3.0 & 25 $\pm$ 3.0 & 17\\
PG\,0133$+$114 & 29600 $\pm$ 900 & 5.66 $\pm$ 0.10 & 1.23787 $\pm$ 0.000003 & 82.0 $\pm$ 0.3 & -0.3 $\pm$ 0.2 & 15,5\\
PG\,1512$+$244 & 29900 $\pm$ 900 & 5.74 $\pm$ 0.10 & 1.26978 $\pm$ 0.000002 & 92.7 $\pm$ 1.5 & -2.9 $\pm$ 1.0 & 13,15\\
$[$CW83$]$\,1735$+$22 & 38000 $\pm$ 500 & 5.54 $\pm$ 0.05 & 1.278 $\pm$ 0.001 & 103.0 $\pm$ 1.5 & 20.6 $\pm$ 0.4 & 25,5\\
HE\,2150$-$0238 & 30200 $\pm$ 500 & 5.83 $\pm$ 0.05 & 1.321 $\pm$ 0.005 & 96.3 $\pm$ 1.4 & -32.5 $\pm$ 0.9 & 11,18\\
HD\,171858 & 27200 $\pm$ 800 & 5.30 $\pm$ 0.10 & 1.63280 $\pm$ 0.000005 & 87.8 $\pm$ 0.2 & 62.5 $\pm$ 0.1 & 25,5\\
PG\,1716$+$426 & 27400 $\pm$ 800 & 5.47 $\pm$ 0.10 & 1.77732 $\pm$ 0.000005 & 70.8 $\pm$ 1.0 & -3.9 $\pm$ 0.8 & 13,15\\
PB\,7352 & 25000 $\pm$ 500 & 5.35 $\pm$ 0.10 & 3.62166 $\pm$ 0.000005 & 60.8 $\pm$ 0.3 & -2.1 $\pm$ 0.3 & 25,5\\
CD\,$-$24\,731 & 35400 $\pm$ 500 & 5.90 $\pm$ 0.05 & 5.85 $\pm$ 0.003 & 63 $\pm$ 3 & 20 $\pm$ 5 & 19,5\\
HE\,1448$-$0510 & 34700 $\pm$ 500 & 5.59 $\pm$ 0.05 & 7.159 $\pm$ 0.005 & 53.7 $\pm$ 1.1 & -45.5 $\pm$ 0.8 & 10,18\\
PHL\,861 & 30000 $\pm$ 500 & 5.50 $\pm$ 0.05 & 7.44 $\pm$ 0.015 & 47.9 $\pm$ 0.4 & -26.5 $\pm$ 0.4 & 10,18\\
\noalign{\smallskip}
\hline
\end{tabular}
\tablefoot{In the last column references for the atmospheric parameters effective temperature $T_{\rm eff}$ and surface gravity $\log{g}$ (first number) and the orbital parameters period $P$, radial velocity semi-amplitude $K$ and system velocity $\gamma$ (second number) are given separately. If both parameter sets are taken from one source, only one reference number is given. References: $^{1}$Charpinet et al. (\cite{charpinet5}), $^{2}$Drechsel et al. (\cite{drechsel}), $^{3}$Edelmann (\cite{edelmann3}), $^{4}$ Edelmann et al.
(\cite{edelmann2}), $^{5}$Edelmann et al. (\cite{edelmann}), $^{6}$For et al. (\cite{for2}), $^{7}$Geier et al. (\cite{geier}), $^{8}$Geier et al. (\cite{geier2}), $^{9}$Geier et al. (submitted), $^{10}$Karl et al. (\cite{karl3}), $^{11}$Lisker et al. (\cite{lisker}), $^{12}$Maxted et al. (\cite{maxted4}), $^{13}$Maxted et al. (\cite{maxted2}), $^{14}$Maxted et al. (\cite{maxted3}),
$^{15}$Morales-Rueda et al. (\cite{morales}), $^{16}$Moran et al. (\cite{moran}), $^{17}$Napiwotzki et al. (\cite{napiwotzki3}), $^{18}$Napiwotzki et al. (in prep.) preliminary results are given in Karl et al. (\cite{karl3}), $^{19}$O'Toole \& Heber (\cite{otoole2}), $^{20}$O'Toole et al. (\cite{otoole3}), $^{21}$Saffer et al. (\cite{saffer}), $^{22}$Saffer et al. (\cite{saffer2}), $^{23}$Vu\v ckovi\'c et al. (\cite{vuckovic2}), $^{24}$Wood \& Saffer (\cite{wood2}) and this work$^{25}$. \dag The significance of the orbital solution given by Edelmann et al. (\cite{edelmann}) is rather low, but the possible aliases all lie around $0.2\,{\rm d}$.}
\end{center}
\end{table*}
\section{Projected rotational velocities \label{sec:rot}}
With the gravity at hand, we can derive masses once the projected rotational velocities
have been measured. This is not an easy task because the sdB stars are known to
be slow rotators. Hence, the broad Balmer and helium lines are ill-suited.
Sharp metal lines are most sensitive to rotational broadening, in particular at low velocities, while they tend to be smeared out for fast rotators.
In order to reach the best accuracy it is necessary to make use
of as many weak metal lines as possible.
\subsection{Projected rotational velocities from metal lines
\label{sec:rotlow}}
In order to derive $v_{\rm rot}\,\sin{i}$, we compared the observed spectra
with rotationally broadened, synthetic line profiles using a semi-automatic
analysis pipeline. The profiles were computed for the stellar parameters given
in Table~\ref{orbit} using the LINFOR program (developed by Holweger, Steffen
and Steenbock at Kiel university, modified by Lemke \cite{lemke}).
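The fitting procedure can be illustrated with a simplified stand-in for the LINFOR profiles: a synthetic Gaussian metal line is convolved with the classical linear limb-darkened rotational broadening kernel (e.g. Gray 2005) and $v_{\rm rot}\sin{i}$ is recovered by a $\chi^{2}$ grid search. Line shape, limb-darkening coefficient and noise level are illustrative assumptions:

```python
# Sketch of the v sin(i) measurement: convolve a synthetic metal line with
# the linear limb-darkened rotational profile and fit the broadening by a
# chi-square grid search. A single Gaussian "line" stands in for the
# LINFOR model profiles; all parameters are illustrative.
import numpy as np

C_KMS = 299792.458

def rot_kernel(wl, lam0, vsini, eps=0.3):
    """Rotational broadening profile on wavelength offsets wl [Angstroem]."""
    dl_max = lam0 * vsini / C_KMS
    x = wl / dl_max
    k = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    k[inside] = (2.0 * (1.0 - eps) * np.sqrt(1.0 - x[inside]**2)
                 + 0.5 * np.pi * eps * (1.0 - x[inside]**2))
    return k / k.sum()

lam0, dl = 4500.0, 0.01                             # wavelength grid [A]
wl = np.arange(-3.0, 3.0 + dl, dl)
line = 1.0 - 0.5 * np.exp(-0.5 * (wl / 0.05)**2)    # intrinsic + instrumental

true_vsini = 10.0                                   # km/s, to be recovered
obs = np.convolve(line - 1.0, rot_kernel(wl, lam0, true_vsini), "same") + 1.0
rng = np.random.default_rng(1)
obs += rng.normal(0.0, 0.005, obs.size)             # photon noise, S/N ~ 200

grid = np.arange(2.0, 20.0, 0.25)
chi2 = [np.sum((np.convolve(line - 1.0, rot_kernel(wl, lam0, v), "same")
                + 1.0 - obs)**2) for v in grid]
best = grid[int(np.argmin(chi2))]
print(f"best-fit v sin i = {best:.2f} km/s")
```

In the actual analysis this fit is repeated for every suitable metal line and the individual results are averaged.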
\begin{figure*}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg3.eps}}
\caption{{\bf Left hand panels:} Numerical simulations.
$v_{\rm rot}\sin{i}$ values derived from individual lines are
plotted against the wavelength. Standard sdB model spectra with
noise, instrumental and rotational broadening were used for the
calculations. Case A (upper left panel):
$v_{\rm rot}\sin{i} = 10.0\,{\rm km\,s^{-1}}$ and S/N$=100$.
The result $9.2\pm0.9\,{\rm km\,s^{-1}}$ is consistent with the true value
within the error margin. The
distribution of individual $v_{\rm rot}\sin{i}$-measurements is shown in the
upper right panel. Case B (lower left panel):
$v_{\rm rot}\sin{i} = 7.0\,{\rm km\,s^{-1}}$ and S/N$=20$. Note that many
lines indicate zero velocity (empty squares). The dashed line corresponds to
the average including the zero values of $3.5\,{\rm km\,s^{-1}}$, which is
systematically lower than the true value. The zero values have to be rejected
in order to obtain the result (solid line): $7.2\pm1.0\,{\rm km\,s^{-1}}$ which
is
consistent with the true value within the error margin. {\bf Right hand
panels:} Distribution of individual $v_{\rm rot}\sin{i}$-measurements. The
shaded bin to the left marks the zero values which have to be rejected.}
\label{normalsdb}
\end{center}
\end{figure*}
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg4.ps}}
\caption[Rotational broadening fit result for HE\,1047$-$0436.]{Rotational
broadening fit result for HE\,1047$-$0436. The measured $v_{\rm rot}\sin{i}$
is plotted against the wavelength of the analysed lines. The solid line
corresponds to the average. The inset shows an example fit of a line doublet.
The thick solid line is the best fit $v_{\rm rot}\sin{i}$. The three thin
lines correspond to fixed rotational broadenings of $0$, $5$ and $10\,{\rm km\,s^{-1}}$.}
\label{he1047}
\end{center}
\end{figure}
For a standard set of up to 187 unblended metal lines from 24 different ions and with
wavelengths ranging from $3700$ to $6000\,{\rm \AA}$ a model grid with
appropriate atmospheric parameters and different elemental abundances was
automatically generated with LINFOR. The actual number of lines used as input
for an individual star depended on the wavelength coverage. Owing to the
insufficient quality of the spectra and the pollution with telluric features,
the regions blueward of $3700\,{\rm \AA}$ and redward of
$6000\,{\rm \AA}$ were excluded from our analysis. A simultaneous fit of
elemental abundance, projected rotational velocity and radial velocity was
then performed separately for every identified line using the FITSB2
routine (Napiwotzki et al. \cite{napiwotzki6}). A more detailed description
of the line selection and abundance determination will be published in
Paper III of this series (Geier et al. in prep.).
Ill-suited lines were rejected according to several criteria. First, the
fitted radial velocity had to be low, because all spectra had been corrected
to zero RV beforehand; features with high RVs ($>15\,{\rm km\,s^{-1}}$)
were considered misidentifications or noise features. Second, the fit quality
given by the $\chi^2$ had to be comparable to the average; lines with
$\chi^2$-values more than $50\%$ above the average were excluded. A spectral
line was also rejected if the elemental abundance was lower or higher than
the model grid allowed. Finally, equivalent width and depth of each line were
measured and compared to the noise to distinguish real lines from noise features.
Mean value and statistical error were calculated from all measurements
(see Figs. \ref{he1047}$-$\ref{pg1232}). The set of usable lines differs
from star to star due to the different atmospheric parameters and chemical
compositions. In some cases the line list had to be modified and lines were
included or excluded after visual inspection. All outputs of the pipeline
have been checked by visual inspection.
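For illustration, the rejection criteria above can be condensed into a small filter function (a hypothetical sketch; the field names and thresholds follow the description in the text, not the actual FITSB2 pipeline code):

```python
# Hypothetical sketch of the line-rejection criteria described in the text;
# this is NOT the actual analysis pipeline.

def select_lines(lines, rv_limit=15.0, chi2_excess=0.5):
    """Keep only lines passing the rejection criteria.

    Each line is a dict with keys: 'rv' (km/s), 'chi2',
    'abundance_in_grid' (bool), 'depth', 'noise'.
    """
    mean_chi2 = sum(l['chi2'] for l in lines) / len(lines)
    kept = []
    for l in lines:
        if abs(l['rv']) > rv_limit:                      # misidentification or noise
            continue
        if l['chi2'] > (1.0 + chi2_excess) * mean_chi2:  # fit quality worse than average
            continue
        if not l['abundance_in_grid']:                   # abundance outside model grid
            continue
        if l['depth'] < l['noise']:                      # indistinguishable from noise
            continue
        kept.append(l)
    return kept
```

The surviving lines would then be averaged to give the final $v_{\rm rot}\sin{i}$ and its statistical error.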
Behr (\cite{behr}) used a similar method to measure the low
$v_{\rm rot}\sin{i}$ of blue horizontal branch stars from high resolution spectra.
The errors given in that work are of the same order as the ones given here.
\subsection{Systematic errors in the determination of the projected rotational
velocity from metal lines \label{sec:rotsystem}}
Since the velocities measured from the metal lines are low, a thorough
analysis of the errors is crucial. To quantify them, we carried out numerical
simulations. Synthetic spectra with fixed rotational broadening were computed
and convolved with the instrumental profile. The standard list of metal lines
and average sdB parameters ($T_{\rm eff}=30\,000\,{\rm K}$, $\log{g}=5.50$)
were adopted. Random noise was added to mimic the observed spectra. The
rotational broadening was measured in the way described above using a grid of
synthetic spectra for various rotational broadenings and noise levels.
As the resolution is seeing-dependent for a subset of the spectra, we also
varied the instrumental profile.
Variations in the instrumental profile changed the measured
$v_{\rm rot}\sin{i}$ by up to $1.0\,{\rm km\,s^{-1}}$ for low S/N and poor
seeing, and by about $0.5\,{\rm km\,s^{-1}}$ for high S/N and good seeing.
The noise level caused errors ranging from $2-6\,{\rm km\,s^{-1}}$ per line,
depending on the S/N. Accounting for the number of lines used, the error of
the average is typically of the order of $0.5-3.0\,{\rm km\,s^{-1}}$. A
variation of the atmospheric parameters within the derived error limits gives
an error of only $0.2\,{\rm km\,s^{-1}}$ and is therefore negligible.
We used a standard limb darkening law for the rotational broadening
independent of wavelength. Berger et al. (\cite{berger}) estimated
the influence of applying a wavelength dependent limb darkening law on the
measurements of projected rotational velocities in DAZ white dwarf spectra.
In the case of the Ca\,{\sc ii} K lines they used, a small difference in the line
cores was found. Nevertheless, the systematic deviation in $v_{\rm rot}\sin{i}$
was smaller than $1\,{\rm km\,s^{-1}}$. Because systematic errors caused by this
effect would lead to higher real projected rotational velocities than
measured, the influence of a wavelength dependent limb darkening law on our
results was tested as well. We found the effect to be even lower, because the
analysed metal lines are much weaker than the Ca\,{\sc ii} K lines used by
Berger et al. (\cite{berger}) and the effect becomes more significant for
stronger lines. A limb darkening law independent of wavelength is therefore
appropriate for our analysis.
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg5.ps}}
\caption{Rotational broadening fit result for PG\,1232$-$136
(see Fig. \ref{he1047}). Despite the high quality of the data no
significant $v_{\rm rot}\sin{i}$ could be measured and only an upper
limit could be derived.}
\label{pg1232}
\end{center}
\end{figure}
\subsubsection{Individual line fits}
Our numerical experiments included typical numbers of spectral lines ($20-50$),
as were used in the analysis, spread over the entire wavelength range
available ($\simeq3700-6000\,{\rm \AA}$, depending on the instruments used).
Fig.~\ref{normalsdb} shows the results of two numerical simulations. The top
panel displays the result for $v_{\rm rot}\sin{i}=10\,{\rm km\,s^{-1}}$
well above the detection limit and high S/N$=100$. The fitted
$v_{\rm rot}\sin{i}$ values for individual lines show small dispersion.
The bottom panel of Fig.~\ref{normalsdb} shows the result for
$v_{\rm rot}\sin{i}=7\,{\rm km\,s^{-1}}$, which is closer to the detection
limit, and low S/N$=20$. Due to the lower S/N individual lines scatter more
strongly around the mean. Since negative values of $v_{\rm rot}\sin{i}$ are
not possible, the distribution of the measurements is expected to be a
truncated Gaussian. As can be seen in the lower right hand panel, the
distribution does not look Gaussian, but rather bimodal with many zero
measurements. This arises because the truncation of
the Gaussian occurs at the detection limit rather than at
$v_{\rm rot}\sin{i}=0\,{\rm km\,s^{-1}}$. This detection limit is different
for each star. It is caused by the thermal broadening of the lines, which
scales with $\sqrt{T_{\rm eff}/A}$, $A$ being the atomic weight. The mix of
spectral lines used ranges from C ($A=12$) to Fe ($A=56$). The hotter the
star, the poorer the result as the number of lines decreases with
$T_{\rm eff}$ while the detection limit increases. Other important parameters
affecting the detection limit are spectral resolution and the S/N level of
the spectra.
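The $\sqrt{T_{\rm eff}/A}$ scaling can be made concrete with a back-of-the-envelope estimate (not part of the published analysis): at $T_{\rm eff}=30\,000\,{\rm K}$ the most probable thermal speed $\sqrt{2kT/Am_{\rm u}}$ is about $6.4\,{\rm km\,s^{-1}}$ for carbon and $3.0\,{\rm km\,s^{-1}}$ for iron, i.e. comparable to the rotational detection limit quoted below.

```python
import math

# Back-of-the-envelope thermal broadening estimate, illustrating the
# sqrt(T_eff/A) scaling quoted in the text (not part of the pipeline).
K_B = 1.380649e-23    # Boltzmann constant [J/K]
M_U = 1.66053907e-27  # atomic mass unit [kg]

def thermal_speed(t_eff, a):
    """Most probable thermal speed sqrt(2kT/(A m_u)) in km/s."""
    return math.sqrt(2.0 * K_B * t_eff / (a * M_U)) / 1e3

v_c  = thermal_speed(30000.0, 12.0)  # carbon,  A = 12  -> ~6.4 km/s
v_fe = thermal_speed(30000.0, 56.0)  # iron,    A = 56  -> ~3.0 km/s
```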
Including the zero values of the bimodal distribution in the
calculation of the mean would therefore shift $v_{\rm rot}\sin{i}$ systematically to
lower values (see Fig.~\ref{normalsdb}, lower left panel). For this reason all
zero values were excluded, and the artificial rotational broadening could be
measured properly. As the lower limit for this method we derived about
$v_{\rm rot}\sin{i}\simeq5.0-8.0\,{\rm km\,s^{-1}}$, depending on the resolution of the instrument.
If more than two thirds of the lines yielded zero, this value was adopted as an upper limit
for $v_{\rm rot}\sin{i}$.
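The downward bias caused by including the zero measurements is easy to reproduce with a toy Monte-Carlo simulation (a schematic illustration with assumed numbers, not the simulation actually used in this work):

```python
import random

# Toy Monte-Carlo illustrating the truncation bias discussed above.
# Assumed numbers: true v sin i = 7 km/s, per-line scatter 3 km/s,
# detection limit 5 km/s (lines below it are measured as zero).
random.seed(1)
TRUE_VSINI, SIGMA, LIMIT, N_LINES = 7.0, 3.0, 5.0, 5000

measurements = []
for _ in range(N_LINES):
    v = random.gauss(TRUE_VSINI, SIGMA)
    measurements.append(v if v > LIMIT else 0.0)  # truncated at the limit

nonzero = [v for v in measurements if v > 0.0]
mean_all = sum(measurements) / len(measurements)   # systematically low
mean_nonzero = sum(nonzero) / len(nonzero)         # downward bias removed
```

In this crude toy the mean including zeros falls well below the true value, while excluding them removes the downward shift (at the cost of a mild upward bias, since the sharp cut at the limit is only an approximation of the real fitting behaviour).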
As can be seen in the upper panel of Fig.~\ref{normalsdb} the measured mean
value slightly deviates from the true rotational broadening by
$0.8\,{\rm km\,s^{-1}}$. Although this deviation is still within the error bars,
it turned out that such shifts of up to $1\,{\rm km\,s^{-1}}$ can be caused by
systematic effects. The most likely explanation is that for every individual
line not only the rotational broadening, but also the elemental abundance is
fitted. This should affect the $v_{\rm rot}\sin{i}$-distribution and cause a
deviation from the ideal case of random distribution around the mean. Instead
of changing the rotational broadening a slightly different elemental abundance
may lead to a similar $\chi^{2}$-value. Due to this systematic effect a
minimum $v_{\rm rot}\sin{i}$-error of $1.0\,{\rm km\,s^{-1}}$ is adopted even if
the statistical error is lower.
Our analysis revealed that the restriction to just a few metal lines in a
small wavelength range can lead to even higher systematic deviations and that
it is better to use as many lines as possible scattered over an extended wavelength range
to measure projected rotational velocities.
There is also an upper limit: with increasing $v_{\rm rot}\sin{i}$ the lines
become broader and shallower and eventually can no longer be detected in
spectra with S/N typical of our sample. As soon as $v_{\rm rot}\sin{i}$
exceeds about $25\,{\rm km\,s^{-1}}$, almost no metal lines can be used unless
the S/N is much higher than the average of our sample. To measure higher
projected rotational velocities the Balmer and helium lines must be used as
described in Sect.~\ref{sec:rothigh}.
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg6.ps}}
\caption{Rotational broadening fit result for HE\,0532$-$4503 (see
Fig. \ref{he1047}).}
\label{he0532}
\end{center}
\end{figure}
\begin{figure}[]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg7.eps}}
\caption{Selected helium lines of KPD\,1946$+$4340 are plotted against the shift relative to rest wavelengths. The spectrum (histogram) is overplotted with the best fitting rotationally broadened model (strong line). A model without rotational broadening (weak line) is overplotted for comparison.}
\label{fitkpd1946}
\end{center}
\end{figure}
\subsubsection{Fitting several lines simultaneously}
The FITSB2 routine also allows many lines to be fitted simultaneously and
offers different methods of calculating the fitting error (e.g. bootstrapping).
In principle it is possible to measure the rotational broadening from all
lines simultaneously and derive the error in this way, but in practice this
approach is problematic: fitting up to 25 parameters (24 abundances and
$v_{\rm rot}\sin{i}$) to more than 50 lines simultaneously and deriving the
error with a bootstrapping algorithm requires substantial computing power. In
test calculations we simultaneously fitted up to nine lines of a synthetic
spectrum to which noise, rotational and instrumental broadening had been
added. The bootstrap error was consistent with the error derived with the
method described above. Moreover, our error estimate turned out to be slightly
higher, which renders our approach more conservative. In the case of very
low $v_{\rm rot}\sin{i}$ only some lines remain sensitive to changes in line
shape due to rotational broadening. The lower limit that can be reached with
the simultaneous approach is therefore higher than what can be detected with
the single line approach.
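The bootstrap error estimate mentioned above can be sketched generically as resampling the per-line measurements (an illustrative textbook version; FITSB2's internal implementation may differ):

```python
import random
import statistics

# Generic bootstrap over per-line v sin i measurements (illustrative only;
# not the FITSB2 implementation).
def bootstrap_error(vsini_per_line, n_boot=2000, seed=42):
    """Standard deviation of bootstrap means, used as the error estimate."""
    rng = random.Random(seed)
    n = len(vsini_per_line)
    means = []
    for _ in range(n_boot):
        resample = [rng.choice(vsini_per_line) for _ in range(n)]
        means.append(sum(resample) / n)
    return statistics.stdev(means)

# Example: 20 hypothetical line measurements scattered around 7 km/s
rng = random.Random(0)
lines = [rng.gauss(7.0, 3.0) for _ in range(20)]
err = bootstrap_error(lines)
```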
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg8.eps}}
\caption{Shaded histogram showing the distribution of the measured $v_{\rm rot}\sin{i}$ of 51 RV variable sdBs. The blank histogram marks the expected uniform distribution of
$v_{\rm rot}\sin{i}$, if the rotational velocity were the same
for all stars ($v_{\rm rot}=8.3\,{\rm km\,s^{-1}}$) and the rotation axes
were randomly oriented. The solid vertical line at
$v_{\rm rot}\sin{i}\simeq5.0\,{\rm km\,s^{-1}}$ marks the detection
limit. All sdBs with lower $v_{\rm rot}\sin{i}$ are stacked into the
first bin (dotted histogram). All sdBs with $v_{\rm rot}\sin{i}$
higher than $24\,{\rm km\,s^{-1}}$ are summed up in the last bin.}
\label{vrotdistrib_RV}
\end{center}
\end{figure}
\begin{figure}[]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg9.eps}}
\caption{The measured $v_{\rm rot}\sin{i}$ of 40 RV variable sdBs
plotted against the orbital period of the binaries
(Tables~\ref{tab:vrotRV}, \ref{orbit}). For seven stars, marked as
open inverted triangles, only upper limits were derived.
The solid diamond marks $[$CW83$]$\,1735$+$22 that rotates faster than synchronised (see Sect.~\ref{sec:hd188}
for a detailed discussion). PG\,2345$+$318 rotates slower than synchronised and is marked with a filled triangle (see Sect.~\ref{sec:age} for a discussion).}
\label{PvrotI}
\end{center}
\end{figure}
\subsubsection{Orbital smearing}
In the case of binary systems with very short orbital periods
($0.1-0.2\,{\rm d}$) and high RV amplitudes, the variable Doppler shift of the
spectral lines during the exposure can lead to a smearing effect, which can
be misinterpreted as rotational broadening unless the S/N of the spectra is
very high. Orbital smearing is clearly visible in most FEROS spectra of
PG\,1232$-$136, which has an orbital period of $0.36\,{\rm d}$ and an
RV-semiamplitude of $130\,{\rm km\,s^{-1}}$ (Edelmann et al. \cite{edelmann}).
The exposure times of these spectra ranged from $6$ to $30$ minutes.
Choosing one single FEROS spectrum with sharp lines obtained at the orbital
phase when smearing should be minimal, we derived
$v_{\rm rot}\sin{i}=6.2\pm0.8\,{\rm km\,s^{-1}}$ (Geier et al. \cite{geier3}).
Due to the importance of this object for our conclusions we obtained another
60 spectra of PG\,1232$-$136 with UVES at higher resolution ($R=80\,000$). The exposure
time of each spectrum was only $2$ minutes. After co-adding all these spectra
we constrained $v_{\rm rot}\sin{i}<5.0\,{\rm km\,s^{-1}}$
(see Fig.~\ref{pg1232}). Although the difference between these two results
appears to be not very large, it nevertheless illustrates the influence of
orbital smearing.
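The size of the effect is easy to estimate: for a circular orbit the RV drift during an exposure of length $t_{\rm exp}$ is at most $2K\sin(\pi t_{\rm exp}/P)$, reached near the RV zero-crossing. Evaluating this for the PG\,1232$-$136 parameters quoted above (an order-of-magnitude check only, not part of the published analysis):

```python
import math

# Maximum RV drift during an exposure for a circular orbit:
# delta_v_max = 2 K sin(pi * t_exp / P), reached around the RV zero-crossing.
def max_smearing(k_kms, period_d, t_exp_min):
    """Maximum orbital smearing in km/s."""
    t_exp_d = t_exp_min / (24.0 * 60.0)
    return 2.0 * k_kms * math.sin(math.pi * t_exp_d / period_d)

# PG 1232-136: K = 130 km/s, P = 0.36 d
smear_feros = max_smearing(130.0, 0.36, 30.0)  # 30-min exposure, ~47 km/s
smear_uves  = max_smearing(130.0, 0.36, 2.0)   # 2-min exposure,  ~3 km/s
```

At the worst orbital phase a 30-minute exposure can thus smear the lines by several tens of ${\rm km\,s^{-1}}$, whereas the short UVES exposures keep the effect well below the detection limit.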
In the case of the short period ($<0.1\,{\rm d}$) eclipsing sdB+M binary
HS\,0705$+$6700 with an RV-semiamplitude of $86\,{\rm kms^{-1}}$ the effect
is much stronger. While Drechsel et al. (\cite{drechsel}) measure
$v_{\rm rot}\sin{i}=110\pm14\,{\rm kms^{-1}}$ from medium resolution spectra
with short exposure times ($10-15\,{\rm min}$), we measure
$v_{\rm rot}\sin{i}=158\pm12\,{\rm kms^{-1}}$ from a high resolution
spectrum taken with HET/HRS and an exposure time of $30\,{\rm min}$. From the
high resolution data we can only constrain an upper limit of
$v_{\rm rot}\sin{i}<170\,{\rm kms^{-1}}$.
Two other stars of our sample (PG\,1336$-$018 and PG\,1043$+$760) may also
be affected by orbital smearing, if the spectra we used were obtained during
unfavourable orbital phases. Only upper limits can be given for their
$v_{\rm rot}\sin{i}$.
\subsubsection{Other systematic errors and their impact on the companion mass determination}
Other possible sources of systematic errors are broadening through
microturbulence or unresolved pulsations.
No significant microturbulence could be measured, which is consistent with the
analysis of Edelmann et al. (\cite{edelmann4}). Our sample contains six long-period
pulsating sdBs of V\,1093\,Her type
(Green et al. \cite{green2}) and four short period pulsators of
V\,361\,Hya type (Kilkenny et al. \cite{kilkenny}). It has been shown by Telting et al.
(\cite{telting}) that unresolved high amplitude pulsations with short periods
can significantly contribute to or even dominate the line broadening. This is
not a problem for our sample stars, because the pulsation periods of the
V\,1093\,Her stars are long compared to our exposure times and the amplitudes
are low. No significant pulsational broadening is expected for the
short period pulsators Feige\,48 and HE\,0230$-$4323 either, because the amplitudes of the
pulsations are low (Reed et al. \cite{reed2}; Charpinet et al. \cite{charpinet2}; Kilkenny et al. \cite{kilkenny2}).
The line broadening of KPD\,1930$+$2752 and PG\,1336$-$018 is totally dominated by their rotation,
because the sdBs are spun up by their close companions (Geier et al.
\cite{geier}; Vu\v ckovi\'c et al. \cite{vuckovic2}).
It has to be pointed out that unresolved pulsations, microturbulence and any
other unconsidered effect would cause extra broadening of the lines.
The true projected rotational velocity would in that case always be lower than
the one we determined; the derived orbital inclination would then also be
lower and the estimated mass of the unseen companion higher (see Sect.~\ref{sec:ana}).
Unaccounted-for systematic effects would therefore lead to higher companion masses.
This fact is important for the interpretation of the results (see Sect.~\ref{sec:masses}).
\subsection{Projected rotational velocities from hydrogen and
helium lines \label{sec:rothigh}}
A few sdBs residing in close binary systems are known to be spun up by
the tidal influence of their companions. The projected rotational velocities
of these stars are as high as $100\,{\rm km\,s^{-1}}$ (e.g. Drechsel et al.
\cite{drechsel}; Geier et al. \cite{geier}).
Rotational broadening irons out the weak metal lines unless the spectra are of
excellent S/N.
However, for higher projected rotational velocities,
Balmer and helium lines remain the only choice
to determine $v_{\rm rot}\sin{i}$. Due to
thermal and pressure broadening Balmer and helium lines are less sensitive to
rotational broadening than metal lines. From our simulations we derive
detection limits of $v_{\rm rot}\sin{i}\simeq15\,{\rm km\,s^{-1}}$ for helium
lines and $v_{\rm rot}\sin{i}\simeq25\,{\rm km\,s^{-1}}$ for the Balmer line
cores given an S/N$\simeq100$. For lower quality data these limits go up
significantly. For many of our spectra the Balmer and helium lines are
insensitive unless $v_{\rm rot}\sin{i}$ exceeds $\simeq 50\,{\rm km\,s^{-1}}$.
To measure the $v_{\rm rot}\sin{i}$ we calculated LTE model spectra with the
appropriate atmospheric parameters (see Table~\ref{orbit}) and
performed a simultaneous fit of rotational broadening and helium abundance to
all usable Balmer line cores and helium lines using the FITSB2 routine
(Napiwotzki et al. \cite{napiwotzki6}, for an example see Fig.~\ref{fitkpd1946}). All systematic effects discussed in
the previous section except orbital smearing become negligible in this case. The
quoted uncertainties are $1\sigma$-$\chi^{2}$-fit errors.
The helium ionisation problem in hot sdBs (see Sect.~\ref{sec:atmo}) caused by neglected metal opacity can affect the measurement of the rotational broadening if helium lines are used. This became apparent in the analysis of the eclipsing sdOB binary AA\,Dor. While Rauch \& Werner (\cite{rauch}) used metal-free NLTE models and measured $v_{\rm rot}\sin{i}=47\pm5\,{\rm km\,s^{-1}}$ for the He\,{\sc ii} line at $4686\,{\rm \AA}$, Fleig et al. (\cite{fleig}) measured $v_{\rm rot}\sin{i}=35\pm5\,{\rm km\,s^{-1}}$ by fitting metal line blanketed NLTE models to FUSE spectra. Rucinski (\cite{rucinski}) derived $v_{\rm rot}\sin{i}$ by an analysis of line profile variations during the eclipse and reported a mismatch between the Mg\,{\sc ii} line at $4481\,{\rm \AA}$ and the He\,{\sc ii} line at $4686\,{\rm \AA}$. M\"uller et al. (\cite{mueller}) resolved this conundrum and showed that consistent results ($v_{\rm rot}\sin{i}=30\pm1\,{\rm km\,s^{-1}}$) can be achieved if the appropriate (metal enriched) model atmospheres are used (see Sect.~\ref{sec:atmo}).
To account for this effect we used LTE models with ten times solar metallicity rather than metal-free NLTE models to measure the rotational broadening of the Balmer line cores and helium lines in the two hot sdOBs KPD\,1946$+$4340 (see Fig.~\ref{fitkpd1946}) and $[$CW83$]$\,1735$+$22. While in the case of $[$CW83$]$\,1735$+$22 the $v_{\rm rot}\sin{i}$-values derived with the two different model grids were the same, a significant difference was measured for KPD\,1946$+$4340. The $v_{\rm rot}\sin{i}$ derived with the metal-free models was $42\,{\rm km\,s^{-1}}$ compared to $26\,{\rm km\,s^{-1}}$ with metal-enriched models.
Due to the fact that KPD\,1946$+$4340 is eclipsing (Bloemen et al. \cite{bloemen}) it is possible to verify that the $v_{\rm rot}\sin{i}$ measured with metal enriched models is fully consistent with the assumption of synchronised rotation (see Sect.~\ref{sec:empirical}).
\subsection{Results \label{sec:close}}
Projected rotational velocities of 46 close binary subdwarfs have been
measured and supplemented by five measurements taken from the literature (Tables~\ref{tab:vrotRV} and \ref{tab:vrotnosol}). For 40 systems the orbital parameters are known. In general the projected rotational velocities are small. The remaining 11 systems are slow rotators, too; they cannot be analysed further because their mass functions are still unknown.
\begin{table*}[t!]
\caption{Projected rotational velocities for the binary sdB systems from
Table~\ref{tab:atm}.}
\label{tab:vrotRV}
\begin{center}
\begin{tabular}{lllllllll}
\hline
\noalign{\smallskip}
System & $T_{\rm eff}$ & $m_{B}$ & S/N & seeing & $N_{\rm lines}$ & ${v_{\rm rot}\,\sin\,i}$ & Instrument & Reference\\
& [K] & [mag] & & [arcsec] & & [${\rm km\,s^{-1}}$] & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
PG\,1627$+$017$^{l}$ & 23\,500 & 11.3 & 64 & & 11 & $<$7.0 & HRS &\\
GD\,687 & 24\,300 & & & & & 21.2 $\pm$ 2.0 & & Geier et al. \cite{geier4}\\
JL\,82$^{l}$ & 25\,000 & 12.2 & 55 & & 57 & 10.4 $\pm$ 1.0 & FEROS & \\
PB\,7352 & 25\,000 & 12.0 & 61 & & 39 & 7.4 $\pm$ 1.0 & FEROS &\\
HE\,0532$-$4503 & 25\,400 & 16.1 & 83 & 0.8 & 18 & 11.1 $\pm$ 1.0 & UVES & \\
PG\,0001$+$275$^{l}$ & 25\,400 & 12.8 & 129 & & 24 & 12.6 $\pm$ 1.0 & FOCES &\\
PG\,1248$+$164 & 26\,600 & 14.4 & 47 & & 13 & 8.9 $\pm$ 1.3 & HRS &\\
PG\,1232$-$136 & 26\,900 & 13.1 & 167 & & 64 & $<$5.0 & UVES &\\
PG\,1432$+$159 & 26\,900 & 13.6 & 50 & & 22 & 9.5 $\pm$ 1.0 & HRS &\\
PG\,1716$+$426$^{l}$ & 27\,400 & 13.7 & 61 & & 24 & 10.9 $\pm$ 1.0 & HRS &\\
PG\,0101$+$039$^{l}$ & 27\,500 & & & & & 10.9 $\pm$ 1.1 & & Geier et al. \cite{geier2}\\
CPD\,$-$64\,481 & 27\,500 & 11.0 & 152 & & 38 & 4.1 $\pm$ 1.0 & FEROS &\\
PG\,2345$+$318 & 27\,500 & 14.4 & 92 & & 21 & 12.9 $\pm$ 1.0 & HRS &\\
PG\,1043$+$760$^{l}$ & 27\,600 & 13.4 & 15 & & H/He & $<$88 & Palomar&\\
PG\,1743$+$477 & 27\,600 & 13.6 & 57 & & 27 & $<$7.0 & HRS &\\
TON\,S\,183 & 27\,600 & 12.4 & 55 & & 57 & 6.7 $\pm$ 1.0 & FEROS &\\
HD\,171858 & 27\,700 & 9.6 & 90 & & 55 & 6.7 $\pm$ 1.0 & FEROS &\\
HW\,Vir & 28\,500 & 10.3 & 130 & & H/He & 78.3 $\pm$ 1.0 & FEROS &\\
HS\,0705$+$6700 & 28\,800 & 14.2 & 28 & & H/He & $<$170 & HRS &\\
& & & & & H/He & 110 $\pm$ 14 & & Drechsel et al. \cite{drechsel}\\
PG\,1329$+$159 & 29\,100 & 13.3 & 52 & & 26 & 10.7 $\pm$ 1.0 & HRS &\\
Feige\,48$^{s}$ & 29\,500 & 13.1 & 37 & & 36 & 8.5 $\pm$ 1.0 & HIRES &\\
HE\,0929$-$0424 & 29\,500 & 15.4 & 25 & 0.6 & 9 & 7.1 $\pm$ 1.0 & UVES &\\
HE\,1421$-$1206 & 29\,600 & 15.1 & 21 & 0.5 & 18 & 6.7 $\pm$ 1.1 & UVES &\\
PG\,0133$+$114 & 29\,600 & 10.7 & 194 & & 17 & $<$8.0 & FOCES &\\
PG\,1101$+$249 & 29\,700 & 12.5 & 66 & & 24 & 8.1 $\pm$ 1.0 & HIRES &\\
PG\,1512$+$244 & 29\,900 & 13.0 & 87 & & 17 & $<$8.0 & HRS &\\
HE\,2135$-$3749 & 30\,000 & 13.7 & 84 & 1.0 & 53 & 6.9 $\pm$ 1.0 & UVES &\\
PHL\,861 & 30\,000 & 15.1 & 24 & 0.6 & 16 & 7.2 $\pm$ 1.3 & UVES &\\
HE\,1047$-$0436 & 30\,200 & 14.7 & 37 & 0.6 & 37 & 6.2 $\pm$ 1.0 & UVES &\\
HE\,2150$-$0238 & 30\,200 & 15.8 & 27 & 0.8 & 16 & 8.3 $\pm$ 1.5 & UVES &\\
PG\,1017$+$086 & 30\,300 & & & & H/He & 118 $\pm$ 5 & & Maxted et al. \cite{maxted3}\\
HE\,0230$-$4323$^{s}$ & 31\,100 & 13.8 & 59 & 0.9 & 40 & 12.7 $\pm$ 1.0 & UVES &\\
PG\,1336$-$018$^{s}$ & 31\,300 & 14.0 & 40 & & H/He & $<$79.0 & FEROS &\\
PG\,1116$+$301 & 32\,500 & 14.3 & 42 & & 8 & 9.0 $\pm$ 1.7 & HRS &\\
KPD\,1946$+$4340 & 34\,200 & 14.1 & 55 & & H/He & 26.0 $\pm$ 1.0 & HRS &\\
HE\,1448$-$0510 & 34\,700 & 15.0 & 27 & 0.6 & 8 & 7.2 $\pm$ 1.7 & UVES &\\
KPD\,1930$+$2752$^{s}$ & 35\,200 & & & & & 92.3 $\pm$ 1.5 & & Geier et al. \cite{geier}\\
CD\,$-$24\,731 & 35\,400 & 11.6 & 42 & & 8 & 12.1 $\pm$ 1.7 & FEROS &\\
$[$CW83$]$\,1735$+$22 & 38\,000 & 11.5 & 230 & & H/He & 44.0 $\pm$ 1.0 & FOCES &\\
BPS\,CS\,22169$-$0001 & 39\,300 & 12.6 & 109 & & 5 & 8.5 $\pm$ 1.5 & FEROS &\\
\hline
\\
\end{tabular}
\tablefoot{For binaries with high ${v_{\rm rot}\,\sin\,i}$
helium lines and Balmer line cores (H/He) are used instead of metal lines.
The average seeing is only given if the spectra were obtained with a wide
slit in the course of the SPY survey. In all other cases the seeing should not
influence the measurements. $^{c}$Companion visible in the spectrum.
$^{s}$Pulsating subdwarf of V\,361\,Hya type. $^{l}$Pulsating subdwarf of
V\,1093\,Her type.}
\end{center}
\end{table*}
\begin{table*}[t!]
\caption{Projected rotational velocities of radial velocity variable sdBs, for
which orbital parameters are unavailable or uncertain.}
\label{tab:vrotnosol}
\begin{center}
\begin{tabular}{llllllll}
\hline
\noalign{\smallskip}
System & $T_{\rm eff}$ & $m_{B}$ & S/N & seeing & $N_{\rm lines}$ & ${v_{\rm rot}\,\sin\,i}$ & Instrument\\
& [K] & [mag] & & [arcsec] & & [${\rm km\,s^{-1}}$] & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
HE\,2208$+$0126 & 24\,300 & 15.6 & 24 & 0.8 & 15 & $<$5.0 & UVES\\
TON\,S\,135 & 25\,000 & 13.1 & 47 & & 35 & 6.6 $\pm$ 1.0 & FEROS\\
HE\,2322$-$4559$^{c}$ & 25\,500 & 15.5 & 23 & 0.7 & 16 & 10.9 $\pm$ 1.1 & UVES\\
HS\,2043$+$0615 & 26\,200 & 16.0 & 22 & 1.3 & 26 & 12.3 $\pm$ 1.1 & UVES\\
HE\,1309$-$1102$^{c}$ & 27\,100 & 16.1 & 7 & 0.6 & 7 & 7.6 $\pm$ 2.3 & UVES\\
HS\,2357$+$2201 & 27\,600 & 13.3 & 29 & 0.7 & 26 & 6.1 $\pm$ 1.1 & UVES\\
HS\,2359$+$1942 & 31\,400 & 14.4 & 14 & 0.6 & 26 & $<$ 5.0 & UVES\\
PG\,1032$+$406 & 31\,600 & 10.8 & 20 & & H/He & $<$34 & Palomar\\
HE\,1140$-$0500$^{c}$ & 34\,500 & 14.8 & 18 & 0.9 & 5 & 5.2 $\pm$ 2.7 & UVES\\
HS\,1536$+$0944$^{c}$ & 35\,100 & 15.6 & 19 & 1.1 & 15 & 12.2 $\pm$ 1.6 & UVES\\
HE\,1033$-$2353 & 36\,200 & 16.0 & 13 & 0.6 & 7 & 9.3 $\pm$ 2.3 & UVES\\
\hline
\\
\end{tabular}
\tablefoot{The average seeing is only given if the spectra were obtained with a wide slit in the course of the SPY survey. In all other cases the seeing should not influence the measurements. Atmospheric parameters are taken from Lisker et al. (\cite{lisker}) except TON\,S\,135 (Heber \cite{heber1}) and PG\,1032$+$406 (Maxted et al. \cite{maxted2}). $^{c}$Companion visible in the spectrum.}
\end{center}
\end{table*}
The projected rotational velocities of HE\,1047$-$0436 and Feige\,48 have
been measured by Napiwotzki et al. (\cite{napiwotzki3}) and O'Toole et al.
(\cite{otoole3}) using a technique similar to the one described here, but
restricted to just a few metal lines. Napiwotzki et al. (\cite{napiwotzki3})
derived an upper limit of $4.7\,{\rm km\,s^{-1}}$ for
HE\,1047$-$0436. Our measurement of
$6.2\pm0.6\,{\rm km\,s^{-1}}$ is just slightly higher
(see Fig.~\ref{he1047}). While O'Toole et al. (\cite{otoole3}) give an upper
limit of $v_{\rm rot}\sin{i}=5\,{\rm km\,s^{-1}}$ for Feige\,48 we derive
$8.5\pm1.5\,{\rm km\,s^{-1}}$.
Fig.~\ref{PvrotI} shows the measured $v_{\rm rot}\sin{i}$ plotted against the
orbital periods of the binaries. A trend is clearly visible: The longer the
orbital period of the systems, the lower the measured $v_{\rm rot}\sin{i}$.
While the short period systems ($\simeq0.1\,{\rm d}$) were spun up by their
close companions and have high $v_{\rm rot}\sin{i}$ of up to
$\simeq100\,{\rm km\,s^{-1}}$, the mean $v_{\rm rot}\sin{i}$ decreases to
below $10\,{\rm km\,s^{-1}}$ as the periods increase to $\simeq1.0\,{\rm d}$.
For orbital periods exceeding $\simeq1.0\,{\rm d}$, the
$v_{\rm rot}\sin{i}$-values scatter around the average
$v_{\rm rot}=8.3\,{\rm km\,s^{-1}}$ for single sdB stars
(Geier et al. \cite{geier3}). We conclude that tidal forces do not
influence the rotation of sdBs for orbital periods considerably longer than
one day.
As can be seen in Fig.~\ref{vrotdistrib_RV}, the
$v_{\rm rot}\sin{i}$-distribution of the RV variable sdBs (Tables~\ref{tab:vrotRV}, \ref{tab:vrotnosol})
differs from the uniform distribution of the single stars
(Geier et al. \cite{geier3}); the rotational properties of the full sample
of single sdB stars will be presented in Paper II of this series (Geier et
al., in prep.). A large fraction of the binary sdBs significantly exceeds the
$v_{\rm rot}=8.3\,{\rm km\,s^{-1}}$ derived for single sdB stars. The most likely reason for
this is tidal interaction with the companions.
\section{Constraining masses, inclinations and the nature of the unseen
companions}\label{sec:masses}
Having determined the projected rotational velocity we are in a position to
derive the companion mass as a function of the sdB mass as described in
Sect.~\ref{sec:ana}.
Of the 40 sdB binaries for which all necessary parameters have been determined,
31 could be solved consistently under the assumption of tidally locked rotation.
Two examples are shown in
Figs.~\ref{cpdm64481_mass} and \ref{he0532_mass}.
Derived inclinations, subdwarf masses and the allowed masses for the
companions are given in Table~\ref{compmasses}.
If the sdB mass could not be constrained with other methods (e.g. from
photometry, see Table~\ref{compmasses}), the theoretically predicted mass range
was taken from Han et al. (\cite{han1}, \cite{han2}). For the common envelope
ejection channels, which are the only plausible way of forming sdBs in close
binary systems, the possible masses for the sdBs range from
$0.38\,M_{\rm \odot}$ to $0.47\,M_{\rm \odot}$. Since in all simulation sets
of Han et al. (\cite{han1}, \cite{han2}) the mass distribution
shows a very prominent peak at $0.43-0.47\,M_{\rm \odot}$, this mass range is
the most likely one.
The choice of the adopted sdB mass range is backed up by recent mass
determinations via asteroseismology of
short-period pulsating sdBs. Fontaine et al. (\cite{fontaine}) showed the
mass distribution of 12 of these objects, which is in good agreement with
the predicted distribution by Han et al. (\cite{han1}, \cite{han2}).
Consistent with theory, no star of this small sample has a mass much lower
than $0.4\,M_{\rm \odot}$. The few sdB masses that could be constrained by
analyses of eclipsing binary systems also range from $0.38\,M_{\rm \odot}$
to $0.5\,M_{\rm \odot}$ (see e.g.~Sect.~\ref{sec:lowmassm} and For et al.
\cite{for}).
Hence we adopt $0.43-0.47\,M_{\rm \odot}$ as the mass range
for the sdBs in the binary systems we studied, if there is no independent
mass determination either from binary light curve analysis or asteroseismology.
If the derived minimum sdB mass assuming a synchronised orbit (see Equation~\ref{eq:minmass}) exceeds
this reasonable mass range ($M_{\rm sdB} \gg 1\,M_{\rm \odot}$), the sdB primary spins
faster than synchronised and no consistent solution can be found.
This is the case for 9 binaries in our sample.
Most of these systems have orbital periods exceeding $1.2\,{\rm d}$,
where we find that synchronisation is no longer established
(see Sect.~\ref{sec:tidal}). It has to be pointed out that only
subdwarfs rotating faster than synchronised can be identified in this way.
If an sdB should rotate slower than synchronised, one would always get an
apparently consistent, but incorrect solution, which overestimates the
companion mass (see Sect.~\ref{sec:age}). For {\bf PG\,0133$+$114} there is some doubt whether
the star is synchronised or not, as the minimum mass for the sdB of
$0.51\,{M_{\rm \odot}}$ lies at the upper end of the predicted mass range for core
helium-burning objects and its period is rather long ($1.24\,{\rm d}$).
If the system is synchronised, the minimum companion mass would be $0.38\,{M_{\rm \odot}}$,
while the statistically most likely one ($i=52^{\circ}$) is $0.48\,M_{\rm \odot}$,
indicating a white dwarf companion.
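The procedure used above can be made concrete: the measured $v_{\rm rot}\sin{i}$ together with the synchronisation condition $v_{\rm rot}=2\pi R/P$ fixes $\sin{i}$, and the companion mass then follows from the binary mass function. The following minimal numerical sketch illustrates the last step; the period, semi-amplitude and masses used are hypothetical round numbers, not values from Table~\ref{orbit}:

```python
import math

G = 6.674e-11       # gravitational constant [SI]
M_SUN = 1.989e30    # solar mass [kg]
DAY = 86400.0       # [s]

def mass_function(P_d, K_kms):
    """Spectroscopic mass function f(M) = P K^3 / (2 pi G), in solar masses."""
    return (P_d * DAY) * (K_kms * 1e3)**3 / (2.0 * math.pi * G) / M_SUN

def companion_mass(P_d, K_kms, M_sdB, incl_deg):
    """Solve f(M) = (M_c sin i)^3 / (M_sdB + M_c)^2 for M_c by bisection."""
    f = mass_function(P_d, K_kms)
    s = math.sin(math.radians(incl_deg))
    g = lambda m: (m * s)**3 / (M_sdB + m)**2 - f   # monotonically increasing in m
    lo, hi = 1e-4, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# hypothetical system: P = 0.5 d, K = 100 km/s, canonical sdB mass, edge-on
print(round(companion_mass(0.5, 100.0, 0.47, 90.0), 2))   # minimum mass, ~0.32
```

Lowering the inclination at fixed $K$ and $P$ raises the companion mass, which is why the low-inclination systems discussed below end up with massive compact companions.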
The nature of the companion was deduced unambiguously for all but five of the remaining stars
from the masses and additional information. The companions to {\bf PG\,1248$+$164}\footnote{A light curve of this star has been taken by Maxted et al. (\cite{maxted5}). No variability could be detected. Although the orbital period is rather long ($0.73\,{\rm d}$) and any reflection effect would therefore be shallow, the companion may be a low-mass WD rather than an M dwarf.},
{\bf HE\,1421$-$1206}, {\bf Feige~48}\footnote{The mass of the pulsating subdwarf Feige\,48
has been determined in an asteroseismic analysis (van Grootel et al.
\cite{vangrootel}) to $0.52\,{M_{\rm \odot}}$. The corresponding companion
mass is $0.27\,{M_{\rm \odot}}$. The nature of the unseen
companion therefore remains unclear:
it may be either a low-mass white dwarf or a late M dwarf.
Due to the derived very low inclination and
the presence of short period pulsations, a reflection effect or ellipsoidal
variations are probably too small to be detectable.}, and
{\bf HE\,2135$-$3749} could be either main sequence stars or white dwarfs
because their
masses are lower than $0.45\,M_{\rm \odot}$.
We shall describe the results for three groups of companion stars.
Starting with sdBs orbited by low mass dwarf companions, we proceed to the
systems with white dwarf companions of normal masses. Finally we discuss the
group of binaries that contain massive
compact companions exceeding $0.9\,M_{\rm \odot}$, because such systems
are of particular interest, e.g. as potential SN Ia progenitors.
This includes KPD\,1930$+$2752, the most massive white dwarf companion to an sdB
star known so far.
\begin{table*}[t!]
\caption{Derived inclination angles, companion masses and likely
nature of the companions.}
\label{compmasses}
\begin{center}
\begin{tabular}{llllllll}
\hline
\noalign{\smallskip}
System & $P^{*}$ & $M_{\rm sdB}$ & $i$ & $M_{\rm comp}$ & $i_{\rm max}$ & $M_{\rm comp, min}$ & Companion \\
& [d] & [$M_{\rm \odot}$] & [deg] & [$M_{\rm \odot}$] & [deg] & [$M_{\rm \odot}$] & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
PG\,1017$-$086$^{12}$ & 0.07 & $>$0.47 & $<$73 & $>$0.06 & & & MS/BD$^{r}$ \\
KPD\,1930$+$2752$^{6}$ & 0.10 & $0.47_{-0.02}^{+0.05}$ & $77_{-4}^{+4}$ & $0.94_{-0.03}^{+0.02}$ & & & WD$^{el}$ \\
HS\,0705$+$6700$^{3}$ & 0.10 & 0.48 & $65_{-16}^{+25}$ & $0.15_{-0.03}^{+0.05}$ & & & MS$^{r,ec}$ \\
PG\,1336$-$018$^{2,16}$ & 0.10 & 0.459 & $<$90 & $>$0.12 & & & MS$^{r}$ \\
HW\,Vir$^{4}$ & 0.12 & 0.53 & $75_{-10}^{+15}$ & $0.155_{-0.015}^{+0.015}$ & & & MS$^{r,ec}$ \\
PG\,1043+760$^{13}$ & 0.12 & & $<$78 & $>$0.10 & 90 & 0.06 & WD$^{n}$ \\
BPS\,CS\,22169$-$0001$^{14}$ & 0.18 & & $9_{-2}^{+2}$ & $0.19_{-0.06}^{+0.07}$ & 13 & 0.09 & MS$^{r}$ \\
PG\,1432$+$159$^{12}$ & 0.22 & & $16_{-3}^{+5}$ & $2.59_{-1.10}^{+2.01}$ & 25 & 0.92 & NS/BH$^{n}$ \\
PG\,2345$+$318$^{2}$ & 0.24 & & & & & & WD$^{ec}$ not synchronised\\
PG\,1329$+$159$^{12}$ & 0.25 & & $17_{-2}^{+4}$ & $0.35_{-0.10}^{+0.10}$ & 26 & 0.16 & MS$^{r}$ \\
HE\,0532$-$4503$^{11}$ & 0.27 & & $14_{-2}^{+2}$ & $3.00_{-0.92}^{+0.94}$ & 19 & 1.27 & NS/BH$^{f}$ \\
CPD\,$-$64\,481 & 0.28 & & $7_{-2}^{+2}$ & $0.62_{-0.24}^{+0.42}$ & 11 & 0.24 & WD \\
PG\,1101$+$249 & 0.35 & & $26_{-4}^{+6}$ & $1.67_{-0.58}^{+0.77}$ & 40 & 0.68 & WD/NS/BH $^{f}$ \\
PG\,1232$-$136 & 0.36 & & $<$14 & $>$6.00 & 17 & 3.58 & BH$^{f}$ \\
Feige\,48$^{15}$ & 0.38 & 0.52 & $17_{-2}^{+3}$ & $0.27_{-0.04}^{+0.06}$ & & & MS/WD \\
GD\,687$^{5,7}$ & 0.38 & & $39_{-6}^{+6}$ & $0.71_{-0.21}^{+0.22}$ & 63 & 0.32 & WD$^{f}$ \\
KPD\,1946$+$4340$^{1}$ & 0.40 & & $71_{-15}^{+19}$ & $0.67_{-0.08}^{+0.18}$ & 90 & 0.58 & WD$^{el,ec}$\\
HE\,0929$-$0424$^{11}$ & 0.44 & & $23_{-4}^{+5}$ & $1.82_{-0.64}^{+0.88}$ & 34 & 0.73 & WD/NS/BH$^{f}$ \\
HE\,0230$-$4323$^{9}$ & 0.45 & & $39_{-5}^{+8}$ & $0.30_{-0.07}^{+0.07}$ & 61 & 0.15 & MS$^{r}$ \\
PG\,1743$+$477 & 0.52 & & $<$27 & $>$1.66 & 32 & 1.00 & NS/BH$^{f}$ \\
PG\,0001$+$275 & 0.53 & & $31_{-4}^{+7}$ & $0.79_{-0.23}^{+0.26}$ & 48 & 0.37 & WD \\
PG\,0101$+$039$^{8}$ & 0.57 & & $40_{-6}^{+9}$ & $0.72_{-0.20}^{+0.20}$ & 64 & 0.33 & WD$^{el,n}$ \\
PG\,1248$+$164 & 0.73 & & $52_{-12}^{+25}$ & $0.27_{-0.08}^{+0.10}$ & 90 & 0.12 & MS/WD \\
JL\,82$^{10}$ & 0.74 & & $33_{-5}^{+8}$ & $0.21_{-0.06}^{+0.06}$ & 51 & 0.10 & MS$^{r}$ \\
TON\,S\,183 & 0.83 & & $30_{-5}^{+7}$ & $0.94_{-0.31}^{+0.39}$ & 47 & 0.40 & WD$^{f}$ \\
PG\,1627$+$017 & 0.83 & & $<$34 & $>$0.50 & 45 & 0.32 & WD \\
PG\,1116$+$301 & 0.86 & & 90 & $0.48_{-0.21}^{+0.00}$ & 90 & 0.27 & WD \\
HE\,2135$-$3749 & 0.92 & & $67_{-16}^{+13}$ & $0.41_{-0.12}^{+0.13}$ & 90 & 0.29 & MS/WD \\
HE\,1421$-$1206 & 1.19 & & $57_{-14}^{+33}$ & $0.27_{-0.08}^{+0.10}$ & 90 & 0.16 & MS/WD \\
HE\,1047$-$0436 & 1.21 & & $62_{-10}^{+28}$ & $0.53_{-0.14}^{+0.15}$ & 90 & 0.28 & WD \\
PG\,0133$+$114 & 1.24 & $>0.51$ & 90 & $>0.38$ & & & MS/WD/not synchronised? \\
PG\,1512$+$244 & 1.27 & & & & & & not synchronised? \\
$[$CW83$]$\,1735$+$22 & 1.28 & & & & & & not synchronised \\
HE\,2150$-$0238 & 1.32 & & & & & & not synchronised \\
HD\,171858 & 1.63 & & $58_{-14}^{+32}$ & $0.60_{-0.19}^{+0.25}$ & 90 & 0.37 & WD \\
PG\,1716$+$426 & 1.78 & & & & & & not synchronised \\
PB\,7352 & 3.62 & & & & & & not synchronised \\
CD\,$-$24\,731 & 5.85 & & & & & & not synchronised \\
HE\,1448$-$0510 & 7.16 & & & & & & not synchronised \\
PHL\,861 & 7.44 & & & & & & not synchronised \\
\noalign{\smallskip}
\hline\\
\end{tabular}
\tablefoot{If the sdB mass could not be constrained with
other methods, the theoretically predicted mass range of
$0.43-0.47\,{M_{\rm \odot}}$ was taken from Han et al. (\cite{han1,han2}).
The minimum masses of the companions and maximum inclinations of the
binaries were calculated for the lowest possible sdB mass
($0.3\,{M_{\rm \odot}}$, Han et al. \cite{han1,han2}). $^{*}$The
orbital periods given here are rounded to the second decimal place. The
accurate values are given in Table \ref{orbit}. Additional constraints to clarify the nature of the unseen companions: $^{r}$The detection of a reflection effect from a cool MS/BD or a $^{n}$non-detection to exclude
this option. The presence of eclipses$^{ec}$ or ellipsoidal
deformations$^{el}$ in the light curves. No signatures of a main-sequence companion
within the given mass range are visible in the flux distribution or in the
spectrum$^{f}$. This information is taken from $^{1}$Bloemen et al. (\cite{bloemen}), $^{2}$Charpinet et al.
(\cite{charpinet5}), $^{3}$Drechsel et al. (\cite{drechsel}), $^{4}$Edelmann
(\cite{edelmann3}), $^{5}$Farihi et al. (\cite{farihi}), $^{6}$Geier et al.
(\cite{geier}), $^{7}$Geier et al. (\cite{geier4}), $^{8}$Geier et al.
(\cite{geier2}), $^{9}$Koen (\cite{koen}), $^{10}$Koen (\cite{koen2}),
$^{11}$Lisker et al. (\cite{lisker}), $^{12}$Maxted et al. (\cite{maxted3}),
$^{13}$Maxted et al. (\cite{maxted5}), $^{14}$\O stensen (priv. comm.),
$^{15}$van Grootel et al. (\cite{vangrootel}) and $^{16}$Vu\v ckovi\'c et
al. (\cite{vuckovic2}).}
\end{center}
\end{table*}
\subsection{Late main sequence stars and a potential brown dwarf \label{sec:lowmassm}}
{\bf PG\,1017$-$086} is the sdB binary with the shortest orbital period known
to date. Maxted et al. (\cite{maxted3}) reported the detection of a significant
reflection effect, but no eclipses in the light curve. Taking this information into
account, one can constrain the inclination angle to be lower than $73^{\circ}$
(no eclipses!) and derive a minimum sdB mass of $0.47\,M_{\rm \odot}$.
The minimum mass of the companion is constrained to $0.06\,M_{\rm \odot}$.
The companion is therefore most likely a brown dwarf (BD) or a very late
M dwarf. Only two other candidate sdB+BD systems are known.
{\bf HS\,0705$+$6700} is an eclipsing sdB+M binary with reflection effect.
Drechsel et al. (\cite{drechsel}) performed a detailed photometric and
spectroscopic analysis of this system and derived an inclination of
$84^{\circ}.4$, an sdB mass of $0.483\,M_{\rm \odot}$ and a companion mass of
$0.134\,M_{\rm \odot}$.
Drechsel et al. (\cite{drechsel}) also estimated $v_{\rm rot}\sin{i}$ and
derived the companion mass. Although our result is much less accurate
($0.15_{-0.03}^{+0.05}\,M_{\rm \odot}$), it comes close to that derived from the
light curve.
Much better agreement is reached for {\bf HW\,Vir}, the prototype eclipsing
sdB+M binary, where excellent high resolution spectra are available.
Edelmann (\cite{edelmann3}) recently determined the absolute parameters
of this system spectroscopically using shallow absorption lines of the secondary
to obtain its RV curve for the first time.\footnote{Wood \& Saffer (\cite{wood2}) detected these features in low resolution spectra before.} Edelmann (\cite{edelmann3}) derived
an sdB mass of $0.53\,M_{\rm \odot}$ and a companion mass of
$0.15\,M_{\rm \odot}$. Adopting this sdB mass our derivation of the companion
mass agrees very well ($0.155_{-0.015}^{+0.015}\,M_{\rm \odot}$). The derived inclination
angle of $i=75_{-10}^{+15}\,^{\circ}$ is consistent with the more accurate photometric
solution $i=80^{\circ}.6\pm0^{\circ}.2$ given by Wood, Zang \& Robinson
(\cite{wood}). Most recently, Lee et al. (\cite{lee}) presented an analysis of HW\,Vir based on new photometric
data. Their best solution ($i=80^{\circ}.98\pm0^{\circ}.1$, $M_{\rm 1}=0.485\pm0.013\,M_{\rm \odot}$,
$M_{\rm 2}=0.142\pm0.004\,M_{\rm \odot}$) is fully consistent with our results.
The eclipsing and pulsating sdBV+M binary {\bf PG\,1336$-$018} (NY\,Vir)
has been analysed by Vu\v ckovi\'c et al. (\cite{vuckovic}), but no unique
solution could be found. In an asteroseismic study Charpinet et al.
(\cite{charpinet5}) derived the fundamental parameters of this star by
fitting simultaneously the observed pulsation modes detectable in the
light curve. Adopting the asteroseismic value for the sdB mass
($0.459\,M_{\rm \odot}$) for our analysis, the companion mass is
$>0.12\,M_{\rm \odot}$. This result is in agreement with the second solution
from Vu\v ckovi\'c et al.
(\cite{vuckovic}): $M_{\rm sdB}=0.467\,M_{\rm \odot}$,
$M_{\rm comp}=0.122\,M_{\rm \odot}$. Charpinet et al. (\cite{charpinet5})
concluded that the binary must be synchronised to account for the observed
rotational splitting of the pulsation modes and predict
a $v_{\rm rot}\sin{i}=74.9\pm0.6\,{\rm kms^{-1}}$. This predicted value is
consistent with the derived upper limit of
$v_{\rm rot}\sin{i}<79\,{\rm kms^{-1}}$.
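The predicted rotation velocity follows from the synchronisation condition alone, $v_{\rm rot}=2\pi R/P$. As a quick illustrative check (the sdB radius of $0.15\,R_{\rm \odot}$ used below is an assumed typical value, not one derived in this work):

```python
import math

R_SUN = 6.957e8   # solar radius [m]
DAY = 86400.0     # [s]

def v_sync_kms(P_d, R_rsun):
    """Equatorial velocity [km/s] of a tidally synchronised star: v = 2 pi R / P."""
    return 2.0 * math.pi * R_rsun * R_SUN / (P_d * DAY) / 1e3

# PG 1336-018: P ~ 0.10 d; an assumed radius of 0.15 R_sun reproduces
# the ~75 km/s scale predicted by Charpinet et al.
print(round(v_sync_kms(0.101, 0.15), 1))   # -> ~75 km/s
```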
{\bf BPS\,CS\,22169$-$0001} was proposed to host a BD companion
(Edelmann et al. \cite{edelmann}), but we derived a very low inclination and
therefore a companion mass too high for a BD ($0.19_{-0.06}^{+0.07}\,M_{\rm \odot}$).
In the light curves of the four binaries {\bf BPS\,CS\,22169$-$0001},
{\bf HE\,0230$-$4323}, {\bf JL\,82} as well as {\bf PG\,1329$+$159} reflection
effects have been detected (see references in Table~\ref{compmasses}).
The derived companion mass ranges are consistent with the masses of late M
dwarfs.
\subsection{White dwarfs \label{sec:lowmasswd}}
Ten stars must have white dwarf companions because no lines from cool companions are
visible and the absence of a reflection effect can be used to exclude a
main sequence companion in some cases.
Among these binaries {\bf KPD\,1946$+$4340} sticks out. Most recently, Bloemen et al. (\cite{bloemen}) discovered eclipses and ellipsoidal variations in a spectacular high precision light curve obtained by the Kepler mission. The eclipses are clearly caused by a WD companion. We derive a mass range of $0.59-0.85\,M_{\rm \odot}$ for the unseen companion, consistent with a WD. Because the binary is eclipsing, the inclination angle has to be close to $90^{\rm \circ}$. Assuming the canonical sdB mass of $0.47\,M_{\rm \odot}$, the companion mass can be constrained to $\simeq0.61\,M_{\rm \odot}$, which is the average mass of WDs with C/O core. This result is perfectly consistent with the independent analysis of Bloemen et al. (\cite{bloemen}).
The companion of {\bf GD\,687} has already been shown to be a white dwarf
by Geier et al. (\cite{geier4}) utilising the same technique as used in this
paper and is included for the sake of completeness.
Its merging time of $11.1\,{\rm Gyr}$
is just a little shorter than the Hubble time.\footnote{The merging times of all binaries have been calculated using the formula given in Ergma et al. (\cite{ergma}).}
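We do not repeat the Ergma et al. expression here, but the standard merging timescale of a circular binary emitting gravitational waves (Peters 1964) gives the same order of magnitude. The sketch below adopts the canonical sdB mass of $0.47\,M_{\rm \odot}$ for GD\,687, which is an assumption, since no independent mass determination exists for this star:

```python
import math

G = 6.674e-11      # gravitational constant [SI]
C = 2.998e8        # speed of light [m/s]
M_SUN = 1.989e30   # [kg]
DAY = 86400.0      # [s]
YEAR = 3.156e7     # [s]

def t_merge_gyr(P_d, m1_msun, m2_msun):
    """GW merging time of a circular binary (Peters 1964):
    t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))."""
    m1, m2 = m1_msun * M_SUN, m2_msun * M_SUN
    M = m1 + m2
    # Kepler's third law: orbital separation from the period
    a = (G * M * (P_d * DAY)**2 / (4.0 * math.pi**2))**(1.0 / 3.0)
    return 5.0 * C**5 * a**4 / (256.0 * G**3 * m1 * m2 * M) / YEAR / 1e9

# GD 687: P = 0.38 d, assumed M_sdB = 0.47, M_WD = 0.71
print(round(t_merge_gyr(0.38, 0.47, 0.71), 1))   # -> ~11 Gyr, close to the quoted 11.1 Gyr
```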
A remarkable object with a high inclination and a very low companion mass
($>0.10\,M_{\rm \odot}$) is {\bf PG\,1043$+$760}. Due to its short period of
$0.12\,{\rm d}$ a reflection effect should be easily detectable.
However, Maxted et al. (\cite{maxted5}) report a non-detection of variations
in the light curve. The companion of this star must therefore be a compact object, most
likely a helium-core white dwarf of very low mass.
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg10.eps}}
\caption{Mass of the sdB primary CPD\,$-$64\,481 plotted against the mass of the unseen
companion. The companion mass error is indicated by the dashed lines.
The mass range of the CE ejection channel (Han et al. \cite{han1}) is
marked with dotted vertical lines.}
\label{cpdm64481_mass}
\end{center}
\end{figure}
\begin{figure}[]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg11.eps}}
\caption{Mass of the sdB primary HE\,0532$-$4503 plotted against the mass of the unseen companion. The companion mass error is indicated by the dashed lines. The mass range of the CE ejection channel (Han et al. \cite{han1}) is marked with dotted vertical lines. The Chandrasekhar mass limit is plotted as solid horizontal line.}
\label{he0532_mass}
\end{center}
\end{figure}
In the case of {\bf PG\,1627$+$017}, a main sequence companion can be excluded
as well. With a mass exceeding $0.50\,{M_{\rm \odot}}$ the companion would be
visible in the spectra in this case. The non-detection of a reflection effect
(Maxted et al. \cite{maxted5}; For et al. \cite{for}) is consistent with our result.
The companion of {\bf PG\,0101$+$039} is a white dwarf. Despite the long
orbital period of $0.57\,{\rm d}$ a main-sequence companion could be excluded.
A light curve was taken with the MOST satellite. Instead of a reflection
effect, the shallowest ellipsoidal deformation ever detected was
verified (Geier et al. \cite{geier2}). The white dwarf companion could be
quite massive ($0.52-0.92\,{M_{\rm \odot}}$). In this case the total mass
comes close to the Chandrasekhar limit, but the merging time would be longer
than the Hubble time. PG\,0101$+$039 does therefore not qualify as SN\,Ia
progenitor candidate. The companion mass range of {\bf PG\,0001$+$275} is
quite similar ($0.56-1.05\,{M_{\rm \odot}}$). A main sequence companion
can be most likely excluded and no reflection effect was detected (Maxted et al. \cite{maxted5}; Shimanskii et al. \cite{shimanskii}). The orbital period of $0.53\,{\rm d}$ is also too long to make PG\,0001$+$275 an SN\,Ia progenitor candidate.
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg12.eps}}
\caption{$T_{\rm eff}-\log{g}$-diagram; same as Fig. \ref{fig:tefflogg}
but restricted to the sample which could be solved under the assumption of synchronisation. The helium main sequence and EHB band are superimposed with EHB evolutionary tracks from Dorman et al. (\cite{dorman}) labelled with their masses. Binaries with confirmed late main sequence or brown dwarf companions are plotted as filled diamonds, binaries with confirmed white dwarf companions with filled triangles.
Hot subdwarfs where the companion could be a main sequence star or a white
dwarf are marked with solid rectangles. The filled circles mark the sdBs
with putative massive compact companions.
}
\label{fig:teffloggsync}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg13.eps}}
\caption{$T_{\rm eff}-\log{g}$-diagram; same as Fig. \ref{fig:teffloggsync}
but restricted to the non-synchronised systems. The open squares mark binaries with orbital periods longer than $1.2\,{\rm d}$. The filled one marks the system where synchronisation is not established despite its short orbital period.}
\label{fig:teffloggnosync}
\end{center}
\end{figure}
Edelmann et al. (\cite{edelmann}) derived a very low minimum companion mass
for {\bf CPD\,$-$64\,481}. At high inclination the companion mass would have
been consistent with a brown dwarf. However, our analysis provides evidence
that this binary has a very low inclination ($i=5^\circ$ to $9^\circ$), actually
the lowest one of the entire sample,
and therefore a companion mass far too
high for a BD ($0.62_{-0.24}^{+0.42}\,M_{\rm \odot}$), indicating a white dwarf binary.
Due to the low projected rotational
velocity of this star, the fractional error is very high and the companion
mass not very well constrained. For the highest possible companion mass the
system would exceed the Chandrasekhar limit and qualify as SN\,Ia progenitor
candidate due to its short orbital period. However, the inclination angle must
be lower than $5^{\circ}$ in this case. That is why this extreme scenario is
considered to be very unlikely.
The unseen companions in the binaries {\bf HE\,1047$-$0436},
and {\bf HD\,171858} also have masses consistent with white dwarfs.
The mass of the companion to {\bf PG\,1116$+$301} is slightly above the limit of
$0.45\,{M_{\rm \odot}}$. Despite the high inclination derived for this binary no reflection effect was detected in its light curve (Maxted et al. \cite{maxted5}; Shimanskii et al. \cite{shimanskii}), which is consistent with a WD companion.\footnote{The upper limit to the companion mass of PG\,1116$+$301 is identical with the most likely companion mass (see Table~\ref{compmasses}, Fig.~\ref{figcompmasses}). The system can only be synchronised if the inclination reaches its maximum value of $90^{\rm \circ}$. In this case the upper limit to the sdB mass is lower than $0.47\,M_{\rm \odot}$, but still within the possible range (see Sect.~\ref{sec:ana}).}
\subsection{Massive compact companions - white dwarfs, neutron stars, black
holes \label{sec:highmassbh}}
Seven subdwarf binaries (in addition to KPD\,1930$+$2752) have
massive compact companions (see e.g. Fig. \ref{he0532_mass}) exceeding
$0.9\,M_{\rm \odot}$. For all of
these binaries main sequence companions can be excluded, because they would
significantly contribute to the flux or even outshine the subdwarf primary.
The massive companions therefore have to be compact.
The nature of the unseen companion in the binary {\bf KPD\,1930$+$2752} could
be clarified by Geier et al. (\cite{geier}). The short period system consists
of a synchronously rotating, tidally distorted sdB and a massive white dwarf. The
combined mass of the system reaches the Chandrasekhar limit and the stars
will most probably merge in $200\,{\rm Myr}$. KPD\,1930$+$2752 is the best
double degenerate SN\,Ia progenitor candidate known so far.
The companion mass of {\bf TON\,S\,183} is as high as that of KPD\,1930$+$2752.
However, the error bar is much larger. Hence we cannot exclude that it is a
normal white dwarf of $0.6\,M_{\rm \odot}$.
On the other hand, the total mass of the system may exceed the
Chandrasekhar limit. Nevertheless, TON\,S\,183 does not qualify as SN\,Ia progenitor
candidate either, because due to its long orbital period the merging time exceeds the
Hubble time by orders of magnitude.
For {\bf PG\,1101$+$249} and {\bf HE\,0929$-$0424} the companion mass is slightly above the Chandrasekhar limit, but we cannot exclude a massive white dwarf given the errors.
The merging times of HE\,0929$-$0424 and especially PG\,1101$+$249, on the other hand, would be near
or below the Hubble time and the total masses of the systems would most likely
exceed the Chandrasekhar limit. If the companions are massive white
dwarfs of C/O composition, these binaries would be SN\,Ia progenitor candidates.
The companions of {\bf PG\,1432$+$159}, {\bf HE\,0532$-$4503} and
{\bf PG\,1743$+$477} may be neutron stars as well as black holes as their
masses exceed the Chandrasekhar limit even when errors are accounted for. Light curves have been obtained for both PG\,1432$+$159 and PG\,1743$+$477. The non-detection of reflection effects is perfectly consistent with compact companions (Maxted et al. \cite{maxted5}). In the case of PG\,1743$+$477 only a lower limit for the companion mass could be derived. Due to their short orbital periods the companions in PG\,1432$+$159 as well as in HE\,0532$-$4503 will merge in a few billion years at most. Since the average
lifetime on the EHB is only $100\,{\rm Myr}$ the sdBs will evolve to white
dwarfs in the meantime. The outcome of a merger between a white dwarf and a neutron star
or a black hole is unclear. Such systems may be progenitors for gamma-ray
bursts or more exotic astrophysical transients (see discussion in Badenes et
al. \cite{badenes}).
In the case of {\bf PG\,1232$-$136} only a lower limit can be given for the companion mass
($>6.0\,M_{\rm \odot}$), which is higher than all theoretical NS masses. The
companion of this sdB may therefore be a BH.
\subsection{Distribution in the $T_{\rm eff}$-$\log{g}$-plane \label{sec:distribtefflogg}}
Fig.~\ref{fig:teffloggsync} shows the distribution of the 31 solved binaries in the $T_{\rm eff}$-$\log{g}$-diagram. Within their error bars most of the sdB primaries are associated with the EHB as expected. Only three of them (BPS\,CS\,22169$-$0001, KPD\,1930$+$2752, KPD\,1946$+$4340) have evolved beyond the TAEHB. No trends with companion types can be seen. The location on the EHB is a function of the thickness of the stars' hydrogen layers. The thinner this layer is, the higher are $T_{\rm eff}$ and $\log{g}$ at the beginning of EHB-evolution and the more envelope mass has been lost during the CE-ejection. The efficiency of this process seems not to be strongly affected by the companion type. Companions of all types ranging from low mass M dwarfs or brown dwarfs to massive compact objects are scattered all over the EHB.
While the fraction of evolved sdBs is only $10\%$ in the solved sample, two out of the nine subdwarfs ($22\%$) found in binaries that could not be solved under the assumption of synchronisation are obviously not located on the EHB (see Fig.~\ref{fig:teffloggnosync}). A possible reason for this discrepancy is discussed in Sect.~\ref{sec:hd188}.
\subsection{Distribution of companion masses \label{sec:distribsystem}}
Fig.~\ref{massdistribdetail} shows the low mass end of the companion mass distribution.
Excluding the massive systems described in Sect.~\ref{sec:highmassbh}
the histogram mass distribution (Fig.~\ref{massdistribdetail}) displays a peak
at companion masses ranging from $0.2-0.4\,M_{\rm \odot}$.
Most of the low mass objects with masses $<0.4\,M_{\rm \odot}$ have been identified as M dwarfs.
The bona fide white dwarf companions seem to peak at masses ranging from $0.4\,M_{\rm \odot}$ to $0.8\,M_{\rm \odot}$. Because close binary evolution is involved, there should be deviations from the normal mass distribution of single white dwarfs, which shows a characteristic peak at an average mass of $0.6\,M_{\rm \odot}$. We therefore conclude that the mass distribution of the restricted sample looks reasonable and no obvious systematics can be seen. The high fraction of massive compact companions (up to $20\%$ of our sample) on the other hand looks suspicious.
\begin{figure*}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg14.eps}}
\caption{Mass ranges for the unseen companions of 31 binaries under
the assumption of synchronisation (see Table~\ref{compmasses}). The
companion mass ranges are derived for the most likely sdB mass range
of $0.43-0.47\,M_{\rm \odot}$. The dashed vertical line marks the
upper limit to the mass of main-sequence companions. Main-sequence stars with higher masses
would be visible in the spectra and can be excluded. The solid vertical
lines marks the Chandrasekhar mass limit. $^{r}$Binaries with
reflection effect detected in their light curves. The companions are
either late M stars or brown dwarfs. $^{c}$Binaries with compact
companions like white dwarfs, neutron stars or black holes.}
\label{figcompmasses}
\end{center}
\end{figure*}
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg15.eps}}
\caption{Companion mass distribution of the binaries with low mass
companions (Table~\ref{compmasses},
Fig.~\ref{figcompmasses}).
The solid histogram shows the fraction of
subdwarfs with confirmed white dwarf companions, the dashed histogram the
detected M dwarf companions. The dashed vertical line marks the
average white dwarf mass.}
\label{massdistribdetail}
\end{center}
\end{figure}
As the companion mass depends on the primary mass, the derived companion masses would be
lower if the primary masses were overestimated.
We have adopted the masses of the sdB primaries to range from $0.43\,M_{\rm \odot}$
to $0.47\,M_{\rm \odot}$ as suggested by the models of Han et al.
(\cite{han1}, \cite{han2}) and backed-up by asteroseismology.
However, the minimum mass of a core helium burning
star can be as small as $0.3\,M_{\rm \odot}$.
In Fig.~\ref{massdistrib030} the companion mass distribution is plotted under
the extreme assumption that all sdBs have this minimum mass for core helium
burning (or the minimum mass allowed by other constraints). Looking at the
low mass regime and comparing the distribution with
Fig.~\ref{massdistribdetail} one immediately notices that this assumption
leads to unphysical results. The distribution of low mass companions peaks
at masses lower than $0.4\,M_{\rm \odot}$, which is very unlikely
especially for white dwarf companions.
Under this extreme assumption only the companion of PG\,1232$-$136 remains
more massive than the Chandrasekhar limit.
Furthermore the companions of PG\,1743$+$477 and HE\,0532$-$4503 still are
more massive than $1.0\,M_{\rm \odot}$ in this case. With just slightly
higher sdB masses the companion masses would exceed the Chandrasekhar limit.
\subsection{The inclination problem}
By plotting the companion masses versus inclination angles
(Fig.~\ref{massincl}) an anomaly becomes apparent. While the systems with
low mass companions cover all inclination angles with a slight
preference for high inclinations, the systems with massive compact companions
are found at low inclinations between $15^{\rm \circ}$ and $30^{\rm \circ}$.
Our sample has been drawn from the catalogue of Ritter \& Kolb (\cite{ritter}),
which is a compilation extracted from literature and not a systematic survey.
Hence selection effects cannot be quantified. Most of the low-mass,
high-inclination systems have been discovered by photometry
(eclipses and reflection effect), while all others stem from radial velocity
surveys. The radial velocity technique is biased against low inclinations and
low masses. Hence, massive systems at high inclinations should be found most
easily. However, except for KPD\,1930$+$2752, there is no high inclination object
among the subsample of massive compact companions.
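For randomly oriented orbital planes the probability of observing an inclination below $i_0$ is $1-\cos{i_0}$, so the clustering at low inclinations is easy to quantify. The count of seven systems below $30^{\circ}$ is read off Table~\ref{compmasses}; the estimate ignores all selection effects and is purely illustrative:

```python
import math

def prob_incl_below(i0_deg):
    """Random orbital orientation: P(i < i0) = 1 - cos(i0)."""
    return 1.0 - math.cos(math.radians(i0_deg))

p = prob_incl_below(30.0)   # ~0.13 for a single system
print(p, p**7)              # chance that all 7 massive systems lie below 30 deg
```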
One may speculate that such systems may have been overlooked,
because their spectra may look peculiar due to orbital smearing and are
therefore not classified as sdB stars.
We refrain from further speculations about selection effects and proceed to
search for an evolutionary scenario that can explain the formation of sdB
binaries with neutron star or black hole companions.
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg16.eps}}
\caption{Mass distribution of the unseen companion stars
(see Fig.~\ref{massdistribdetail}).
The lowest possible companion mass is plotted against the total
number of binaries under the assumption of the lowest possible sdB mass.}
\label{massdistrib030}
\end{center}
\end{figure}
\section{The formation of sdB+NS/BH binaries} \label{sec:form}
Neutron stars and stellar-mass black holes are the remnants of massive stars
ending their lives in supernova explosions. Detecting these exotic objects is
possible when they are in a close orbit with another star. If matter is
transferred from the companion star to the compact object, X-rays are emitted.
Only a small number of neutron stars or stellar-mass black holes have been found up to now.
On the other hand evolved, non-interacting binaries containing such objects
should exist, since X-ray binaries only represent a relatively short phase of
stellar evolution. Without ongoing mass transfer the companion remains
invisible, but should be detectable indirectly from the reflex motion of the
visible star. Badenes et al. (\cite{badenes}) discovered a
massive compact companion to a white dwarf and concluded that this companion is likely to be a neutron star.
However, Marsh et al. (\cite{marsh}) convincingly showed that the system is a double degenerate consisting of a low mass and a very high mass WD. Kulkarni \& van Kerkwijk (\cite{kulkarni}) performed an independent analysis with similar results. In this section we discuss in detail whether sdB stars with hidden neutron star or black hole
companions exist.
\begin{figure*}[t!]
\begin{center}
\includegraphics[angle=90,width=18cm]{14465fg17.ps}
\caption{Schematic diagram of formation scenarios leading to hot
subdwarf binaries with neutron-star (left hand panel) or black-hole (right
hand panel) companions.}
\label{formation}
\end{center}
\end{figure*}
The existence of sdB+NS/BH systems requires an appropriate formation channel.
The evolution that leads to such systems requires an initial binary,
consisting of a primary star that is sufficiently massive to produce a
neutron star or black hole, and a companion, the progenitor of the hot
subdwarf, of typically several solar masses. The initial orbital
period has to be quite large (a few to 20 years), so that mass
transfer only starts late in the evolution of the star, and these
systems generally experience two mass-transfer phases and one
supernova explosion (see Fig.~\ref{formation}). The short orbital periods
observed
for our systems imply that the second mass-transfer phase from the red
giant progenitor of the subdwarf to the compact companion had to be
unstable, leading to a common-envelope and spiral-in phase of the
compact object. The condition for unstable mass transfer constrains
the mass of the progenitor to be larger than the mass of the compact
object (otherwise, mass transfer would be stable and lead to a much
wider system, Podsiadlowski et al. \cite{podsi}).
Fig.~\ref{formation} illustrates the evolution that leads to
systems of this type for two typical examples. While this scenario can
explain most of our systems with high-mass compact components, the
inferred mass of the putative black hole in PG\,1232$-$136 is
larger than we would estimate ($\le 3\,M_\odot$) for a 0.5\,$M_\odot$
sdB star. This may suggest that this system has experienced another
mass-transfer phase after the two common-envelope phases in which mass
was transferred from the sdB star to the compact object. It should
also be noted that, while we assume here that
the mass of the subdwarf is $\sim 0.5\,M_\odot$, consistent with the
properties of the observed systems, the sdB mass range allowed by this
scenario is $0.3 - 1.1\,M_\odot$ for the neutron-star systems and $0.5
- 1.1\, M_\odot$ for the black-hole systems. Compared with the mass
range of $0.3 - 0.7\,M_\odot$ for the standard evolutionary
channel (Han et al. \cite{han1}, \cite{han2}), the subdwarf may therefore
be more massive. An independent determination of the sdB mass (e.g. by
obtaining parallaxes) could therefore help to verify this
scenario.
At the beginning of the second mass-transfer phase, these systems are
expected to pass through a short X-ray binary phase, lasting $\sim
10^5\,$yr, in which a neutron star may accrete up to $\sim
10^{-3}\,M_\odot$ and become a moderately recycled millisecond
pulsar (Podsiadlowski et al. \cite{podsi}). This links these systems to the
X-ray binary population
(in a sense, they are failed low-mass X-ray binaries). Population
synthesis estimates (Pfahl et al. \cite{pfahl}) suggest that up to one in
$10^4$ stars in
the Galaxy experiences this evolution, implying that of order $1\,\%$ of
all hot subdwarfs should have neutron-star or black-hole
companions. This means that tens of thousands of these systems could
exist in the Galaxy compared to just about $300$ known X-ray binaries.
The binary PSR\,J1802$-$2124, which consists of a millisecond pulsar and a CO white dwarf in close orbit ($P=0.7\,{\rm d}$, $M_{\rm WD}=0.78\,M_{\rm \odot}$) may have evolved in a similar way (Ferdman et al. \cite{ferdman}).
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg18.eps}}
\caption{Companion mass versus inclination. The solid squares mark compact companions (WD/NS/BH), the solid diamond MS or BD companions. The solid circles mark objects where both companion types are possible.}
\label{massincl}
\end{center}
\end{figure}
\section*{Part II: Synchronisation -- Theory and empirical evidence}
\section{Orbital synchronisation of sdB binaries \label{sec:tidal}}
The results presented above are based on the assumption of tidal
synchronisation. Since especially the discovery of sdB+NS/BH systems
challenges our understanding of stellar evolution, it is necessary to
investigate whether this assumption holds in the case of sdBs. A thorough
discussion of tidal synchronisation in sdB binaries both from the theoretical
and the observational point of view is therefore given.
\subsection{Theoretical timescales for synchronisation}
Which mechanism is responsible for orbital synchronisation in binaries is
still under debate. Theoretical timescales for synchronisation are given by
Zahn (\cite{zahn2}) and Tassoul \& Tassoul (\cite{tassoul}), but
unfortunately they are not consistent for stars with radiative envelopes and
convective cores like hot subdwarfs.
Zahn (\cite{zahn2}) was the first to calculate synchronisation and
circularisation timescales for main sequence stars in close binary systems.
Observations of eclipsing binaries were in good agreement with his theoretical
calculations for late type main-sequence stars with radiative cores and convective
envelopes. Tidal friction caused by the equilibrium tide, which forms under
the tidal influence of the close companion, is very efficient in this case
because convection connects the inner regions of the stellar envelope with its
surface. For radiative envelopes another mechanism is needed to explain the
observed degree of synchronism in early type main-sequence binaries. Dynamical tides,
which are excited at the boundary layer between the convective core and the
radiative envelope are thought to be radiatively damped at the stellar surface
and to transfer angular momentum outwards. This mechanism turns out to be much
less efficient and the predicted synchronisation timescales are too long to
explain the degree of synchronism in some early type main-sequence stars (e.g. Giuricin
et al. \cite{giuricin}).
Tassoul \& Tassoul (\cite{tassoul}) introduced another, hydrodynamical braking
mechanism. Tidally induced meridional currents in the non-synchronous binary
components should lead to synchronisation and circularisation of the system.
This mechanism is very efficient, but it was debated whether it is valid or
not (Rieutord \cite{rieutord}; Tassoul \& Tassoul \cite{tassoul2}). Claret et
al. (\cite{claret1}, \cite{claret2}) studied both mechanisms and compared them
to the available observations. Due to the necessary calibration of many
uncertain parameters a definitive answer as to which mechanism is in better
agreement with observation could not be given.
Applying the theory of tidal synchronisation to sdB binaries is not an easy
task. One of the key results of both theories is that tidal circularisation of
the orbit is achieved after the companions are synchronised.
This means that
once an orbital solution is found and the orbit turns out to be circular, both
companions can be considered synchronised without knowing their rotational
properties. This simple law cannot be used in the case of sdBs. The reason is
that close binary sdBs were formed via the CE ejection channel.
The common envelope phase is very efficient in circularising the orbit and
all known close binary sdBs have circular orbits or show only small
eccentricities ($\epsilon \leq 0.06$; Edelmann et al. \cite{edelmann};
M\"uller et al. \cite{mueller}; Napiwotzki et al. in prep.).
Stellar structure plays an important role. The synchronisation timescale of
Zahn (\cite{zahn2}) scales with $(R_{\rm C}/R)^{8}$, where $R_{\rm C}$ is the
radius of the convective core and $R$ the stellar radius. The larger the
convective core of a star, the shorter the time span until synchronisation is
reached.
In order to estimate the synchronisation times of the analysed binaries we
used the formulas of Zahn (\cite{zahn2}) and Tassoul \& Tassoul
(\cite{tassoul}).
\begin{eqnarray}
\label{eq:zahn}
t_{\rm sync}({\rm Zahn})&=&52^{-5/3}\left(\frac{R^{3}}{GM}\right)^{1/2}
\left(\frac{I}{MR^{2}}\right)\nonumber\\
& &\times\frac{\left(1+q\right)^{5/6}}{q^{2}}
E_{2}^{-1}\left(\frac{a}{R}\right)^{17/2}
\end{eqnarray}
Here $M=M_{\rm sdB}$, $R=R_{\rm sdB}$, $q=M_{\rm comp}/M_{\rm sdB}$, $a$ is
the separation of the companions, which can be calculated from the measured
orbital parameters using Kepler's third law, and $I$ is the moment of inertia
of the sdB star. We adopted the canonical sdB mass
($M_{\rm sdB}=0.47\,M_{\rm \odot}$) for these calculations. $E_{\rm 2}$ is a
tidal coefficient which is very sensitive to the structure of the star,
especially the size of the convective core. Here we use the first
approximation of Zahn (\cite{zahn2}) $E_{\rm n}=(R_{\rm C}/R)^{2n+4}$ and
adopt $R_{\rm C}/R \simeq 0.15$ and $\frac{I}{MR^{2}}\simeq 0.04$ derived from sdB models calculated by Han
(priv. comm.). For these models a hydrogen layer mass of $10^{-4}\,M_{\rm \odot}$ was chosen,
consistent with results from asteroseismology (e.g. Charpinet et al.
\cite{charpinet5}).
\begin{eqnarray}
\label{eq:tassoul}
t_{\rm sync}({\rm Tassoul})&=&5.35\times10^{2+\gamma-N/4}
\frac{1+q}{q}L^{-1/4}\nonumber\\
& &\times M^{5/4}R^{-3}P^{11/4}
\end{eqnarray}
In this equation $M$, $R$ (solar units) and $q$ are defined in the same way as
above. $P$ is the orbital period in days. The luminosity
$L=4\pi\sigma R^{2}T_{\rm eff}^{4}$ can be calculated using the
$T_{\rm eff}$ measurements given in Table~\ref{orbit}. The parameter $N$ is
connected with the different ways of energy transport within the outer layers
of the stellar envelope. It is assumed to be zero in stars with radiative
envelopes. The parameter $\gamma$ can be adjusted to account for large
deviations from synchronism and contributions of both companions. Here the
value $\gamma=1.6$ used by Claret et al. (\cite{claret1}) was chosen.
It has to be noted that this approach is only a crude approximation.
As stated by Claret et al. (\cite{claret2}), the differential equations which
govern the orbital parameters of a binary must be integrated. For this, EHB
evolution has to be taken into account. A detailed study of this problem is
beyond the scope of this paper and we shall use equations \ref{eq:zahn} and \ref{eq:tassoul} to
estimate the timescale of synchronisation.
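As a concrete illustration, the two timescales can be evaluated numerically. The Python sketch below implements equations \ref{eq:zahn} and \ref{eq:tassoul} literally as printed (prefactors included), derives the sdB radius from the surface gravity and the separation from Kepler's third law. The adopted $\log{g}$, $T_{\rm eff}$ and companion mass are illustrative assumptions rather than measurements of any particular star, and the Tassoul \& Tassoul expression is assumed to yield years, following common usage.

```python
import math

# Physical constants (SI)
G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30       # solar mass [kg]
R_SUN = 6.957e8        # solar radius [m]
L_SUN = 3.828e26       # solar luminosity [W]
SIGMA = 5.670e-8       # Stefan-Boltzmann constant [W m^-2 K^-4]
YEAR = 3.156e7         # year [s]
DAY = 86400.0          # day [s]

# Structure parameters adopted in the text (Han, priv. comm.)
RC_OVER_R = 0.15       # ratio of convective-core to stellar radius
I_OVER_MR2 = 0.04      # dimensionless moment of inertia I/(M R^2)
E2 = RC_OVER_R**8      # Zahn's tidal coefficient, E_n = (R_C/R)^(2n+4) with n = 2

def separation(P_day, m1, m2):
    """Orbital separation a [m] from Kepler's third law; masses in M_sun."""
    P = P_day * DAY
    return (G * (m1 + m2) * M_SUN * P**2 / (4.0 * math.pi**2))**(1.0 / 3.0)

def t_sync_zahn(P_day, m_sdb, m_comp, logg):
    """Eq. (1) evaluated as printed; result in seconds, logg in cgs dex."""
    M = m_sdb * M_SUN
    R = math.sqrt(G * M / (10.0**logg * 1e-2))   # radius from surface gravity
    q = m_comp / m_sdb
    a = separation(P_day, m_sdb, m_comp)
    return (52.0**(-5.0 / 3.0) * math.sqrt(R**3 / (G * M)) * I_OVER_MR2
            * (1.0 + q)**(5.0 / 6.0) / q**2 / E2 * (a / R)**(17.0 / 2.0))

def t_sync_tassoul(P_day, m_sdb, m_comp, logg, teff, gamma=1.6, N=0.0):
    """Eq. (2) as printed (M, R, L in solar units, P in days); result in years."""
    M = m_sdb * M_SUN
    R = math.sqrt(G * M / (10.0**logg * 1e-2))
    q = m_comp / m_sdb
    L = 4.0 * math.pi * SIGMA * R**2 * teff**4 / L_SUN   # Stefan-Boltzmann law
    return (5.35 * 10.0**(2.0 + gamma - N / 4.0) * (1.0 + q) / q
            * L**(-0.25) * m_sdb**1.25 * (R / R_SUN)**(-3.0) * P_day**2.75)

# Illustrative values (log g = 5.8, T_eff = 30 kK, M_comp = 0.5 M_sun assumed)
for P in (0.2, 0.5, 1.0, 1.5):
    tz = t_sync_zahn(P, 0.47, 0.5, 5.8) / YEAR
    tt = t_sync_tassoul(P, 0.47, 0.5, 5.8, 30000.0)
    print(f"P = {P:4.1f} d:  t_Zahn = {tz:9.2e} yr   t_Tassoul = {tt:9.2e} yr")
```

With these assumed parameters the sketch reproduces the qualitative behaviour discussed below: both timescales grow steeply with period, and the Zahn timescale exceeds the Tassoul \& Tassoul one by several orders of magnitude at periods near one day.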
It has to be pointed out that both theories predict the synchronisation timescale
to increase strongly with increasing orbital period and to
decrease with increasing sdB radius as
$t_{\rm sync}\sim P^\alpha$ and $t_{\rm sync}\sim R^{-\beta}$. In the
theory of Zahn (\cite{zahn2}) the exponents are $\alpha=17/3$ and $\beta=9$,
while the Tassoul \& Tassoul formula gives $\alpha=11/4$ and $\beta=3$.
In addition the synchronisation timescale decreases as the mass ratio
increases. Hence it will take lower mass companions longer to synchronise the sdB
star if the other parameters are constant.
\subsection{Synchronisation of our sample}\label{sec:syncsample}
The synchronisation timescale depends strongly on orbital period and
radius. Because the radii of the sdBs differ only slightly,
we display the results of our calculations as a function of orbital period in
Fig. \ref{fig:tsync}.
The synchronisation timescales are given in units of the average EHB
lifetime ($t_{\rm EHB}\simeq 10^{8}\,{\rm yr}$; Dorman et al. \cite{dorman}).
A binary is considered synchronised if the EHB
lifetime is much longer than the synchronisation time. Due to the larger exponents $\alpha$ and $\beta$, the slope of the
relations is steeper and the scatter larger for the Zahn (\cite{zahn2}) theory than
for the one proposed by Tassoul \& Tassoul (\cite{tassoul}).
What can be seen immediately is that the timescales of Zahn (\cite{zahn2}) and Tassoul \& Tassoul (\cite{tassoul})
differ by $2-8$ orders of magnitude.
Observational evidence is needed to constrain the timescales of tidal
synchronisation in close binary sdBs.
For periods shorter than $\simeq0.3-0.4\,{\rm d}$ both theories predict synchronised
rotation and are consistent with our observations. In the period range
$0.4-1.2\,{\rm d}$ only the synchronisation times of Tassoul \& Tassoul are consistent
with observation, while the timescales of Zahn quickly exceed the Hubble time.
If the orbital periods exceed $\simeq1.2-1.6\,{\rm d}$ the assumption of
synchronisation does not yield consistent results any more, although the
timescales calculated with the prescription of Tassoul \& Tassoul
(\cite{tassoul}) would still predict synchronised rotation.
According to our results, the period limit where synchronisation breaks down
lies near $1.2\,{\rm d}$. The binaries HE\,2150$-$0238 ($P=1.32\,{\rm d}$)
and PG\,1512$+$244 ($P=1.2\,{\rm d}$) cannot be solved consistently, although their periods
are only slightly longer than those of HE\,1047$-$0436 ($1.21\,{\rm d}$) and
PG\,0133$+$114 ($1.24\,{\rm d}$), which can be solved.
Despite its long period, HD\,171858 can be solved
consistently, making it the longest period ($P=1.6\,{\rm d}$) object in our sample
that is synchronised. Why is this? Besides the orbital period the size of the
star matters: The larger the star, the shorter the synchronisation time (see equations \ref{eq:zahn} and \ref{eq:tassoul}).
The gravity of HD\,171858 is lower than that of all other stars with periods
ranging from $1.2\,{\rm d}$ to $1.6\,{\rm d}$ by at least a factor of $2$. Hence its radius is larger
and synchronisation can be achieved more quickly than in the other stars with slightly shorter periods.
[CW\,83]\,1735$+$22 stands out among the longer-period binaries, because its
projected rotational velocity ($44\,{\rm km\,s^{-1}}$) is unusually high.
Because of its period ($P=1.28\,{\rm d}$) it is not necessarily expected to be synchronised.
This system is discussed in detail in Sect.~\ref{sec:hd188}.
We also found that the short period binary PG\,2345$+$318 ($P=0.24\,{\rm d}$) rotates slower than synchronised.
This peculiar system is discussed in detail in Sect.~\ref{sec:age}.
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg19.eps}}
\caption{Observed orbital period is plotted against the
synchronisation times of Zahn (\cite{zahn2}, open symbols) and
Tassoul \& Tassoul (\cite{tassoul}, filled symbols) both in units of
the average lifetime on the EHB ($10^{8}\,{\rm yr}$, Dorman et al.
\cite{dorman}). The solid horizontal line marks the border between
synchronisation within the EHB lifetime and synchronisation times
longer than the EHB lifetime. The squares mark sdB binaries, where the
primaries have been proven to be synchronised by light curve analysis
of eclipsing or ellipsoidal variable systems. The circles mark
binaries where synchronisation could be shown by asteroseismology. The
systems marked with diamonds could be solved consistently under the
assumption of synchronisation, while the systems marked with downward triangles
rotate faster than synchronised. PG\,2345$+$318 is the only sdB in our sample that rotates slower
than synchronised. It is marked with an upward triangle.}
\label{fig:tsync}
\end{center}
\end{figure}
In general the synchronisation mechanism of Zahn (\cite{zahn2}) is not
efficient enough to explain the observed level of synchronisation, while the
mechanism of Tassoul \& Tassoul (\cite{tassoul}) on the other hand appears to
be much too efficient. Nevertheless, care has to be taken when interpreting these
results, because both theories give timescales for the synchronisation of entire
stars from the core to the surface, while only the rotation at the surface can
be measured from line broadening. Goldreich \& Nicolson
(\cite{goldreich}) showed that in stars with radiative envelopes and Zahn's
braking mechanism at work, the synchronous rotation proceeds from the surface
towards the core of the star. This means that the outer layers are
synchronised faster than the rest of the star. This effect would explain the
discrepancy between Zahn's theory and our results at least to a certain
extent. Unfortunately, it has not yet been possible to quantify this effect
(see e.g. the review by Zahn \cite{zahn3}).
Tidal synchronisation does not necessarily lead to an equality of orbital and
rotational period. Higher spin resonances are possible and would change the
derived parameters significantly (in the case of the planet Mercury the ratio of
orbital and rotational period is $3/2$). To fall into a higher resonance, the
binary eccentricity has to be high at some point of its evolution. But close
sdB binaries underwent at least one common envelope phase (maybe two in case
of compact companions), which led to a circularisation of the orbit. The small
eccentricities in some of our programme binaries reported by Edelmann
et al. (\cite{edelmann}) and Napiwotzki et al. (in prep.) are considered to be still consistent with this
scenario. For these reasons, higher resonances are unlikely to occur in this
evolutionary channel.
\section{Empirical evidence for synchronisation}\label{sec:empirical}
The timescale of the synchronisation process is highly dependent on the tidal
force exerted by the companion. If the companion is very close and the orbital
period therefore very short, synchronisation is established much faster than
in binaries with longer orbital periods. If an sdB binary with given orbital
period is proven to be synchronised, all other sdB binaries with shorter
orbital periods should be synchronised as well. Although the timescales also
scale with sdB radius and companion mass, the orbital period is the dominant
factor because sdB radii differ only slightly and the dependence on
companion mass is weaker.
\subsection{Eclipsing and ellipsoidal variable systems}
Eclipsing sdB binaries are of utmost importance to test the synchronisation
hypothesis because the inclinations can be derived directly from their
light curves. It has been shown in Sect.~\ref{sec:lowmassm} that the parameters of the
eclipsing sdB+dM binaries PG\,1336$-$018, HS\,0705$+$6700 and HW\,Vir are consistent
with synchronised orbits. This essentially means that the
$v_{\rm rot}\sin{i}$ calculated for synchronous rotation, which can be obtained as
described in Sect.~\ref{sec:ana} once the orbital period, the radius of the
sdB, and the inclination angle are known, is consistent with the measured value.
In eclipsing systems, all these parameters can be measured.
This provides clear empirical evidence that at least the upper layers of the
stellar envelopes are synchronised to the orbital motion of the eclipsing sdB
binaries in our sample. We therefore conclude that all sdBs in close binaries
with orbital periods up to $0.12\,{\rm d}$ should be synchronised as well.
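The synchronous $v_{\rm rot}\sin{i}$ entering this comparison follows from the orbital period, the sdB radius and the inclination alone. A minimal sketch, where the period, radius and inclination are illustrative values for an HW\,Vir-like system rather than the measured parameters of any particular star:

```python
import math

R_SUN = 6.957e8   # solar radius [m]
DAY = 86400.0     # day [s]

def vsini_sync(P_day, radius_rsun, incl_deg):
    """Projected rotational velocity [km/s] expected for tidal locking.

    For synchronised rotation the equatorial speed is v_rot = 2*pi*R/P,
    projected onto the line of sight with sin(i)."""
    v_rot = 2.0 * math.pi * radius_rsun * R_SUN / (P_day * DAY)
    return v_rot * math.sin(math.radians(incl_deg)) / 1e3

# Assumed values: P = 0.101 d, R = 0.15 R_sun, i = 81 deg (illustrative only)
print(vsini_sync(0.101, 0.15, 81.0))   # roughly 74 km/s
```

Comparing such a predicted value with the measured $v_{\rm rot}\sin{i}$ is the consistency check applied throughout this section.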
Two well studied sdBs clearly show ellipsoidal variations in their light curves with
periods exactly half the orbital periods (KPD\,1930+2752, Bill\`{e}res et al. \cite{billeres00},
Maxted et al. \cite{maxted2}, Geier et al.,
\cite{geier}, is further discussed in Sect.~\ref{sec:lowmasswd};
KPD\,0422+5421, Koen et al. \cite{koen4}, Orosz \& Wade \cite{orosz}, is not part of our sample).
This alone is only an indication for tidal synchronisation, because the
light curve variations have to be present at the proper orbital phases as
well. To really prove synchronisation it is necessary that the stellar
parameters determined independently from the light curve analysis are
consistent with a synchronised orbit. This is the case for KPD\,0422$+$5421 as
well as KPD\,1930$+$2752. Both ellipsoidal variable systems have very short
periods of $\simeq0.1\,{\rm d}$ and high inclination. Otherwise ellipsoidal
variations are very hard to detect.
The most compelling evidence for synchronisation in a binary system with a period considerably longer than those of the above-mentioned systems is provided by the eclipsing sdB+WD binary KPD\,1946$+$4340 ($P=0.404\,{\rm d}$).
Bloemen et al. (\cite{bloemen}) derived highly accurate binary parameters from a spectacular high-S/N light
curve obtained by the Kepler mission. These results are fully consistent with the constraints we put on this
system (see Sect.~\ref{sec:lowmasswd}). We therefore conclude that sdB binaries with periods shorter than $P\simeq0.4\,{\rm d}$ should be synchronised.
Furthermore, the sdB+WD binary PG~0101$+$039 ($P=0.567\,{\rm d}$) shows very weak luminosity
variations at half the
orbital period detected in a 16.9 day long, almost uninterrupted light curve
obtained with the MOST satellite (Randall et al. \cite{randall2}). Geier
et al. (\cite{geier2}) showed that the sdB in this binary is most likely
synchronised. The empirical lower limit for tidal synchronisation in close
sdB binaries is therefore raised to $P\simeq0.6\,{\rm d}$.
\subsection{Asteroseismology}
An independent method to prove orbital synchronisation is provided by
asteroseismology. Van Grootel et al. (\cite{vangrootel}) were able to
reproduce the main pulsation modes of the short period pulsating sdB in the binary
Feige 48 ($P\simeq0.38\,{\rm d}$), derived the surface rotation from the
splitting of the modes and concluded that the subdwarf rotates synchronously.
Charpinet et al. (\cite{charpinet5}) reach a similar conclusion for the short
period eclipsing binary PG\,1336$-$018 ($P\simeq0.10\,{\rm d}$). Furthermore
they probed the internal rotation of the star below the surface layers by
applying a differential rotation law and showed that the sdB rotates as a
rigid body at least down to $0.55\,R_{\rm sdB}$. The remarkable consistency
of the binary parameters derived by asteroseismology (Charpinet et al.
\cite{charpinet5}), binary light curve synthesis (Vu\v ckovi\'c et al.
\cite{vuckovic2}) and the analysis presented here has to be pointed out
again (see Sect.~\ref{sec:lowmassm}). Asteroseismic ana\-lyses revealed that
sdB binaries up to orbital periods of about $0.4\,{\rm d}$ are synchronised.
We therefore conclude that all sdBs in close binaries with shorter periods
should be synchronised as well.
\section{Synchronisation challenged}\label{sec:challenge}
In Sect.~\ref{sec:syncsample} we have shown that synchronisation in our sample has been
established for binaries with periods below $\simeq1.2\,{\rm d}$.
This is corroborated by the theory of synchronisation,
although different versions of the theory give vastly different results. Empirical evidence sets a
limiting period of $0.6\,{\rm d}$. About half of our sample has periods below that limit
and should therefore be synchronised. These arguments are correct for the sample
but may not hold for individual objects.
We envisage two options: The subdwarf may not be core helium-burning (Sect.~\ref{sec:hd188}). Or an individual EHB star may be too young to have reached synchronisation (Sect.~\ref{sec:age}).
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg20.eps}}
\caption{$T_{\rm eff}-\log{g}$-diagram similar to Fig.~\ref{fig:tefflogg}. The black filled diamonds mark the known post-RGB binaries HD\,188112 (Heber et al. \cite{heber5}), NGC\,6121$-$V46 (V46 for short, O'Toole et al.
\cite{otoole}), HZ~22 (Sch\"onberner, \cite{schoenberner}, Saffer et
al., \cite{saffer97})
and SDSS\,J123410.37$-$022802.9 (J1234 for short,
Liebert et al. \cite{liebert}). The candidate post-RGB system [CW83]~1735$+$22 is included as well.
The helium main sequence and the
EHB-band
are superimposed with post-RGB evolutionary tracks from Driebe et al.
(\cite{driebe}) labelled by their masses.
}
\label{teffloggpostRGB}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{14465fg21.eps}}
\caption{$T_{\rm eff}-\log{g}$-diagram, same as
Fig.~\ref{fig:tefflogg} but restricted to the massive systems (black filled circles) described in
section~\ref{sec:highmassbh} and supplemented
by short-period systems ($\simeq0.1\,{\rm d}$, filled diamonds)
where synchronisation
has been proven empirically. The filled squares mark the longer
period binaries Feige\,48 ($\simeq0.38\,{\rm d}$), KPD\,1946$+$4340 ($\simeq0.40\,{\rm d}$) and PG\,0101$+$039
($\simeq0.57\,{\rm d}$), which are known to be synchronised. The
open square marks the non-synchronised binary PG\,2345$+$318.
The helium main sequence and the EHB band
are superimposed with EHB evolutionary tracks from Dorman et al.
(\cite{dorman}) labelled by their masses.
}
\label{teffloggmassive}
\end{center}
\end{figure}
\subsection{[CW\,83]\,1735$+$22 and post-RGB evolution}\label{sec:hd188}
The only sdB star known not to burn helium in the core is the single-lined close binary HD\,188112 (Heber et al. \cite{heber5}). According to its atmospheric parameters it is situated well below the EHB (see Fig.~\ref{teffloggpostRGB}). By interpolation of evolutionary tracks from Driebe et al. (\cite{driebe}) a mass of $0.23\,M_{\rm \odot}$ was derived, which could be verified directly, because an accurate parallax of this object was obtained by the Hipparcos satellite.
The different evolution of
so called post-RGB objects like HD\,188112 compared to EHB stars should affect their rotational properties. Post-RGB stars constantly shrink during their evolution towards the WD cooling tracks. Since these stars are not expected to lose angular momentum during the contraction, they have to spin up. In contrast to this a core helium-burning sdB star expands by a factor of about two within $\simeq100\,{\rm Myr}$ and is expected to spin down. Besides HD\,188112 some other objects are also considered to belong to this class (see Fig.~\ref{teffloggpostRGB}).
The post-RGB scenario may explain the unusual properties, especially the fast rotation, of the sdB binary {\bf [CW\,83]\,1735$+$22} (see Sect.~\ref{sec:syncsample}). The star is among the hottest in our sample and it lies far from the EHB band (see Fig.~\ref{teffloggpostRGB}). According to the mass tracks of Driebe et al. (\cite{driebe}) [CW\,83]\,1735$+$22 would have a mass of about $0.3\,M_{\rm \odot}$ (see Fig.~\ref{teffloggpostRGB}). Such a star should shrink by a factor of $5.5$ within $0.3\,{\rm Myr}$ (Driebe et al. \cite{driebe}), which is much shorter than the synchronisation time. Hence we regard its high projected rotational velocity as strong evidence that [CW\,83]\,1735$+$22 is a post-RGB star just like HD\,188112. Since the lifetime of such an object is predicted to be only a few million years, such stars should be rare. The predicted low mass of [CW\,83]\,1735$+$22 can be verified in the way described in Heber et al. (\cite{heber5}) once the GAIA mission has measured an accurate trigonometric parallax of this star.
One may speculate that the different rotational properties of post-RGB stars may have an influence on the synchronisation process if they are in close binary systems. The spin-up caused by the shrinkage of the star may counteract the spin-down caused by the tidal influence of the companion. Should post-RGB stars have longer synchronisation timescales than EHB stars, this may be invoked as a
convenient explanation for the putative high fraction of sdB binaries with massive compact companions. If these binaries had post-RGB primaries and were not synchronised, the derived companion masses would be wrong.
This scenario is considered to be unlikely. First of all, we would expect post-RGB stars to rotate faster than synchronised. For the putative sdB+NS/BH systems, low projected rotational velocities are measured. If the sdBs rotated even faster than synchronised, the inclination angles would be even lower and the derived companion masses would go up.
Another strong argument against a post-RGB nature of the sdBs in the candidate systems with massive compact companions is their location in the $T_{\rm eff}-\log{g}$ diagram (see Fig.~\ref{teffloggmassive}). All these binaries are found on or near the EHB, while the known post-RGB stars are obviously not concentrated near the EHB (see Fig.~\ref{teffloggpostRGB}). We therefore conclude that the sdBs with putative massive compact companions are post-EHB rather than post-RGB stars.
\subsection{The role of the stellar age}\label{sec:age}
Up to now we have assumed that the sdB stars already have spent a significant
part of their total life time on the EHB.
In the canonical picture it might be possible to estimate the age of an
individual star by comparing its position in the (T$_{\rm eff}$, $\log
g$)-diagram to EHB evolutionary tracks (e.g. Dorman et al. \cite{dorman}), as
the core mass is fixed at the core helium flash. In binary population models,
however, a degeneracy between mass and age arises as there is a spread of sdB
masses (see Zhang et al. \cite{zhang}).
Because our sample stars nicely populate the canonical EHB band (see
Figs.~\ref{fig:tefflogg}, \ref{fig:teffloggsync}, \ref{fig:teffloggnosync}),
we shall assume that a star is young if it is on or
close to the zero-age extreme horizontal branch (ZAEHB) and old if not.
Note that the speed of evolution along the EHB tracks is nearly constant.
We shall now explore whether some of our targets might possibly be too young
to be synchronised. We shall start with PG\,2345$+$318 and inspect the sample in
the light of the lesson to be learnt.
\subsection{PG\,2345$+$318}
{\bf PG\,2345$+$318} is a short period ($0.24\,{\rm d}$) sdB binary.
We derive a high companion mass of $1.9\pm0.7\,M_{\rm \odot}$ at an
inclination angle of about $22^{\circ}$ indicating that the companion is another massive compact object, i.e. a neutron star or a massive white dwarf. At such a low inclination eclipses are not expected to occur.
However, Green et al. (\cite{green}) presented a preliminary light curve of
this star and detected a shallow eclipse, probably caused by a white dwarf.\footnote{Besides KPD\,0422$+$5421 (Orosz \& Wade
\cite{orosz}), PG\,0941$+$280 (Green et al. \cite{green}) and KPD\,1946$+$4340 (Bloemen et al. \cite{bloemen}) this is just the
fourth such system known.} Without the additional information from the light
curve this object would therefore be identified as another candidate sdB binary with massive
compact companion. The detection of eclipses immediately rules out this
scenario. The inclination angle has to be near $90^{\circ}$ and the
companion a white dwarf with a mass of $0.38\,M_{\rm \odot}$ according to the constraint set
by the binary mass function. This means that the sdB star in this binary
rotates more slowly than synchronised and proves that such objects exist
among binaries with short orbital periods. The most reasonable explanation for this may be that the
system is very young and the synchronisation process not finished yet.
The atmospheric parameters of this star (see Table~\ref{orbit}) place it indeed near the zero-age EHB (Fig.~\ref{teffloggmassive}), although they have somewhat larger errors than most other stars due to the lack of high quality low resolution spectra (Saffer et al. \cite{saffer}). But the light curve presented by Green et al. (\cite{green}) reveals more
information, which corroborates this scenario. An interesting feature is the presence of a
shallow reflection effect and a weak secondary minimum, which provides
evidence that the white dwarf contributes significantly to the optical flux. This in
turn means that the white dwarf must be young (assuming a luminosity of
$0.5\,L_{\rm \odot}$ evolutionary tracks imply an age of the order of
$10^{6}\,{\rm yr}$) and is another piece of evidence that the system is too young to be
synchronised. Since no light curve solution for PG\,2345$+$318 is published
yet, the discussion of this object must remain preliminary.
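The strong dependence of the derived companion mass on the assumed inclination can be illustrated with the binary mass function, $f(M) = M_{\rm comp}^{3}\sin^{3}{i}/(M_{\rm sdB}+M_{\rm comp})^{2} = P K^{3}/2\pi G$. The sketch below solves this relation for the companion mass by bisection; the radial-velocity semi-amplitude $K=140\,{\rm km\,s^{-1}}$ is a hypothetical value chosen for illustration (it is not quoted here), so the numbers demonstrate the inclination--mass degeneracy rather than the actual solution for PG\,2345$+$318.

```python
import math

def mass_function(K_kms, P_day):
    """Binary mass function f(M) in solar masses; K in km/s, P in days."""
    G = 6.674e-11
    M_SUN = 1.989e30
    K = K_kms * 1e3
    P = P_day * 86400.0
    return P * K**3 / (2.0 * math.pi * G) / M_SUN

def companion_mass(K_kms, P_day, m_sdb, incl_deg):
    """Solve (M2 sin i)^3 / (M1 + M2)^2 = f(M) for M2 by bisection.

    The left-hand side is monotonically increasing in M2, so bisection
    on [1e-3, 100] M_sun converges to the unique root."""
    f = mass_function(K_kms, P_day)
    s = math.sin(math.radians(incl_deg))
    lo, hi = 1e-3, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (mid * s)**3 / (m_sdb + mid)**2 > f:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical K = 140 km/s for a P = 0.24 d binary with M_sdB = 0.47 M_sun:
# an edge-on orbit yields a white-dwarf-like mass, a low inclination a
# companion above the Chandrasekhar limit.
for i in (90.0, 22.0):
    print(f"i = {i:4.1f} deg -> M_comp = {companion_mass(140.0, 0.24, 0.47, i):.2f} Msun")
```

This is exactly the ambiguity resolved for PG\,2345$+$318 by the detection of eclipses, which pins the inclination near $90^{\circ}$.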
\subsection{Are the systems with massive compact companions too young to be
synchronised?}\label{sec:young}
What are the implications for our candidate sample of sdB binaries with
massive compact companions? The orbital periods of these binaries range from
$0.26\,{\rm d}$ to $0.52\,{\rm d}$ where synchronisation should be established
according to the results presented in Sect.~\ref{sec:syncsample} and \ref{sec:empirical}. Given these short orbital periods, the
binaries in question should be synchronised.
Even if the candidate systems were bona-fide EHB stars,
they may just be too young to be synchronised. In Fig.~\ref{teffloggmassive} we plot the positions of the candidate systems with compact companions and compare them to the calibrators Feige\,48, PG\,0101$+$039 and KPD\,1946$+$4340.
It is obvious that the first two of these synchronised sdBs lie closer to the terminal age EHB than to the
zero age EHB. KPD\,1946$+$4340 has already evolved away from the EHB and is most likely burning helium in a shell. These are indications that these binaries are relatively old. We also note that the
position of PG\,1743$+$477 nearly coincides with that of PG\,0101$+$039. From this coincidence we would expect
it to be synchronised and, hence, the constraint on the companion mass to be reliable.
We also plot the position of the non-synchronised system
PG\,2345$+$318 in Fig.~\ref{teffloggmassive} which lies near the zero-age EHB.
PG\,1232$-$136 and PG\,1432$+$159 are found close to PG\,2345$+$318 and near
the zero-age EHB and thus may be rather young as well. The same holds for
PG\,1101$+$249 which is considerably hotter but also situated very near
the zero-age EHB.
The remaining candidate sdB binaries with putative massive compact companions are in a similar evolutionary stage as the
synchronised systems in the middle of the EHB band. We conclude that some but not all sdBs in the candidate systems could be too young to have reached synchronisation.
\section{Summary and Outlook}\label{sec:summary}
We have analysed a sample of 51 sdB stars in close single-lined binary systems.
This included 40 systems for which the orbital parameters have been determined
previously. The subsample comprises half of all systems known so far.
From high resolution spectra taken with different instruments the projected rotational velocities of these stars have been derived to an unprecedented precision. Accurate measurements of the surface gravities have mostly been
taken from the literature.
Assuming orbital synchronisation and an sdB mass distribution as suggested by
binary population synthesis models as well as by asteroseismology,
the masses and the nature of the unseen
companions could be constrained in 31 cases.
In only five cases were we unable to classify the companions unambiguously. These companions
may either be low-mass main-sequence stars or white dwarfs.
The companions to seven sdBs could be clearly
identified as late M stars. One binary may have a brown dwarf companion.
The unseen companions of nine sdBs are white dwarfs with typical masses, one WD companion has a very low mass.
In eight cases (including the well-known system KPD\,1930$+$2752)
the companion mass exceeds $0.9\,M_{\rm \odot}$. Four of the companions even
exceed the Chandrasekhar limit indicating that they may be neutron
stars; even a stellar mass black hole is possible for the most massive
companions.
The basic assumption of orbital synchronisation in close sdB binaries has been
discussed in detail. Our analysis method yielded consistent
results for binaries up to an orbital period of $\simeq1.2\,{\rm d}$.
Theoretical timescales for synchronisation were calculated using two different
approaches. The theory of Zahn (\cite{zahn2}) was found to be too inefficient
while that of Tassoul \& Tassoul (\cite{tassoul}) predicts too short timescales.
The predictions from both theories are strongly discrepant, calling for
empirical constraints.
Independent observational evidence for synchronisation in sdB binaries
comes from light curve analyses of eclipsing, ellipsoidally deformed, and pulsating sdBs.
Given this evidence, sdB binaries with periods shorter than
$\simeq0.6\,{\rm d}$ should be synchronised. This includes all of the putative
massive systems.
Hence, an evolutionary model for the origin of sdB stars with neutron star or
black hole companions was devised indicating that common envelope evolution is
indeed capable of producing such systems, though at a lower rate than observed.
An appropriate formation channel includes two phases of unstable
mass transfer and one supernova explosion.
The distribution of the inclinations of the systems of normal mass appears to
be consistent with expectations, whereas a lack of high inclinations became
obvious for the massive systems.
One star in the sample rotates fast despite its rather long orbital period. This, as well as its
position far from the EHB band, hints at a post-RGB nature. Post-RGB stars are expected to be spun up by their ongoing
contraction.
The larger number of putative massive companions in low inclination systems is puzzling.
Therefore, we investigated alternative interpretations. The fraction of massive unseen companions can only be lowered if the sdBs themselves have masses much lower than the anticipated range of
$0.43 - 0.47\,M_{\rm \odot}$ for EHB stars. Evolutionary calculations
showed that EHB stars with masses as low as
$0.30\,M_{\rm \odot}$ can be formed if helium ignites under non-degenerate
conditions but should be very rare. Assuming such low sdB masses, only one unseen companion remains
more massive than the Chandrasekhar limit.
This fraction of $3\%$ is roughly
consistent with theoretical predictions.
Whether the sdB masses are low can be checked directly as
soon as accurate parallaxes of
these relatively bright stars become available through the GAIA mission.
The putative massive sdB systems might not be synchronised if their age
is much less than anticipated. That this can happen is witnessed by
PG\,2345$+$318, a short-period sdB binary in our sample, that we would have
classified as a low-inclination massive system as well, if it were not proven
by eclipses to be highly inclined. Hence the system is not synchronised
despite its short period ($0.24\,{\rm d}$).
Due to a degeneracy between mass and age, it is
difficult to estimate the sdB's age without knowing its mass. Adopting the
canonical mass, we nevertheless estimated the stars' ages from their position
in the EHB band. Indeed, PG\,2345$+$318 is located right on the zero-age EHB
as are the massive candidates PG\,1232$-$136, PG\,1432$+$159 and PG\,1101$+$249.
These stars may possibly be too young to have reached synchronisation. Hence
the companion masses we derived would be spurious.
However, there is no indication that the other massive systems could be young.
Even if we dismiss three candidates because they may be too young and assume
that the others are of low mass, PG\,1743$+$477 and, in particular,
HE\,0532$-$4503 remain as massive candidates whose companions have masses close
to or above the Chandrasekhar mass.
Different approaches may be chosen to directly verify the presence of neutron
star or black hole companions in our candidate systems.
None of the sdBs in our target systems fills its Roche lobe. No mass transfer
by Roche lobe overflow to the unseen companion can occur and
therefore no X-ray emission is expected.
The ROSAT all-sky survey catalogue (RASS, Voges et al. \cite{voges}) was checked and, indeed, no sources were detected at the positions of any of the candidate sdB+NS/BH systems.
The detection limit of this survey reaches down to about $10^{-13}\,{\rm erg\,cm^{-2}s^{-1}}$.
However, sdB stars are expected to have weak winds. Hence accretion from the
sdB wind might result in faint X-ray emission. This occurs in the bright sdO+WD system HD\,49798 (Mereghetti et al., \cite{mereghetti}). Although stellar wind mass loss rates in sdBs are predicted to be small ($<10^{-12}\,M_{\rm \odot}{\rm yr^{-1}}$, e.g. Vink \& Cassisi \cite{vink}; Unglaub \cite{unglaub3}), they may be sufficient
to cause detectable X-ray flux powered by wind accretion. X-ray telescopes like Chandra or XMM-Newton may
be sensitive enough to detect such weak sources. Pulsar signatures of rapidly spinning
neutron star companions may be detectable with radio telescopes.
Tidal forces by the companion cause an ellipsoidal deformation of the primary
in close binary systems. This deformation appears as a variation of light at
half the orbital period. Two very close subdwarf binaries with orbital periods
of $\simeq\,2\,{\rm h}$ and high orbital inclination show light variations of
about $1\%$, which can be detected from the ground. Performing binary
light curve synthesis it was possible to derive the masses of the binary
components (Orosz \& Wade \cite{orosz}; Geier et al. \cite{geier}). Signatures
of ellipsoidal deformation in the light curves of binaries with longer orbital
periods and lower inclination are much weaker ($\simeq0.01\%$, Drechsel priv.
comm.; Napiwotzki et al. in prep.) and therefore not detectable from the ground. The existence of such very
shallow variations has been proven for the subdwarf binary PG\,0101$+$039 with
an orbital period of $13.7\,{\rm h}$ using a light curve of almost
$17\,{\rm d}$ duration taken with the MOST satellite. The ellipsoidal
variation was found to be $0.025\%$ (Geier et al. \cite{geier2}).
The full potential of high precision photometry for the analysis of sdB binaries has most
recently been demonstrated by Bloemen et al. (\cite{bloemen}), who analysed a Kepler light
curve of the eclipsing sdB+WD binary KPD\,1946$+$4340. High precision light curves of the best candidates in our sample should be measured with HST. The nature of their unseen companion could then be clarified.
Most of the candidate massive systems have low orbital inclination.
High inclination systems must exist as well. For these, a determination of the orbital parameters is
sufficient to put a lower limit on the companion mass by calculating the
binary mass function. If this lower limit exceeds the Chandrasekhar mass and no sign of a companion
is visible in the spectra, the existence of a massive compact companion is
proven without making any additional assumptions. The Hyper-MUCHFUSS project
(Hypervelocity stars or Massive Unseen Companions to Hot Faint Underluminous
Stars from SDSS, Geier et al. in prep.) was launched in 2007. One of the aims
of this project is to search for sdB binaries with massive compact companions
at high inclinations in a sample of stars selected from the SDSS data base.
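The mass-function argument above can be made concrete. For a circular orbit, the binary mass function $f(M)= (M_2 \sin i)^3/(M_1+M_2)^2= P K^3/(2 \pi G)$ follows from the measured period $P$ and radial-velocity semi-amplitude $K$ alone; setting $\sin i= 1$ then yields a strict lower limit on the companion mass. The following sketch is illustrative only (hypothetical helper names; a canonical sdB mass of $0.47\,M_{\rm \odot}$ is assumed):

```python
import math

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30       # solar mass [kg]

def mass_function(period_days, K_kms):
    """Binary mass function f(M) = P K^3 / (2 pi G), circular orbit,
    returned in solar masses."""
    P = period_days * 86400.0      # orbital period [s]
    K = K_kms * 1.0e3              # RV semi-amplitude [m/s]
    return P * K**3 / (2.0 * math.pi * G) / M_SUN

def min_companion_mass(period_days, K_kms, m_sdb=0.47):
    """Minimum companion mass (sin i = 1), found by bisection on
    g(m2) = m2^3 / (m_sdb + m2)^2 - f(M); g is monotonic in m2.
    All masses in solar units."""
    f = mass_function(period_days, K_kms)
    lo, hi = 1.0e-4, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid**3 / (m_sdb + mid)**2 < f:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because $f(M)$ grows steeply with $K$, a single well-measured orbit whose mass function already exceeds the Chandrasekhar limit would prove a massive compact companion without any synchronisation assumption.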
\begin{acknowledgements}
We would like to thank Z. Han for providing us with stellar structure models
of sdB stars. We thank E. M. Green, N. Reid and L. Morales-Rueda for sharing
their data with us. We are grateful to R. H. \O stensen and S. Bloemen, who provided us with
information about new detections or non-detections of indicative features in
sdB light curves, as well as H. Drechsel for modelling such light curves for us.
S. G. was supported by the Deutsche Forschungsgemeinschaft under grant
He~1354/40-3. Travel to La Palma for the observing run at the WHT was funded by DFG
through grant He 1356/53-1.
\end{acknowledgements}
\section{Introduction}
\label{intro}
The study of phonon dynamics in the context of nanoscale solid-state heat transport has received considerable attention \cite{ThermalSpectroscopy2011, Mojtaba2016, Mojtaba2018, MojtabaAPL, Cahill2001, Johnson, Minnich2012, Broido2010, Esfarjani2011, Pop, Pop_micro, NanoTransport2, Tian, Zebarjadi}. One area of significant interest is the study of phonon relaxation-time and free path distributions \cite{McGaughey2010, ThermalSpectroscopy2011, Maznev2011, McGaughey, Dames2015, Minnich2012, Lingping_nature, Pop_micro}. This information is required for modeling heat transport at the kinetic level, which becomes necessary due to the failure of Fourier-based analyses at such small scales. The applications of this study include improved heat management in nanoelectronic circuits and devices \cite{Yu_nature, Wingert, Pop, Pop_micro, Cahill2001, Cahill1997}, microelectromechanical sensors \cite{NanoTransport2} and nano-structured materials for improved thermoelectric conversion efficiency \cite{Biswas, Hochbaum, Boukai, Tian, Zebarjadi, Kraemer}.
Purely theoretical approaches such as density functional theory (DFT) have been widely used to predict phonon properties \cite{Broido2010, Esfarjani2011}. However, such approaches have yet to reach a level of maturity where they can replace experimental measurements, in part due to their failure to consistently reproduce experimental observations \cite{Esfarjani2011}. Another popular (alternative) class of approaches relies on extracting material constitutive information from thermal spectroscopy experiments \cite{ ThermalSpectroscopy2011, Mojtaba2016, Mojtaba2018, MojtabaAPL, Johnson, Minnich2012, Zebarjadi}. However, the analysis of thermal spectroscopy data remains a challenging task. One typical approach consists of extracting the cumulative distribution function (CDF) of thermal conductivity as a function of the free path, $F(\Lambda)$ (see section~\ref{Governing_equ} for more detail), from the experimentally measured temperature relaxation profiles, by invoking the concept of ``effective thermal conductivity'' and proceeding to match the experimentally measured response to solutions of the heat conduction equation with the thermal conductivity (or thermal diffusivity) treated as an adjustable, ``effective'', property \cite{Lingping_nature, ThermalSpectroscopy2011}. Unfortunately, as discussed extensively before~\cite{Mojtaba2016}, this procedure implicitly assumes that heat transport is Fourier-like, which is only justified under fairly restrictive conditions (late times and large scales) that are not always satisfied under experimental conditions. This is highlighted in figure \ref{k_eff_issue}, adapted from \cite{Mojtaba2016} (figure 13 of \cite{Mojtaba2016}). The figure shows calculations for the one-dimensional transient thermal grating (1D-TTG) geometry~\cite{Mojtaba2016} in Si for two grating sizes.
In both cases, the reconstruction based on the effective thermal conductivity approach (denoted by ``$\kappa_\text{eff}$-based'') predicts an inaccurate thermal response compared to the one predicted by the methodology proposed in \cite{Mojtaba2016} and further analyzed here (denoted by ``$\tau_\omega$-based''). The true thermal response is denoted by ``BTE'' (Boltzmann transport equation)---see section \ref{Governing_equ} for more detail. It is clear that the error in the $\kappa_\text{eff}$-based reconstruction increases as the length scale decreases.
\begin{figure}[htbp]
\hspace{-0.5cm}
\centering
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{1a.pdf}
\caption{100 nm}
\label{k_eff_issue_a}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{1b.pdf}
\caption{400 nm}
\label{k_eff_issue_b}
\end{subfigure}
\caption{Thermal response in 1D-TTG experiment for 100 nm (a) and 400 nm (b) grating sizes. Comparison between the Boltzmann transport equation solution (denoted ``BTE''), Boltzmann transport equation solution with $\tau_\omega$ reconstructed from the BTE solution using the optimization method proposed in \cite{Mojtaba2016, Mojtaba2018} and further analyzed here (denoted by ``$\tau_\omega$ based''), Fourier heat conduction solution with effective thermal conductivity calculated using the well-known suppression function for a thermal grating~\cite{Hua2014, Johnson} (denoted by ``$\kappa_\text{eff}$ based'') and exponential response obtained from solution of Boltzmann transport equation \cite{Mojtaba2016} (denoted by ``Modified $\alpha_\text{eff}$'').}
\label{k_eff_issue}
\end{figure}
Figure \ref{k_eff_issue_2} shows examples of reconstructing Si properties in a 2D-dots geometry~\cite{Lingping_nature} from synthetic (numerical) data, adapted from \cite{Mojtaba2018} (figure 10 of \cite{Mojtaba2018}). The two $\kappa_\text{eff}(L)/\kappa$ plots (figure \ref{k_eff_issue_2}a) correspond to the reconstructed cumulative effective heat conductivity functions, as a function of the system length scale, based on two different Al-Si interface models (transmissivity functions) that were used in the numerical experiment (Monte-Carlo solution of the BTE); neither is able to recover the material thermal conductivity (reach the value of 1). This is because the experiment does not satisfy the late-time and large-scale criteria needed for the $\kappa_\text{eff}$-based approach to be valid (see \cite{Mojtaba2018} for more detail). Consequently, the two reconstructed $F(\Lambda)$ shown in figure \ref{k_eff_issue_2}b, corresponding to reconstructions based on these effective heat conductivities, fail to recover the true CDF.
\begin{figure}[htbp]
\hspace{-0.5cm}
\centering
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{2a.pdf}
\caption{}
\label{k_eff_issue_2_a}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{2b.pdf}
\caption{}
\label{k_eff_issue_2_b}
\end{subfigure}
\caption{Reconstruction results using the effective thermal conductivity approach for the 2D-dots geometry, based on synthetic data generated by solution of the BTE \cite{Mojtaba2018}. The effective thermal conductivities do not match if different interface models (DMM denotes diffuse mismatch model; expt. denotes experimentally determined transmissivities~\cite{expt_trans}) are used when generating the synthetic data (a). As a result, the corresponding $F(\Lambda)$ do not match and are each different from the true $F(\Lambda)$ (b).}
\label{k_eff_issue_2}
\end{figure}
As noted above, in response to the failure of $\kappa_\text{eff}$-based methods to provide accurate reconstructions at all scales, in \cite{Mojtaba2016, Mojtaba2018, MojtabaAPL} we developed and extensively validated an optimization-based methodology for directly reconstructing phonon relaxation-times from thermal spectroscopy data. In this work, we attempt to address some more fundamental questions, such as the uniqueness of reconstructed quantities (e.g., $F(\Lambda)$), as well as their ability to reproduce the experimental results. In particular, we show that reconstructions that use $F(\Lambda)$ (as a means of defining ``effective thermal conductivities'') fail to guarantee a unique thermal response. This can be understood by noting that it is the relaxation-time and {\it not} $F(\Lambda)$ which appears as a primary material-parameter input to the BTE. Specifically, we show that the same $F(\Lambda)$ can be arrived at from multiple relaxation-time distributions, which, in general, will correspond to multiple thermal responses. This observation seriously questions the reliability of approaches which use $F(\Lambda)$ in the reconstruction.
We also study the reliability of the solutions returned by the optimization algorithm proposed in \cite{Mojtaba2016} and in particular the solution uniqueness and sensitivity to noisy measurements. Our results show that the optimized solution exhibits properties associated with a unique minimum; furthermore, the solution is sufficiently robust to noisy measurements.
The remainder of the paper is organized as follows. In section \ref{Governing_equ}, we briefly overview the governing equations, namely the linearized BTE and related conservation laws. In section \ref{free_path_material}, we show theoretically and numerically that more than one relaxation-time distribution, and consequently more than one material thermal response, can correspond to the same $F(\Lambda)$; hence, the latter cannot be used to study thermal behavior in all regimes. In section \ref{Unique_analysis}, we investigate the uniqueness of the solution associated with our previously proposed algorithm, which instead reconstructs the relaxation-times function; we also investigate the sensitivity of this approach to the uncertainty in the temperature measurements from a Bayesian perspective. Finally, in section \ref{Conclude} we provide a summary of our work and suggestions for future improvements.
\section{Boltzmann transport equation}
\label{Governing_equ}
Given the small temperature differences usually associated with thermal spectroscopy experiments, here we consider the linearized BTE
\begin{equation} \label{BTE}
\frac{ \partial e^d } { \partial t } + \textbf{v}_{\omega} \cdot \nabla_{\textbf{x}} e^d = -\frac{ e^d- (de^\text{eq}/dT)_{T_\text{eq}} \Delta \widetilde{T} } { \tau_{\omega} } ,
\end{equation}
where $e^d= e^d (t, \mathbf{x}, \omega, \mathbf{\Omega})= e-e^\text{eq}_{T_\text{eq}}= \hbar \omega (f- f^\text{eq}_{T_\text{eq}}) $ is the deviational energy distribution, $\omega$ is the phonon frequency, $\mathbf{\Omega}$ is the phonon traveling direction, $f= f (t, \mathbf{x}, \omega, \mathbf{\Omega})$ is the occupation number of phonon modes, $\textbf{v}_{\omega}= \textbf{v} (\omega)$ is the phonon group velocity, $\tau_{\omega}= \tau (\omega)$ is the frequency-dependent relaxation-time, also referred to here as the ``relaxation-times function'', and $\hbar$ is the reduced Planck constant. Here and in what follows, we use $\omega$ to denote the dependence on both frequency and polarization.
The above equation is linearized about the equilibrium temperature $T_\text{eq}$, to be understood here as the experimental reference temperature. In general, $\tau_{\omega}= \tau (\omega, T)$; however, as a result of the linearization, $\tau_{\omega}= \tau (\omega, T_\text{eq}) \equiv \tau (\omega)$; in other words, the solutions (and associated reconstruction) are valid for the experiment baseline temperature $T_\text{eq}$. Also, $(de^\text{eq}/dT)_{T_\text{eq}}= \hbar \omega ( df^\text{eq}_T/dT ) \vert_{T_\text{eq}}$ and $f^\text{eq}_{T}$ is the Bose-Einstein distribution with temperature parameter $T$, given by
\begin{equation} \label{BE}
f^\text{eq}_{T} (\omega) = \frac{1} {\exp (\hbar \omega/ k_B T)- 1} ,
\end{equation}
where $k_B$ is Boltzmann's constant. Finally, $\Delta \widetilde{T}= \Delta \widetilde{T} (t, \mathbf{x})= \widetilde{T}- T_\text{eq}$ is referred to as the deviational pseudo-temperature ($\widetilde{T} (t, \mathbf{x} )$ is the pseudo-temperature). Note that the deviational pseudo-temperature, which is different from the deviational temperature defined below, is defined by the energy conservation statement \cite{ARHT}
\begin{equation}
\int_{\mathbf{\Omega}} \int_{\omega} \left[ \frac{C_\omega} {\tau_{\omega}} \Delta \widetilde{T} - \frac{e^d} {\tau_{\omega}} D_{\omega} \right] d \omega d \mathbf{\Omega}= 0 , \label{pseudotemperature}
\end{equation}
in which $D_{\omega}= D(\omega)$ is the density of states, $C_{\omega}= C (\omega; T_\text{eq})= D_{\omega} (de^\text{eq}/dT)_{T_\text{eq}}$ is the frequency-dependent volumetric heat capacity, and $d \mathbf{\Omega}= \sin (\theta) d \theta d \phi$ represents the differential solid angle element such that $\theta$ and $\phi$ refer to the polar and azimuthal angles in the spherical coordinate system, respectively. The temperature $T (t, \mathbf{x})$ is computed from
\begin{equation}
\int_{\mathbf{\Omega}} \int_{\omega} \left[ C_{\omega} \Delta T- e^d D_{\omega} \right] d \omega d \mathbf{\Omega}= 0 , \label{temperature}
\end{equation}
where $\Delta T (t, \mathbf{x})= T- T_\text{eq}$ is the deviational temperature. The frequency-dependent free path is given by
\begin{equation}
\Lambda_{\omega}= v_{\omega} \tau_{\omega} ,
\label{lambda}
\end{equation}
where $v_{\omega}= || \mathbf{v}_{\omega} ||$ is the group velocity magnitude. The cumulative distribution function (CDF) of thermal conductivity as a function of the free path, introduced in section \ref{intro}, is defined as $F (\Lambda):= \frac{1} {3 \kappa} \int_{\omega^*(\Lambda)} C_{\omega} v_{\omega}^2 \tau_{\omega} d \omega$, where $\omega^*(\Lambda)$ is the set of modes such that $\omega^*(\Lambda)= \{\omega | \Lambda_{\omega} \leq \Lambda\}$; similarly, the corresponding probability density function (of thermal conductivity) is given by $\mathfrak{f}= \frac{dF}{d\Lambda}$.
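As an illustration of this definition, a discrete estimate of $F(\Lambda)$ can be obtained by sorting the modes by free path and accumulating their conductivity contributions $\frac{1}{3} C_{\omega} v_{\omega}^2 \tau_{\omega}$. The sketch below is a minimal stand-alone example; the mode arrays (C, v, tau) are hypothetical discretized inputs, e.g. sampled from a dispersion model:

```python
def conductivity_cdf(C, v, tau):
    """Discrete estimate of F(Lambda): sort modes by free path
    Lambda = v*tau and accumulate their contributions C*v^2*tau,
    normalised so the last value equals 1 (i.e. divided by 3*kappa).
    Returns a list of (free path, F) pairs in increasing Lambda."""
    modes = sorted(zip([vi * ti for vi, ti in zip(v, tau)],
                       [ci * vi * vi * ti for ci, vi, ti in zip(C, v, tau)]))
    total = sum(w for _, w in modes)   # equals 3*kappa for this grid
    F, acc = [], 0.0
    for lam, w in modes:
        acc += w
        F.append((lam, acc / total))
    return F
```

By construction the returned curve is non-decreasing and reaches 1 at the largest free path, mirroring the continuous definition above.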
\section{Limitations associated with $F(\Lambda)$}
\label{free_path_material}
In this section, we investigate, both numerically and analytically, whether $F(\Lambda)$ can be used to describe heat transfer in all transport regimes. This investigation is motivated by the widespread use of $F(\Lambda)$ in procedures for reconstructing material properties from thermal spectroscopy experiments.
The approximations and theoretical inconsistencies associated with the use of the concept of effective thermal conductivity, in conjunction with $F(\Lambda)$, to reconstruct thermal spectroscopy data have been discussed before in \cite{Mojtaba2016, Mojtaba2018}, as well as section \ref{intro}. Here, we take a different approach which allows us to examine the premise of such approaches on a more general level. First, we prove through a theorem that a given $F(\Lambda)$ may arise as a result of more than one distribution of relaxation-times. Since different relaxation-time distributions will, in general, lead to different material thermal behaviors, this implies that $F(\Lambda)$ cannot uniquely determine the material thermal behavior, which in turn questions the ability of $F(\Lambda)$ to describe thermal responses in sub-micron regimes. The above assertion is also demonstrated through a numerical experiment using a realistic material model for silicon.
\paragraph{Theorem:} For a given material, if there are two phonon frequencies, $\omega_1$ and $\omega_2$, such that $C_\omega ({\omega_1}) v_\omega({\omega_1})= C_\omega({\omega_2}) v_\omega({\omega_2})$, there exists more than one relaxation-time distribution that leads to a given $F(\Lambda)$.
The proof of the theorem is given in Appendix A.
\subsection{Numerical Demonstration}
\label{numer_dem}
In order to highlight the importance of the above in practical terms, namely, the existence of multiple relaxation-time functions for a given $F(\Lambda)$, and more importantly, the non-uniqueness of the thermal response for a given $F(\Lambda)$, we have performed a calculation that directly tests this hypothesis. In the interest of simplicity, all calculations were performed for the 1D-TTG geometry with the Holland model for Si~\cite{Mojtaba2016, Mojtaba2018}. Readers are referred to Appendix B for a discussion of the functional form of the product $C_\omega ({\omega}) v_\omega({\omega})$ for Si, used in our calculations. We consider the objective function
\begin{equation}
\min_{{\bf U}} \int_{\Lambda} \big| \mathfrak{f}^*(\Lambda)- \mathfrak{f}(\Lambda) \big| d\Lambda ,
\label{optim_unique}
\end{equation}
where $\mathfrak{f}^*$ represents the ``true'' free path-dependent distribution function of thermal conductivity and ${\bf U}$ is a set of parameters that parameterizes the relaxation-times function. Here, without loss of generality, we have assumed ${\bf U}$ to be a set of parameters that allows different branches of the relaxation-times function to take the form of a piece-wise linear function with three segments as a function of the phonon frequency; this is the same parameterization that has been used previously in \cite{Mojtaba2016, Mojtaba2018, MojtabaAPL}. This objective function attempts to find free path-dependent thermal conductivity distribution functions $\mathfrak{f}(\Lambda)$ that match the true thermal conductivity distribution function by searching in the parameter space ${\bf U}$. Note that ${\bf U}$ enters the objective function through the definition of the free path distribution $\Lambda_\omega=v_\omega \tau_\omega({\bf U})$.
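To make the structure of this search concrete, the sketch below illustrates the two ingredients: a piece-wise linear parameterization of the relaxation-times function through a set of (frequency, $\tau$) knots, and the discrete $L_1$ mismatch between two free-path PDFs sampled on a common grid. This is an illustrative minimal version, not the implementation used here; the knot values are hypothetical:

```python
import bisect

def piecewise_linear(omega, knots_w, knots_tau):
    """Relaxation time tau(omega) parameterised as a piece-wise linear
    function through (frequency, tau) knots -- the kind of few-segment
    parameterisation used for each phonon branch. Values outside the
    knot range are held constant at the end knots."""
    if omega <= knots_w[0]:
        return knots_tau[0]
    if omega >= knots_w[-1]:
        return knots_tau[-1]
    i = bisect.bisect_right(knots_w, omega) - 1
    t = (omega - knots_w[i]) / (knots_w[i + 1] - knots_w[i])
    return (1.0 - t) * knots_tau[i] + t * knots_tau[i + 1]

def l1_mismatch(lams, f_true, f_model):
    """Trapezoidal-rule estimate of the L1 distance between two free-path
    PDFs sampled on the common grid lams -- the discrete analogue of the
    objective integral."""
    total = 0.0
    for i in range(len(lams) - 1):
        dl = lams[i + 1] - lams[i]
        total += 0.5 * dl * (abs(f_true[i] - f_model[i])
                             + abs(f_true[i + 1] - f_model[i + 1]))
    return total
```

An optimizer such as Nelder-Mead then varies the knot values (the entries of ${\bf U}$) so as to drive the mismatch to zero.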
We perform the optimization using the Nelder-Mead (NM) algorithm \cite{NelderMead}, repeating it multiple times starting from different initial conditions for ${\bf U}$ in order to also study the effect of different initial conditions on the optimization process. Figure \ref{k_unique} shows the relaxation-times and the $F(\Lambda)$ obtained from one of these optimization processes as well as the temperature profile predicted by the optimized ${\bf U}$ versus the true profile at the 100 nm length scale. This profile is generated using the inverse fast Fourier transform (IFFT) \cite{IFFT_book} as the forward simulation method \cite{Mojtaba2016}. As expected from equation \eqref{optim_unique}, the reconstructed $F(\Lambda)$ matches the true CDF. At the same time, both the relaxation-time functions, and more importantly, the thermal responses do not match, even though all other material properties, as well as the geometry, are kept fixed. The mismatch in thermal responses also exists at other length scales.
\begin{figure}[htbp]
\centering
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1.1\linewidth]{3a.pdf}
\caption{Relaxation-time}
\label{ab_initiomodel}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1.1\linewidth]{3b.pdf}
\caption{$F(\Lambda)$}
\label{Hollandmodel}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1.1\linewidth]{3c.pdf}
\caption{Temperature}
\label{T_100n_unique}
\end{subfigure}
\caption{Comparison between properties predicted by one representative solution of optimization \eqref{optim_unique} and true data. As expected, although the two $F(\Lambda)$ match, the relaxation-times and thermal responses are clearly different.}
\label{k_unique}
\end{figure}
The results provided in figure \ref{k_unique} clearly show, from a numerical perspective, that $F(\Lambda)$ cannot be used reliably as a material property for predicting thermal behaviors in all regimes. While the non-uniqueness of the relaxation-times function for a given $F(\Lambda)$, as was proven theoretically in this section and shown numerically in figure \ref{ab_initiomodel}, may not be catastrophic on its own, the fact that these different functions predict different thermal responses, as we have seen in figure \ref{T_100n_unique}, implies that a given $F(\Lambda)$ provides insufficient information for predicting the thermal response. The results provided in this section prove that {\bf $F(\Lambda)$ cannot be used to predict thermal responses in all transport regimes}.
\section{An alternative to $F(\Lambda)$}
\label{Unique_analysis}
The discussions provided previously in \cite{Mojtaba2016, Mojtaba2018, MojtabaAPL}, and more importantly, in the previous section, show that $F(\Lambda)$, regardless of the method being used to reconstruct it and its accuracy, is not able to predict the material thermal behavior in all regimes, and the sub-micron regime in particular, which is of interest presently. This is in sharp contrast with the relaxation-times function which is related to the system thermal response through a direct, one-to-one relationship.
In previous work we proposed a multi-stage NM algorithm for reconstruction of the relaxation-times function from thermal spectroscopy experiments \cite{Mojtaba2016, Mojtaba2018, MojtabaAPL} and validated it both numerically and experimentally. In this section, we briefly study the reliability of the solutions of this previously proposed optimization algorithm, namely the uniqueness of the optimization solution, as well as the sensitivity of the solutions to measurement noise, studied from a Bayesian perspective. Due to the complexity of the BTE and the related inverse problems, here we have focused on the numerical study of a specific problem, the 1D-TTG experiment.
Due to the success of the NM algorithm in providing solutions to the reconstruction problem \cite{Mojtaba2016, Mojtaba2018, MojtabaAPL}, here, we use a globalized version of the NM algorithm, referred to as ``globalized bounded Nelder-Mead (GBNM)'' \cite{GNM}, to study the uniqueness of the solutions to the reconstruction problem formulation used in \cite{Mojtaba2016, Mojtaba2018, MojtabaAPL}. The GBNM algorithm is very similar to the NM algorithm described before; however, instead of performing the optimization once, starting from one initial condition, it repeats the complete optimization process many times, starting from different probabilistically correlated initial conditions. More details can be found in \cite{GNM}.
\subsection{On the uniqueness of reconstructed solutions}
To assess the uniqueness of the reconstructed solutions obtained via the optimization formulation, we follow a procedure that is similar to our previous work \cite{Mojtaba2016}: we generate synthetic temperature profiles for a material by solving the BTE, and use the generated data to infer the material relaxation-time by solving the following optimization problem
\begin{equation}
\min_{{\bf U}} \mathcal{L}= \min_{{\bf U}} \left[ \frac{ \sum_{t, \textbf{x}, L} | T_\text{m} (t, \mathbf{x}; L)- T_\text{BTE} (t, \mathbf{x}; L, {\bf U}) | } {N}+ \alpha \Bigg| 1- \frac{1} {3\kappa} \int_{\omega} C_{\omega} \tau_{\omega} ({\bf U}) v^2_{\omega} d \omega \Bigg| \right] , \label{objectivefunction_2TA}
\end{equation}
where $T_\text{m} (t, \mathbf{x}; L)$ denotes the experimentally measured temperature (equivalent to synthetically generated temperatures), $T_\text{BTE}$ is the temperature computed from solution of the BTE (the same temperature as in equation \eqref{temperature}), $N$ is the total number of (independent) measurements available, and $L$ denotes the characteristic length scale. In the present case, we have assumed the material to be Si and considered its response in the 1D-TTG geometry for 10 nm$<L<$100 $\mu$m, solved for using the IFFT method \cite{IFFT_book, Mojtaba2016}. We also assume that the two $TA$ branches of the material are the same, using the same piece-wise linear parameterization of \cite{Mojtaba2016}, for both the \textit{ab initio} and the Holland models~\cite{Mojtaba2016, Mojtaba2018, MojtabaAPL}.
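In discrete form, the objective combines the mean absolute temperature mismatch with an $\alpha$-weighted penalty that anchors the trial relaxation times to the known bulk conductivity $\kappa$. A minimal single-branch sketch (hypothetical names; a uniform frequency grid of spacing dw is assumed):

```python
def reconstruction_objective(T_meas, T_bte, C, v, tau, dw, kappa, alpha):
    """Discrete form of the two-term objective: mean absolute mismatch
    between measured and BTE-predicted temperatures, plus a penalty
    alpha*|1 - kappa_trial/kappa| that ties the trial relaxation times
    tau to the known bulk conductivity kappa."""
    n = len(T_meas)
    mismatch = sum(abs(a - b) for a, b in zip(T_meas, T_bte)) / n
    # trial conductivity: (1/3) * sum of C * v^2 * tau over the grid
    kappa_trial = sum(c * t * vi * vi
                      for c, t, vi in zip(C, tau, v)) * dw / 3.0
    return mismatch + alpha * abs(1.0 - kappa_trial / kappa)
```

The penalty vanishes whenever the trial relaxation times reproduce the bulk conductivity exactly, so the optimizer is steered toward physically consistent candidates.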
As stated above, the difference from previous work is that, here, the optimization is performed using the GBNM method. Figure \ref{GNM_plot} shows the value of the objective function for the various points that the GBNM method explored on its way to minimizing ${\cal L}$. More detail on the procedure and parameters used here can be found in \cite{MojtabaThesis}. The horizontal axis in the figure corresponds to the distance between a ``trial'' relaxation-times function (whose value of ${\cal L}$ is plotted on the ordinate) and the ``true'' solution $\tau^*_\omega$; specifically, ``distance'' is defined by
\begin{equation}
\text{distance}:= \frac{{\bigintss}_{\omega} \left| \log\left( \frac{\tau_\omega}{\tau^*_{\omega}} \right) \right| D_\omega d\omega }{\int_{\omega} D_\omega d\omega}.
\label{distance}
\end{equation}
Here, as before, the index $\omega$ runs over both frequencies and phonon branches.
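As a concrete numerical check, the distance of equation \eqref{distance} can be evaluated on a discretized frequency grid. The grid, density of states, and relaxation-time curves below are illustrative placeholders (not the actual Si dispersion data); on a uniform grid the $d\omega$ factors cancel, so plain Riemann sums suffice.

```python
import numpy as np

def log_distance(tau_trial, tau_true, D):
    """D_omega-weighted mean of |log(tau/tau*)|, cf. the 'distance' definition.

    Assumes a uniform frequency grid, so the d-omega factors cancel.
    """
    integrand = np.abs(np.log(tau_trial / tau_true)) * D
    return np.sum(integrand) / np.sum(D)

# illustrative (hypothetical) frequency grid, DOS, and relaxation times
omega = np.linspace(1e12, 7e13, 200)        # rad/s
D = omega**2                                # Debye-like DOS, placeholder only
tau_true = 1e-9 * (1e13 / omega)**2         # ~omega^-2 scaling, placeholder
tau_trial = 2.0 * tau_true                  # trial curve off by a constant factor

d = log_distance(tau_trial, tau_true, D)
```

For a trial curve offset from the truth by a constant factor $c$, the metric evaluates to exactly $|\log c|$ regardless of the weighting, which makes it a convenient sanity check for the implementation.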
\begin{figure}[htbp]
\centering
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{4a.pdf}
\caption{}
\label{ab_initiomodelz}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{4b.pdf}
\caption{}
\label{Hollandmodel}
\end{subfigure}
\caption{The value of the objective function versus the distance (see equation \eqref{distance}) between the points sampled during GBNM and the true solution for the \textit{ab initio} model (a) and the Holland model (b).}
\label{GNM_plot}
\end{figure}
The main message of figure \ref{GNM_plot} is that the lower-left envelope of the cluster of datapoints in both plots is monotonically increasing: the closer a trial solution is to the true solution, the smaller the objective function becomes. This implies that the solution is unique; non-uniqueness would manifest itself as points in the lower right-hand corner of the diagrams (a low value of the objective function at a large distance from the true solution). We note that each plot in figure \ref{GNM_plot} contains approximately 1 million datapoints. While exploring the whole parameter space at high resolution is not feasible due to the high dimension of the problem studied here ($\text{dim}({\bf U})=12$; two branches, each parameterized with three lines), the general behavior of the objective function, such as the monotonicity described above, is unlikely to change if more iterations are used (and consequently an even better exploration of the parameter space is performed).
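The GBNM search itself can be caricatured as repeated bounded Nelder-Mead descents from random starting points, keeping the best minimum found; the restart-probability machinery of the full GBNM method \cite{MojtabaThesis} is omitted here, and a 4-parameter quadratic bowl stands in for the 12-dimensional objective ${\cal L}$.

```python
import numpy as np
from scipy.optimize import minimize

def gbnm_style_search(objective, lo, hi, n_restarts=20, seed=0):
    """Crude GBNM stand-in: restart Nelder-Mead from random points in [lo, hi]."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_restarts):
        x0 = rng.uniform(lo, hi)                        # random start in the box
        res = minimize(objective, x0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:          # keep the best local minimum
            best = res
    return best

# toy stand-in for L(U): a bowl with a known minimum at `target`
target = np.array([1.0, 2.0, 3.0, 4.0])
objective = lambda u: float(np.sum((u - target) ** 2))
best = gbnm_style_search(objective, np.zeros(4), 10.0 * np.ones(4))
```

For a multimodal ${\cal L}$ the restarts are what provide the global character; the forward-simulation count quoted in the text corresponds to the total number of objective evaluations accumulated over all restarts.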
\subsection{Sensitivity of reconstructed relaxation-time to measurement uncertainty}
While the results provided in figure \ref{GNM_plot} show that the objective function $\cal{L}$ introduced in \cite{Mojtaba2016} behaves like a function with a unique minimum, the sensitivity of this solution to measurement noise remains to be established. Low sensitivity of an objective function to noise is essential for its practical application to real-world measurements. Here we study this sensitivity using Bayes' theorem, that is, the relation
\begin{equation}
p\Big( {\bf U} \big| \{ T_\text{m}(t, {\bf x}=0; L)\}_{t,L}\Big) \propto p\Big(\{ T_\text{m}(t, {\bf x}=0; L)\}_{t,L} \big| {\bf U}\Big) \pi({\bf U}) ,
\label{Bayes}
\end{equation}
where $p\Big({\bf U} \big| \{ T_\text{m}(t, {\bf x}=0; L)\}_{t,L}\Big)$ is the distribution of the parameters of the relaxation-times function given the temperature measurements (the posterior distribution) --- the quantity that we are interested in--- from which we can also calculate the distribution of the relaxation-times function itself through the $\tau_\omega({\bf U})$ function; $p\Big(\{ T_\text{m}(t, {\bf x}=0; L)\}_{t,L} \big| {\bf U}\Big)$ is the distribution of measured temperatures, given the parameters of the relaxation-times function (the likelihood function). The likelihood function can be calculated by first solving the BTE for a given relaxation-times function $\tau_\omega({\bf U})$, and then adding an artificial noise to its predicted temperatures that resembles that of noisy measurements. Finally, $\pi({\bf U})$ is the prior distribution which is usually a wide (least informative) distribution of the quantity of interest, the relaxation-times function; for instance, a wide symmetrical distribution around the correct relaxation-times function. More detailed definitions and information on equation~\eqref{Bayes} are provided in Appendix C.
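The posterior of equation \eqref{Bayes} can be explored with any standard sampler; the sketch below uses a plain Metropolis random walk with a flat box prior and a Gaussian likelihood. The linear two-parameter forward model is a toy stand-in for the BTE solve, chosen only so the example runs in seconds.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "measurement": forward(u) stands in for the BTE-predicted temperatures
true_u = np.array([2.0, -1.0])
t_grid = np.linspace(0.0, 1.0, 50)
forward = lambda u: u[0] * t_grid + u[1]
sigma = 0.05                                     # measurement noise level
data = forward(true_u) + sigma * rng.standard_normal(t_grid.size)

def log_post(u, lo=-10.0, hi=10.0):
    """log of (Gaussian likelihood x flat box prior), up to a constant."""
    if np.any(u < lo) or np.any(u > hi):
        return -np.inf
    r = data - forward(u)
    return -0.5 * np.dot(r, r) / sigma**2

def metropolis(u0, n_steps, step):
    """Random-walk Metropolis chain over the parameter vector u."""
    u, lp = u0.copy(), log_post(u0)
    chain = np.empty((n_steps, u0.size))
    for i in range(n_steps):
        prop = u + step * rng.standard_normal(u.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            u, lp = prop, lp_prop
        chain[i] = u
    return chain

chain = metropolis(np.zeros(2), 20000, 0.05)
posterior_mean = chain[5000:].mean(axis=0)        # discard burn-in
```

In the actual analysis each likelihood evaluation involves a BTE solve, so the sampler's cost is dominated by the forward model; the histograms of the retained chain play the role of the posterior contours in figure \ref{LA_TA_bayes}.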
Once we infer the distribution of the different components of ${\bf U}$ (the posterior distribution), we can calculate the distribution of the relaxation-times function as a function of frequency through $\tau_\omega({\bf U})$. The results for the {\it ab initio} silicon model are provided in figure \ref{LA_TA_bayes}, together with the true relaxation-time functions. We notice that the distribution matches the true solution ``in the mean''. We also observe that for $LA$ modes, at frequencies around $\omega= 6\times 10^{13}$ rad/s, the distribution is sharper than at other frequencies, implying that a relatively more accurate reconstruction of the relaxation time is possible there. On the other hand, the uncertainty at very large frequencies ($\omega \gtrsim 6.5\times 10^{13}$ rad/s) is larger than in other frequency ranges. A similar trend can be observed for $TA$ modes. Overall, by comparing the two plots in figure \ref{LA_TA_bayes}, we conclude that the reconstruction of the $TA$ branches is more accurate (the distributions are sharper). This could be a consequence of the assumption that the two $TA$ branches are described by the same function, which increases their influence on the thermal behavior.
\begin{figure} [htbp]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{5a.pdf}
\caption{$LA$ modes}
\label{NM1}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{5b.pdf}
\caption{$TA$ modes}
\label{NM2}
\end{subfigure}
\caption{Contour plot of the probability density of the frequency-dependent relaxation-times distribution of $LA$ and $TA$ modes. The red line in (a) denotes the true $\tau_\omega^{LA}$, while the red line in (b) denotes the true $\tau_\omega^{TA_1}$ and the green line in (b) denotes the true $\tau_\omega^{TA_2}$. }
\label{LA_TA_bayes}
\end{figure}
In order to obtain a better picture of the shape of the distributions at different frequencies, we have also plotted the distributions at a few specific frequencies (equivalent to cross-sections of figure \ref{LA_TA_bayes}). Figures \ref{LA_bayes_prob} and \ref{TA_bayes_prob} show the prior distribution, $\pi(\tau_\omega)$, versus the posterior distribution, $p(\tau_\omega \big| \{ T_\text{m}(t, {\bf x}=0; L)\}_{t,L})$, for a few frequencies of $LA$ and $TA$ modes, respectively. We observe that in most cases the posterior distribution is significantly narrower than the prior distribution, implying that the chosen prior has been wide enough to avoid biasing the inferred posterior. The distributions in the $TA$ case are sharper, as can also be seen in figure \ref{LA_TA_bayes}. The distribution at very high frequencies ($\omega= 7\times 10^{13}$ rad/s in figure \ref{LA_bayes_prob} and $\omega= 3.5\times 10^{13}$ rad/s in figure \ref{TA_bayes_prob}) is wider than at the other frequencies, consistent with our previous observations from figure \ref{LA_TA_bayes}. While the results in figure \ref{GNM_plot} point to the uniqueness of the objective function minimum, the narrow distributions observed in figures \ref{LA_TA_bayes}--\ref{TA_bayes_prob} complement those results by showing that good accuracy in the vicinity of the optimal solution, minimally affected by measurement noise, is attainable.
\begin{figure}[htbp]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{6a.pdf}
\caption{}
\label{NM3}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{6b.pdf}
\caption{}
\label{NM4}
\end{subfigure}
\caption{Distributions at a few low frequency (a) and high frequency (b) $LA$ modes. }
\label{LA_bayes_prob}
\end{figure}
\begin{figure} [htbp]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{7a.pdf}
\caption{}
\label{NM5}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{7b.pdf}
\caption{}
\label{NM6}
\end{subfigure}
\caption{Distributions at a few low frequency (a) and high frequency (b) $TA$ modes. }
\label{TA_bayes_prob}
\end{figure}
\section{Summary and outlook}
\label{Conclude}
We have studied the type and reliability of information that can be extracted from thermal spectroscopy experiments. We specifically showed that inverting the experimental data using $F(\Lambda)$ (presumably via the effective thermal conductivity construct) leads to an ill-posed problem because a given $F(\Lambda)$ {\it does not} correspond to a unique system thermal response; as a result, even correct determination of $F(\Lambda)$ from the experiment is insufficient by itself to fully characterize the system behavior in other situations. Although the condition $C_{\omega_1} v_{\omega_1}= C_{\omega_2} v_{\omega_2}$ of section \ref{free_path_material} is more general, in practice it is typically met due to the presence of more than one phonon branch, which $F(\Lambda)$ is unable to distinguish between. Our previous results in \cite{Mojtaba2016, Mojtaba2018, MojtabaAPL, MojtabaThesis}---also briefly reviewed in section~\ref{intro}---additionally showed that the Fourier heat conduction equation with effective thermal conductivity (or diffusivity) can be used for the reconstruction of $F(\Lambda)$ only if certain late-time and large-scale assumptions are met (see \cite{Mojtaba2016, Mojtaba2018} and figures~\ref{k_eff_issue} and \ref{k_eff_issue_2} of the present manuscript), and is thus not applicable at arbitrarily small length scales.
Our results in section \ref{Unique_analysis} are consistent with our previous work, showing that the algorithm proposed in \cite{Mojtaba2016} can be reliably applied to real experimental data (as has already been done in \cite{MojtabaAPL}), which may include measurement noise, without concerns about non-uniqueness of the solution or low sensitivity of the relaxation-times function to the measured thermal responses. Note that although finding this unique solution requires a global optimization algorithm that is significantly more expensive, possibly by orders of magnitude, than the multi-stage NM optimization algorithm used in \cite{Mojtaba2016,Mojtaba2018,MojtabaAPL}, this is not required for regular applications. In applications, the multi-stage algorithm proposed in \cite{Mojtaba2016} should provide a solution that is sufficiently close to the global optimum at a fraction of the cost. Specifically, 1 million forward simulations (the number of datapoints shown in figure \ref{GNM_plot}) were used to find the approximate location of the global minimum using GBNM, whereas, as discussed in \cite{Mojtaba2016}, our multi-stage algorithm requires at most about 1000 forward simulations to obtain this solution.
While the present numerical analysis of the reliability and uniqueness of our previously proposed algorithm~\cite{Mojtaba2016} has focused on the 1D-TTG experiment, it is important to note that many thermal spectroscopy experiments also include solid-solid interfaces in their setup, which can add to the complexity of the analysis, for instance due to the lack of sufficient information regarding the interface conductance and transmissivities. However, in previous work~\cite{Mojtaba2018} we studied the reliability of our proposed reconstruction process on the 2D-dots geometry~\cite{Lingping_nature} and observed that not knowing the interface transmissivities does not influence the reconstruction significantly; in fact, the phonon relaxation-times function can be reconstructed accurately without any information about the transmissivity functions. In future work, we will perform a study similar to the present one, aimed at the uniqueness and sensitivity of the reconstructed relaxation times in the presence of a solid-solid interface with unknown properties.
The procedures described in section \ref{Unique_analysis} introduce new complementary tools for the analysis of thermal spectroscopy measurements. Combined with the available theoretical tools, they open new avenues for a unified numerical-analytical approach to classifying problems according to the uniqueness of their inverse problems in the context of phonon transport. This will allow researchers to explore the uniqueness of the solution for a particular geometry before performing the experiment, and thus to focus on experiments whose inverse BTE problem is well-posed. Similarly, the Bayesian approach can help visualize the sensitivity of the solution around the true solution, guiding researchers towards geometries that exhibit high sensitivity in the vicinity of the solution and are consequently more robust to measurement noise.
\section*{Acknowledgment}
The authors would like to thank Y. M. Marzouk and G. Chen for their comments and suggestions. This work was supported by the Solid-State Solar-Thermal Energy Conversion Center (S3TEC), an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award\# DE-SC0001299 and DE-FG02-09ER46577.
\def\Section#1{{\vskip-0.8cm}\hbox{\,}\section{#1}}
\def\Subsection#1{{\vskip-0.8cm}\hbox{\,}\subsection{#1}}
\journalid{337}{15 January 1989}
\articleid{11}{14}
\slugcomment{Submitted to ApJL August 5, 1998}
\lefthead{de Oliveira-Costa {{\frenchspacing\it et al.}}}
\righthead{MAPPING THE CMB}
\begin{document}
\title{Mapping the CMB III: combined analysis of QMAP flights}
\author{Ang\'elica de Oliveira-Costa$^{1,2}$,
Mark J. Devlin$^{3,1}$,
Tom Herbig$^1$,
Amber D. Miller$^1$,}
\author{C. Barth Netterfield$^{4,1}$,
Lyman A. Page$^1$ \&
Max Tegmark$^{2,5}$}
\affil{$^1$Princeton University, Department of Physics, Jadwin Hall,
Princeton, NJ 08544; angelica@ias.edu}
\affil{$^2$Institute for Advanced Study, Olden Lane, Princeton,
NJ 08540}
\affil{$^3$University of Pennsylvania, Department of Physics and Astronomy,
David Rittenhouse Laboratory, Philadelphia, PA 19104}
\affil{$^4$California Institute of Technology, MS 59-33, Pasadena, CA 91125}
\affil{$^5$Hubble Fellow}
\begin{abstract}
We present results from the QMAP balloon experiment, which maps
the Cosmic Microwave Background (CMB)
and probes its angular power spectrum on degree scales.
In two separate flights, data were taken in six channels in two frequency bands
between 26 and 46 GHz.
We describe our method for mapmaking (removal of $1/f$-noise and
scan-synchronous offsets) and power spectrum estimation, as well as
the results of a joint analysis of the data from both flights.
This produces a 527 square degree map of the CMB around the
North Celestial Pole,
allowing a wide variety of systematic cross-checks.
The frequency dependence of the fluctuations is consistent with CMB
and inconsistent with Galactic foreground emission.
The anisotropy is measured in three multipole bands from $\ell\sim 40$ to
$\ell\sim 200$, and the angular power spectrum
shows a distinct rise which is consistent with the Saskatoon results.
\end{abstract}
\keywords{cosmic microwave background -- methods: data analysis}
\section{INTRODUCTION}
The QMAP balloon experiment was designed to map the Cosmic Microwave Background
(CMB) and measure the angular power spectrum on degree scales.
QMAP operates in the Ka ($\sim$ 30 GHz) and Q-band ($\sim 40$ GHz)
with six detectors in two polarizations
(Ka1 and Ka2; Q1 and Q2; Q3 and Q4),
with angular resolution between $0\fdg6$ and $0\fdg9$.
Data were taken during two flights in 1996, the first (hereafter FL1) in June in
Palestine, Texas, and the second (hereafter FL2) in November in Ft. Sumner, New Mexico.
QMAP scanned a 527 square degree region near the North Celestial Pole in a complicated
criss-cross pattern which allows a number of internal checks of the integrity of the
measurements and results in an interconnectedness between pixels that enables efficient
$1/f$-noise removal.
A detailed description of the
design and performance of the
QMAP instrument and results from the first flight
are presented in Devlin {{\frenchspacing\it et al.}} (1998, hereafter D98).
QMAP calibrations and results from the second flight are presented in
Herbig {{\frenchspacing\it et al.}} (1998, hereafter H98).
In this {\it Letter}, we present the method used to analyze the
QMAP experiment (the mapmaking process and the power spectrum extraction),
as well as the combined results from both flights.
\section{METHOD}
\subsection{From Scan Pattern to Map}
For each of the six channels,
the QMAP raw data set consists of $M$=35,318,400 observed data points
$y_i$, which we store in an $M$-dimensional vector ${\bf y}$. We subdivide
the mapped region into $N$ pixels whose centers ${\bf \hat{r}}_i$ form a
rectangular grid and let $x_i$ denote the true sky temperature in the
direction ${\bf \hat{r}}_i$. Grouping these pixel temperatures into an
$N$-dimensional map vector ${\bf x}$, we can write
\beq{a1}
{\bf y} = {\bf A} {\bf x} + {\bf n},
\end{equation}
where ${\bf n}$ denotes the random instrumental noise vector of size
$M$ and ${\bf A}$ is a matrix of size $M \times N$ that encodes the QMAP
scan strategy. We model ${\bf n}$ as a random variable with zero mean and
covariance matrix given by ${\bf N} \equiv \langle {\bf n} {\bf n}^t \rangle$.
The scan strategy matrix ${\bf A}$ has
${\bf A}_{ij}=1$ if the $i^{th}$ observation points to the
$j^{th}$ pixel, $0$ otherwise.
The goal is to compute a map $\tilde{\bf x}$ that estimates the true map ${\bf x}$
from the raw data ${\bf y}$. We use a linear method
\beq{a2}
\tilde{\bf x} = {\bf W} {\bf y},
\end{equation}
specified by some $N\times M$ matrix ${\bf W}$. Substituting (\ref{a1}) into
(\ref{a2}) shows that the error in the recovered map is
\beq{a3}
\varepsilon \equiv \tilde{\bf x} - {\bf x} = [ {\bf W} {\bf A} - {\bf I} ] {\bf x} + {\bf W} {\bf n},
\end{equation}
where ${\bf I}$ is the identity matrix. Choosing ${\bf W}$ to be (Tegmark 1997a,
hereafter T97a)
\beq{a4}
{\bf W} = [ {\bf A}^t {\bf M}^{-1} {\bf A} ]^{-1} {\bf A}^t {\bf M}^{-1}
\end{equation}
for some $M\times M$ matrix ${\bf M}$ gives ${\bf W}{\bf A}={\bf I}$,
so that $\tilde{\bf x}={\bf x}+{\bf W}{\bf n}$ can be interpreted as an honest-to-goodness map
where the pixel noise $\varepsilon={\bf W}{\bf n}$ is independent of ${\bf x}$.
The noise covariance matrix of the map is simply
\beq{a5}
{\bf\Sigma} \equiv \langle \varepsilon {\varepsilon}^t \rangle = {\bf W} {\bf N} {{\bf W}}^t.
\end{equation}
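The mapmaking equations (\ref{a2})--(\ref{a5}) can be exercised end-to-end on a toy scan. The sketch below takes ${\bf M}={\bf N}^{-1}$ for white noise; the pixel count and scan length are hypothetical stand-ins, far smaller than QMAP's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_obs = 4, 200
x_true = rng.standard_normal(n_pix)                 # true sky temperatures

# scan-strategy matrix A: one "1" per row, marking the observed pixel
A = np.zeros((n_obs, n_pix))
A[np.arange(n_obs), rng.integers(0, n_pix, n_obs)] = 1.0

noise_var = 0.25                                    # white noise, N = noise_var * I
y = A @ x_true + np.sqrt(noise_var) * rng.standard_normal(n_obs)

Minv = np.eye(n_obs) / noise_var                    # choose M = N^{-1}
W = np.linalg.solve(A.T @ Minv @ A, A.T @ Minv)     # eq. (a4)
x_map = W @ y                                       # eq. (a2)
Sigma = W @ (noise_var * np.eye(n_obs)) @ W.T       # eq. (a5)
```

For white noise this reduces to per-pixel averaging: ${\bf W}{\bf A}={\bf I}$ holds by construction, and the diagonal of ${\bf\Sigma}$ is the noise variance divided by the number of hits per pixel.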
\subsection{Pre-whitening \& Pink Noise Removal}
Ideally, we would like to use the minimum-variance mapmaking method
(Wright 1996; T97a), which corresponds to the choice ${\bf M}={\bf N}^{-1}$.
In our case, the huge matrix ${\bf N}$ is far from diagonal, since long-term $1/f$ drifts
(so-called pink noise) introduce strong correlations between the noise $n_i$ at
different times. In other words, although direct application of equations (\ref{a2})
and (\ref{a5}) with ${\bf M}={\bf N}^{-1}$ would give what we need in principle, inverting the
non-sparse matrix ${\bf N}$ would require a Hubble time in practice.
Fortunately, we can obtain virtually the same answer by employing a series of numerical
methods detailed in Tegmark (1997b, hereafter T97b). Since the statistical properties of the
noise are virtually constant in time, {\frenchspacing\it i.e.}, ${\bf N}$ is almost a
circulant matrix\footnote{
A circulant matrix is one where each row is the previous
one shifted one notch to the right. They are
easily manipulated (inverted, diagonalized, {\frenchspacing\it etc.}) with Fourier methods
(see T97b).}, we replace the raw data ${\bf y}$ by a high-pass filtered data set
\beq{a6}
\tilde{\bf y} \equiv {\bf D} {\bf y},
\end{equation}
where ${\bf D}$ is another circulant matrix
(a convolution filter), chosen so that both
${\bf D}$ and the filtered noise covariance matrix $\tilde{\bf N}\equiv\expec{\tilde{\bf n}\tilde{\bf n}^t} = {\bf D}{\bf N}{{\bf D}}^t$
are band-diagonal.
Wright (1996) referred to this as ``pre-whitening''
and chose the filter so that $\tilde{\bf N}\approx{\bf I}$.
In our case, however, $\tilde{\bf N}$ is not quite circulant
because of the omission of
$\sim 600$ segments of calibration data (H98),
but of the form $\tilde{\bf N}=\tilde{\bf N}_c+\tilde{\bf N}_s$,
where $\tilde{\bf N}_c$ is circulant and the correction $\tilde{\bf N}_s$ is extremely sparse.
We therefore choose ${\bf M}=\tilde{\bf N}_c^{-1}$, with $\tilde{\bf y}$ as the new data set.
Defining $\tilde{\bf A} \equiv {\bf D} {\bf A}$, we can rewrite (\ref{a1})
as $\tilde{\bf y}$=$\tilde{\bf A} {\bf x} + \tilde{\bf n}$, so equations (\ref{a2}) and (\ref{a5}) now give
\beq{a11}
\tilde{\bf x} = [{\bf A}^t{\bf D}^t{\bf M}{\bf D}{\bf A}]^{-1} {\bf A}^t{\bf D}^t{\bf M}{\bf D}{\bf y}
\end{equation}
\beq{a12}
{\bf\Sigma} = [{\bf A}^t{\bf D}^t{\bf M}{\bf D}{\bf A}]^{-1}
[{\bf A}^t{\bf D}^t{\bf M}\tilde{\bf N}{\bf M}^t{\bf D}{\bf A}]
[{\bf A}^t{\bf D}^t{\bf M}{\bf D}{\bf A}]^{-1}
\end{equation}
which can be evaluated on a workstation in about 24 hours.
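Because a circulant covariance is diagonalized by the Fourier transform, applying the filter ${\bf D}$ never requires forming an $M\times M$ matrix: it is a division by $\sqrt{P(f)}$ in Fourier space. The toy $1/f$-plus-white noise power spectrum below is hypothetical, not the measured QMAP noise.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
f = np.fft.rfftfreq(n, d=1.0)
P = 1.0 + 0.1 / np.maximum(f, f[1])       # toy noise power: white + 1/f rise

# draw a noise stream with this spectrum, directly in Fourier space
amp = np.sqrt(P / 2.0) * (rng.standard_normal(f.size)
                          + 1j * rng.standard_normal(f.size))
noise = np.fft.irfft(amp, n)

# pre-whitening: apply the circulant filter D as a division by sqrt(P)
whitened = np.fft.irfft(np.fft.rfft(noise) / np.sqrt(P), n)

p_raw = np.abs(np.fft.rfft(noise)) ** 2
p_white = np.abs(np.fft.rfft(whitened)) ** 2
```

After filtering, the low-frequency excess is gone and the spectrum is flat, which is the band-diagonality in time that makes the subsequent linear algebra tractable.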
\subsection{Offset Removal}
In order to make maps from the data, we need to remove instrumental
offsets that are synchronous with the chopper position. Although
we find no evidence of atmospheric emission or sidelobe contamination,
there is thermal emission from the
instrument at the mK-level (H98).
We solve for this scan-synchronous component by adding
160 ``virtual pixels" ${\bf x}_2$ (corresponding to the 160
sampling positions along the scan) to the map vector ${\bf x}_1$ and
widening the scan strategy matrix ${\bf A}$ with an additional ``1'' in each row
in one of 160 extra columns.
This allows us to rewrite equation (\ref{a1}) as
\beq{a13}
{\bf y} = {\bf A}_{\rm 1} {\bf x}_{\rm 1} + {\bf A}_{\rm 2} {\bf x}_{\rm 2} + {\bf n} =
{\bf A} {\bf x} + {\bf n},
\end{equation}
where ${\bf A}_1$ and ${\bf A}_2$ are matrices of sizes $M \times N$
and $M \times 160$, respectively, so ${\bf x}$ becomes a vector of dimension
$N+160$ and ${\bf A}$ a matrix of size $M \times (N+160)$.
Since the scan strategy is so well interconnected, this method
is able to produce an offset-free map at the cost of only a marginal increase
in pixel noise. However, we do not wish to assume that the offset remains
constant during the entire flight, as even a 10\% variation in a
1 mK offset would cause an artificial signal comparable to the CMB.
Since our offset is almost entirely localized in frequency to the
first two harmonics of the scan rate (H98), we therefore combine the
virtual pixel method with an extremely conservative approach illustrated in
Figure 1: we make ${\bf D}$ a notch filter which annihilates all signals at these
two frequencies, as well as the DC (0 Hz) component.
Our filtering is thus more of a ``pre-blueing'' than a pre-whitening.
The price we pay for this conservatism is that we lose essentially all
information about the CMB dipole, which was detected.
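The virtual-pixel trick amounts to an ordinary least-squares solve for the widened ${\bf A}$. Note the deliberate DC degeneracy (a constant can be traded between sky and offset), which is one reason the notch filter also annihilates the 0 Hz component; the sketch below handles it by comparing mean-removed maps. Sizes are toy values, not QMAP's 160 chopper positions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_off, n_obs = 6, 4, 600
x_sky = rng.standard_normal(n_pix)
offset = 0.5 * rng.standard_normal(n_off)            # scan-synchronous offset

pix = rng.integers(0, n_pix, n_obs)                  # pointed-at pixel per sample
pos = np.arange(n_obs) % n_off                       # chopper position per sample
A1 = np.zeros((n_obs, n_pix)); A1[np.arange(n_obs), pix] = 1.0
A2 = np.zeros((n_obs, n_off)); A2[np.arange(n_obs), pos] = 1.0
A = np.hstack([A1, A2])                              # widened scan matrix

y = A1 @ x_sky + A2 @ offset + 0.1 * rng.standard_normal(n_obs)

# the combined system is rank deficient by exactly the DC mode,
# so lstsq returns the minimum-norm least-squares solution
sol, *_ = np.linalg.lstsq(A, y, rcond=None)
sky_hat, off_hat = sol[:n_pix], sol[n_pix:]
```

Both the sky map and the offset template are recovered up to a shared constant, illustrating why an interconnected scan lets the offset be solved for jointly at little noise cost.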
\bigskip
\centerline{{\vbox{\epsfxsize=9.5cm\epsfbox{doc1.eps}}}}
\figcaption{The filtering procedure is illustrated in
the frequency domain (left) and in the time domain
(right) for the Ka1 channel of flight 2.
The noise power spectrum (top left) contains a $1/f$-component
which causes the noise to be almost perfectly correlated between
measurements close together in time (top right).
By multiplying the Fourier transformed signal
by an appropriate filter
(middle left), which corresponds to applying a
convolution filter to the signal (middle right),
we obtain
filtered data $\tilde{\bf y}$ that has a white noise power
spectrum with three ``notches'' (lower left), corresponding to a
block-diagonal time autocorrelation function
(lower right).
1 sample $\approx 1.36$~ms. Vertical units are arbitrary.
}
\subsection{Combining maps}
When combining two maps ${\bf x}_1$ and ${\bf x}_2$
of the same angular resolution
into a single map $\tilde{\bf x}$, we use the minimum-variance weighting
\beq{ComboEq1}
\tilde{\bf x} = \left[{\bf\Sigma}_1^{-1}+{\bf\Sigma}_2^{-1}\right]^{-1}
\left[{\bf\Sigma}_1^{-1}{\bf x}_1 + {\bf\Sigma}_2^{-1}{\bf x}_2\right].
\end{equation}
The resulting covariance matrix for
$\tilde{\bf x}$ is therefore
\beq{CombiEq2}
{\bf\Sigma} =\left[{\bf\Sigma}_1^{-1}+{\bf\Sigma}_2^{-1}\right]^{-1}.
\end{equation}
When combining maps of different resolution, the one with the
higher resolution was first smoothed to the lower resolution.
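Equations (\ref{ComboEq1}) and (\ref{CombiEq2}) in code, with tiny hypothetical 3-pixel maps; note that the combined variance per pixel is the harmonic sum of the input variances.

```python
import numpy as np

def combine_maps(x1, S1, x2, S2):
    """Minimum-variance combination of two maps with noise covariances S1, S2."""
    S = np.linalg.inv(np.linalg.inv(S1) + np.linalg.inv(S2))      # eq. (CombiEq2)
    x = S @ (np.linalg.solve(S1, x1) + np.linalg.solve(S2, x2))   # eq. (ComboEq1)
    return x, S

x_true = np.array([1.0, -2.0, 0.5])
S1 = np.diag([1.0, 4.0, 1.0])         # map 1 noisier in pixel 1
S2 = np.diag([4.0, 1.0, 1.0])         # map 2 noisier in pixel 0
x_comb, S_comb = combine_maps(x_true, S1, x_true, S2)
```

When the two inputs agree, the combination returns them unchanged while the noise covariance shrinks; with correlated (non-diagonal) covariances the same formulas apply verbatim.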
\subsection{Wiener-Filtered Map}
In addition to the $\tilde{\bf x}$ map, we also compute the {\it Wiener} map
${\bf x}_{\it w}$ given by (T97a)
\beq{a14}
{\bf x}_{\it w} = {\bf S} [{\bf S} + {\bf\Sigma}]^{-1} \tilde{\bf x},
\end{equation}
where ${\bf S}$=$\langle {\bf x} {\bf x}^t \rangle$ is the CMB covariance matrix,
defined as
\beq{a15}
{\bf S}_{ij} = \sum_{\ell=2}^{\infty}
\frac{(2 \ell + 1)}{4 \pi}
P_{\ell} (\hat{\bf r}_i\cdot\hat{\bf r}_j)
B_{\ell}^2 C_{\ell}.
\end{equation}
To avoid imprinting features on any particular scale, we use a
flat fiducial power spectrum $C_{\ell}$ normalized to
$Q=30\mu{\rm K}$, which is roughly the CMB power level we find in the maps.
We approximate the QMAP beam by a circular
Gaussian with FWHM values given by D98, {\frenchspacing\it i.e.},
$B_{\ell} \approx e^{-\theta^2 \ell(\ell+1)/2}$, where
$\theta \equiv {\rm FWHM}/\sqrt{8 \ln 2}$.
The Wiener filtered map ${\bf x}_{\it w}$ contains the same information as
$\tilde{\bf x}$, but it is more useful for visual inspection since it is less noisy.
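The Wiener step is a single linear solve. With the toy diagonal covariances below (signal power 2, map-noise power 1, hypothetical values), every mode is simply scaled by $S/(S+\Sigma)=2/3$, which is the sense in which the filter down-weights noise-dominated modes.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
S = 2.0 * np.eye(n)                                  # signal covariance (toy)
Sigma = 1.0 * np.eye(n)                              # map-noise covariance (toy)
x_map = rng.multivariate_normal(np.zeros(n), S + Sigma)
x_wiener = S @ np.linalg.solve(S + Sigma, x_map)     # Wiener-filtered map
```

For realistic, non-diagonal ${\bf S}$ and ${\bf\Sigma}$ the scaling becomes mode-dependent, suppressing exactly those directions in pixel space where the noise dominates the assumed signal.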
\subsection{From Map to Power Spectrum}
The signal-to-noise ($S/N$) method (Bond 1995; Bunn and Sugiyama 1995) compresses
the information content of a CMB map into a vector ${\bf z} \equiv {\bf B}^t \tilde{\bf x}$,
where ${\bf B}$ is an $N\times N$ matrix whose $i^{th}$ column satisfies the generalized
eigenvalue equation
\beq{a17}
{\bf S} {\bf b}_i = \lambda_i {\bf\Sigma} {\bf b}_i,
\end{equation}
normalized so that ${\bf b}^t_i{\bf\Sigma}{\bf b}_i =1$ and sorted by decreasing $\lambda_i$.
As described in {{\frenchspacing\it e.g.}} T97b, the $N$ numbers $z_i$ are uncorrelated, {\frenchspacing\it i.e.},
\beq{a18}
\langle z_i z_j \rangle =
{\bf b}^t_i ({\bf\Sigma} + {\bf S}) {\bf b}_j =
[{\rm 1}+\lambda_i] \delta_{ij},
\end{equation}
and their variance $\langle z_i^2\rangle$ has a contribution of $1$ from noise and
$\lambda_i$ from signal. This means that the eigenvalue $\lambda_i$ can be
interpreted as a {$S/N$} ratio for $z_i$, and the quantities
$q_i\propto (z_i^2-1)$ can be used as estimators of the power spectrum
$\delta T_\ell\equiv\ell (\ell+1) C_{\ell}/ 2 \pi$, since
$\expec{q_i}\propto\sum_\ell W_\ell\delta T_\ell$ for some window function
$W_\ell$. The $q_i$ tend to probe smaller scales as $i$ increases and
the {$S/N$} drops. The band power measurements in Table 1 and Figure 3 have
been computed by normalizing the individual $q_i$ so that their window functions
integrate to unity and then averaging them in bands with a
minimum-variance weighting, to minimize error bars.
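The S/N decomposition reduces to one generalized symmetric eigenproblem, which `scipy.linalg.eigh` solves with exactly the normalization ${\bf b}_i^t{\bf\Sigma}{\bf b}_i=1$ used in equation (\ref{a17}); the covariances below are random SPD toys rather than real CMB and noise matrices.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)
n = 20
R = rng.standard_normal((n, n))
Sigma = R @ R.T + n * np.eye(n)          # toy noise covariance (SPD)
Q = rng.standard_normal((n, n))
S = Q @ Q.T                              # toy signal covariance

lam, B = eigh(S, Sigma)                  # solves S b = lambda Sigma b, b^t Sigma b = 1
order = np.argsort(lam)[::-1]            # sort by decreasing S/N eigenvalue
lam, B = lam[order], B[:, order]

# the S/N coefficients z = B^t x then satisfy <z_i z_j> = (1 + lambda_i) delta_ij
```

The two normalization identities below are what make $z_i^2-1$ an unbiased power estimator: the noise contributes unit variance to each coefficient and the signal contributes $\lambda_i$.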
\section{DATA ANALYSIS}
\subsection{Pipeline Tests}
We tested our data analysis pipeline by generating mock raw data sets
that incorporate the QMAP scan strategy. These mock data sets were
processed through the pipeline, recovering the original maps.
When adding Monte Carlo white and pink noise to these mock data sets, we recovered
maps with pixel noise consistent with the noise covariance matrix ${\bf\Sigma}$
computed by the pipeline. Likewise, when adding scan-synchronous offsets
to these mock data sets, we recovered the original maps as well as the
offsets. As expected, the original maps were faithfully recovered even when the
first harmonics of these mock offsets were varied slowly
throughout the flight.
We repeated our analysis for a range of pixel sizes. As expected,
we found that as long as the pixels were smaller than the Shannon oversampling
limit (about 2.5 times smaller than the FWHM), the maps $\tilde{\bf x}$ were virtually
independent of the pixelization.
We made $\tilde{\bf N}$ as band-diagonal as possible using a filter
${\bf D}$ of band width $L$, then neglected the tiny elements of
$\tilde{\bf N}$ further than $L/2$ from the diagonal.
Tests with increasing $L$-values showed that the results converged
for $L\sim 150$, so we used $L=320$ in the analysis to be conservative.
To test whether the Wiener maps were sensitive to our choice of
power spectrum normalization, we generated Wiener maps for fiducial
power spectra with $Q$=20, 30 and 40$\mu{\rm K}$. The visual difference
between the two extreme normalizations was minimal: the maps had
the same spatial features in the same locations, the 20$\mu{\rm K}$ map simply
being slightly smoothed relative to the 40$\mu{\rm K}$ map.
Finally,
if the beam size $\theta$ is overestimated by 1\%, the
band power $\delta T_\ell$ is overestimated by
$[(\theta\ell)^2-2]$ percent. The first term comes from the above-mentioned
Gaussian beam correction $B_\ell$ and the second from the calibration, which
involves the beam area $\propto\theta^2$.
Repeating our full analysis with the assumed FWHM reduced by
$1\sigma$ ($\sim 3\%$), the first effect
decreases the normalization of the two
combined Ka band powers in Table 1 by 0.3\% and 4\%, respectively,
whereas the second effect of course gives a 6\% increase.
\subsection{Data Tests}
The above-mentioned scan-synchronous offset was around 1 mK (FL2)
and 10 mK (FL1) peak-to-peak. Although our notch-filter technique immunized
the results against drifts in this offset, no such drifts were
actually detected.
How statistically significant is our detection of signal in the maps?
Consider the null hypothesis that a map $\tilde{\bf x}$ contains merely noise,
{\frenchspacing\it i.e.}, $\expec{\tilde{\bf x} {\tilde{\bf x}}^t}={\bf\Sigma}$. Given the alternative hypothesis
$\expec{\tilde{\bf x} {\tilde{\bf x}}^t}={\bf\Sigma}+{\bf S}$, one can show that the most powerful
``null-buster'' test for ruling out the null hypothesis uses the generalized
$\chi^2$-statistic
\beq{NullbusterEq}
\chi^2 \equiv {{\tilde{\bf x}}^t{\bf\Sigma}^{-1}{\bf S}{\bf\Sigma}^{-1}\tilde{\bf x} - \hbox{tr}\>{\bf\Sigma}^{-1}{\bf S}
\over
\left[2\hbox{tr}\>\left\{{\bf\Sigma}^{-1}{\bf S}{\bf\Sigma}^{-1}{\bf S}\right\}\right]^{1/2}},
\end{equation}
which can be interpreted as the number of ``sigmas'' at which the
null noise-only hypothesis is ruled out. The results of this test are given for
all the individual maps in D98 and H98, and show that signal is
detected at significance levels above $15\sigma$ in both flights.
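Equation (\ref{NullbusterEq}) in code, applied to toy noise-only and signal-plus-noise maps (identity covariances and amplitudes chosen for illustration). For a pure-noise map the statistic is approximately standard normal; a genuine signal drives it to many "sigmas".

```python
import numpy as np

def nullbuster(x, Sigma, S):
    """Generalized chi^2 'null-buster': sigmas at which noise-only is ruled out."""
    Si = np.linalg.inv(Sigma)
    num = x @ Si @ S @ Si @ x - np.trace(Si @ S)
    den = np.sqrt(2.0 * np.trace(Si @ S @ Si @ S))
    return num / den

rng = np.random.default_rng(7)
n = 100
Sigma = np.eye(n)                        # toy map-noise covariance
S = 4.0 * np.eye(n)                      # toy signal covariance
x_noise = rng.multivariate_normal(np.zeros(n), Sigma)
x_sig = rng.multivariate_normal(np.zeros(n), Sigma + S)
nu_noise = nullbuster(x_noise, Sigma, S)
nu_sig = nullbuster(x_sig, Sigma, S)
```

The same function applied to a difference of two overlapping maps implements the consistency tests described above: a difference map consistent with pure noise returns a statistic of order unity.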
Of these 11 maps (see D98 and H98), many have a substantial spatial overlap.
This allows a series of powerful consistency tests, since many potential
contaminants affect data from different bands and polarization channels differently.
As detailed in D98 and H98, we applied the same null-buster test to
the difference of the various maps in each band where they overlap spatially,
and in all cases found the difference maps consistent with pure noise.
Our S/N eigenmode analysis gives the same conclusion:
significant signal in the best eigenmodes of the individual
channels, but S/N-coefficients consistent with pure noise in the
difference maps.
Thus all of the significant signal appears to be common to
the different channels, indicating that the bulk of the detected signal is
due to temperature fluctuations on the sky.
\subsection{Foreground Contamination}
To constrain the frequency dependence of our signal, we repeated
the null-buster test for weighted difference maps of the form
\beq{foreground}
\tilde{\bf x} \equiv \tilde{\bf x}_1-(\nu_2/\nu_1)^{\beta}\tilde{\bf x}_2,
\end{equation}
where the map $\tilde{\bf x}_1$ and the frequency $\nu_1$ refer to the Ka-band
and $\tilde{\bf x}_2$ and $\nu_2$ to the Q-band.
This placed a $2\sigma$ lower limit of $-1.4$ on the spectral
index $\beta$,
which means that the signal cannot be dominated by foregrounds such as
free-free emission ($\beta \sim-2.15$) or synchrotron radiation
($\beta \sim -2.8$). A more detailed foreground analysis,
cross-correlating the maps with various foreground templates,
will be presented in a separate paper (de Oliveira-Costa {{\frenchspacing\it et al.}} 1998).
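A toy version of this null test is sketched below. The scaling of the injected component is constructed so that the weighting of equation (\ref{foreground}) nulls it exactly at $\beta=\beta_{\rm fg}$, sidestepping the paper's full antenna-temperature bookkeeping; amplitudes and frequencies are illustrative placeholders.

```python
import numpy as np

def weighted_diff(x1, x2, nu1, nu2, beta):
    """Weighted difference map of eq. (foreground)."""
    return x1 - (nu2 / nu1) ** beta * x2

rng = np.random.default_rng(8)
n, nu1, nu2 = 500, 30.0, 40.0            # Ka- and Q-band center frequencies (GHz)
r = nu2 / nu1
beta_fg = -2.15                          # free-free-like index (illustrative)

cmb = rng.standard_normal(n)             # frequency-independent component
fg1 = 0.3 * rng.standard_normal(n)       # foreground in map 1
fg2 = r ** (-beta_fg) * fg1              # scaled so beta = beta_fg nulls it below
x1, x2 = cmb + fg1, cmb + fg2

d_fg = weighted_diff(x1, x2, nu1, nu2, beta_fg)   # foreground-nulled difference
d_cmb = weighted_diff(x1, x2, nu1, nu2, 0.0)      # CMB-nulled difference
```

Scanning $\beta$ and applying a null-buster-style statistic to each weighted difference is what yields the quoted lower limit: values of $\beta$ at which significant signal survives the differencing are excluded as the dominant spectral index.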
\vskip-2.3cm
\hglue-0.1cm
\centerline{\vbox{\epsfxsize=11cm\epsfbox{everything_map_grid.ps}}}
\vskip-2cm
\figcaption{Wiener-filtered map of the QMAP combined data.
The dashed circle shows the Saskatoon coverage.
\label{MapFig}
}
\section{Results \& Conclusions}
The Wiener map obtained by combining all the data from both flights
is shown in Figure~2. The above-mentioned generalized $\chi^2$-test
shows that the signal observed in this figure is significant at the
$\gtrsim 15\sigma$-level.
This map covers 527 square degrees, and has a substantial overlap
with the 200 square degree Saskatoon map. Visual comparison of the two maps in
the overlap region reveals striking similarities, providing further indication
that the bulk of the detected signal is due to temperature fluctuations on the
sky rather than systematic effects. A detailed statistical comparison of the
QMAP and Saskatoon data sets will be presented in a future paper.
\bigskip
\vskip-0.2cm
\centerline{\vbox{\epsfxsize=8.0cm\epsfbox{angelica_powerfig.eps}}}
\vskip-0.3cm
\figcaption{Angular power spectrum of combined data set
shown in Figure~2. The results are listed in Table~1.}
\bigskip
The power spectrum for the total data set (FL1+FL2) is given in
Figure~3 and Table~1.
We see that it agrees well with the Saskatoon power spectrum (Netterfield {{\frenchspacing\it et al.}} 1997),
showing a rise on degree scales to power levels substantially above those
found on large scales by COBE.
The calibrated raw data with pointing will be made public after
publication of this {\it Letter}.
\bigskip
{\footnotesize
Table~1. --- The angular power spectrum.
The band powers $\delta T_\ell\equiv[\ell(\ell+1)C_\ell/2\pi]^{1/2}$
have window functions whose mean and
rms width are given by $\ell_{eff}$ and $\Delta\ell$.
The full window functions are available at
{\it http://dept.physics.upenn.edu/cmb.html}
and {\it http://pupgg.princeton.edu/$\sim$cmb/welcome.html}.
The errors $\delta T$ are uncorrelated within each pair of
Ka-points. The first and second one in each pair is
dominated by sample variance and detector noise, respectively.
Calibration errors are not included (see H98).
\begin{center}
\begin{tabular}{lcrcl}
\hline
\hline
\multicolumn{1}{l}{Flight} &
\multicolumn{1}{l}{Band} &
\multicolumn{1}{c}{$\ell_{eff}$} &
\multicolumn{1}{c}{$\Delta \ell $} &
\multicolumn{1}{c}{$\delta T$} \\
\hline
1 &Ka &92 &45 &$49^{+6}_{-7}$\\
&Q &84 &46 &$47^{+8}_{-10}$\\
\hline
2 &Ka &91 &47 &$46^{+10}_{-12}$\\
&Ka &145 &64 &$63^{+10}_{-12}$\\
&Q &125 &67 &$56^{+5}_{-6}$\\
\hline
1+2 &Ka &80 &41 &$47^{+6}_{-7}$\\
&Ka &126 &54 &$59^{+6}_{-7}$\\
&Q &111 &64 &$52^{+5}_{-5}$\\
\hline
\end{tabular}
\end{center}
}
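As a small worked example of the convention used in this table, the band powers $\delta T_\ell$ and the underlying $C_\ell$ are related by an elementary conversion (helper names below are illustrative):

```python
import math

# The band powers of Table 1 are quoted as deltaT_l = [l(l+1) C_l / 2 pi]^(1/2).
# Two small helpers (illustrative names) convert between the two conventions.

def delta_T(ell, C_ell):
    return math.sqrt(ell * (ell + 1) * C_ell / (2.0 * math.pi))

def C_ell_from_delta_T(ell, dT):
    return 2.0 * math.pi * dT ** 2 / (ell * (ell + 1))

# Round trip at the combined-data Ka point (l_eff = 80, deltaT = 47 uK):
C80 = C_ell_from_delta_T(80, 47.0)
print(delta_T(80, C80))  # recovers 47.0
```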
\bigskip
We would like to thank Wayne Hu for helpful comments.
This work was supported by
a David \& Lucile Packard Foundation Fellowship (to LP),
a Cottrell Award from Research Corporation, an NSF NYI award,
NSF grants PHY-9222952 and PHY-9600015,
NASA grant NAG5-6034 and Hubble Fellowships
HF-01044.01$-$93A (to TH) and HF-01084.01$-$96A (to MT)
by STScI, operated by AURA, Inc. under NASA contract NAS5-26555.
\section{Introduction \label{com}}
In years past we have been working on weak interaction inverse beta
decay\cite{WL1:2006,WL2:2007,Cirillo:2012}
including electromagnetic interactions with collective plasma modes of motion.
Our considerations have been recently
criticized\cite{Ciuchi:2012}. Our neutron production rate\cite{WL1:2006,WL2:2007} is
a factor of $\sim 300$ larger than that of Maiani\cite{Ciuchi:2012}. Our purpose is to point
out the source of this difference so that the physical principles may be resolved.
In Sec.\ref{NPM}, the calculation of neutron production for a neutral plasma of
Ciuchi {\it et al}\cite{Ciuchi:2012} is briefly reviewed. Since the surface plasmas
of hot cathodes within which neutron production is observed\cite{Cirillo:2012} are fully
ionized, the neutral atomic gas case is not relevant. The irrelevant two body wave
function\cite{Ciuchi:2012} employed for the neutral gas case should be replaced by
the two body Coulomb wave function relevant to the fully ionized plasma.
This is the usual fully ionized plasma situation, for example, in the study of the weak
interaction electron capture reactions
\begin{eqnarray}
({\rm general})\ \ \ \ e^- + \ ^A_ZX \to \ ^A_{(Z-1)}X +\nu_e\ ,
\nonumber \\
({\rm special\ case})\ \ \ \ e^-+p^+ \to n+\nu_e\ ,
\label{intro1}
\end{eqnarray}
in solar\cite{Bahcall:1962} physics. Scattering Coulomb wave functions also enter
laboratory high energy\cite{Bardin:1994} physics.
The case of the fully ionized plasma is discussed in
Sec.\ref{FIPM}. In Sec.\ref{rnpr} our previous neutron production
estimates\cite{WL1:2006,WL2:2007,Cirillo:2012} are verified employing the scattering
Coulomb wave function.
In the concluding Sec.\ref{conc} we briefly indicate how collective many body interactions
may modify the situation.
\section{Neutral Gas of Atoms \label{NPM}}
For a gas of neutral objects which consist of a heavy electron bound to a proton,
the Coulomb wave function in the zero total momentum frame
\begin{equation}
\psi_{e^- p^+}({\bf r})=\frac{e^{-r/a}}{\sqrt{\pi a^3}}\ \ \ \ \ \ \ \
a=\frac{\hbar^2}{me^2},
\label{npm1}
\end{equation}
wherein \begin{math} {\bf r}={\bf r}_{e^-}-{\bf r}_{p^+} \end{math} and
\begin{math} m \end{math} is the reduced mass of the heavy electron.
With the lowest order Fermi cross section for a heavy electron to scatter
from a proton producing a neutron and a neutrino,
\begin{eqnarray}
\tilde{\nu }=v\sigma=\frac{c}{2\pi }\left(\frac{G_Fm^2}{\hbar c}\right)^2
(g_V^2+3g_A^2)\times
\nonumber \\
\left(\frac{\hbar }{mc}\right)^2(\gamma^2-\gamma_{Threshold}^2).
\label{npm2}
\end{eqnarray}
If \begin{math} n \end{math} denotes the number of bound neutral objects
per unit volume, then the transition rate per unit time per unit volume to produce
neutrons from the decay of the neutral objects
\begin{eqnarray}
\varpi_0((e^-p^+)\to n+\nu_e)=nv\sigma |\psi_{e^- p^+}(0)|^2\ ,
\nonumber \\
\varpi_0=\left(\frac{n}{\pi a^3}\right)v\sigma =
\left(\frac{n\tilde{\nu}}{\pi a^3}\right).
\label{npm3}
\end{eqnarray}
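Both ingredients of Eq.(\ref{npm3}), the unit normalization of the bound-state wave function of Eq.(\ref{npm1}) and its contact density $|\psi_{e^- p^+}(0)|^2=1/(\pi a^3)$, are easy to verify numerically (a quick check in units where $a=1$):

```python
import math
import numpy as np

# Numerical check of the 1s wave function of Eq. (npm1):
# psi(r) = exp(-r/a)/sqrt(pi a^3) is unit-normalized, and the contact
# density is |psi(0)|^2 = 1/(pi a^3).  Units with a = 1 are used.

a = 1.0
r = np.linspace(0.0, 40.0 * a, 200_001)
psi_sq = np.exp(-2.0 * r / a) / (math.pi * a**3)

# Trapezoidal integral of |psi|^2 * 4 pi r^2 dr over the radial coordinate.
integrand = psi_sq * 4.0 * math.pi * r**2
norm = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

print(norm)                          # ~ 1.0: psi is normalized
print(psi_sq[0] * math.pi * a**3)    # = 1.0: |psi(0)|^2 = 1/(pi a^3)
```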
Up to this point we are in agreement with the comment of
Ciuchi {\it et al}\cite{Ciuchi:2012}. Our disagreement involves the more
physical regime wherein the plasma is fully ionized. The particles are
charged and not neutral and the wave function Eq.(\ref{npm1}) chosen by
Ciuchi {\it et al}\cite{Ciuchi:2012} is thereby incorrect. The correct wave
function is written below.
\section{Fully Ionized Plasma Modes \label{FIPM}}
For a fully ionized plasma, the constituents of the plasma are the charged
heavy electron and the proton. We seek the scattering state production of
neutrons
\begin{equation}
e^-+p^+\to n+\nu_e.
\label{fipm1}
\end{equation}
The wave function factor \begin{math} |\psi (0)|^2 \end{math} needed to
include Coulomb attraction into the scattering is changed from the neutral
plasma value \begin{math} 1/(\pi a^3) \end{math}. The positive energy
\begin{math}E=mv^2/2=\hbar^2k^2/2m \end{math}
scattering Coulomb wave function\cite{S. Fluge:1970} must replace
Eq.(\ref{npm1}); i.e., in terms of the Gamma function
\begin{math} \Gamma (z) \end{math} and the confluent hypergeometric
function \begin{math} _1F_1(\xi ;\zeta ;z) \end{math}
\begin{eqnarray}
\psi ({\bf r})=e^{i{\bf k\cdot r}}\Big[e^{\pi /(2ka)}
\Gamma \left(1-\frac{i}{ka}\right)\times
\nonumber \\
_1F_1\left(\frac{i}{ka};1;i\left(kr-{\bf k\cdot r}\right)\right)\Big].
\label{fipm2}
\end{eqnarray}
If \begin{math} r\to 0 \end{math}, then
\begin{equation}
|\psi (0)|^2=\frac{(2\pi e^2/\hbar v)}{1-\exp(-(2\pi e^2/\hbar v))}\ .
\label{fipm3}
\end{equation}
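Eq.(\ref{fipm3}) can be explored numerically. The sketch below (an illustration, with velocities expressed in units of $c$) evaluates the Coulomb enhancement factor and exhibits its two limits: $|\psi(0)|^2\to 2\pi\alpha c/v$ for slow relative motion and $|\psi(0)|^2\to 1$ in the plane-wave limit:

```python
import math

# Numerical sketch of the Coulomb enhancement factor of Eq. (fipm3):
# |psi(0)|^2 = (2 pi alpha c/v) / (1 - exp(-2 pi alpha c/v)),
# which replaces the bound-state factor 1/(pi a^3) of the neutral-gas case.

ALPHA = 1.0 / 137.035999  # fine structure constant

def coulomb_factor(v_over_c):
    eta = 2.0 * math.pi * ALPHA / v_over_c
    return eta / (1.0 - math.exp(-eta))

# Slow relative motion (v << 2 pi alpha c): strong attractive enhancement,
# |psi(0)|^2 -> 2 pi alpha c/v.
print(coulomb_factor(1e-4))
# Fast motion: the factor tends to 1 (plane-wave limit).
print(coulomb_factor(0.5))
```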
The neutron production rate per unit time per unit volume is then
\begin{eqnarray}
\varpi(e^-+p^+\to n+\nu_e)=n^2 v\sigma |\psi(0)|^2=
n^2 \tilde{\nu } |\psi(0)|^2\ ,
\nonumber \\
\varpi=\frac{2\pi \alpha c n^2\sigma }{1-\exp(-2\pi \alpha c/v)}\ ,
\label{fipm4}
\end{eqnarray}
wherein \begin{math} \alpha =e^2/\hbar c \end{math}.
\section{The Neutron Production Ratio \label{rnpr}}
The ratio \begin{math} \varpi /\varpi_0 \end{math} of the neutron
production rates per unit time per unit volume can be deduced from
Eqs.(\ref{npm3}) and (\ref{fipm4}). Thermal averaging
at a temperature small on the scale of the heavy electron mass
\begin{math} k_BT\ll mc^2 \end{math} yields\cite{Bahcall:1962} the ratio of the transition rates
per unit time per unit volume for producing neutrons
\begin{eqnarray}
\eta=\frac{\varpi}{\varpi_0}=2\pi^2\alpha na^3\left<\frac{c}{v}\right>,
\nonumber \\
\eta \approx 2\pi^2\alpha na^3 \sqrt{\frac{2mc^2}{\pi k_BT}}\ ,
\label{rnpr1}
\end{eqnarray}
where \begin{math} n \end{math} is the number of electrons per unit
volume.
Previously\cite{WL2:2007} estimated temperatures of hydride
cathodes \begin{math} T\sim 5\times 10^3\ ^oK \end{math}
are in agreement with the observed hot color of their brightly light emitting
surfaces\cite{Cirillo:2012}. The resulting neutron production as described by
Eq.(\ref{rnpr1}) is given by \begin{math} \eta \sim 5\times 10^2 \end{math}
in rough agreement with our previous estimates\cite{WL1:2006,WL2:2007, Cirillo:2012}. The
factor of \begin{math} \sim 300 \end{math} discrepancy is thereby resolved.
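As a numerical cross-check of Eq.(\ref{rnpr1}), the sketch below evaluates $\eta$ for user-supplied parameters. The mass, temperature, and combination $na^3$ used in the example are placeholders for illustration, not the heavy-electron values estimated in the works cited above:

```python
import math

# Sketch of the enhancement ratio of Eq. (rnpr1):
# eta ~ 2 pi^2 alpha (n a^3) sqrt(2 m c^2 / (pi k_B T)),
# where sqrt(2 m c^2/(pi k_B T)) is the Maxwell-Boltzmann average <c/v>.
# The parameter values below are illustrative placeholders.

ALPHA = 1.0 / 137.035999
KB_EV = 8.617333e-5            # Boltzmann constant in eV/K

def eta_ratio(n_a3, mc2_eV, T_kelvin):
    """n_a3: dimensionless n*a^3; mc2_eV: rest energy of the (heavy)
    electron in eV; T_kelvin: plasma temperature in kelvin."""
    mean_c_over_v = math.sqrt(2.0 * mc2_eV / (math.pi * KB_EV * T_kelvin))
    return 2.0 * math.pi ** 2 * ALPHA * n_a3 * mean_c_over_v

# Purely illustrative: ordinary electron rest energy, T ~ 5e3 K, n a^3 = 1.
print(eta_ratio(1.0, 0.511e6, 5.0e3))
```

The point of the exercise is the scaling $\eta\propto na^3/\sqrt{T}$: the enhancement grows with density and with decreasing temperature.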
\section{Conclusion \label{conc}}
Many body plasma effects on neutron production may be described by the correlations
between the electron coordinates \begin{math} ({\bf r}_1, \cdots ,{\bf r}_N) \end{math}
and proton coordinates \begin{math} ({\bf s}_1, \cdots ,{\bf s}_N) \end{math} as
given by the correlation function
\begin{equation}
C=\frac{1}{N}
\left<\sum_{i=1}^N\sum_{j=1}^N \delta\big({\bf r}_i-{\bf s}_j\big)\right>=n\xi
\label{conc1}
\end{equation}
wherein \begin{math} \xi=|\psi (0)|^2 \end{math} only if there are merely two body
collisions in the plasma. Collective oscillations and many body collisions would tend to
raise the value of \begin{math} \xi \end{math} but require a many body Green's function
analysis to include such effects in detail. However, previous discrepancies
are now understandable.
We reiterate that at the level of dilute plasma two-body correlations dealt with in
previous work\cite{Ciuchi:2012}, the order of magnitude of the discrepancy has herein
been resolved.
\section{Introduction}
One of the goals of the 4SECURail project\footnote{https://4SECURail.eu November 2019 -- November 2021.} is to observe the possible approaches, benefits, limits, and costs of introducing formal methods inside the \emph{requirements definition} process in the context of railway-signaling systems.
This has been done by setting up a ``demonstrator'' whose purpose is to exemplify the application of state-of-the-art tools and methodologies to a selected railway case study, collecting meaningful information on the costs and benefits of the process.
The overall context and objectives of this project and experimentation are described in \cite{refD2.1,refrssrail}; in this paper, we describe specifically the approach that has been followed for the formal modeling and initial analysis of the case study, which has seen the exploitation of three different formal verification frameworks.
The rest of the paper is structured as follows: In Section 2, we provide details about the case study that has been the object of the experimentation; in Section 3, we present the formal modeling approach that has been adopted in the demonstrator process; in Section 4, we describe the various kinds of analysis performed. In Sections 5 and 6, we respectively mention some related work and draw our conclusions.
\section{The reference case study}
The transit of a train from an area supervised by a Radio Block Centre (RBC) to an adjacent area supervised by another RBC occurs during the so-called RBC-RBC handover phase and requires the exchange of information between the two RBCs according to a specific protocol. This exchange of information is supported by the communication layer specified within the UNISIG SUBSET-039~\cite{sub39}, UNISIG SUBSET-098~\cite{sub98}, and UNISIG SUBSET-037~\cite{sub37}, and the whole stack is implemented by both sides of the communication channel.
Figure~\ref{CASESTUDY} summarizes the overall structure of the UNISIG standards, supporting the handover of a train.
The 4SECURail case study is based on two main sub-components of the communication layers constituting the RBC-RBC handover. The considered components are the Communication Supervision Layer (CSL) of the SUBSET-039 and the Safe Application Intermediate SubLayer (SAI) of the SUBSET-098. These two components are the main actors that support the creation/deletion of safe communication lines and protect the transmission of messages exchanged on such lines.
In particular, the CSL is responsible for requesting the activation -- and in case of failure, the re-establishment -- of the communication line, for continuously controlling its liveliness, and for the forwarding of the handover transaction messages. The SAI is responsible for ensuring the absence of excessive delays, repetitions, losses, or re-ordering of messages during their transmissions. This is achieved by adding sequence numbers and time-related information to the RBC messages.
The RBC/RBC communication line consists of two sides that are properly configured as ``initiator'' and ``called''.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.75 \textwidth]{image-casestudy-layers.pdf}
\caption{Overall structure of the 4SECURail case study} \label{CASESTUDY}
\end{figure}
With respect to the SUBSET-098, the 4SECURail case study neither includes the EuroRadio Safety Layer (ER), which is responsible for preventing corruption, masquerading and insertion issues during the communications, nor the lower Communication Functional Module (CFM) interface. With respect to the SUBSET-039, the 4SECURail case study does not include the description of the activation of multiple, concurrent RBC-RBC handover transactions when trains move from a zone supervised by an RBC to an adjacent zone supervised by another RBC.
From the point of view of the CSL, the RBC messages are forwarded to/from the other RBC side without the knowledge of their specific contents or session to which they belong.
The case study of the project, as derived from the above-mentioned standards, is described in natural language in Deliverable D2.3~\cite{refD2.3}, along with the rationale for its choice.
Of course, the level of abstraction of these requirement documents is not that of an executable system specification, but a higher one.
\section{The formal modeling}
\subsection{From natural language to executable UML specifications}
As shown in Figure~\ref{FromTo}, the first step towards the generation of formal models of the system is the description -- in terms of extremely simple SysML/UML features -- of the system components described by the natural language requirements. It is well known that requirements described in free-style natural language suffer the risk of being unclear (e.g., redundant), potentially ambiguous, in part contradictory, and possibly not describing essential aspects. Moreover, since the railway infrastructure is essentially a system of systems, specifying and guaranteeing the desired interoperability among the various components is a more challenging task than specifying and guaranteeing the independent safety of each singularly specified component.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.7 \textwidth]{from-NL-to-formal-models.pdf}
\caption{From natural language to formal models} \label{FromTo}
\end{figure}
Constructing a possible implementation using a potentially executable notation \emph{with clear semantics} allows (at least) the removal of the underlying ambiguities but, until the system is thoroughly tested or verified, the risk of logical deficiencies persists. However, beyond the beneficial ``natural language interpretation'' step, the ``executable implementation'' step risks being a critical source of mistakes.
Therefore, the subsequent formal modeling and analysis step of the ``executable implementation'' becomes essential.
The association of the term ``clear semantics'' with the term ``UML'' can be, in general, quite problematic.
In our case, we have used the very minimal set of UML features needed for our executable modeling, avoiding all the complexities related, for example, to composite states, transition priorities, deferred events, and making the explicit assumption of FIFO event queues. The extreme simplicity of the resulting subset aims not only at associating a ``precise semantics'' and a \emph{simple intuitive meaning} with the designs but also at an ``easy translation'' of the designs into several formal notations.
Appendix A shows our reference UML state-machine diagrams for the CSL and SAI system components of the case study, in both their \emph{initiator-side} and \emph{called-side} version.
\subsection{From executable UML specifications to verifiable scenarios}
The system requirements in Deliverable D2.3~\cite{refD2.3} have been the basis for the design of the executable models of the CSL and SAI components. However, in order to have an actually verifiable system, we need a \emph{closed} system that contains the specified components plus all the needed environment components that stimulate, receive data, and forward messages from the initiator to the called side of the system. In order to deal with the time-related aspects of the specification, we also introduce a timer component that allows all the other components to proceed in parallel in an asynchronous way but relatively at the same speed\footnote{Since all the system components are modeled as executing a cyclic activity, the timer component just constrains the frequency of the cycles to be the same while allowing the overlapping of their behavior.}.
Figure~\ref{systemcomponents} shows the resulting structure of the whole system. All the added environment and timer components can also be designed in UML to facilitate the system encoding into the selected formal notations. An example of these environment components is contained in Appendix A.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4 \textwidth]{image-system-components.pdf}
\caption{The complete executable system structure} \label{systemcomponents}
\end{figure}
It is infeasible/impractical to define these environment components in their full possible generality because the system components are heavily dependent on several configuration parameters. It makes more sense to define them according to the properties we intend to verify on the complete closed system.
As a first step of formal modeling, the executable UML system diagrams corresponding to a given scenario are translated into the notation accepted by the UMC\footnote{https://fmt.isti.cnr.it/umc} tool.
At the beginning of the project, the possibility of designing the SysML system using a commercial MBSE framework -- namely SPARX-EA\footnote{https://sparxsystems.com/products/ea/index.html} -- has been evaluated.
This approach has been abandoned because of time and effort constraints of the project. Implementing a translator from the SPARX-generated XMI towards UMC would have been a significant effort. Moreover, it would have tied the whole analysis approach to a specific commercial tool, a fact which was not considered desirable.
Therefore, our initial SysML models have the structure of simple graphical designs; their role is just that of constituting an intermediate, easy-to-understand documentation halfway between the natural language requirements and the formal models\footnote{More details can be found in \cite{refrssrail}.}.
Starting from the UMC notation, further formal models have been automatically generated in the notations accepted by the ProB\footnote{https://prob.hhu.de/} and CADP/LNT\footnote{https://cadp.inria.fr/} tools.
UMC~\cite{refKANDI,refUMC2,refUMC1} has been chosen as the target of the initial formal encoding because it is a tool natively oriented to fast-prototyping of SysML systems.
It supports a textual notation of UML state-machine diagrams that directly reflects the graphical counterpart, allows fast state-space exploration, state- and event-based (on-the-fly) model checking, and detailed debugging of the system.
Last but not least, it is part of a framework developed locally at ISTI. We have a deep insider knowledge of it that allowed us to easily implement translators towards the other formal notations within the time and effort constraints of the project. However, UMC is essentially a teaching/research-oriented academic tool and lacks the maturity, stability, and support level required by an industry-usable framework.
Also for this reason we have planned inside the project the exploitation of further, more industry-ready formal frameworks.
ProB \cite{refProB2008} has been selected as the second target of the formal encoding because of its recognized role (see e.g. \cite{refICSE,refTSE}) in the field of formal railway-related modeling. It is supported by more than one very user-friendly GUI. It allows LTL/CTL model checking, state-space exploration, state-space projections, and trace descriptions in the form of sequence diagrams.
Last but not least, it is a framework with which we have already had some previous modeling experience \cite{refD43}, and that did not require a learning-from-scratch step.
CADP/LNT~\cite{refCADP,refLNT} has been selected as the third target of the formal encoding because of its theoretical roots in LTS-related theories, which allow reasoning in terms of minimizations, bisimulations, and compositional verifications. CADP is a rich toolbox that supports a wide set of $\mu$-calculus-based branching-time logics and a powerful scripting language (SVL~\cite{refSVL}) to support verification. Also in this case, its choice has been influenced by the previous experiences we have had with this framework \cite{refTACAS,refFMSD}.
There are several ways in which SysML/UML designs might be encoded into the ProB and LNT formal notations.
In our case, we made the choice to generate both ProB and LNT models \emph{automatically} from the UMC model.
The translation implemented in our demonstrator is still a preliminary version and does not exploit at their best all the features potentially offered by the target frameworks\footnote{E.g., all message parameters are mapped into integer values without considering the specific subrange to which they might belong.}. Nevertheless, the availability of the automatic translation proved to be an essential aspect of the demonstrated approach.
Our models and scenarios have been developed incrementally, with a long sequence of refinements and extensions. At every single step, we have been able to quickly perform the lightweight formal verification of interest with almost no effort. This would not have been possible without an automatic generation of the ProB and LNT models.
In the following, we will give some details on the overall structure of the generated models, referring to D2.5~\cite{refD2.5} for a broader presentation. All the UMC/ProB/LNT models, specifying the scenarios of interest, are available from an open access repository~\cite{refZENmodels}, as well as the source code of the applied translators~\cite{refZENcode}.
\subsubsection{UMC encoding}
In UMC, a system definition is specified as a set of active objects that are instances of class definitions. A class declaration
specifies a template of state-machine, defining the set of events accepted by the machine, its local variables, and the state-machine behavior when state transitions are triggered.
State machine transitions are encoded in a simple textual form and specify, as shown in Figure~\ref{UMCrule}:
\begin{itemize}
\vspace{-1pt}
\item an optional transition label (R9\_ICSL\_userdataind),
\vspace{-3pt}
\item the source and target states of the transition (COMMS, COMMS),
\vspace{-3pt}
\item a block \{...\} containing: the triggering event of the transition (ISAI\_DATA\_indication), possibly with parameters and guards, and the sequence of actions to be performed as an effect of the transition (the sending of the IRBC\_User\_Data\_indication signal to the RBC\_User component and the assignment to the receiveTimer variable).
\end{itemize}
Appendix B shows the UMC encoding of the component I\_CSL whose UML state-machine diagram is shown in Appendix A.
The mapping of the UML diagrams to the UMC encoding is almost direct.
There are only a few aspects that deserve some attention. One point is that UMC transitions are ``atomic'' also at the system level, while the UML transitions are ``atomic'' only with respect to the state-machine to which they belong. Therefore, if we have a UML transition that sends several signals to other objects, a correct modeling of the behavior requires splitting the UML transition into several atomic steps. An example of this is shown in Figure~\ref{UMCr6}, where a UML transition sending three signals is split into a sequence of three UMC transitions. The second point is that in UML, when a dispatched event does not trigger any transition it is simply removed from the event queue and discarded.
This behavior is implicit in the state-machine diagram, but it is reasonable to make it explicit in the UMC designs to simplify the translation of the models into the other notations. This also allows distinguishing more clearly the case in which an event is intentionally (correctly) discarded from the cases in which the arrival of the event is simply not relevant, or in which it reveals a truly unintended behavior, i.e., a system malfunction.
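The dispatch semantics just described (a FIFO event pool per machine; a dispatched event either triggers a transition or is discarded) can be captured by a toy interpreter. The sketch below is not the UMC implementation, and the state and event names are simplified for the example:

```python
from collections import deque

# Toy interpreter for the UML dispatch semantics discussed above:
# each state machine owns a FIFO event pool, and a dispatched event
# either triggers a transition or is silently discarded.

class StateMachine:
    def __init__(self, initial, transitions):
        # transitions: {(state, event): target_state}
        self.state = initial
        self.transitions = transitions
        self.pool = deque()        # FIFO event pool

    def send(self, event):         # another machine pushes an event
        self.pool.append(event)

    def step(self):
        """Dispatch one event; return True iff it triggered a transition."""
        if not self.pool:
            return False
        event = self.pool.popleft()
        key = (self.state, event)
        if key not in self.transitions:
            return False           # event discarded (UML default behavior)
        self.state = self.transitions[key]
        return True

# Illustrative machine loosely inspired by the CSL states:
csl = StateMachine("NOCOMMS", {
    ("NOCOMMS", "connect_req"): "COMMS",
    ("COMMS", "disconnect"): "NOCOMMS",
})
csl.send("disconnect")   # not accepted in NOCOMMS: will be discarded
csl.send("connect_req")
csl.step()               # discards "disconnect"
csl.step()               # triggers NOCOMMS -> COMMS
print(csl.state)         # COMMS
```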
\vspace{-10pt}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.7 \textwidth]{image-UMCrule.pdf}
\caption{Textual encoding of a state-machine transition} \label{UMCrule}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8 \textwidth]{image-R6.pdf}
\caption{Splitting a UML transition into a sequence of UMC atomic transitions} \label{UMCr6}
\end{figure}
\subsubsection{ProB encoding}
A system specification is structured in ProB as a ``B machine''. In our case, since the system under analysis is composed of several mutually interacting state-machines (and the B language is not able to deal with this concept), we need to ``merge'' all these components into a unique, global state-machine. This has several implications:
\begin{itemize}
\item The class attributes of UML state-machines must be merged into a single B state-machine definition. This may require the prefixing of the variable names with the component names to avoid name clashes. The same manipulation has to be done for the operation names (transition labels in UMC) and the other entities that may require duplication.
\item The currently active state of a UML state-machine is represented in B by the current value of an ad-hoc variable \emph{statemachine\_STATE}. There is one such variable for each UML state-machine.
\item Within the B machine structure, all types, constants, and variable definitions and initializations must appear at the beginning of the machine definition. This disrupts the original structure of the system, forcing us to spread the UML state-machine definition into several places in the B machine specification.
\item In UML state-machines, the event pool (a buffer implementing asynchronous communications that contains at each moment the set of signals arrived in a state-machine but not yet dispatched or discarded) is part of the engine support and thus is not explicitly modelled. In B these event-pool components must be explicitly modelled. This is because, contrary to UMC, B is not a tool natively designed for handling UML state-machines. Therefore ``buffer'' variables representing the state-machine event pool are added to the B model. Consequently, the action of sending a signal to another state-machine will be modelled with the insertion of a value to the corresponding variable buffer, and the dispatching of a signal to trigger a transition will be modelled with the extraction of the first element of such a buffer.
\item Each transition rule definition of the UMC state-machine design is mapped onto an equivalent operation of the B machine.
\end{itemize}
Figure \ref{PROBrule} shows, as an example, the ProB encoding of the UML transition R4 of the initiator CSL component, while the full code of the state-machine is shown in Appendix C.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.75 \textwidth]{image-PROBrule.pdf}
\caption{Textual encoding of a UMC (left) and ProB (right) state-machine transition} \label{PROBrule}
\end{figure}
\vspace{-10pt}
\subsubsection{CADP/LNT encoding}
LNT is one of the formal notations accepted by the CADP verification framework. The notation is a simplified variant of E-LOTOS~\cite{elotos}, of which it preserves the expressiveness while adopting a more user-friendly and regular syntax borrowed from imperative and functional programming languages. A system is described in LNT as a parallel composition of (parametric) processes, which synchronize upon a statically defined set of events. A process can have private variables that can be manipulated with classical imperative statements.
The global environment is constituted by the data types and functions used by the processes.
An LNT specification is internally translated into the LOTOS~\cite{lotos} algebraic notation and can be analyzed using the CADP toolbox.
In this case, each UMC state-machine is associated with an independent LNT process. All the processes do not share any memory and interact through synchronous actions in the typical style of process algebras.
Each process handles a local event pool modelled as a FIFO buffer and is \emph{always} enabled to accept synchronizations from other processes willing to push a new event in the queue. Beyond accepting incoming messages, the LNT process can internally evolve, performing internal steps that transform the local status or synchronizing with other processes by sending messages towards other state-machines.
The final system is obtained by composing in parallel all the processes, which synchronize on the corresponding actions of sending and receiving a message.
Figure~\ref{LNTprocess} shows the overall structure of a state-machine process corresponding to the initiator CSL, while the full code of the process is shown in Appendix D.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9 \textwidth] {image-LNTprocess.pdf}
\caption{The LNT structure corresponding to the initiator CSL state-machine} \label{LNTprocess}
\end{figure}
\vspace {-10pt}
\section{The formal analysis}
The first goal of our analysis has been the proof that all three generated models are equivalent. This has been done by saving the possible behavior of the models in the form of Labelled Transition Systems\footnote{While in the case of CADP and UMC saving a model in the .aut textual LTS format is available as a built-in feature of the framework, in the case of ProB this LTS generation has been obtained through an automated transformation of the model state-space originally saved in the ProB ``.statespace'' textual format.}, and by applying comparison tools\footnote{e.g. mCRL2 ltscompare or CADP bcg\_cmp.} to verify that the ProB and LNT models are strongly equivalent to the UMC models\footnote{UMC can be configured to associate the LTS transition labels with the UMC transition labels, or with the occurring communications actions, or with other observable events.}.
The main goal of the demonstrator, however, is \emph{not} the complete formal verification of a (fragment of a) standard, but the exemplification of the \emph{categories} of costs and benefits that may come into play with the choice of exploiting formal methods for the improvement of system requirements documents.
The focus of our formal analysis is, therefore, to show with some evidence \emph{how} formal methods may be of help in detecting the design errors potentially introduced while producing the UML executable model, in verifying the high-level properties expected of the full system and of its specific components, and in generating clear and rigorous (graphical) feedback on the specified system to the requirements designers.
The detailed analysis of the costs and benefits, not only qualitative but also, as far as possible, quantitative, is the object of a separate 4SECURail deliverable \cite{refD2.6}.
From our experience, it has become evident that formal methods can be used in a lightweight (i.e., almost ``push button'') way or in an ``advanced'' way. These two degrees of exploitation of formal methods require a very different level of effort and background.
A rigorous static analysis of the formal models is probably the simplest example of lightweight use of formal methods. Just loading a system specification in the verification tool may immediately reveal mistakes and anomalies in the code (type violations, non-relevant updates, missing initializations, mismatched message parameters, etc.).
Other behavioral properties, like the absence of deadlocks or examples of reachability of certain states or events, can still be verified with just a button push or by writing extremely simple logical properties. Trace examples or counter-examples can be visualised in the form of a UML message sequence diagram\footnote{This is natively possible in the UMC framework and, very recently, also in the Tcl/Tk version of ProB.}.
Further information can be gathered by monitoring the generation and the statistics on the system state-space (if not too large). The visualization of state-space projections (i.e., graphical views of the system state-space once reduced after making observable only some specific detail of the system) can be of great help in understanding and confirming the system behavior without resorting to the encoding of complex temporal logic formulas.
The analysis of more complex behavioral properties, however, may require the writing of more complex temporal logic formulas. This activity may require a deeper background and more advanced knowledge of the verification tools.
Figure~\ref{featurestable} shows a table of \emph{some} of the features provided by our three frameworks. In the table, the features that can be easily exploited without any particularly advanced formal methods and tool knowledge, in an almost ``push button'' way, are those appearing in black.
As can be seen from the table, an advantage of our ``formal methods diversity'' approach is the possibility of exploiting the power offered by the whole set of frameworks, like state- or event-based model checking, linear- or branching-time model checking, state-space projections, custom system observations, and various state-space minimizations or reductions.
In our experimentation, the following features have been the most used (more details can be found in Deliverable D2.5 \cite{refD2.5}):
\begin{itemize}
\vspace {-2pt}
\item static analysis in UMC/ProB/LNT
\vspace {-2pt}
\item explanations and animations in UMC/ProB
\vspace {-2pt}
\item weak-complete-divergence-sensitive-trace generation in UMC
\vspace {-2pt}
\item fast state-space generation in UMC
\vspace {-2pt}
\item divbranching minimizations in CADP
\end{itemize}
\vspace {-5pt}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8 \textwidth]{image-featurestable.pdf}
\vspace {-5pt}
\caption{Table of verification features} \label{featurestable}
\end{figure}
\vspace {-5pt}
When the same feature is available on multiple platforms, usability aspects also play an important role in selecting which one to exploit. For example, CADP does not allow observing the evolution of the values of the process variables during the animation of the behavior or the inspection of a counter-example; ProB and UMC have richer visualization systems, allowing, among other things, a trace to be observed in the form of a Message Sequence Chart; and SVL scripting in CADP makes it easier to structure and document the ongoing verification process.
The downside of this \emph{formal methods diversity} approach is that becoming expert in the use of all these frameworks is likely to require a steep learning curve, with the effort needed for a single framework multiplied by the number of frameworks, and with the risk of not becoming expert in any of them.
\subsection{Properties and scenarios}
The formal analysis of the system that has been performed during the project activity is surely not complete, but it is sufficient to become reasonably confident in the absence of implementation or logical errors. Further tests and verifications are still in progress, e.g., from the point of view of compositional verification in the context of the CADP/LNT framework.
Several kinds of architectures can be generated to observe the system properties or the properties of single components.
Figure~\ref{systemcomponents} shows the case of a ``complete'' architecture, where all the system components are composed together with the needed environment components. Several flavors of this architecture can be designed, depending on the properties we want to observe, on the limit to the complexity we want to set, and on the kind of behavior of the environment we want to consider.
Once the desired behavior of the environment components is established, they need to be instantiated into an executable scenario with the setting of a list of internal parameters fixing the parametric aspects of the specification.
The simplest architecture we have built is one in which the two RBCs remain ``silent'' (not sending any messages) and just receive connection/disconnection indications from the CSL layer. In this architecture, the Euroradio level is imagined to be ``nice'', i.e., it introduces at most small delays in the communications, does not lose or reorder messages, and does not autonomously abort the existing active communication channel. The set of UML state-machine diagrams describing the components of this architecture is shown in Appendix A.
In this case, the system is simply expected to set up a communication line, keep it alive by exchanging life-signs, and re-establish it in case of failures. Failures can still occur depending on the specific values of the parameters used to instantiate the scenario. In particular, the most important parameters affecting the system behavior in this scenario are:
\vspace {-2pt}
\begin{itemize}
\item The timeout (max\_connectTimer) representing the maximum delay that initiator CSL is allowed to wait before receiving a reply to a connection request (after which a new connection request can be retried).
\vspace {-2pt}
\item The timeout (max\_initTimer) representing the maximum delay that each SAI is allowed to wait for the successful initialization of a new communication line before aborting the creation process.
\vspace {-2pt}
\item The timeout (max\_sendTimer) triggering the periodic sending by the CSL of a new life-sign to keep the communication line alive.
\vspace {-2pt}
\item The timeout (max\_receiveTimer) representing the maximum delay a CSL is allowed to wait before receiving a life-sign or an RBC message from the other side (whose expiration causes the abort of the re-establishment of the communication line).
\end{itemize}
Other important system parameters, like
\begin{itemize}
\item The limit (N) on consecutive message losses (detected by the observation of sequence numbers) acceptable by the SAI components before aborting the safe connection line.
\vspace {-2pt}
\item The maximum traveling delay (K) acceptable for incoming messages, whose violation forces the message to be discarded.
\end{itemize}
do not play a relevant role in this scenario.
In this case, we can observe that if the connection, initialization, and receive timers are sufficiently large\footnote{Time units are measured as multiples of the basic system execution cycle.} (e.g., max\_connectTimer = max\_initTimer = 20, max\_receiveTimer = 15, max\_sendTimer = 5), the system successfully establishes an initial communication line without ever losing it.
If we instead reduce the max\_receiveTimer parameter to 8, communication failures and communications line restarts begin to appear
(and the system state-space grows from 19,788,895 to 74,713,472 states).
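The qualitative effect of shrinking max\_receiveTimer can be illustrated with a toy discrete-time model. The sketch below is purely illustrative (it abstracts away essentially all of the CSL and SAI logic and is not derived from the UML state machines), and the delay pattern used is arbitrary:

```python
def receive_timer_expires(max_send, max_receive, delays, horizon=100):
    """Toy model: the initiator emits a life-sign every `max_send` cycles;
    the i-th message is delayed by delays[i % len(delays)] cycles in
    transit.  The receive timer expires iff some gap between consecutive
    arrivals (or before the first arrival) exceeds `max_receive`."""
    arrivals = sorted(t + delays[i % len(delays)]
                      for i, t in enumerate(range(max_send, horizon, max_send)))
    last = 0
    for a in arrivals:
        if a - last > max_receive:
            return True
        last = a
    return False

# With the scenario values quoted above (send every 5, receive timeout 15)
# and small transit delays, no expiration is reachable:
print(receive_timer_expires(5, 15, delays=[0, 4, 1]))  # False
# Shrinking the receive timeout to 8 makes a failure reachable:
print(receive_timer_expires(5, 8, delays=[0, 4, 1]))   # True
```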
An extension of the previous architecture is where the RBCs are allowed to send slots of ``nmax'' messages. In this case, we can observe how messages, if they arrive, are delivered to the target RBC without reordering, duplications, and within a maximum delay. In this case, the system state-space grows to 65,386,049 and to 84,883,327 states when slots of 1 or 2 messages are sent by just the RBC on the initiator side.
Beyond the complete architectures described above, other kinds of architectures have been set up.
For example, an ``ICSL testing'' architecture, where the Initiator CSL component is stimulated with an abstract model of the SAI and RBC components, and an ``Initiator-side testing'' architecture, where the whole ER layer and CSL/SAI/RBC on the ``called side'' of the system are abstracted by environment components. In this latter case, we can observe how the messages received from the RBC environment, in the absence of disconnections, are always delivered to the EuroRadio level without losses, duplications, reordering, and within a limited delay.
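Such delivery properties can be phrased as simple checks over a single observed trace. The following plain-Python sketch is illustrative only (messages are modeled as hypothetical (sequence number, timestamp) pairs) and is unrelated to how the properties are actually verified in the three frameworks:

```python
def delivery_ok(sent, received, max_delay):
    """Check one observed trace for: no losses, no duplications, no
    reordering, and bounded delay.  Messages are (seq, timestamp) pairs."""
    if [seq for seq, _ in received] != [seq for seq, _ in sent]:
        return False  # a loss, duplication or reordering occurred
    sent_at = dict(sent)
    return all(t - sent_at[seq] <= max_delay for seq, t in received)

sent = [(1, 0), (2, 5), (3, 10)]
print(delivery_ok(sent, [(1, 2), (2, 7), (3, 14)], max_delay=4))  # True
print(delivery_ok(sent, [(1, 2), (3, 14)], max_delay=4))          # False: loss
print(delivery_ok(sent, [(2, 7), (1, 8), (3, 14)], max_delay=4))  # False: reordering
```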
Further examples of the verifications that have been done on the models can be found in \cite{refD2.5}.
\section{Related works}
The experimentation of formal methods diversity for the analysis of the same specification has already been described by one of the authors in \cite{refST7T7,refMARSten}. In that case, the focus was on a much simpler case study that did not have the complexity of a parametric signaling standard.
As a collateral activity of the project, the same fragment of UNISIG SUBSET 98 has been modelled and verified with UPPAAL by Basile et al. in \cite{refFMICSbasile}.
The current translation of UML state-machine diagrams into ProB was first experimented with in \cite{refD43}, but other approaches are possible; the UML-like UML-B notation \cite{refUMLB,refSYSMLB} has been proposed as a bridge between an Eclipse-based modeling framework (Rodin) and the Event-B modeling notation; the suggested approach seems, however, to be tailored to the verification and refinement of single state-machines rather than to the analysis of the overall behavior of a set of interacting state-machines.
Many other formal notations have been the target of translations from SysML designs. Another work very similar to ours in terms of its goal is the one described by Bouwman et al. \cite{refBasPoint}. In that case, too, the aim was the analysis of a signaling standard under development rather than the verification of a specific system; the target notation and framework is mCRL2.
\section{Conclusions}
Formal analysis of a still fluid, parametric, and environment-dependent requirements specification (i.e., requirements elicitation and validation) is a very different kind of activity from verifying that a given implementation is correct with respect to a specific, stable, and rigorous specification.
The possibility to exploit the analysis features offered by more than one verification framework can be of help in approaching this activity.
Moreover, the design of several different scenarios can be necessary to observe the system behavior under various assumptions. From this point of view, the possibility to \emph{automatically} generate the formal models to be analyzed from some executable, widely known, standard, tool-independent notation is a crucial point in making the analysis process accessible also to people with the relevant railway-signaling knowledge.
The formal methods diversity approach experienced in the project has shown how a lightweight use of formal verification frameworks can already, with a small effort, produce important feedback on the quality of the design. A deeper and more advanced exploitation of all the available features, however, remains a difficult and daunting task, especially when the system complexity and size grow to a level requiring ad hoc mitigation approaches.
The activity shown with our experimentation can be continued and improved in several directions. The executable UML subset used in the project can be greatly extended, still preserving its clear and rigorous semantics and its possibility of automatic translation into several formal notations.
Also, the set of target formal notations (currently limited to UMC, ProB, and LNT) can be extended, with a likely small effort, to further frameworks such as mCRL2, Spin, or nuXmv, to mention just a few. The detailed description of the project results, the initial executable UML designs, their formal encoding, and the source code of the translators are all publicly available \cite{ref4SECdeli,refZENmodels,refZENcode}.
\vspace{10pt}
{\small
\textbf{Acknowledgements}
This work has been partially funded by the 4SECURail project.
The 4SECURail project received funding from the Shift2Rail Joint Undertaking under the European Union's Horizon 2020 research and innovation programme under grant agreement No 881775 in the context of the open call S2R-OC-IP2-01-2019, part of the ``Annual Work Plan and Budget 2019", of the programme H2020-S2RJU-2019.
The content of this paper reflects only the authors' view and
the Shift2Rail Joint Undertaking is not responsible for any use that may be made of the included information.
We are grateful to the colleagues of the Work Stream 1 of project 4SECURail, and in particular to Alessandro Fantechi, Stefania Gnesi, Davide Basile, Alessio Ferrari, Maurice ter Beek, Andrea Piattino, Laura Masullo and Daniele Trentini for the comments and suggestions during the project.}
\pagebreak
\nocite{*}
\bibliographystyle{eptcs}
\section{Introduction}\label{sec:introduction}
\input{sections/1.introduction.tex}
\section{Problem Setup}\label{sec:problem}
\input{sections/2.problem.tex}
\section{Method}\label{sec:method}
\input{sections/3.methods.tex}
\section{Experiments}\label{sec:experiments}
\input{sections/4.experiments.tex}
\section{Conclusion}\label{sec:conclusion}
\input{sections/5.conclusion.tex}
\section*{Acknowledgements}
This work was partially supported by ONR grant 62909-19-1-2096 to DSW and RS.
\subsection{Problem Setting}
Based on the assumptions introduced above, this section describes (a) an algorithm for learning expected outcomes $Y$ under crude interventions on $X$, operationalized as $F = do(w)$, conditioned on pre-treatment covariates $Z$; and (b) a procedure for interpreting the elements of $X$ that play a mediating role in the fitted model.
\paragraph{Causal response estimation.}
We assume access to a set of features $\phi_i: \mathcal{X} \times \mathcal{Z} \rightarrow \mathbb{R}$. These features are \emph{candidate mediators}, moderated by covariates $Z$, which describe the outcome model for $Y$ as
\begin{align}
Y = \theta_0 + \sum_{i=1}^d \theta_i \phi_i(X, Z) + \epsilon, \label{eq:y}
\end{align}
where $\epsilon$ is an independent error term with $\mathds{E}[\epsilon] = 0$.
Candidates may come from domain experts (e.g., experimentally validated regulatory pathways) or a data-driven approach (e.g., latent factors learned by an autoencoder). They represent a macro-level summary that clarifies what the existing $\mathcal F$ is able to modify in $X$ that simultaneously contributes to $Y$. For example, if $X$ describes a spatially-distributed object, like neural activations or environmental sensors, features $\phi_i$ can correspond to smoothing windows with localized information. If $X$ is a text document, $\phi_i$ may represent aggregate interpretable interactions of relevant entities, topics, and other parts of speech. The linear assumption is substantive but not especially restrictive, given a sufficiently flexible library of basis functions $\Phi$, which, as mentioned above, can be trained directly via neural networks or some other representation learning method.\footnote{Note that, though each $\phi_i$ is a function of $X$ and $Z$, we occasionally simplify notation by suppressing the dependence, writing $\phi_i$ for $\phi_i(X,Z)$ and $\Phi = \{\phi_i\}_{i=1}^d$.}
Given Eq.~\eqref{eq:y}, it follows by the assumptions encoded in Fig.~\ref{fig:setup} and by linearity of expectation that
\begin{align}
\mathbb{E}[Y \mid do(w), z] = \theta_0 + \sum_{i=1}^d \theta_i \mathbb{E}[\phi_i(X, Z) \mid w, z]. \label{eq:do}
\end{align}
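In more detail, using $\mathbb{E}[\epsilon] = 0$ and the fact that, under the assumed graph, conditioning on the intervention $do(w)$ and on $z$ is equivalent to conditioning on $(w, z)$ for functions of $(X, Z)$:
\begin{align*}
\mathbb{E}[Y \mid do(w), z] &= \mathbb{E}\Big[\theta_0 + \sum_{i=1}^d \theta_i \phi_i(X, Z) + \epsilon \;\Big|\; do(w), z\Big] \\
&= \theta_0 + \sum_{i=1}^d \theta_i\, \mathbb{E}[\phi_i(X, Z) \mid do(w), z]
= \theta_0 + \sum_{i=1}^d \theta_i\, \mathbb{E}[\phi_i(X, Z) \mid w, z].
\end{align*}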
We therefore propose a two-stage procedure to estimate $\mathbb{E}[Y \mid do(w), z]$:
\vspace{-1ex}
\begin{enumerate}
\item Learn $g_i(w,z) \equiv \mathbb{E}[\phi_i(X, Z) \mid w, z]$ for all $i$ via any black-box regression algorithm, and let $\hat{\textbf{g}}$ denote the $d$-dimensional vector of resulting expectations.
\item Learn $\bm{\hat\theta} = \argmin_{\bm{\theta}} \mathbb{E}[(Y - \bm{\theta}^\top \hat{\textbf{g}})^2]$ via regularized regression \citep[e.g. Lasso,][]{tibshirani1996regression}, to provide sparsity on $\bm{\theta}$ where supported by the data.
\end{enumerate}
\vspace{-1ex}
The procedure is detailed in Alg.~\ref{alg:prediction}, where we consider the case in which labeled datasets are pooled together into a set with $n$ samples, and we learn a model for $p(x~|~w^\star, z)$ from unlabeled conditions with a single treatment level $w^\star$. This exploits the known structural relationship between $W$, $X$, $\Phi$ and $Y$. In particular, it represents the marginalization of $X$ directly in terms of $\ensuremath \mathbb{E}[\phi_i(X, Z)~|~w, z]$,\footnote{This can be even further simplified if we opt for product features of the shape $\phi_i(X, Z) \equiv \phi_{ix}(X)\phi_{iz}(Z)$, as in this case we have $\ensuremath \mathbb{E}[\phi_{ix}(X)\phi_{iz}(Z)~|~w, z] = \ensuremath \mathbb{E}[\phi_{ix}(X)~|~w, z]\phi_{iz}(z)$ \citep{kaddour2021}.} which avoids the density estimation problem of learning $p(x~|~w, z)$.
There is a relation between this idea and methods for estimating non-linear causal effects in additive-error instrumental variable models based on (potentially infinite) basis expansions \cite{singh:2019,muandet:2020}. However, given the potential high-dimensionality of $X$ and the desire for interpretability, we favor dictionaries that are either hand-constructed or the result of adaptive algorithms. Moreover, although we have the option of fitting $\bm{\theta}$ by regressing $Y$ directly on $\Phi$, we still favor the regression on $\hat{\textbf{g}}$, as $\phi_i(X, Z)$ is a random variable not observable at test time.
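As a concrete illustration, the two stages can be sketched in a few lines of numpy. This is a deliberately minimal stand-in for Alg.~\ref{alg:prediction}: a single linear model replaces the black-box stage-1 regressors, and plain least squares replaces the $L_1$-penalized stage-2 fit:

```python
import numpy as np

def two_stage_fit(W, Z, Phi, Y):
    """Stage 1: fit g_i(w, z) = E[phi_i | w, z] for every feature (a single
    linear model per feature stands in for arbitrary black-box regressors).
    Stage 2: regress Y on the fitted g-hat (plain least squares stands in
    for the Lasso)."""
    WZ = np.column_stack([np.ones(len(W)), W, Z])
    B1, *_ = np.linalg.lstsq(WZ, Phi, rcond=None)   # one column per phi_i
    ghat = WZ @ B1                                  # stage-1 expectations
    G = np.column_stack([np.ones(len(Y)), ghat])
    theta, *_ = np.linalg.lstsq(G, Y, rcond=None)
    return B1, theta

def predict_do(B1, theta, w, z):
    """E[Y | do(w), z] in a new regime: only (w, z) is needed at test time."""
    g = np.concatenate(([1.0], np.atleast_1d(w), np.atleast_1d(z))) @ B1
    return theta[0] + g @ theta[1:]
```

On synthetic data generated from Eq.~\eqref{eq:y} with linear features, \texttt{predict\_do} then recovers $\mathbb{E}[Y \mid do(w), z]$ in new regimes using only $(w, z)$.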
\begin{algorithm}[!t]
\caption{Causal Response Prediction}
\label{alg:prediction}
\begin{algorithmic}
\Require Historic interventions $\{w_i, \Phi(x_i,z_i), z_i, y_i\}_{i=1}^{n}$, new intervention training set $\{w^\star, \Phi(x_j,z_j), z_j\}_{j=1}^{n^\star_{train}}$,
new intervention test set $\{{w^{\star}}', z_{j}'\}_{j=1}^{n^\star_{test}}$ \\
\State \textbf{Historic Interventions}
\State Learn $g_k(w, z) = \ensuremath \mathbb{E}[\phi_k (X, Z)~|~w, z]$ \\
\Comment Stage 1, via any black-box model
\State Learn $f(w,z) = \ensuremath \mathbb{E}[Y~|~\textbf{g}(w,z)]$ \\
\Comment Stage 2, via an $L_1$-penalized model
\\
\State \textbf{New Intervention}
\State Learn on train split
\State $g^\star_k(w^\star, z) = \ensuremath \mathbb{E}[\phi_k (X, Z)~|~w^\star, z]$ \\
\Comment Stage 1, update for new intervention $w^\star$
\State Predict on test split
\State $\hat{y} = f(\textbf{g}^\star({w^\star}', z')) = \ensuremath \mathbb{E}[Y~|~do({w^{\star}}'), z']$\\
\Comment Stage 2, predict using pre-learned $f$\\
\Return $\hat{y}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[!ht]
\caption{Pragmatic Mediation Selection}
\label{alg:mediation}
\begin{algorithmic}
\Require Weights $\mathbf{\theta}$, training set $\{w_i, \Phi(x_i,z_i), z_i\}_{i=1}^{n}$, test set $\{w_i', \Phi'(x_i',z_i'), z_i'\}_{i=1}^{n'}$, one-sided paired difference test $c(\cdot)$, level $\alpha$, mediators $\mathcal{M} = \{\}$\\
\For {$\phi_{i} \in \Phi$}
\If {$\theta_i \neq 0$}
\State Learn $g_i^0(z) = \ensuremath \mathbb{E}[\phi_i(X, Z)~|~z]$ on train split
\State Learn $g_i^1(z, w) = \ensuremath \mathbb{E}[\phi_i(X, Z)~|~z, w]$ on train split
\State Obtain residual $\epsilon_i^0 = \phi_i' - g_i^0(z')$ on test split
\State Obtain residual $\epsilon_i^1 = \phi_i' - g_i^1(z', w')$ on test split
\State $p = c(|\epsilon_i^0|, |\epsilon_i^1|)$
\If {$p \leq \alpha$}
\State Add mediator $\mathcal{M} = \mathcal{M} \cup \{\phi_i\}$
\EndIf
\EndIf
\EndFor
\\
\Return $\mathcal{M}$
\end{algorithmic}
\end{algorithm}
\paragraph{Explainable pragmatic mediation.} Under the assumptions of our setup, we would like to provide practitioners with qualitative information on the estimated role of the candidate mediators. Informally, we say that $\phi_i(X, Z)$ is a \emph{causal pragmatic mediator} if and only if it covaries with $W$ and $Y$ simultaneously, with adjustments for $Z$ and the other candidate mediators depending on the scenario. More formally, causal pragmatic mediators satisfy two criteria:
\vspace{-1ex}
\begin{itemize}
\item[(i)] $\phi_i(X, Z) \dep W~|~Z$,
\item[(ii)] $\phi_i(X, Z) \dep Y~|~\{\Phi_{\backslash i}, Z\}$
\end{itemize}
\vspace{-1ex}
\noindent where $\Phi_{\backslash i} \equiv \Phi \backslash \phi_i(X, Z)$. We will henceforth refer to (i) and (ii) as $\mathcal M$-criteria.
Another way of interpreting this is by saying that $W$ has a ``nonzero conditional total effect'' on $\phi_i$ for some $Z = z$ (that is, conditional association without adjusting for $\Phi_{\backslash i})$, while $\phi_i$ has a ``direct effect'' on $Y$ (conditional association, also conditioning on $\Phi_{\backslash i}$).
This definition is entirely agnostic to any possible causal structure among the elements of $\Phi$, a structure which is itself indeterminate since $do(x)$ is not defined. Notice that the idea of combining a ``total'' effect ``into'' $\Phi$ with a ``direct'' effect ``out of'' $\Phi$ relates to settings where we may want to design new elements of $\mathcal F$ that ``short-circuit'' the mechanism, by directly targeting $\phi_i$ if this is at all possible and desirable in a particular domain.\footnote{For instance, ignoring $Z$ for simplicity: if there exists a ``Pearlian'' causal chain $X_1 \rightarrow X_2 \rightarrow X_3 \rightarrow Y$ in the system with no further edges, and we have a rich dictionary $\Phi$, (features of) neither $X_1$ nor $X_2$ alone would qualify as causal pragmatic mediators, while (features of) $X_3$ would, even if all interventions in $\mathcal F$ can only directly modify $X_1$ and $X_2$.}
Although the distinction is not crucial for prediction, causal mediators can provide valuable insights about what in $X$ characterizes the effect of $W$ on $Y$. For instance, if only a subset of regions of the brain respond to stimuli and predict some behavior, then novel interventions can be designed targeting just those regions with detectable causal impact.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{figures/decomposition.pdf}
\caption{Recursive partition of $\Phi$ by how elements do or do not shift conditional average treatment effects. See text for details.}
\vspace{-1ex}
\label{fig:decomposition}
\end{figure}
The leaf nodes of the tree depicted in Fig.~\ref{fig:decomposition} correspond to candidate mediators with different functional roles.
\vspace{-1ex}
\begin{enumerate}
\item $\textcolor{CCoral}{\tilde{\Phi}} \equiv \{\phi_i: \phi_i \independent Y~|~\Phi_{\backslash i}, Z\}$. These candidates will receive zero weight in the linear formula described by Eq.~\eqref{eq:y} (and, hence, also Eq.~\eqref{eq:do}). That is, for each $\phi_i \in \textcolor{CCoral}{\tilde{\Phi}}, \theta_i = 0$.
\item ${\textcolor{CGreen}{\hat{\Phi}}} \equiv \{\phi_i: \phi_i \not\in \textcolor{CCoral}{\tilde{\Phi}} \land \phi_i \independent W,Z\}$. In this subset, $\ensuremath \mathbb{E}[\phi_i~|~w, z]$ is constant for all $w$ and $z$. These terms will be absorbed into the intercept of the linear formula described by Eq.~\eqref{eq:do}. That is, $\mathbb{E}[Y \mid do(w), z] = \theta_0 + \sum_{\phi_i \in {\textcolor{CGreen}{\hat{\Phi}}}}\mathds{E}[\phi_i]~+~$``function of $w$ and~$z$''.
\item ${\textcolor{CYellow}{\overline{\Phi}}} \equiv \{\phi_i: \phi_i \not\in \{\textcolor{CCoral}{\tilde{\Phi}} \cup {\textcolor{CGreen}{\hat{\Phi}}}\} \land \phi_i \independent W~|~Z\}$. These candidates will receive nonzero weight in Eq.~\eqref{eq:y}, but only through the $Z \rightarrow \phi_i \rightarrow Y$ path. They are invariant in $W$ and therefore, just like $\textcolor{CCoral}{\tilde{\Phi}}$ and ${\textcolor{CGreen}{\hat{\Phi}}}$, do not contribute to conditional average treatment effects $\mathds{E}[Y~|~do(w), z] - \mathds{E}[Y~|~do(w'), z]$.
\item $\textcolor{CViolet}{\Phi^*} \equiv \{\phi_i: \phi_i \in \Phi \backslash \{\textcolor{CCoral}{\tilde{\Phi}} \cup {\textcolor{CGreen}{\hat{\Phi}}} \cup {\textcolor{CYellow}{\overline{\Phi}}}\}\}$. Only this latter subclass satisfies the $\mathcal{M}$-criteria, picking out causal mediators $\phi_i$ on the $W \rightarrow \phi_i \rightarrow Y$ path.
\end{enumerate}
This recursive partitioning of $\Phi$ immediately suggests a practical method for pragmatic mediation discovery. First, we perform our two-step estimation procedure. Then, for each $\phi_i$ such that $\theta_i \neq 0$, perform a conditional independence test against the null hypothesis $H_0: \phi_i \independent W~|~Z$.\footnote{In randomized trials, where $Z \independent W$ by design, this can be replaced by a marginal association test against $H_0: \phi_i \independent W$, for those $\phi_i$ which are non-trivial functions of $X$.} See Alg.~\ref{alg:mediation} for details.
There exists no uniformly valid conditional independence test for continuous conditioning variables \citep{Shah2018}. However, numerous nonparametric methods have been developed with good performance on real and synthetic datasets \citep{Heinze2018}. In our experiments, we use a simple nested regression procedure, in which we compare the absolute value of out-of-sample residuals for null and alternative models -- i.e., $g_{i}^{0}(z) = \ensuremath \mathbb{E}[\phi_i~|~z]$ and $g_{i}^{1}(z, w) = \ensuremath \mathbb{E}[\phi_i~|~z,w]$, respectively -- using a one-sided Wilcoxon rank-sum test.\footnote{Other tests could in principle be substituted here, e.g. the binomial test or $z$-test, depending on what assumptions one is willing to make about residual distributions. See \citep[Sect.~6]{Lei2018}.} If predictive accuracy significantly improves with the inclusion of $W$, then we reject $H_0$. Estimation and testing are performed on separate samples to ensure unbiased inference. The procedure can easily be modified to adjust for multiple testing.
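Using linear stand-ins for the regressors $g_i^0, g_i^1$ of Alg.~\ref{alg:mediation} and, as the footnote notes is permissible, a one-sided sign (binomial) test in place of the Wilcoxon test, the nested-regression check can be sketched as follows (everything below is an illustrative simplification):

```python
import math
import numpy as np

def _ols_predict(Xtr, ytr, Xte):
    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(len(Xtr)), Xtr])
    b, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.column_stack([np.ones(len(Xte)), Xte]) @ b

def mediation_p_value(phi_tr, z_tr, w_tr, phi_te, z_te, w_te):
    """Nested-regression test of H0: phi independent of W given Z.
    Out-of-sample absolute residuals of the null model (Z only) and the
    alternative model (Z and W) are compared with a one-sided sign test
    (normal approximation to the binomial; ties are discarded)."""
    e0 = np.abs(phi_te - _ols_predict(z_tr[:, None], phi_tr, z_te[:, None]))
    e1 = np.abs(phi_te - _ols_predict(np.column_stack([z_tr, w_tr]), phi_tr,
                                      np.column_stack([z_te, w_te])))
    d = (e0 - e1)[e0 != e1]
    k, m = int((d > 0).sum()), len(d)
    zscore = (k - m / 2) / math.sqrt(m / 4)
    return 0.5 * math.erfc(zscore / math.sqrt(2))  # approx. P(Binom(m,1/2) >= k)
```

On synthetic data where $\phi_i$ genuinely depends on $W$, the returned $p$-value collapses towards zero, while for a feature depending on $Z$ alone it stays large.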
\subsection{Setup}
We have two primary goals: (a) causal response prediction, and (b) identification of causal pragmatic mediators. We describe the overall setup for all domains below.
\paragraph{Prediction.}
We assume access to $m - 1$ mutually independent historic training regimes with corresponding labeled datasets $\mathcal D_{l_1}, \dots, \mathcal D_{l_{m - 1}}$, where each $\mathcal{D}_{l_k} = \{(W_i, X_i, Y_i, Z_i)^{l_k}\}_{i=1}^{|\mathcal D_{l_k}|}$. Our goal is to learn $\ensuremath \mathbb{E}[Y~|~do(w^\star), z]$ for a new regime $F = do(w^\star)$ (e.g., a prospective intervention). In this regime, we are given access to limited labeled training data $\mathcal{D}_{l_{w^\star}} = \{(W^\star_i, X_i, Y_i, Z_i)^{l_{w^\star}}\}_{i=1}^{|\mathcal{D}_{l_{w^\star}}|}$ and more unlabeled training data $\mathcal{D}_{u_{w^\star}} = \{(W^\star_i, X_i, Z_i)^{u_{w^\star}}\}_{i=1}^{|\mathcal{D}_{u_{w^\star}}|}$, where $|\mathcal{D}_{u_{w^\star}}| \gg |\mathcal{D}_{l_{w^\star}}|$. This captures settings where measurements for $Y$ are expensive, delayed, or simply unrecorded. All methods are evaluated on a test dataset $\mathcal{T}_{w^\star} = \{(W^\star_i, Y_i, Z_i)^{t_{w^\star}}\}_{i=1}^{|\mathcal{T}_{w^\star}|}$.
Baseline methods that estimate $\ensuremath \mathbb{E}[Y~|~do(w^\star), z]$ can only make use of the labeled dataset $\mathcal{D}_{l_{w^\star}}$, as all regimes are mutually independent. However, by exploiting structural information $\Phi(X,Z)$, we are able to leverage the invariant $p(y~|~x, z)$ distribution from prior regimes. That is, we estimate $\textbf{g}$ and $\bm{\theta}$ from $\mathcal D_{l_1}, \dots, \mathcal D_{l_{m - 1}}$ and predict effects in new regimes using only $Z$ and $W$, so our method effectively treats $\mathcal{D}_{l_{w^\star}} \cup \mathcal{D}_{u_{w^\star}}$ as a single test set.
We will compare our approach to multiple regression baselines that estimate $\mathbb{E}[Y~|~do(w^\star), z]$ as the proportion of labeled data for the new regime $\mathcal{D}_{l_{w^\star}}$ grows from $10\%$ to $100\%$. Specifically, we consider models from four function classes: lasso regression (linear), support vector regression (SVR), random forest (RF), and gradient boosting (GB). Default hyperparameters are used throughout; see Appendix for details. We also note that other methods that may seem similar in aim, such as co-training \citep{blum1998combining} and domain adaptation \citep{chen2011co}, would not be relevant baselines for comparison, as they differ significantly from our work in two ways. (1) There is a two-stage functional decoupling arising from Eq.~\eqref{eq:do}, alongside variable decoupling, that we aim to exploit by learning $g$ and then $f$; co-training involves no such functional decoupling. (2) We can only leverage the first stage of the decoupling in a new domain, and we are not aware of any domain adaptation method that accommodates this specific notion of adaptation.
As an additional check on our performance, we further consider 100 different settings of the target $Y$, by sampling 100 different parameters (i.e., weights $\bm{\theta}$) for its structural equations in all three tasks (for the image perturbation example we sample 1500 settings). By demonstrating consistent results across these trials, we illustrate that our method is robust to different configurations of the target variable.
\vspace{-1ex}
\paragraph{Explanation.} Our method is also able to find pragmatic mediators in the complex object $X$. By studying the high-level descriptions $\phi_i$ that (a) receive nonzero weight $\theta_i \neq 0$ in the sparse regression, and (b) reject $H_0: \phi_i \independent W|Z$ at some prespecified level $\alpha$, we can identify causal mediators of relevance. We report mediator discovery error rates for all experiments below. Significance levels for all tests were fixed at $\alpha = 0.01$, with $p$-values adjusted for multiple testing via \citet{Holm1979}'s method.
\subsection{Image Perturbation Simulation}
\label{sec:visual}
\textbf{Setup description.}
Our first example is simulated and visual, which we hope will provide some intuition for the structure of this problem. We start with five possible pixel patterns ($Z$) and perform interventions by adding bivariate normal noise with location $W$. These treatments are influenced with some probability by $Z$. The resulting post-perturbation image ($X$) is then summarized via four different convolution windows, $\Phi(X, Z) = \{\phi_1,\phi_2,\phi_3,\phi_4\}$, where each $\phi_i$ corresponds to a quadrant of the image, and the convolution weights are indexed by the pattern corresponding to the original image $Z$. Finally, the intensity of the pixels over the whole image leads to an outcome ($Y$), given by a linear combination of $\Phi$. The generative model used to produce the simulation is described in Fig.~\ref{tab:visual_ex}.
\begin{figure}
\begin{equation}
\begin{aligned}
t \sim&\; \text{Multinomial}(\bm{p})\\
Z =&\; \mbox{pattern}_t \\
W \sim&\; \text{Multinomial}(\Delta_t) \\
f_w =&\; \begin{cases}
\text{for } i=0 \text{ to } 1000: \\
\hspace*{2em}\gamma \sim&\; \hspace*{-10em} \mathcal{N}(W,\textbf{I})\\
\hspace*{2em} \text{if } (d_0,d_0) < \gamma < (d_n, d_n):\\
\hspace*{4em} f_w[\gamma] = f_w[\gamma] + \eta
\end{cases}\\
X =&\; Z+f_w+\mathcal{N}(0,0.5)\\
\Phi =&\; \text{Convolution}_{t}(X) \\
Y =&\; \boldsymbol{\theta}^\top \Phi + \mathcal{N}(0,0.1)
\end{aligned} \nonumber
\end{equation}
\vspace{-4ex}
\caption{Description of the generative model used in the experiment of Sect.~\ref{sec:visual}.
\vspace{-4ex}}
\label{tab:visual_ex}
\end{figure}
$\bm{p} = [0.2, 0.2, 0.2, 0.2, 0.2]$ defines the multinomial distribution from which we sample shape indicator $t$. $\Delta$ denotes a $5\times 4$ matrix, where each row is a simplex containing different probabilities for selecting $W$ values. $d_0=0$ and $d_n=10$ define the dimensions of all images. The condition involving them and $\gamma$ checks whether the sampled location falls within the image size. $\eta$ is a perturbation parameter (fixed at 0.1 in our experiment) that is added to the sampled location if it passes the check above. This example is designed for demonstrative purposes, and $\phi_1$ is constructed to be the pragmatic mediator we intend to find, as it both varies with $W$ and has a nonzero coefficient ($\theta_1 = 0.7$) in the structural equation for $Y$. See full details in the Appendix. Fig.~\ref{fig:imgPert_simulation_setup} shows an example set of sampled images.
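The generative process in Fig.~\ref{tab:visual_ex} can be sketched in a few lines of NumPy. This is a simplified stand-in, not the exact simulation code: the five pixel patterns are placeholders and $\Delta$ is taken to be uniform here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, eta = 10, 0.1
# five toy 10x10 patterns (simplified stand-ins for cross, square, etc.)
patterns = np.stack([np.eye(d) * (t + 1) / 5 for t in range(5)])
Delta = np.full((5, 4), 0.25)   # P(W | pattern); uniform here for simplicity

def sample_example():
    t = rng.integers(0, 5)                    # pattern indicator
    Z = patterns[t]                           # clean image
    W = rng.choice(4, p=Delta[t])             # perturbation location indicator
    f_w = np.zeros((d, d))
    for _ in range(1000):                     # accumulate perturbation mass
        gx, gy = rng.normal(loc=W, scale=1.0, size=2)
        if 0 <= gx < d and 0 <= gy < d:       # sample falls inside the image
            f_w[int(gx), int(gy)] += eta
    X = Z + f_w + rng.normal(scale=0.5, size=(d, d))
    return t, Z, W, X

t, Z, W, X = sample_example()
print(X.shape)  # (10, 10)
```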
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{figures/mse_results.pdf}
\vspace{-4ex}
\caption{The mean squared error (MSE) between the estimated causal effect and the true causal effect
as a function of the amount of labeled data that is available in the new regime $do(w^\star)$.
\vspace{-2ex}}
\label{fig:MSE_results}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{figures/found_phis2.pdf}
\caption{The $\phi_i$ selected by the mediation discovery method. Each set is identified as follows: \textcolor{CRose}{red} by a conditional independence test, \textcolor{CBlue}{blue} by sparse regression, and \textcolor{CViolet}{purple} those satisfying both tests (\textbf{black} are the true mediators). Note that for the high-dimensional genomics dataset, the \textcolor{CViolet}{$\phi_i$} are identified by testing only those \textcolor{CBlue}{$\phi_i$} selected by sparse regression, to increase testing power.
\vspace{-2ex}}
\label{fig:found_phis}
\end{figure*}
\textbf{Results.}
The results of all methods on a new intervention $w^\star$ are presented in Fig.~\ref{fig:MSE_results}.
Additionally, Fig.~\ref{fig:found_phis} shows the process of mediator explanation for our method. The true mediator in this simulation, $\phi_1$, is indicated in black.
Conditional independence tests identify three windows -- $\textcolor{CRose}{\Phi} = \{\phi_1, \phi_2, \phi_3\}$, indicated in \textcolor{CRose}{red} -- that vary with $W$ after conditioning on $Z$. We fit a lasso regression to estimate causal effects (see Eq.~\ref{eq:do}), selecting windows $\textcolor{CBlue}{\Phi} = \{\phi_1, \phi_4\}$, indicated in \textcolor{CBlue}{blue}. Finally, the intersection of these two sets, $\textcolor{CViolet}{\Phi^*} = \{\phi_1\}$, is our causal mediator, marked in \textcolor{CViolet}{purple}. Fig.~\ref{fig:MSE_results_diffY} presents performance over 1500 different samples of the parameters in the structural equation of prediction target $Y$. It shows trends similar to the single-$Y$ setting: our method outperforms the baselines until 30--40\% of labels are available. The mean squared error (MSE) in this simplified example is far smaller, so more samples were required for the standard errors to scale accordingly. We further note that for the single-$Y$ setting, we picked $\bm{\theta} = \{0.7, 0, 0, -0.5\}$ to clearly demonstrate the idea of pragmatic mediation. For Fig.~\ref{fig:MSE_results_diffY}, we instead sampled $\bm{\theta}$ from a distribution (see Appendix for details), which appeared to help the performance of some baselines while adversely affecting others.
\subsection{Humorous Edits to News Headlines}
\textbf{Setup description.} As a second example, we consider a dataset from a computational humor experiment. Participants were given news headlines and asked to make single entity changes such that the resulting headline would be humorous \citep{hossain-etal-2019-president}. This work was further extended into a SemEval2020 task, and full datasets were made publicly available.\footnote{See \url{https://www.cs.rochester.edu/u/nhossain/humicroedit.html}.}
For our evaluation, we combine all listed datasets and define the following: original headline ($Z$), new word introduced by edit ($W$), revised headline ($X$). Following the analysis of \citet{hossain-etal-2019-president}, we carried out the following pre-processing procedures: 1. We generated clusters of edit words (granular $W$) by performing a $k$-means clustering on GloVe vector representations \citep{pennington2014glove} of each edit word, with $k=20$. The aim was to reduce the space of possible interventions to topics rather than individual words, for the purpose of defining data subsections as historic and new intervention splits. We used the resulting cluster label to create these splits. 2. We created 30 high-level descriptions $\phi$ for this setting (full description in the Appendix). One can think of $\Phi$ in this scenario as hypotheses to explain the funniness of an edited headline ($X$). 3. Computational humor is known to be a difficult domain for direct prediction tasks. For the illustrative purpose of this paper, we generated funniness scores for the outcome variable $Y$ as a linear combination of $\Phi$ with additive noise. A random third of the coefficients are assigned a value of 0, with the rest sampled from a uniform distribution $\mathcal{U}(-1,1)$.
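Steps 1 and 3 of the pre-processing above can be sketched as follows. Random vectors stand in for the GloVe embeddings of the edit words, and $\Phi$ is synthetic; the cluster hold-out and the simulated outcome follow the description in the text.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_words, dim, k, n_phi = 500, 50, 20, 30
edit_vecs = rng.normal(size=(n_words, dim))   # stand-in for GloVe embeddings

# Step 1: cluster edit words into k topics
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(edit_vecs)
held_out = np.bincount(labels).argmax()       # a larger cluster -> unseen w'
train_idx = np.where(labels != held_out)[0]
test_idx = np.where(labels == held_out)[0]

# Step 3: outcome linear in Phi, with a random third of coefficients zeroed
theta = rng.uniform(-1, 1, size=n_phi)
theta[rng.choice(n_phi, size=n_phi // 3, replace=False)] = 0.0
Phi = rng.normal(size=(n_words, n_phi))       # stand-in for the 30 descriptions
Y = Phi @ theta + rng.normal(scale=0.5, size=n_words)
```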
\textbf{Results.}
The results of our estimation method of the outcome $Y$ for a random new intervention $w^\star$ are presented in Fig.~\ref{fig:MSE_results}. As can be seen, we achieve an MSE of $5.33$, well below alternative estimation methods of $\ensuremath \mathbb{E}[Y~|~do(w^\star),z]$. Furthermore, our method correctly identified the mediator $\phi_2$ in this setting (see Fig.~\ref{fig:found_phis}). Fig.~\ref{fig:MSE_results_diffY} provides another angle on the quality of predictions with our method by examining results over 100 trials with different configurations of $\bm{\theta}$. We can clearly see that our method still outperforms the baseline alternatives, and shows little variation in performance across parameter values, as reflected in the small standard error bars.
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{figures/mse_results_diffY.pdf}
\vspace{-4ex}
\caption{The mean squared error (MSE) between the estimated causal effect and the true causal effect
as a function of the amount of labeled data that is available in the new regime $do(w^\star)$. Means and standard errors are computed over different configurations of $\bm{\theta}$, the parameters in the structural equations giving rise to $Y$ (1500 configurations for the image perturbation experiment, 100 for the other two).
\vspace{-3ex}}
\label{fig:MSE_results_diffY}
\end{figure*}
\subsection{Gene Knockouts}
\textbf{Setup description.} As a final experiment, we consider semi-simulated gene knockouts based on data from the Dialogue for Reverse Engineering Assessments and Methods (DREAM) challenge \citep{Marbach2012}. The \emph{E. coli} transcriptome published as part of the DREAM5 challenge comprises a $805 \times 4511$ gene expression matrix, with 334 candidate transcription factors.\footnote{See \url{http://dreamchallenges.org/project/dream-5-network-inference-challenge/}.} We use GENIE3 \citep{Huynh-Thu2010}, a leading gene regulatory network inference algorithm based on random forests, to fit the 4177 structural equations that govern this system. We treat the resulting model as our ground truth SCM.
We simulate $n = 10^4$ samples of baseline expression data for the transcription factors from a multivariate Gaussian distribution with parameters estimated via maximum likelihood. These values are then propagated by GENIE3 to downstream variables, resulting in a complete set of baseline expression data ($Z$). We simulate 10 gene knockout experiments, summarized by the out-degree of the corresponding transcription factor ($W$). Post-intervention expression is once again simulated by GENIE3 ($X$). We treat each subnetwork of at least 10 genes as a pathway, and summarize its expression by taking the first kernel principal component \citep{kpca} of the corresponding submatrix, i.e. the kernel eigengene. The difference between post- and pre-intervention eigengene expression for all 168 modules that meet this dimensionality criterion constitutes our high-level summary ($\Phi$). Modules are subsequently ranked by their Spearman correlation with $W$. The top and bottom 25 are assigned nonzero weight in a linear simulation of outcomes $Y$, with standard normal noise. More details can be found in the Appendix.
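A minimal sketch of the kernel-eigengene summary for a single module, using scikit-learn's \texttt{KernelPCA}: the first kernel principal component is fit on (synthetic) baseline expression, both pre- and post-intervention data are projected, and the difference serves as $\phi_j$. Fitting on the full baseline matrix with the default RBF bandwidth is a simplification of the procedure described above.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
n, p = 200, 30                      # samples x genes in one module
Z_mod = rng.normal(size=(n, p))     # pre-intervention expression (synthetic)
X_mod = Z_mod + 0.5 + rng.normal(scale=0.3, size=(n, p))  # post-intervention

# kernel eigengene: first kernel PC fit on baseline expression
kpca = KernelPCA(n_components=1, kernel="rbf").fit(Z_mod)
pre = kpca.transform(Z_mod).ravel()
post = kpca.transform(X_mod).ravel()
phi_j = post - pre                  # one high-level summary value per sample
print(phi_j.shape)  # (200,)
```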
\textbf{Results.} Results for a random new gene knockout are presented in Fig.~\ref{fig:MSE_results}. The sparsity of this problem poses a particular challenge for baseline regression methods, which could potentially be mitigated with further tuning. In addition to achieving low MSE on the test set, our method additionally recovers 92\% of all true mediators, with an overall accuracy rate of 85\%. Most of the errors in this example appear to derive from false positives in the lasso regression, which could likely be improved with more cautious tuning of the $\lambda$ parameter that controls model sparsity. As can be seen in Fig.~\ref{fig:MSE_results_diffY}, the same trends remain in place when repeating the experiment over 100 different configurations of $\bm{\theta}$.
\section{Proofs}
\section{Additional Experimental Details}
We provide additional details on the conducted experiments below.
\subsection{Method, evaluation task and baselines}
We tested our estimation method in three different setups: an image-based simulation, an experimental text dataset, and a semi-simulated genomics dataset. We considered the same evaluation task for all setups:
\begin{enumerate}
\item Using a training set with various ``seen'' $W$ values, we train a model for $\Phi$ prediction, and use $\ensuremath \mathbb{E}[\Phi~|~W,Z]$ to fit a lasso regression to predict $Y$, giving rise to $\ensuremath \mathbb{E}[Y~|~\hat{\Phi}]$ predictions.
\item Next, we use a train split from a test set, corresponding to an ``unseen'' intervention $w'$, on which we relearn $\ensuremath \mathbb{E}[\Phi~|~w',Z]$.
\item Finally, we evaluate $Y$ predictions on a held-out test split for the unseen intervention $w'$, using our relearned $\ensuremath \mathbb{E}[\Phi~|~w',Z]$ model and our previously trained $\ensuremath \mathbb{E}[Y~|~\hat{\Phi}]$ lasso regression. Thus, for a test split of the test set, we predict $Y$ for previously unseen $(w',z)$ pairs, i.e. $\ensuremath \mathbb{E}[Y~|~do(w'), Z']$.
\end{enumerate}\label{item:experiment_desc}
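The three steps above can be sketched with linear stand-ins for both stages. The paper's first-stage model is an MLP; here a linear regression plays the role of $g$, and all data are synthetic, so this illustrates the pipeline rather than reproducing any experiment.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))              # how Z drives Phi
B = rng.normal(size=(1, 4))              # how W drives Phi
theta = np.array([0.7, 0.0, 0.0, -0.5])  # structural weights for Y

def simulate(w, n=500):
    Z = rng.normal(size=(n, 3))
    W = np.full((n, 1), float(w))
    Phi = Z @ A + W @ B
    Y = Phi @ theta + rng.normal(scale=0.1, size=n)
    return np.hstack([W, Z]), Phi, Y

# Step 1: on "seen" data, learn g = E[Phi|W,Z] and f = E[Y|Phi-hat]
WZ, Phi, Y = simulate(w=1.0)
g = LinearRegression().fit(WZ, Phi)
f = Lasso(alpha=0.01).fit(g.predict(WZ), Y)

# Step 2: relearn g on a train split from the unseen intervention w'
WZ_new, Phi_new, Y_new = simulate(w=5.0)   # Y_new is never used for training
g_new = LinearRegression().fit(WZ_new, Phi_new)

# Step 3: predict Y under do(w') by composing the relearned g with the fixed f
Y_pred = f.predict(g_new.predict(WZ_new))
mse = float(np.mean((Y_pred - Y_new) ** 2))
print(round(mse, 3))
```

Note that no $Y$ labels from the new regime are used: only the $\Phi$ predictor is refit, while the $\ensuremath \mathbb{E}[Y~|~\hat{\Phi}]$ mapping transfers unchanged.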
For further clarity, we provide a visual description of the datasets used in the different stages of the method above in Figure \ref{fig:Dataset_desc}. We compared estimation accuracy, via mean squared error, as $Y$ labels become available in the unseen $w'$ regime (10--100\% of labels). We record our loss against four baselines estimating the same quantity: 1. a linear model, namely a lasso regression with cross-validation to pick the coefficient $\lambda$ on the regularization term in the range $[0.05,1]$; 2. an SVM predictor with default parameters \citep{scikit-learn}; 3. a random forest regression with default parameters \citep{scikit-learn}, aside from requiring a minimum of 5 samples per split; and 4. gradient boosting with default parameters \citep{scikit-learn}, aside from the maximum depth, which was set to 5. We provide code to reproduce our results alongside this document. We also tested a multi-layer perceptron baseline, but found that it performed much worse than the alternatives without dedicated tuning, and subsequently did not include it in the results.
\begin{figure}[!hbpt]
\centering
\includegraphics[scale=0.4]{figures/Dataset_desc.png}
\caption{Description of dataset splits used in different parts of our experiment, corresponding to items 1-3 in \ref{item:experiment_desc}.}
\label{fig:Dataset_desc}
\end{figure}
For the prediction task we include two settings: one with a single configuration of the target $Y$, and one where we present average performance over 100 different settings of the parameters $\bm{\theta}$ in the structural equation giving rise to $Y$. Since the test set on which we report results is rather small, in the first setting we average the baselines' results over 10 different shuffles of the dataset on which we train and report results. We do this for the baselines and not for our own method because the baselines have access to 10 different proportions of the target $Y$, which involve small numbers of samples and performance that varies widely with the ordering of the dataset. Our method makes no use of labels, and is thus not vulnerable to this variability. When we conduct the experiment over 100 different $Y$ configurations, however, the abundance of examples already accounts for this ordering-induced variability; we therefore report results for a single data ordering, with means and standard errors computed over the 100 different settings of the $Y$ parameters.
\subsection{Dataset construction and models' training}
\subsubsection{Image Perturbation}
The image perturbation is a simulated dataset that was put together to demonstrate the key ideas of the work. We create a dataset of 10,000 examples, images of size $10\times 10$, made up in equal proportions of 5 pixel patterns: cross, square, crossing diagonal, pyramid and diamond. Patterns were allocated by sampling one of 5 pattern indicators from a multinomial with equal probability for each shape. Next, the pattern indicator $Z$ also served to select one of 5 sets of probabilities that seeded the multinomial from which a perturbation pattern $W$ was picked.
The perturbation was constructed via a procedure that used the indicator $W$ as the location of a multivariate normal distribution with a two-dimensional identity covariance matrix, $\mathbf{I}_2$. For each example in the dataset, we sampled 1000 tuples $(x,y)$ from this distribution and checked whether they fell within the image (i.e. $\geq (0,0)$ and $<(10,10)$). Each time such a tuple fell within the borders of the image, a perturbation of size 0.1 was added at location $(x,y)$ in the image.
This perturbation regime was added as a mask to the original image reflected by $Z$, to create the post-perturbation $X$. For the construction of the $\phi$s, we created five different 1-d convolution transformations with randomly initialized weights, indexed by $Z$ and applied to $X$. This yields 4 resulting $\phi$s for each image, one for each of the 4 quadrants. $\phi_1$ most clearly varies with $W$, with a potentially small effect on $\phi_2$ and $\phi_3$, while $\phi_4$ should see close to no effect in response to $W$, based on the perturbation locations. Finally, $Y$ was constructed as a linear combination of the $\phi$s with the weights $[0.7, 0, 0, -0.5]$, plus Gaussian noise $\mathcal{N}(0, 0.1)$.
Finally, for the test set with an unseen perturbation pattern $w'$, 2,000 examples featured in the training set of size 10,000 were used with a different perturbation location, $w'=5$; previous perturbation locations ranged from 0 to 3. For the specific structural equations corresponding to this description, see Figure 5 in the manuscript.
For the $g$ model, estimating $\ensuremath \mathbb{E}[\Phi~|~w',Z]$, we train a multi-task MLP with 3 hidden layers, 512 hidden dimensions, and the ReLU activation function. We use 100 epochs and 50 epochs to train the first-stage model $g$ for the seen $w$ values and for the unseen $w'$ train split, respectively. We use the Adam optimizer, with learning rates of 0.002 and 0.001 for each stage, respectively. We use a train batch size of 400 and a test batch size of 100. We set the seed to 42. We also include the code to reproduce the results; see the code files for any additional hyperparameter settings.
Finally, for the robustness to settings of $\bm{\theta}$ experiment (Figure 8), we had to sample the weights from a distribution so that the process could be repeated 1500 times. We chose a uniform distribution $\theta_i \sim \mathcal{U}(-0.3, 0.3)$, and added noise to $Y$ from $\mathcal{N}(0, 0.01)$. These choices were made so that the statistics of $Y$ are similar to those in the single-$Y$ experiment. See the generation code in $\text{Python\_Img\_Humor/data\_generation\_notebooks/ImgPertSim.ipynb}$.
\subsubsection{Humor Micro Edits}
The construction of the humor micro edits dataset closely followed the analysis and description in \cite{hossain-etal-2019-president}. The original datasets provided there already contained original headlines (which we used as $Z$ for our purposes), edit words ($W$ for us) and humorous post-edit headlines ($X$)\footnote{Access to the original dataset at \url{https://www.cs.rochester.edu/u/nhossain/humicroedit.html}.}. Following the analysis in the paper, we chose to represent each of these sentences and words using their pre-trained 6-billion-token GloVe word-embedding vector representations, trained originally on 2014 English Wikipedia and Gigaword 5\footnote{Available for download at \url{https://nlp.stanford.edu/projects/glove/}.} \citep{pennington2014glove}. We only included examples in which all words were correctly identified in the pre-trained word embedding, following a standard cleaning procedure (see code for exact details).
$Z$, $W$ and $X$ were thus based on vector representations of the original dataset. There are three additional steps we took to compose the final dataset. First, we clustered the edit words using $k$-means clustering with $k=20$ (implemented via Scikit-learn), following the same procedure carried out in \cite{hossain-etal-2019-president}. We used the labels of these clusters to create the training and test split, such that the test set included edit words from one held-out cluster functioning as an unseen intervention $w'$. The unseen intervention was chosen to be one of the 5 largest clusters, to ensure that enough training examples for estimation exist. The cluster randomly chosen for the results shown in the paper is cluster 11.
Next, we constructed high-level descriptions of the intervention and its implications on $X$ as $\Phi = \{\phi_i\}_{i=1}^{30}$, where each one was inspired by analysis and hypotheses from \cite{hossain-etal-2019-president}:
\begin{enumerate}
\item $\phi_1$: Length of resulting edited sentence \textit{(does not vary with $w$)}
\item $\phi_2$: Mean cosine distance between GloVe vector of edit word and the rest of words in sentence \textit{(varies with $w$)}
\item $\phi_3$: Location index of replaced word \textit{(does not vary with $w$)}
\item $\phi_4$: Sentiment polarity of edit word, using the pre-trained sentiment processor from \citep{qi2020stanza}\footnote{Full usage details available at \url{https://stanfordnlp.github.io/stanza/sentiment.html}.} \textit{(varies with $w$)}
\item $\phi_5$: Sentiment polarity of resulting sentence, using the pre-trained sentiment processor from \citep{qi2020stanza} \textit{(does not vary with $w$)}
\item $\phi_6$: Cosine distance between GloVe vector of edited word and GloVe vector of original word \textit{(varies with $w$)}
\item $\phi_7$-$\phi_{10}$: Cosine distance of GloVe vector of edit word from neighboring words (2 preceding, 2 succeeding) \textit{(does not vary with $w$)}
\item $\phi_{11}$-$\phi_{30}$: Distance of mean GloVe embedding of final sentence from clusters' centroids \textit{(does not vary with $w$)}
\end{enumerate}
The set of $\phi$s, which all correspond to different data types, were scaled to zero mean and unit variance to make them more comparable. Finally, given $\Phi$ as defined above, we constructed an outcome variable $Y$ as a linear combination of $\Phi$, with added noise sampled from $\mathcal{N}(0, 0.5)$. We sampled a weight $\theta_i \sim \mathcal{U}(-1,1)$ for each $\phi_i$, while keeping a random third of the weights at 0. Additionally, we ensured that at least one of the $\phi$s varying with $W$ is also zeroed out, so that all possible cases are present.
For the $g$ model, estimating $\ensuremath \mathbb{E}[\Phi~|~w',Z]$, we train a multi-task MLP with 3 hidden layers, 512 hidden dimensions, and the ReLU activation function. We use 700 epochs and 100 epochs to train the first-stage model $g$ for the seen $w$ values and for the unseen $w'$ train split, respectively. We use the Adam optimizer, with learning rates of 0.002 and 0.001 for each stage, respectively. We use a train batch size of 400 and a test batch size of 100. We set the seed to 42. We also include the code to reproduce the results; see the code files for any additional hyperparameter settings.
\subsubsection{Gene Knockouts}
Our GENIE3 model follows the instructions of \citet{Huynh-Thu2010}, who scale all genes prior to analysis and fit a series of random forest regressions predicting the expression of each "downstream" gene as a function of the 334 candidate transcription factors (TFs). Each forest contains 1000 trees, with $mtry = \sqrt{334}$. The adjacency matrix is computed using the impurity importance measure originally proposed by \citet{Breiman2001}.
We simulate TF data from a multivariate Gaussian distribution with parameters estimated via maximum likelihood. This matrix is then propagated through our GENIE3 model to simulate expression values for downstream genes, with random Gaussian noise $\mathcal{N}(0, \sigma^2)$, where $\sigma$ is the RMSE of the corresponding random forest on out-of-bag data. This data -- TFs and downstream genes -- together comprise the matrix $Z$ of simulated baseline \emph{E. Coli} gene expression.
We sort TFs by out-degree and simulate a knockout experiment on the top ten by replacing their values with a scalar 1 unit less than the observed minimum for each (how much less is irrelevant, as random forests are invariant to monotone transformations). We record the out-degree of these TFs as $W$ and the resulting expression matrix as $X$.
To compute $\Phi$, we filter out all TFs with out-degree less than 100 and treat each of the remaining 168 TFs as the hub of a module. For downstream genes, module membership is determined by whether the given TF was assigned importance of at least 10 in the GENIE3 adjacency matrix. For each module, we compute the first kernel principal component on a subsample of $n = 1000$ using a radial basis function with default bandwidth given by the median Euclidean distance. These weights are then used to project the remaining data $Z$ and $X$ into the latent space. We define $\Phi$ as the difference between pre- and post-intervention expression values for the kernel eigengene. We proceed to estimate a series of $\ensuremath \mathbb{E}[\phi_j~|~Z, W]$ regressions on a training set comprising 8 of 10 $w$ values using random forests with 500 trees and $mtry = p/3$, where $p$ is the number of genes in a given module.
Each $\phi_j$ is sorted by its association with $W$ using Spearman correlation. $Y$ is then simulated as a linear function of the top and bottom 25 $\phi$s, with nonzero weights drawn from $\mathcal{N}(\pm 4, 1)$, where $\pm 4$ denotes that the amplitude is multiplied by $-1$ with probability 0.5. A lasso regression $\ensuremath \mathbb{E}[Y~|~\hat{\Phi}]$ is fit to the aforementioned training set, with $L_1$ penalty $\lambda$ selected via 10-fold cross-validation. Since, by construction, $Z \independent W$ in this experiment, the conditional independence tests can be replaced by marginal independence tests. We use the Spearman correlation to measure the association between $W$ and each $\phi_j$ on the training set.
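The ranking and outcome-simulation step can be sketched as follows, with a random matrix standing in for the module summaries $\Phi$; the $\mathcal{N}(\pm 4, 1)$ weights and standard normal noise follow the description above, while the data themselves are synthetic.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_exp, n_modules = 10, 168
W = rng.permutation(n_exp).astype(float)    # out-degrees of knocked-out TFs
Phi = rng.normal(size=(n_exp, n_modules))   # eigengene differences (synthetic)
Phi[:, :5] += np.outer(W, np.ones(5))       # a few modules track W

# rank modules by Spearman correlation with W
rho = np.array([spearmanr(W, Phi[:, j])[0] for j in range(n_modules)])
order = np.argsort(rho)
chosen = np.concatenate([order[:25], order[-25:]])  # bottom and top 25

# nonzero weights from N(+/-4, 1): magnitude ~ N(4, 1), sign flipped w.p. 0.5
theta = np.zeros(n_modules)
signs = rng.choice([-1.0, 1.0], size=chosen.size)
theta[chosen] = rng.normal(loc=4.0, scale=1.0, size=chosen.size) * signs
Y = Phi @ theta + rng.normal(size=n_exp)    # standard normal noise
```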
\section{Introduction}\label{sec:intro}
Simultaneous interpretation (SI) is the act of translating speech in real-time with minimal delay, and is crucial in facilitating international commerce, government meetings, or judicial settings involving non-native language speakers \citep{bendazzoli05epic,hewitt1998court}.
However, SI is a cognitively demanding task that requires both active listening to the speaker and careful monitoring of the interpreter's own output.
Even accomplished interpreters with years of training can struggle with unfamiliar concepts, fast-paced speakers, or memory constraints \citep{lambert1994bridging,liu2004working}.
Human short-term memory is a particular bottleneck for the simultaneous interpreter, who must consistently recall and translate specific terminology uttered by the speaker \citep{lederer1978simultaneous,daro1994verbal}.
Despite psychological findings that rare words have long access times \citep{balota1985locus,jescheniak1994word,griffin1998constraint}, listeners expect interpreters to quickly understand the source words and generate accurate translations.
Therefore, professional simultaneous interpreters often work in pairs \citep{millan2012routledge}; while one interpreter performs, the other notes certain challenging items, such as dates, lists, names, or numbers \citep{jones02conferenceinterpreting}.
\begin{figure*}
\centering
\includegraphics{cai-naacl.pdf}
\caption{
The simultaneous interpretation process, which could be augmented by our proposed terminology tagger embedded in a computer-assisted interpreting interface on the interpreter's computer.
In this system, automatic speech recognition transcribes the source speech, from which features are extracted, input into the tagger, and term predictions are displayed on the interface in real-time.
Finally, machine translations of the terms can be suggested.}
\label{fig:cai}
\end{figure*}
Computers are ideally suited to the task of recalling items given their ability to store large amounts of information, which can be accessed almost instantaneously.
As a result, there has been recent interest in developing computer-assisted interpretation (CAI; \citet{interpretershelp,fantinuoli2016interpretbank,fantinuoli2017speech}) tools that have the ability to display glossary terms mentioned by a speaker, such as names, numbers, and entities, to an interpreter in a real-time setting.
Such systems have the potential to reduce cognitive load on interpreters by allowing them to concentrate on fluent and accurate production of the target message.
These tools rely on automatic speech recognition (ASR) to transcribe the source speech, and display terms occurring in a prepared glossary.
While displaying all terminology in a glossary achieves \textit{high recall} of terms, it suffers from \textit{low precision}.
This could potentially have the unwanted effect of cognitively overwhelming the interpreter with too many term suggestions \cite{stewart2018automatic}.
Thus, an important desideratum of this technology is to only provide terminology assistance when the interpreter requires it.
For instance, an NLP tool that learns to predict only terms an interpreter is likely to miss could be integrated into a CAI system, as suggested in Fig.~\ref{fig:cai}.
In this paper, we introduce the task of predicting the terminology that simultaneous interpreters are likely to leave untranslated using \emph{only} information about the source speech and text.
We approach the task by implementing a supervised, sliding window, SVM-based tagger imbued with delexicalized features designed to capture whether words are likely to be missed by an interpreter.
We additionally contribute new manual annotations for untranslated terminology on a seven-talk subset of an existing interpreted TED talk corpus \citep{shimizu2014collection}.
In experiments on the newly-annotated data, we find that intelligent term prediction can increase average precision over the heuristic baseline by up to 30\%.
\section{Untranslated Terminology in SI}\label{sec:taskintro}
Before we describe our supervised model to predict untranslated terminology in SI, we first define the task and describe how to create annotated data for model training.
\subsection{Defining Untranslated Terminology}\label{sec:difficult_terminology}
Formally, we define untranslated terminology with respect to a source sentence $S$, a sentence $R$ created by a translator, and a sentence $I$ created by an interpreter. Specifically, we define any consecutive sequence of words $s_{i:j}$, where $0 \leq i < j \leq N$ ($i$ inclusive, $j$ exclusive), in source sentence $S_{0:N}$ that satisfies the following criteria to be an untranslated term:
\begin{itemize}
\item \textbf{Termhood:} It consists of only numbers or nouns. We specifically focus on numbers or nouns for two reasons: (1) based on the interpretation literature, these categories contain items that are most consistently difficult to recall \citep{jones02conferenceinterpreting,gile2009basic}, and (2) these words tend to have less ambiguity in their translations than other types of words, making it easier to have confidence in the translations proposed to interpreters.
\item \textbf{Relevance:} A translation of $s_{i:j}$, which we denote $t$, occurs in a sentence-aligned reference translation $R$ produced by a translator in an offline setting. This indicates that in a time-unconstrained scenario, the term \emph{should} be translated.
\item \textbf{Interpreter Coverage:} It is not translated, literally or non-literally, by the interpreter in interpreter output $I$. This may reasonably allow us to conclude that translation thereof may have presented a challenge, resulting in the content not being conveyed.
\end{itemize}
Importantly, we note that the phrase \textit{untranslated} terminology covers words that are dropped mistakenly, dropped intentionally because the interpreter deemed them unnecessary for conveying the meaning, or mistranslated.
We contrast this with \textit{literal} and \textit{non-literal} term coverage, which encompasses words translated in a verbatim and a paraphrastic way, respectively.
\subsection{Creating Term Annotations}\label{sec:annotation}
To obtain data with labels that satisfy the previous definition of untranslated terminology, we can leverage existing corpora containing sentence-aligned source, translation, and simultaneous interpretation data.
Several of these resources exist, such as the NAIST Simultaneous Translation Corpus (STC) \citep{shimizu2014collection} and the European Parliament Translation and Interpreting Corpus (EPTIC) \citep{bernardini2016eptic}.
Next, we process the source sentences, identifying terms that satisfy the termhood, relevance, and interpreter coverage criteria listed previously.
\begin{itemize}
\item \textbf{Termhood Tests:} To check termhood for each source word in the input, we first part-of-speech (POS) tag the input, then check the tag of the word and discard any that are not nouns or numbers.
\item \textbf{Relevance and Interpreter Coverage Tests:} Next, we need to measure relevancy (whether a corresponding target-language term appears in translated output), and interpreter coverage (whether a corresponding term \emph{does not} appear in interpreted output).
An approximation to this is whether one of the translations listed in a bilingual dictionary appears in the translated or interpreted outputs respectively, and as a first pass we identify all source terms with the corresponding target-language translations.
However, we found that this automatic method did not suffice to identify many terms due to lack of dictionary coverage and also to non-literal translations.
To further improve the accuracy of the annotations, we commissioned human translators to annotate whether a particular source term is translated literally, non-literally, or untranslated by the translator or interpreters (details given in \S\ref{sec:annotation_stc}).
\end{itemize}
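A toy sketch of the three inclusion tests follows. The POS tags, sentences, and tiny bilingual dictionary below are hypothetical stand-ins for a real tagger, corpus, and lexicon, but the logic (termhood, then relevance, then lack of interpreter coverage) follows the criteria above.

```python
# Hypothetical POS tags, sentences, and dictionary; as in the example below,
# the interpreter renders "40" as "4", so "40" fails the coverage test.
source = [("In", "IN"), ("California", "NNP"), ("40", "CD"), ("percent", "NN")]
reference = "カリフォルニア では 40 パーセント 減少した".split()
interpretation = "カリフォルニア では 4 パーセント".split()
dictionary = {"California": ["カリフォルニア"], "40": ["40"], "percent": ["パーセント"]}

def is_untranslated_term(word, pos):
    if not (pos.startswith("NN") or pos == "CD"):   # termhood: nouns/numbers
        return False
    translations = dictionary.get(word, [])
    relevant = any(t in reference for t in translations)       # in translation
    covered = any(t in interpretation for t in translations)   # in interpretation
    return relevant and not covered

untranslated = [w for w, p in source if is_untranslated_term(w, p)]
print(untranslated)  # ['40']
```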
Once these inclusion criteria are calculated, we can convert all untranslated terms into an appropriate format conducive to training supervised taggers. In this case, we use an IO tagging scheme \citep{ramshaw1999text} where all words corresponding to untranslated terms are assigned the label I, and all others are assigned a label O, as shown in Fig.~\ref{fig:labeled_sentence}.
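Converting term spans to IO tags is straightforward; the sketch below reproduces the example of Fig.~\ref{fig:labeled_sentence}.

```python
def io_tags(sentence, term_spans):
    """Assign I to words inside untranslated-term spans [i, j), O elsewhere."""
    tags = ["O"] * len(sentence)
    for i, j in term_spans:
        for k in range(i, j):
            tags[k] = "I"
    return tags

words = ("In California there has been a 40 percent decline "
         "in the Sierra snowpack").split()
tags = io_tags(words, term_spans=[(6, 7), (11, 13)])  # [40], [Sierra snowpack]
print(list(zip(words, tags))[6])  # ('40', 'I')
```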
\begin{figure}[t]
\centering
\begin{tabular}{@{}c@{\hspace{0.8\tabcolsep}}l@{}}
{\small Src} & \fbox{
\parbox{.4\textwidth}{
$\undset{\text{O}}{\text{\small In}}$
$\undset{\text{O}}{\text{\small California}}$,
$\undset{\text{O}}{\text{\small there}}$
$\undset{\text{O}}{\text{\small has}}$
$\undset{\text{O}}{\text{\small been}}$
$\undset{\text{O}}{\text{\small a}}$
$\undset{\text{I}}{\lbrack\text{\small 40}\rbrack}$
$\undset{\text{O}}{\text{\small percent}}$
$\undset{\text{O}}{\text{\small decline}}$
$\undset{\text{O}}{\text{\small in}}$
$\undset{\text{O}}{\text{\small the}}$
$\lbrack\undset{\text{I}}{\text{\small Sierra}}$
$\undset{\text{I}}{\text{\small snowpack}}\rbrack$.}} \\
{\small Interp} & \fbox{
\parbox{.4\textwidth}{
\begin{CJK}{UTF8}{min}
$\undset{\text{California}}{\text{\small カリフォルニア で は}}$、
$\undset{\text{4}}{\text{\small 4}}$
$\undset{\text{percent}}{\text{\small パーセント}}$
$\undset{\text{decline}}{\text{\small 少な く な っ て}}$
{\small しま い ま し た 。}
\end{CJK}
}}\\
\end{tabular}
\caption{A source sentence and its corresponding interpretation. Untranslated terms are surrounded by brackets and each word in the term is labeled with an I-tag. The interpreter mistakes the term \textit{40} for \textit{4}, and omits \textit{Sierra snowpack}.}
\vspace{-2mm}
\label{fig:labeled_sentence}
\end{figure}
\begin{figure*}
\centering
\includegraphics[trim={2cm 12.58cm 7cm 12cm},clip,width=0.92\textwidth]{svmtagger-color.pdf}
\caption{
Our tagging model at prediction time. A sliding window SVM, informed by a task-specific feature function $\phi$ with access to the POS tags, source speech timing (in seconds), and other information, predicts whether or not words matching the termhood constraint (in blue) are likely to be left untranslated in SI.}
\label{fig:model}
\end{figure*}
\section{Predicting Untranslated Terminology}
With supervised training data in hand, we can create a model for predicting untranslated terminology that could potentially be used to provide interpreters with real-time assistance.
In this section, we outline a couple of baseline models, and then describe an SVM-based tagging model, which we specifically tailor to untranslated terminology prediction for SI by introducing a number of hand-crafted features.
\subsection{Heuristic Baselines}\label{sec:baselines}
In order to compare with current methods for term suggestion in CAI, such as \citet{fantinuoli2017challenges}, we first introduce a couple of heuristic baselines.
\begin{itemize}
\item \textbf{Select noun/\# POS tag:}
Our first baseline recalls all words that meet the termhood requirement from \S\ref{sec:taskintro}.
Thus, it will achieve perfect recall at the cost of precision, which will equal the percentage of I-tags in the data.
\item \textbf{Optimal frequency threshold:}
To increase precision over this naive baseline, we also experiment with a baseline that applies a frequency threshold, outputting only words that are rarer than this threshold in a large web corpus. The motivation is that rarer words are more likely to be difficult for interpreters to recall and thus be left untranslated.
\end{itemize}
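A minimal sketch of the optimal-frequency-threshold baseline (the termhood POS filter is as defined earlier; the function name and the toy unigram counts are illustrative, not the actual Web 1T data):

```python
def freq_threshold_baseline(words, pos_tags, unigram_counts, threshold):
    """Flag noun/number words that are rarer than `threshold` in a reference corpus."""
    TERM_POS = {"CD", "NN", "NNS", "NNP", "NNPS"}
    flags = []
    for word, pos in zip(words, pos_tags):
        # A word absent from the counts is treated as maximally rare.
        is_term = pos in TERM_POS and unigram_counts.get(word.lower(), 0) < threshold
        flags.append("I" if is_term else "O")
    return flags

counts = {"snowpack": 120_000, "decline": 9_000_000}  # toy counts
flags = freq_threshold_baseline(["snowpack", "decline"], ["NN", "NN"], counts, 1_000_000)
# → ["I", "O"]
```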
\subsection{SVM-based Tagging Model}
\label{sec:svmtagger}
While these baselines are simple and intuitive, we argue that there are a large number of other features that indicate whether an interpreter is likely to leave a term untranslated.
We thus define these features, and resort to machine-learned classifiers to integrate them and improve performance.
State-of-the-art sequence tagging models process sequences in both directions prior to making a globally normalized prediction for each item in the sequence \citep{huang2015bidirectional,ma2016end}.
However, the streaming, real-time nature of simultaneous interpretation constrains our model to sequentially process data from left-to-right and make local, monotonic predictions (as noted in \citet{oda14acl,grissom14finalverb}, among others).
Therefore, we use a sliding-window, linear support vector machine (SVM) classifier \citep{cortes1995support,joachims98svm} that uses only local features of the history to make independent predictions, as depicted in Fig.~\ref{fig:model}.\footnote{We also experimented with a unidirectional LSTM tagger \citep{hochreiter97lstm,graves2012sequence}, but found it ineffective on our small amount of annotated data.}
Formally, given a sequence of source words with their side information (such as timings or POS tags) $S = s_{0:N}$, we slide a window $W$ of size $k$ incrementally across $S$, extracting features $\phi(s_{i-k+1:i+1})$ from $s_i$ and its $k-1$ predecessors.
Since our definition of terminology only allows for nouns and numbers, we restrict prediction to words of the corresponding POS tags $Q = \{$CD, NN, NNS, NNP, NNPS$\}$ using the Stanford POS tagger \citep{toutanova2003feature}.
That is, we assign a POS tag $p_i$ to each word from $s_i$ and only extract features/predict using the classifier if $p_i \in Q$; otherwise we always assign the Outside tag.
This disallows words that are of other POS tags from being classified as untranslated terminology and greatly reduces the class imbalance issue when training the classifier.%
\footnote{We note that a streaming POS tagger would have to be used in a real-time setting, as in \cite{oda15acl}.}
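The left-to-right, monotonic prediction procedure can be sketched as follows (the feature function and trained classifier are stand-ins passed in by the caller; names are ours):

```python
TERM_POS = {"CD", "NN", "NNS", "NNP", "NNPS"}

def tag_stream(words, pos_tags, phi, clf, k=8):
    """Sliding-window, monotonic tagging: each word is labeled using only
    features of itself and its k-1 predecessors, as required for streaming SI."""
    tags = []
    for i, pos in enumerate(pos_tags):
        if pos not in TERM_POS:
            tags.append("O")  # non-noun/number words are never predicted as terms
            continue
        lo = max(0, i - k + 1)
        # phi sees the local window and the past predictions, never the future
        x = phi(words[lo:i + 1], pos_tags[lo:i + 1], tags)
        tags.append("I" if clf.predict([x])[0] == 1 else "O")
    return tags
```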
\subsection{Task-specific Features}\label{sec:features}
Because only a small amount of human-interpreted, human-annotated data can be created for this task, it is imperative that we give the model the precise information it needs to generalize well.
To this end, we propose multiple task-specific, non-lexical features to inform the classifier about certain patterns that may indicate terminology likely to be left untranslated.
\begin{itemize}
\item \textbf{Elapsed time:}
As discussed in \S\ref{sec:intro}, SI is a cognitively demanding task.
Interpreters often work in pairs and usually swap between active duty and notetaking roles every 15-20 minutes \citep{lambert1994bridging}.
Towards the end of talks or long sentences, an interpreter may become fatigued or face working memory issues---especially if working alone.
Thus, we monitor the number of minutes elapsed in the talk and the index of the word in the talk/current sentence to inform the classifier.
\item \textbf{Word timing:}
We intuit that a presenter's quick speaking rate can cause the simultaneous interpreter to drop some terminology.
We obtain word timing information from the source speech via forced alignment tools \citep{ochshorn2016gentle, povey11kaldi}.
The feature function extracts both the number of words in the past $m$ seconds and the time deltas between the current word and previous words in the window.
\item \textbf{Word frequency:}
We anticipate that interpreters often leave rarer source words untranslated because they are probably more difficult to recall from memory.
On the other hand, we would expect loan words, words adopted from a foreign language with little or no modification, to be easier to recognize and translate for an interpreter.
We extract the binned unigram frequency of the current source word from the large monolingual Google Web 1T Ngrams corpus \citep{brants2006web}.
We define a loan word as an English word with a Katakana translation in the bilingual dictionaries \citep{eijiro,breen2004jmdict}.
\item \textbf{Word characteristics and syntactic features:} We extract the number of characters and number of syllables in the word, as determined by lookup in the CMU Pronunciation dictionary \citep{weide1998cmu}.
Numbers are converted to their word form prior to dictionary lookup.
Generally, we expect longer words, both by character and syllable count, to represent more technical or marked vocabulary, which may be challenging to translate.
Additionally, we syntactically inform the model with POS tags and regular expression patterns for numerals.
\end{itemize}
These features are extracted via sliding a window over the sentence, as displayed in Fig.~\ref{fig:model} and discussed in \S\ref{sec:svmtagger}.
Thus, we also utilize previous information from the window when predicting for the current word.
This previous information includes past predictions, word characteristics and syntax, and source speech timing.
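To make the feature function concrete, a simplified $\phi$ for the current (last-in-window) word might look as follows (the field layout and the 5-second rate window are illustrative choices of ours, not the exact implementation):

```python
def phi(window_times, elapsed_minutes, word_index,
        unigram_bin, n_chars, n_syllables, is_loan, is_number):
    """Toy feature vector for the current word, given timings of the window words."""
    deltas = [window_times[-1] - t for t in window_times[:-1]]          # word-timing deltas
    recent = sum(1 for t in window_times if window_times[-1] - t <= 5.0)  # speaking-rate proxy
    return [
        elapsed_minutes,               # elapsed time: fatigue proxy
        word_index,                    # position of the word in the talk/sentence
        recent,                        # words spoken in the past 5 seconds
        *deltas,                       # time deltas to previous words in the window
        unigram_bin,                   # binned web-corpus frequency
        n_chars, n_syllables,          # word characteristics
        int(is_loan), int(is_number),  # loan-word / numeral indicators
    ]
```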
\section{Experimental Annotation and Analysis}\label{sec:annotation_stc}
In this section, we detail our application of the term annotation procedure in \S\ref{sec:taskintro} to an SI corpus and analyze our results.
\subsection{Annotation of NAIST STC}
For SI data, we use a seven-talk, manually-aligned subset of the English-to-Japanese NAIST STC \citep{shimizu2014collection}, which consists of source subtitle transcripts, En$\rightarrow$Ja offline translations, and interpretations of English TED talk videos from professional simultaneous interpreters with 1, 4, and 15 years of experience, who are dubbed B-rank, A-rank, and S-rank\footnote{\{B, A, S\}-rank is the Japanese equivalent to \{C, B, A\}-rank on the international scale.}.
TED talks offer a unique and challenging format for simultaneous interpreters because the speakers typically talk in depth about a single topic, and as such there are many new terms that are difficult for an interpreter to process consistently and reliably.
The prevalence of this difficult terminology presents an interesting testbed for our proposed method.
First, we use the Stanford POS Tagger \citep{toutanova2003feature} on the source subtitle transcripts to identify word chunks with a POS tag in $\{$CD, NN, NNS, NNP, NNPS$\}$, discarding words with other tags.
After performing word segmentation on the Japanese data using KyTea \citep{neubig2011kytea}, we automatically check for translation coverage between the source subtitles, SI, and translator transcripts with a string-matching program, according to the relevance and coverage tests from \S\ref{sec:taskintro}.
The En$\leftrightarrow$Ja \textsc{Eijiro} (2.1m entries) \citep{eijiro} and \textsc{Edict} (393k entries) \citep{breen2004jmdict} bilingual dictionaries are combined to provide term translations.
Additionally, we construct individual dictionaries for each TED talk with key acronyms, proper names, and other exclusive terms (e.g., \textit{UNESCO}, \textit{CO2}, \textit{conflict-free}, \textit{Pareto-improving}) to increase this automatic coverage.
Nouns are lemmatized prior to lookup in the bilingual dictionary, and we discard any remaining closed-class function words.
While this automatic process is satisfactory for identifying if a translated term occurs in the translator's or interpreters' transcripts (relevancy), it is inadequate for verifying the terms that occur in the translator's transcript, but \textit{not} the interpreters' outputs (interpreter coverage).
Therefore, we commissioned seven professional translators to review and annotate those source terms that could not be marked as translated by the automatic process as either \textit{translated}, \textit{untranslated}, or \textit{non-literally translated} in each target sentence.
Lastly, we add I-tags to each word in the untranslated terms and O-tags to the words in literally and non-literally translated terms.
\subsection{Annotation Analysis}
\begin{table}[t]
\centering
\begin{tabular}{c c c c c c c c}
\toprule
& \multicolumn{2}{c}{\textbf{trans.}} & \multicolumn{2}{c}{\textbf{non-lit.}} & \multicolumn{2}{c}{\textbf{raw untrans.}}\\\cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}
\textbf{T/I} & \textbf{\#} & \textbf{\%} & \textbf{\#} & \textbf{\%} & \textbf{\#} & \textbf{\%} \\
\midrule
T & 2,213 & 80 & 158 & 6 & 401 & 14 \\
B & 1,134 & 41 & 92 & 3 & 1,546 & 56 \\
A & 1,151 & 42 & 114 & 4 & 1,507 & 54\\
S & 1,531 & 55 & 170 & 6 & 1,071 & 39\\
\bottomrule
\end{tabular}
\caption{
Translated, non-literally translated, and raw untranslated term annotations obtained in the annotation process using the NAIST STC for (T)ranslator, and \{B,A,S\}-rank SI. Note that these \textit{raw} untranslated term figures are directly from the annotation process, prior to filtering based off of the term relevancy constraint from \S\ref{sec:taskintro}.
}
\label{tab:annotation1}
\end{table}
Table \ref{tab:annotation1} displays the term coverage annotation statistics for the translators and interpreters.
Since translators performed in an offline setting without time constraints, they were able to translate the largest number of source terms into the target language, with 80\% being literally translated, and 6\% being non-literally translated.
On the other hand, interpreters tend to leave many source terms uncovered in their translations.
The A-rank and B-rank interpreters achieve roughly the same level of term coverage, with the A-rank being only slightly more effective than B-rank at translating terms literally and non-literally.
This is in contrast with \citet{shimizu2014collection}'s automatic analysis of translation quality on a three-talk subset, in which A-rank has slightly higher translation error rate and lower BLEU score \citep{papineni02bleu} than the B-rank interpreter.
The most experienced S-rank interpreter leaves 17\% fewer terms than B-rank uncovered in the translations.
More interestingly, the number of non-literally translated terms also correlates with experience level.
In fact, the S-rank interpreter actually exceeds the translator in the number of non-literal translations produced.
Non-literal translations can occur when the interpreter fully comprehended the source expression, but chose to generate it in a way that better fit the translation in terms of fluency.
In Table \ref{tab:annotation2}, we show the number of terms left untranslated by each interpreter rank after processing our annotations for the relevancy constraint of \S\ref{sec:taskintro}.
Since the number of per-word I-tags is only slightly higher than the number of untranslated terms, most such terms consist of a single word, averaging about 6.5 characters across all ranks.
Capitalized terms (i.e., named entities/locations) constitute about 14\% of B-rank, 13\% of A-rank, and 15\% of S-rank terms.
Numbers represent about 5\% of untranslated terms for each rank.
\begin{table}[t]
\centering
\begin{tabular}{l r r r}
\toprule
~ & ~ & \multicolumn{2}{c}{\textbf{\% I-tag of}} \\\cmidrule(lr){3-4}
\textbf{SI} & \textbf{\# untrans. terms} & \textbf{all} & \textbf{noun/\#} \\
\midrule
B-rank & 1,256 & 10.8 & 45.4\\
A-rank & 1,206 & 10.4 & 43.6\\
S-rank & 812 & 7.0 & 29.6\\
\bottomrule
\end{tabular}
\caption{
Final untranslated term count and number of I-tags after filtering based off of the \textit{relevancy} constraint (\S\ref{sec:taskintro}). That is, only the raw untranslated source terms that appear in the translator's transcript are truly considered untranslated.
}
\label{tab:annotation2}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[trim={2.25cm 7.7cm 0cm 8cm},clip,width=.49\textwidth]{venn-untrans-terms-save.pdf}
\caption{Untranslated term overlap between interpreters.}
\label{fig:overlap}
\end{figure}
The untranslated term overlap between interpreters is visualized in Fig.~\ref{fig:overlap}.
Most difficult terms are shared amongst interpreter ranks as only 23.2\% (B), 22.1\% (A), and 11.7\% (S) of terms are unique for each interpreter.
We show a sampling of some unique noun terms on the outside of the Venn diagram, along with the untranslated terms shared among all ranks in the center.
Among these unique terms, capitalized terms make up 19\% of B-rank/S-rank, but only 13\% of A-rank.
7.4\% of S-rank's unique terms are numbers compared with about 5\% for the other two ranks.
\section{Term Prediction Experiments}
\subsection{Experimental Setting}
We design our experiments to evaluate both the effectiveness of a system to predict untranslated terminology in simultaneous interpretation and the usefulness of our features given the small amount of aligned and labeled training data we possess.
We perform leave-one-out cross-validation using five of the seven TED talks as the training set, one as the development set, and one as the test set.
Hyperparameters (the SVM's penalty term, the number of bins for the word frequency feature = 9, and the sliding window size = 8) are tuned on the development fold, and the best model, selected by average precision score, is used for predictions on the test fold.
Both training and prediction are performed at the sentence level.
During training, we weight the two classes inversely proportional to their frequencies in the training data to ensure that the majority O-tag does not dominate the I-tag.
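This inverse-frequency weighting corresponds to the standard "balanced" heuristic, $w_c = n_{\rm samples} / (n_{\rm classes} \cdot n_c)$; a small illustrative helper (our own, equivalent in spirit to what off-the-shelf SVM implementations provide):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class inversely proportional to its frequency:
    w_c = n_samples / (n_classes * count_c), so the majority O-tag
    does not dominate the rare I-tag during training."""
    counts = Counter(labels)
    n = len(labels)
    return {c: n / (len(counts) * k) for c, k in counts.items()}

# e.g. 90 O-tags vs 10 I-tags: the I class is up-weighted 9x relative to O
w = balanced_class_weights(["O"] * 90 + ["I"] * 10)
```

With these weights, the total weighted mass of each class is equal, i.e. $90\,w_{\rm O} = 10\,w_{\rm I}$.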
\begin{table}[t]
\centering
\begin{tabular}{l c c c}
\toprule
~ & \multicolumn{3}{c}{\textbf{AP}} \\\cmidrule(lr){2-4}
\textbf{Method} & \textbf{B} & \textbf{A} & \textbf{S}\\
\midrule
Select noun/\# POS tag & 45.4 & 43.6 & 29.6 \\
Optimal freq threshold & 49.7 & 48.1 & 32.9 \\
\midrule
SVM (all features) & 58.9 & 53.5 & 39.1 \\
$-$ elapsed time & 58.8 & 53.0 & 38.8 \\
$-$ word timing & 58.2 & 53.2 & 38.5 \\
$-$ word freq & \textbf{59.4} & 52.5 & 39.1 \\
$-$ characteristic/syntax & 59.3 & \textbf{55.1} & \textbf{42.5} \\
\bottomrule
\end{tabular}
\caption{
Average precision score cross-validation results with feature ablation for the untranslated term class on test data.
Optimal word frequency threshold is determined on dev set of each fold.
Evaluation is performed at the word level.
Highest numbers per column are bolded.
Each setting is statistically significant at $p < 0.05$ by paired bootstrap \citep{koehn04sigtest}.
}
\label{tab:results}
\end{table}
\subsection{Results and Analysis}\label{sec:results}
\begin{table}[t]
\centering
\begin{tabular}{l l}
\toprule
{Select POS} & \parbox{4.7cm}{
in the last
\textcolor{red}{5}
\textcolor{red}{years}
we
've
added
\textcolor{red}{70000000}
\textcolor{red}{tons}
of
\textcolor{red}{co2}
every
\textcolor{blue}{24}
\textcolor{blue}{hours}
\textcolor{blue}{25000000}
\textcolor{red}{tons}
every
\textcolor{blue}{day}
to
the
\textcolor{blue}{oceans}
} \\
\midrule
{Optimal freq} & \parbox{4.7cm}{
in the last
5
years
we
've
added
\textcolor{red}{70000000}
\textcolor{red}{tons}
of
\textcolor{red}{co2}
every
\textcolor{orange}{24}
\textcolor{orange}{hours}
\textcolor{blue}{25000000}
\textcolor{red}{tons}
every
\textcolor{orange}{day}
to
the
\textcolor{blue}{oceans}
} \\
\midrule
{SVM} & \parbox{4.7cm}{
in the last
5
years
we
've
added
70000000
tons
of
co2
every
\textcolor{orange}{24}
\textcolor{blue}{hours}
\textcolor{blue}{25000000}
\textcolor{red}{tons}
every
\textcolor{orange}{day}
to
the
\textcolor{blue}{oceans}
} \\
\bottomrule
\end{tabular}
\caption{B-rank output from our model contrasted with baselines. Type I errors are in \textcolor{red}{red}, type II errors in \textcolor{orange}{orange}, and correctly tagged untranslated terminology in \textcolor{blue}{blue}.}
\label{tab:example}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=7cm]{B-rankpr.pdf}\\
\includegraphics[width=7cm]{A-rankpr.pdf}
\includegraphics[width=7cm]{S-rankpr.pdf}
\caption{Precision-recall curves for each interpreter rank.}
\label{fig:prcurve}
\end{figure*}
Since we are ultimately interested in the precision and recall trade-off among the methods, we evaluate our results using precision-recall curves in Fig. \ref{fig:prcurve} and the average precision (AP) scores in Table \ref{tab:results}.
AP\footnote{We compute AP using the scikit-learn implementation \citep{scikit-learn}.} summarizes the precision-recall curve by calculating the weighted mean of the precisions at each threshold, where the weights are equal to the increase in recall from the previous threshold.
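Concretely, the weighted mean in this definition can be computed as follows (a pure-Python sketch mirroring, but not identical to, the scikit-learn implementation; it assumes at least one positive label):

```python
def average_precision(scores, labels):
    """AP = sum over thresholds of (R_n - R_{n-1}) * P_n, sweeping the
    decision threshold from the highest score to the lowest."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(labels)  # assumed > 0
    tp = 0
    ap, prev_recall = 0.0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            tp += 1
        precision = tp / rank
        recall = tp / n_pos
        ap += (recall - prev_recall) * precision  # weight = increase in recall
        prev_recall = recall
    return ap
```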
If the method is embedded in a CAI system, then the user could theoretically adjust the precision-recall threshold to balance helpful term suggestions with cognitive load.
Overall, we tend to see that all methods perform best when tested on data from the B-rank interpreter, and observe a decline in performance across all methods with an increase in interpreter experience.
We believe that this is due to a decrease in the number of untranslated terminology as experience increases (i.e., class imbalance) coupled with the difficulty of predicting such exclusive word occurrences from only source speech and textual cues.
Ablation results in Table \ref{tab:results} show that not all of the features are able to improve classifier performance for all interpreters.
While the elapsed time and word timing features tend to cause a degradation in performance when removed, ablating the word frequency and characteristic/syntax features can actually improve average precision score.
Word frequency, which is a recall-based feature, appears to help only the A-rank interpreter; ablating it leaves the S-rank score unchanged and slightly improves the B-rank score.
Although the characteristic/syntax features are also recall-based, including them degrades performance across all interpreter ranks, likely because they are simply too noisy.
When ablating the uninformative features for each rank, the SVM is able to increase AP vs. the optimal word frequency baseline by about 20\%, 15\%, and 30\% for the B, A, and S-rank interpreters, respectively.
In Table \ref{tab:example}, we show an example taken from the first test fold with results from each of the three methods.
The SVM's increased precision is able to greatly reduce the number of false positives, which we argue could overwhelm the interpreter if left unfiltered and shown on a CAI system.
Nevertheless, one of the most apparent false positive errors that still occurs with our method is on units following numbers, such as the word \textit{tons} in the example.
Also, because our model prioritizes avoiding this type I error, it is more susceptible to type II errors, such as ignoring untranslated terms \textit{24} and \textit{day}.
A user study with our method embedded in a CAI system would reveal the true costs of these different errors, but we leave this to future work.
\section{Conclusion and Future Work}
In this paper, we introduce the task of automatically predicting terminology likely to be left untranslated in simultaneous interpretation, create annotated data from the NAIST ST corpus, and propose a sliding window, SVM-based tagger with task-specific features to perform predictions.
We plan to assess the effectiveness of our approach in the near future by integrating it in a heads-up display CAI system and performing a user study.
In this study, we hope to discover the ideal precision and recall tradeoff point regarding cognitive load in CAI terminology assistance and use this feedback to adjust the model.
Other future work could examine the effectiveness of the approach in the opposite direction (Japanese to English) or on other language pairs.
Additionally, speech features could be extracted from the source or interpreter audio to reduce the dependence on a strong ASR system.
\section{Acknowledgements}
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE1745016 and National Science Foundation EAGER under Grant No. 1748642.
We would like to thank Jordan Boyd-Graber, Hal Daum\'{e} III and Leah Findlater for helpful discussions, Arnav Kumar for assistance with the term annotation interface, and
the anonymous reviewers for their useful feedback.
\input{bib.bbl}
\end{document}
\section{Introduction}
Active galactic nuclei (AGN) are powered by accretion of material onto supermassive black holes (SMBHs), which releases energy in the form of radiation and/or mechanical outflows to the interstellar medium (ISM) of the host galaxy. The impact of the energy released by AGN on their surrounding environment has been proposed as a key mechanism responsible for regulating star formation in galaxies \citep{HopkinsQuataert10}. Although they comprise a relatively small fraction of the galaxies in the local universe ($\sim$10\%), AGN are now considered to be a short phase ($<$100~Myr; e.g. \citealt{Hopkins05}) that might take place in all galaxies (e.g. \citealt{Hickox14}).
Nearby Seyfert (Sy) galaxies are intermediate-luminosity AGN which are close enough ($\sim$tens of Mpc) to study their nuclear emission and characterize the properties of the nuclear obscurer on $\sim$100~pc scales (at the average angular resolution of 8-10~m-class ground-based telescopes $\sim$0.3\arcsec ~at 10\,$\rm{\mu}$m). The dusty torus/disc\footnote{Hereafter we will use the terms dusty torus and disc interchangeably. This term does not necessarily refer to a geometrically thick torus. Note that the majority of torus models use a flared disc geometry (i.e. a disc whose thickness increases with the distance from the centre).} is the key piece of the AGN unified model \citep{Antonucci93}. Depending on its orientation, it obscures the central engines of type 2 AGN, and provides a direct view of the central engine in the case of type 1 AGN. This nuclear dust absorbs a significant part of the AGN radiation and then reprocesses it, so that it emerges in the infrared (IR; e.g. \citealt{Pier92}).
Early works using direct imaging and interferometric data have found a relatively compact torus ($\sim$0.1-10~pc) in the mid-IR (MIR; $\sim$5--30~$\mu$m; e.g. \citealt{Jaffe04,Packham05,Tristram07,Tristram09,Tristram14,Radomski2008,Burtscher09,Burtscher13,Raban09,Lopez-Gonzaga16,Leftley18}). Recently, Atacama Large Millimeter/submillimeter Array (ALMA) observations of
the cold dust in Seyfert galaxies have spatially resolved the sub-mm counterpart of the torus (e.g. \citealt{Gallimore16,Garcia-Burillo16,Garcia-Burillo21,Imanishi18,Impellizzeri19}). \citet{Garcia-Burillo21} found that the bulk of the nuclear cold dust emission in Sy galaxies is equatorial with a median diameter of $\sim$42\,pc. These dusty molecular tori have also been detected in molecular gas observations of Seyfert galaxies and low-luminosity AGNs (e.g. \citealt{Herrero18,Herrero19,Herrero21,Combes19,Garcia-Burillo21}). These results suggest the multi-phase nature of the torus structure.
Due to the small angular size of the dusty and molecular tori, especially in the IR, 8-10~m-class ground-based telescopes cannot resolve them. Thus, comparing torus models to the observed nuclear IR spectral energy distributions (SEDs) is a powerful tool to constrain the properties of the nuclear dusty structure. Torus models can be broadly grouped into two categories: dynamical (i.e. radiation hydrodynamical and magneto-hydrodynamical simulations; e.g. \citealt{Wada02,Schartmann08,Wada12,Dorodnitsyn17,Kudoh20,Takasao22}) and static (i.e. radiative transfer models; e.g. \citealt{Pier92,Efstathiou95,Fritz06,Nenkova08A,Nenkova08B,Hoenig10B,Hoenig17,Stalevski2012,Stalevski16,Siebenmorgen2015}). The dynamical models include processes such as supernovae and AGN feedback. However, they require large computational times and thus it is more difficult to compare them with observations. On the other hand, static torus models can be easily compared with the observations, assuming various geometries and compositions of the dust (see \citealt{Ramos17,Honig19} for reviews).
For the sake of simplicity, the first geometrical torus models assumed a uniform distribution of the dust (e.g. \citealt{Pier92,Fritz06}). However, pioneering works showed that a clumpy distribution of the dust was necessary to prevent the destruction of grains \citep{Krolik88}. Therefore, a clumpy formalism has been employed in the majority of torus models (e.g. \citealt{Nenkova08A,Nenkova08B,Hoenig10B,Hoenig17}). Moreover, several hydrodynamical simulations predict that the torus is a multiphase structure (e.g. \citealt{Wada02,Schartmann14}) with a combination of smooth and clumpy dust distributions (i.e. two-phase torus models; e.g. \citealt{Stalevski2012,Stalevski16,Siebenmorgen2015}).
Since the first torus models were developed, our view of the dusty
torus has changed considerably. For instance, recent observations using IR interferometry have motivated a more complex scenario to explain the IR nuclear emission of Seyfert galaxies. \citet{Hoenig13} suggested that a significant fraction of the MIR emission is produced by dust located in the polar direction, whereas the near-infrared (NIR) flux is produced by a clumpy and compact disk (i.e. the dusty torus). Thus, some of the geometrical torus models also include a polar dust component (e.g. \citealt{Hoenig17}; hereafter clumpy disc$+$wind models). This polar emission has been detected on small scales (a few pc) so far in six of the 23 sources observed using IR interferometry \citep{Lopez-Gonzaga16, Leftley18, Rosas22, Isbell22}. In addition, previous works also showed a large-scale polar dust component (up to a few hundred parsec; e.g. \citealt{Bock00,Radomski03,Packham05,Asmus14,Asmus19,Herrero21}).
Although the nuclear dust properties of nearby Seyfert galaxies have been extensively studied in the literature, only a few works have compared SED fits with different torus models to investigate which of them better reproduces the SED of Seyfert galaxies (e.g. \citealt{Gonzalez-Martin19A,Gonzalez-Martin19B}; hereafter GM19A,19B; \citealt{Esparza-Arredondo19,Esparza-Arredondo21}) and type-1 QSOs \citep{Martinez-Paredes21}. Given the different assumptions used to build the various available torus models, it is crucial to compare how they fit the observational data. Using \emph{Spitzer}/IRS spectra ($\sim$5-35\,$\mu$m), GM19B found that the clumpy disc$+$wind models \citep{Hoenig17} reproduce well the MIR emission of Sy1, whereas Sy2 are almost equally well fitted by clumpy torus models (\citealt{Nenkova08A,Nenkova08B}; $\sim$43\% of the Sy2s) or clumpy disc$+$wind models ($\sim$40\% of the Sy2s). However, this study was limited by the spatial resolution ($\sim$4$\arcsec$) and spectral coverage (5-30\,$\mu$m) of Spitzer/IRS. Furthermore, \citet{Ramos14} reported that the minimum combination of subarcsecond angular resolution data needed to constrain torus model parameters is N-band spectroscopy (8--13\,$\mu$m) and NIR photometry (at least two data-points) when using the clumpy torus models by \citet{Nenkova08A,Nenkova08B}. However, there is a lack of detailed studies comparing different torus models to high angular resolution NIR-to-MIR data of Sy galaxies.
In this work, we investigate for the first time how various geometrical torus models (i.e. smooth, clumpy, two-phase) and disc$+$wind models fit the nuclear IR ($\sim$1-30~$\mu$m) SED of the ultra-hard X-ray volume-limited sample of Sy galaxies (BCS$_{40}$ sample) presented in \citet{Bernete16}. This will allow us to better understand the geometry, chemical composition, grain sizes and distribution of the nuclear dust. In addition, this will help to test the validity of the various torus models.
The paper is organized as follows. Section \ref{sample} describes the sample selection. Section \ref{sec:models} gives a summary of the models used throughout this paper. The nuclear IR SED modelling is presented in Section \ref{results}. The main results are included in Section \ref{comparison} and discussed in Section \ref{discussion}. Finally, in Section \ref{conclusions} we summarize the main conclusions of this work.
\begin{table*}
\footnotesize
\caption{Main properties of the BCS$_{40}$ sample.}
\centering
\begin{tabular}{lccccccccc}
\hline
Name & R.A.& Dec.&D$_{L}$ &Spatial&Sy &log (N$_{\rm H}$)&log (L$_{\rm int}^{\rm X-ray}$)&log (M$_{\rm BH}$/M$_{\sun}$)&log ($\lambda_{\rm Edd}$)\\
& (J2000)& (J2000)&(Mpc) &scale& type& (cm$^{-2}$)&(erg s$^{-1}$)&&\\
& && &(pc/arcsec)& &&&&\\
(1)&(2)&(3)&(4)&(5)&(6)&(7)&(8)&(9)&(10)\\
\hline
NGC\,1365 & 03h33m36.4s& -36d08m25s&21.5&103&1.8&22.21&42.32&7.92 (a)&-2.44 \\
NGC\,2110 & 05h52m11.4s& -07d27m22s&32.4&155&2.0&22.94&42.69&9.25 (b)&-3.40 \\
ESO\,005-G004 & 06h05m41.6s& -86d37m55s&24.1&116&2.0& 24.34 &42.78 & 6.98 (c) &-1.04\\
NGC\,2992 & 09h45m42.0s& -14d19m35s&34.4&164&1.9 &21.72&42.00&5.42 (b)&-0.26 \\
MCG-05-23-016 & 09h47m40.1s& -30d56m55s&35.8&171&2.0& 22.18&43.20&7.98 (a)&-1.62 \\
NGC\,3081 & 09h59m29.5s& -22d49m35s&34.5&164&2.0 &23.91&42.72&8.41 (b)&-2.53 \\
NGC\,3227 & 10h23m30.6s& +19d51m54s&20.4&98&1.5&20.95&42.10&6.62 (b)&-1.36 \\
NGC\,3783 & 11h39m01.7s& -37d44m19s&36.4&173&1.2 & 20.49&43.43&7.14 (a)&-0.55 \\
UGC\,6728 & 11h45m16.0s& +79d40m53s&32.1&153&1.2 & 20.00&41.80&5.32 (b)& -0.36 \\
NGC\,4051 & 12h03m09.6s& +44d31m53s&12.9&62&1.2&20.00&41.33&5.60 (b)& -1.11 \\
NGC\,4138 & 12h09m29.8s& +43d41m07s&17.7&85&1.9&22.89&41.23&7.30 (b)&-2.91 \\
NGC\,4151 & 12h10m32.6s& +39d24m21s&20.0&96&1.5& 22.71&42.31&7.43 (a)&-1.96 \\
NGC\,4388* & 12h25m46.7s& +12d39m44s&17.0&82&2.0&23.52&43.05&6.99 (b)&-0.78 \\
NGC\,4395 & 12h25m48.8s& +33d32m49s&3.84&19&1.8&21.04&40.50&4.88 (a)&-1.22 \\
NGC\,4945 & 13h05m27.5s& -49d28m06s&4.36&21&2.0&24.80&42.69&7.78 (a)&-1.93 \\
NGC\,5128/CenA & 13h25m27.6s& -43d01m09s&4.28&21&2.0&23.02&42.39&7.94 (a)&-2.39 \\
MCG-06-30-015 & 13h35m53.7s& -34d17m44s&26.8&128&1.2&20.85&42.74&7.42 (a)&-1.52 \\
NGC\,5506 & 14h13m14.9s& -03d12m27s&30.1&144&1.9&22.44&42.99&8.29 (a)&-2.14 \\
NGC\,6300 & 17h16m59.5s& -62d49m14s&14.0&68&2.0&23.31&41.84&7.01 (a)&-2.01 \\
NGC\,6814 & 19h42m40.6s& -10d19m25s&25.8&123&1.5 &20.97&42.31&6.46 (b)&-0.99 \\
NGC\,7172 & 22h02m01.9s& -31d52m11s&37.9&180&2.0 &22.91&42.76&8.45 (b)&-2.53 \\
NGC\,7213 & 22h09m16.3s& -47d10m00s&25.1&120&1.5& 20.00&41.95&7.37 (c)&-2.26 \\
NGC\,7314 & 22h35m46.2s& -26d03m02s&20.9&100&1.9&21.60&42.33&7.24 (b)&-1.75 \\
NGC\,7582 & 23h18m23.5s& -42d22m14s&22.1&106&2.0&24.33&42.86&7.52 (a)& -1.50 \\
\hline
\end{tabular}
\tablefoot{ Right ascension (R.A.), declination (Dec.) and Seyfert type. *This galaxy is part of the Virgo Cluster \citep{Binggeli85}. We assumed a cosmology with H$_0$=70 km~s$^{-1}$~Mpc$^{-1}$, $\Omega_m$=0.3, and $\Omega_{\Lambda}$=0.7, and a velocity field corrected using the \citet{Mould00} model, which includes the influence of the Virgo cluster, the Great Attractor, and the Shapley supercluster. The X-ray hydrogen column density and intrinsic 2--10~keV X-ray luminosity were taken from \citet{Ricci17}. References for the BH masses: (a) GB19; (b) \citet{Koss17}; (c) \citet{Vasudevan10}. Note that the Eddington ratio is derived following the same methodology as \citet{Ricci17c}.}
\label{tab1}
\end{table*}
\section{Sample selection}
\label{sample}
Our sample consists of 24 Seyfert galaxies selected from the nine-month catalog \citep{Tueller2008} observed with {\textit{Swift/BAT}}. This sample was previously presented in \citet{Bernete16} (hereafter the BCS$_{40}$ sample: BAT Complete Seyfert sample at D$_L<$40\,Mpc). The ultra-hard 14-195 keV band used to select the parent sample is far less sensitive to the effects of obscuration than optical or softer X-ray wavelengths, making this AGN selection one of the least biased to date for N$\rm{_H}$ $\rm{<}$10$\rm{^{24}}$~cm$\rm{^{-2}}$ (see e.g. \citealt{Winter2009,Winter2010,Weaver2010,Ichikawa2012,Ricci15,Ueda15}).
We selected all the Seyfert galaxies in the nine-month catalog with luminosity distances D$_L<$40\,Mpc. We used this distance limit to ensure a resolution element of $\leqslant$50\,pc in the MIR, considering the average angular resolution of 8-10~m-class ground-based telescopes ($\sim$0.3\arcsec ~at 10\,$\rm{\mu}$m). The sample contains 8~Sy1 (Sy1, Sy1.2 and Sy1.5), 6~Sy1.8/1.9 and 10 Sy2 galaxies. It covers an AGN luminosity range of log(L${_{\textrm{bol}}}$\,erg~s$^{-1}$)$\sim$41.75-44.75\footnote{Throughout this paper we adopt the standard notation log(x)$\equiv$ log$_{10}$(x).} and X-ray hydrogen column densities of N$_{\rm H}^{\rm X-ray}\sim$1$\times$10$^{20}$-6$\times$10$^{24}$~cm$^{-2}$. Note that we derive bolometric luminosities by applying the commonly employed bolometric correction of 20 (L${_{\textrm{bol}}}$=20~$\times$~L${_{2-10~\textrm{keV}}}$; e.g. \citealt{Vasudevan09}) to the 2-10~keV luminosities listed in Table \ref{tab1}. The main properties of the BCS$_{40}$ sample are shown in Table\,\ref{tab1}.
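As an illustration, the bolometric luminosity and Eddington ratio can be approximated from the quantities in Table \ref{tab1}. The minimal sketch below assumes the bolometric correction of 20 quoted above and L$_{\rm Edd}$\,$\simeq$\,1.26$\times$10$^{38}$\,(M$_{\rm BH}$/M$_{\sun}$)\,erg\,s$^{-1}$; the tabulated Eddington ratios follow \citet{Ricci17c} and may differ slightly from this simple estimate.

```python
import math

def log_lbol(log_lx):
    """log L_bol assuming the bolometric correction L_bol = 20 x L_(2-10 keV)."""
    return log_lx + math.log10(20.0)

def log_eddington_ratio(lbol, log_mbh):
    """log lambda_Edd, assuming L_Edd ~ 1.26e38 (M_BH / M_sun) erg/s."""
    return lbol - (math.log10(1.26e38) + log_mbh)

# NGC 3227 values from Table 1: log L_X = 42.10, log M_BH = 6.62
lb = log_lbol(42.10)
print(round(lb, 2), round(log_eddington_ratio(lb, 6.62), 2))
```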
\section{Torus models} \label{sec:models}
We have chosen six models comprising different dust compositions, distributions and geometries (see also \citealt{Gonzalez-Martin19A} and references therein). In Figure\,\ref{torus_geo} we summarize the dust geometries and compositions, sublimation temperatures, and main parameters of each model used in this work. We also present a brief description of the models below:\\
$\rm{\bullet}$ \underline{Smooth torus model} by \cite{Fritz06} (hereafter {\textit{smooth F06 torus models}}): The model used a simple toroidal geometry, consisting of a flared disc represented as two concentric spheres having the polar cones removed. These two spheres delimit the inner and outer torus radii, respectively. For the dust composition, the model considered a standard Galactic mix of 53\% silicates and 47\% graphite. The silicate and graphite grains have radii of $\rm{a_{g}}$ = 0.025 - 0.25 and $\rm{a_{g}}$ = 0.005 - 0.25 $\rm{\mu m}$, respectively. The parameters of the model are the viewing angle toward the torus, $i$, the half opening angle of the torus, $\sigma$, the exponents of the logarithmic azimuthal and radial density distributions, $\gamma$ and $\beta$, respectively, the ratio between the external and internal radii, $\rm{Y = R_{\rm o}/R_{\rm d}}$, and the edge-on optical depth at 9.7 $\mu m$, $\tau_{9.7 \mu m}$ (see Table \ref{fritz_tab_parameters} of Appendix \ref{nuclear_fits}).
$\rm{\bullet}$ \underline{Clumpy torus model} by \cite{Nenkova08A,Nenkova08B} (hereafter {\textit{clumpy N08 torus models}}): The model used a formalism that accounts for the concentration of dust in clouds, forming a torus-like structure. They assumed spherical dust grains and a standard Galactic mix of 53\% silicates and 47\% graphite. The parameters of the model are: the viewing angle toward the torus, $i$, the number of clouds, $N_{0}$, the half opening angle of the torus, $\sigma$, the ratio between the external and internal radii, $\rm{Y = R_{\rm o}/R_{\rm d}}$, the slope of the radial density distribution, $q$, and the optical depth of the individual clouds, $\tau_{\nu}$ (see Table\,\ref{nenkova_tab_parameters} of Appendix\,\ref{nuclear_fits}).
\begin{figure*}
\centering
\par{
\includegraphics[width=14.5cm]{Figures/torus_models_v10nov.pdf}
\par}
\caption{Scheme showing the different dust geometries and compositions of the various torus models used in this work. See Appendix\,\ref{nuclear_fits} for further details on the individual model parameters.
Note that the clumpy disc H17D models are not represented in this figure. However, the latter consist of the clumpy disc component of the clumpy disc$+$wind H17 models.}
\label{torus_geo}
\end{figure*}
$\bullet$ \underline{Clumpy toroidal model} by \citet{Hoenig10B} \citep[see also][]{Hoenig06,Hoenig10A} (hereafter {\textit{clumpy H10 torus models}}): These are radiative transfer models of three-dimensional clumpy dust tori using optically thick dust clouds and a low torus volume filling factor. The majority of the models use a standard ISM dust mixture of graphite (47\%) and silicate (53\%) dust grains with a classical MRN size distribution (\citealt{Mathis77}) and a maximum size of 0.25 $\mu$m. In addition, the clumpy H10 torus models also include ISM-like large grains with sizes between 0.1 and 1.0 $\mu$m (i.e. using the same graphite/silicate mixture) and a population of graphite-dominated grains (highly refractory material), with a 70\% fraction of graphite (30\% silicates) and maximum sizes of 0.25\,$\mu$m. The parameters of this library of SEDs are: the viewing angle $i$, the number of clouds along an equatorial line-of-sight $\rm{N_0}$, the half-opening angle of the distribution of clouds $\rm{\theta}$, the radial dust-cloud distribution power law index $a$, and the opacity of the clouds $\rm{\tau_{cl}}$. The outer torus radius $\rm{R_{\rm o}}$ is fixed to the inner radius as $\rm{R_{\rm o}=150\,R_{d}}$ (see Table \ref{hoenig10_tab_parameters} of Appendix \ref{nuclear_fits}).
$\rm{\bullet}$ \underline{Two-phase torus model} by \cite{Stalevski16} (hereafter {\textit{two-phase S16 torus models}}): The model used a torus geometry with a two-phase dusty medium, consisting of high-density clumps embedded in a smooth dusty component of low density. The dust chemical composition is set to a mixture of silicate and graphite grains. Model parameters are: the viewing angle toward the observer, $i$, the ratio between the outer and the inner radius of the torus, $Y = \rm{R_{\rm o}/R_{\rm d}}$, the half opening angle of the torus, $\sigma$, the indices that set dust density gradient with the radial $p$ and polar $q$ distribution of dust, and the $9.7 \mu m$ average edge-on optical depth, $\tau_{9.7\mu m}$ (see Table \ref{stalev_tab_parameters} of Appendix \ref{nuclear_fits}).
$\rm{\bullet}$ \underline{Clumpy disc and outflow model} by \cite{Hoenig17} (hereafter {\textit{clumpy disc$+$wind H17 models}}): The model consists of a clumpy disc plus a polar outflow. The authors used the same dust composition as in the clumpy H10 torus models, but they also included a second population of large pure-graphite grains (0.75-1.0 $\mu$m), which are more resilient than small silicates in hard environments (see e.g. \citealt{Waxman00,Perna03,Schartmann08,Lu16,Almeyda17,Garcia-Gonzalez17,Hoenig17}). The parameters for this model are the viewing angle, $i$, the number of clouds in the equatorial plane, $N_{0}$, the exponent of the radial distribution of clouds in the disc, $a$, the optical depth of individual clouds in the disc, $\tau_{cl}$ (fixed to 50), the index of the dust cloud distribution power-law along the wind, $a_{w}$, the half-opening angle of the wind, $\theta$, the angular width of the hollow wind cone, $\sigma$, and the wind-to-disc ratio, $f_{wd}$ (which defines the ratio between the number of clouds along the cone and $N_{0}$) (see Table \ref{hoenig17_tab_parameters} of Appendix\,\ref{nuclear_fits}). Note that this model assumes a fixed cloud radius (R$_{\rm cl}=$0.035$\times$r$_{\rm sub}$).
$\rm{\bullet}$ \underline{Clumpy disc} by \cite{Hoenig17} (hereafter {\textit{clumpy disc H17D models}}): The model consists of the clumpy disc component of the previously described clumpy disc$+$wind H17 models (i.e. removing the wind component). Thus, the authors used the same dust grain composition and dust sublimation formalism as in the clumpy disc$+$wind H17 models (see text above and Table \ref{hoenig17_tab_parameters} of Appendix \ref{nuclear_fits}). Note that we include this model to further investigate the impact of the pure-graphite polar dust component on the fits (see Section \ref{discussion_dust_compo}).
It is worth noting that the main differences between the various models employed in this study are: a) nuclear dust geometry (i.e. torus, disc+wind), b) dust distribution (i.e. smooth, clumpy or two-phase) and c) dust composition and the treatment of the sublimation temperature of the dust grains. In particular, clumpy disc$+$wind H17 models are significantly different from the other torus models described above, both in the dust geometry and composition (see Section \ref{sec:models}). \citet{Hoenig17} proposed that a polar dusty outflow is launched near the dust sublimation zone and thus the polar dust composition should be similar to the dust in the inner regions of the disc (see \citealt{Hoenig17,Isbell21}). Therefore, they only included a population of large pure-graphite grains in the polar dust component assuming that it is swept-up dust from the inner wall of the torus/disc where silicate grains are destroyed by the intense emission from the AGN. In contrast, they included both silicate and graphite grains in the torus/disc component. To account for the different dust compositions, \citet{Hoenig17} used a physically motivated dust sublimation model considering that larger grains are heated less efficiently than smaller grains. This leads to various grain radial layers (species and sizes), where large graphite grains are hotter and closer to the AGN (e.g. \citealt{Schartmann08}). Note that this sublimation temperature treatment is not taken into account in the other torus models considered here (e.g. \citealt{Nenkova08A,Nenkova08B,Hoenig10B,Stalevski16}), although the smooth F06 torus models use different sublimation temperatures (T$_{\rm sub}^{\rm silicates}$=1000\,K and T$_{\rm sub}^{\rm graphites}$=1500\,K). Note that for simplicity throughout
this work we will use the term {\textit{torus models}} to refer to the smooth, clumpy and two-phase torus models (i.e. those models that do not include the dusty polar component).
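To make the grain-dependent sublimation explicit, the classical Barvainis (1987)-type scaling for the sublimation radius, r$_{\rm sub}$\,$\propto$\,L$^{1/2}$\,T$_{\rm sub}^{-2.8}$, can be sketched as follows. This is a simplified illustration, not the H17 treatment itself: the coefficient of 1.3\,pc nominally applies to $\sim$0.05\,$\mu$m graphite grains with L$_{\rm UV}$ normalised to 10$^{46}$\,erg\,s$^{-1}$.

```python
def sublimation_radius_pc(l_uv_erg_s, t_sub_k=1500.0):
    """Barvainis (1987)-type dust sublimation radius in parsec:
    r_sub ~ 1.3 (L_UV / 1e46 erg/s)^0.5 (T_sub / 1500 K)^-2.8 pc,
    nominally for ~0.05 um graphite grains."""
    return 1.3 * (l_uv_erg_s / 1e46) ** 0.5 * (t_sub_k / 1500.0) ** -2.8

# Graphite (T_sub ~ 1500 K) survives closer to the AGN than
# silicates (T_sub ~ 1000 K), as assumed in the smooth F06 models:
r_graphite = sublimation_radius_pc(1e44, t_sub_k=1500.0)
r_silicate = sublimation_radius_pc(1e44, t_sub_k=1000.0)
print(r_graphite < r_silicate)
```

This ordering is what produces the layered grain structure (hot large graphites closest to the AGN) discussed above.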
\section{SED fitting with torus models}
\label{results}
\subsection{Accretion disc contribution}
\label{accretion_disk}
The subarcsecond resolution NIR fluxes of type 2 AGN are dominated by emission from hot AGN-heated dust, with very little or no contribution from the accretion disc. However, another contribution to the NIR emission can come from the stellar emission of the host galaxy. To separate, as much as possible, the nuclear NIR emission from the stellar emission, the highest possible spatial resolution is required (see e.g. \citealt{herrero98,herrero03}). We therefore assume that the flux contained in the scaled PSF (i.e. the PSF star scaled to the peak of the galaxy emission at different percentages; e.g. \citealt{Bernete2015,Bernete16}; GB19 and references therein) corresponds to the unresolved component and is practically uncontaminated by star formation.
In the case of type 1 AGNs, the NIR emission is mainly produced by very hot dust and the direct emission from the accretion disc of the AGN (see e.g. \citealt{caballero16}, GB19, \citealt{Landt19} and references therein). To quantify the contribution from the accretion disc to the nuclear NIR emission, we followed the same procedure as described in \citet{caballero16} using a semi-empirical model consisting of a template for the accretion disc and two blackbodies to fit the optical and NIR emission of each galaxy individually (see GB19). In GB19 we found that the accretion disc contribution to the nuclear IR SEDs ($\sim$0.4 arcsec) of Sy1s was, on average, 46$\pm$28, 23$\pm$13, and 11$\pm$5\% in the J, H, and K bands, respectively. Therefore, we subtracted the accretion disc component in the NIR range of each source prior to fitting the nuclear IR SEDs with the various torus models (see GB19 for further details).
\begin{table*}
\begin{center}
\caption{Summary of models producing the best fit for each galaxy.}
\begin{tabular}{lll}
\hline \hline
Object & Best model--HR & Best model--LR\\
\hline
\multicolumn{3}{c}{Sy1 galaxies}\\
MCG-06-30-015 & H17 & H17\\
NGC3227 & H17 (H17D) & H17 (H17D)\\
NGC3783 & H17 & H17\\
NGC4051 & \dots & \dots\\
NGC4151 & H17 & H17\\
NGC6814 & H17 & H17\\
NGC7213 & H17D & H17D\\
UGC6728 & H17/H17D (H10) & H17/H17D\\
\vspace{0.01cm}\\
\multicolumn{3}{c}{Sy1.8/1.9 galaxies}\\
NGC1365 & H17 & H17\\
NGC2992 & H17 & H17\\
NGC4138 & H17/H17D (H10) & H17/H17D\\
NGC4395 & \dots & \dots\\
NGC5506 & H17 & H17\\
NGC7314 & H17 & H17\\
\vspace{0.01cm}\\
\multicolumn{3}{c}{Sy2 galaxies}\\
ESO005-G004 & \dots & \dots \\
MCG-05-23-016 & H17D (S16/H10/H17/N08) & H17D/H10 (H17) \\
NGC2110 & H17D (H17/H10/N08) & H17D (H17/H10)\\
NGC3081 & S16 (H17D/F06/H10/H17/N08) & H17D/H10/S16/N08 (H17)\\
NGC4388 & H10 & H10\\
NGC4945 & N08 (F06/H10/S16) & N08/S16 (H10/F06)\\
NGC5128 & H10 (F06/H17/S16/H17D) & H10/H17D/H17/S16 (F06)\\
NGC6300 & \dots & \dots \\
NGC7172 & \dots & \dots \\
NGC7582 & F06 (H17/S16/H17D) & F06/S16 (H17D/H17/H10)\\
\hline \hline
\end{tabular}
\tablefoot{The best fit for each galaxy is selected according to the Akaike information criterion ($\epsilon$<0.01; see Section \ref{comparison}). Comparably good fits ($\rm{(\chi^2_{red}-\chi^2_{red,min})<0.5}$) are shown within parentheses. Objects without an assigned model cannot be reproduced by any of the models.}
\label{tab:summaryfit}
\end{center}
\end{table*}
\subsection{SED fitting procedure}
\label{sedfittingsect}
Using the torus models described in Section \ref{sec:models} and XSPEC \citep{Arnaud96}, which is a command-driven and interactive spectral-fitting program within the HEASOFT\footnote{https://heasarc.gsfc.nasa.gov} software, we fitted all the nuclear NIR-to-MIR SEDs of our sample of Seyfert galaxies. XSPEC provides an easy way to incorporate new models using additive tables\footnote{\citet{Gonzalez-Martin19A} showed how to create an XSPEC additive table for each of the models employed.} together with a wide range of tools to perform spectral fittings to the data.
To construct high angular resolution NIR-to-MIR SEDs for the whole sample we compiled the highest angular resolution IR ($\sim$1-30~$\mu$m) nuclear fluxes available from the literature. The published MIR photometry and N-band spectroscopy (7.5--13~$\mu$m) used in this work were obtained with 8-10 m-class ground-based telescopes and different instruments (e.g. Gran Telescopio CANARIAS/CanariCam, Very Large Telescope/VISIR, Gemini/T-ReCS and MICHELLE). The nuclear NIR fluxes are from both ground- and space-based (i.e. Hubble Space Telescope; \emph{HST}) data (see Table 2 of GB19). In this work, we used the nuclear IR SEDs as in GB19 (see e.g. Fig. \ref{example_fit} and Appendix \ref{nuclear_fits}). We converted the N-band spectra and IR photometric data into XSPEC format using the {\sc flx2xsp} task within HEASOFT.
\begin{figure}
\centering
\includegraphics[width=1.\columnwidth]{Figures/NGC3227_Hoenig17_fit.png}
\caption{Example of the nuclear IR SED of NGC\,3227 fitted with the clumpy disc$+$wind H17 models (top panel) and its residuals (bottom panel). The grey diamonds correspond to the high angular resolution photometric points. The black arrows represent low angular resolution data, which are treated as upper limits. Black crosses correspond to the high angular resolution N-band spectrum. The solid blue line is the best-fitted model.}
\label{example_fit}
\end{figure}
We masked regions containing narrow spectral features, namely the 11.3 $\rm{\mu m}$ feature attributed to polycyclic aromatic hydrocarbon molecules (PAHs) and the [S\,IV]$\lambda$10.5$\rm{\mu m}$ and [Ne\,II]$\lambda$12.8$\rm{\mu m}$ emission lines, in order to fit only the IR continuum. Note that other weak emission lines are not masked, since they do not affect the fit, and that the other PAH emission bands are relatively weak in this sample. Previous studies also showed the importance of including an IR extinction law when fitting the IR SEDs of Sy galaxies (e.g. \citealt{Ramos11b}; hereafter RA11; \citealt{Herrero11}; hereafter AH11; \citealt{Ramos14} \& \citealt{Bernete19}; hereafter GB19). This is especially important for sources with very deep silicate features, which generally show prominent dust lanes and/or are hosted in highly inclined galaxies (e.g. AH11, \citealt{gonzalez-martin13}). To account for this, we use the IR extinction curve of \citet{Pei92}, which is already included as a multiplicative component within XSPEC.
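For reference, the masking of the narrow features can be sketched with a simple per-window filter. The window edges below are illustrative placeholders, not the exact ranges used in our fits.

```python
def mask_features(wave_um, flux, windows):
    """Remove data points falling inside any of the given wavelength windows."""
    keep = [not any(lo <= w <= hi for lo, hi in windows) for w in wave_um]
    w_out = [w for w, k in zip(wave_um, keep) if k]
    f_out = [f for f, k in zip(flux, keep) if k]
    return w_out, f_out

# Illustrative masking windows (um) around the [S IV] 10.5 um line,
# the 11.3 um PAH band and the [Ne II] 12.8 um line; the exact
# widths are chosen per spectrum.
windows = [(10.4, 10.6), (11.0, 11.6), (12.7, 12.9)]
wave = [8.0 + 0.01 * i for i in range(501)]   # toy 8-13 um grid
flux = [1.0] * len(wave)
w_clean, f_clean = mask_features(wave, flux, windows)
print(len(wave), len(w_clean))
```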
High spatial resolution N-band spectroscopy provides information on the silicate feature around 9.7$\rm{\mu m}$, which is important for constraining the model parameters (see e.g. \citealt{Martinez-Paredes20}). However, including both spectral and photometric data in the fit is not straightforward from a statistical point of view. The $\chi^2$ statistic takes every data point into account equally, so the best fit would tend to match the N-band spectral region over the photometric points. To avoid this, we perform the spectral fitting in two steps. We first fit the photometric data and a low spectral resolution N-band spectrum (low-resolution, LR, fit). The LR spectra are computed to match the average bandpass of the photometric data. Then, we compute the 3$\rm{\sigma}$ errors of each parameter, which we use as priors for the SED fitting using the full resolution N-band spectra (high-resolution, HR, fit). Note that the same methodology was used in \citet{Martinez-Paredes21} for a sample of QSOs.
We compute the $\chi^2$ statistics for both the LR and HR fits. We consider a fit to be acceptable if its reduced $\rm{\chi^2}$ (for both HR and LR; see e.g. GM19A,19B) is $\rm{\chi^2_{red}<2}$. Among the acceptable fits, the best fit is the one providing the minimum $\rm{\chi^2_{red}}$, and we consider two fits equally good if $\rm{(\chi^2_{red} - \chi^2_{red,min})<0.5}$. In Appendix \ref{nuclear_fits}, we present the results of the nuclear IR SED fitting with the various torus models (see Section \ref{sec:models}). Tables\,\ref{appendixfitSy1}, \ref{appendixfitSyInt}, and \ref{appendixfitSy2} of Appendix \ref{nuclear_fits} show the results for Sy1, Sy1.8/1.9, and Sy2 galaxies, respectively.
To further evaluate the goodness of the fits we use two methods. First, we use a qualitative method based on a visual inspection of the residuals. To do so, we construct the average residuals using the entire N-band spectra and the IR photometry, but excluding the upper limits (i.e. lower angular resolution data). For consistency, we use the same wavelength grid for all the photometry (1.6, 2.2, 5.5, 18.0, 25.0 and 30\,$\mu$m), employing a quadratic interpolation of nearby values for each galaxy. Secondly, we use the Akaike information criterion (AIC). This method allows us to establish which of the comparably good fits ($\rm{(\chi^2_{red}-\chi^2_{red,min})<0.5}$) is actually the best description of the data. To do so, we compare the Akaike weights \citep{Emmanoulopoulos16} of the models providing acceptable fits ($\epsilon$=W$_{\rm model1}$/W$_{\rm model2}$). We consider a fit to be the best one when $\epsilon$<0.01, which means that it is, at least, 100 times more probable than the other good fits (see e.g. \citealt{Martinez-Paredes21}). Finally, to investigate whether the goodness of the fit depends on the Seyfert type and other AGN properties, we use Fisher's exact test\footnote{Fisher's exact test is valid for all sample sizes, but it is commonly employed when sample sizes are small.}, which tests for independence between two categorical variables (see Section \ref{best_models}). In Table \ref{tab:summaryfit} we present the best (and comparably good) fits to the LR and HR SEDs, which are practically the same. Therefore, in the following we only discuss the HR results.
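The Akaike-weight comparison can be sketched as follows, assuming the common AIC\,=\,$\chi^2$\,+\,2k form for a model with k free parameters; the $\chi^2$ values below are invented for illustration.

```python
import math

def akaike_weights(chi2, n_free):
    """Akaike weights from AIC = chi^2 + 2k (a common chi^2-based form;
    small-sample corrections also exist)."""
    aic = [c + 2.0 * k for c, k in zip(chi2, n_free)]
    a_min = min(aic)
    raw = [math.exp(-0.5 * (a - a_min)) for a in aic]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical chi^2 values for two models with the same number of
# free parameters:
w = akaike_weights(chi2=[100.0, 112.0], n_free=[6, 6])
eps = w[1] / w[0]     # Akaike-weight ratio of the two fits
print(eps < 0.01)     # model 1 is the preferred fit under this criterion
```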
\section{Comparison of the various torus models}
\label{comparison}
\begin{figure*}
\centering
\par{
\includegraphics[width=17.5cm]{Figures/Fritz06_residuals_multiplot-eps-converted-to.pdf}
\includegraphics[width=17.5cm]{Figures/Nenkova_residuals_multiplot-eps-converted-to.pdf}
\includegraphics[width=17.5cm]{Figures/Hoenig10_residuals_multiplot-eps-converted-to.pdf}
\includegraphics[width=17.5cm]{Figures/Stalev16_residuals_multiplot-eps-converted-to.pdf}
\includegraphics[width=17.5cm]{Figures/Hoenig17_residuals_multiplot-eps-converted-to.pdf}
\includegraphics[width=17.5cm]{Figures/Hoenig17D_residuals_multiplot-eps-converted-to.pdf}
\par}
\caption{Average residuals (units as in Fig. \ref{example_fit}) of the spectral fitting for each torus model used in this work. Blue, green, red and black stars (and solid lines) correspond to Sy1, Sy1.8/1.9, Sy2 and the full sample, respectively. The regions masked in the fitting process are highlighted with beige vertical bands.}
\label{Residuals_histogram}
\end{figure*}
\subsection{Best model fits}
\label{best_models}
\begin{figure*}
\centering
\par{
\includegraphics[width=5.8cm]{Figures/variable_best_lum-eps-converted-to.pdf}
\includegraphics[width=5.8cm]{Figures/variable_best_edd-eps-converted-to.pdf}
\includegraphics[width=5.8cm]{Figures/variable_best_nh-eps-converted-to.pdf}
\par}
\caption{Best fit distributions of the BCS$_{40}$ sample per bolometric luminosity (left panel), Eddington ratio (central panel) and hydrogen column density (right panel) bin. The grey hatched and filled histograms show the distributions of unfitted and fitted sources, respectively. The blue and red filled histograms correspond to sources best fitted by clumpy disc$+$wind H17 models and torus models, respectively. The orange filled histograms are sources equally well fitted by clumpy disc$+$wind H17 models and torus models.}
\label{fit_histogram}
\end{figure*}
\subsubsection{Average fitting residuals}
\label{average_residual}
To determine which models are best suited to reproduce the entire nuclear NIR-to-MIR SED of the various Sy groups, we first use a qualitative analysis of the average residuals of the spectral fitting. Fig.\,\ref{Residuals_histogram} presents these average residuals for each of the models considered in this work (see Section\,\ref{sec:models}). The average residuals are computed by grouping the various Seyfert types: from left to right, the panels of Fig.\,\ref{Residuals_histogram} show Sy1, Sy1.8/1.9, Sy2 and the full sample. From visual inspection of Fig.\,\ref{Residuals_histogram}, the average residuals of Sy1/1.8/1.9 galaxies indicate a clear excess in the NIR emission for the smooth, clumpy and two-phase torus models (i.e. torus models). This NIR excess was first reported by \citet{Neugebauer79} using a sample of quasars, and confirmed by \citet{Edelson86} in Seyfert galaxies. The clumpy disc H17D models generally produce slightly smaller residuals in the IR emission of Sy1 galaxies than the other torus models used in this work. However, the models including the polar dust component produce the flattest residuals in the NIR for the entire sample (see Fig.\,\ref{Residuals_histogram}).
The N-band spectra are equally well fitted by most of the models, except for the clumpy N08 torus models and two-phase S16 torus models in the 8--10\,$\mu$m range. However, this can be partly due to contamination from the 7.7\,$\mu$m PAH band. On the other hand, the clumpy H10 torus models, clumpy disc$+$wind H17 models and clumpy disc H17D models show flatter average fitting residuals for the N-band spectra than the other models. Furthermore, the 18--30\,$\mu$m range is generally well reproduced by the various torus models, with the only exception of the smooth F06 torus models, which slightly over-predict the emission above 20\,$\mu$m. Overall, the clumpy disc$+$wind H17 models produce the best fits in the entire NIR-to-MIR range for the Sy1 galaxies in our sample.
\begin{figure*}
\centering
\renewcommand{\arraystretch}{0.01}
\begin{tabular}{c c c}
\textbf{{\hspace{0.9cm} smooth F06 torus models}} & \textbf{{\hspace{0.9cm} clumpy N08 torus models}} & \textbf{{ \hspace{0.9cm} clumpy H10 torus models}}\\
\vspace{0.5cm}
\includegraphics[width=5.8cm]{Figures/fritz06/90minus_i_fritz06-eps-converted-to.pdf} & \includegraphics[width=5.8cm]{Figures/nenkova08/i-eps-converted-to.pdf} &
\includegraphics[width=5.8cm]{Figures/hoenig10/i-eps-converted-to.pdf}\\
\textbf{{\hspace{0.8cm} two-phase S16 torus models}} & \textbf{{\hspace{0.8cm} clumpy disc$+$wind H17 models}} \\
\includegraphics[width=5.8cm]{Figures/stalev16/i-eps-converted-to.pdf} & \includegraphics[width=5.8cm]{Figures/hoenig17/i-eps-converted-to.pdf} \\
\end{tabular}
\caption{Comparison between the torus/disc inclination combined probability distributions for different models considered here. Blue dotted, green dashed, red solid and black solid lines represent the parameter distributions of Sy1, Sy1.8/1.9, Sy2 and the entire sample, respectively. Note that 90-i$_{\rm F06}$=i$_{\rm N08}$=i$_{\rm H10}$=i$_{\rm S16}$=i$_{\rm H17}$.}
\label{i_distribution}
\end{figure*}
\begin{figure*}
\centering
\renewcommand{\arraystretch}{0.01}
\begin{tabular}{c c c}
\textbf{{\hspace{0.9cm} smooth F06 torus models}} & \textbf{{\hspace{0.9cm} clumpy N08 torus models}} & \textbf{{ \hspace{0.9cm} clumpy H10 torus models}}\\
\vspace{0.5cm}
\includegraphics[width=5.8cm]{Figures/fritz06/sigma-eps-converted-to.pdf} & \includegraphics[width=5.8cm]{Figures/nenkova08/sigma-eps-converted-to.pdf} &
\includegraphics[width=5.8cm]{Figures/hoenig10/sigma_hoening10-eps-converted-to.pdf}\\
\textbf{{\hspace{0.8cm} two-phase S16 torus models}} & \textbf{{\hspace{0.8cm} clumpy disc$+$wind H17 models}} \\
\includegraphics[width=5.8cm]{Figures/stalev16/sigma-eps-converted-to.pdf} & \includegraphics[width=5.8cm]{Figures/hoenig17/h-eps-converted-to.pdf} \\
\end{tabular}
\caption{Same as Fig. \ref{i_distribution} but for the torus/disc angular width. Note that $\sigma_{\rm H10}$=90-$\theta_{\rm H10}$. In the case of the clumpy disc$+$wind H17 models, the h parameter is the scale height of the dusty disc.}
\label{sigma_distribution}
\end{figure*}
\subsubsection{Quantitative methods}
\label{best_models_agn_prop}
In general, all models provide acceptable fits ($\rm{\chi^2_{red}<2}$) to the majority (19/24) of the nuclear IR SEDs (see Appendix\,\ref{nuclear_fits}). Using the AIC method (see Section\,\ref{sedfittingsect}), we find that the fraction of best fits provided by clumpy disc$+$wind and torus models is similar, 37.5 and 33.3\% (i.e. 9/24 and 8/24 sources), respectively. Furthermore, 2/24 galaxies (8.3\%) are equally fitted by clumpy disc$+$wind or torus models and 5/24 are not well fitted by any of the models used in this work (see Table\,\ref{tab:summaryfit}). According to the best fits, the IR SEDs of Sy1 (and Sy1.8/1.9) galaxies are best reproduced by clumpy disc$+$wind H17 models, whereas torus models are best suited to Sy2 galaxies (see Table\,\ref{statistics}). Using Fisher's exact test we find that these differences are statistically significant.
\begin{table}
\begin{center}
\caption{Summary of the Fisher's exact test results.}
\begin{tabular}{lcc}
\hline \hline
Test & Samples & p-value\\
(1)&(2)&(3)\\
\hline
{\bf{Disc$+$Wind best fits}} & {\bf{Sy1 vs. Sy2}} & {\bf{\textcolor{blue}{$<$0.05}}}\\
{\bf{Disc$+$Wind best fits}} & {\bf{Sy1/1.8/1.9 vs. Sy2}} & {\bf{\textcolor{blue}{$<$0.05}}}\\
Disc$+$Wind acceptable fits & Sy1 vs. Sy2 & 0.37\\
Disc$+$Wind acceptable fits & Sy1/1.8/1.9 vs. Sy2 & 0.31\\
{\bf{Torus best fits}} & {\bf{Sy1 vs. Sy2}} & {\bf{\textcolor{blue}{$<$0.05}}}\\
{\bf{Torus best fits}} & {\bf{Sy1/1.8/1.9 vs. Sy2}} & {\bf{\textcolor{blue}{$<$0.05}}}\\
Torus acceptable fits & Sy1 vs. Sy2 & 0.34\\
Torus acceptable fits & Sy1/1.8/1.9 vs. Sy2 & 0.12\\
Unfitted sources & Sy1 vs. Sy2 & 0.59\\
Unfitted sources & Sy1/1.8/1.9 vs. Sy2 & 1.00\\
\hline
\end{tabular}
\tablefoot{In bold we indicate distributions that can be considered statistically different (i.e. p-value<0.05).}
\label{statistics}
\end{center}
\end{table}
The difference in the results for Sy1 and Sy2 galaxies confirms the trend first reported by GM19B using lower spatial resolution \emph{Spitzer}/IRS MIR spectra of a sample of AGN. In this work, however, we confirm it using, for the first time, an ultra-hard X-ray selected sample of Seyferts and high spatial resolution NIR-to-MIR data that allow us to better isolate the nuclear emission. Moreover, we do not find a clear trend between the models producing the best fits and the AGN luminosity or Eddington ratio (see left and central panels of Fig.\,\ref{fit_histogram}). However, the right panel of Fig.\,\ref{fit_histogram} shows that the best-fitting model does depend on the line-of-sight hydrogen column density.
In particular, clumpy disc$+$wind H17 models better reproduce the IR emission of AGN with relatively low hydrogen column densities (median value of log (N$_{\rm H}^{\rm X-ray}$\,cm$^{-2}$)=21.0$\pm$1.0; i.e. Sy1 and Sy1.8/1.9 galaxies) than torus models. On the other hand, torus models better reproduce the SEDs of AGN with high X-ray hydrogen column densities (median value of log (N$_{\rm H}^{\rm X-ray}$\,cm$^{-2}$)=23.5$\pm$0.8; i.e. Sy2s). This is in good agreement with theoretical predictions reported by \citet{Venanzi20}, where the authors found that for nuclear column densities of log(N$_{\rm H}$\,cm$^{-2}$)$<$23 the IR
radiation pressure becomes effective and polar outflows start to emerge (see also AH21).
\subsection{Torus model parameters}
\label{torus_parameter}
In this section we investigate the main differences between the derived torus model parameters for the BCS$_{40}$ sample using, for the first time, high angular resolution data and various models (see Section \ref{sec:models}). Note that we find similar fits using the clumpy disc H17D and clumpy H10 torus models. Therefore, in the following, we will not discuss the individual parameters of the clumpy disc H17D models.
A general trend is that, even for acceptable fits ($\rm{\chi^2_{red}<2}$), the model parameters are not well constrained (see Tables \ref{appendixfitSy1}, \ref{appendixfitSyInt}, and \ref{appendixfitSy2}). This result is independent of the torus model, Seyfert type, X-ray absorption along the line of sight or AGN luminosity. Nevertheless, rather than looking at the individual fits (see Appendix\,\ref{nuclear_fits}), we focus on the global statistics of the torus model parameters. For this purpose, we derived the combined probability distributions by concatenating the individual arrays of the parameter probability distributions for all objects in each subgroup (see e.g. GB19). To quantify the differences between the combined probability distributions, we use the Kullback-Leibler divergence (KLD; \citealt{Kullback51}). The KLD takes into account the overall shape of the compared distributions and is always non-negative: it is zero for two identical distributions, and larger values indicate greater differences between the distributions. RA11 suggested that for values larger than 1 (boldface in Tables \ref{fritz_tab_kdl}, \ref{nenkova_tab_kdl}, \ref{hoenig10_tab_kdl}, \ref{hoenig17_tab_kdl}, and \ref{stalev_tab_kdl} in Appendix \ref{Combined_distribution}), two combined distributions may be considered significantly different.
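For illustration, the KLD comparison between two combined parameter distributions can be sketched as follows. This is a minimal sketch assuming simple histogram estimates on a common grid; the toy Gaussian distributions and the threshold check against the RA11 criterion are purely illustrative, not the actual pipeline used here.

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(P || Q) between two
    histogram estimates evaluated on the same parameter grid."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    # eps guards against empty bins, where the KLD is undefined
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy combined distributions, e.g. for the torus angular width (deg)
rng = np.random.default_rng(0)
grid = np.linspace(0, 90, 46)
sy1 = np.histogram(rng.normal(30, 8, 10_000), bins=grid)[0]
sy2 = np.histogram(rng.normal(50, 8, 10_000), bins=grid)[0]
print(kld(sy1, sy2) > 1.0)  # True: well-separated distributions exceed the RA11 threshold
```

Note that the KLD is not symmetric in its arguments; for two identical distributions it vanishes regardless of the ordering.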
Only a few model parameters can be directly compared between the various torus models, for example, the torus/disc inclination angle and its width. According to the KLD test, the differences in the torus/disc inclination angle between Sy subgroups are significant for the smooth F06 torus models, the clumpy H10 torus models and the clumpy disc$+$wind H17 models (see Fig. \ref{i_distribution}). In general, more edge-on values of the torus/disc inclination are needed for Sy2s than for Sy1s. In particular, the clumpy disc$+$wind H17 model results show the following trend for the disc inclination: i$_{\rm Sy1}$ $<$i$_{\rm Sy1.8/1.9}$ $<$i$_{\rm Sy2}$. The differences in the angular width of the torus/disc between Sy subgroups are also significant for the various models (see Fig.\,\ref{sigma_distribution} and Appendix \ref{Combined_distribution}). In general, the angular widths of the torus of Sy2 galaxies are larger than those of Sy1s (see Fig. \ref{sigma_distribution}). The only exception is found for the two-phase S16 torus models, which require a larger angular width of the torus for Sy1/1.8/1.9 than for Sy2 galaxies. In addition, for the clumpy disc$+$wind H17 models, there are no statistically significant differences between the angular width of Sy1 and Sy2 discs. This is likely related to the fact that clumpy disc$+$wind H17 models have relatively ``thin'' discs and, thus, it would be difficult to find differences between Sy1 and Sy2 discs. In summary, our results indicate that Sy1 galaxies generally have tori with smaller angular width and more face-on values of the torus inclination than those of type 2 Seyferts.
\begin{figure*}
\centering
\par{
\includegraphics[width=8.8cm]{Figures/ct/fcov_parameter_space_models.pdf}
\includegraphics[width=8.8cm]{Figures/ct/fcov_parameter_space_data.pdf}
\par}
\caption{Comparison of the covering factor parameter space. Left panel: combined probability distributions for all the models used in this work. Right panel: combined probability distributions derived for the entire Sy sample using each model.}
\label{covering_factor_parameter_space}
\end{figure*}
\label{covering_factor}
\begin{figure*}
\centering
\renewcommand{\arraystretch}{2.0}
\begin{tabular}{c c c}
\textbf{{\hspace{0.9cm} smooth F06 torus models}} & \textbf{{\hspace{0.9cm} clumpy N08 torus models}} & \textbf{{ \hspace{0.9cm} clumpy H10 torus models}}\\
\includegraphics[width=5.8cm]{Figures/ct/fcov_fritz06-eps-converted-to.pdf} & \includegraphics[width=5.8cm]{Figures/ct/fcov_nenkova08-eps-converted-to.pdf} &
\includegraphics[width=5.8cm]{Figures/ct/fcov_hoenig10-eps-converted-to.pdf}\\
\textbf{{\hspace{0.8cm} two-phase S16 torus models}} & \textbf{{\hspace{0.8cm} clumpy disc$+$wind H17 models}} \\
\includegraphics[width=5.8cm]{Figures/ct/fcov_stalev16-eps-converted-to.pdf} &
\includegraphics[width=5.8cm]{Figures/ct/fcov_hoenig17-eps-converted-to.pdf} \\
\end{tabular}
\caption{Same as Fig. \ref{i_distribution} but for the covering factor.}
\label{covering_factor_distributions}
\end{figure*}
\subsection{Derived torus covering factor, size and mass}
\label{AGN_proper}
\subsubsection{The Covering Factor of the nuclear obscuring material}
\label{ct_sec}
The nuclear obscuration is strongly dependent on the covering factor (C$_{\rm T}$) which is defined as the fraction of sky covered by the obscuring material. The covering factor is one of the main elements regulating the intensity of the reprocessed AGN radiation (e.g. RA11, \citealt{Ramos17}).
For the various models, C$_{\rm T}$ can be calculated as:
\begin{equation}
\begin{aligned}
C_{\rm T} = 1-\int_{0}^{\pi/2} e^{-\tau_{\nu}\rm(\alpha)}~\cos\alpha \,d\alpha
\end{aligned}
\end{equation}
\noindent where $\tau_{\nu}\rm(\alpha)$ is the line-of-sight optical depth, which depends on the azimuthal angle ($\alpha$). The line-of-sight optical depth is computed from the distribution of clouds for clumpy torus models, and from the equatorial opacity and the density distribution for smooth torus models (see GM19B and references therein).
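Numerically, the integral above can be evaluated for any assumed angular profile of the optical depth. The sketch below uses an illustrative Gaussian concentration of dust towards the equatorial plane; the profile and the $\tau_{\rm eq}$ and $\sigma$ values are assumptions for demonstration, not any specific model's prescription.

```python
import numpy as np

def covering_factor(tau_of_alpha, n=10_000):
    """C_T = 1 - integral_0^{pi/2} exp(-tau(alpha)) cos(alpha) d(alpha)."""
    alpha = np.linspace(0.0, np.pi / 2, n)
    f = np.exp(-tau_of_alpha(alpha)) * np.cos(alpha)
    # trapezoidal rule over the alpha grid
    return 1.0 - float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(alpha)))

def tau_gauss(alpha, tau_eq=100.0, sigma=np.radians(40.0)):
    """Illustrative profile: opaque equator (alpha = pi/2), clearer poles."""
    return tau_eq * np.exp(-(((np.pi / 2 - alpha) / sigma) ** 2))

print(f"C_T = {covering_factor(tau_gauss):.2f}")
```

The limiting cases behave as expected: a fully transparent structure ($\tau=0$ everywhere) gives C$_{\rm T}$=0, while a fully opaque one gives C$_{\rm T}$=1.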
Since the covering factor is defined as the fraction of the sky at the AGN centre covered by obscuring material, it strongly depends on the torus dust distribution and geometry assumed by the model (see Fig.\,\ref{covering_factor_parameter_space}; also GB19B). To further investigate the differences between the various models, we produced the C$_{\rm T}$ combined probability distributions of the models sampling the entire parameter space of each model. For example, the clumpy disc$+$wind H17 model consists of a clumpy dusty disc plus a hollow dusty cone, which naturally produces lower values of the covering factor (C$_{\rm T}<$0.6; see yellow distribution in the left panel of Fig.\,\ref{covering_factor_parameter_space}) than a dusty torus with a large range of angular sizes. On the other hand, the two-phase S16 models provide large values of the covering factor (C$_{\rm T}>$0.6; see red dotted distribution in the left panel of Fig.\,\ref{covering_factor_parameter_space}). Therefore, caution must be applied when comparing covering factors between various torus models due to the different ranges of parameter space.
In order to compare the C$_{\rm T}$ probability distributions of the models with those of the observations, we also derived the C$_{\rm T}$ combined probability distributions (for each model) by concatenating the individual C$_{\rm T}$ probability distributions of the various galaxies. Indeed, we find that the C$_{\rm T}$ combined probability distributions of the models and those derived for the entire sample are similar (Fig.\,\ref{covering_factor_parameter_space}). For instance, small values of C$_{\rm T}$ are derived for the fitted data when using clumpy disc$+$wind H17 models, whereas clumpy H10, clumpy N08 and two-phase S16 models require larger C$_{\rm T}$. However, the C$_{\rm T}$ distributions derived for clumpy disc$+$wind H17 models tend to have smaller values than the parent distribution. The same applies to the smooth F06 models, whereas the derived C$_{\rm T}$ distributions for clumpy N08 models favour intermediate C$_{\rm T}$ values compared with those of the models, which peak at larger values. Considering the different C$_{\rm T}$ ranges covered by the various models, in the following we have consistently used the same models when comparing covering factors of the Sy groups.
Fig.\,\ref{covering_factor_distributions} shows that generally Sy1 have smaller median values of the covering factor than Sy2. According to the KLD test, there are statistically significant differences in the covering factor of Sy1 and Sy2 for the smooth F06 torus models and the clumpy disc$+$wind H17 models (see Appendix\,\ref{Combined_distribution}). There is a similar trend, although less significant according to the KLD test, for the clumpy N08 torus models. Earlier works using clumpy N08 torus models showed statistically significant differences between the covering factors of Sy1 and Sy2 galaxies (e.g. RA11, AH11, \citealt{Ichikawa15} and GB19). The lower significance found here might be due to the fact that we are not using priors for the angular width of the torus (based on [O\,III] data), unlike previous works.
Finally, we find that the covering factor remains broadly constant within the errors for the majority of the models throughout the luminosity range: log(L$_{\rm bol}$\,erg~s$^{-1}$)$\sim$41.8--45.9. The same applies when using Eddington ratios ($\lambda_{\rm Edd}$: -3.40 to -0.26) instead of the bolometric luminosity.
\subsubsection{Torus/disc size and mass}
\label{size}
Using the radial extent of the torus/disc (Y=R$_{\rm o}$/R$_{\rm d}$) and the dust sublimation radius (R$_{\rm d}$), we can derive the physical radius of the torus/disc (R$_{\rm o}$). The dust sublimation radius also depends on the dust sublimation temperature and the bolometric luminosity. Note that the clumpy H10 torus models and the clumpy disc$+$wind H17 models fix the Y parameter to a large value for all the SEDs (Y$=$150 and Y$=$500, respectively).
The radius distributions of Sy2 for the smooth F06 torus models and the two-phase S16 models show a tail towards larger tori in comparison with those of Sy1 (see Fig.\,\ref{rout_distributions}). In particular, in the case of the smooth F06 torus models, we derive median values of the torus size for Sy2 galaxies that are $\sim$3-5 times larger than those of Sy1 and Sy1.8/1.9. Note that using the smooth F06 torus models the radius probability distribution for Sy2 galaxies reaches maximum values of $\sim$30\,pc. However, the clumpy N08 torus models do not show statistically significant differences between Sy1 and Sy2 radii. In general, we find relatively compact (1-15\,pc) torus radii for all the Seyfert galaxies in our sample (see Fig.\,\ref{rout_distributions}). Note that we use the term compact torus for those with sizes below the largest resolution element in the MIR for our sample (i.e. $<$50\,pc).
\begin{figure*}
\centering
\begin{tabular}{c c c}
\textbf{{\hspace{0.9cm} smooth F06 torus models}} & \textbf{{\hspace{0.9cm} clumpy N08 torus models}} & \textbf{{\hspace{0.8cm} two-phase S16 torus models}}\\
\includegraphics[width=5.8cm]{Figures/size/rout_fritz06-eps-converted-to.pdf} & \includegraphics[width=5.8cm]{Figures/size/rout_nenkova08-eps-converted-to.pdf} &
\includegraphics[width=5.8cm]{Figures/size/rout_stalev16-eps-converted-to.pdf} \\
\end{tabular}
\caption{Same as Fig. \ref{i_distribution} but for the torus size (R$_{\rm out}$).}
\label{rout_distributions}
\end{figure*}
\begin{figure*}
\centering
\renewcommand{\arraystretch}{2.0}
\begin{tabular}{c c c}
\textbf{{\hspace{0.9cm} smooth F06 torus models}} & \textbf{{\hspace{0.9cm} clumpy N08 torus models}} & \textbf{{ \hspace{0.9cm} clumpy H10 torus models}}\\
\includegraphics[width=5.8cm]{Figures/mdust/dustmass_fritz06-eps-converted-to.pdf} & \includegraphics[width=5.8cm]{Figures/mdust/dustmass_nenkova08-eps-converted-to.pdf} &
\includegraphics[width=5.8cm]{Figures/mdust/dustmass_hoenig10-eps-converted-to.pdf}\\
\textbf{{\hspace{0.8cm} two-phase S16 torus models}} & \textbf{{\hspace{0.8cm} clumpy disc$+$wind H17 models}} \\
\includegraphics[width=5.8cm]{Figures/mdust/dustmass_stalev16-eps-converted-to.pdf}&
\includegraphics[width=5.8cm]{Figures/mdust/dustmass_hoenig17-eps-converted-to.pdf} \\
\end{tabular}
\caption{Same as Fig. \ref{i_distribution} but for the torus gas mass.}
\label{dustmass_distributions}
\end{figure*}
Using the Galactic dust-to-gas ratio \citep{Bohlin78}, we can also estimate the torus gas mass associated with the fitted nuclear dusty structure by integrating the density distribution function for each model (see Appendix \ref{equations}; see also GM19B and references therein). We computed the torus mass within the fitted dusty structure volume and, thus, it may not be representative of the whole torus gas mass distribution, which is traced by the cold gas (see e.g. \citealt{Honig19} and Section \ref{derived_high_angular} for further discussion). We find slightly larger values of the torus/disc gas mass for Sy2 than for Sy1 galaxies, but the differences are generally within the errors (see Fig.\,\ref{dustmass_distributions}). The total gas masses of the tori are in the range log(M$_{\rm torus}$)$\sim$2-6\,M$_\sun$ and the majority of the models used in this work provide similar values of the total gas mass within the errors (median values of log(M$_{\rm torus}$)$\sim$4\,M$_\sun$). The exceptions are the smooth F06 torus models (log(M$_{\rm torus}^{\rm F06}$)=5.6$\pm$1.9\,M$_\sun$) and the disc$+$wind H17 models (log(M$_{\rm torus}^{\rm H17}$)=2.6$\pm$0.8\,M$_\sun$), for which we find larger and smaller gas masses, respectively, than for the other torus models.
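The dust-to-gas conversion used above can be sketched as follows. The gas-to-dust mass ratio of $\sim$100 is an assumed round Galactic value (the exact \citealt{Bohlin78} calibration differs in detail), and the example dust mass is invented for illustration.

```python
import math

GAS_TO_DUST = 100.0  # assumed round Galactic gas-to-dust mass ratio

def torus_gas_mass(m_dust_msun, gas_to_dust=GAS_TO_DUST):
    """Gas mass (Msun) implied by a fitted torus dust mass (Msun)."""
    return m_dust_msun * gas_to_dust

# A hypothetical fitted dust mass of 10^2 Msun implies log(M_gas) = 4,
# in line with the median log(M_torus) ~ 4 found for most models
print(round(math.log10(torus_gas_mass(1e2)), 1))  # -> 4.0
```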
\begin{figure*}
\centering
\begin{tabular}{c c}
\includegraphics[width=8.4cm, clip, trim=0 0 10 0]{Figures/Lagn_vs_y_all-eps-converted-to.pdf}&
\includegraphics[width=8.1cm, clip, trim=20 0 10 0]{Figures/Lagn_vs_size_all-eps-converted-to.pdf} \\
\end{tabular}
\caption{Luminosity dependence of the radial extent of the torus/disc (Y=R$_{\rm o}$/R$_{\rm d}$; left panels) and of the torus/disc radius (right panels). From top to bottom: results for the various models with a free Y parameter used in this work (see text for details). The first 3 bins correspond to the Sy galaxies of the BCS$_{40}$ sample. Note that we also include two additional bins of a sample of QSOs with higher bolometric luminosities from \citet{Martinez-Paredes21}.}
\label{rout_lum_distributions}
\end{figure*}
The derived dusty torus/disc sizes ($\sim$1-15\,pc) are similar to those found using MIR imaging and interferometric data (r$<10$~pc; see Section\,\ref{derived_high_angular} for further discussion). However, they are generally smaller than those observed in cold dust by ALMA ($\sim$42\,pc; \citealt{Garcia-Burillo21}), indicating that larger values of the radial extent of the torus/disc (Y) than those covered by the models are needed to match the torus sizes measured in ALMA sub-mm observations at Sy-like luminosities. Indeed, we found larger torus sizes ($\sim$1-34\,pc) for the clumpy disc$+$wind H17 models, which use a large value of the radial extent (fixed value of Y=500)\footnote{As expected, smaller torus sizes ($\sim$0.3-10.3\,pc) are found when using the clumpy H10 torus models (Y=150).}. Thus, we compare the fitted values of Y with the range covered by the models. The left panel of Fig.\,\ref{rout_lum_distributions} shows that the three models compared here do not favour the largest values of Y.
Finally, we also investigate the relation between the bolometric luminosity and the torus/disc size dividing our sample into several luminosity bins (see Fig.\,\ref{rout_lum_distributions}). In the first bin we include the three sources with log(L$_{\rm bol}$\,erg~s$^{-1}$)$<$42.75, while the rest of the sample was divided into two bins of equal logarithmic width (1~dex). Note that we also include data from QSOs (i.e. two additional bins from \citealt{Martinez-Paredes21}) to expand the range of luminosities beyond our original Sy sample. These authors used the same methodology as here to fit the high angular resolution NIR-to-MIR SEDs of a sample of type 1 QSOs with log(L$_{\rm bol}$\,erg~s$^{-1}$)$\sim$44.2-45.9. Therefore, these two bins do not include type 2 AGNs. All these models show the same trend throughout the entire luminosity range (log(L$_{\rm bol}$\,erg~s$^{-1}$)$\sim$41.8--45.9): the higher the luminosity the larger the size (see right panel of Fig.\,\ref{rout_lum_distributions}). The same applies when using the BH mass instead of the bolometric luminosity. However, we do not find a clear dependence of Y for higher luminosities (see left panel of Fig.\,\ref{rout_lum_distributions}).
On the other hand, the torus/disc size depends on the Y parameter and the dust sublimation radius ($\propto $L$_{\rm bol}^{1/2}$). Therefore, to further investigate the relationship of the torus size with the luminosity, we compare our results with the expected torus sizes at a given bolometric luminosity and Y parameter (dashed grey lines in the right panel of Fig.\,\ref{rout_lum_distributions}). Considering the almost constant Y values (within the errors) for each luminosity bin and model in the left panel of Fig.\,\ref{rout_lum_distributions}, the torus size--luminosity correlations might be caused, at least in part, by the dependence of the sublimation radius on the bolometric luminosity. Furthermore, we find that the derived torus/disc masses also depend on the bolometric luminosities, as expected, given the relation between torus size and luminosity.
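As a numerical check of this scaling, the sketch below combines a sublimation-radius parameterisation of the form R$_{\rm d}\propto$L$^{0.5}$T$^{-2.6}$ with R$_{\rm o}$=Y\,R$_{\rm d}$. The 0.4\,pc normalisation and the exponents are indicative of commonly used clumpy-model prescriptions, not values taken from our fits.

```python
def r_sub_pc(l_bol_erg_s, t_sub_k=1500.0):
    """Dust sublimation radius in pc; indicative parameterisation
    R_d ~ 0.4 (L/1e45)^0.5 (1500/T)^2.6 pc (coefficients assumed)."""
    return 0.4 * (l_bol_erg_s / 1e45) ** 0.5 * (1500.0 / t_sub_k) ** 2.6

def torus_radius_pc(l_bol_erg_s, y):
    """Physical torus/disc radius R_o = Y * R_d."""
    return y * r_sub_pc(l_bol_erg_s)

# At fixed Y, R_o grows by a factor sqrt(10) per dex in bolometric
# luminosity, purely through the sublimation-radius dependence
for logl in (42.0, 43.0, 44.0):
    print(f"log L = {logl}: R_o = {torus_radius_pc(10**logl, y=30):.2f} pc")
```

This makes explicit why a nearly constant fitted Y still yields a torus size--luminosity correlation: the L$^{1/2}$ dependence enters through R$_{\rm d}$ alone.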
\section{Discussion}
\label{discussion}
\subsection{The covering factor}
It has been suggested that the bolometric luminosity (e.g. \citealt{Lawrence91,Simpson05}) and the Eddington ratio (e.g. \citealt{Buchner17,Ricci17c}) may be the key parameters determining the covering factor. However, according to our results, we do not find a clear dependence of the torus model covering factor on the bolometric luminosity (or the Eddington ratio; see also GB19), although the ranges probed by our sample are relatively limited (log(L$_{\rm bol}$\,erg~s$^{-1}$)$\sim$41.8--45.9; $\lambda_{\rm Edd}$: -3.40 to -0.26). This lack of dependence was also reported by \citet{Mateos16,Mateos17,Netzer16,Stalevski16,Lani17,Ichikawa18}; GM19B and GB19. Regarding the covering factor, we find that Sy2 galaxies generally have larger values of the covering factor (and of the angular width of the torus) than Sy1s for the majority of the models used. This was first reported by RA11 (see also e.g. AH11, \citealt{Ichikawa15} and GB19), but using clumpy N08 torus models only.
Using high-spatial resolution NIR-to-MIR data of an ultra-hard X-ray selected sample of Seyferts, this work confirms that the covering factors of Sy1 and Sy2 galaxies are different. Therefore, our findings indicate that the Seyfert type classification depends not only on the inclination of the dusty structure but also on intrinsic differences (e.g. in the covering factor) between type-1 and type-2 AGN.
\begin{figure*}
\centering
\par{
\includegraphics[width=8.8cm]{Figures/hoenig17/a-eps-converted-to.pdf}
\includegraphics[width=8.8cm]{Figures/hoenig17/aw-eps-converted-to.pdf}
\par}
\caption{Comparison between the clumpy disc$+$wind H17 model parameters combined probability distributions for the optical classification. Left panel: for the radial dust-cloud distribution power law index (a). Right panel: for the dust-cloud distribution power law along the wind (a$_{\rm w}$). Blue dot-dashed, green dashed, and red solid lines represent the parameter distributions of Sy1, Sy1.8/1.9, and Sy2 galaxies, respectively.}
\label{hoenig17_distribution_a_aw}
\end{figure*}
\subsection{Mid-IR versus sub-mm torus observations}
\label{derived_high_angular}
As shown in Section \ref{size}, we find differences in the outer radius of the torus with the AGN type and luminosity, although they depend on the torus models used. In this section we further explore these parameters by comparing the torus/disc size and mass derived from the fitted nuclear IR SED with those measured from IR and sub-mm data (i.e. VLT/SINFONI, NOEMA, and ALMA). For all the models, we derive relatively compact dusty torus/disc sizes ($\sim$1-15\,pc). This is in agreement with the torus sizes reported in previous works using the clumpy N08 torus models (see e.g. \citealt{Ramos09}; RA11; AH11; \citealt{Lira13,Ichikawa15,Fuller16}; GB19). The derived torus sizes in this work are of the same order of magnitude as the upper-limit sizes derived from MIR observations. For example, using MIR direct imaging, \citet{Packham05} and \citet{Radomski2008} found that the MIR size of the torus is less than $\sim$4~pc (diameter) for Circinus and Centaurus\,A. Furthermore, modelled MIR interferometric data (e.g. \citealt{Jaffe04,Tristram07,Tristram09,Burtscher09,Burtscher13,Raban09,Lopez-Gonzaga16}) also show a relatively compact torus of r$<10$~pc. However, recent works using ALMA sub-mm observations of low-luminosity AGN and Seyfert galaxies measure large molecular discs with physical scales (diameters) ranging from 10 to 130\,pc, with a typical value of 42\,pc (e.g. \citealt{Herrero18,Herrero19,Herrero21,Combes19,Garcia-Burillo21}). The larger sizes measured in the sub-mm compared to those inferred from IR observations are expected, since the sub-mm sizes correspond to the colder and, thus, more external material within the torus (e.g. \citealt{Lopez-Rodriguez18,Honig19,Herrero21,Nikutta21}).
\begin{figure*}
\centering
\includegraphics[width=1.07\columnwidth, clip, trim=0 0 40 20]{Figures/sy1_0-eps-converted-to.pdf}
\includegraphics[width=0.93\columnwidth, clip, trim=80 0 40 20]{Figures/sy2_0_a-1p0-eps-converted-to.pdf}
\includegraphics[width=1.07\columnwidth, clip, trim=0 0 40 20]{Figures/sy1_45-eps-converted-to.pdf}
\includegraphics[width=0.93\columnwidth, clip, trim=80 0 40 20]{Figures/sy2_45_a-1p0-eps-converted-to.pdf}
\includegraphics[width=1.07\columnwidth, clip, trim=0 0 40 20]{Figures/sy1_90-eps-converted-to.pdf}
\includegraphics[width=0.93\columnwidth, clip, trim=80 0 40 20]{Figures/sy2_90_a-1p0-eps-converted-to.pdf}
\caption{Clumpy disc$+$wind H17 (blue solid lines) and clumpy disc H17 (orange dashed lines) model SEDs (normalized at 20~$\mu$m). Left panel: Sy1 configuration, which consists of a concentrated disc (a=-2.5) and an extended wind (a$_{\rm w}$=-1.0). Right panel: Sy2 configuration, which consists of a relatively extended disc (a=-1.0) and a concentrated wind (a$_{\rm w}$=-2.0). From top to bottom panels: face-on, intermediate (45$^\circ$) and edge-on values of the inclination angle for the clumpy disk and disk$+$wind dusty structure. Note that all the models use N=7, and the clumpy disc$+$wind H17 models use f$_{\rm wd}$=0.6, $\theta$=45$^\circ$ and $\sigma$=10$^\circ$ (see main text for further details on the parameters of the models). }
\label{selfobscuration}
\end{figure*}
ALMA observations have made it possible to estimate the molecular gas masses of nearby Seyferts (including a large fraction of the galaxies in this work) and low-luminosity AGN, in the range 10$^5$-10$^7$~M$_\odot$ (see e.g. \citealt{Herrero18,Alonso-Herrero20,Combes19,Garcia-Burillo21}). Using the H$_2$~1-0S(1) emission line at 2.12~$\mu$m of a sample of Sy galaxies, \citet{Hicks09} derived the torus/disc gas masses within the inner 30~pc (radius): M$_{\rm gas}^{\rm H_2}$=0.9--9$\times$10$^6$~M$_\odot$. As expected, the torus masses derived from the fitted nuclear IR SED are smaller ($\sim$10$^2$--10$^6$~M$_\odot$) than those derived from high angular resolution sub-mm data, due to the difference between the inferred IR sizes of the torus and those measured in the sub-mm. Therefore, our result is consistent with a temperature-driven stratified disc/torus, where the inner radius is dominated by the hot and warm dust emitting at NIR and MIR wavelengths while the sub-mm observations trace a more extended (and massive) colder component \citep[see also][]{Garcia-Burillo21}.
\subsection{Torus dust composition and geometry}
\label{discussion_dust_compo}
Our findings indicate that torus models better reproduce the NIR-to-MIR emission of AGN with relatively high hydrogen column density (i.e. Sy2s), whereas those of Sy1/Sy1.8/1.9 (with low hydrogen column density) are best fitted by the disc$+$wind H17 model (see Section\,\ref{results}). We also showed that the disc$+$wind dust models improve the spectral fit toward the NIR emission for Sy1/1.8/1.9 galaxies \citep[see also][]{Garcia-Gonzalez17, Gonzalez-Martin19B,Isbell21,Martinez-Paredes21}.
The origin of the NIR bump in the SED remains unclear. Direct emission from the accretion disc of the AGN might be an important contribution to the NIR emission for Sy1 galaxies (e.g. \citealt{caballero16}, GB19, \citealt{Landt19} and references therein), but in this work we have removed it from their SEDs. An alternative explanation for the observed NIR excess in Sy1s is an extra contribution of a hot pure-graphite component (T$_{\rm sub}^{\rm graphites}$>T$_{\rm sub}^{\rm silicates}$) heated by the AGN and located in the inner regions of the torus \citep{Mor09}. \citet{Bernete17} found a tight correlation between the hard X-ray fluxes (Nuclear Spectroscopic Telescope Array; NuSTAR) and the NIR emission of a sample of 24 unobscured type 1 AGN, suggesting that the observed NIR bump is produced by AGN-heated hot dust (T>T$_{\rm sub}^{\rm silicates}$).
Clumpy disc$+$wind H17 models predict that the included polar dust (ranging from a few pc to tens of pc) mainly contributes to the MIR and sub-mm emission. However, the clumpy disc$+$wind H17 models also include very hot dusty clouds close to the AGN that can reach T$\sim$1900\,K (pure-graphite dust; i.e. T>T$_{\rm sub}^{\rm silicates}$), whose emission peaks at NIR wavelengths. Thus, in Section \ref{results} we tested whether the NIR bump can be explained by including graphite grains. To do so, we repeated the fitting process using only the SEDs of the clumpy disc component of these models (clumpy H17D disc models). The fits have slightly smaller residuals in the NIR range of Sy1 galaxies than other torus models (see Fig.\,\ref{Residuals_histogram}). This might be related, at least in part, to the addition of large graphite grains in the dust composition of the disc.
Therefore, it is important to include large pure-graphite grains that are able to survive at high temperatures, as well as physically motivated dust sublimation models, for reproducing, at least in part, the nuclear IR emission of Sy1s. However, the clumpy flared disc from the H17D models still produces larger NIR residuals than the models including the polar dust component (i.e. clumpy disc$+$wind H17 models). We also note that torus models (i.e. without including the polar dusty wind component) can produce NIR and MIR model images with emission strongly elongated in the polar directions for certain torus parameters \citep[e.g.][and references therein]{Lopez-Rodriguez18,Nikutta21}. However, \citet{Nikutta21} found that the observed elongations in IR interferometric data of Seyfert galaxies are difficult to reproduce with a single-component torus model (see also \citealt{Stalevski17}).
\subsection{A clumpy disc$+$wind versus clumpy disc IR emission}
To further investigate how including the polar dust component modifies the predicted IR emission, we compare the SEDs of the clumpy disc$+$wind H17 models with those of the clumpy disc H17D models. We define two representative sets of parameters for Sy1 and Sy2 based on the average values found for each subgroup (see Appendix\,\ref{Combined_distribution}). Using the combined probability distributions of the clumpy disc$+$wind H17 models, we find centrally peaked wind components (i.e. ${a_{\rm w}^{\rm Sy2}}<{a_{\rm w}^{\rm Sy1}}$) for Sy2 and less extended disc components (i.e. ${a_{\rm}^{\rm Sy1}}<{a_{\rm}^{\rm Sy2}}$) for Sy1 galaxies (see Fig.\,\ref{hoenig17_distribution_a_aw} and Appendix\,\ref{Combined_distribution}). Therefore, we select representative SEDs for Sy1 and Sy2 galaxies using different cloud radial distributions for the disc and the wind, but keeping the other parameters fixed at the same values (N=7, h=0.20, f$_{\rm wd}$=0.6, $\theta$=45$^\circ$ and $\sigma$=10$^\circ$; see Table\,\ref{hoenig17_tab_parameters} and the corresponding Appendix\,\ref{nuclear_fits} for a description of the model parameters). In particular, we use two configurations of the radial distributions of the clouds: a) a Sy1 configuration with a centrally peaked disc (a=-2.5) and an extended wind (a$_{\rm w}$=-1.0); and b) a Sy2 configuration with a relatively extended disc (a=-1.0) and a centrally peaked wind (a$_{\rm w}$=-2.0).
\begin{figure*}
\centering
\par{
\includegraphics[width=16cm, clip, trim=0 0 0 150]{Figures/type_nh_eddington_bestfits-eps-converted-to.pdf}
\par}
\caption{Relation between the X-ray hydrogen column density and the Eddington ratio for the BCS$_{40}$ sample. Blue, green and red symbols represent Sy1, Sy1.8/1.9 and Sy2, respectively. The yellow shaded region corresponds to the region where X-ray column density might not be representative of the molecular gas column density of the torus (i.e. N$_{\rm H}^{\rm X-ray}<$10$^{22}$\,cm$^{-2}$; \citealt{Garcia-Burillo21}). The orange solid line is the limit for IR dusty outflows derived by \citet{Venanzi20} (assuming L$_{\rm AGN}$= 2.2$\times$10$^{43}$ erg s$^{-1}$), and the black dashed line represents the blowout limit predicted by \citet{Fabian08}. Filled bar and hourglass symbols denote galaxies best fitted by torus and disc$+$wind models, respectively. Filled circles represent those galaxies not fitted by any of the models used in this work.}
\label{wind_dependence_nh_edd}
\end{figure*}
Fig.\,\ref{selfobscuration} shows the clumpy disc$+$wind H17 model SEDs (blue solid lines) of the Sy1 and Sy2 configurations for inclinations of 0, 45 and 90$^\circ$ (i.e. face-on, intermediate inclination and edge-on), and the clumpy disc H17D model SEDs (orange dashed lines) using the same parameters\footnote{Note that for clumpy disc H17D SEDs we take the closest value to h=0.20 available (i.e. h=0.25).}. The SEDs for the Sy2 configuration are practically identical regardless of the addition of the polar dust component (see right panels of Fig.\,\ref{selfobscuration}), except at intermediate inclinations (i.e. 45$^\circ$) in the NIR and MIR, where the self-obscuration of the inner wall of the dusty structure produced by the cone walls is expected to be relevant (see some of the MIR model images presented by \citealt{Herrero21}). The SEDs of the disc$+$wind H17 models are significantly different from those of the clumpy disc H17D models for the Sy1 configuration (see left panels of Fig.\,\ref{selfobscuration}). The far-IR and sub-mm emission of the Sy1 configuration is strongly enhanced by the extra polar dust component. In addition, the torus angular width can play an important role in the self-obscuration of the inner walls, and extra self-obscuration takes place when a dusty wind component is included. The strongest impact of the self-obscuration from the dusty cone occurs in the NIR and MIR range, especially at intermediate inclinations. The polar dust cone walls can produce moderate self-obscuration up to $\sim$10~$\mu$m at all inclinations (see left panels of Fig.\,\ref{selfobscuration}; see also \citealt{Herrero21}). However, the polar dust does not produce strong self-obscuration effects at long wavelengths ($>$20~$\mu$m).
Thus, the polar-dust component has a negligible impact on the spectral fit for Sy2 nuclei, whereas it produces an enhancement of the emission at far-IR and sub-mm wavelengths and extra self-obscuration at NIR and MIR wavelengths for Sy1 nuclei. This is key to explaining the better performance of the disc$+$wind model for Sy1 nuclei.
\subsection{Dependence of the best-fitting model on the AGN properties}
\label{bestfits_discussion}
Recently, \citet{Venanzi20} presented a semi-analytical model to investigate radiatively accelerated dusty winds launched by the AGN. In this model the primary mass reservoir for the outflow is the material within the dusty disc. Their simulations show that the wind and its orientation (polar vs. equatorial) depend on the Eddington ratio, AGN luminosity, and nuclear column density. At relatively high column densities (N$_{\rm H}>$10$^{23}$ cm$^{-2}$) gravity strongly dominates and all the orbits are confined in a compact thick toroidal structure (i.e. the uplift of dusty material is suppressed) for representative values of Sy-like Eddington ratios. At lower values of the column density (N$_{\rm H}<$10$^{23}$ cm$^{-2}$), their model predicts that IR dusty outflows can take place above a certain Eddington ratio. From the observational point of view, \citet{Herrero21} found that 7 of 12 Sy galaxies showed Eddington ratios and nuclear N$_{\rm H}^{\rm ALMA}$ favourable for the launching of dusty winds, unlike the remaining five galaxies.
Fig.\,\ref{wind_dependence_nh_edd} shows the line-of-sight hydrogen column density measured at X-rays (N$_{\rm H}^{\rm X-ray}$) versus the Eddington ratios for our sample. The black dashed line represents the blowout limit predicted by \citet{Fabian08}. The orange solid line is the limit for producing IR dusty outflows derived by \citet{Venanzi20} (assuming L$_{\rm AGN}$= 2.2$\times$10$^{43}$ erg s$^{-1}$). Note that we use N$_{\rm H}^{\rm X-ray}$ measurements, which are representative of all the material along a pencil-beam line-of-sight to the accretion disc and, thus, depend on the viewing direction. \citet{Herrero21} show a similar plot but using N$_{\rm H}$ derived from ALMA observations. \citet{Garcia-Burillo21} compared the hydrogen column densities derived from X-rays and the nuclear integrated values from ALMA for the GATOS\footnote{Galaxy Activity, Torus and Outflow Survey.} sample and found a good agreement between these N$_{\rm H}$ estimates for obscured Sys (i.e. N$_{\rm H}^{\rm X-ray}>$10$^{22}$\,cm$^{-2}$), whereas the molecular gas column density of the torus probed by ALMA is systematically larger than N$_{\rm H}^{\rm X-ray}$ for unobscured Sy galaxies (i.e. N$_{\rm H}^{\rm X-ray}<$10$^{22}$\,cm$^{-2}$). In particular, all Sy1 galaxies in \citet{Garcia-Burillo21} have N$_{\rm H}^{\rm ALMA}>$10$^{22}$\,cm$^{-2}$. This could be explained if the X-ray absorption in Sys is related to a smaller pc-scale dust-free gas component compared with the scales probed by ALMA ($\sim$10\,pc; see e.g. \citealt{Garcia-Burillo21}). Therefore, N$_{\rm H}^{\rm X-ray}$ of Sy1 and Sy1.8/1.9 galaxies might underestimate the molecular gas column density of the torus. To interpret this plot, we add a yellow shaded region highlighting where the X-ray column density might not be representative of the torus structure.
We plot in Fig.\,\ref{wind_dependence_nh_edd} galaxies with best fits provided by torus models (i.e. smooth, clumpy, and two-phase torus models) and by disc$+$wind H17 models (see Section\,\ref{results}), using different symbols as shown in the legend. Galaxies with N$_{\rm H}^{\rm X-ray}>$10$^{22}$\,cm$^{-2}$ are located in a region not conducive to launching IR dusty polar outflows, in good agreement with our result that their SEDs are best fitted with torus models. On the other hand, relatively close to the region of the diagram favourable to launching dusty winds, we find a larger number of Sys whose SEDs are better fitted with the clumpy disc$+$wind H17 models. Finally, the majority of Sy1 and Sy1.8/1.9 galaxies are located in the blowout limit region. This might be related, at least in part, to the N$_{\rm H}^{\rm X-ray}$ measurements. Therefore, this dynamical model is able to broadly explain our main result on the dust configurations, and it shows the complexity of the AGN torus.
\section{Conclusions}
\label{conclusions}
We presented a detailed comparison of the nuclear dust
emission of an ultra-hard X-ray (14–195~keV) volume-limited (D$_{\rm L}<$40~Mpc) sample of 24 Seyfert galaxies to a set of torus models comprising different dust compositions, distributions and geometries. This sample covers AGN luminosity (log(L${_{\textrm{bol}}^{2-10~\textrm{keV}}}$)$\sim$41.75-44.75\,erg~s$^{-1}$) and Eddington ratio ($\lambda_{\rm Edd}$: -3.40 to -0.26) ranges. We include data from QSOs to expand the range of luminosities (log(L${_{\textrm{bol}}^{2-10~\textrm{keV}}}$\,erg~s$^{-1}$)$\sim$44.2-45.9) beyond our original Sy sample. We fitted for the first time the nuclear IR SEDs ($\sim$1-30~$\mu$m) obtained with high angular resolution data with six different torus models to find the model that most closely reproduces the nuclear IR SEDs of type 1 and 2 Seyfert galaxies. Finally, we investigated the relation of the bolometric luminosity, hydrogen column density, and Eddington ratio with different torus parameters. The main results are as follows.
\begin{enumerate}
\item The various torus models used in this work provide acceptable fits ($\rm{\chi^2_{red}<2}$) to the majority (19/24) of the nuclear IR SEDs. The fraction of best fits provided by
smooth, clumpy and two-phase torus models (i.e. those models that do not include the dusty polar component) and disc$+$wind models is practically the same, 33.3 and 37.5\% (i.e. 8/24 and 9/24 sources), respectively.\\
\item The disc$+$wind models better reproduce the NIR-to-MIR emission of AGN with relatively low X-ray hydrogen column density (median value of log (N$_{\rm H}^{\rm X-ray}$\,cm$^{-2}$)=21.0$\pm$1.0; i.e. Sy1/Sy1.8/1.9), whereas the nuclear IR SEDs of Sy2s (median value of log (N$_{\rm H}^{\rm X-ray}$\,cm$^{-2}$)=23.5$\pm$0.8) are best fitted by smooth, clumpy and two-phase torus models that do not include the polar dusty wind component.\\
\item The inclusion of large graphite grains with T$_{\rm sub}\sim$1900\,K, in addition to the self-obscuration produced by the polar component at intermediate inclinations (and/or a thick torus), is crucial to reproduce the observed nuclear NIR and MIR SEDs of Sy1/1.8/1.9s.\\
\item In general, we find that the Seyfert galaxies having unfavourable (favourable) conditions, i.e. nuclear hydrogen column density and Eddington ratio, for launching IR dusty polar outflows are best fitted with smooth, clumpy and two-phase torus (disc$+$wind) models, confirming the predictions from simulations.\\
\end{enumerate}
Our results indicate that there is a relationship between the choice of model and the hydrogen column density and, thus, the X-ray (unobscured/obscured) and/or optical (Sy1/Sy2) classification. These findings suggest that the torus dusty geometry/grain composition might depend on the amount of nuclear material (N$_{\rm H}$) and AGN properties. This work demonstrates the power of the spectral fitting technique to infer the properties of the inner dusty structure in AGN. In the future, the unprecedented combination of high sensitivity and spatial resolution provided by the James Webb Space Telescope \emph{(JWST)} will be crucial to better understand the nuclear dusty region of AGN using this technique.
\begin{acknowledgements}
IGB and DR acknowledge support from STFC through grant ST/S000488/1. OGM acknowledges support from the UNAM PAPIIT project IN105720. CRA acknowledges financial support from the European Union's Horizon 2020 research and innovation programme under Marie Sk\l odowska-Curie grant agreement No 860744 (BiD4BESt) and from the project ``Feeding and feedback in active galaxies'', with reference PID2019-106027GB-C42, funded by MICINN-AEI/10.13039/501100011033. AAH acknowledges support from grant PGC2018-094671-B-I00 funded by MCIN/AEI/10.13039/501100011033 and by ERDF A way of making Europe. EAD acknowledges support from the Agencia Estatal de Investigaci\'on del Ministerio de Ciencia e Innovaci\'on (AEI-MCINN) under project with reference PID2019-107010GB100. Finally, we thank the anonymous referee for their useful comments.
\end{acknowledgements}
\section{Introduction}
We live in the age of large astronomical surveys. These surveys detect and record tracers of cosmic structure across vast volumes of the Universe, using electromagnetic and gravitational waves. A non-exhaustive list includes optical and infrared imaging and spectroscopic surveys such as LSST \citep{LSSTScienceCollaboration2012}, Euclid \citep{Euclid}, DESI \citep{DESI}, and SPHEREx \citep{SPHEREx}; catalogues and intensity maps from large radio surveys such as the Square Kilometre Array \citep{SKA} and its precursors; cluster catalogues from high-resolution observations of the microwave sky (Advanced ACTPol, \citealp{AdvACT}; SPTPol, \citealp{SPTPol}; Simons Observatory, \citealp{SimonsObservatory}; and CMB-S4); X-ray surveys such as the eROSITA mission \citep{eROSITA}; as well as gravitational wave sirens across cosmological volumes with successive updates of (Advanced) LIGO \citep{AdvancedLIGO}, Virgo \citep{AdvancedVirgo} and LISA \citep{LISA}. Whilst these data sets will be prodigious sources of scientific discovery across astrophysics, their enormous volume and dense sampling of cosmic structure will make them uniquely powerful when studying some of the deepest scientific mysteries of our time: the statistical properties of the primordial perturbations, the nature of dark matter, and the physical properties of dark energy. Indeed many of these surveys were conceived to address these questions.
Accomplishing this promise requires the ability to model these surveys in sufficient detail and with sufficient accuracy. All but the most simplistic models require the production of cosmological light-cone simulations. In particular, cosmological inferences often rely on large numbers of mock catalogues, which are used to construct unbiased estimators and study their statistical properties, such as covariance matrices. As surveys are getting deeper, these mock catalogues now need to represent a sizeable portion of the observable Universe, up to a redshift of $\sim 2-3$ (e.g. $z=2.3$ for the Euclid Flagship simulation\footnote{\url{https://www.euclid-ec.org/?page_id=4133}}). Unfortunately, cosmological simulations put a heavy load on supercomputers. Even if only dark matter is included and resolution is minimised, they can require millions of CPU hours and hundreds of terabytes of disk space to solve the gravitational evolution of billions of particles and store the corresponding data. For instance, the DEUS-FUR simulation \citep{DEUSFUR}, containing $8192^3$ particles in a box of $21~\mathrm{Gpc}/h$ side length, required $10$~million hours of CPU time and $300$~TB of storage.
While computational needs are soaring, the performance of individual compute cores attained a plateau around 2015. Traditional hardware architectures are reaching their physical limit. Therefore, cosmological simulations cannot merely rely on processors becoming faster to reduce the computational time. Current hardware development focuses on increasing power efficiency\footnote{For example, Oak-Ridge National Laboratories' (ORNL) Summit machine has a typical power consumption of about $13$~MW.} and solving problems of heat dissipation to allow packing a larger number of cores into each CPU. As a consequence, the performance gains of the world's top supercomputers are the result of a massive increase in the number of parallel cores, currently\footnote{\url{https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit/}} to $\mathcal{O}(10^5)$, and soon to $\mathcal{O}(10^{6-7})$ in systems that are currently being built.\footnote{See for example ORNL's next supercomputer, Frontier: \url{https://www.olcf.ornl.gov/wp-content/uploads/2019/05/frontier_specsheet.pdf}} Hybrid architectures, where CPUs work alongside GPUs and/or reconfigurable chips such as FPGAs, add to the massive parallelism. In the exa-scale world, raw compute cycles are no longer the scarce resource. The challenge is to access the available computational power when Amdahl's law demonstrates that communication latencies kill the potential gains due to parallelisation \citep{Amdahl1967}.
A way to embed high-resolution simulation of objects such as galaxy clusters, or even galaxies, in a cosmological context is through the use of varying particle mass resolution and the Adaptive Mesh Refinement technique \citep[AMR,][]{AMRpaper}. AMR is widely employed in grid-based simulation codes such as RAMSES \citep{RAMSES}, ENZO \citep{ENZO}, FLASH \citep{FLASH}, and AMIGA \citep{AMIGA}. It is also used in MUSIC \citep{Hahn2011} to generate zoom-in initial conditions for simulations. The AMR technique, which uses multi-grid relaxation methods \citep[e.g.][]{GuilletTeyssier2011}, allows focusing the effort on a specific region of the computational domain, but requires a two-way flow of information between small and large scales. More recently, leading computational cosmology groups have been developing sophisticated schemes to leverage parallel and hybrid computing architectures \citep{Gonnet2013,Theuns2015,Aubert2015,Ocvirk2016,Potter2017,Yu2018,Garrison2019,Cheng2020}.
Full simulations of large cosmological volumes, even limited to cold dark matter and at coarse resolution, involve multiple challenges. One of the main issues preventing their easy parallelisation is the long-range nature of gravitational interactions, which forestalls high-resolution, large-volume cosmological simulations. In response, much of the classical work in numerical cosmology focused on computational algorithms (tree codes, fast multipole methods, particle-mesh methods, and hybrids such as particle-particle--particle-mesh and tree--particle-mesh) that reduced the need for $\mathcal{O}(N^2)$ all-to-all communications between $N$ particles across the full computational volume.
While these algorithms remain the backbone of computational cosmology, they fail to fully exploit the physical scale hierarchy of cosmological perturbations. This hierarchy was first used to push the results of $N$-body simulations to the scale of the entire Universe for cosmic velocity fields \citep{Strauss1995}. At the largest scales, the dynamics of the Universe is not complicated, and in particular, is well captured by Lagrangian Perturbation Theory \citep[LPT; see][]{Bouchet1995}. Building upon this view, \citet{Tassev2015} introduced spatial COmoving Lagrangian Acceleration (sCOLA). This algorithm, using a hybrid analytical and numerical treatment of particles' trajectories, allows one to perform simulations without the need to substantially extend the simulated volume beyond the region of interest in order to capture far-field effects, such as density fluctuations due to super-box modes. The sCOLA proof-of-concept focused on one sub-box embedded into a larger simulation box.
In this paper, we extend the sCOLA algorithm and use it within a novel method for perfectly parallel cosmological simulations. To do so, we rely on a tiling of the full cosmological volume to be simulated, where each tile is evolved independently using sCOLA. The principal challenge for the accuracy of such simulations is the boundary conditions used throughout the evolution of tiles, which can introduce artefacts. In this respect, we introduce three crucial improvements with respect to \citet{Tassev2015}: the use of a buffer region around each tile, the use of exact boundary conditions in the calculation of LPT displacements (which has the side benefit of reducing memory requirements), and the use of a Poisson solver with Dirichlet boundary conditions meant to approximate the exact gravitational potential around sCOLA boxes. The method proposed in this work shares similar goals with zoom-in simulation techniques, the main difference residing in the change of frame of reference introduced in sCOLA, which accounts for the dynamics of large scales without requiring flows of information during the evolution. On the other hand, our method is independent of the $N$-body integrator used to calculate the numerical part of particles' trajectories within each sCOLA box, and therefore, it cannot be related to specific approaches to do so, such as force-splitting. It is slightly approximate and more CPU-expensive than the corresponding ``monolithic'' simulation technique \citep[chosen in this paper as tCOLA,][]{Tassev2013}, but has the essential advantage of perfect scalability. This scalability comes from the removal of any kind of communication among tiles after the initialisation of the simulation. As a consequence, for its major part, the degree of parallelism of the algorithm equals the number of tiles, which means that the workload is perfectly parallel (also called embarrassingly parallel).
This property can be exploited to produce cosmological simulations in very short wall-clock times on a variety of hardware architectures, as we discuss in this paper.
After reviewing Lagrangian Perturbation Theory and its use within numerical simulations in section \ref{sec:Cosmological simulations using Lagrangian Perturbation Theory}, we describe our algorithm for perfectly parallel cosmological simulations in section \ref{sec:Algorithm for perfectly parallel simulations using sCOLA}. In section \ref{sec:Accuracy and speed}, we test the accuracy and speed of the algorithm with respect to reference simulations that do not use the tiling. We discuss the implications of our results for computational strategies to model cosmic structure formation, and conclude, in section \ref{sec:Conclusion}. Details regarding the implementation are provided in the appendices.
\section{Cosmological simulations using Lagrangian perturbation theory}
\label{sec:Cosmological simulations using Lagrangian Perturbation Theory}
Throughout this section we denote by $a$ the scale factor of the Universe. For simplicity, some of the equations are abridged. We reintroduce the omitted constants, temporal prefactors, and Hubble expansion in appendix \ref{apx:Model equations}.
Particle simulators are algorithms that compute the final position $\textbf{x}$ and momentum $\textbf{p} \equiv \mathrm{d} \textbf{x}/ \mathrm{d} a$ of a set of particles, given some initial conditions. They can also be seen as algorithms that compute a displacement field $\boldsymbol{\Psi}$, which maps the initial (Lagrangian) position $\textbf{q}$ of each particle to its final (Eulerian) position $\textbf{x}$, according to the classic equation \citep[see e.g.][for a review]{Bernardeau2002}
\begin{equation}
\textbf{x}(a) = \textbf{q} + \boldsymbol{\Psi}(\textbf{q},a) .
\end{equation}
With this point of view, the outputs are $\textbf{x}$ and $\textbf{p}~=~\partial \boldsymbol{\Psi} / \partial a$.
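To make this concrete, the Lagrangian-to-Eulerian mapping can be sketched in a few lines (an illustrative Python snippet, not code from any simulator discussed here; the cell-centred lattice construction and periodic wrapping are standard conventions that we assume):

```python
import numpy as np

def lagrangian_lattice(n_p, box_size):
    """Regular grid of n_p^3 Lagrangian positions q in a periodic box."""
    cell = box_size / n_p
    axis = (np.arange(n_p) + 0.5) * cell           # cell-centred coordinates
    qx, qy, qz = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.stack([qx, qy, qz], axis=-1).reshape(-1, 3)

def displace(q, psi, box_size):
    """Eulerian positions x = q + Psi(q), wrapped into the periodic box."""
    return (q + psi) % box_size

# trivial example: zero displacement leaves particles on the lattice
q = lagrangian_lattice(4, 100.0)                   # 4^3 particles, 100 Mpc/h box
x = displace(q, np.zeros_like(q), 100.0)
```

The momenta would follow analogously from the time derivative of the displacement field evaluated at the same Lagrangian positions.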
\subsection{Lagrangian perturbation theory (LPT)}
\label{ssec:Lagrangian perturbation theory (LPT)}
In Lagrangian perturbation theory (LPT), the displacement field is given by an analytic equation which is used to move particles, without the need for a numerical solver. At second order in LPT, the displacement field is written
\begin{equation}
\boldsymbol{\Psi}_\mathrm{LPT}(\textbf{q},a) = \boldsymbol{\Psi}^{(1)}(\textbf{q},a) + \boldsymbol{\Psi}^{(2)}(\textbf{q},a),
\label{eq:Psi_LPT}
\end{equation}
where each of the terms is separable into a temporal and a spatial contribution deriving from a Lagrangian potential:
\begin{eqnarray}
\boldsymbol{\Psi}^{(1)}(\textbf{q},a) & = & -D_1(a) \, \boldsymbol{\nabla}_\textbf{q} \phi^{(1)}(\textbf{q}),\label{eq:LPT_Psi1}\\
\boldsymbol{\Psi}^{(2)}(\textbf{q},a) & = & D_2(a) \, \boldsymbol{\nabla}_\textbf{q} \phi^{(2)}(\textbf{q})\label{eq:LPT_Psi2}.
\end{eqnarray}
In equations \eqref{eq:LPT_Psi1} and \eqref{eq:LPT_Psi2}, $D_1$ and $D_2$ are the growth factor and second-order growth factor, respectively. The Lagrangian potentials obey Poisson-like equations \citep{Buchert1994}:
\begin{eqnarray}
\boldsymbol{\Delta}_\textbf{q} \phi^{(1)}(\textbf{q}) & = & \delta_\mathrm{i}(\textbf{q}), \label{eq:LPTpotential1}\\
\boldsymbol{\Delta}_\textbf{q} \phi^{(2)}(\textbf{q}) & = & \sum_{i>j} \left[ \phi^{(1)}_{,ii}\phi^{(1)}_{,jj} - \left(\phi^{(1)}_{,ij}\right)^2 \right], \label{eq:LPTpotential2}
\end{eqnarray}
where $\delta_\mathrm{i}(\textbf{q})$ is the density contrast in the initial conditions, in Lagrangian coordinates, and the $\phi^{(1)}_{,ij}$ are spatial second derivatives of $\phi^{(1)}$, i.e. $\phi^{(1)}_{,ij} \equiv \partial^2 \phi^{(1)}/\partial \textbf{q}_i \partial \textbf{q}_j$.
If only the first-order term is included in equation \eqref{eq:Psi_LPT}, the solution is known as the Zel'dovich approximation \citep{Zeldovich1970}.
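For concreteness, the first-order (Zel'dovich) displacement can be obtained on a periodic grid with a spectral Poisson solver: solve equation for $\phi^{(1)}$ in Fourier space, then take minus the gradient scaled by $D_1$. The sketch below (illustrative Python; the grid layout and FFT normalisation conventions are ours, not those of any specific code) implements this:

```python
import numpy as np

def zeldovich_displacement(delta_i, box_size, D1=1.0):
    """First-order LPT (Zel'dovich) displacement on a periodic grid.

    Solves Delta phi1 = delta_i spectrally (Delta -> -k^2 in Fourier space),
    then returns Psi1 = -D1 * grad(phi1), sampled at the grid points.
    """
    n = delta_i.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                     # avoid 0/0; zero mode handled below
    delta_k = np.fft.fftn(delta_i)
    phi_k = -delta_k / k2                 # phi_k = -delta_k / k^2
    phi_k[0, 0, 0] = 0.0                  # no mean displacement potential
    psi = np.empty(delta_i.shape + (3,))
    for axis, k_axis in enumerate((kx, ky, kz)):
        grad_k = 1j * k_axis * phi_k      # spectral gradient
        psi[..., axis] = -D1 * np.real(np.fft.ifftn(grad_k))
    return psi
```

A single resolved plane wave $\delta_\mathrm{i} = \cos(k_0 q_x)$ yields the analytic answer $\Psi_x = -D_1 \sin(k_0 q_x)/k_0$, which is a convenient sanity check of the sign conventions.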
\subsection{Temporal comoving Lagrangian acceleration (tCOLA)}
\label{csec:Temporal comoving Lagrangian acceleration (tCOLA)}
In contrast to the analytical equations of LPT, particle-mesh (PM) codes \citep[see e.g.][]{Klypin1997} provide a fully numerical solution to the problem of large-scale structure formation. The equation of motion to be solved in a PM code reads schematically
\begin{equation}
\partial_a^2 \boldsymbol{\Psi}(\textbf{q},a) = -\boldsymbol{\nabla}_\textbf{x} \Phi(\textbf{x},a),
\label{eq:PM_EoM}
\end{equation}
where the gravitational potential $\Phi$ satisfies the Poisson equation,
\begin{equation}
\Delta_\textbf{x} \Phi(\textbf{x},a) = \delta(\textbf{x},a).
\label{eq:Poisson_full_box}
\end{equation}
Here, $\delta(\textbf{x},a)$ is the density contrast at a scale factor $a$, which is obtained from the set of particles' positions $\left\lbrace \textbf{x}(a) \right\rbrace$ through a density assignment operator that we denote $\mathrm{B}$ \citep[typically a cloud-in-cell (CiC) scheme, see][]{Hockney1981}:
\begin{equation}
\delta(\textbf{x},a) \equiv \mathrm{B}(\left\lbrace \textbf{x}(a) \right\rbrace).
\end{equation}
We denote by $\bar{\mathrm{B}}$ the corresponding interpolation operator, which is needed to obtain the accelerations of particles given the acceleration field on the grid:
\begin{equation}
\partial_a^2 \boldsymbol{\Psi}(\left\lbrace \textbf{x}(a) \right\rbrace) \equiv \bar{\mathrm{B}}(-\boldsymbol{\nabla}_\textbf{x} \Phi).
\end{equation}
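As an illustration of the density-assignment operator $\mathrm{B}$, a one-dimensional cloud-in-cell scheme can be sketched as follows (illustrative Python; a 1D toy version chosen for clarity, whereas a PM code would use its 3D analogue with the same cell-sharing logic):

```python
import numpy as np

def cic_density_1d(x, n_g, box_size):
    """1D cloud-in-cell assignment on a periodic grid.

    Each particle shares its unit mass between the two nearest cell
    centres, with periodic wrapping. Returns the density contrast
    delta = rho / rho_bar - 1 on the grid.
    """
    cell = box_size / n_g
    s = x / cell - 0.5                    # cell centres sit at (i + 0.5) * cell
    i_left = np.floor(s).astype(int)
    w_right = s - i_left                  # mass fraction given to the right cell
    rho = np.zeros(n_g)
    np.add.at(rho, i_left % n_g, 1.0 - w_right)
    np.add.at(rho, (i_left + 1) % n_g, w_right)
    rho_bar = len(x) / n_g
    return rho / rho_bar - 1.0
```

By construction the scheme conserves mass exactly, so the mean of the returned density contrast vanishes; the corresponding interpolation operator $\bar{\mathrm{B}}$ would reuse the same weights to read the acceleration field back at the particle positions.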
The temporal COmoving Lagrangian Acceleration (tCOLA) algorithm seeks to decouple large and small scales by evolving large scales using analytic LPT results, and small scales using a numerical solver. This is achieved by splitting the Lagrangian displacement field into two contributions \citep{Tassev2012b}:
\begin{equation}
\boldsymbol{\Psi}(\textbf{q},a) \equiv \boldsymbol{\Psi}_\mathrm{LPT}(\textbf{q},a) + \boldsymbol{\Psi}_\mathrm{res}(\textbf{q},a),
\label{eq:displacement_split}
\end{equation}
where $\boldsymbol{\Psi}_\mathrm{LPT}(\textbf{q},a)$ is the LPT displacement field discussed in section \ref{ssec:Lagrangian perturbation theory (LPT)} and $\boldsymbol{\Psi}_\mathrm{res}(\textbf{q},a)$ is the residual displacement of each particle, as measured in a frame comoving with an ``LPT observer'', whose trajectory is given by $\boldsymbol{\Psi}_\mathrm{LPT}(\textbf{q},a)$. Using equation \eqref{eq:displacement_split}, it is possible to rewrite equation \eqref{eq:PM_EoM} as
\begin{equation}
\partial_a^2 \boldsymbol{\Psi}_\mathrm{res}(\textbf{q},a) = -\boldsymbol{\nabla}_\textbf{x} \Phi(\textbf{x},a) - \partial_a^2 \boldsymbol{\Psi}_\mathrm{LPT}(\textbf{q},a).
\label{eq:tCOLA_EoM}
\end{equation}
The term $\partial_a^2 \boldsymbol{\Psi}_\mathrm{LPT}(\textbf{q},a)$ can be thought of as a fictitious force acting on particles, caused by our use of a non-inertial frame of reference. Importantly, it can be computed analytically given the equations of Lagrangian perturbation theory.
The equations of motion \eqref{eq:PM_EoM} and \eqref{eq:tCOLA_EoM} are usually integrated using time-stepping techniques (see appendix \ref{apx:Standard and modified time-stepping}). In the limit of zero time-steps used to discretise the left-hand side of equation \eqref{eq:tCOLA_EoM}, $\boldsymbol{\Psi}_\mathrm{res}=0$ and tCOLA recovers the results of LPT; therefore, tCOLA always solves the large scales with an accuracy of at least that of LPT. In contrast, PM codes require many time-steps in equation \eqref{eq:PM_EoM} just to recover the value of the linear growth factor $D_1$. In the limit where the number of time-steps becomes large, tCOLA reduces to a standard PM code. In the intermediate regime (for $\mathcal{O}(10)$ time-steps), tCOLA provides a good approximation to large-scale structure formation, at the expense of not solving the details of particle trajectories in deeply non-linear halos \citep[see][for further discussion]{Tassev2013,Howlett2015,Leclercq2015ST,Koda2016,Izard2016}. Since, by construction, tCOLA always gets the large scales correct, contrary to a PM code, the trade-off between speed and accuracy only affects small scales.
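To make the frame split concrete, a toy one-dimensional tCOLA-like integrator might look as follows. This is a heavily simplified sketch (Zel'dovich-only LPT part, a naive kick-drift scheme, placeholder force and growth-factor callables, and the abridged equations without temporal prefactors); the actual modified time-stepping operators are those of the appendix cited above:

```python
def tcola_evolve(q, psi1, a_steps, D1, grav_accel):
    """Toy 1D tCOLA-like integrator with a Zel'dovich-only LPT part.

    q          -- Lagrangian positions (list of floats)
    psi1       -- spatial part of the 1LPT displacement: Psi_LPT(a) = D1(a) * psi1
    a_steps    -- increasing scale-factor values used as steps
    D1         -- callable: linear growth factor D1(a)
    grav_accel -- callable grav_accel(x, a), standing in for -grad_x Phi

    Psi_res starts at zero; if the gravitational and fictitious forces
    cancel, particles simply follow the LPT trajectory x = q + D1(a)*psi1.
    """
    psi_res = [0.0] * len(q)
    p_res = [0.0] * len(q)                       # d(Psi_res)/da
    eps = 1e-4                                   # finite-difference step for D1''
    for a0, a1 in zip(a_steps[:-1], a_steps[1:]):
        da = a1 - a0
        x = [qi + D1(a0) * s + r for qi, s, r in zip(q, psi1, psi_res)]
        # fictitious force of the LPT-comoving frame: -d2(Psi_LPT)/da2 = -D1''(a)*psi1
        d2D1 = (D1(a0 + eps) - 2.0 * D1(a0) + D1(a0 - eps)) / eps**2
        acc = [grav_accel(xi, a0) - d2D1 * s for xi, s in zip(x, psi1)]
        p_res = [p + ai * da for p, ai in zip(p_res, acc)]           # kick
        psi_res = [r + p * da for r, p in zip(psi_res, p_res)]       # drift
    a_end = a_steps[-1]
    return [qi + D1(a_end) * s + r for qi, s, r in zip(q, psi1, psi_res)]
```

With a vanishing residual force the integrator returns exactly the LPT trajectory regardless of the number of steps, which is the defining property of the comoving-frame split.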
\subsection{Spatial comoving Lagrangian acceleration (sCOLA)}
\label{ssec:Spatial comoving Lagrangian acceleration (sCOLA)}
During large-scale structure formation, non-linearities appear at late times and/or at small scales. tCOLA (equation \eqref{eq:tCOLA_EoM}) decouples LPT displacements and residual non-linear contributions ``in time'', so that, for a given accuracy, fewer time-steps are required to solve large-scale structure evolution than with a PM code. Following a similar spirit, the spatial COmoving Lagrangian Acceleration (sCOLA) framework decouples LPT displacements and residual non-linear contributions ``in space'', so that numerically evolved small scales can feel far-field effects captured analytically via LPT.
More specifically, for each particle in a volume of interest (the ``sCOLA box'') embedded in a larger cosmological volume (the ``full box''), the equation of motion of particles, which reads for a traditional $N$-body problem
\begin{equation}
\partial_a^2 \boldsymbol{\Psi}(\textbf{q},a) = \partial_a^2 \boldsymbol{\Psi}_\mathrm{LPT}(\textbf{q},a) + \partial_a^2 \boldsymbol{\Psi}_\mathrm{res}(\textbf{q},a) = \textbf{F}(\textbf{x},a)
\label{eq:NbodyEoM}
\end{equation}
is replaced by
\begin{equation}
\partial_a^2 \boldsymbol{\Psi}_\mathrm{res}(\textbf{q},a) = \textbf{F}^\mathrm{sCOLA}(\textbf{x},a) - \partial_a^2 \boldsymbol{\Psi}^\mathrm{sCOLA}_\mathrm{LPT}(\textbf{q},a) .
\label{eq:sCOLA_NbodyEoM}
\end{equation}
$\partial_a^2 \boldsymbol{\Psi}_\mathrm{res}(\textbf{q},a)$ is defined by equation \eqref{eq:displacement_split} as the residual displacement with respect to the LPT observer of the full box, whose trajectory is given by $\boldsymbol{\Psi}_\mathrm{LPT}(\textbf{q},a)$. In equation \eqref{eq:sCOLA_NbodyEoM}, $\boldsymbol{\Psi}_\mathrm{LPT}^\mathrm{sCOLA}(\textbf{q},a)$ is the trajectory prescribed by solving LPT equations (see section \ref{ssec:Lagrangian perturbation theory (LPT)}) in the sCOLA box. Note that $\boldsymbol{\Psi}_\mathrm{LPT}^\mathrm{sCOLA}(\textbf{q},a)$ may differ from $\boldsymbol{\Psi}_\mathrm{LPT}(\textbf{q},a)$, depending on the assumptions made for the boundary conditions of the sCOLA box, discussed in section \ref{ssec:Initial operations in the sCOLA boxes}. Denoting by $\mathcal{S} \subseteq \llbracket 1,N \rrbracket$ the set of particles in the sCOLA box, the gravitational force, which in equation \eqref{eq:NbodyEoM} reads
\begin{equation}
\textbf{F}(\textbf{x}_i,a) \equiv \sum_{j=1 \atop j \neq i}^N \frac{\textbf{x}_j(a)-\textbf{x}_i(a)}{|\textbf{x}_j(a)-\textbf{x}_i(a)|^3},
\label{eq:force_full}
\end{equation}
is replaced by
\begin{equation}
\textbf{F}^\mathrm{sCOLA}(\textbf{x}_i,a) \equiv \sum_{j\in \mathcal{S} \atop j \neq i} \frac{\textbf{x}_j(a)-\textbf{x}_i(a)}{|\textbf{x}_j(a)-\textbf{x}_i(a)|^3}.
\label{eq:force_sCOLA}
\end{equation}
It is possible to evaluate $\textbf{F}^\mathrm{sCOLA}(\textbf{x},a)$, and thus to solve equation \eqref{eq:sCOLA_NbodyEoM}, like equation \eqref{eq:NbodyEoM}, using any numerical gravity solver, such as particle-particle--particle-mesh, tree codes, or AMR. In this paper, we choose to focus on evaluating forces via a PM scheme. In this case, the equation of motion of particles in sCOLA reads schematically \citep{Tassev2015}
\begin{equation}
\partial_a^2 \boldsymbol{\Psi}_\mathrm{res}(\textbf{q},a) = -\boldsymbol{\nabla}_\textbf{x}^\mathrm{sCOLA} \Phi^\mathrm{sCOLA}(\textbf{x},a) - \partial_a^2 \boldsymbol{\Psi}_\mathrm{LPT}^\mathrm{sCOLA}(\textbf{q},a) .
\label{eq:sCOLA_EoM}
\end{equation}
The gravitational potential in the sCOLA box, $\Phi^\mathrm{sCOLA}(\textbf{x},a)$, obeys the near-field version of the Poisson equation,
\begin{equation}
\Delta_\textbf{x}^\mathrm{sCOLA} \Phi^\mathrm{sCOLA}(\textbf{x},a) = \delta^\mathrm{sCOLA}(\textbf{x},a) .
\label{eq:Poisson_sCOLA_box}
\end{equation}
The superscript ``sCOLA'' on the gradient and Laplacian operators, $\boldsymbol{\nabla}_\textbf{x}^\mathrm{sCOLA}$ and $\Delta_\textbf{x}^\mathrm{sCOLA}$, means that they are restricted to the sCOLA box (contrary to those of equations \eqref{eq:Poisson_full_box} and \eqref{eq:tCOLA_EoM}). For the density contrast $\delta^\mathrm{sCOLA}(\textbf{x},a)$, the superscript means that only particles in the sCOLA box $\left\lbrace \textbf{x}(a) \right\rbrace_\mathrm{sCOLA} \equiv \left\lbrace \textbf{x}_i(a) \right\rbrace_{i\in \mathcal{S}}$ (instead of the full box) are used within the density assignment $\mathrm{B}^\mathrm{sCOLA}$, i.e.
\begin{equation}
\delta^\mathrm{sCOLA}(\textbf{x},a) \equiv \mathrm{B}^\mathrm{sCOLA}\left(\left\lbrace \textbf{x}(a) \right\rbrace_\mathrm{sCOLA}\right).
\end{equation}
Contrary to tCOLA, which is an exact rewriting of the equations of motion of a PM code, sCOLA potentially involves approximations for the calculation of each quantity and operator with a superscript ``sCOLA'' instead of its full box equivalent. As a proof of concept, \citet{Tassev2015} showed that under certain circumstances, sCOLA provides a good approximation for the evolution of one sCOLA box embedded into a larger full box. As discussed in the introduction, we aim at generalising this result by using sCOLA within multiple sub-volumes of a full simulation box.
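The near-field force simply restricts the direct sum to the particles of the sCOLA box, which in code amounts to masking the summation index (illustrative Python; unsoftened pairwise forces and units with $Gm=1$ are assumptions made for brevity, not choices of the paper):

```python
import numpy as np

def force_scola(x_all, in_box, i):
    """Direct-sum gravitational force on particle i, using only the
    particles inside the sCOLA box (indices where in_box is True),
    mirroring the restriction of the near-field force F^sCOLA."""
    xi = x_all[i]
    f = np.zeros(3)
    for j in np.flatnonzero(in_box):
        if j == i:
            continue
        d = x_all[j] - xi                 # separation vector
        f += d / np.linalg.norm(d) ** 3   # inverse-square pairwise force
    return f
```

Particles outside $\mathcal{S}$ contribute nothing to this sum; their influence enters only through the analytic LPT part of the trajectory, which is the essence of the sCOLA approximation.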
\section{Algorithm for perfectly parallel simulations using sCOLA}
\label{sec:Algorithm for perfectly parallel simulations using sCOLA}
\begin{figure}
\begin{tikzpicture}
\pgfdeclarelayer{background}
\pgfdeclarelayer{foreground}
\pgfsetlayers{background,main,foreground}
\tikzstyle{common}=[draw, align=center, thick, rounded corners, minimum height=1em, minimum width=1em, fill=black!20]
\tikzstyle{sCOLA}=[draw, align=center, thick, rounded corners, minimum height=1em, minimum width=1em, fill=C9!20]
\tikzstyle{tCOLA}=[draw, align=center, thick, rounded corners, minimum height=1em, minimum width=1em, fill=red!20]
\tikzstyle{plate} = [draw, thick, rectangle, rounded corners, dashed, fill=yellow!15]
\def1.0{1.0}
\def1.5{1.5}
\node (ICs) [common]
{Computation of the\\ initial conditions $\delta_\mathrm{i}$ (\hyperref[ssec:Initial conditions and Lagrangian potentials]{A.1.})};
\path (ICs.south)+(0,-1.0) node (phifullbox) [common]
{Computation of the Lagrangian\\ potentials $\phi^{(1)}$ and $\phi^{(2)}$ (\hyperref[ssec:Initial conditions and Lagrangian potentials]{A.2.})};
\path (phifullbox.south)+(-1.5,-1.0) node (tiling) [sCOLA]
{Tiling of the\\ Lagrangian lattice (\hyperref[ssec:Tiling and buffer region]{B.1.})};
\path (tiling.south)+(0,-1.0) node (phisCOLAbox) [sCOLA]
{Reception of\\ $\widetilde{\phi}^{(1)}$ and $\widetilde{\phi}^{(2)}$ (\hyperref[ssec:Initial operations in the sCOLA boxes]{C.1.})};
\path (phisCOLAbox.south)+(0,-1.0) node (LPTsCOLA) [sCOLA]
{Computation of the\\ Lagrangian displacement\\ field $\boldsymbol{\Psi}_\mathrm{LPT}^\mathrm{sCOLA}$ (\hyperref[ssec:Initial operations in the sCOLA boxes]{C.2.})};
\path (LPTsCOLA.east)+(1.5*1.5,0) node (LPTtCOLA) [tCOLA]
{Computation of the\\ Lagrangian displacement\\ field $\boldsymbol{\Psi}_\mathrm{LPT}$};
\path (LPTsCOLA.south)+(0,-1.0) node (phiBCs) [sCOLA]
{Precomputation of the\\ Dirichlet boundary\\ conditions $\Phi_\mathrm{BCs}$ (\hyperref[ssec:Initial operations in the sCOLA boxes]{C.3.})};
\path (phiBCs.south)+(0,-1.0) node (sCOLAevolution) [sCOLA]
{Evolution with\\ sCOLA (\hyperref[ssec:Evolution of sCOLA boxes]{D.})};
\path (sCOLAevolution.east)+(2*1.5,0) node (tCOLAevolution) [tCOLA]
{Evolution with\\ tCOLA};
\path (sCOLAevolution.south)+(0,-1.0) node (untiling) [sCOLA]
{Reception of $\left\lbrace\textbf{x}\right\rbrace_\mathrm{tile}$ and $\left\lbrace\textbf{p}\right\rbrace_\mathrm{tile}$\\ from each tile (\hyperref[ssec:Tiling and buffer region]{B.2.})};
\begin{pgfonlayer}{background}
\path [plate] (tiling.south)+(-6.5em,-0.9em) rectangle (1.9em,-30.4em)
{};
\path [draw, line width=0.7pt, arrows={-latex}] (ICs) -- (phifullbox);
\path [draw, line width=0.7pt, arrows={-latex}] (phifullbox) -- (LPTtCOLA);
\path [draw, line width=0.7pt, arrows={-latex}] (LPTtCOLA) -- (tCOLAevolution);
\path [draw, line width=0.7pt, arrows={-latex}] (phisCOLAbox) -- (LPTsCOLA);
\path [draw, line width=0.7pt, arrows={-latex}] (phiBCs) -- (sCOLAevolution);
\path [draw, line width=0.7pt, arrows={-latex}] (sCOLAevolution) -- (untiling);
\path [draw, line width=0.7pt, arrows={-latex}] (phifullbox) -- (phisCOLAbox);
\path (phisCOLAbox.south)+(-10pt,3.5pt) node (phisCOLAboxS) {};
\path (phiBCs.north)+(-10pt,-3.5pt) node (phiBCsN) {};
\path [draw, line width=0.7pt, arrows={-latex}] (phisCOLAboxS) -- (phiBCsN);
\path (LPTsCOLA.south)+(15pt,3.5pt) node (LPTsCOLAS) {};
\path (sCOLAevolution.north)+(15pt,-3.5pt) node (sCOLAevolutionN) {};
\path [draw, line width=0.7pt, arrows={-latex}] (LPTsCOLAS) -- (sCOLAevolutionN);
\end{pgfonlayer}
\end{tikzpicture}
\caption{Functional diagram of sCOLA (left) versus tCOLA (right). The grey boxes are common steps. sCOLA specific steps are represented in blue, and tCOLA specific steps in red. The yellow rectangle constitutes the perfectly parallel section, within which no communication is required with the master process or between processes. Arrows represent dependencies, and references to the main text are given between parentheses.}
\label{fig:sCOLA_diagram}
\end{figure}
\begin{table}
\centering
\begin{tabular}{cl}
\hline
\hline
Symbol & Meaning \\
\hline
$N$ & \footnotesize{LPT grid size in the full box} \\
$N_\mathrm{p}$ & \footnotesize{Lagrangian lattice size in the full box} \\
$N_\mathrm{tiles}$ & \footnotesize{Number of tiles in each direction} \\
$N_\mathrm{p,tile}$ & \footnotesize{Number of particles per direction in each tile} \\
$L_\mathrm{tile}$ & \footnotesize{Physical size of each tile} \\
$N_\mathrm{p,buffer}$ & \footnotesize{Number of buffer particles per direction} \\
$L_\mathrm{buffer}$ & \footnotesize{Physical size of the buffer region} \\
$N_\mathrm{p,sCOLA}$ & \footnotesize{Number of particles per direction in each sCOLA box} \\
$L_\mathrm{sCOLA}$ & \footnotesize{Physical size of each sCOLA box} \\
$N_\mathrm{tile}$ & \footnotesize{LPT grid portion covering each tile} \\
$N_\mathrm{sCOLA}$ & \footnotesize{LPT grid portion covering each sCOLA box} \\
$N_\mathrm{ghost}$ & \footnotesize{Number of ghost cells depending on FDA} \\
$N_\mathrm{g}$ & \footnotesize{PM grid size in each sCOLA box} \\
$r$ & \footnotesize{Over-simulation factor} \\
$p$ & \footnotesize{Parallelisation potential factor} \\
\hline\hline
\end{tabular}
\caption{Nomenclature of symbols used in the present article.}
\label{tb:nomenclature}
\end{table}
In this section, we describe an algorithm for cosmological simulations using sCOLA, for which the time evolution of independent Lagrangian sub-volumes is perfectly parallel, without any communication. A functional block diagram representing the main steps and their dependencies is given in figure \ref{fig:sCOLA_diagram}. An illustration of the different grids appearing in the algorithm is presented in figure \ref{fig:sCOLA_grids}, and table \ref{tb:nomenclature} provides the nomenclature of some of the different variables appearing in this section.
We work in a cubic full box of side length $L$ with periodic boundary conditions, populated by $N_\mathrm{p}^3$ particles initially at the nodes $\left\lbrace \textbf{q} \right\rbrace$ of a regular Lagrangian lattice. We seek to compute the set of final positions $\left\lbrace \textbf{x}(a_\mathrm{f}) \right\rbrace$ and momenta $\left\lbrace \textbf{p}(a_\mathrm{f}) \right\rbrace$ at final scale factor $a_\mathrm{f}$. The model equations are reviewed in appendix \ref{apx:Model equations}. The time-stepping of these equations consists of a series of ``kick'' and ``drift'' operations and is discussed in appendix \ref{apx:Standard and modified time-stepping}.
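The alternation of ``kick'' and ``drift'' operations can be illustrated with a minimal kick-drift-kick (KDK) integrator. This is only a structural sketch: the cosmological kernels, which weight each operation by scale-factor-dependent prefactors as detailed in appendix \ref{apx:Standard and modified time-stepping}, are replaced here by unit factors, and the function name is hypothetical.

```python
import numpy as np

def kdk_leapfrog(x, p, accel, a_steps):
    """Schematic kick-drift-kick time-stepping: momenta receive a half-step
    'kick' from the accelerations, positions a full-step 'drift', then a
    second half-step 'kick'. In the real code each step carries
    scale-factor-dependent prefactors; here they are set to unity."""
    for a0, a1 in zip(a_steps[:-1], a_steps[1:]):
        da = a1 - a0
        p = p + 0.5 * da * accel(x)   # kick (half step)
        x = x + da * p                # drift (full step)
        p = p + 0.5 * da * accel(x)   # kick (half step)
    return x, p
```

Being symplectic, this scheme keeps errors bounded over long integrations, which is why KDK leapfrog is the standard choice for particle-mesh codes.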
We approximate the Laplacians $\Delta_\textbf{x}$, $\Delta_\textbf{q}$ and gradient operators $\boldsymbol{\nabla}_\textbf{x}$, $\boldsymbol{\nabla}_\textbf{q}$ by finite difference approximation (FDA) at order 2, 4, or 6. The coefficients of the finite difference stencils in configuration and in Fourier space are given, for example, in table 1 of \citet{Hahn2011}. We set $N_\mathrm{ghost} = 1, 2, 3$ for FDA at order 2, 4, or 6, respectively.
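For concreteness, the standard central-difference gradient stencils at orders 2, 4 and 6, and the way each application shrinks a padded grid by $N_\mathrm{ghost}$ points per side, can be sketched in one dimension (the coefficients are the textbook central-difference ones; the helper name is ours):

```python
import numpy as np

# Central finite-difference first-derivative stencils at order 2, 4 and 6;
# the half-width of each stencil sets N_ghost = 1, 2, 3 respectively.
STENCILS = {
    2: np.array([-1/2, 0, 1/2]),
    4: np.array([1/12, -2/3, 0, 2/3, -1/12]),
    6: np.array([-1/60, 3/20, -3/4, 0, 3/4, -3/20, 1/60]),
}

def fda_gradient_1d(f, h, order=2):
    """Gradient of a 1D field padded with ghost points; the output is
    shorter by 2*N_ghost points, mirroring how each operator 'consumes'
    one set of ghost layers per application."""
    c = STENCILS[order]
    n_ghost = len(c) // 2
    out = np.zeros(f.size - 2 * n_ghost)
    for k, ck in enumerate(c):
        out += ck * f[k : k + out.size]
    return out / h
```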
\begin{figure}
\includegraphics[width=\linewidth]{sCOLA_grids.pdf}
\caption{Illustration of the different grids used within sCOLA. The Lagrangian lattice is represented by dashed lines. For each tile, central particles (in black) are surrounded by buffer particles (in cyan), which are ignored at the end of the evolution. The corresponding buffer region in other grids is represented in cyan. The left panel represents the ``LPT grid'' on which Lagrangian potentials $\widetilde{\phi}^{(1)}$ and $\widetilde{\phi}^{(2)}$ are defined. The central region has $N_\mathrm{sCOLA}^3$ grid points (in red) and is padded by $2N_\mathrm{ghost}$ cells in each direction (pink region). The right panel shows the ``PM grid'' on which the density contrast $\delta^\mathrm{sCOLA}$, the gravitational potential $\Phi^\mathrm{sCOLA}$, and the accelerations $-\boldsymbol{\nabla}_\textbf{x}^\mathrm{sCOLA} \Phi^\mathrm{sCOLA}$ are defined. The density contrast is defined only in the central region (which has $N_\mathrm{g}^3$ grid points, in dark green). The gravitational potential is padded by $2N_\mathrm{ghost}$ cells in each direction (light green and yellow regions), and the gridded accelerations only by $N_\mathrm{ghost}$ cells in each direction (yellow region). Solving the Poisson equation requires Dirichlet boundary conditions in six layers of $N_\mathrm{ghost}$ cells, denoted as hatched regions. For simplicity of representation, we have used here $N_\mathrm{ghost}=1$.}
\label{fig:sCOLA_grids}
\end{figure}
\subsection{Initial conditions and Lagrangian potentials}
\label{ssec:Initial conditions and Lagrangian potentials}
Before the perfectly parallel section, two initialisation steps are performed by the master process in the full box.
\begin{enumerate}
\item[A.1.] The first step is to generate the initial density contrast $\delta_\mathrm{i}$ in the full box, on a cubic grid of $N^3$ cells (the ``LPT grid'', represented in red in the left panel of figure \ref{fig:sCOLA_grids}). This step can be done via the standard convolution approach \citep[e.g.][]{Hockney1981}, given the specified initial power spectrum.
\item[A.2.] The second step is to compute the Lagrangian potentials $\phi^{(1)}(\textbf{q})$ and $\phi^{(2)}(\textbf{q})$ on the LPT grid in the full box, which is achieved by solving equations \eqref{eq:LPTpotential1} and \eqref{eq:LPTpotential2}.
\end{enumerate}
If initial phases are generated in Fourier space, the Zel'dovich approximation (i.e. the calculation of $\phi^{(1)}$) requires only one inverse fast Fourier transform (FFT) on the LPT grid. For the second-order potential, the source term on the right-hand side of equation \eqref{eq:LPTpotential2} has to be computed from $\phi^{(1)}$; this can either be done in Fourier space (for a cost of six inverse FFTs) or in configuration space via finite differencing (for a cost of nine one-dimensional gradient operations). In both cases, the calculation of $\phi^{(2)}$ from its source then requires one forward and one inverse FFT.
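As an illustration, the Zel'dovich potential can be obtained from a gridded density contrast with one forward and one inverse FFT (one more forward FFT than when phases are generated directly in Fourier space). This is a minimal sketch, assuming the convention $\Delta_\textbf{q} \phi^{(1)} = \delta_\mathrm{i}$; the actual sign and normalisation follow equation \eqref{eq:LPTpotential1}.

```python
import numpy as np

def lagrangian_potential(delta, L):
    """Solve the periodic Poisson equation Delta phi = delta for the
    first-order Lagrangian potential on an N^3 grid of side length L,
    via one forward and one inverse FFT. Conventions are schematic."""
    N = delta.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                 # avoid division by zero at the zero mode
    phi_k = -np.fft.fftn(delta) / k2
    phi_k[0, 0, 0] = 0.0              # fix the (unconstrained) mean of phi
    return np.real(np.fft.ifftn(phi_k))
```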
These few FFTs in the full box are the most hardware-demanding requirement of the algorithm (particularly in terms of memory), and the only step which is neither distributed nor suitable for grid computing. These FFTs may however be performed on a cluster of computers with a fast interconnect suitable for the Message Passing Interface \citep{FFTW05,Johnson2008FFTW}.
\subsection{Tiling and buffer region}
\label{ssec:Tiling and buffer region}
\begin{enumerate}
\item[B.1.] After having computed the Lagrangian potentials, the master process splits the Lagrangian lattice (of size $N_\mathrm{p}^3$) into $N_\mathrm{tiles}^3$ cubic tiles (we require that $N_\mathrm{p}$ is a multiple of $N_\mathrm{tiles}$). Tiles are constructed to be evolved independently; therefore the main, perfectly parallel region of the algorithm starts here.
\end{enumerate}
To minimise artefacts due to boundary effects (see section \ref{ssec:Evolution of sCOLA boxes}), each tile is surrounded by a ``buffer region'' in Lagrangian space. This buffer region consists of $N_\mathrm{p,buffer}$ particles in each direction, so that each sCOLA box contains a total of $N_\mathrm{p,sCOLA}^3$ particles, where $N_\mathrm{p,sCOLA} \equiv N_\mathrm{p,tile} + 2 N_\mathrm{p,buffer}$ and $N_\mathrm{p,tile} \equiv N_\mathrm{p}/N_\mathrm{tiles}$. Corresponding physical sizes are $L_\mathrm{tile} \equiv L \, N_\mathrm{p,tile}/N_\mathrm{p}$, $L_\mathrm{buffer} \equiv L \, N_\mathrm{p,buffer}/N_\mathrm{p}$, and $L_\mathrm{sCOLA} \equiv L \, N_\mathrm{p,sCOLA}/N_\mathrm{p}$. The fraction of the full Lagrangian lattice assigned to one child sCOLA process is represented by dotted lines in figure \ref{fig:sCOLA_grids}. Particles of the tile are represented in black, and particles of the buffer region are represented in cyan.
The sCOLA box is chosen to encompass the tile and its buffer region. We define the over-simulation factor $r$ as the ratio between the total volume simulated in all sCOLA boxes and the target simulation volume, i.e.
\begin{eqnarray}
r & \equiv & \frac{N_\mathrm{tiles}^3 N_\mathrm{p,sCOLA}^3}{N_\mathrm{p}^3} = \frac{N_\mathrm{tiles}^3 (N_\mathrm{p,tile} + 2N_\mathrm{p,buffer})^3}{N_\mathrm{p}^3}\nonumber\\
& = & \frac{N_\mathrm{tiles}^3 L_\mathrm{sCOLA}^3}{L^3} = \frac{N_\mathrm{tiles}^3 (L_\mathrm{tile} + 2L_\mathrm{buffer})^3}{L^3}.
\end{eqnarray}
Since all sCOLA boxes can be evolved independently, the degree of parallelism of the algorithm is equal to the number of sCOLA boxes, $N_\mathrm{tiles}^3$. We call the quantity $p \equiv N_\mathrm{tiles}^3/r$ the ``parallelisation potential factor''; it balances the degree of parallelism against the amount of over-simulation. Equivalently,
\begin{equation}
p = \frac{N_\mathrm{p}^3}{N_\mathrm{p,sCOLA}^3} = \frac{L^3}{L_\mathrm{sCOLA}^3}.
\end{equation}
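These definitions are straightforward to evaluate for any configuration; the following helper (name ours) reproduces the $r$ and $p$ columns of table \ref{tb:simulations}:

```python
def oversimulation_and_parallelisation(N_p, N_tiles, N_p_buffer):
    """Over-simulation factor r and parallelisation potential factor
    p = N_tiles^3 / r, directly from the definitions in the text.
    Assumes N_p is a multiple of N_tiles, as required by the tiling."""
    N_p_tile = N_p // N_tiles
    N_p_sCOLA = N_p_tile + 2 * N_p_buffer
    r = (N_tiles * N_p_sCOLA / N_p) ** 3
    p = N_tiles**3 / r
    return r, p
```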
For each sCOLA box, the corresponding child process computes the set of final positions $\left\lbrace \textbf{x} \right\rbrace_\mathrm{sCOLA}$ and momenta $\left\lbrace \textbf{p} \right\rbrace_\mathrm{sCOLA}$.
\begin{enumerate}
\item[B.2.] At the end of the evolution, each child process sends the set of final positions $\left\lbrace \textbf{x} \right\rbrace_\mathrm{tile}$ and momenta $\left\lbrace \textbf{p} \right\rbrace_\mathrm{tile}$ of particles of the tile back to the master process. Particles of the buffer region are ignored. The master process then ``untiles'' the simulation by gathering the results from all the tiles.
\end{enumerate}
\subsection{Initial operations in the sCOLA boxes}
\label{ssec:Initial operations in the sCOLA boxes}
A few steps are required in each sCOLA box before starting the evolution per se.
\begin{enumerate}
\item[C.1.] The sCOLA box receives the relevant portion of $\phi^{(1)}(\textbf{q})$ and $\phi^{(2)}(\textbf{q})$ from the master process. This is the only communication required with the master process before sending back the results at the end of the evolution.
\end{enumerate}
The portion of the LPT grid received by each process from the master process corresponds to the full spatial region covered by the sCOLA box, plus an additional padding of $2N_\mathrm{ghost}$ cells in each direction. We denote by $\widetilde{\phi}^{(1)}(\textbf{q})$ and $\widetilde{\phi}^{(2)}(\textbf{q})$ the parts of $\phi^{(1)}(\textbf{q})$ and $\phi^{(2)}(\textbf{q})$ received from the master process (we avoid the superscript ``sCOLA'' since no approximation is involved at this stage). They are defined on a grid of size $(N_\mathrm{sCOLA}+4N_\mathrm{ghost})^3$, where
\begin{equation}
N_\mathrm{tile} \equiv \left\lceil N_\mathrm{p,tile} \frac{N}{N_\mathrm{p}} \right\rceil, \; N_\mathrm{sCOLA} \equiv N_\mathrm{tile} + 2 \left\lceil N_\mathrm{p,buffer} \frac{N}{N_\mathrm{p}} \right\rceil
\end{equation}
($\left\lceil \cdot \right\rceil$ denotes the ceiling function). An illustration is provided in figure \ref{fig:sCOLA_grids}, left panel. There, the portion of the LPT grid corresponding to the sCOLA box, of size $N_\mathrm{sCOLA}$ in each direction, is represented in red and the padding region, of size $2N_\mathrm{ghost}$ in each direction, is represented in pink.
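The grid portions defined above can be computed with a few lines (a helper of our own naming, following the ceiling-function definitions of the previous equation):

```python
from math import ceil

def lpt_grid_sizes(N, N_p, N_tiles, N_p_buffer):
    """Sizes (per direction) of the LPT-grid portions covering one tile
    and one sCOLA box; the ceiling accounts for N_p_tile * N / N_p not
    being an integer in general."""
    N_p_tile = N_p // N_tiles
    N_tile = ceil(N_p_tile * N / N_p)
    N_sCOLA = N_tile + 2 * ceil(N_p_buffer * N / N_p)
    return N_tile, N_sCOLA
```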
\begin{enumerate}
\item[C.2.] The sCOLA process locally computes the required time-independent LPT vectors $\boldsymbol{\Psi}_1^\mathrm{sCOLA}$ and $\boldsymbol{\Psi}_2^\mathrm{sCOLA}$ via finite differencing in configuration space and interpolation to particles' positions.
\end{enumerate}
The ghost cells included around $\widetilde{\phi}^{(1)}(\textbf{q})$ and $\widetilde{\phi}^{(2)}(\textbf{q})$ in the sCOLA box ensure that the proper boundary conditions are used when applying the gradient operator $\boldsymbol{\nabla}_\textbf{q}^\mathrm{sCOLA}$ in configuration space to get the LPT displacements on the grid. This step ``consumes'' $N_\mathrm{ghost}$ layers of ghost cells in each direction, so that the grid of LPT displacements has a size of $(N_\mathrm{sCOLA} + 2N_\mathrm{ghost})^3$. To use again the proper boundary conditions when going from the LPT grid to particles' positions, another $N_\mathrm{ghost}$ layers of ghost cells are consumed by the interpolation operator $\bar{\mathrm{B}}^\mathrm{sCOLA}$. The use of the exact boundary conditions at each of these two steps ensures that $\boldsymbol{\nabla}_\textbf{q}^\mathrm{sCOLA} = \boldsymbol{\nabla}_\textbf{q}$ and $\bar{\mathrm{B}}^\mathrm{sCOLA} = \bar{\mathrm{B}}$. Therefore, by construction, $\boldsymbol{\Psi}_1^\mathrm{sCOLA} \equiv \boldsymbol{\nabla}_\textbf{q}^\mathrm{sCOLA} \widetilde{\phi}^{(1)}(\textbf{q})$ and $\boldsymbol{\Psi}_2^\mathrm{sCOLA} \equiv \boldsymbol{\nabla}_\textbf{q}^\mathrm{sCOLA} \widetilde{\phi}^{(2)}(\textbf{q})$ in the sCOLA box are always the same as $\boldsymbol{\Psi}_1 \equiv \boldsymbol{\nabla}_\textbf{q} \phi^{(1)}(\textbf{q})$ and $\boldsymbol{\Psi}_2 \equiv \boldsymbol{\nabla}_\textbf{q} \phi^{(2)}(\textbf{q})$ in the full box (as would be computed by the master process). Consequently, we do not keep track of both $\boldsymbol{\Psi}_\mathrm{1,2}^\mathrm{sCOLA}$ and $\boldsymbol{\Psi}_\mathrm{1,2}$, contrary to \citet{Tassev2015}. In addition to being simpler, this scheme has the practical advantage of saving six floating-point numbers per particle in memory (three in the case of the Zel'dovich approximation).
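As a bookkeeping aid, the successive grid sizes of step C.2. can be tracked explicitly (a schematic helper of our own; each operator removes $N_\mathrm{ghost}$ layers per side, as described above):

```python
def ghost_cell_budget(N_sCOLA, N_ghost):
    """Grid size (per direction) at each stage of step C.2.: the
    potentials arrive with 2*N_ghost padding cells per side, the
    gradient consumes N_ghost per side, and the interpolation to the
    particles consumes another N_ghost per side, landing exactly on
    the N_sCOLA^3 region covered by the sCOLA box."""
    received = N_sCOLA + 4 * N_ghost        # grid of phi~(1) and phi~(2)
    displacements = received - 2 * N_ghost  # after the gradient operator
    usable = displacements - 2 * N_ghost    # after interpolation to particles
    return received, displacements, usable
```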
\begin{enumerate}
\item[C.3.] The sCOLA process precomputes the Dirichlet boundary conditions $\Phi_\mathrm{BCs}$ that will be used at each calculation of the gravitational potential during the sCOLA evolution.
\end{enumerate}
For each sCOLA box, we define a particle-mesh grid of size $N_\mathrm{g}^3$ (the ``PM grid'', represented in dark green in the right panel of figure \ref{fig:sCOLA_grids}). The PM grid defines the force resolution; it should be equal to or finer than the LPT grid ($N_\mathrm{g} \geq N_\mathrm{sCOLA}$). Before starting the evolution with sCOLA, each process precomputes the Dirichlet boundary conditions that will be required by the Poisson solver at each value of the scale factor $a_\mathrm{K}$. This calculation takes as input the initial gravitational potential $\widetilde{\phi}^{(1)}(\textbf{q})$ and outputs $\Phi_\mathrm{BCs}(\textbf{x},a_\mathrm{K})$ for each $a_\mathrm{K}$, defined on the PM grid with a padding of $2N_\mathrm{ghost}$ cells around the sCOLA box in each direction (light green and yellow regions in figure \ref{fig:sCOLA_grids}, right panel). The approximation involved in this step is further discussed in section \ref{sssec:Gravitational potential}.
\subsection{Evolution of sCOLA boxes}
\label{ssec:Evolution of sCOLA boxes}
Each sCOLA box is then evolved according to the scheme reviewed in section \ref{ssec:Spatial comoving Lagrangian acceleration (sCOLA)} and appendices \ref{apx:Model equations} and \ref{apx:Standard and modified time-stepping}. Two specific approximations are needed to compute the operators and quantities with a superscript ``sCOLA''; we now discuss the choices that we made.
\subsubsection{Density assignment ($\mathrm{B}^\mathrm{sCOLA}$)}
\label{sssec:Density asssignement}
As mentioned in section \ref{ssec:Spatial comoving Lagrangian acceleration (sCOLA)}, only particles of the sCOLA box should contribute to $\delta^\mathrm{sCOLA}(\textbf{x},a)$. For particles that are fully in the sCOLA box, density assignment can be chosen as the same operation as would be used in a PM or tCOLA code (typically, a CiC scheme). A question is what to do with particles that have (partially) left the sCOLA box during the evolution, while keeping the requirement of no communication between boxes: this constitutes the only difference between the operators $\mathrm{B}$ and $\mathrm{B}^\mathrm{sCOLA}$. Possible choices include artificially periodising the sCOLA box (which is clearly erroneous) or stopping particles at its boundaries (which does not conserve momentum). Both of these choices assign the entire mass carried by the set of sCOLA particles $\mathcal{S}$ to the PM grid, but result in artefacts in the final conditions, if the buffer region is not large enough.
An alternative choice is simply to limit the (Eulerian) PM grid volume where we compute $\delta^\mathrm{sCOLA}(\textbf{x},a)$ to the (Lagrangian) sCOLA box, including central and buffer regions. In practice, this means ignoring the fractional particle masses that the CiC assignment would have deposited to grid points outside the sCOLA box. We have found in our tests that this choice gives the smallest artefacts of the three choices considered.\footnote{There is a certain symmetry to this choice, since particles that would have moved into the buffer region from the outside are also neglected in the force calculation, due to the lack of communication between different sCOLA boxes.} We note that (partially) erasing some particles' mass is an approximation that is only used in the $\mathrm{B}^\mathrm{sCOLA}$ operator to evaluate the source term in the Poisson equation, and therefore only affects the force calculation. The number of particles, both within each sCOLA process ($N_\mathrm{p,sCOLA}^3$) and in the full simulation ($N_\mathrm{p}^3$), is left unchanged during the evolution. Therefore, mass is always conserved both within each sCOLA process and within the full volume.
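This clipped assignment can be sketched as follows (a schematic, unit-mass CiC helper of our own naming; fractional deposits that fall outside the isolated grid are simply dropped, as described above):

```python
import numpy as np

def cic_assign_clipped(positions, N_g, box_len):
    """Cloud-in-cell assignment onto an isolated (non-periodic) PM grid.
    Fractional masses that would be deposited outside the grid are
    ignored, mimicking the B^sCOLA operator; particles are unit-mass."""
    rho = np.zeros((N_g,) * 3)
    cell = box_len / N_g
    s = positions / cell - 0.5           # grid coordinates of cell centres
    i0 = np.floor(s).astype(int)
    w1 = s - i0                          # CiC weight towards the upper cell
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                idx = i0 + np.array([dx, dy, dz])
                w = np.prod(np.where([dx, dy, dz], w1, 1 - w1), axis=1)
                inside = np.all((idx >= 0) & (idx < N_g), axis=1)
                np.add.at(rho, tuple(idx[inside].T), w[inside])
    return rho
```

A particle well inside the grid deposits its full unit mass; one straddling the boundary deposits only the inside fraction, so the grid total is slightly below the particle count.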
\subsubsection{Gravitational potential ($\Delta_\textbf{x}^\mathrm{sCOLA}$, $\boldsymbol{\nabla}_\textbf{x}^\mathrm{sCOLA}$ and $\bar{\mathrm{B}}^\mathrm{sCOLA}$)}
\label{sssec:Gravitational potential}
\paragraph{Poisson solver ($\Delta_\textbf{x}^\mathrm{sCOLA}$)}
To make sure that differences between $\Phi^\mathrm{sCOLA}(\textbf{x},a)$ and $\Phi(\textbf{x},a)$ are as small as possible, we make use of a Poisson solver with Dirichlet boundary conditions, instead of assuming periodic boundary conditions. Such a Poisson solver uses discrete sine transforms (DSTs) instead of FFTs, and requires the boundary values of $\Phi$ in six planes (west, east, south, north, bottom, top) surrounding the PM grid (see appendix \ref{apx:Poisson solver with Dirichlet boundary conditions}). These planes have a thickness of $N_\mathrm{ghost}$ cells (depending on the value of the FDA used to approximate the Laplacian); they are represented by hatched regions in figure \ref{fig:sCOLA_grids}, right panel. At each scale factor $a_\mathrm{K}$ when the computation of accelerations is needed, the Dirichlet boundary conditions are extracted from the precomputed $\Phi_\mathrm{BCs}(\textbf{x},a_\mathrm{K})$ (step C.3., see section \ref{ssec:Initial operations in the sCOLA boxes}).
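The DST-based Dirichlet solve can be illustrated in one dimension (a sketch only; the 3D solver is described in appendix \ref{apx:Poisson solver with Dirichlet boundary conditions}). We assume \textsc{SciPy}'s type-I DST, which diagonalises the order-2 Dirichlet Laplacian; the boundary values are folded into the source term, mirroring the role played by $\Phi_\mathrm{BCs}$:

```python
import numpy as np
from scipy.fft import dst, idst

def poisson_dirichlet_1d(f, u_left, u_right, h):
    """Solve u'' = f on n interior points of spacing h, with Dirichlet
    boundary values u_left and u_right, using an order-2 FDA and a
    type-I discrete sine transform. The DST-I diagonalises the
    tridiagonal (1, -2, 1) Laplacian, whose eigenvalues are
    2 cos(pi j / (n+1)) - 2."""
    n = f.size
    rhs = h**2 * np.asarray(f, dtype=float)
    rhs[0] -= u_left          # fold the boundary values into the source
    rhs[-1] -= u_right
    j = np.arange(1, n + 1)
    eigenvalues = 2.0 * np.cos(np.pi * j / (n + 1)) - 2.0
    return idst(dst(rhs, type=1) / eigenvalues, type=1)
```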
Ideally, $\Phi_\mathrm{BCs}(\textbf{x},a_\mathrm{K})$ should be the exact, non-linear gravitational potential in the full volume at $a_\mathrm{K}$, $\Phi(\textbf{x},a_\mathrm{K})$. However, knowing this quantity would require having previously run the monolithic simulation in the full volume, which we seek to avoid. In this paper, we rely instead on the linearly-evolving potential (LEP) approximation \citep{Brainerd1993,Bagla1994}, namely
\begin{equation}
\Phi_\mathrm{BCs}(\textbf{x},a_\mathrm{K}) \approx \Phi_\mathrm{LEP}(\textbf{x},a_\mathrm{K}) \equiv D_1(a_\mathrm{K}) \, \widetilde{\phi}^{(1)}(\textbf{x}) .
\label{eq:LEP}
\end{equation}
The idea behind this approximation is that the gravitational potential is dominated by long-wavelength modes, and therefore it ought to obey linear perturbation theory to a better approximation than the density field.
In equation \eqref{eq:LEP}, we have assumed that the linear growth factor $D_1$ is normalised to unity at the scale factor corresponding to the initial conditions. The precomputation of $\Phi_\mathrm{BCs}$ in step C.3. is therefore an interpolation from the LPT grid to the PM grid and a simple scaling with $D_1(a_\mathrm{K})$.
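Under this normalisation, $D_1$ can be obtained from the standard quadrature $D_1(a) \propto H(a) \int_0^a \mathrm{d}a' / [a' H(a')]^3$ (a sketch for flat $\Lambda$CDM with radiation neglected; the quadrature form is standard cosmology, not specific to this paper):

```python
import numpy as np

def linear_growth_factor(a, a_i, Omega_m=0.3089, Omega_L=0.6911):
    """Linear growth factor D1(a), normalised to unity at the initial
    scale factor a_i, for flat LambdaCDM (H in units of H_0, radiation
    neglected). This is the D1 used to scale phi^(1) into the boundary
    conditions Phi_BCs = D1(a_K) phi^(1)."""
    def D_unnorm(a_end):
        ap = np.linspace(1e-4, a_end, 10000)
        H = np.sqrt(Omega_m / ap**3 + Omega_L)
        y = 1.0 / (ap * H) ** 3
        integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(ap))  # trapezoid
        return H[-1] * integral
    return D_unnorm(a) / D_unnorm(a_i)
```

In the matter-dominated limit ($\Omega_\mathrm{m}=1$), this recovers $D_1 \propto a$, a useful sanity check.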
The output of the Poisson solver is the gravitational potential $\Phi^\mathrm{sCOLA}(\textbf{x},a_\mathrm{K})$ on the PM grid, in the interior of the sCOLA box (dark green grid points in figure \ref{fig:sCOLA_grids}, right panel). Consistently with the treatment above, $\Phi^\mathrm{sCOLA}(\textbf{x},a_\mathrm{K})$ is padded using the values of $\Phi_\mathrm{BCs}(\textbf{x},a_\mathrm{K})$ in $2N_\mathrm{ghost}$ cells around the PM grid, in each direction (light green and yellow regions in figure \ref{fig:sCOLA_grids}, right panel).
Therefore, the only difference between $\Delta_\textbf{x}^\mathrm{sCOLA}$ and $\Delta_\textbf{x}$ resides in using the LEP instead of the true, non-linear gravitational potential at the boundaries of the sCOLA box.
\paragraph{Accelerations ($\boldsymbol{\nabla}_\textbf{x}^\mathrm{sCOLA}$ and $\bar{\mathrm{B}}^\mathrm{sCOLA}$)}
Given the gravitational potential $\Phi^\mathrm{sCOLA}(\textbf{x},a_\mathrm{K})$, accelerations are computed by finite differencing in configuration space and interpolation to particles' positions, similarly to step C.2. (see section \ref{ssec:Initial operations in the sCOLA boxes}). The application of $\boldsymbol{\nabla}_\textbf{x}^\mathrm{sCOLA}$ consumes $N_\mathrm{ghost}$ cells, so that accelerations are obtained on the PM grid with a padding of $N_\mathrm{ghost}$ cells (yellow region in figure \ref{fig:sCOLA_grids}, right panel). Interpolation from the grid to particles' position (the $\bar{\mathrm{B}}^\mathrm{sCOLA}$ operator) further consumes $N_\mathrm{ghost}$ cells.
As for the Laplacian, the only difference between $\boldsymbol{\nabla}_\textbf{x}^\mathrm{sCOLA}$ and $\boldsymbol{\nabla}_\textbf{x}$, and $\bar{\mathrm{B}}^\mathrm{sCOLA}$ and $\bar{\mathrm{B}}$, resides in using the LEP in $\Phi^\mathrm{sCOLA}(\textbf{x},a_\mathrm{K})$ instead of the true, non-linear gravitational potential at the boundaries of the sCOLA box.
\section{Accuracy and speed}
\label{sec:Accuracy and speed}
\begin{table*}
\centering
\begin{tabular}{ccccccccccc}
\hline\hline
$L$ [$\mathrm{Mpc}/h$] & $N_\mathrm{p}$ & $N$ & $N_\mathrm{tiles}$ & $N_\mathrm{p,tile}$ & $L_\mathrm{tile}$ [$\mathrm{Mpc}/h$] & $N_\mathrm{p,buffer}$ & $L_\mathrm{buffer}$ [$\mathrm{Mpc}/h$] & $N_\mathrm{g}$ & $r$ & $p$ \\
\hline
200 & 512 & 256 & 16 & 32 & 12.5 & 32 & 12.5 & 97 & 27 & 151.70 \\
& & & 8 & 64 & 25 & 32 & 12.5 & 129 & 8 & 64 \\
& & & 8 & 64 & 25 & 64 & 25 & 193 & 27 & 18.96 \\
& & & 4 & 128 & 50 & 32 & 12.5 & 193 & 3.38 & 18.96 \\
& & & 4 & 128 & 50 & 64 & 25 & 257 & 8 & 8 \\
& & & 4 & 128 & 50 & 128 & 50 & 385 & 27 & 2.37 \\
& & & 2 & 256 & 100 & 32 & 12.5 & 321 & 1.95 & 4.10 \\
& & & 2 & 256 & 100 & 64 & 25 & 385 & 3.38 & 2.37 \\
\hline
1000 & 1024 & 512 & 16 & 64 & 62.5 & 14 & 13.7 & 93 & 2.97 & 1378.91 \\
& & & 16 & 64 & 62.5 & 26 & 25.4 & 117 & 5.95 & 687.90 \\
& & & 16 & 64 & 62.5 & 40 & 39.1 & 145 & 11.39 & 359.59 \\
& & & 16 & 64 & 62.5 & 64 & 62.5 & 193 & 27 & 151.70 \\
& & & 8 & 128 & 125 & 10 & 9.8 & 149 & 1.55 & 331.22 \\
& & & 8 & 128 & 125 & 20 & 19.5 & 169 & 2.26 & 226.45 \\
& & & 8 & 128 & 125 & 30 & 29.3 & 189 & 3.17 & 161.59 \\
& & & 8 & 128 & 125 & 50 & 48.8 & 229 & 5.65 & 90.59 \\
\hline\hline
\end{tabular}
\caption{Different setups used to test the accuracy and speed of our sCOLA algorithm.}
\label{tb:simulations}
\end{table*}
We implemented the perfectly parallel sCOLA algorithm described in section \ref{sec:Algorithm for perfectly parallel simulations using sCOLA} in the \textsc{Simbelmynë} code \citep{Leclercq2015ST}, publicly available at \url{https://bitbucket.org/florent-leclercq/simbelmyne/} \citep[see also][appendix B, for technical details on the implementation of the PM and tCOLA models in \textsc{Simbelmynë}]{LeclercqThesis}. This section describes some tests of the accuracy and speed of the new sCOLA algorithm. Since our implementation, relying on evaluating forces with a PM scheme, introduces some additional approximations with respect to tCOLA, we compare our results to those of corresponding monolithic tCOLA simulations. The accuracy of tCOLA with respect to more accurate gravity solvers has been characterised in the earlier literature \citep{Tassev2013,Howlett2015,Leclercq2015ST,Koda2016,Izard2016}. Comparing the accuracy of our sCOLA algorithm to full $N$-body simulations would require building a full $N$-body integrator into the sCOLA boxes (see equations \eqref{eq:sCOLA_NbodyEoM} and \eqref{eq:force_sCOLA}); this subject is left for future research.
Throughout the paper, we adopt the $\Lambda$CDM model with Planck 2015 cosmological parameters: $h=0.6774$, $\Omega_\Lambda = 0.6911$, $\Omega_\mathrm{b} = 0.0486$, $\Omega_\mathrm{m} = 0.3089$, $n_\mathrm{S} = 0.9667$, $\sigma_8 = 0.8159$ \citep[][page 31, table 4, last column]{PlanckCollaboration2015}. The initial power spectrum is computed using the \citet{Eisenstein1998,Eisenstein1999} fitting function.
\interfootnotelinepenalty=10000
We base our first tests on a periodic box of comoving side length $L=200$~$\mathrm{Mpc}/h$ populated with $N_\mathrm{p}^3 = 512^3$ dark matter particles. For all operators, we use FDA at order 2. The LPT grid has $N^3 = 256^3$ voxels. Particles are evolved to redshift $z=19$ using 2LPT. For all runs, we use $10$ time-steps linearly-spaced in the scale factor to evolve particles from $z=19$ ($a_\mathrm{i}=0.05$) to $z=0$ ($a_\mathrm{f}=1$) (see appendix \ref{apx:Standard and modified time-stepping}).\footnote{This means that in the case of our new sCOLA algorithm, we use COLA both ``in space and time'' \citep[see][]{Tassev2015}.} For tCOLA, the PM grid, covering the full box, has $512^3$ voxels. For sCOLA, we use eight different setups, with various parameters $\{N_\mathrm{tiles}, N_\mathrm{p,tile}, L_\mathrm{tile}, N_\mathrm{p,buffer}, L_\mathrm{buffer}, N_\mathrm{g}, r, p \}$ given in the first part of table \ref{tb:simulations}.
To assess more extensively the impact of using sCOLA on large scales, we used a second ensemble of simulations with the following differences: a box with comoving side length of $L=1$~$\mathrm{Gpc}/h$, $N_\mathrm{p}=1024^3$ particles, an LPT grid with $N^3 = 512^3$ voxels, and a PM grid of $1024^3$ voxels for tCOLA. For sCOLA, we use eight different setups given in the second part of table \ref{tb:simulations}.
\subsection{Qualitative assessments}
\label{ssec:Qualitative assessments}
The redshift-zero density field is estimated by assigning all particles to the LPT grid using the CiC scheme. Results for the $200$~$\mathrm{Mpc}/h$ box are shown in figure \ref{fig:200Mpc_density}. There, the bottom right panel shows the reference tCOLA density field and other panels show the differences between sCOLA and tCOLA results, for the eight different setups. Some qualitative observations can be made: when artefacts are visible in the sCOLA results, they mainly affect over-dense regions of the cosmic web (filaments and halos), whereas under-dense regions are generally better recovered. Artefacts are of two types: the position of a structure (usually a filament) can be imprecise due to a misestimation of bulk motions (this is visible as a ``dipole'' in figure \ref{fig:200Mpc_density}); or the density (usually of halos) can be over- or under-estimated (this is visible as a ``monopole'' in figure \ref{fig:200Mpc_density}). In all setups, artefacts are predominantly located close to the boundaries of tiles (represented as dashed lines) and are less visible in the centre of tiles. This can be easily understood given that the approximations made all concern the behaviour at the boundaries of sCOLA boxes. At fixed size for the buffer region, the correspondence between sCOLA and tCOLA density fields improves with increasing tile size. A minimum tile size of about $50$~$\mathrm{Mpc}/h$ seems necessary to limit the misestimation of halo densities (``monopoles'' in figure \ref{fig:200Mpc_density}). At low redshift, this scale is in the mildly non-linear regime, where LPT starts to break down; therefore, the LPT frame is inaccurate for particles, and the requirement of no communication between tiles leads to mispredicted clustering. As expected, at fixed tile size, the results are improved by increasing the buffer region around tiles: in each sCOLA box, boundary approximations are pushed farther away from the central region of interest. 
A good compromise between reducing artefacts and increasing the size of buffer regions seems to be found for a buffer region of $25$~$\mathrm{Mpc}/h$, which corresponds roughly to the maximum distance travelled by a particle from its initial to its final position. In particular, the setup $L_\mathrm{tile} = 50$~$\mathrm{Mpc}/h$, $L_\mathrm{buffer}=25$~$\mathrm{Mpc}/h$ leads to a satisfactory approximation of the tCOLA density with a parallelisation potential factor $p=8$.
{In a similar fashion, the velocity field is estimated on the LPT grid from particle information, using the simplex-in-cell estimator \citep{HahnAnguloAbel2015,Leclercq2017DMSHEET}. Using phase-space information, this estimator accurately captures the velocity field, even in regions sparsely sampled by simulation particles. Results for the $200$~$\mathrm{Mpc}/h$ box are shown in figure \ref{fig:200Mpc_velocity}, where one component of the tCOLA velocity field $v_\mathrm{tCOLA}$ (in km/s) is shown in the bottom right panel. Other panels \parfillskip=0pt\par}
\clearpage
\onecolumngrid
\begin{figure*}[!thp]
\includegraphics[width=\linewidth]{200Mpc_density.pdf}
\caption{Qualitative assessment of the redshift-zero density field from sCOLA for different tilings and buffer sizes, with respect to tCOLA. The bottom right panel shows the reference tCOLA density field in a $200$~$\mathrm{Mpc}/h$ box with periodic boundary conditions (the quantity represented is $\ln(2+\delta_\mathrm{tCOLA})$ where $\delta_\mathrm{tCOLA}$ is the density contrast). Other panels show the difference between sCOLA and tCOLA density fields, $\ln(2+\delta_\mathrm{sCOLA})-\ln(2+\delta_\mathrm{tCOLA})$, for different sizes of tile and buffer region, as indicated above the panels. The tiling is represented by dashed lines, and the central tile's buffer region is represented by solid lines. In the third dimension, the slices represented intersect the central tile at its centre. As can be observed in this figure, artefacts are predominantly located close to the boundaries of tiles; they are reduced with increasing tile size and buffer region size.}
\label{fig:200Mpc_density}
\end{figure*}
\clearpage
\begin{figure*}[!th]
\includegraphics[width=\textwidth]{200Mpc_velocity.pdf}
\caption{Same as figure \ref{fig:200Mpc_density}, but for one component of the velocity field, in km/s. Bulk flows are correctly captured if tiles and their buffer regions are large enough. Residual differences inside halos can be observed, but they are expected due to the limited number of time-steps, rendering both tCOLA and sCOLA velocities inaccurate in the deeply non-linear regime.}
\label{fig:200Mpc_velocity}
\end{figure*}
\twocolumngrid
\interfootnotelinepenalty=100
\noindent show the velocity error in sCOLA, $v_\mathrm{sCOLA}-v_\mathrm{tCOLA}$ in km/s. Differences between tCOLA and sCOLA velocity fields are of two kinds: misestimation of bulk flows (visible as light, spatially extended regions in figure \ref{fig:200Mpc_velocity}), or misestimation of particle velocities inside halos (visible as dark spots in figure \ref{fig:200Mpc_velocity}). We do not interpret the second kind of differences as errors made by our sCOLA algorithm: indeed, motions within virialised regions are not captured accurately by any simulation using only ten time-steps, even by tCOLA in the full box. Therefore, only the first kind of differences, that is, the misestimation of coherent bulk motions is physically interpretable. In this respect, the same behaviour as for density fields can be observed: artefacts are mostly located at the boundaries of tiles, and they are reduced with increasing tile size and buffer region size, with safe minima of $L_\mathrm{tile} \gtrsim 50$~$\mathrm{Mpc}/h$ and $L_\mathrm{buffer} \gtrsim 25$~$\mathrm{Mpc}/h$, respectively.
\subsection{Summary statistics}
\label{ssec:Summary statistics}
In this section, we turn to a more quantitative assessment of our results, by checking the power spectrum of final density fields and their cross-correlation with the tCOLA density field. Although final density fields are non-Gaussian, two-point statistics (auto- and cross-spectra) are expected to be sensitive to the approximations made in our sCOLA algorithm, which involves both local and non-local operations in configuration space.
\begin{figure}
\includegraphics[width=\linewidth]{200Mpc_power.pdf}
\caption{Power spectrum relative to tCOLA (top panel) and cross-correlation with respect to tCOLA (bottom panel) of redshift-zero sCOLA density fields, in a $200~\mathrm{Mpc}/h$ box containing $512^3$ dark matter particles. Different sizes for the tiles (represented by different line styles) and buffer regions (represented by different colours) are used, as indicated in the legend. The vertical lines show the respective fundamental mode of different tiles, the light grey bands correspond to $3\%$ accuracy, and the dark grey bands to $1\%$ accuracy.}
\label{fig:200Mpc_power}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{1Gpc_power.pdf}
\caption{Same as figure \ref{fig:200Mpc_power}, but in a $1~\mathrm{Gpc}/h$ box containing $1024^3$ particles.}
\label{fig:1Gpc_power}
\end{figure}
According to \citet{Huterer2005} or \citet{Audren2013}, in the best cases, observational errors for a Euclid-like survey are typically of order $3\%$ for $k < 10^{-2}~(\mathrm{Mpc}/h)^{-1}$. These results do not account for any of the systematic uncertainties linked to selection effects or contamination of the clustering signal by foregrounds. At smaller scales, theoretical uncertainties take over, reaching $1\%$ and above for $k > 10^{-1}~(\mathrm{Mpc}/h)^{-1}$. In addition, the impact of baryonic physics is still largely uncertain, some models predicting an impact of at least $10\%$ at $k=1~(\mathrm{Mpc}/h)^{-1}$ \citep[e.g.][]{vanDaalen2011,Chisari2018,Schneider2019}. Any data model involving our sCOLA algorithm will be subject to these uncertainties. For this reason, we aim for no better than $3\%$ to $1\%$ accuracy at all scales up to $k =1$~$(\mathrm{Mpc}/h)^{-1}$, for any two-point measurement of clustering.
More precisely, we work with $P(k)$ and $R(k)$, defined for two density contrast fields $\delta$ and $\delta' = \delta_\mathrm{tCOLA}$, with our Fourier transform convention, by
\begin{eqnarray}
\updelta_\mathrm{D}(\textbf{k}-\textbf{k}') P(k) & \equiv & (2\pi)^{-3} L^6 \left\langle \delta^*(\textbf{k}) \delta(\textbf{k}') \right\rangle, \\
\updelta_\mathrm{D}(\textbf{k}-\textbf{k}') R(k) & \equiv & \frac{\left\langle \delta^*(\textbf{k})\delta'(\textbf{k}') \right\rangle}{\sqrt{\left\langle \delta^*(\textbf{k})\delta(\textbf{k}') \right\rangle \left\langle \delta'^*(\textbf{k})\delta'(\textbf{k}') \right\rangle}},
\end{eqnarray}
where $\updelta_\mathrm{D}$ is a Dirac delta distribution. For the estimation of $P(k)$ and $R(k)$, we use $100$ logarithmically-spaced $k$-bins from the fundamental mode of the box $k_\mathrm{min} \equiv 2\pi/L$ to $k =1$~$(\mathrm{Mpc}/h)^{-1}$.
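As an illustration of how these estimators can be computed in practice, the following NumPy sketch bins $|\delta(\mathbf{k})|^2$ and the cross term over log-spaced shells. Function and variable names are ours, and the overall normalisation of $P(k)$ (which cancels in the plotted ratios $P_\mathrm{sCOLA}/P_\mathrm{tCOLA}$ and in $R(k)$) is omitted:

```python
import numpy as np

def auto_and_cross(delta_a, delta_b, L, n_bins=100, k_max=1.0):
    """Binned auto-spectrum of delta_a and its cross-correlation R(k) with
    delta_b, using log-spaced k-bins from the fundamental mode 2*pi/L to
    k_max.  P(k) is returned up to a convention-dependent constant."""
    N = delta_a.shape[0]
    da = np.fft.fftn(delta_a)
    db = np.fft.fftn(delta_b)
    k1 = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)       # modes along one axis
    kk = np.sqrt(k1[:, None, None] ** 2 + k1[None, :, None] ** 2
                 + k1[None, None, :] ** 2).ravel()      # |k| of each voxel
    edges = np.geomspace(2.0 * np.pi / L, k_max, n_bins + 1)
    ibin = np.digitize(kk, edges) - 1
    sel = (ibin >= 0) & (ibin < n_bins)                 # drop DC and k > k_max

    def binned(w):
        return np.bincount(ibin[sel], weights=w[sel], minlength=n_bins)

    counts = binned(np.ones_like(kk))
    P_aa = binned(np.abs(da.ravel()) ** 2)
    P_bb = binned(np.abs(db.ravel()) ** 2)
    P_ab = binned((np.conj(da) * db).real.ravel())
    with np.errstate(invalid="ignore", divide="ignore"):
        P = P_aa / counts                               # unnormalised P(k)
        R = P_ab / np.sqrt(P_aa * P_bb)                 # cross-correlation
    return 0.5 * (edges[:-1] + edges[1:]), P, R
```

By construction, feeding the same field twice yields $R(k)=1$ in every populated bin, which is a convenient sanity check of the estimator.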
In figures \ref{fig:200Mpc_power} and \ref{fig:1Gpc_power}, we plot the power spectrum of sCOLA density fields divided by the power spectrum of the reference tCOLA density field, $P_\mathrm{sCOLA}(k)/P_\mathrm{tCOLA}(k)$ (upper panels) and the cross-correlation between sCOLA and tCOLA density fields, $R(k)$ (bottom panels), for our $200~\mathrm{Mpc}/h$ (figure \ref{fig:200Mpc_power}) and $1~\mathrm{Gpc}/h$ box (figure \ref{fig:1Gpc_power}). The grey horizontal bands represent the target accuracies of $3\%$ and $1\%$, and the vertical lines mark the fundamental modes of the tiles, $k_\mathrm{tile} \equiv 2\pi/L_\mathrm{tile}$, for the different values of $L_\mathrm{tile}$ used.
Figure \ref{fig:200Mpc_power} quantitatively confirms the considerations of section \ref{ssec:Qualitative assessments}. Both the amplitudes (as probed by $P(k)/P_\mathrm{tCOLA}(k)$) and the phase accuracy (as probed by $R(k)$) of sCOLA simulations are improved with increasing tile size, for a fixed buffer region (different line styles, same colours). For a fixed tile size, results are also improved by increasing the size of the buffer region (same line styles, different colours). Remarkably, all setups yield perfect phase accuracy at large scales ($R(k)=1$ for $k \leq 0.2~(\mathrm{Mpc}/h)^{-1}$), even when the amplitude of corresponding modes deviates from the tCOLA result. Defects at small scales (lack of power and inaccurate phases) are only observed for the smallest tile sizes and are fixed by increasing the size of the buffer region. This effect can be interpreted in Lagrangian coordinates: when the Lagrangian volume forming a halo is divided among different tiles that do not exchange particles, and if the buffer region is too small to contain the rest of the halo, the resulting structure is split and under-clustered in Eulerian coordinates. In this respect, preferring a sCOLA box size ($L_\mathrm{sCOLA} \equiv L_\mathrm{tile} + 2L_\mathrm{buffer}$) of at least $100~\mathrm{Mpc}/h$ (and therefore $L_\mathrm{tile} \gtrsim 50~\mathrm{Mpc}/h$, $L_\mathrm{buffer} \gtrsim 25~\mathrm{Mpc}/h$, in most situations) seems sensible. A more difficult issue is the amplitude of large-scale modes, for $k < k_\mathrm{tile}$. These are sensitive to the tiling if buffer regions around tiles are too small. A safe requirement also seems to be $L_\mathrm{buffer} \gtrsim 25~\mathrm{Mpc}/h$. 
Putting everything together, in our $200~\mathrm{Mpc}/h$ box, three setups reach $3\%$ accuracy in amplitude and phases at all scales: $\{L_\mathrm{tile} = 50~\mathrm{Mpc}/h, L_\mathrm{buffer} = 25~\mathrm{Mpc}/h\}$ (discussed already in section \ref{ssec:Qualitative assessments}); $\{L_\mathrm{tile} = 100~\mathrm{Mpc}/h, L_\mathrm{buffer} = 25~\mathrm{Mpc}/h\}$; and $\{L_\mathrm{tile} = 50~\mathrm{Mpc}/h, L_\mathrm{buffer} = 50~\mathrm{Mpc}/h\}$. The last-mentioned performs even better, reaching $1\%$ accuracy at all scales, but at the price of over-simulating the volume by a larger factor.
Figure \ref{fig:1Gpc_power} shows the same diagnostics for a $1~\mathrm{Gpc}/h$ box, where the qualitative behaviour is the same as before. It confirms the requirement $L_\mathrm{buffer} \gtrsim 25~\mathrm{Mpc}/h$ to get sufficient accuracy at high $k$. The question of the accuracy reached at the largest scales is then jointly sensitive to $L_\mathrm{tile}$ and $L$. In our tests, the setups $\{L_\mathrm{tile} = 62.5~\mathrm{Mpc}/h, L_\mathrm{buffer} = 39.1~\mathrm{Mpc}/h\}$ and $\{L_\mathrm{tile} = 125~\mathrm{Mpc}/h, L_\mathrm{buffer} = 29.3~\mathrm{Mpc}/h\}$ yield $3\%$ accurate results at all scales, and the setups $\{L_\mathrm{tile} = 62.5~\mathrm{Mpc}/h, L_\mathrm{buffer} = 62.5~\mathrm{Mpc}/h\}$ and $\{L_\mathrm{tile} = 125~\mathrm{Mpc}/h, L_\mathrm{buffer} = 48.8~\mathrm{Mpc}/h\}$ almost reach $1\%$-level precision at all scales. We note that the two different boxes have different mass resolutions, which confirms that requirements for tile and buffer region sizes should be expressed in physical size.
\subsection{Tests of the approximations}
\label{ssec:Tests of the approximations}
\begin{figure}
\includegraphics[width=\linewidth]{200Mpc_power_approxs.pdf}
\caption{Tests of the approximations made in sCOLA for the density field and the gravitational potential. As in figure \ref{fig:200Mpc_power}, the diagnostic tools are the power spectrum relative to tCOLA (top panel) and the cross-correlation with tCOLA (bottom panel). Our sCOLA algorithm uses the approximate interior density field $\delta^\mathrm{sCOLA}$ and the LEP approximation for the boundary gravitational potential (dash-dotted blue line). In other simulations, as indicated in the legend, we use the true density field $\delta$ and/or the true gravitational potential $\Phi$ at the boundaries. The approximation made for the density field dominates, especially at large scales.}
\label{fig:200Mpc_power_approxs}
\end{figure}
As discussed in section \ref{ssec:Evolution of sCOLA boxes}, two approximations are introduced in our sCOLA algorithm with respect to a monolithic tCOLA approach. These concern density assignment in the interior of sCOLA boxes (approximation \hyperref[sssec:Density asssignement]{D.1.}) and the gravitational potential at the boundaries of sCOLA boxes (approximation \hyperref[sssec:Gravitational potential]{D.2.}). In this section, we test the impact of these approximations on final results, using two-point statistics as diagnostic tools. For this test we use our sCOLA run with $L=200~\mathrm{Mpc}/h$, $N_\mathrm{p}=512^3$, $64$ tiles ($N_\mathrm{tiles}=4$, $N_\mathrm{p,tile} = 128$) and $N_\mathrm{p,buffer}=32$ (i.e. $L_\mathrm{tile} = 50~\mathrm{Mpc}/h$, $L_\mathrm{buffer}=12.5~\mathrm{Mpc}/h$). We choose a small buffer size on purpose, to be sensitive to the approximations made.
Let us denote by $\delta_\mathrm{int}$ the density contrast in the interior of sCOLA boxes and by $\Phi_\mathrm{BCs}$ the gravitational potential at the boundaries of sCOLA boxes. As discussed in section \ref{ssec:Evolution of sCOLA boxes}, our algorithm involves an approximation regarding particles leaving the sCOLA box during the evolution, yielding $\delta^\mathrm{sCOLA}$, and relies on the LEP approximation at the boundaries. It therefore uses
\begin{equation}
\delta_\mathrm{int} = \delta^\mathrm{sCOLA} \quad \mathrm{and} \quad \Phi_\mathrm{BCs} = \Phi_\mathrm{LEP}. \label{eq:test_setup1}
\end{equation}
Everything else being fixed, we ran three investigative sCOLA simulations using respectively,
\begin{alignat}{4}
& \delta_\mathrm{int} = \delta && \quad \mathrm{and} \quad \Phi_\mathrm{BCs} = \Phi_\mathrm{LEP}, \label{eq:test_setup2}\\
& \delta_\mathrm{int} = \delta^\mathrm{sCOLA} && \quad \mathrm{and} \quad \Phi_\mathrm{BCs} = \Phi, \label{eq:test_setup3}\\
& \delta_\mathrm{int} = \delta && \quad \mathrm{and} \quad \Phi_\mathrm{BCs} = \Phi, \label{eq:test_setup4}
\end{alignat}
where $\delta$ is the ``true'' density contrast and $\Phi$ is the ``true'' gravitational potential, extracted at each time-step from the corresponding tCOLA simulation.
Figure \ref{fig:200Mpc_power_approxs} shows the auto- and cross-spectra of resulting sCOLA density fields, with respect to the reference tCOLA result. The use of $\delta_\mathrm{int} = \delta$ yields by construction $R(k)=1$ at all scales, as can be checked from the bottom panel. The setup given by equation \eqref{eq:test_setup4} is free of both approximations; it is therefore a consistency check: one should retrieve the tCOLA result if no bias is introduced by the tiling and the different Poisson solver. As expected, figure \ref{fig:200Mpc_power_approxs} shows that our implementation recovers the tCOLA result at all scales, with only a small excess of power at $k>0.4~(\mathrm{Mpc}/h)^{-1}$ explained by the slightly higher force resolution of the sCOLA run with respect to tCOLA (the PM grid cell sizes are $0.3886$ and $0.3906~\mathrm{Mpc}/h$, respectively).
The setups given by equations \eqref{eq:test_setup2} and \eqref{eq:test_setup3} allow disentangling the impact of approximations \hyperref[sssec:Density asssignement]{D.1.} and \hyperref[sssec:Gravitational potential]{D.2.}. In the standard run (equation \eqref{eq:test_setup1}), averaging over tiles and time-steps, $\sim 0.43\%$ of the $512^3$ particles, all of which belong to the buffer region, do not deposit all of their mass in the calculation of $\delta^\mathrm{sCOLA}$, depositing only $\sim 76.5\%$ of it on average. This number only slightly increases with time (from $\sim 0.35\%$ at $a=0.05$ to $\sim 0.47\%$ at $a=1$); in other simulations, we have found that it has a stronger dependence on the mass resolution and on the surface of sCOLA boxes. Regarding the accuracy of the LEP approximation, the ratio of the power spectra of $\Phi-\Phi_\mathrm{LEP}$ and of $\Phi$ goes to zero at early times and large scales, and stays below $12\%$ for all scales with wavenumber $k\leq 2\pi/L_\mathrm{sCOLA}$ at $a=1$. As can be observed in figure \ref{fig:200Mpc_power_approxs}, although using the non-linear gravitational potential instead of the LEP improves both $P(k)$ and $R(k)$ for the final density field at all scales with wavenumber $k > 7 \times 10^{-2}~(\mathrm{Mpc}/h)^{-1}$, it does not remove the $\gtrsim 5\%$ bias in amplitude at the largest scales. By contrast, using the true density contrast solves this problem and yields a $3\%$ accurate result at all scales, which is remarkable given the small buffer size used in this case (the over-simulation factor is only $r=3.38$).
We conclude from these tests that the approximation made regarding the density field (\hyperref[sssec:Density asssignement]{D.1.}) has more impact than the one regarding the gravitational potential (\hyperref[sssec:Gravitational potential]{D.2.}), especially on the largest modes. This result is consistent with the standard paradigm for structure formation, where the density contrast undergoes severe non-linearity at small scales and late times, while the gravitational potential evolves very little. It also suggests that future improvements of our algorithm should focus on finding a better approximation for $\delta^\mathrm{sCOLA}$, rather than $\Phi_\mathrm{BCs}$.
\subsection{Computational cost}
\label{ssec:Computational cost}
\begin{figure}
\includegraphics[width=\columnwidth]{timings.pdf}
\caption{Plots of memory requirements (first row) and of timings for two corresponding tCOLA and sCOLA simulations. Although the CPU time required is higher for sCOLA, the memory consumption and wall-clock time are significantly reduced with respect to tCOLA, due to the perfectly parallel nature of most computations (second row). In the middle left panel, the height of the white bar shows the hypothetical cost of running tCOLA for the same volume as simulated with sCOLA, when taking buffer regions into account. The relative contributions of different operations, as detailed in the legend, are shown in the third row. The main difference in computational cost in sCOLA with respect to tCOLA comes from the use of DSTs instead of FFTs, which makes the evaluation of the potential significantly more expensive.}
\label{fig:timings}
\end{figure}
One of the main motivations for our perfectly parallel algorithm based on sCOLA is to be able to run very large volume simulations at reasonably high resolution. A detailed analysis of the speed and computational cost of our algorithm, as implemented in \textsc{Simbelmynë}, is beyond the scope of this paper. However, in this section we discuss some performance considerations based on a sCOLA run with $L=1~\mathrm{Gpc}/h$, $N_\mathrm{p}=1024^3$, $512$ tiles ($N_\mathrm{tiles}=8$, $N_\mathrm{p,tile} = 128$), $N_\mathrm{p,buffer} = 30$ (i.e. $L_\mathrm{tile} = 125~\mathrm{Mpc}/h$, $L_\mathrm{buffer} = 29.3~\mathrm{Mpc}/h$), $N_\mathrm{g}=199$; and the corresponding monolithic tCOLA simulation. In this case, the over-simulation factor is $r \approx 3.17$ and the parallelisation potential factor is $p \approx 161.59$. To compare the theoretical parallelisation potential factor and the realised parallelisation efficiency, we use one process for tCOLA and $512$ processes for sCOLA. Each process is run on a node with $32$ cores using OpenMP parallelisation.
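The two quoted factors follow from the tiling geometry alone. In this sketch we take $p$ to be the number of tiles divided by the over-simulation factor, a definition inferred from the fact that it reproduces the quoted values for this run:

```python
def tiling_factors(L, n_tiles_per_side, L_buffer):
    """Over-simulation factor r and parallelisation potential factor p for a
    cubic tiling (illustrative; the definition of p is inferred from the
    quoted numbers, not taken from the paper's equations)."""
    L_tile = L / n_tiles_per_side
    L_scola = L_tile + 2.0 * L_buffer       # side length of one sCOLA box
    r = (L_scola / L_tile) ** 3             # volume simulated / volume of interest
    p = n_tiles_per_side ** 3 / r           # ideal wall-clock gain over tCOLA
    return r, p
```

For the $1~\mathrm{Gpc}/h$ run above, `tiling_factors(1000.0, 8, 29.3)` returns $r \approx 3.17$ and $p \approx 162$, consistent with the values quoted in the text.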
One of the main advantages of our sCOLA algorithm lies in its reduced memory consumption. In figure \ref{fig:timings} (first row), we show the memory requirements for the calculation of LPT potentials in the full box (common for tCOLA and sCOLA), for the evolution of the full box with tCOLA, and for the evolution of each sCOLA box, all in single-precision floating-point format. LPT requires eight grids of size $N^3$ (one for the initial conditions, one for the Zel'dovich potential, and six for the second-order term), occupying $\sim 4.3$~GB. Evolution with tCOLA requires one integer and $12$ floating-point numbers per particle (their identifier, their position $\textbf{x}$, their momentum $\textbf{p}$, and the vectors $\boldsymbol{\Psi}_1$ and $\boldsymbol{\Psi}_2$), plus a PM grid of $1024^3$ voxels, for a total of $\sim 60.1$~GB. Within each box, sCOLA requires the same memory per particle (but with $N_\mathrm{p,sCOLA}^3 \ll N_\mathrm{p}^3$), a PM grid of size $N_\mathrm{g}^3$, and some overhead for Dirichlet boundary conditions. The total is around $400$~MB per sCOLA box with the setup considered here.
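The quoted tCOLA figure can be reproduced with a back-of-the-envelope byte count, assuming 4-byte single-precision values and GB $= 10^9$ bytes (helper names are ours):

```python
def tcola_memory_gb(n_per_side):
    """Memory for monolithic tCOLA evolution: one 4-byte identifier plus
    12 floats per particle (x, p, Psi_1, Psi_2), and one PM grid of the
    same side length as the particle lattice."""
    per_particle = 4 + 12 * 4               # id + 12 single-precision floats
    particles = n_per_side ** 3 * per_particle
    pm_grid = n_per_side ** 3 * 4           # one single-precision grid
    return (particles + pm_grid) / 1e9

def grid_gb(n_per_side):
    """One single-precision grid of n^3 voxels, in GB."""
    return n_per_side ** 3 * 4 / 1e9
```

With `n_per_side = 1024`, `tcola_memory_gb` gives $\sim 60.1$~GB, matching the text, and a single $1024^3$ grid weighs $\sim 4.3$~GB.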
In the second row of figure \ref{fig:timings}, we show the overall cost of tCOLA versus sCOLA, both in terms of CPU time (middle left panel) and wall-clock time (middle right panel). The key feature of our algorithm is that, although the overall CPU time needed is unavoidably higher than with tCOLA, the wall-clock time spent can be drastically reduced. This is due to the degree of parallelism of our algorithm, which is equal to the number of sCOLA boxes. In particular, if as many processes as sCOLA boxes can be allocated ($512$ in this case), the overall wall-clock time is determined by the initial full box operations (common with tCOLA, see section \ref{ssec:Initial conditions and Lagrangian potentials}), plus the cost of evolving only \emph{one} sCOLA box (an average of $30.9$ wall-clock seconds on 32 cores in this test). This is what is shown in the middle right panel of figure \ref{fig:timings}. The wall-clock time reduction factor is $\approx 93$ for the evolution only ($\approx 11$ when accounting for initialisation and writing outputs). Compared to the parallelisation potential factor $p \approx 162$, this number means that sCOLA-specific operations and the larger fractional parallelisation overhead in sCOLA boxes do not significantly hamper the perfectly parallel nature of the code.
The increased CPU time needed with sCOLA (see figure \ref{fig:timings}, middle left panel) is partly due to the necessity of over-simulating the volume of interest by a factor $r>1$ for accuracy. For comparison with the sCOLA CPU time, the height of the white bar shows the tCOLA CPU time multiplied by $r$. The rest of the difference in CPU time principally comes from the fact that simulations with our variant of sCOLA are intrinsically more expensive than with tCOLA for a periodic volume of the same size. This point is further discussed below.
In the third row of figure \ref{fig:timings}, we show the various relative contributions to CPU time and wall-clock time, both for full tCOLA/sCOLA runs and per tCOLA/sCOLA box. The generation of the initial conditions (brown, step \hyperref[ssec:Initial conditions and Lagrangian potentials]{A.1.}) and the writing of outputs to disk (grey) are common to tCOLA and sCOLA and have an overall fixed cost. LPT calculations in the full box (pink) consist of computing the Lagrangian potentials and the particle-based LPT displacements in tCOLA, but are limited to computing the Lagrangian potentials in the full box in the case of sCOLA (step \hyperref[ssec:Initial conditions and Lagrangian potentials]{A.2.}). These full-box operations are only shown in the bars labelled ``tCOLA'' and ``sCOLA''. Within each box, the different operations are evaluating the density field (yellow), solving the Poisson equation to get the gravitational potential (green), differentiating the gravitational potential to get the accelerations (blue), ``kicking'' particles (red), and ``drifting'' particles (purple). sCOLA further requires some specific operations within each box: communicating with the master process (steps \hyperref[ssec:Tiling and buffer region]{B.1.}, \hyperref[ssec:Tiling and buffer region]{B.2.}, and \hyperref[ssec:Initial operations in the sCOLA boxes]{C.1.}) and calculating the particle-based LPT displacements (step \hyperref[ssec:Initial operations in the sCOLA boxes]{C.2.}), grouped together in figure \ref{fig:timings} and shown in orange; and pre-computing the Dirichlet boundary conditions with the LEP approximation (step \hyperref[ssec:Initial operations in the sCOLA boxes]{C.3.}, cyan). sCOLA-specific operations do not contribute more than $10\%$ of the CPU and wall-clock times per box.
A notable difference between evolving a given box with sCOLA or with tCOLA resides in the higher cost of evaluating the potential (green): in this case, $9\%$ of CPU time and $13\%$ of wall-clock time with sCOLA versus $6\%$ of CPU time and $3\%$ of wall-clock time with tCOLA. This effect is due to the use of DSTs, required by the Poisson solver with Dirichlet boundary conditions (see section \ref{ssec:Evolution of sCOLA boxes} and appendix \ref{apx:Poisson solver with Dirichlet boundary conditions}), instead of FFTs. Indeed, depending on the size of the PM grid, the evaluation of DSTs can be the computational bottleneck of our algorithm (up to $60\%$ of overall CPU time in some of our runs), as opposed to the evaluation of the density field (e.g. via CiC) in traditional tCOLA or PM codes ($37\%$ of overall CPU time). For this reason, within each setup, we recommend performing experiments to find a PM grid size giving a good compromise between force accuracy and computational efficiency. In particular, it is strongly preferable that $N_\mathrm{g}+1$ not contain large prime factors (this number appears in the basis functions of sine transforms, see appendix \ref{sapx:Zero-boundary condition Poisson solver}). Throughout this paper, we ensured that $N_\mathrm{g}+1$ is always even, while keeping roughly the same force resolution as the corresponding tCOLA simulation. We note that our choice of $N_\mathrm{g}+1=200$ in the present test, combined with the use of a power of two for the PM grid in the monolithic tCOLA run, favours tCOLA in the comparison of CPU times. The sCOLA CPU time shown in the middle left panel of figure \ref{fig:timings} could be further optimised by making $N_\mathrm{g}+1$ a power of two in sCOLA boxes.
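To illustrate why DSTs take the place of FFTs in each sCOLA box, here is a minimal zero-Dirichlet Poisson solve via type-I sine transforms, written with SciPy for a second-order finite-difference Laplacian. This is a sketch of the general technique, not the code's actual solver (see the paper's appendix for that):

```python
import numpy as np
from scipy.fft import dstn, idstn

def poisson_zero_dirichlet(S, h):
    """Solve lap(phi) = S on a cubic grid of N^3 interior points with
    spacing h and phi = 0 on the boundary, by diagonalising the 1-D
    second-difference operator in the type-I DST basis."""
    N = S.shape[0]
    k = np.arange(1, N + 1)
    # eigenvalues of the 1-D second-difference operator with zero boundaries
    lam = (2.0 * np.cos(np.pi * k / (N + 1)) - 2.0) / h ** 2
    denom = lam[:, None, None] + lam[None, :, None] + lam[None, None, :]
    return idstn(dstn(S, type=1) / denom, type=1)
```

Because the sine basis functions involve $N+1$ (the number of intervals, including the boundary), the transform lengths are $N_\mathrm{g}+1$ rather than $N_\mathrm{g}$, which is why the text recommends avoiding large prime factors in $N_\mathrm{g}+1$.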
\section{Discussion and conclusion}
\label{sec:Conclusion}
\subsection{Discussion}
\label{ssec:Discussion}
The principal computational challenge of the gravitational $N$-body problem is the long-range nature of the gravitational force. Our sCOLA approach enables perfectly parallel computations and therefore opens up profoundly new possibilities for how to compute large-scale cosmological simulations. We discuss these, some consequences and possible future directions in the following.
\paragraph*{Gravity and physics models}
It is important to note that the sCOLA algorithm introduced in this work is general, and not limited to the gravity model used here: while we focused on a tCOLA particle-mesh implementation to evolve the sCOLA tiles, this choice was designed to facilitate the assessment of tiling artefacts against monolithic tCOLA runs. Nonetheless, any $N$-body method, such as particle-particle--particle-mesh, tree methods or AMR, could be used to evolve each tile. In particular, since the sCOLA approach separates quasi-linear and non-linear scales, there is no need to cut off the computation on small scales. In concert with the approaches discussed below, this fact can be exploited to perform very high-resolution, fully non-linear simulations in cosmological volumes. In this case, the spatial decoupling due to sCOLA would render computations possible that would otherwise be prohibitive.
Similar comments apply to including non-gravitational physics: since hydrodynamical or other non-gravitational forces are typically much more local than gravitational interactions, there are no algorithmic barriers to including them in each sCOLA tile.\footnote{A potential exception is long-range radiative transport of energetic (X-ray or gamma ray) photons, requiring a non-trivial extension of the approach.}
\paragraph*{Construction of light-cones and mock catalogues}
The decoupling of computational volumes achieved by our approach means that each sCOLA box can be run completely independently. Therefore, it is not necessary to define a common final redshift for all tiles. This means that to compute a cosmological light-cone, only a single tile (the one containing the observer) needs to be run to redshift zero. Since the volume on the light-cone increases rapidly with redshift, the vast majority of tiles would only have to be run until they intersect the light-cone at high redshift. In monolithic $N$-body simulations, most of the computational time is spent at low redshift, since the local time-step of simulations decreases with the local dynamical time. Our approach would therefore greatly reduce the time needed to complete light-cone simulations, by scheduling tiles in order of the redshift to which they should run (and therefore in reverse order of expected computational time), aiding load-balancing.
The construction of light-cones for surveys with large aspect ratios, such as pencil-beam surveys, can further benefit from sCOLA. Indeed, tiles that do not intersect the three-dimensional survey window do not need to be run at all for the construction of mock catalogues. In such a case, the algorithm will still capture the effects of large-scale transverse modes, even if the simulated volume is not substantially increased with respect to the survey volume.
\paragraph*{Low memory requirements}
sCOLA divides the computational volume into much smaller tiles and vastly reduces the memory footprint of each independent sCOLA tile computation, as shown in section \ref{ssec:Computational cost}. As an example, simulating a ${(16~\mathrm{Gpc}/h)}^3$ volume containing $8192^3$ particles to achieve a mass resolution of $10^{12.5}~M_\odot$ requires $\sim 19.8$~TB of RAM with a PM code and $\sim 33.0$~TB of RAM with tCOLA. The setup $\{L_\mathrm{tile} = 62.5~\mathrm{Mpc}/h, L_\mathrm{buffer} = 62.5 ~\mathrm{Mpc}/h\}$ would break down the problem into $256^3$ tiles, each with $(3\times 32)^3$ particles and a memory footprint of $\sim 53~\mathrm{MB}$. This has important consequences, which we explore in the following.
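The per-tile figure in this example can be checked directly; the helper below (our own) counts only particle storage, while the quoted $\sim 53$~MB additionally includes the PM grid and boundary data:

```python
def tile_particle_memory_mb(n_per_side):
    """Single-precision particle storage of one sCOLA box: one 4-byte
    identifier plus 12 floats (x, p, Psi_1, Psi_2) per particle, in MB."""
    return n_per_side ** 3 * (4 + 12 * 4) / 1e6
```

For the setup above, each sCOLA box holds $(3 \times 32)^3 = 96^3$ particles, so `tile_particle_memory_mb(96)` gives $\sim 46$~MB, consistent with the quoted total once the PM grid and Dirichlet boundary overhead are added.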
The very modest memory requirement of our algorithm opens up multiple possibilities to accelerate the computation: even on traditional systems, the entire computation of each sCOLA tile would fit entirely into the L3 cache of a multi-core processor. This would cut out the slowest parts of the memory hierarchy, leading to a large potential performance boost and reducing code complexity. Even more promising, many such tiles could be evolved entirely independently on GPU accelerators, or even dedicated FPGAs, taking advantage of hybrid architectures of modern computational platforms while reducing the need to develop sophisticated code to manage task parallelism. At this scale, each tile computation would even fit comfortably on ubiquitous small computational platforms such as mobile phones.
\paragraph*{Grid computing}
The perfect scalability achieved by our approach means that large $N$-body simulations can even be run on very inexpensive, strongly asynchronous networks designed for large throughput computing. An extreme example would be participatory computing platforms such as Cosmology@Home,\footnote{\url{https://www.cosmologyathome.org/}} where tens of thousands of users donate computational resources. The use of such platforms would be particularly suited to light-cone computations, as described above. Even if running the low-redshift part necessitates dedicated hardware, other workers could independently compute most of the volume, which lies at high redshift. Only two communication steps are required for each tile: the LPT potentials are received at the beginning, and at the end of the computation each tile returns its final state at the redshift where it intersects the light-cone.
\paragraph*{Node failures}
Robustness to node failure is an important consideration on all very large computational platforms. Even with extremely low failure probability for each node, since the number of nodes is high, the probability that some node fails during the course of a computation becomes high. After its initialisation steps (see section \ref{ssec:Initial conditions and Lagrangian potentials}), our approach is entirely robust to such failure, since any individual tile can be recomputed after the fact on a modest system, for very little cost.
\subsection{Conclusion}
\label{ssec:Conclusion}
In this paper, we introduced a perfectly parallel and easily applicable algorithm for cosmological simulations using sCOLA. Our approach is based on a tiling of the full simulation box, where each tile is run independently. By the use of buffer regions and appropriate Dirichlet boundary conditions, we improved the accuracy of the algorithm with respect to \citet{Tassev2015}. In particular, we showed that suitable setups can reach $3\%$ to $1\%$ accuracy at all the scales simulated, as required for data analysis of the next generation of large-scale structure surveys. In case studies, we tested the relative impact of the two approximations involved in our approach, for density assignment and the boundary gravitational potential. We considered the computational cost of our algorithm and demonstrated that even if the CPU time needed is unavoidably higher, the wall-clock time and memory footprint can be drastically reduced.
This study opens up a wide range of possible extensions, discussed in section \ref{ssec:Discussion}. Benefiting from its perfect scalability, the approach could also allow for novel analyses of cosmological data from fully non-linear models previously too expensive to be tractable. It could straightforwardly be used for the construction of mock catalogues, but also within recently introduced likelihood-free inference techniques such as \textsc{delfi} \citep{Alsing2018}, \textsc{bolfi} \citep{Leclercq2018BOLFI} and \textsc{selfi} \citep{Leclercq2019SELFI}, which have a need for cheap simulator-based data models. We therefore anticipate that sCOLA will become an important tool in computational cosmology for the coming era.
Our perfectly parallel sCOLA algorithm has been implemented in the publicly available \textsc{Simbelmynë} code,\footnote{\url{https://bitbucket.org/florent-leclercq/simbelmyne/}} where it is included in version 0.4.0 and later.
\onecolumngrid
\section{Introduction} In their seminal work \cite{PS} P\'olya and Schur completely characterized all sequences of real numbers $\{\gamma_k\}_{k=0}^{\infty}$ satisfying the following property.
\medskip
\noindent {\bf Property A.} Given any real polynomial
\[
f(x)=\sum_{k=0}^n a_k x^k
\]
with only real zeros, the polynomial
\[
\Gamma[f(x)]=\sum_{k=0}^n a_k\gamma_k x^k
\]
also has only real zeros.
\medskip
\noindent Such sequences are called classical multiplier sequences (of the first kind), where the word `classical' refers to the classical/standard basis for the polynomial ring $\mathbb{R}[x]$. Since the late $19^{th}$ century there has been quite a bit of work done in the area of multiplier sequences. Early contributions were made by C. Hermite, E.N. Laguerre, J. Jensen, G. P\'olya, J. Schur, and P. Tur\'an, while D. Bleecker, T. Craven, and G. Csordas conducted most of their research in this area in the late $20^{th}$ and early $21^{st}$ century. For a list of papers highlighting the contributions of these mathematicians to the theory of multiplier sequences, we refer the reader to the extensive bibliography of \cite{andrzej}.
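As a concrete instance of Property A (our example, not from the original paper): the sequence $\gamma_k = k$ is a classical multiplier sequence, since $\Gamma[f](x) = \sum_k k\,a_k x^k = x f'(x)$, which has only real zeros whenever $f$ does, by Rolle's theorem. A quick numerical check:

```python
import numpy as np

# f(x) = (x-1)(x+2)(x-3) = x^3 - 2x^2 - 5x + 6 has only real zeros.
f_desc = np.poly([1.0, -2.0, 3.0])   # coefficients, highest degree first
a = f_desc[::-1]                     # a_0, ..., a_n
gamma = np.arange(a.size)            # the multiplier sequence gamma_k = k
g_desc = (gamma * a)[::-1]           # coefficients of Gamma[f] = x f'(x)
max_imag = np.max(np.abs(np.roots(g_desc).imag))   # ~0: all zeros real
```

Here $\Gamma[f](x) = 3x^3 - 4x^2 - 5x = x(3x^2 - 4x - 5)$, whose zeros are indeed all real.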
\newline \indent A natural question arising from the study of classical multiplier sequences is the following: which sequences of real numbers $\{\gamma_k\}_{k=0}^{\infty}$ possess the analog of Property A, where we expand our polynomials in a basis different from the standard one? In \cite{andrzej} Piotrowski characterized all multiplier sequences for the Hermite and generalized Hermite (or $\mathcal{H}^{(\alpha)})$ bases with $\alpha >0$. He also gave a characterization of all bases which share multiplier sequences with the standard basis. We now recall this result.
\begin{defn} Let $Q=\seq{q_k(x)}$ be a set of polynomials. $Q$ is called a {\it simple set of polynomials}, if $\deg q_k(x)=k$ for $k=0,1,2,\ldots$.
\end{defn}
\begin{defn} Let $\{\gamma_k\}_{k=0}^{\infty}$ be a sequence of real numbers, and let $Q=\seq{q_k(x)}$ be a simple set of polynomials. If
\[
\Gamma[f(x)]=\sum_{k=0}^n a_k\gamma_k q_k(x)
\]
has only real zeros whenever
\[
f(x)=\sum_{k=0}^n a_k q_k(x)
\]
has only real zeros, we say that $\{\gamma_k\}_{k=0}^{\infty}$ is a $Q$-multiplier sequence.
\end{defn}
\begin{thm}
[Lemma 157 in \cite{andrzej}] \label{andlem} Let $Q = \{q_k(x)\}^{\infty}_{k=0}$ be a simple set of polynomials. Suppose in addition that $\{c_k\}_{k=0}^{\infty}$ is a sequence of non-zero real numbers, $\alpha \in \mathbb{R} \setminus \{0\}$, and $\beta \in \mathbb{R}$. Let $\widehat{Q}= \{\widehat{q}_k(x)\}_{k=0}^{\infty}$, where we define
\[
\widehat{q}_k(x)=c_k q_k(\alpha x+\beta) \qquad \qquad (k=0,1,2,...).
\]
Then $\{\gamma_k\}_{k=0}^{\infty}$ is a $Q$-multiplier sequence if and only if $\{\gamma_k\}_{k=0}^{\infty}$ is a $\widehat{Q}$-multiplier sequence.
\end{thm}
It follows that the only simple sets of polynomials which share multiplier sequences with the standard basis are those obtained from the standard basis by affine transformations as described in Theorem \ref{andlem}.
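\medskip
\noindent To illustrate, choosing $q_k(x)=x^k$, $c_k=1$, $\alpha=2$ and $\beta=-1$ in Theorem \ref{andlem} shows that the simple set $\{(2x-1)^k\}_{k=0}^{\infty}$ has precisely the same multiplier sequences as the standard basis.
\medskip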
\newline \indent Our paper pursues two avenues of investigation. In Section \ref{equivalencethm} we develop an alternative characterization of bases which share multiplier sequences. Instead of affine transformations (as in Theorem \ref{andlem}), our characterization uses the existence of certain geometric multiplier sequences for the two given bases. In Section \ref{partitions} we investigate whether, and in what sense, the set of classical multiplier sequences can be partitioned. We conclude the paper with a list of open problems in Section \ref{concl}.
\section{An equivalence theorem}\label{equivalencethm}
We begin with two simple results.
\begin{lem}\label{powers}
Let $B = \{b_k(x)\}_{k=0}^\infty$ be any simple set of polynomials, and suppose $\{\gamma_k\}_{k=0}^\infty$ is a $B$-multiplier sequence. Then $\{\gamma_k^m\}_{k=0}^\infty$ is also a $B$-multiplier sequence for all $m\in\mathbb{N}.$
\end{lem}
\begin{proof}
Let $f(x) = \sum_{k=0}^nc_kb_k(x)$ be a polynomial with only real zeros. Since $\{\gamma_k\}_{k=0}^\infty$ is a $B$-multiplier sequence, $m$ consecutive applications of this sequence yield the polynomial
\[
p_m(x):=\sum_{k=0}^n\gamma_k^mc_kb_k(x)
\]
with only real zeros. Hence $\{\gamma_k^m\}_{k=0}^\infty$ is a $B$-multiplier sequence for all $m\in\mathbb{N}$.
\end{proof}
\begin{lem} \label{changebasis}
Let $Q = \{q_k\}_{k=0}^\infty$ and $B = \{b_k\}_{k=0}^\infty$ be simple sets of polynomials. Suppose $\alpha>1$, and suppose that $\displaystyle{\left \{\alpha^k\right \}_{k=0}^\infty}$ and $\displaystyle{\left \{\left(1/\alpha\right)^k\right\}_{k=0}^\infty}$ are both $B$-multiplier sequences. Write $q_k(x) = \sum_{j=0}^ka_{k,j}b_j(x)$, and define $\widehat{Q} := \{\widehat{q}_k\}_{k=0}^\infty$ where $\widehat{q}_k = \sum_{j=0}^k \alpha^j a_{k,j}b_j(x)$. Then every $Q$-multiplier sequence is also a $\widehat{Q}$-multiplier sequence.
\end{lem}
\begin{proof}
Let $\{\gamma_k\}_{k=0}^\infty$ be a $Q$-multiplier sequence and let $f(x) = \sum_{k=0}^n c_k\widehat{q}_k(x)$ be a polynomial with only real zeros. After substituting in the expressions for the $\widehat{q}_k$'s we obtain
\[
f(x) = \sum_{k=0}^n \sum_{j=0}^k c_k\alpha^ja_{k,j}b_j(x) = \sum_{j=0}^n \alpha^jb_j(x)\left(\sum_{k=j}^nc_ka_{k,j}\right).
\]
Since $\displaystyle{\left\{\left(\frac{1}{\alpha}\right)^k\right\}_{k=0}^\infty}$ is a $B$-multiplier sequence, the function
\begin{eqnarray*}
f^*(x) &=& \sum_{j=0}^n\left(\frac{1}{\alpha}\right)^j\alpha^jb_j(x)\left(\sum_{k=j}^nc_ka_{k,j}\right) = \sum_{j=0}^n b_j(x)\sum_{k=j}^nc_ka_{k,j} = \sum_{k=0}^n \sum_{j=0}^k c_ka_{k,j}b_j(x) \\ &=& \sum_{k=0}^nc_kq_k(x)
\end{eqnarray*}
also has only real zeros. Applying the $Q$-multiplier sequence $\{\gamma_k\}_{k=0}^\infty$ to $f^*(x)$ gives
\[
\sum_{k=0}^n\gamma_kc_kq_k(x) = \sum_{k=0}^n \sum_{j=0}^k \gamma_kc_ka_{k,j}b_j(x) = \sum_{j=0}^n b_j(x)\sum_{k=j}^n\gamma_kc_ka_{k,j},
\]
which also has only real zeros. Finally, since $\displaystyle{\left \{\alpha^k\right \}_{k=0}^\infty}$ is a $B$-multiplier sequence,
\[
\sum_{j=0}^n \alpha^jb_j(x)\sum_{k=j}^n\gamma_kc_ka_{k,j} = \sum_{k=0}^n \sum_{j=0}^k \gamma_kc_k\alpha^ja_{k,j}b_j(x) = \sum_{k=0}^n\gamma_kc_k\widehat{q}_k(x)
\]
has only real zeros, establishing that $\{\gamma_k\}_{k=0}^\infty$ is a $\widehat{Q}$-multiplier sequence. The proof is complete.
\end{proof}
We are now in position to prove our main theorem.
\begin{thm} \label{classicalequivalence} Let $B\,=\,\{b_k(x)\}_{k=0}^\infty$ and $Q = \{q_k(x)\}_{k=0}^\infty$ be simple sets of polynomials. If there exists an $\alpha>1$ such that both $\{\alpha^k\}_{k=0}^{\infty}$ and $\{\left(\frac{1}{\alpha}\right)^k\}_{k=0}^\infty$ are $B$-multiplier sequences, then every $Q$-multiplier sequence is also a $B$-multiplier sequence.
\end{thm}
\begin{rem}\label{allclassical} Before we prove the theorem, we point out that in the statement of the theorem, the simple set $Q$ is completely arbitrary. Thus if an $\alpha>1$ as in the statement of the theorem exists, then {\it every $Q$-multiplier sequence for every simple set $Q$} is a $B$-multiplier sequence.
\end{rem}
\begin{proof}[Proof of Theorem \ref{classicalequivalence}]
\noindent For each $k=0,1,2,3,...$ we can choose $a_{k,j} \in \mathbb R$ such that
\begin{eqnarray*}
q_k(x)\,&=&\,\sum_{j=0}^ka_{k,j}b_j(x).
\end{eqnarray*}
Suppose that $\alpha>1$ is as in the statement of the theorem, and define
\[
\alpha_{\ell} := \alpha^{\ell} \qquad \forall\ell \in \mathbb{N}.
\]
From Lemma \ref{powers} we know that $\{\alpha_{\ell}^k\}_{k=0}^\infty$ is a $B$-multiplier sequence for all $\ell\in\mathbb{N}$. Suppose $\{\gamma_k\}_{k=0}^\infty$ is a $Q$-multiplier sequence. By Lemma \ref{changebasis} $\{\gamma_k\}_{k=0}^\infty$ is also a multiplier sequence for each of the sets
\[
Q_{\alpha_{\ell}} = \left \{q_k^{\alpha_{\ell}}(x)\right\}_{k=0}^\infty = \left\{\sum_{j=0}^k \alpha_{\ell}^j a_{k,j}b_j(x)\right\}_{k=0}^\infty.
\]
By Theorem \ref{andlem}, $\{\gamma_k\}_{k=0}^\infty$ is then also a multiplier sequence for each of the sets
\[
Q_{\alpha_{\ell}}^*:=\left \{p_k^{\alpha_{\ell}}\right\}_{k=0}^\infty=\left\{\dfrac{1}{\alpha_{\ell}^k a_{k,k}}q_k^{\alpha_{\ell}}(x)\right\}_{k=0}^\infty=\left \{\sum_{j=0}^k\dfrac{a_{k,j}b_j(x)}{a_{k,k}\alpha_{\ell}^{k-j}}\right \}_{k=0}^\infty.
\]
\noindent Suppose now that $f(x)\,=\,\sum_{k=0}^nm_kb_k(x)$ has only real zeros. For each $\ell\in\mathbb{N}$ we can expand $f$ in the basis $Q_{\alpha_{\ell}}^*$:\\
\begin{displaymath}
f(x)\,=\,\sum_{k=0}^nc_{{\alpha_{\ell}},k}p_k^{\alpha_{\ell}}(x).
\end{displaymath}
Since $\{\gamma_k\}_{k=0}^{\infty}$ is a $Q_{\alpha_{\ell}}^*$-multiplier sequence for every $\ell$, $f_{\alpha_{\ell}}(x)\,:=\,\sum_{k=0}^nc_{{\alpha_{\ell}},k}\gamma_kp_k^{\alpha_{\ell}}(x)$ has only real zeros for each $\ell \in \mathbb{N}$.
\medskip
\noindent The three claims constituting the remainder of the proof are dedicated to showing that
\[
f_{\alpha_{\ell}}(x) \longrightarrow \sum_{k=0}^nm_k\gamma_k{b_k(x)}
\]
locally uniformly. This fact, together with (i) the fact that each $f_{\alpha_{\ell}}(x)$ has only real zeros and (ii) Hurwitz' theorem, will establish that $\sum_{k=0}^nm_k\gamma_k{b_k(x)}$ has only real zeros, and hence $\{\gamma_k\}_{k=0}^{\infty}$ is a B-multiplier sequence.
\medskip
\noindent \underline{{\bf Claim 1}} \quad As $\ell \to \infty$, $p_k^{\alpha_{\ell}}$ converges locally uniformly to ${b_k(x)}$ for every $k=0,1,2, \ldots, n$.
\noindent {\bf Reason:} \quad Let $K$ be a compact subset of $\mathbb{C}$, let $\epsilon>0$ be given and set $\displaystyle{R=\max_{k,\ x \in K}|b_k(x)|}$. We calculate
\begin{eqnarray*}
|p_k^{\alpha_{\ell}}-{b_k(x)}|&=&\left|\sum_{j=0}^k\dfrac{a_{k,j}{b_j(x)}}{a_{k,k}{\alpha_{\ell}}^{k-j}}-{b_k(x)}\right|\\
&=&\left|{b_k(x)}+\sum_{j=0}^{k-1}\dfrac{a_{k,j}{b_j(x)}}{a_{k,k}{\alpha_{\ell}}^{k-j}}-{b_k(x)}\right|\\
&=&\left|\sum_{j=0}^{k-1}\dfrac{a_{k,j}{b_j(x)}}{a_{k,k}{\alpha_{\ell}}^{k-j}}\right|\\
&\leq&\sum_{j=0}^{k-1}\left|\dfrac{a_{k,j}}{a_{k,k}{\alpha_{\ell}}^{k-j}}\right||{b_j(x)}|\\
&\leq&\sum_{j=0}^{k-1}\left|\dfrac{a_{k,j}}{a_{k,k}{\alpha_{\ell}}^{k-j}}\right|R<\epsilon \qquad \forall x \in K, \quad \ell \gg 1
\end{eqnarray*}
since $\alpha_{\ell} \rightarrow \infty$ as $\ell \rightarrow \infty$. This establishes Claim 1.
\medskip
\noindent \underline{{\bf Claim 2}} \quad $\displaystyle{\lim_{\ell \to \infty} c_{\alpha_{\ell},k}=m_k}$ for all $k=0,1,2,3,\ldots,n$.
\noindent {\bf Reason:} \quad By expanding $f(x)$ in the $Q_{\alpha_{\ell}}^*$ and $B$ bases we get
\[
\sum_{k=0}^nc_{{\alpha_{\ell}},k}p_k^{\alpha_{\ell}}(x)=\sum_{k=0}^nm_k{b_k(x)},
\]
which, after writing the $p_k^{\alpha_{\ell}}(x)$s in terms of the $b_k(x)$s, gives
\[
(\dag) \quad \sum_{k=0}^nc_{{\alpha_{\ell}},k}\left(\sum_{j=0}^k\dfrac{a_{k,j}b_j(x)}{a_{k,k}\alpha_{\ell}^{k-j}} \right)=\sum_{k=0}^nm_k{b_k(x)}.
\]
Note that the coefficients of the $n^{th}$ degree terms on either side of this equation are $c_{\alpha_{\ell},n}$ and $m_n$ respectively. Thus we must in fact have $c_{\alpha_{\ell},n}=m_n$.
Using this, we rewrite $(\dag)$ as
\[
m_n\left(\sum_{j=0}^n\dfrac{a_{n,j}b_j(x)}{a_{n,n}\alpha_{\ell}^{n-j}} \right)+\sum_{k=0}^{n-1}c_{{\alpha_{\ell}},k}\left(\sum_{j=0}^k\dfrac{a_{k,j}b_j(x)}{a_{k,k}\alpha_{\ell}^{k-j}} \right)=m_nb_n(x)+\sum_{k=0}^{n-1}m_k{b_k(x)}.
\]
We now look at the coefficients of the degree $n-1$ terms, and conclude that
\[
m_n \frac{a_{n,n-1}}{a_{n,n} \alpha_{\ell}}+c_{\alpha_{\ell},n-1}=m_{n-1},
\]
which implies that $\displaystyle{\lim_{\ell \to \infty} c_{\alpha_{\ell},n-1}=m_{n-1}}$. Continuing in this fashion until the process terminates we get that
\[
\lim_{\ell \to \infty} c_{\alpha_{\ell},k}=m_k \qquad (k=0,1,2,3,\ldots,n)
\]
as desired.
\medskip
\noindent \underline{{\bf Claim 3}} $f_{\alpha_{\ell}}(x)$ converges locally uniformly to $\sum_{k=0}^nm_k\gamma_k{b_k(x)}$ as $\ell \to \infty$.
\noindent {\bf Reason:} \quad Let $K$ be a compact subset of $\mathbb{C}$, and let $\epsilon>0$ be given.
\begin{eqnarray*}
\left|f_{\alpha_{\ell}}(x)-\sum_{k=0}^nm_k\gamma_k{b_k(x)}\right|&=&\left|\sum_{k=0}^nc_{{\alpha_{\ell}},k}\gamma_kp_k^{\alpha_{\ell}}(x)-\sum_{k=0}^nm_k\gamma_k{b_k(x)}\right|\\
&=&\left|\sum_{k=0}^n\left(c_{{\alpha_{\ell}},k}\gamma_kp_k^{\alpha_{\ell}}(x)-m_k\gamma_k{b_k(x)}\right)\right|\\
&\leq&C\sum_{k=0}^n\left|c_{{\alpha_{\ell}},k}p_k^{\alpha_{\ell}}(x)-m_k{b_k(x)}\right|,
\end{eqnarray*}
where $\displaystyle{C=\max \{|\gamma_0|, |\gamma_1|, \ldots, |\gamma_n| \}}$. In addition,
\begin{eqnarray*}
\left|c_{{\alpha_{\ell}},k}p_k^{\alpha_{\ell}}(x)-m_k{b_k(x)}\right|&\leq& \left|c_{{\alpha_{\ell}},k}p_k^{\alpha_{\ell}}(x)-c_{{\alpha_{\ell}},k}b_k(x)\right|+\left|c_{{\alpha_{\ell}},k}b_k(x)-m_k{b_k(x)}\right|\\
&=&\left|c_{{\alpha_{\ell}},k}\right| \left|p_k^{\alpha_{\ell}}(x)-b_k(x)\right|+\left|c_{{\alpha_{\ell}},k}-m_k\right| \left|{b_k(x)}\right|\\
&< & \frac{\epsilon}{(n+1)(C+1)} \qquad (\ell \gg 1)
\end{eqnarray*}
as a consequence of Claims 1 \& 2. Thus, for all $x \in K$, and $\ell \gg 1$, we have
\[
\left|f_{\alpha_{\ell}}(x)-\sum_{k=0}^nm_k\gamma_k{b_k(x)}\right| <\epsilon,
\]
which establishes Claim 3, and completes the proof of Theorem \ref{classicalequivalence}.
\end{proof}
\begin{rem} \label{simplesetcontHermite} The converse of Theorem \ref{classicalequivalence} is false in general. Consider the containments
\begin{eqnarray*} \set{ \textrm{Generalized Laguerre multiplier sequences}} &\subsetneq& \set{\textrm{Hermite multiplier sequences}} \\
\set{\textrm{Legendre multiplier sequences}} &\subsetneq& \set{\textrm{Hermite multiplier sequences}}
\end{eqnarray*}
established in \cite{tomandrzej} and in \cite{bdfu} respectively. These containments coupled with the fact that $\seq{r^k}$ is a Hermite multiplier sequence if and only if $r \geq 1$ (Theorem 127 in \cite{andrzej}) provide bases $Q$ and $B$ for which the converse of Theorem \ref{classicalequivalence} fails.
\end{rem}
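\medskip
\noindent Explicitly, taking $Q$ to be the generalized Laguerre basis and $B$ the Hermite basis, every $Q$-multiplier sequence is a $B$-multiplier sequence by the first containment above, and yet no $\alpha>1$ as in Theorem \ref{classicalequivalence} can exist for $B$, since $\seq{(1/\alpha)^k}$ fails to be a Hermite multiplier sequence whenever $\alpha>1$.
\medskip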
Nonetheless, if the set $Q$ is the standard basis, the converse of Theorem \ref{classicalequivalence} does hold, as we next demonstrate.
\begin{thm} \label{converse} Let $B=\{b_k(x)\}_{k=0}^\infty$ be a simple set of polynomials. If every classical multiplier sequence is a $B$-multiplier sequence, then there exists an $\alpha>1$ such that $\displaystyle{\left \{\alpha^k\right \}_{k=0}^\infty}$ and $\displaystyle{\left \{\left(1/\alpha\right)^k\right\}_{k=0}^\infty}$ are both $B$-multiplier sequences.
\end{thm}
\begin{proof}
Suppose that every classical multiplier sequence is a $B$-multiplier sequence and suppose that $\displaystyle{ f(x) = \sum_{k=0}^n a_k x^k}$ has only real zeros. Since the transformation $x\mapsto\beta x$ preserves reality of zeros for all non-zero, real $\beta$, we have that for all $\alpha>1$,\quad $f(\alpha x)=\displaystyle{\sum_{k=0}^n\alpha^k a_k x^k}$ and $f\left(\frac{x}{\alpha}\right)=\displaystyle{\sum_{k=0}^n}\frac{1}{\alpha^k}a_kx^k$ both have only real zeros. Thus both $\displaystyle{\left \{\alpha^k\right \}_{k=0}^\infty}$ and $\displaystyle{\left \{\left(1/\alpha\right)^k\right\}_{k=0}^\infty}$ are classical multiplier sequences, and therefore $B$-multiplier sequences.
\end{proof}
It is a natural question to ask whether the existence of an $\alpha >1$ as in Theorems \ref{classicalequivalence} \& \ref{converse} implies the existence of more than one such $\alpha$. We settle this question in the next Proposition.
\begin{prop}
Let $B=\{b_k(x)\}_{k=0}^\infty$ be a simple set of polynomials. The following are equivalent:
\begin{itemize}
\item[(i)] There exists an $\alpha >1$ such that $\displaystyle{\left \{\alpha^k\right \}_{k=0}^\infty}$ and $\displaystyle{\left \{\left(1/\alpha\right)^k\right\}_{k=0}^\infty}$ are both $B$-multiplier sequences.
\item[(ii)] $\displaystyle{\left \{\alpha^k\right \}_{k=0}^\infty}$ and $\displaystyle{\left \{\left(1/\alpha\right)^k\right\}_{k=0}^\infty}$ are $B$-multiplier sequences for all $\alpha>1$.
\end{itemize}
\end{prop}
\begin{proof} It is clear that $(ii)$ implies $(i)$. To see why $(i)$ implies $(ii)$, suppose that there exists an $\alpha >1$ such that $\displaystyle{\left \{\alpha^k\right \}_{k=0}^\infty}$ and $\displaystyle{\left \{\left(1/\alpha\right)^k\right\}_{k=0}^\infty}$ are both $B$-multiplier sequences. By Theorem \ref{classicalequivalence} (and the remark following the statement of the theorem), every $Q$-multiplier sequence for every simple set $Q$ is a $B$-multiplier sequence. In particular, every classical multiplier sequence is a $B$-multiplier sequence. But by Theorem \ref{converse}, $\displaystyle{\left \{\alpha^k\right \}_{k=0}^\infty}$ and $\displaystyle{\left \{\left(1/\alpha\right)^k\right\}_{k=0}^\infty}$ are classical multiplier sequences for all $\alpha>1$, and hence they are also $B$-multiplier sequences for all $\alpha >1$.
\end{proof}
We close this section with a classification theorem, which is an immediate consequence of the results contained in the section.
\begin{thm} \label{classicalclassification} Let $B=\{b_k(x)\}_{k=0}^\infty$ be a simple set of polynomials. The set of $B$-multiplier sequences coincides with the set of classical multiplier sequences if and only if there exists an $\alpha >1$ such that $\displaystyle{\left \{\alpha^k\right \}_{k=0}^\infty}$ and $\displaystyle{\left \{\left(1/\alpha\right)^k\right\}_{k=0}^\infty}$ are both $B$-multiplier sequences. \end{thm}
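\medskip
\noindent As an illustration, for any basis $\widehat{Q}=\{c_k (\alpha x+\beta)^k\}_{k=0}^\infty$ obtained from the standard basis as in Theorem \ref{andlem}, both $\{2^k\}_{k=0}^\infty$ and $\{(1/2)^k\}_{k=0}^\infty$ are $\widehat{Q}$-multiplier sequences, and Theorem \ref{classicalclassification} recovers the fact that the $\widehat{Q}$-multiplier sequences are precisely the classical multiplier sequences.
\medskip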
\section{Partitions}\label{partitions}
A quick observation reveals that given any $\alpha >1$, both the sequence $\{\alpha^k\}_{k=0}^{\infty}$ and the sequence $\displaystyle{\left\{\left(1/\alpha \right)^k\right \}_{k=0}^{\infty}}$ are classical multiplier sequences. By the remark following Theorem \ref{classicalequivalence} we conclude that if $Q$ is a simple set of polynomials, then every $Q$-multiplier sequence must be a classical multiplier sequence. As a result, $Q$-multiplier sequences inherit a list of properties from the classical multiplier sequences. In particular, if $Q$ is a simple set of polynomials and $\seq{\gamma_k}$ is a $Q$-multiplier sequence, then the following hold:
\begin{itemize}
\item[(i)] if there exist integers $n> m \geq 0$ such that $\gamma_m \neq 0$ and $\gamma_n=0$, then $\gamma_k=0$ for all $k \geq n$,
\item[(ii)] the terms of $\{\gamma_k\}_{k=0}^{\infty}$ are either all of the same sign, or they alternate in sign,
\item[(iii)] for any $r \in \mathbb{R}$, the sequence $\{r \gamma_k\}_{k=0}^{\infty}$ is also a $Q$-multiplier sequence,
\item[(iv)] the terms of $\{\gamma_k\}_{k=0}^{\infty}$ satisfy Tur\'an's inequality
\[
\gamma_k^2-\gamma_{k-1}\gamma_{k+1} \geq 0 \quad \quad (k=1,2,3, \ldots).
\]
\end{itemize}
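\medskip
\noindent For instance, for a geometric sequence $\{\alpha^k\}_{k=0}^{\infty}$ Tur\'an's inequality (iv) holds with equality, since $\alpha^{2k}-\alpha^{k-1}\alpha^{k+1}=0$ for every $k \geq 1$.
\medskip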
Since the set of classical multiplier sequences contains all $Q$-multiplier sequences for every simple set $Q$, we ask whether there exists a collection of simple sets of polynomials $\left\{ Q_j \right \}$ such that the sets of $Q_j$-multiplier sequences form a partition of the set of all classical multiplier sequences. One immediately notices that this cannot be true in the traditional sense of a partition, since the intersection of the sets of $Q_j$-multiplier sequences is trivially non-empty: it contains all constant sequences. In what follows, we establish that this intersection contains very little more than the constant sequences. We show that a logical choice for a `minimal' partition consists of two sets of generalized Hermite polynomials $\mathcal{H}^{(\alpha)}$: one with $\alpha >0$ and one with $\alpha <0$, and exhibit that, despite its appeal, this choice fails to produce the required partition.
\medskip
\noindent {\bf Notation:} For ease of exposition, given a simple set of polynomials $Q$, we denote by $Q_{MS}$ the set of $Q$-multiplier sequences. In addition, we shall write $SSP$ for the set of all simple sets of polynomials.
\medskip
We begin with a useful result which allows us to decide whether a classical multiplier sequence is a geometric sequence just by looking at the first three terms of the sequence.
\begin{prop}\label{geometric}
Suppose $\{\gamma_k\}_{k=0}^\infty$ is a classical multiplier sequence with the first three terms in geometric progression, i.e.,
\begin{equation}\label{geomeq}
\frac{\gamma_1}{\gamma_0} = \frac{\gamma_2}{\gamma_1} = \alpha
\end{equation}
for some $\alpha \in \mathbb{R}$. Then $\{\gamma_k\}_{k=0}^\infty$ is a geometric sequence, where
\[\gamma_n = \gamma_0\alpha^n\]
for all $n \in \mathbb{N}$.
\end{prop}
\begin{proof}
We proceed by induction. Let $n > 2$, and assume that $\gamma_m = \gamma_0\alpha^m$ for all $m<n$. (We are given that this is true for $n = 3$). We will show that $\gamma_n = \gamma_0\alpha^n.$\\
From the algebraic characterization of multiplier sequences,
\[\sum_{k=0}^n\gamma_k\binom{n}{k}x^k\]
must have only real zeros. But this is equivalent to saying
\begin{eqnarray*}
\sum_{k=0}^n \gamma_{n-k}\binom{n}{k}x^k &=& \sum_{k=1}^n \gamma_{n-k}\binom{n}{k}x^k + \gamma_n \\
&=& \sum_{k=1}^n \gamma_0\alpha^{n-k}\binom{n}{k}x^k + \gamma_n \\
&=& \gamma_0\left(x+\alpha\right)^n + \left(\gamma_n - \gamma_0\alpha^n\right)
\end{eqnarray*}
has only real zeros. The transformation $(x+\alpha)\mapsto x$ preserves reality of zeros, so
\[\gamma_0x^n + \left(\gamma_n - \gamma_0\alpha^n\right)\]
must have only real zeros. But for $n>2$, a polynomial of the form $ax^n + b$ with $a \neq 0$ has only real zeros if and only if $b = 0$. So $\gamma_n - \gamma_0\alpha^n$ must be zero, hence $\gamma_n = \gamma_0\alpha^n$.\end{proof}
\begin{rem}
With the choice $\alpha = 1$ in equation (\ref{geomeq}) we obtain that a classical multiplier sequence $\seq{\gamma_k}$ with $\gamma_0 = \gamma_1 = \gamma_2$ must be a constant sequence.
\end{rem}
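\medskip
\noindent For example, any classical multiplier sequence whose first three terms are $1, 2, 4$ must, by Proposition \ref{geometric}, be the geometric sequence $\{2^k\}_{k=0}^{\infty}$.
\medskip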
\begin{thm} Let $\displaystyle{S=\bigcap_{Q \in SSP} Q_{MS}}$. If $\{\gamma_k\}_{k=0}^\infty \in S$, then one of the following holds:
\begin{itemize}
\item[(i)] $\{\gamma_k\}_{k=0}^\infty$ is a constant sequence, or
\item[(ii)] $\{\gamma_k\}_{k=0}^\infty=\{ \gamma_0, \gamma_1, 0,0,0,\ldots \}$
\end{itemize}
\end{thm}
\begin{proof} Suppose $\{\gamma_k\}_{k=0}^\infty \in S$. Since a linear polynomial has only real zeros, it is immediate that sequences of the form $\{ \gamma_0, \gamma_1, 0,0,0,\ldots \}$ are in $S$. Thus we proceed by assuming that $\{\gamma_k\}_{k=0}^\infty \in S$ is not of this type and we show that in that case it must be a constant sequence. To this end consider the following three simple sets of polynomials:
\begin{eqnarray*}
Q_1&=&\left\{ 1,x,x+x^2,x^3,x^4, \ldots, x^n, \ldots \right\} \\
Q_2&=&\left\{1,x+1, x^2+x+1, x^3, x^4, \ldots, x^n,\ldots \right\} \qquad \textrm{and} \\
Q_3&=&\left\{ 1,x,1+x^2,x^3,x^4, \ldots, x^n, \ldots \right\}.
\end{eqnarray*}
The polynomial $p(x)=4x^2+4x+1$ has only real zeros. Since $\{\gamma_k\}_{k=0}^\infty$ is a $Q_1$ multiplier sequence, it follows that $\tilde{p}(x)=\gamma_0+4 \gamma_2 x+4 \gamma_2x^2$ has only real zeros as well, and hence we must have
\[
16\gamma_2^2-16\gamma_2\gamma_0=16 \gamma_2 (\gamma_2-\gamma_0) \geq 0.
\]
This implies that $\gamma_2$ and $\gamma_2-\gamma_0$ have the same sign, or $\gamma_2=\gamma_0$. Expanding $f(x)=x^2$ in $Q_3$ and applying $\{\gamma_k\}_{k=0}^\infty$ leads to the polynomial $\gamma_2x^2+(\gamma_2-\gamma_0)$, which should have only real zeros, since $\{\gamma_k\}_{k=0}^\infty$ is also a $Q_3$ multiplier sequence. Thus $\gamma_2$ and $\gamma_2-\gamma_0$ must have opposite signs, unless $\gamma_2=\gamma_0$. We conclude that $\gamma_0=\gamma_2$. Note that $\gamma_0=\gamma_2 \neq 0$, since we assumed that our sequence $\{\gamma_k\}_{k=0}^\infty$ is not of the second type. Thus, we may assume that (after perhaps a division by a constant factor) $\gamma_0=\gamma_2=1$. Let
\[
f(x) = ax^2 + bx + \frac{b^2}{4a}
\] where $a = \gamma_1 + 1$ and $b = \gamma_1 + 2$. Then $f(x)$ has only real zeros, since its discriminant is $b^2 - 4a\cdot\frac{b^2}{4a} = 0$. Expanding $f(x)$ in terms of the basis $Q_2$ we get
\[
f(x) = a(x^2 + x + 1) + (x + 1) + \left(\frac{b^2}{4a} - b\right).
\]
Since $\{\gamma_k\}_{k=0}^\infty$ is also a $Q_2$ multiplier sequence, we see that the polynomial
\begin{eqnarray}
f^*(x) &=& a(x^2 + x + 1) + \gamma_1(x + 1) + \left(\dfrac{b^2}{4a} - b\right)\nonumber\\
&=& ax^2 + (a + \gamma_1)x + a + \gamma_1 + \dfrac{b^2}{4a} - b\nonumber\\
&=& ax^2 + (2\gamma_1 + 1)x + \left(\gamma_1 - 1 + \dfrac{b^2}{4a}\right)\nonumber
\end{eqnarray}
also has only real zeros. Hence the discriminant of $f^*$, given by $\Delta=1-\gamma_1^2$, must be non-negative. Consequently $\gamma_1^2 \leq 1$. On the other hand, by Tur\'an's inequality, $\gamma^2_1 \geq 1$. Thus $\gamma_1^2=1$, from which we conclude that $\gamma_1=1$. (Note that $\gamma_1\neq-1$ since $f(x) = x^2 = (x^2 + x + 1) - (x + 1)$ has only real zeros, but $f^*(x) = (x^2 + x + 1) + (x + 1) = x^2 + 2x + 2 = (x + 1)^2 + 1$ does not have any real zeros.) Finally, by invoking Proposition \ref{geometric} we see that $\{\gamma_k\}_{k=0}^\infty$ is a geometric sequence with $\alpha=1$, and is hence a constant sequence. The proof is complete.
\end{proof}
Next we give a quick generalization of Theorem 5.6 in \cite{tomandrzej} by proving the following, somewhat surprising fact: if $\seq{\gamma_k}$ is a multiplier sequence for a simple set of polynomials whose elements have only simple real zeros, then it is an $\mathcal{H}^{(\alpha)}$-multiplier sequence\footnote{In \cite{andrzej} Piotrowski shows that the set of Hermite multiplier sequences coincides with the set of $\mathcal{H}^{(\alpha)}$-multiplier sequences for every $\alpha > 0$.} for every $\alpha >0$. In particular, if $Q$ is a simple set of orthogonal polynomials, then every $Q$-multiplier sequence is a Hermite multiplier sequence (for more on orthogonal polynomials see \cite{szego}).
\begin{thm} \label{Hermiteallcontain} Suppose that $Q=\seq{q_k(x)}$ is a simple set of polynomials such that each $q_k(x)$ has only simple real zeros, normalized so that the leading coefficient of each $q_k(x)$ is positive. If $\seq{\gamma_k}$ is a $Q$-multiplier sequence, then $\seq{\gamma_k}$ is a Hermite multiplier sequence.
\end{thm}
\begin{proof} It suffices to establish the result for non-negative sequences. If the result holds for all non-negative sequences, and $\seq{\gamma_k}$ is any $Q$-multiplier sequence, then either
\[
\seq{\gamma_k}, \seq{-\gamma_k}, \seq{(-1)^k\gamma_k} \ \textrm{or} \ \seq{(-1)^{k+1}\gamma_k}
\]
is a non-negative sequence, and is hence a Hermite multiplier sequence. But then $\seq{\gamma_k}$ is itself a Hermite multiplier sequence as a consequence of properties (ii) and (iii) at the beginning of this section. Thus for the rest of the proof we assume that $\seq{\gamma_k}$ is a non-negative Q-multiplier sequence. Observe that the set
\[
E_n=\set{b \in \mathbb{R} \ \big| \ q_n(x)+b q_{n-2}(x) \quad \textrm{has only real zeros}} \qquad (n \geq 2)
\]
is (i) closed, essentially as a consequence of Hurwitz' theorem, and (ii) bounded above by a positive number, since
\[
\frac{d^{n-2}}{dx^{n-2}}\left( q_n(x)+bq_{n-2}(x) \right)=k_1 x^2+ k_2x+bk_3 \qquad (k_1,k_3 >0)
\]
has complex zeros for large enough $b$, and hence so does $q_n(x)+bq_{n-2}(x)$. The fact that the upper bound is positive follows from the simplicity of the zeros of $q_n(x)$, for this condition implies that $(-\varepsilon_n,\varepsilon_n) \subset E_n$ for some $\varepsilon_n >0$. If $\seq{\gamma_k}$ is of the form
\[
\ldots, 0,0,\gamma_n, \gamma_{n+1},0,0,\ldots
\]
for some $n \in \mathbb{N}$, then it is automatically a (trivial) Hermite multiplier sequence. If $\seq{\gamma_k}$ is not of this form, then there must exist an $m\in \mathbb{N}$ such that $\gamma_k=0$ for $k<m$ and $\gamma_k \neq 0$ for $k \geq m$. Using this fact, we show that $\seq{\gamma_k}$ must be non-decreasing. To this end let $b_n=\max E_n$, and consider the polynomial
\[
q_n(x)+b_nq_{n-2}(x),
\]
which has only real zeros. Since $\seq{\gamma_k}$ is a Q-multiplier sequence, it follows that
\[
\gamma_n q_n(x)+\gamma_{n-2}b_n q_{n-2}(x)=\gamma_n \left(q_n(x)+\frac{\gamma_{n-2}}{\gamma_n}b_n q_{n-2}(x) \right)
\]
has only real zeros as well. By the maximality of $b_n$ we must then have $\displaystyle{0<\frac{\gamma_{n-2}}{\gamma_n} \leq 1 }$. On the other hand, using Tur\'an's inequality we see that
\[
\left(\frac{\gamma_{n-1}}{\gamma_{n-2}} \right)^2 \geq \frac{\gamma_{n}}{\gamma_{n-2}} \geq 1,
\]
which in turn implies that $\gamma_{n-1} \geq \gamma_{n-2}$ for $n \geq m+2$. Since the same inequality holds trivially for $n <m+2$, we conclude that $\seq{\gamma_k}$ is non-decreasing. Finally, by Proposition 151 in \cite{andrzej}, every non-decreasing non-negative classical multiplier sequence is a Hermite multiplier sequence. The proof is complete.
\end{proof}
The preceding theorem together with Proposition 151 in \cite{andrzej} suggest that when trying to partition the set of all classical multiplier sequences, one should look to the generalized Hermite bases. In line with this suggestion, at the end of his dissertation, Piotrowski posed the following problem (Problem 165, p. 152):
\begin{quotation} If $\seq{\gamma_k}$ is a classical multiplier sequence, then does there exist a non-zero real constant $\alpha$ such that $\seq{\gamma_k}$ is an $\mathcal{H}^{(\alpha)}$-multiplier sequence?
\end{quotation}
We are now in position to answer this question in the negative.
\begin{lem} \label{noninc} Let $\seq{\gamma_k}$ be a classical multiplier sequence of non-negative terms. If there exists an $n \in \mathbb{N}$ such that $\gamma_n \leq \gamma_{n-1}$, then $\gamma_k \leq \gamma_{k-1}$ for all $k \geq n$.
\end{lem}
\begin{proof} Suppose that $n\in \mathbb{N}$ is such that $\gamma_n \leq \gamma_{n-1}$. Then
\[
\gamma_n(\gamma_n-\gamma_{n+1}) \geq \gamma_n^2-\gamma_{n-1}\gamma_{n+1} \stackrel{*}{\geq} 0,
\]
where the starred inequality is Tur\'an's inequality. If $\gamma_n=0$ then $\gamma_k=0$ for all $k\geq n$. Otherwise we conclude that $\gamma_{n+1} \leq \gamma_n$. The result follows.
\end{proof}
\begin{prop} Let $\alpha <0$. If a sequence of non-negative terms $\seq{\gamma_k}$ is a $\mathcal{H}^{(\alpha)}$-multiplier sequence, then $\gamma_{k+1} \leq \gamma_k$ for $k \geq1$.
\end{prop}
\begin{proof} If $\gamma_2=0$, the claim follows immediately. Suppose now that $\gamma_2 \neq 0$ and let $\Gamma=\seq{\gamma_k}$ be a sequence as in the statement of the proposition. Consider the polynomial
\[
x^2=\mathcal{H}^{(\alpha)}_2+\alpha \mathcal{H}^{(\alpha)}_0.
\]
Then
\[
\Gamma[x^2]=\gamma_2x^2+\alpha(\gamma_0-\gamma_2)
\]
has only real zeros if and only if $\gamma_0 \geq \gamma_2$. Note that we must have either $\gamma_1\geq \gamma_0\geq \gamma_2$ or $\gamma_0\geq \gamma_1 \geq \gamma_2$. In both cases $\gamma_1\geq \gamma_2$, and hence by Lemma \ref{noninc} $\gamma_{k+1} \leq \gamma_k$ for $k\geq 1$.
\end{proof}
\begin{lem} \label{nowhere}The sequence $\Gamma=\displaystyle{\left\{\frac{1}{8}, 1, 2, 0 ,0,\ldots \right\}}$ is a classical multiplier sequence.
\end{lem}
\begin{proof} By the classification theorem for classical multiplier sequences in \cite{PS}, it is enough to show that $\Gamma[(1+x)^n]$ has only real, non-positive zeros for $n \geq 1$. The result is immediate if $n=1$. For $n \geq 2$ we have that
\[
\Gamma[(1+x)^n]=\frac{1}{8}+n x+n(n-1)x^2
\]
is a quadratic polynomial with roots
\[
r_{1,2}=\frac{-n \pm \sqrt{n(n+1)/2}}{2n (n-1)},
\]
both of which are negative. The proof is complete.
\end{proof}
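\medskip
\noindent As a quick check, when $n=2$ the lemma yields $\Gamma[(1+x)^2]=\frac{1}{8}+2x+2x^2$, whose zeros $\frac{-2\pm\sqrt{3}}{4}$ are indeed both real and negative.
\medskip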
\begin{prop} There does not exist a non-zero constant $\alpha$ such that the sequence $\displaystyle{\left\{\frac{1}{8}, 1, 2, 0 ,0,\ldots \right\}}$ is an $\mathcal{H}^{(\alpha)}$-multiplier sequence.
\end{prop}
\begin{proof} If $\alpha >0$, every $\mathcal{H}^{(\alpha)}$-multiplier sequence must be non-decreasing. If $\alpha<0$, then every $\mathcal{H}^{(\alpha)}$-multiplier sequence must be non-increasing after the first term. The result follows.
\end{proof}
\section{Open questions}\label{concl}
We have obtained some first results regarding the partitioning of the set of classical multiplier sequences. Answers to the following questions would greatly enhance our understanding of this problem. \\
1. \ Given $\alpha>0, \beta<0$, the set $\mathcal{H}^{(\alpha)}_{MS} \cup \mathcal{H}^{(\beta)}_{MS}$ is a strict subset of the set of classical multiplier sequences. There are classical multiplier sequences which `peak' (in the sense of Lemma \ref{nowhere}) in the $n^{th}$ term and are unaccounted for. Is there a (finite) collection of bases which would account for all such multiplier sequences?
\smallskip
\noindent 2. Is there a collection of infinitely many simple sets of polynomials $\set{Q_j}$ such that
\begin{itemize}
\item[(i)] $Q_{jMS} \vartriangle Q_{iMS} \neq \emptyset$ for $j\neq i$
\item[(ii)] $\displaystyle{\bigcup_{j=1}^{\infty} Q_{jMS} \subsetneq Q_{MS}}$
\end{itemize}
We know that the collection $\displaystyle{Q_j:=\left \{ 1,1+x,1+x+x^2, \ldots, \sum_{i=0}^j x^i, x^{j+1}, x^{j+2}, \ldots \right\}}$ satisfies $(ii)$, but we do not know whether it satisfies $(i)$.
\smallskip
\noindent 3. Are there classical multiplier sequences which are not multiplier sequences for any other simple set of polynomials? If there are, can one find a `maximal' subset (with corresponding simple sets of polynomials) of the set of classical multiplier sequences?
\label{sec:introduction}
\textcolor{black}{Numerous problems of relevance in biomechanics have, at their core, the presence of a deformable solid matrix which experiences flow-induced strain: skin, brain (\citet{Budday_2019, HosseiniFarid2020, Franceschini2006, Urcun_2022}), muscle tissue (\citet{Lavigne2022b}), tumour (\citet{Scium2013, Scium2021}, \citet{OFTADEH2018249}), articular cartilage (\citet{ATESHIAN20091163}) and lumbar intervertebral discs (\citet{Argoubi1996}), just to name a few. The time-dependent mechanical properties of soft tissues influence their physiological functions and are linked to several pathological processes. Although a fluid-structure interaction (FSI) problem, the number and range of fluid flows is generally so vast that the direct approach of a defined boundary between fluid and solid is impossible to apply. In these cases, homogenisation and statistical treatment of the material-fluid system is possibly the only way forward. A prominent technique of this type is that of poroelasticity.}
\textcolor{black}{Extensive studies have shown that poroelastic models can accurately reproduce the time-dependent behavior of soft tissues under different loading conditions (\citet{Gimnich2019,Argoubi1996,Peyrounette2018,Siddique2017,HosseiniFarid2020,Franceschini2006,Lavigne2022b}). Compared to a visco-(hyper)-elastic formulation (\citet{VanLoocke2009,Simms2012,Wheatley2015, Vaidya2020}), the poroelastic properties are independent of the sample size (\citet{Urcun_2022}). Furthermore, a poroelastic approach can integrate} multiscale/multiphysics data to probe biologically relevant phenomena at a smaller scale and embed the relevant mechanisms at the larger scale (in particular, biochemistry of oxygen and of inflammatory signalling pathways), allowing the interpretation of the different time characteristics (\citet{Stphane2020, Scium2013,Scium2021,Gray2014,Mascheroni2016}).
\textcolor{black}{In most commercially available FE software packages used in biomechanics research (ABAQUS, ANSYS, RADIOSS, etc.), pre-programmed material models for soft biological tissues are available. The disadvantage of these pre-programmed models is that they are presented to the user as a black box. Therefore, many researchers turn to implementing their own material formulations through user subroutines (the reader is referred, for example, to the excellent tutorial of \citet{FEHERVARY2020103737} on the implementation of a nonlinear hyperelastic material model using user subroutines in ABAQUS). This task, however, is complex. When documentation is available, it often only provides expressions, without any derivations, and lacks details and background information, making the implementation complex and error-prone. In addition, in the case of a custom formulation or the introduction of bio-chemical equations, for example, specific computational skills are required, making the task even more challenging. In the end, the use of commercially available FE software packages} limits the straightforward reproducibility of the research by other teams.
The interest in open-source tools has skyrocketed, as they increase the impact of studies within the community. For Finite Element modeling, the FEniCS project (\citet{FEniCS}) is an open-access software package which has proven its efficiency in biomechanics (\citet{Mazier_2022}). Based on a Python/C++ coding interface and the Unified Form Language, it allows a defined variational form to be solved easily. Furthermore, its compatibility with open-source meshers like GMSH makes its use appealing. The project has already shown its capacity to solve large deformation problems (\citet{Mazier_2021}) and mixed formulations (\citet{Stphane2020, Urcun_2022, bulle2022}). Previous work provided an implementation of poro-mechanics within the FEniCS project (\citet{Haagenson2020,Joodat2018}). However, the FEniCS project is now legacy software and was replaced by the FEniCSx project in August 2022 (\citet{ufl, basix, basix2}).
\textcolor{black}{The aim of this paper is to propose a step-by-step explanation of how to implement several poro-mechanical models in FEniCSx, with special attention to parallel computation}. First, an instantaneous uni-axial confined compression of a porous elastic medium is proposed. This example corresponds to an avascular tissue. Then, the same single-compartment model is computed for a hyper-elastic solid scaffold, followed by a 3D confined bi-compartment model.
\section{Confined compression of a column: geometrical definition}
\textcolor{black}{The time-dependent response of soft tissues is often assessed based on confined compression creep and stress-relaxation test data (\citet{Budday_2019, HosseiniFarid2020, Franceschini2006, Urcun_2022}). In this section, therefore, all the benchmark examples focus} on the uni-axial confined compression of a column sample, as shown in Figure \ref{fig:1}. Both 2D and 3D geometries are studied. The column is described by its width ($0.1h$) and height ($h$) in 2D, with an additional length dimension in 3D.
\begin{figure}[ht!]
\centering
\begin{tikzpicture}
\node[anchor=south west, inner sep =0] (image) at (0,0){
\includegraphics[width=\textwidth]{Figure_1.jpg}};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\draw[-,black] (0.42,0.91) -- (0.524,0.91);
\draw[-,black] (0.42,0.09) -- (0.524,0.09);
\draw[-,ultra thick,blue] (0.476,0.91) -- (0.524,0.91);
\draw[<->,black] (0.44,0.09) -- (0.44,0.91);
\draw[-,red] (0.476,0.97) -- (0.524,0.97);
\draw[->,red] (0.476,0.97) -- (0.476,0.91);
\draw[->,red] (0.524,0.97) -- (0.524,0.91);
\draw[->,red] (0.5,0.97) -- (0.5,0.91);
\draw[->,red] (0.512,0.97) -- (0.512,0.91);
\draw[->,red] (0.488,0.97) -- (0.488,0.91);
\draw[<-,blue] (0.524,0.5) to[out=0,in=180] (0.58,0.5);
\draw[<-,blue] (0.476,0.5) to[out=-120,in=180] (0.58,0.5);
\draw[<-,blue] (0.5,0.09) to[out=-90,in=180] (0.58,0.05);
\draw[black] (0.42,0.5) node[left] {h};
\draw[red] (0.5,0.97) node[above] {$p_0$};
\draw[blue] (0.58,0.5) node[right] {$\mathbf{u}^s\cdot\mathbf{x}=0$};
\draw[blue] (0.58,0.05) node[right] {$\mathbf{u}^s\cdot\mathbf{y}=0$};
\draw[blue] (0.53,0.91) node[right] {$p^l=0~~(p^b=0)$};
\draw[->,black] (0.3,0.05) -- (0.35,0.05);
\draw[->,black] (0.3,0.05) -- (0.3,0.15);
\draw[black] (0.3,0.15) node[left] {$\mathbf{y}$};
\draw[black] (0.35,0.05) node[below] {$\mathbf{x}$};
\end{scope}
\end{tikzpicture}
\caption{Load (red), Boundary conditions (blue) and mesh (gray) of the uni-axial confined compression of a porous 2D column of height h.}
\label{fig:1}
\end{figure}
All the proposed codes were run within a Docker image of FEniCSx. The dolfinx version used in this paper is v0.5.2. FEniCSx is a proficient platform for parallel computation, and all the codes described hereunder are compatible with multi-kernel computation. The corresponding terminal command is:
\begin{lstlisting}[language=bash]
mpirun -n <N> python3 <filename>
\end{lstlisting}
\noindent Where $<$N$>$ is the number of threads to use and $<$filename$>$ is the Python script of the problem.
Within the FEniCSx software, the domain (geometry) is discretized for the Finite Element (FE) method. The space is thus divided into $n_x\times n_y = 2\times 40$ elements in 2D and $n_x\times n_y\times n_z= 2\times 40 \times 2$ elements in 3D. The choice of the number of elements is further discussed in Section \ref{linearel}.
In this article, the meshes are directly created within the FEniCSx environment. However, as strong compatibility exists with the GMSH API (\citet{gmsh}), it is recommended to use GMSH for this step. An example of the use of the GMSH API for a more complex geometry is given in Section \ref{sec:4.3.1}. It is worth noting that all the boundaries of interest are identified at this step for the future declaration of boundary conditions.
\subsection{2D mesh}
Working in the Python environment requires importing the working libraries. To create a 2D mesh, the first step is to import the following:
\begin{lstlisting}[language=python]
import dolfinx
import numpy as np
from dolfinx.mesh import create_rectangle, CellType, locate_entities, meshtags
from mpi4py import MPI
\end{lstlisting}
Then, the domain of resolution (mesh) is computed with:
\begin{lstlisting}[language=python]
Width, Height = 1e-5, 1e-4 #[m]
nx, ny = 2, 40 #[ ]
mesh = create_rectangle(MPI.COMM_WORLD, np.array([[0,0],[Width, Height]]), [nx,ny], cell_type=CellType.quadrilateral)
\end{lstlisting}
Once the mesh object has been created, its boundaries are identified using couples of (marker, locator) to tag with a marker value the elements of dimension \textit{fdim} fulfilling the locator requirements.
For the 2D mesh, the (marker, locator) couples are given by
\begin{lstlisting}[language=python]
# identifiers: 1 , 2, 3, 4 = bottom, right, top, left
boundaries = [(1, lambda x: np.isclose(x[1], 0)),
(2, lambda x: np.isclose(x[0], Width)),
(3, lambda x: np.isclose(x[1], Height)),
(4, lambda x: np.isclose(x[0], 0))]
\end{lstlisting}
Finally the entities are marked by:
\begin{lstlisting}[language=python]
facet_indices, facet_markers = [], []
# dimension of the elements we are looking for
fdim = mesh.topology.dim - 1
for (marker, locator) in boundaries:
facets = locate_entities(mesh, fdim, locator)
facet_indices.append(facets)
facet_markers.append(np.full_like(facets, marker))
facet_indices = np.hstack(facet_indices).astype(np.int32)
facet_markers = np.hstack(facet_markers).astype(np.int32)
sorted_facets = np.argsort(facet_indices)
# the meshtags() function requires sorted facet_indices
facet_tag = meshtags(mesh, fdim, facet_indices[sorted_facets], facet_markers[sorted_facets])
\end{lstlisting}
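The \textit{hstack}/\textit{argsort} step above can be checked outside FEniCSx with plain NumPy. In the sketch below, the facet index arrays are made up for illustration and the marker values (1, 3) mirror the bottom and top boundaries of the 2D example:

\begin{lstlisting}[language=python]
import numpy as np
# hypothetical facet indices, as returned by locate_entities
bottom_facets = np.array([7, 2, 9]) # marker 1
top_facets = np.array([4, 0])       # marker 3
facet_indices = np.hstack([bottom_facets, top_facets]).astype(np.int32)
facet_markers = np.hstack([np.full_like(bottom_facets, 1),
                           np.full_like(top_facets, 3)]).astype(np.int32)
# meshtags() requires ascending facet indices,
# with the markers permuted consistently
order = np.argsort(facet_indices)
sorted_indices = facet_indices[order] # [0, 2, 4, 7, 9]
sorted_markers = facet_markers[order] # [3, 1, 3, 1, 1]
\end{lstlisting}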
\subsection{3D mesh}
The method for a 3D mesh is similar to the 2D case. First the libraries are imported and the geometry is created using a 3D function. The (marker, locator) tuples are completed to describe all the boundaries of the domain. The same tagging routine is used.
\begin{lstlisting}[language=python]
## libraries
import dolfinx
import numpy
from dolfinx.mesh import create_box, CellType, locate_entities, meshtags
from mpi4py import MPI
## Mesh generation
Length, Height, Width = 0.1, 1, 0.1 #[m]
nx, ny, nz = 2, 40, 2
mesh = create_box(MPI.COMM_WORLD, numpy.array([[0.0,0.0,0.0],[Length, Height, Width]]), [nx, ny, nz], cell_type=CellType.hexahedron)
## Define the boundaries of the domain:
# 1, 2, 3, 4, 5, 6 = bottom, right, top, left, back, front
boundaries = [(1, lambda x: numpy.isclose(x[1], 0)),
(2, lambda x: numpy.isclose(x[0], Length)),
(3, lambda x: numpy.isclose(x[1], Height)),
(4, lambda x: numpy.isclose(x[0], 0)),
(5, lambda x: numpy.isclose(x[2], Width)),
(6, lambda x: numpy.isclose(x[2], 0))]
facet_indices, facet_markers = [], []
fdim = mesh.topology.dim - 1
for (marker, locator) in boundaries:
facets = locate_entities(mesh, fdim, locator)
facet_indices.append(facets)
facet_markers.append(numpy.full_like(facets, marker))
facet_indices = numpy.hstack(facet_indices).astype(numpy.int32)
facet_markers = numpy.hstack(facet_markers).astype(numpy.int32)
sorted_facets = numpy.argsort(facet_indices)
facet_tag = meshtags(mesh, fdim, facet_indices[sorted_facets], facet_markers[sorted_facets])
\end{lstlisting}
\section{Single-compartment porous medium}
\label{sec:2}
We propose to reproduce an instantaneous uni-axial confined compression applied at the top surface of a single-compartment porous column of height $h$ (Figure \ref{fig:1}), described by a 2D elastic or a 3D hyper-elastic solid scaffold. In the 2D elastic case, the column has a height of $h=100\si{\micro\meter}$ and the instantaneous load $p_0$ has a magnitude of $100\si{\pascal}$, applied for 6 \si{\second}. In the 3D hyper-elastic case, the column has a height of $h=1\si{\meter}$ and the instantaneous load has a magnitude of $p_0=0.3\si{\mega\pascal}$, applied for 100000 \si{\second}. The mechanical parameters are given in Table \ref{tab:1} and Table \ref{tab:2} respectively. To assess the reliability of our results, we compare our computed solutions to Terzaghi's analytical solution and to the results of \citet{SELVADURAI2016}, for the elastic and hyper-elastic scaffolds respectively.
\begin{table}[ht!]
\centering
\begin{tabular}{llll}
\hline
Parameter & Symbol & Value & Unit \\ \hline
Young modulus & E & 5000 & \si{\pascal} \\
Poisson ratio & $\nu$ & 0.4 & - \\
Intrinsic permeability & $k^\varepsilon$ & $1.8\times 10^{-15}$ & \si{\square\meter} \\
Biot coefficient & $\beta$ & 1 & - \\
Density of phase $\alpha$ & $\rho^\alpha$ & - & \si{\kilo\gram\per\cubic\meter} \\
IF viscosity & $\mu^l$ & $1\times 10^{-3}$ & \si{\pascal\second} \\
Porosity & $\varepsilon^l$ & 0.5 & - \\
Solid grain Bulk modulus & $K^s $ & $1.\times 10^{10}$ & \si{\pascal} \\
Fluid Bulk modulus & $K^l $ & $2.2\times 10^{9}$ & \si{\pascal} \\\hline
\end{tabular}%
\caption{Elastic mechanical parameters to compare with the Terzaghi solution}
\label{tab:1}
\end{table}
\begin{table}[ht!]
\centering
\begin{tabular}{llll}
\hline
Parameter & Symbol & Value & Unit \\ \hline
Young modulus & E & 600000 & \si{\pascal} \\
Poisson ratio & $\nu$ & 0.3 & - \\
Bulk modulus & K & 500000 & \si{\pascal} \\
Intrinsic permeability & $k^\varepsilon$ & $3\times 10^{-14}$ & \si{\square\meter} \\
IF viscosity & $\mu^l$ & $1\times 10^{-3}$ & \si{\pascal\second} \\
Porosity & $\varepsilon^l$ & 0.2 & - \\
Solid grain Bulk modulus & $K^s $ & $1.\times 10^{10}$ & \si{\pascal} \\
Fluid Bulk modulus & $K^l $ & $2.2\times 10^{9}$ or $5\times 10^{5}$ & \si{\pascal} \\
Biot coefficient & $\beta $ & $1-\frac{K}{K^s}\approx 1$ & - \\\hline
\end{tabular}%
\caption{Hyper-elastic mechanical parameters from \citet{SELVADURAI2016}. In the absence of information on the porosity, solid grain bulk modulus and fluid bulk modulus, these parameters are chosen arbitrarily.}
\label{tab:2}
\end{table}
\subsection{Terzaghi's Analytical solution}
The Terzaghi consolidation problem is often used for benchmarking porous media mechanics, as an analytical solution of this problem exists. An implementation of this experiment was proposed by \citet{Haagenson2020} within the legacy FEniCS project. The Terzaghi problem consists of a uni-directional confined compression experiment on a column (see Figure \ref{fig:1}). Assuming small and uni-directional strains, incompressible homogeneous phases and constant mechanical properties, the analytical expression of the pore pressure is given as a series in Equation \ref{eq:30}.
\begin{align}
p^l=\frac{4p_0}{\pi}\sum_{k=1}^{+\infty}\frac{(-1)^{k-1}}{2k-1}\cos\left[(2k-1)\frac{\pi}{2}\frac{y}{h}\right]\exp\left[-(2k-1)^2\frac{\pi^2}{4}\frac{c_vt}{h^2}\right]
\label{eq:30}\\
c_v = \frac{k^\varepsilon}{\mu^l( S_{\beta}+\frac{\beta^2}{M})} \label{eq:31}\\
M = \frac{3K^s(1-\nu)}{(1+\nu)} \label{eq:32}\\
S_{\beta} = \frac{\beta-\varepsilon^l_0}{K^s} + \frac{\varepsilon^l_0}{K^l}\label{eq:33}
\end{align}
{\noindent Where $p_0=\mathbf{t}^\text{imposed} \cdot \mathbf{n}$ is the full applied load, $y$ is the altitude, $h$ is the initial height of the sample, $c_v$ is the consolidation coefficient defined by Equation \ref{eq:31}, $M$ is the longitudinal modulus (Equation \ref{eq:32}), $S_{\beta}$ is the inverse of the Biot modulus (Equation \ref{eq:33}) and $\varepsilon^l_0$ is the initial porosity.}
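Equations \ref{eq:30}--\ref{eq:33} can be evaluated independently of any FE code. The following standalone NumPy sketch uses the parameters of Table \ref{tab:1}; the truncation of the series at 1000 terms is an arbitrary choice:

\begin{lstlisting}[language=python]
import numpy as np
# parameters of Table 1
keps, mul = 1.8e-15, 1e-3  # intrinsic permeability [m2], IF viscosity [Pa.s]
Ks, Kl = 1e10, 2.2e9       # solid grain / fluid bulk moduli [Pa]
nu, beta, eps0 = 0.4, 1.0, 0.5
p0, h = 100.0, 1e-4        # applied load [Pa], column height [m]
M = 3*Ks*(1 - nu)/(1 + nu)           # Equation 32
S_beta = (beta - eps0)/Ks + eps0/Kl  # Equation 33
cv = keps/(mul*(S_beta + beta**2/M)) # Equation 31
def terzaghi_pressure(y, t, kmax=1000):
    # pore pressure p^l(y, t), Equation 30
    k = np.arange(1, kmax + 1)
    series = ((-1.0)**(k - 1)/(2*k - 1)
              *np.cos((2*k - 1)*np.pi*y/(2*h))
              *np.exp(-(2*k - 1)**2*np.pi**2*cv*t/(4*h**2)))
    return 4*p0/np.pi*series.sum()
\end{lstlisting}

At $t=0^+$ the series returns the full load $p_0$ in the bulk of the sample, it vanishes identically at the drained surface $y=h$, and it decays monotonically in time.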
\subsection{Governing equations}
\label{sec:2.1}
Let us consider a bi-phasic structure composed of a solid scaffold filled with interstitial fluid (IF). The medium is assumed fully saturated. In this section, to set up the governing equations, we make the hypothesis of a Biot coefficient equal to 1. The following convention is assumed: $\bullet^s$ denotes the solid phase and $\bullet^l$ denotes the fluid phase (IF). The primary variables of the problem are the pressure in the pores of the porous medium, namely $p^l$, and the displacement of the solid scaffold, namely $\mathbf{u}^s$. Equation \ref{eq:1} constrains the different volume fractions. The volume fraction of phase $\alpha$ is defined by Equation \ref{eq:2}; $\varepsilon^l$ is called the porosity of the medium.
\begin{align}
\varepsilon^s+\varepsilon^l=1 \label{eq:1}\\
\varepsilon^\alpha = \frac{\text{Volume}^\alpha}{\text{Volume}^{total}}
\label{eq:2}
\end{align}
Assuming that there is no inter-phase mass transport, the continuity equations (mass conservation) of the liquid and solid phases are respectively given by Equation \ref{eq:3} and Equation \ref{eq:4}.
\begin{align}
\frac{\partial}{\partial t}(\rho^l\varepsilon^l)+\nabla\cdot(\rho^l\varepsilon^l \mathbf{v}^l) = 0\label{eq:3}\\
\frac{\partial}{\partial t}(\rho^s(1-\varepsilon^l))+\nabla\cdot(\rho^s(1-\varepsilon^l)\mathbf{v^s}) = 0
\label{eq:4}
\end{align}
Using the distributivity of the divergence operator, where $a$ is a scalar field and $\mathbf{V}$ a vector field,
\begin{equation}
\nabla\cdot(a\mathbf{V}) = a \nabla\cdot(\mathbf{V}) + \nabla a\cdot\mathbf{V}
\label{eq:5}
\end{equation}
Applied to Equation \ref{eq:3} and Equation \ref{eq:4}, and considering the definition of the material derivative, $\frac{\mathrm{D}^s}{\mathrm{D}t} f = \frac{\partial f}{\partial t} + \mathbf{\nabla} f \cdot \mathbf{v}^s $, the continuity equations become:
\begin{align}
\frac{\mathrm{D}^s}{\mathrm{D}t}(\rho^s(1-\varepsilon^l))+\rho^s(1-\varepsilon^l)\nabla\cdot\mathbf{v^s} = 0 \label{eq:6}\\
\frac{\mathrm{D}^s}{\mathrm{D}t}(\rho^l\varepsilon^l)+\nabla\cdot(\rho^l\varepsilon^l (\mathbf{v}^l-\mathbf{v}^s)) + \rho^l\varepsilon^l \nabla\cdot\mathbf{v^s} = 0
\label{eq:7}
\end{align}
For the fluid phase, Darcy's law (Equation \ref{eq:8}) is used to evaluate the fluid flow in the porous medium.
\begin{align}
\varepsilon^l(\mathbf{v}^l-\mathbf{v}^s) = - \frac{k^\varepsilon}{\mu^l}(\mathbf{\nabla} p^l-\rho^l\mathbf{g})
\label{eq:8}
\end{align}
{\noindent Where $k^\varepsilon$ is the intrinsic permeability (\si{\square \meter}), $\mu^l$ is the dynamic viscosity (\si{\pascal \second}) and $\mathbf{g}$ is the gravitational acceleration.}
Introducing the state law $\frac{1}{\rho^\alpha}\frac{\mathrm{D}^s\rho^\alpha}{\mathrm{D}t}=\frac{1}{K^\alpha}\frac{\mathrm{D}p^\alpha}{\mathrm{D}t}$, with $K^\alpha$ the bulk modulus of phase $\alpha$, using Darcy's law and summing Equation \ref{eq:6} and Equation \ref{eq:7}, we obtain:
\begin{equation}
\left(\frac{\varepsilon^l}{K^l}+\frac{1-\varepsilon^l}{K^s} \right)\frac{\mathrm{D}^s p^l}{\mathrm{D}t}+\nabla\cdot\mathbf{v}^s -\nabla\cdot\left(\frac{k^\varepsilon}{\mu^l}\mathbf{\nabla}p^l\right) = 0
\label{eq:9}
\end{equation}
{\noindent Where $S= \left(\frac{\varepsilon^l}{K^l}+\frac{1-\varepsilon^l}{K^s} \right)$ is called the storativity coefficient.}
Once the continuity equations are settled, one can define the quasi-static momentum balance of the porous medium, Equation \ref{eq:10}.
\begin{equation}
\mathbf{\nabla}\cdot\mathbf{t}^{\text{tot}} = 0
\label{eq:10}
\end{equation}
\noindent Where $\mathbf{t}^{\text{tot}}$ is the total Cauchy stress tensor. We introduce an effective stress tensor denoted $\mathbf{t}^\text{eff}$, responsible for all deformation of the solid scaffold. Then, $\mathbf{t}^{\text{tot}}$ can be expressed as:
\begin{equation}
\mathbf{t}^{\text{tot}}=\mathbf{t}^{\text{eff}}-\beta p^l\mathbf{I_d}
\end{equation}
{\noindent Where $\mathbf{I_d}$ is the identity matrix and $\beta$ is the Biot coefficient.}
Finally, the governing equations of this single compartment porous medium are:
\begin{align}
\left(\frac{\varepsilon^l}{K^l}+\frac{1-\varepsilon^l}{K^s} \right)\frac{\mathrm{D}^s p^l}{\mathrm{D}t}+\nabla\cdot\mathbf{v}^s -\nabla\cdot\left(\frac{k^\varepsilon}{\mu^l}\mathbf{\nabla}p^l\right) = 0~\text{on }\Omega\label{eq:12}\\
\mathbf{\nabla}\cdot\mathbf{t}^{\text{tot}} = 0~\text{on }\Omega
\label{eq:13}
\end{align}
Three boundaries are defined: the first one, $\Gamma_u$, has an imposed displacement (Equation \ref{eq:15}); the second one, $\Gamma_s$, has imposed external forces (Equation \ref{eq:14}); and $\Gamma_p$ is subjected to an imposed pressure (fluid leakage condition, Equation \ref{eq:16}). We obtain:
\begin{align}
\mathbf{t}^\text{eff} = \mathbf{t}^\text{imposed}~\text{on}~\Gamma_s \label{eq:14}\\
\mathbf{u}^s=\mathbf{u}^\text{imposed}~\text{on}~\Gamma_u \label{eq:15}\\
p=0~\text{on}~\Gamma_p \label{eq:16}
\end{align}
According to Figure \ref{fig:1}, $\Gamma_p = \Gamma_s$ is the top surface and $\Gamma_u$ covers the lateral and bottom surfaces.
\subsection{Effective stress}
Two types of solid constitutive law are considered: an elastic scaffold and a hyper-elastic one.
\subsubsection{Linear elasticity}
In the case of an elastic scaffold, the effective stress tensor is defined as follows:
\begin{align}
\mathbf{\epsilon}(\mathbf{u})=\frac{1}{2}(\nabla\mathbf{u}+\nabla\mathbf{u}^\text{T})\label{eq:19}\\
\mathbf{t}^\text{eff} = 2\mu\mathbf{\epsilon}(\mathbf{u}^s)+\lambda \text{tr}(\mathbf{\epsilon}(\mathbf{u}^s))\mathbf{I_d} \label{eq:20}
\end{align}
{\noindent Where $\mathbf{I_d}$ is the identity matrix and ($\lambda,\mu$) are the Lamé coefficients.}
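Equations \ref{eq:19} and \ref{eq:20} can be checked on a small standalone NumPy example, here with the parameters of Table \ref{tab:1} and an arbitrary displacement gradient:

\begin{lstlisting}[language=python]
import numpy as np
E, nu = 5000.0, 0.4                   # Table 1
lambda_m = E*nu/((1 + nu)*(1 - 2*nu)) # Lame coefficients
mu = E/(2*(1 + nu))
def teff_elastic(grad_u):
    # effective stress, Equations 19-20
    eps = 0.5*(grad_u + grad_u.T)     # Equation 19
    return 2*mu*eps + lambda_m*np.trace(eps)*np.eye(len(grad_u)) # Equation 20
grad_u = np.array([[1e-3, 2e-4],
                   [5e-4, -3e-4]])
t = teff_elastic(grad_u)
\end{lstlisting}

The resulting tensor is symmetric by construction and its trace satisfies $\text{tr}(\mathbf{t}^\text{eff})=(2\mu+d\lambda)\,\text{tr}(\mathbf{\epsilon})$, with $d$ the space dimension.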
\subsubsection{Hyper-elasticity}
In the case of a hyper-elastic scaffold, additional quantities are required.
Let us introduce the deformation gradient $\mathbf{F}$:
\begin{equation}
\mathbf{F}=\mathbf{I}_d+\mathbf{\nabla}\mathbf{u}^s
\label{eq:21}
\end{equation}
Then, $J$ denotes the determinant of $\mathbf{F}$:
\begin{align}
J = \text{det}(\mathbf{F})
\label{eq:22}
\end{align}
According to the classic formulation of a finite element procedure, we introduce $\mathbf{C}$, the right Cauchy-Green deformation tensor, and its first invariant $I_1$. By definition:
\begin{align}
\mathbf{C} = \mathbf{F}^{\text{T}}\mathbf{F} \label{eq:23}\\
I_1 = \text{Tr}(\mathbf{C})
\label{eq:24}
\end{align}
The theory of hyper-elasticity defines an elastic energy potential $W(\mathbf{F})$. The generalized Neo-Hookean potential (Equation \ref{eq:26b}) introduced by \citet{treloar1975physics}, implemented in Abaqus and used by \citet{SELVADURAI2016}, is evaluated in this article.
\begin{equation}
{W}(\mathbf{F})= \frac{\mu}{2}(\mathrm{J}^{-2/3}I_1-\text{tr}(\mathbf{I_d}))+\left(\frac{\lambda}{2}+\frac{\mu}{3}\right)(\mathrm{J}-1)^2
\label{eq:26b}
\end{equation}
However, other potentials have been developed. It was shown that the hyper-elastic potential can be expressed as the combination of an isochoric component and a volumetric component (\citet{Michele2018,Horgan2004}). We define the Lamé coefficients by $\mu=\frac{E}{2(1+\nu)}$ and $\lambda=\frac{E\nu}{(1+\nu)(1-2\nu)}$. For a Neo-Hookean material, we further have:
\begin{equation}
W(\mathbf{F})=\Tilde{W}(I_1,J)+U(J)
\label{eq:25}
\end{equation}
Where $\Tilde{W}(I_1,J)$ is the isochoric part and $U(J)$ the volumetric one. The study of \citet{SELVADURAI2016} considered a compressible case ($\nu=0.3$) reaching large deformations. A compressible formulation of the Neo-Hookean strain-energy potential from \citet{Pence2014,Horgan2004} is therefore also computed for comparison. The implemented isochoric part of the strain-energy potential is:
\begin{equation}
\Tilde{W}_1(I_1,J)= \frac{\mu}{2}(I_1-\text{tr}(\mathbf{I_d})-2\log[\mathrm{J}])
\label{eq:26}
\end{equation}
Two different volumetric parts, $U_1$ and $U_2$, proposed in \citet{doll2000development}, are implemented:
\begin{equation}
U_1(J)=\frac{\lambda}{2} \log[\mathrm{J}]^2
\label{eq:27}
\end{equation}
\begin{equation}
U_2(J)=\frac{\lambda}{2} (J-1)^2
\label{eq:28}
\end{equation}
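The potentials of Equation \ref{eq:26b} and of Equations \ref{eq:26}--\ref{eq:28} agree to leading order around the undeformed state, which can be verified numerically. The standalone sketch below uses the parameters of Table \ref{tab:2} and also checks that $K=\lambda+\frac{2}{3}\mu$ recovers the tabulated bulk modulus:

\begin{lstlisting}[language=python]
import numpy as np
E, nu = 600000.0, 0.3 # Table 2
mu = E/(2*(1 + nu))
lam = E*nu/((1 + nu)*(1 - 2*nu))
K = lam + 2*mu/3      # bulk modulus, 5e5 Pa as in Table 2
def invariants(F):
    return np.trace(F.T @ F), np.linalg.det(F)
def W_gen(F): # Equation 26b
    I1, J = invariants(F)
    return mu/2*(J**(-2/3)*I1 - 3) + (lam/2 + mu/3)*(J - 1)**2
def W1(F):    # Equation 26
    I1, J = invariants(F)
    return mu/2*(I1 - 3 - 2*np.log(J))
def U1(F):    # Equation 27
    return lam/2*np.log(np.linalg.det(F))**2
def U2(F):    # Equation 28
    return lam/2*(np.linalg.det(F) - 1)**2
# small uniaxial stretch: the three potentials coincide to O(e^2)
e = 1e-3
F = np.diag([1 + e, 1.0, 1.0])
energies = [W_gen(F), W1(F) + U1(F), W1(F) + U2(F)]
\end{lstlisting}

For larger stretches the volumetric parts $U_1$ and $U_2$ diverge from each other, which is why both are implemented for comparison.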
Finally, the first Piola-Kirchhoff stress tensor derives from the potential (Equation \ref{eq:25} or Equation \ref{eq:26b}) and is taken as the effective stress, such that:
\begin{equation}
\mathbf{t}^\text{eff}=\frac{\partial W}{\partial \mathbf{F}}
\label{eq:29}
\end{equation}
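Equation \ref{eq:29} can be checked by differentiating the potential numerically. The standalone sketch below computes $\partial W/\partial \mathbf{F}$ for the generalized potential of Equation \ref{eq:26b} by central finite differences and verifies that the reference configuration $\mathbf{F}=\mathbf{I_d}$ is stress-free; in the FE code, the derivative is obtained symbolically through UFL:

\begin{lstlisting}[language=python]
import numpy as np
E, nu = 600000.0, 0.3 # Table 2
mu = E/(2*(1 + nu))
lam = E*nu/((1 + nu)*(1 - 2*nu))
def W(F): # Equation 26b
    J = np.linalg.det(F)
    I1 = np.trace(F.T @ F)
    return mu/2*(J**(-2/3)*I1 - 3) + (lam/2 + mu/3)*(J - 1)**2
def piola_fd(F, h=1e-6):
    # first Piola-Kirchhoff stress, Equation 29, by central differences
    P = np.zeros_like(F)
    for i in range(3):
        for j in range(3):
            dF = np.zeros_like(F)
            dF[i, j] = h
            P[i, j] = (W(F + dF) - W(F - dF))/(2*h)
    return P
P0 = piola_fd(np.eye(3))      # stress-free reference: P = 0
F = np.diag([1.2, 0.95, 1.0]) # arbitrary stretch
tau = piola_fd(F) @ F.T       # = J*sigma, symmetric for an isotropic potential
\end{lstlisting}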
\subsection{Variational formulation}
For the computation of the Finite Element (FE) model, the variational forms of Equation \ref{eq:12} and Equation \ref{eq:13} are introduced. Let $(q,\mathbf{v})$ be the test functions defined in the mixed space $\text{L}_0^2(\Omega)\times[\text{H}^1(\Omega)]^2$.
With a first order approximation in time, Equation \ref{eq:12} gives:
\begin{align}
\begin{split}
\frac{S}{dt}\int_{\Omega} (p^l-p^l_n)q\text{d}\Omega+\frac{1}{dt}\int_{\Omega} \mathbf{\nabla}\cdot(\mathbf{u}^s-\mathbf{u}^s_n)q\text{d}\Omega \\
+ \frac{k^\varepsilon}{\mu^l} \int_{\Omega} \mathbf{\nabla}p^l\mathbf{\nabla}q\text{d}\Omega = 0, \forall~q\in~\text{L}_0^2(\Omega)
\end{split}
\label{eq:17}
\end{align}
Similarly, by integrating Equation \ref{eq:13} by parts and including the Neumann boundary conditions, we get:
\begin{align}
\begin{split}
\int_{\Omega} \mathbf{t}^\text{eff}:\mathbf{\nabla}\mathbf{v}\text{d}\Omega-\int_{\Omega}\beta p^l\mathbf{\nabla}\cdot\mathbf{v}\text{d}\Omega - \int_{\Gamma_s} \mathbf{t}^\text{imposed} \cdot \mathbf{n} \cdot \mathbf{v} \text{d}\Gamma_s=0,\\ \forall~\mathbf{v}\in~[\text{H}^1(\Omega)]^2
\label{eq:18}
\end{split}
\end{align}
The first-order approximation in time requires the definition of initial conditions, which are fixed according to Table \ref{tab:3}.
\begin{table}[ht!]
\centering
\begin{tabular}{llll}
\hline
Parameter & Symbol & Value & Unit \\ \hline
Displacement & $\mathbf{u}^s$ & 0 & \si{\meter} \\
Displacement at previous step & $\mathbf{u}^s_n$ & 0 & \si{\meter} \\
IF pressure & $p^l$ & $\mathbf{t}^\text{imposed} \cdot \mathbf{n}$ & \si{\pascal} \\
IF pressure at previous step & $p^l_n$ & 0 & \si{\pascal} \\ \hline
\end{tabular}%
\caption{Initial conditions for the single compartment model}
\label{tab:3}
\end{table}
Finally, the problem to solve is: $\text{Find}~(p^l, \mathbf{u}^s)\in\text{L}_0^2(\Omega)\times[\text{H}^1(\Omega)]^2$ such that Equation \ref{eq:17} and Equation \ref{eq:18} are verified.
\subsection{2D linear elastic solid scaffold}
\subsubsection{FEniCSx implementation}
This section provides a possible implementation of the 2D elastic problem and its comparison with the Terzaghi analytical solution. In contrast to the legacy FEniCS project, DOLFINx makes more explicit use of its libraries, which must be imported into the environment separately.
Therefore, each function used in the following implementation needs to be imported as a first step.
\begin{lstlisting}[language=python]
import numpy as np
from dolfinx import nls
from dolfinx.fem.petsc import NonlinearProblem
from ufl import VectorElement, FiniteElement, MixedElement, TestFunctions, TrialFunction
from ufl import Measure, FacetNormal
from ufl import nabla_div, dx, dot, inner, grad, derivative, split
from petsc4py.PETSc import ScalarType
from mpi4py import MPI
from dolfinx.fem import (Constant, dirichletbc, Function, FunctionSpace, locate_dofs_topological)
from dolfinx.io import XDMFFile
\end{lstlisting}
Then, the time parametrization is introduced, together with the load value T, such that $\mathbf{t}^\text{imposed} = p_0 \cdot \mathbf{n}$ with $\mathbf{n}$ the outward normal to the mesh, and the material parameters, which are defined as UFL constants over the mesh.
\begin{lstlisting}[language=python]
## Time parametrization
t = 0 # Start time
Tf = 6. # End time
num_steps = 1000 # Number of time steps
dt = (Tf-t)/num_steps # Time step size
## Material parameters
E = Constant(mesh, ScalarType(5000))
nu = Constant(mesh, ScalarType(0.4))
lambda_m = Constant(mesh, ScalarType(E.value*nu.value/((1+nu.value)*(1-2*nu.value))))
mu = Constant(mesh, ScalarType(E.value/(2*(1+nu.value))))
rhos = Constant(mesh, ScalarType(1))
kepsilon = Constant(mesh, ScalarType(1.8e-15))
mul = Constant(mesh, ScalarType(1e-2))
rhol = Constant(mesh, ScalarType(1))
beta = Constant(mesh, ScalarType(1))
epsilonl = Constant(mesh, ScalarType(0.2))
Kf = Constant(mesh, ScalarType(2.2e9))
Ks = Constant(mesh, ScalarType(1e10))
S = (epsilonl/Kf)+(1-epsilonl)/Ks
## Mechanical loading
pinit = 100 #[Pa]
T = Constant(mesh,ScalarType(-pinit))
\end{lstlisting}
The surface element for integration, based on the boundary tags, and the outward normals of the mesh are then computed.
\begin{lstlisting}[language=python]
# Create the surfacic element
ds = Measure("ds", domain=mesh, subdomain_data=facet_tag)
# compute the mesh normals to express t^imposed = T.normal
normal = FacetNormal(mesh)
\end{lstlisting}
Two types of elements are defined, for displacement and pressure, and then combined to obtain the mixed space (MS) of the solution.
\begin{lstlisting}[language=python]
displacement_element = VectorElement("CG", mesh.ufl_cell(), 2)
pressure_element = FiniteElement("CG", mesh.ufl_cell(), 1)
MS = FunctionSpace(mesh, MixedElement([displacement_element,pressure_element]))
\end{lstlisting}
The space of resolution being defined, we can introduce the Dirichlet boundary conditions according to Equation \ref{eq:15}, Equation \ref{eq:16} and Figure \ref{fig:1}.
\begin{lstlisting}[language=python]
# 1 = bottom: uy=0, 2 = right: ux=0, 3=top: pl=0 drainage, 4=left: ux=0
bcs = []
fdim = mesh.topology.dim - 1
# uy=0
facets = facet_tag.find(1)
dofs = locate_dofs_topological(MS.sub(0).sub(1), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(0).sub(1)))
# ux=0
facets = facet_tag.find(2)
dofs = locate_dofs_topological(MS.sub(0).sub(0), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(0).sub(0)))
# ux=0
facets = facet_tag.find(4)
dofs = locate_dofs_topological(MS.sub(0).sub(0), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(0).sub(0)))
# leakage p=0
facets = facet_tag.find(3)
dofs = locate_dofs_topological(MS.sub(1), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(1)))
\end{lstlisting}
The problem is time-dependent (Equation \ref{eq:17}), so initial conditions in displacement and pressure are required. Therefore, we define X0, the unknown function, and Xn, the solution at the previous time step. Using the \textit{collapse}() function, we identify the initial displacement function Un\_ and its mapping within the Xn solution. Its values are then set to 0 and reassigned in Xn using the map. Xn.x.\textit{scatter\_forward}() updates the values of Xn in case of parallel computation. The same method is used to set up the initial pressure field. Since the load is applied instantaneously in the studied problems, the initial pore pressure of the sample is assumed equal to $p_0$.
\begin{lstlisting}[language=python]
# X0, Xn: Solution and previous functions of space
X0 = Function(MS)
Xn = Function(MS)
# Initial values
# Solid Displacement
Un_, Un_to_MS = MS.sub(0).collapse()
FUn_ = Function(Un_)
with FUn_.vector.localForm() as initial_local:
initial_local.set(ScalarType(0.0))
# Assign in Xn and broadcast to all the threads
Xn.x.array[Un_to_MS] = FUn_.x.array
Xn.x.scatter_forward()
# IF Pressure
Pn_, Pn_to_MS = MS.sub(1).collapse()
FPn_ = Function(Pn_)
with FPn_.vector.localForm() as initial_local:
initial_local.set(ScalarType(pinit))
# Assign in Xn and broadcast to all the threads
Xn.x.array[Pn_to_MS] = FPn_.x.array
Xn.x.scatter_forward()
\end{lstlisting}
The deformation and effective stress given Equation \ref{eq:19} and Equation \ref{eq:20} are defined by the following function:
\begin{lstlisting}[language=python]
def teff_Elastic(u,lambda_m,mu):
from ufl import sym, grad, nabla_div, Identity
## Deformation
epsilon = sym(grad(u))
## Stress
return lambda_m * nabla_div(u) * Identity(u.geometric_dimension()) + 2*mu*epsilon
\end{lstlisting}
Finally, splitting the two functions X0, Xn, and introducing the test functions, the weak form is implemented as follows.
\begin{lstlisting}[language=python]
u,p =split(X0)
u_n,p_n=split(Xn)
# Set up the test functions
v,q = TestFunctions(MS)
# Equation 33
F = (1/dt)*nabla_div(u-u_n)*q*dx + (kepsilon/mul)*dot(grad(p),grad(q))*dx + ( S/dt )*(p-p_n)*q*dx
# Equation 34
F += inner(grad(v),teff(u))*dx - beta * p * nabla_div(v)*dx - T*inner(v,normal)*ds(3)
\end{lstlisting}
Introducing the trial function of the mixed space dX0, we define the non-linear problem based on the variational form, the unknown, the boundary conditions and the Jacobian:
\begin{lstlisting}[language=python]
dX0 = TrialFunction(MS)
Js = derivative(F, X0, dX0)
Problem = NonlinearProblem(F, X0, bcs = bcs, J = Js)
\end{lstlisting}
\subsubsection{Solving and results}
\label{linearel}
To solve the non-linear problem defined above, a Newton solver is configured.
\begin{lstlisting}[language=python]
solver = nls.petsc.NewtonSolver(mesh.comm, Problem)
# Absolute tolerance
solver.atol = 5e-10
# relative tolerance
solver.rtol = 1e-11
solver.convergence_criterion = "incremental"
\end{lstlisting}
The parameters were set according to Table \ref{tab:1}. During the resolution, we compute at each time step the $L^2$-norm error in pressure defined in Equation \ref{eq:34}. This quantity is easily evaluated within the FEniCSx environment by defining the following functions:
\begin{equation}
E({p^l})~=~\frac{\sqrt{\int_\Omega (p^l-p^{ex})^2\mathrm{d} x}}{\sqrt{\int_\Omega (p^{ex})^2\mathrm{d} x}}
\label{eq:34}
\end{equation}
Where $p^{ex}$ is the exact solution, computed from Terzaghi's analytical formula.
\begin{lstlisting}[language=python]
def terzaghi_p(x):
    kmax = 1e3
    p0, L = pinit, Height
    cv = kepsilon.value/mul.value*(lambda_m.value+2*mu.value)
    pression = 0
    for k in range(1, int(kmax)):
        pression += p0*4/np.pi*(-1)**(k-1)/(2*k-1)*np.cos((2*k-1)*0.5*np.pi*(x[1]/L))*np.exp(-(2*k-1)**2*0.25*np.pi**2*cv*t/L**2)
    pl = pression
    return pl
def L2_error_p(mesh, pressure_element, __p):
    V2 = FunctionSpace(mesh, pressure_element)
    pex = Function(V2)
    pex.interpolate(terzaghi_p)
    L2_errorp, L2_normp = form(inner(__p - pex, __p - pex) * dx), form(inner(pex, pex) * dx)
    error_localp = assemble_scalar(L2_errorp)/assemble_scalar(L2_normp)
    error_L2p = np.sqrt(mesh.comm.allreduce(error_localp, op=MPI.SUM))
    return error_L2p
\end{lstlisting}
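As a standalone sanity check of this series (independent of FEniCSx; the function name and parameter values below are illustrative, not taken from Table \ref{tab:1}), the truncated sum should recover the uniform initial pressure $p_0$ in the interior of the column at $t=0$:

```python
import numpy as np

def terzaghi_series(y, t, p0, L, cv, kmax=1000):
    # Truncated Fourier series of Terzaghi's 1D consolidation solution
    p = 0.0
    for k in range(1, kmax):
        p += (p0 * 4 / np.pi * (-1)**(k - 1) / (2 * k - 1)
              * np.cos((2 * k - 1) * 0.5 * np.pi * (y / L))
              * np.exp(-(2 * k - 1)**2 * 0.25 * np.pi**2 * cv * t / L**2))
    return p

# At t=0 the series must converge to the uniform initial pressure p0
p0, L, cv = 100.0, 1e-4, 1e-6   # illustrative values
p_mid = terzaghi_series(0.5 * L, 0.0, p0, L, cv)
assert abs(p_mid - p0) / p0 < 1e-2
```

At later times the series decays towards zero, reproducing the consolidation of the column.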
To make the code suitable for parallel computation, the local error contributions need to be combined across processors using the MPI.\textit{allreduce}() function.
Once the error functions are defined, the problem is solved within the time loop:
\begin{lstlisting}[language=python]
# Create an output xdmf file to store the values
xdmf = XDMFFile(mesh.comm, "./terzaghi.xdmf", "w")
xdmf.write_mesh(mesh)
# Solve the problem and evaluate values of interest
t = 0
L2_p = np.zeros(num_steps, dtype=PETSc.ScalarType)
for n in range(num_steps):
    t += dt
    num_its, converged = solver.solve(X0)
    X0.x.scatter_forward()
    # Update Value
    Xn.x.array[:] = X0.x.array
    Xn.x.scatter_forward()
    __u, __p = X0.split()
    # Export the results
    __u.name = "Displacement"
    __p.name = "Pressure"
    xdmf.write_function(__u, t)
    xdmf.write_function(__p, t)
    # Compute L2 norm for pressure
    error_L2p = L2_error_p(mesh, pressure_element, __p)
    L2_p[n] = error_L2p
    # Solve tracking
    if mesh.comm.rank == 0:
        print(f"Time step {n}, Number of iterations {num_its}, Load {T.value}, L2-error p {error_L2p:.2e}")
xdmf.close()
\end{lstlisting}
The results obtained for pressure and displacements are provided in Figure \ref{fig:2}. The code to evaluate the pressure at given points is provided in \ref{appendix:eval}.
\begin{figure}[ht!]
\centering
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Figure_2a.jpg}
\caption{}
\label{fig:2a}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Figure_2b.jpg}
\caption{}
\label{fig:2b}
\end{subfigure}
\caption{Comparison of the computed pore pressure against the analytical solution: in (a) time and (b) space. The pressure was well recovered based on the evaluation of the $L^2$-norm error $(3.57\pm2.46)\times10^{-3}$. }
\label{fig:2}
\end{figure}
The curves show that the simulation accurately reproduces the analytical solution. The accuracy was further supported by the $L^2$-norm error of the pressure, equal to $(3.57\pm2.46)\times10^{-3}$, which is deemed satisfactory. The same problem was solved using the legacy FEniCS version; the proposed FEniCSx implementation was faster, running in 9.48 seconds compared to 31.82 seconds previously.
To show the efficiency of the parallel computation, the 3D case (\ref{appendix:3D:case}) is considered. For a given spatio-temporal discretisation, a larger computational time of 1 hour 4 minutes 29 seconds is needed using FEniCSx. To reduce this time, the code naturally supports parallel computation, and the same code was run on several numbers of threads. On 2 threads, the code required 53 minutes 27 seconds; on 4 threads, the running time was further reduced to 46 minutes 27 seconds; finally, on 8 threads, the computation time dropped to 28 minutes 9 seconds.
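For reference, these timings translate into the following parallel speed-up and efficiency (a small post-processing sketch; we assume the first reported timing corresponds to a single-thread baseline):

```python
# Wall-clock times reported in the text, converted to seconds
times = {1: 1*3600 + 4*60 + 29,   # 3869 s
         2: 53*60 + 27,           # 3207 s
         4: 46*60 + 27,           # 2787 s
         8: 28*60 + 9}            # 1689 s

for n, t in times.items():
    speedup = times[1] / t
    efficiency = speedup / n
    print(f"{n} threads: speed-up {speedup:.2f}, efficiency {efficiency:.2f}")
```

The scaling is clearly sub-linear (a speed-up of about 2.3 on 8 threads), which is expected for a problem of this moderate size where solver communication costs are not negligible.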
Finally, a convergence analysis of the meshing of the column was carried out. The $L^2$ error metric was used and its evolution for an $n_x\times n_y$ discretised mesh is given in Figure \ref{fig:3}. As expected from the 1D behaviour of the confined-compression Terzaghi case, the error is almost independent of the choice of $n_x$. Figure \ref{fig:3}(a) shows that $n_y\geq10$ gives better estimations. According to Figure \ref{fig:3}(b), a balance between precision and computation time must be found: the more elements, the higher the computation time. To ensure a reliable solution, a mesh of $n_x\times n_y = 2 \times 40$ was used.
\begin{figure}[ht!]
\centering
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Figure_3a.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Figure_3b.jpg}
\caption{}
\end{subfigure}
\caption{Convergence analysis for an $n_x\times n_y$ discretised mesh: $L^2$-norm error (a) and computation time (b)}
\label{fig:3}
\end{figure}
\subsection{3D hyper-elastic scaffold}
\subsubsection{FEniCSx implementation}
The implementation method of the 3D case is the same. However, special attention must be paid to the boundaries. Indeed, moving from 2D to 3D introduces two more boundaries. Therefore, the definition of the Dirichlet boundary conditions is completed with:
\begin{lstlisting}[language=python]
# uz=0
facets = facet_tag.find(5)
dofs = locate_dofs_topological(MS.sub(0).sub(2), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(0).sub(2)))
# uz=0
facets = facet_tag.find(6)
dofs = locate_dofs_topological(MS.sub(0).sub(2), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(0).sub(2)))
\end{lstlisting}
The effective stress tensor is also different. As an example, the stress tensor resulting from the potential $W(\mathbf{F})=\Tilde{W}_1(I_1,J)+U_1(J)$ is defined in FEniCSx by:
\begin{lstlisting}[language=python]
def teff(u, lambda_m, mu):
    from ufl import variable, Identity, grad, det, tr, ln, diff
    ## Deformation gradient
    F = variable(Identity(len(u)) + grad(u))
    J = variable(det(F))
    ## Right Cauchy-Green tensor
    C = variable(F.T * F)
    ## Invariants of deformation tensors
    Ic = variable(tr(C))
    ## Potential
    W = (mu / 2) * (Ic - 3) - mu * ln(J) + (lambda_m / 2) * (ln(J))**2
    return diff(W, F)
\end{lstlisting}
All other developed potentials are available in the supplementary material.
\subsubsection{Results}
The same solver options as for the 2D case were used. To limit the computation time, the time step was made variable: dt=500 for $t\in[0,20000]$, dt=1000 for $t\in[20000,60000]$ and dt=10000 for $t\in[60000,100000]$. A total of 84 time steps was then considered.
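The variable time-step schedule can be assembled and checked with a few lines (a sketch reproducing the step counts stated above; all times in seconds):

```python
import numpy as np

# dt = 500 on [0, 20000], dt = 1000 on [20000, 60000], dt = 10000 on [60000, 100000]
dts = [500.0] * 40 + [1000.0] * 40 + [10000.0] * 4
t_grid = np.cumsum(dts)

assert len(dts) == 84          # 84 time steps in total, as stated
assert t_grid[39] == 20000.0   # end of the first refinement window
assert t_grid[-1] == 100000.0  # final time of the simulation
```

In practice, `dt` is simply reassigned inside the time loop when `t` crosses each window boundary.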
The parameters were set according to Table \ref{tab:2}. The results for the previously defined strain-energy potentials are given in Figure \ref{fig:4}. Each finite element problem was computed in $23.6\pm4.3$ seconds on 8 threads. Independently of the choice of the potential, the consolidated pressure was retrieved. On the contrary, the resulting displacement depends on the chosen potential, but the same order of magnitude is found in all cases, describing well the observations reported in \citet{SELVADURAI2016}.
In the absence of information about the porosity or the fluid bulk modulus in the reference study, two fluid bulk moduli were considered.
When the fluid bulk modulus is made close to that of water ($K^f=2.2\times10^{9}$), the hyper-elastic material recovers the expected values well. However, mismatches appear for a linear scaffold. This can result from the use of an elastic law at large deformations. For a lower value of the fluid bulk modulus, $K^f=5\times10^{5}$ (\textit{i.e.,} which can correspond to a non-constant value of the permeability and the porosity), the elastic behaviour was recovered but differences in the hyper-elastic formulations were obtained.
We believe that these differences result from a permeability depending on the stress state of the column, which is not developed in the reference paper ('Initial values of the permeability and viscosity are the same for all three materials.' from \citet{SELVADURAI2016}).
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Figure_4a.jpg}
\caption{}
\label{fig:4a}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Figure_4b.jpg}
\caption{}
\label{fig:4b}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Figure_4c.jpg}
\caption{}
\label{fig:4c}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Figure_4d.jpg}
\caption{}
\label{fig:4d}
\end{subfigure}
\caption{$K^f=2.2\times10^{9}$: (a) displacement of the top surface points and (b) pressure at the bottom of the column. $K^f=5\times10^{5}$: (c) displacement of the top surface points and (d) pressure at the bottom of the column. The computed Linear Elastic (LE) and Neo-Hookean (NH) responses for both volumetric functions and the calibrated parameters are superimposed with the expected values from \citet{SELVADURAI2016}.}
\label{fig:4}
\end{figure}
\section{Confined bi-compartment porous-elastic medium}
\label{sec:4}
Section \ref{sec:2} proposed a poro-mechanical modeling of a single-compartment porous medium (suitable for an avascularised tissue for instance). In case of \textit{in vivo} modeling, at least one more fluid phase is required: the blood. A 3D confined compression example of a column of height 100 \si{\micro\meter} is proposed, based on the variational formulation derived hereafter and on the \citet{Scium2021} study. The load is applied as a sinusoidal ramp up to a magnitude of 100 \si{\pascal} during 5 seconds. Then, the load is sustained for 125 seconds.
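The sinusoidal ramp can be written as a simple scalar function of time (a minimal sketch; `load_factor` is a hypothetical helper name, and the same half-cosine expression is used in the implementation section below to update the load constant at each step):

```python
import numpy as np

def load_factor(t, t_ramp=5.0):
    # Smooth half-cosine ramp from 0 to 1 over t_ramp, then sustained at 1
    if t < t_ramp:
        return 0.5 * (1.0 - np.cos(np.pi * t / t_ramp))
    return 1.0

assert load_factor(0.0) == 0.0             # no load at t = 0
assert abs(load_factor(2.5) - 0.5) < 1e-12 # half the load at mid-ramp
assert load_factor(5.0) == 1.0             # full load at the end of the ramp
assert load_factor(100.0) == 1.0           # load sustained afterwards
```

The half-cosine shape ensures the load rate is zero at both ends of the ramp, which helps the Newton solver during the first increments.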
For more complex geometries, a gmsh example of a rectangle geometry indented by a cylindrical beam on its top surface and the corresponding local refinement are proposed in \ref{sec:4.3.1}.
\begin{table}[ht!]
\centering
\begin{tabular}{llll}
\hline
Parameter & Symbol & Value & Unit \\ \hline
Young modulus & E & 5000 & \si{\pascal} \\
Poisson ratio & $\nu$ & 0.2 & - \\
IF viscosity & $\mu^l$ & 1 & \si{\pascal\second} \\
Intrinsic permeability & $k^{\varepsilon}$ & $1.\times 10^{-14}$ & \si{\square\meter} \\
Biot coefficient & $\beta$ & 1 & - \\
Density of phase $\alpha$ & $\rho^\alpha$ & - & \si{\kilo\gram\per\cubic\meter} \\
Porosity & $\varepsilon^l$ & 0.5 & - \\
Vessel bulk modulus & $K^\nu $ & $1\times 10^{3}$ & \si{\pascal} \\
Vessel intrinsic permeability & $k^\varepsilon_b$ & $2\times 10^{-16}$ or $4\times 10^{-16}$ & \si{\square\meter} \\
Blood viscosity & $\mu^b$ & $4.0\times 10^{-3}$ & \si{\pascal\second} \\
Initial vascular porosity & $\varepsilon^b_0$ & 0\% or 2\% or 4\% & - \\
Vascular porosity & $\varepsilon^b $ & Equation \ref{eq:48} & - \\ \hline
\end{tabular}%
\caption{Mechanical parameters for the bi-compartment model}
\label{tab:5}
\end{table}
\subsection{Governing Equations}
Let one consider a vascular multi-compartment structure composed of a solid scaffold filled with interstitial fluid (IF) and blood. The medium is assumed fully saturated. The following convention is assumed: $\bullet^s$ denotes the solid phase, $\bullet^l$ denotes the interstitial fluid phase (IF) and $\bullet^b$ denotes the vascular part. The primary variables of the problem are the pressure applied in the pores of the extra-vascular part of the porous medium, namely $p^l$, the blood pressure, namely $p^b$, and the displacement of the solid scaffold, namely $\mathbf{u}^s$. Equation \ref{eq:36} links the different volume fractions. The volume fraction of the phase $\alpha$ is defined by Equation \ref{eq:2}. $\varepsilon^l$ is called the extra-vascular porosity of the medium.
\begin{align}
\varepsilon^s+\varepsilon^l+\varepsilon^b=1 \label{eq:36}
\end{align}
Assuming that there is no inter-phase mass transport (\textit{i.e.} the IF and the blood are assumed pure phases), the continuity equations (mass conservation) of the solid, the IF and the blood phases are respectively given by Equations \ref{eq:37}, \ref{eq:38} and \ref{eq:39}.
\begin{align}
\frac{\partial}{\partial t}(\rho^s(1-\varepsilon^l-\varepsilon^b))+\nabla\cdot(\rho^s(1-\varepsilon^l-\varepsilon^b)\mathbf{v^s}) = 0 \label{eq:37}\\
\frac{\partial}{\partial t}(\rho^l\varepsilon^l)+\nabla\cdot(\rho^l\varepsilon^l \mathbf{v}^l) = 0\label{eq:38}\\
\frac{\partial}{\partial t}(\rho^b\varepsilon^b)+\nabla\cdot(\rho^b\varepsilon^b \mathbf{v}^b) = 0
\label{eq:39}
\end{align}
According to section \ref{sec:2.1}, and dividing each equation by the corresponding density, the continuity equations can be re-expressed as:
\begin{align}
\frac{\mathrm{D}^s}{\mathrm{D}t}(1-\varepsilon^l-\varepsilon^b)+(1-\varepsilon^l-\varepsilon^b)\nabla\cdot\mathbf{v^s} = 0 \label{eq:40}\\
\frac{\mathrm{D}^s\varepsilon^l}{\mathrm{D}t}+\nabla\cdot(\varepsilon^l (\mathbf{v}^l-\mathbf{v}^s)) + \varepsilon^l \nabla\cdot\mathbf{v^s} = 0
\label{eq:41}\\
\frac{\mathrm{D}^s\varepsilon^b}{\mathrm{D}t}+\nabla\cdot(\varepsilon^b (\mathbf{v}^b-\mathbf{v}^s)) + \varepsilon^b \nabla\cdot\mathbf{v^s} = 0\label{eq:42}
\end{align}
For the fluid phase, Darcy's law (Equation \ref{eq:43}, \ref{eq:44}) is used to evaluate the fluid flow in the porous medium.
\begin{align}
\varepsilon^l(\mathbf{v}^l-\mathbf{v}^s) = - \frac{k^\varepsilon}{\mu^l}(\mathbf{\nabla}p^l-\rho^l\mathbf{g})\label{eq:43}\\
\varepsilon^b(\mathbf{v}^b-\mathbf{v}^s) = - \frac{k^b}{\mu^b}(\mathbf{\nabla}p^b-\rho^b\mathbf{g}) \label{eq:44}
\end{align}
{\noindent Where $k^\varepsilon$, $k^b$ are the intrinsic permeabilities (\si{\square \meter}), $\mu^l$, $\mu^b$ are the dynamic viscosities (\si{\pascal \second}), $p^l$, $p^b$ the pressures and $\mathbf{g}$ the gravity.}
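To fix orders of magnitude, Equation \ref{eq:43} can be evaluated with the IF parameters of Table \ref{tab:5} (an illustrative estimate, assuming a pressure drop of 100 \si{\pascal} over the 100 \si{\micro\meter} column and neglecting gravity):

```python
# Order-of-magnitude Darcy flux for the IF phase (gravity neglected)
k_eps = 1e-14      # intrinsic permeability [m^2] (Table 5)
mu_l = 1.0         # IF dynamic viscosity [Pa.s] (Table 5)
dp = 100.0         # assumed pressure drop over the column [Pa]
height = 100e-6    # column height [m]

grad_p = dp / height                  # ~1e6 Pa/m
darcy_flux = k_eps / mu_l * grad_p    # epsilon^l (v^l - v^s), in m/s
assert abs(darcy_flux - 1e-8) < 1e-12
```

A flux of the order of $10^{-8}$ \si{\meter\per\second} over a 100 \si{\micro\meter} column explains the consolidation times of the order of $10^4$ seconds considered later in this section.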
Equation \ref{eq:40} gives the following relationship:
\begin{align}
\frac{\mathrm{D}^s\varepsilon^l}{\mathrm{D}t}=-\frac{\mathrm{D}^s\varepsilon^b}{\mathrm{D}t}+(1-\varepsilon^l-\varepsilon^b)\nabla\cdot\mathbf{v^s} \label{eq:45}
\end{align}
Considering Equations \ref{eq:43}, \ref{eq:45}, Equation \ref{eq:41} becomes:
\begin{align}
-\frac{\mathrm{D}^s\varepsilon^b}{\mathrm{D}t}-\nabla\cdot(\frac{k^\varepsilon}{\mu^l}\mathbf{\nabla}p^l) +(1-\varepsilon^b)\nabla\cdot\mathbf{v^s}= 0
\label{eq:46}
\end{align}
Then, using Equation \ref{eq:44}, Equation \ref{eq:42} gives:
\begin{align}
\frac{\mathrm{D}^s\varepsilon^b}{\mathrm{D}t}- \nabla\cdot(\frac{k^b}{\mu^b}\mathbf{\nabla}p^b) + \varepsilon^b \nabla\cdot\mathbf{v^s} = 0\label{eq:47}
\end{align}
Considering a vascular tissue, we assume that the blood vessels are mostly surrounded by IF, so that they have weak direct interaction with the solid scaffold. Furthermore, the vessels are assumed compressible. Therefore, a state equation for the volume fraction of blood is introduced in Equation \ref{eq:48}.
\begin{align}
\varepsilon^b = \varepsilon^b_0 \cdot \left( 1 - \frac{p^l-p^b}{K^{\nu}}\right)\label{eq:48}
\end{align}
\noindent Where $\varepsilon^b_0$ denotes the blood volume fraction when $p^l=p^b$, $K^{\nu}$ is the vessel compressibility.
It follows that Equations \ref{eq:46}, \ref{eq:47} can be re-written as:
\begin{align}
\frac{\varepsilon^b_0}{K^{\nu}}\left(\frac{\mathrm{D}^s p^l}{\mathrm{D}t}-\frac{\mathrm{D}^s p^b}{\mathrm{D}t}\right)-\nabla\cdot(\frac{k^\varepsilon}{\mu^l}\mathbf{\nabla}p^l) +(1-\varepsilon^b)\nabla\cdot\mathbf{v^s}= 0
\label{eq:49}\\
-\frac{\varepsilon^b_0}{K^{\nu}}\left(\frac{\mathrm{D}^s p^l}{\mathrm{D}t}-\frac{\mathrm{D}^s p^b}{\mathrm{D}t}\right) - \nabla\cdot(\frac{k^b}{\mu^b}\mathbf{\nabla}p^b) + \varepsilon^b \nabla\cdot\mathbf{v^s} = 0\label{eq:50}
\end{align}
Once the continuity equations are established, one can define the quasi-static momentum balance of the porous medium, Equation \ref{eq:51}.
\begin{equation}
\mathbf{\nabla}\cdot\mathbf{t}^{\text{tot}} = 0
\label{eq:51}
\end{equation}
\noindent Where $\mathbf{t}^{\text{tot}}$ is the total Cauchy stress tensor. We introduce an effective stress tensor denoted $\mathbf{t}^\text{eff}$, responsible for all deformation of the solid scaffold. Then, $\mathbf{t}^{\text{tot}}$ can be expressed as:
\begin{align}
\mathbf{t}^{tot} = \mathbf{t}^\text{eff} - (1-\zeta)p^l\mathbf{I_d} - \zeta p^b\mathbf{I_d} \label{eq:52}\\
\mathbf{\epsilon}(\mathbf{u})=\frac{1}{2}(\nabla\mathbf{u}+\nabla\mathbf{u}^\text{T})\label{eq:53}\\
\mathbf{t}^\text{eff} = 2\mu\mathbf{\epsilon}(\mathbf{u}^s)+\lambda \text{tr}(\mathbf{\epsilon}(\mathbf{u}^s))\mathbf{I_d} \label{eq:54}\\
\zeta = \varepsilon_0^b\left(1-2\frac{p^l-p^b}{K^\nu}\right)\label{eq:55}
\end{align}
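For completeness, the Lamé coefficients entering Equation \ref{eq:54} and the resulting effective stress can be checked numerically (a standalone sketch using the values of Table \ref{tab:5}; the strain value is arbitrary, chosen only for illustration):

```python
import numpy as np

# Lame coefficients from E and nu (Table 5); same formulas as in the
# FEniCSx snippet of the implementation section
E, nu = 5000.0, 0.2
lambda_m = E * nu / ((1 + nu) * (1 - 2 * nu))
mu = E / (2 * (1 + nu))

# Effective stress (Equation 54) for an arbitrary uniaxial strain eps_yy = -0.01
eps = np.diag([0.0, -0.01, 0.0])
t_eff = 2 * mu * eps + lambda_m * np.trace(eps) * np.eye(3)

assert abs(lambda_m - 1388.8889) < 1e-3   # Pa
assert abs(mu - 2083.3333) < 1e-3         # Pa
# Axial component equals -(2 mu + lambda) * |eps_yy| (confined compression modulus)
assert abs(t_eff[1, 1] + (2 * mu + lambda_m) * 0.01) < 1e-9
```

The factor $2\mu+\lambda$ appearing in the axial stress is the confined compression (oedometric) modulus, the same combination that drives the consolidation coefficient used in the Terzaghi verification of Section \ref{linearel}.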
Three boundaries are defined: $\Gamma_u$, with imposed displacement (Equation \ref{eq:57}); $\Gamma_s$, with imposed external forces (Equation \ref{eq:56}); and $\Gamma_{p}$, with imposed pressure (fluid leakage condition, Equations \ref{eq:58}, \ref{eq:59}). We obtain:
\begin{align}
\mathbf{t}^\text{eff} = \mathbf{t}^\text{imposed}~\text{on}~\Gamma_s \label{eq:56}\\
\mathbf{u}^s=\mathbf{u}^\text{imposed}~\text{on}~\Gamma_u \label{eq:57}\\
p^l=0~\text{on}~\Gamma_{p}\label{eq:58}\\
p^b=0~\text{on}~\Gamma_{p}\label{eq:59}
\end{align}
The initial conditions are given in Table \ref{tab:6}.
\begin{table}[ht!]
\centering
\begin{tabular}{llll}
\hline
Parameter & Symbol & Value & Unit \\ \hline
Displacement & $\mathbf{u}^s$ & 0 & \si{\meter} \\
Displacement at previous step & $\mathbf{u}^s_n$ & 0 & \si{\meter} \\
IF pressure & $p^l$ & 0 & \si{\pascal} \\
IF pressure at previous step & $p^l_n$ & 0 & \si{\pascal} \\
Blood pressure & $p^b$ & 0 & \si{\pascal} \\
Blood pressure at previous time step & $p^b_n$ & 0 & \si{\pascal} \\
Vascular porosity & $\varepsilon^b$ & $\varepsilon^b_0$ & - \\ \hline
\end{tabular}%
\caption{Initial conditions for the bi-compartment model}
\label{tab:6}
\end{table}
\subsection{Variational Form}
For the computation of the FE model, the variational form of Equations \ref{eq:49}-\ref{eq:51} must be introduced. Let one consider ($q^l$, $q^b$, $\mathbf{v}$) the test functions defined in the mixed space $\text{L}_0^2(\Omega)\times\text{L}_0^2(\Omega)\times[\text{H}^1(\Omega)]^3$.
With a first-order approximation in time, Equations \ref{eq:49}, \ref{eq:50} give:
\begin{align}
\begin{split}
-\frac{\varepsilon^b_0}{K^{\nu}}\frac{1}{dt}\int_{\Omega} (p^b-p^b_n-p^l+p^l_n)q^l\text{d}\Omega+\frac{1-\varepsilon^b}{dt}\int_{\Omega} \mathbf{\nabla}\cdot(\mathbf{u}^s-\mathbf{u}^s_n)q^l\text{d}\Omega \\
+ \frac{k^\varepsilon}{\mu^l} \int_{\Omega} \mathbf{\nabla}p^l\cdot\mathbf{\nabla}q^l\text{d}\Omega = 0, \forall~q^l\in~\text{L}_0^2(\Omega)
\end{split}
\label{eq:60}\\
\begin{split}
\frac{\varepsilon^b_0}{K^{\nu}}\frac{1}{dt}\int_{\Omega} (p^b-p^b_n-p^l+p^l_n)q^b\text{d}\Omega+\frac{\varepsilon^b}{dt}\int_{\Omega} \mathbf{\nabla}\cdot(\mathbf{u}^s-\mathbf{u}^s_n)q^b\text{d}\Omega \\
+ \frac{k^b}{\mu^b} \int_{\Omega} \mathbf{\nabla}p^b\cdot\mathbf{\nabla}q^b\text{d}\Omega = 0, \forall~q^b\in~\text{L}_0^2(\Omega)
\end{split}
\label{eq:61}
\end{align}
Similarly, by integrating Equation \ref{eq:51} by parts and including the Neumann boundary conditions, we get:
\begin{align}
\begin{split}
\int_{\Omega} \mathbf{t}^\text{eff}:\mathbf{\nabla}\mathbf{v}\text{d}\Omega -\int_{\Omega} (1-\zeta)p^l\mathbf{\nabla}\cdot\mathbf{v}\text{d}\Omega\\ -\int_{\Omega} \zeta p^b\mathbf{\nabla}\cdot\mathbf{v}\text{d}\Omega \\
- \int_{\Gamma_s} \mathbf{t}^\text{imposed} \cdot \mathbf{v} \text{d}\Gamma_s=0, \forall~v\in~[\text{H}^1(\Omega)]^3
\label{eq:62}
\end{split}
\end{align}
\subsection{FEniCSx Implementation}
This section provides the code of a multi-compartment 3D column in confined compression. In order to evaluate the FEniCSx implementation, this case is similar to the Cast3m solution proposed in \citet{Scium2021}. Three cases are studied: an avascular tissue, a vascular porosity of 2\%, and a vascular porosity of 4\%. The load is applied as a sine ramp during 5 seconds and then sustained during 125 seconds.
The time discretisation is introduced.
\begin{lstlisting}[language=python]
t, t_ramp, t_sust = 0, 5, 125 # Start time
Tf = t_ramp+t_sust # End time
num_steps = 1301 # Number of time steps
dt = (Tf-t)/num_steps # Time step size
\end{lstlisting}
We then introduce the material parameters according to Table \ref{tab:5}. The three cases of vascularisation and Equation \ref{eq:55} are defined.
\begin{lstlisting}[language=python]
E = Constant(mesh, ScalarType(5000))
nu = Constant(mesh, ScalarType(0.2))
kepsilon_l = Constant(mesh, ScalarType(1e-14))
mu_l = Constant(mesh, ScalarType(1))
lambda_m = Constant(mesh, ScalarType(E.value*nu.value/((1+nu.value)*(1-2*nu.value))))
mu = Constant(mesh, ScalarType(E.value/(2*(1+nu.value))))
Knu = Constant(mesh, ScalarType(1000)) #compressibility of the vessels
mu_b = Constant(mesh, ScalarType(0.004)) #dynamic mu_l of the blood
case = 1
if case == 0:
    epsilon_b_0 = Constant(mesh, ScalarType(0.00)) # initial vascular porosity
    k_b = Constant(mesh, ScalarType(2e-16)) # intrinsic permeability of vessels
    def zeta(pl, pb):
        return Constant(mesh, ScalarType(0.))
elif case == 1:
    epsilon_b_0 = Constant(mesh, ScalarType(0.02)) # initial vascular porosity
    k_b = Constant(mesh, ScalarType(2e-16)) # intrinsic permeability of vessels
    def zeta(pl, pb):
        return epsilon_b_0.value*(1-2*(pl-pb)/Knu.value)
elif case == 2:
    epsilon_b_0 = Constant(mesh, ScalarType(0.04)) # initial vascular porosity
    k_b = Constant(mesh, ScalarType(4e-16)) # intrinsic permeability of vessels
    def zeta(pl, pb):
        return epsilon_b_0.value*(1-2*(pl-pb)/Knu.value)
\end{lstlisting}
Then, the integration space, boundary and initial conditions are set up for the displacement, the IF pressure and the blood pressure.
\begin{lstlisting}[language=python]
## Mechanical loading (Terzaghi)
pinit = 200 #[Pa]
T = Constant(mesh,ScalarType(-pinit))
## Define Mixed Space (R2,R, R) -> (u,pl, pb)
element = VectorElement("CG", mesh.ufl_cell(), 2)
pressure_element = FiniteElement("CG", mesh.ufl_cell(), 1)
MS = FunctionSpace(mesh, MixedElement([element,pressure_element,pressure_element]))
# Create the solution and initial spaces
X0 = Function(MS)
Xn = Function(MS)
# Create the surfacic element
ds = Measure("ds", domain=mesh, subdomain_data=facet_tag)
# compute the normals
normal = FacetNormal(mesh)
# Define the Dirichlet conditions
bcs = []
# uy=0
facets = facet_tag.find(1)
dofs = locate_dofs_topological(MS.sub(0).sub(1), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(0).sub(1)))
# ux=0
facets = facet_tag.find(2)
dofs = locate_dofs_topological(MS.sub(0).sub(0), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(0).sub(0)))
# ux=0
facets = facet_tag.find(4)
dofs = locate_dofs_topological(MS.sub(0).sub(0), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(0).sub(0)))
# uz=0
facets = facet_tag.find(5)
dofs = locate_dofs_topological(MS.sub(0).sub(2), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(0).sub(2)))
# uz=0
facets = facet_tag.find(6)
dofs = locate_dofs_topological(MS.sub(0).sub(2), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(0).sub(2)))
# leakage pl=pb=0
facets = facet_tag.find(3)
dofs = locate_dofs_topological(MS.sub(1), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(1)))
dofs = locate_dofs_topological(MS.sub(2), fdim, facets)
bcs.append(dirichletbc(ScalarType(0), dofs, MS.sub(2)))
# Set Initial values
# Displacement
Un_, Un_to_MS = MS.sub(0).collapse()
FUn_ = Function(Un_)
with FUn_.vector.localForm() as initial_local:
    initial_local.set(ScalarType(0.0))
# Update Xn for all threads
Xn.x.array[Un_to_MS] = FUn_.x.array
Xn.x.scatter_forward()
# IF Pressure
Pn_, Pn_to_MS = MS.sub(1).collapse()
FPn_ = Function(Pn_)
with FPn_.vector.localForm() as initial_local:
    initial_local.set(ScalarType(0))
# Update Xn for all threads
Xn.x.array[Pn_to_MS] = FPn_.x.array
Xn.x.scatter_forward()
# Blood Pressure
Pbn_, Pbn_to_MS = MS.sub(2).collapse()
FPbn_ = Function(Pbn_)
with FPbn_.vector.localForm() as initial_local:
    initial_local.set(ScalarType(0))
# Update Xn for all threads
Xn.x.array[Pbn_to_MS] = FPbn_.x.array
Xn.x.scatter_forward()
\end{lstlisting}
Internal variables are required. Since the vessels are compressible, we include the evolution of the vascular porosity as a function representing Equation \ref{eq:48}.
\begin{lstlisting}[language=python]
# Internal variables: vascular porosity
Poro_space = FunctionSpace(mesh,pressure_element)
poro_b = Function(Poro_space) # vascular porosity
# Initialize
with poro_b.vector.localForm() as initial_local:
    initial_local.set(ScalarType(epsilon_b_0.value))
# Update
poro_b.x.scatter_forward()
poro_b.name="poro_b"
\end{lstlisting}
An XDMF file is opened to store the results.
\begin{lstlisting}[language=python]
xdmf = XDMFFile(mesh.comm, "terzaghi.xdmf", "w")
xdmf.write_mesh(mesh)
\end{lstlisting}
The test functions as well as the variational form are introduced according to Equations \ref{eq:60}, \ref{eq:61} and \ref{eq:62}.
\begin{lstlisting}[language=python]
u, pl, pb = split(X0)
u_n, pl_n, pb_n = split(Xn)
v, ql, qb = TestFunctions(MS)
dx = Measure("dx", metadata={"quadrature_degree": 4})
F = (1-poro_b)*(1/dt)*nabla_div(u-u_n)*ql*dx + ( kepsilon_l/(mu_l) )*dot( grad(pl),grad(ql) )*dx - (epsilon_b_0/Knu)*( (1/dt)*(pb-pb_n-pl+pl_n) )*ql*dx
F += poro_b*(1/dt)*nabla_div(u-u_n)*qb*dx + ( k_b/(mu_b) )*dot( grad(pb),grad(qb) )*dx + (epsilon_b_0/Knu)*( (1/dt)*(pb-pb_n-pl+pl_n) )*qb*dx
F += inner(grad(v),teff(u))*dx - (1-zeta(pl,pb))*pl*nabla_div(v)*dx - zeta(pl,pb)*pb*nabla_div(v)*dx - T*inner(v,normal)*ds(3)
\end{lstlisting}
Finally, the problem to be solved is defined and a Newton method is used. At each time step, the load and the vascular porosity are updated and the results are stored in the XDMF file.
\begin{lstlisting}[language=python]
dX0 = TrialFunction(MS)
J = derivative(F, X0, dX0)
Problem = NonlinearProblem(F, X0, bcs = bcs, J = J)
solver = nls.petsc.NewtonSolver(mesh.comm, Problem)
# Set Newton solver options
solver.atol = 5e-10
solver.rtol = 1e-11
solver.convergence_criterion = "incremental"
t = 0
for n in range(num_steps):
    t += dt
    if t < t_ramp:
        f1 = 0.5 * (1 - np.cos(np.pi*t/t_ramp))
    else:
        f1 = 1
    T.value = -200*f1
    num_its, converged = solver.solve(X0)
    X0.x.scatter_forward()
    # Update Value
    Xn.x.array[:] = X0.x.array
    Xn.x.scatter_forward()
    # Update porosity
    poro_b.x.array[:] = epsilon_b_0.value*(1-(1/Knu.value)*(X0.x.array[Pn_to_MS]-X0.x.array[Pbn_to_MS]))
    poro_b.x.scatter_forward()
    # Save data
    __u, __pl, __pb = X0.split()
    __u.name = "Displacement"
    __pl.name = "Pressure IF"
    __pb.name = "Pressure blood"
    xdmf.write_function(__u, t)
    xdmf.write_function(__pl, t)
    xdmf.write_function(__pb, t)
    xdmf.write_function(poro_b, t)
xdmf.close()
\end{lstlisting}
\subsection{Results}
\begin{figure}[ht!]
\centering
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Figure_5a.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Figure_5b.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Figure_5c.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth,trim={3cm 2.5cm 3cm 2.8cm},clip]{Figure_5d.jpg}
\end{subfigure}
\caption{Comparison of the results obtained using FEniCSx against the results of \citet{Scium2021}. All results were shifted to obtain similar figures. The solid, dotted and dashed lines respectively represent the 0\%, 2\% and 4\% initial vascular porosity. (a) Evolution of the pressure at the bottom points. (b) Displacement of the top points. (c) Vascular porosity at the bottom points. The behaviour was well retrieved in all cases, with a NRMSE lower than 10\% for all variables according to Table \ref{tab:7}.}
\label{fig:5}
\end{figure}
The evolution of the vascular and interstitial pressures at the bottom points and the vertical displacement at the top points are provided in Figure \ref{fig:5}. Each solution was obtained in $6\pm 2$ minutes on 8 threads. The overall behaviour of the interstitial fluid pressure, the blood pressure and the solid displacement was retrieved. To quantitatively assess the reliability of our implemented model, the normalized root mean square error (NRMSE, Equation \ref{eq:63}) was computed for each case against the results obtained with Cast3m in \citet{Scium2021}, see Table \ref{tab:7}.
\begin{equation}
\text{NRMSE}(x,x^\text{ref})=\frac{\sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i-x_i^\text{ref})^2}}{\text{mean}(x^\text{ref})}
\label{eq:63}
\end{equation}
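Equation \ref{eq:63} translates directly into a few lines of NumPy (a sketch; here `x` and `x_ref` stand for the FEniCSx and Cast3m time series, and the toy arrays below are purely illustrative):

```python
import numpy as np

def nrmse(x, x_ref):
    # Root mean square error normalised by the mean of the reference signal
    x, x_ref = np.asarray(x, float), np.asarray(x_ref, float)
    return np.sqrt(np.mean((x - x_ref)**2)) / np.mean(x_ref)

# Identical signals give a zero error
assert nrmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
# A known toy example: RMSE = sqrt(1/3), mean(ref) = 7/3
assert abs(nrmse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]) - np.sqrt(1/3) / (7/3)) < 1e-12
```

Note that this metric is only meaningful for signals whose reference mean is far from zero, which is the case for the pressure, displacement and porosity curves compared here.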
\begin{table}[ht!]
\centering
\begin{tabular}{lccc}
\hline
Parameter & 0\% & 2\% & 4\% \\ \hline
$p^l$ & 1.4 \si{\percent} & 3.1 \si{\percent} & 5.1 \si{\percent} \\
$u_y$ & 0.3 \si{\percent} & 2.2 \si{\percent} & 3.7 \si{\percent} \\
$p^b$ & - & 4.7 \si{\percent} & 8.8 \si{\percent} \\
$\varepsilon^b$ & - & 0.4 \si{\percent} & 0.6 \si{\percent} \\
\end{tabular}%
\caption{NRMSE computed for each studied variable.}
\label{tab:7}
\end{table}
The NRMSE was found to be lower than 10\% for all unknowns. The differences are assumed to result from the resolution method, which differs between Cast3m and FEniCSx: the Cast3m procedure relies on a staggered solver, whereas our results were obtained using a monolithic solver. The order of magnitude of the NRMSE nevertheless led us to consider our solution trustworthy.
\section{Conclusion}
\textcolor{black}{The objective of this paper was to propose a step-by-step explanation of how to implement several poro-mechanical models in FEniCSx, with special attention to parallel computation}. Several benchmark cases for a mixed formulation were evaluated. First, a confined column was simulated under compression. Accurate results, according to the $L^2$-norm error, were found compared to the analytical solution. Furthermore, the code ran 3 times faster than in the legacy FEniCS environment. Then, a possible implementation of a hyper-elastic formulation was proposed. The model was validated using the values of \citet{SELVADURAI2016}. Finally, a confined bi-compartment sample was simulated. The results were compared to \citet{Scium2021} data. Small differences were observed due to the choice of the solver (staggered or monolithic) but remained acceptable. \textcolor{black}{The authors hope that this paper will contribute to facilitate the use of poroelasticity in the biomechanical engineering community. This article and its supplementary material constitute a starting point for readers to implement their own material models at a preferred level of complexity.}
\section{Supplementary material}
The python codes corresponding to the workflows and the docker file of this article are made available for 2D and 3D cases on the following link: \url{https://github.com/Th0masLavigne/Dolfinx_Porous_Media.git}.
\section{Declaration of Competing Interest}
Authors have no conflicts of interest to report.
\section{Acknowledgment}
This research was funded in whole, or in part, by the Luxembourg National Research Fund (FNR), grant reference No. 17013182. For the purpose of open access, the author has applied a Creative Commons Attribution 4.0 International (CC BY 4.0) license to any Author Accepted Manuscript version arising from this submission. The present project is also supported by the National Research Fund, Luxembourg, under grant No. C20/MS/14782078/QuaC.
\clearpage
\section{Introduction}
\label{section:Intro}
The realization of Majorana bound states (MBSs) in topological superconductors (SCs) has received great attention during the last decade, not only because they represent a new state of matter but also due to their potential for novel applications~
\cite{kitaev09,RevModPhys.80.1083,Leijnse2012Introduction,Aguadoreview17,Lutchyn2018Majorana,zhang2019next,beenakker2019search,doi:10.1146/annurev-conmatphys-031218-013618,2020Aguado}. A promising route to engineer this topological state combines one-dimensional (1D) semiconducting nanowires (NWs) with strong Rashba spin-orbit coupling (SOC), proximity induced $s$-wave superconductivity, and large enough magnetic fields \cite{PhysRevLett.105.077001,PhysRevLett.105.177002,Alicea:PRB10}. Here, MBSs emerge at the ends of the NW and tunneling into one MBS has theoretically been shown to produce zero-bias conductance peaks of height $2e^{2}/h$ \cite{PhysRevLett.98.237002,PhysRevLett.103.237001,PhysRevB.82.180516}. These ideas have motivated large experimental efforts and have already led to the fabrication of high quality samples and zero-bias conductance measurements which, however, only partially agree with the theoretical predictions \cite{Mourik:S12,Higginbotham2015Parity,Deng16,Albrecht16,zhang16,Suominen17,Nichele17,gul2018ballistic}.
Part of the disagreement likely stems from the fact that recent studies have reported zero-bias conductance peaks due to topologically trivial zero-energy Andreev bound states, which are therefore not related to MBSs or topology \cite{PhysRevB.86.100503,PhysRevB.86.180503,PhysRevB.91.024514,JorgeEPs,StickDas17,Ptok2017Controlling,Fer18,PhysRevB.98.245407,Hell2018Distinguishing,PhysRevLett.123.107703,PhysRevB.100.155429,PhysRevLett.123.217003,10.21468/SciPostPhys.7.5.061,avila2019non,PhysRevResearch.2.013377,PhysRevLett.125.017701,PhysRevLett.125.116803,Olesia2020,yu2020non,valentini2020nontopological,prada2019andreev,Schulenborg2020Absence,PhysRevB.104.134507,PhysRevB.104.L020501,marra2021majoranaandreev,Feng2022Probing,Schuray2020Transport,Zhang2020Dsitinguishing,Grabsch2020Andreev,Zhang2020Transport,Mukhopadhyay2021Thermal,Zhang2021Large}. A particularly relevant mechanism for generating such topologically trivial zero-energy states (TZES), very likely present in many recent experiments, is spatial inhomogeneities in the chemical potential profile \cite{PhysRevB.86.180503,PhysRevB.91.024514,JorgeEPs,DasSarma2021Disorder}. Interestingly, such inhomogeneities, and thus TZES, have been shown to naturally appear due to the finite size of the SC when strongly coupled to the NW \cite{Reeg2018Metallization,Reeg2018Zero,ReegBeilstein,Awoga2019Supercurrent}.
Strong coupling between the SC and NW also leads to a renormalization of the normal-state parameters in the NW~\cite{Potter2011Enginering,Stanescu:PRB11,Bena2,StanescuModel13,Cole2015Effects,Stanescu2017Proximity,Reeg2017Finite,Reeg2018Metallization,ReegBeilstein,deMoor2018Electric,Awoga2019Supercurrent}, which both substantially changes the NW properties and forces the use of a larger magnetic field to reach the topological phase transition. Such large magnetic fields, in turn, can deteriorate the induced superconductivity in the NW, imposing strict requirements on the superconducting material in the strong coupling regime. Thus, while the strong coupling regime naturally provides a strong superconducting proximity effect in the NW, it also introduces complications that easily challenge the realization and proper identification of MBSs.
\begin{figure}[!t]
\centering
\includegraphics[width=0.47\textwidth]{Fig1.pdf}
\caption{(a) Schematics of a 1D NW (cyan) with length $L_{\rm nw}$ in a parallel magnetic field, $B$, coupled with strength $\Gamma$ to a 2D SC with superconducting order parameter $\Delta_{\rm sc}$ and length $L_x$ and width $L_y$ (green). (b) Same as (a) but where part of the NW is not coupled to the SC, remaining in the normal state, such that the NW-SC hybrid system forms a SN junction.
}
\label{fig1}
\end{figure}
In this work we consider a 1D semiconductor NW with Rashba SOC coupled to a 2D conventional $s$-wave SC, see Figs.\,\ref{fig1}(a,b), and investigate the emergence of topological superconductivity at finite magnetic fields. We demonstrate that, in the weak coupling regime, the topological phase transition does not depend on the finite size of the SC and can be reached by relatively small magnetic fields, in contrast to the strong coupling regime, where a strong dependence on the SC size exists and substantially larger magnetic fields are required. Most interestingly, we find that the induced energy gap in the topological phase at weak coupling is similar to or even larger than the gap in strongly coupled NWs. Moreover, this energy gap is tunable by the chemical potential in the SC, such that it easily acquires large values for both thin and thick SCs, which is crucial for the topological protection of MBSs. Furthermore, we show that TZESs do not emerge in the weak coupling regime, contrary to the strong coupling regime, which is plagued by the natural appearance of TZESs.
Our results thus demonstrate that the weak coupling regime of NW-SC systems is surprisingly beneficial for low magnetic field topological superconductivity and topologically protected MBSs.
The remainder of this article is organized as follows. We introduce the model and method used in this study in Section~\ref{sec:Model}. In Section~\ref{sec:PhaseDiag} we present the phase diagram of the system and discuss the effects of finite size and chemical potential of the SC on the topological phase transition. In Section~\ref{sec:ZeemanDep} we compare the induced energy gap in the topological phase for weakly and strongly coupled NW-SC systems and also illustrate the sensitivity of the induced energy gap to the chemical potential of the SC. In Section~\ref{sec:NoTZES}, we discuss the absence or presence of TZES from a coupling strength perspective. Finally, in Section~\ref{sec:concl}, we present our conclusions.
\section{Model and method}
\label{sec:Model}
We consider a one-dimensional NW with strong SOC in a parallel magnetic field, which induces a Zeeman field $B$, coupled to a conventional 2D spin-singlet $s$-wave SC, as schematically shown in Fig.\,\ref{fig1}(a). The total coupled NW-SC system is modelled by
\begin{equation}\label{eq:Hamilt}
H=H_{\rm nw}+H_{\rm sc}+H_{\Gamma}\,,
\end{equation}
where
\begin{equation*}
\begin{split}
H_{\rm nw}&=\sum_{x,\sigma,\sigma'} d_{x\sigma}^{\dagger} \left[\varepsilon_{\rm nw}\sigma_{\sigma,\sigma'}^0 + B \sigma_{\sigma,\sigma'}^x \right] d_{x\sigma'}\\
&+ \sum_{x,\sigma,\sigma'} d^{\dagger}_{x\sigma} \left[ -t_{\rm nw} \sigma_{\sigma,\sigma'}^0 +i\alpha_{\rm nw}\sigma_{\sigma,\sigma'}^y \right] d_{x+1,\sigma'}+ \text{H.c.}\,,\\
H_{\rm sc}&=\sum_{ij\sigma}c^{\dagger}_{i\sigma} \left[\varepsilon_{\rm sc} \delta_{i,j}-t_{\rm sc}\delta_{\langle i,j \rangle}\right]c_{j\sigma}\\
&+\sum_{i}\Delta_{\rm sc}\big(c^{\dagger}_{i\uparrow}c^{\dagger}_{i\downarrow}+c_{i\downarrow}c_{i\uparrow} \big)\,,\\
H_{\Gamma}&=-\Gamma\sum_{x,i,\sigma}c^{\dagger}_{i\sigma}d_{x\sigma}\delta_{i_x,x}\delta_{i_y,\frac{L_y+1}{2}} + \text{H.c.}
\end{split}
\end{equation*}
Here, $H_{\rm nw}$ represents the 1D NW Hamiltonian, where the operator $d_{x,\sigma}$ destroys an electron with spin $\sigma$ at site $x$ in the NW of length $L_{\rm nw}$, $\sigma^i$ is the $i$th Pauli matrix in spin space, $\varepsilon_{\rm nw}=\left( 2t_{\rm nw}-\mu_{\rm nw}\right)$ is the NW onsite energy, $\mu_{\rm nw}$ is the NW chemical potential, $t_{\rm nw}$ is the nearest-neighbor NW hopping strength, $B$ is the Zeeman interaction that results from the external magnetic field along the NW, and $\alpha_{\rm nw}$ is the Rashba SOC hopping strength. Moreover, $H_{\rm sc}$ represents the Hamiltonian of the 2D SC with length $L_x$ and width $L_y$, where $c_{i,\sigma}$ destroys an electron with spin $\sigma$ at site $i=(i_x,i_y)$ in the SC, $\varepsilon_{\rm sc}=\left( 4t_{\rm sc}-\mu_{\rm sc}\right)$ is the SC onsite energy, $\delta_{\langle i,j \rangle}$ restricts the hopping to nearest neighbors, and $\Delta_{\rm sc}$ is the spin-singlet $s$-wave (i.e.~onsite) order parameter. Last, $H_{\Gamma}$ denotes the coupling between the NW and SC with coupling strength $\Gamma \leq t_{\rm sc}$, where, as seen in Fig.\,\ref{fig1}(a), the NW is positioned at the middle of the 2D SC.
We solve the full NW-SC system in Eq.~\eqref{eq:Hamilt} within the Bogoliubov-de-Gennes (BdG) formalism~\cite{DeGennes} for experimentally realistic parameters. Since we are mainly interested in the low-energy states, we take advantage of the sparseness of the Hamiltonian in Eq.\,(\ref{eq:Hamilt}) and carry out a partial diagonalization using the Arnoldi iteration method~\cite{arnoldi1951principle} to extract the low-energy spectrum. We have further verified that self-consistent calculations of the superconducting order parameter do not modify the results presented here~\cite{Awoga2017disorder,theiler2018majorana,Mashkoori2019Majorana,Awoga2019Supercurrent}. The parameters we consider in the SC are $t_{\rm sc}=15$meV and $|\Delta_{\rm sc}|=0.1t_{\rm sc}$, which is in the range of experimentally measured values for NbTiN~\cite{Lutchyn2018Majorana}. For the NW we use $t_{\rm nw}=4t_{\rm sc}$, consistent with earlier works~\cite{Reeg2018Zero,Awoga2019Supercurrent} and accounting for the difference in effective masses and the lattice constant mismatch between the NW and SC. For the NW we also use $\mu_{\rm nw}=0.02t_{\rm nw}$, and $\alpha_{\rm nw}=0.05t_{\rm nw}$. The SOC strength is then $ \alpha_{\rm R}=2a\alpha_{\rm nw}$ giving $\alpha_{\rm R}=0.9$\,eV\AA, when using a lattice constant $a=1.5$\,nm, which is a large value but in line with reports for InSb and InAs NWs~\cite{Lutchyn2018Majorana}. We further consider a NW of length $L_{\rm nw}=1000a=1.5$\,$\mu$m, again realistic for experiments. The length of the SC is taken to be substantially longer than the NW to avoid boundary effects from the SC, while we usually vary the width of the SC. For the setup in Fig.~\ref{fig1}(b) the NW is partly left uncovered by the SC to simulate a superconductor-normal state (SN) junction, where we keep the N part $L_{\rm N} = 4a$ long.
In what follows, all energies are given in units of $t_{\rm sc}$ and lengths in units of the lattice constant $a$.
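To make this numerical procedure concrete, the sketch below assembles a sparse BdG matrix and extracts its lowest levels with SciPy's ARPACK-based shift-invert routine. For brevity, the sketch keeps only the NW block of Eq.~\eqref{eq:Hamilt}, with a pairing amplitude $\Delta$ placed directly in the wire as an effective stand-in for the proximity effect; all parameter values are illustrative and not those of the full NW-SC model.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

s0 = sp.identity(2, dtype=complex, format='csr')
sx = sp.csr_matrix(np.array([[0, 1], [1, 0]], dtype=complex))
sy = sp.csr_matrix(np.array([[0, -1j], [1j, 0]]))

def nw_bdg(L, t=1.0, mu=0.4, alpha=0.5, B=0.0, Delta=0.3):
    """Sparse BdG matrix of an L-site Rashba wire with onsite pairing Delta."""
    shift = sp.diags([np.ones(L - 1)], [1])          # connects site x to x+1
    onsite = sp.kron(sp.identity(L), (2*t - mu)*s0 + B*sx)
    hop = sp.kron(shift, -t*s0 + 1j*alpha*sy)
    h = onsite + hop + hop.conj().T                  # normal-state block
    D = Delta*sp.kron(sp.identity(L), 1j*sy)         # singlet pairing block
    return sp.bmat([[h, D], [D.conj().T, -h.conj()]], format='csc')

# The bulk gap of this toy wire closes at B = sqrt(mu^2 + Delta^2) = 0.5,
# so B = 0.8 lies in the topological phase.
H = nw_bdg(L=300, B=0.8)
E = eigsh(H, k=4, sigma=0, which='LM', return_eigenvectors=False)
print(np.sort(np.abs(E)))  # two near-zero Majorana levels below a finite gap
```

For fields above the gap-closing value the two lowest levels collapse to (near) zero energy, signaling the Majorana pair, while in the trivial phase the spectrum remains fully gapped.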
The NW-SC system, modeled by Eq.~\eqref{eq:Hamilt}, is expected to enter into a topological phase, with MBSs at the ends of the NW, for Zeeman fields $B$ above a critical value $B_{\rm c}$, namely, $B>B_{\rm c}$, see e.g.~\cite{Aguadoreview17}. Here, all the ingredients, SOC, superconductivity, and a Zeeman field, are crucial to reach the topological phase. Of particular importance is the proximity-induced superconductivity in the NW, characterized by the induced energy gap $\Delta_{{\rm ind}}$, which is effectively determined by the lowest energy level, i.e.~closest to zero, in the full NW-SC spectrum,
\begin{equation}
\label{Deltain}
\Delta_{{\rm ind}} = \begin{cases}
|E_0|, & B < B_{\rm c}\\
|E_1|, & B > B_{\rm c}
\end{cases}\,,
\end{equation}
where $E_{0 (1)}$ is the lowest (first excited) energy level. The first excited energy level is needed in the topological phase, $B>B_{\rm c}$, since there $E_0$ corresponds to the energy of the MBSs, which appear at or close to zero.
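In practice, extracting $\Delta_{\rm ind}$ from a computed BdG spectrum amounts to a few lines; the helper below implements Eq.~\eqref{Deltain}, with the function name and the synthetic particle-hole symmetric spectrum chosen purely for illustration.

```python
import numpy as np

def induced_gap(spectrum, B, B_c):
    """Delta_ind per Eq. (2): the lowest level |E_0| below the critical
    field B_c, and the first excited level |E_1| above it."""
    # BdG spectra come in +/-E pairs; keep one member of each pair.
    levels = np.sort(np.abs(np.asarray(spectrum)))[::2]
    return levels[0] if B < B_c else levels[1]

# Synthetic +/- symmetric spectrum: E_0 = 0.01 (would-be MBS), E_1 = 0.20
E = np.array([-0.50, -0.20, -0.01, 0.01, 0.20, 0.50])
print(induced_gap(E, B=0.3, B_c=0.5))  # 0.01: trivial phase, lowest level
print(induced_gap(E, B=0.7, B_c=0.5))  # 0.2: topological phase, first excited
```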
In order to visualize the behavior of $\Delta_{{\rm ind}}$, we present in Fig.~\ref{fig2} the dependence of $\Delta_{\rm ind}$ on $\Gamma$ for several SC widths $L_y$ and chemical potentials $\mu_{\rm sc}$ at $B=0$. We see that, although there is an appreciable sensitivity to these parameters, in general $\Delta_{{\rm ind}}\propto \Gamma$ at low $\Gamma$, while $\Delta_{{\rm ind}}$ shows a nonlinear, saturating behavior at larger $\Gamma$. This identifies two distinct regimes: we refer to the regime where $\Delta_{\rm ind}$ is linear in $\Gamma$ as the \emph{weak coupling regime}, and to the regime where $\Delta_{{\rm ind}}$ is nonlinear in $\Gamma$ as the \emph{strong coupling regime}. For our parameters the weak coupling regime is generally present when $0<\Gamma/t_{\rm sc} \le 0.3$. We therefore probe these two different regimes by fixing $\Gamma/t_{\rm sc}=0.2,\, 0.3$ for weak coupling and $\Gamma/t_{\rm sc}=0.7$ for strong coupling, see vertical dashed lines in Fig.~\ref{fig2}.
This definition of the weak and strong coupling regimes is also qualitatively consistent with earlier works \cite{Cole2015Effects,Stanescu2017Proximity}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{Fig2.png}
\caption{Induced energy gap in the NW, $\Delta_{\rm ind}$, as a function of NW-SC coupling strength $\Gamma$ for several different values of $L_y$ and $\mu_{\rm sc}$ at zero Zeeman field $B=0$. Vertical dashed lines denote the weak, $\Gamma/t_{\rm sc}=0.2, 0.3$, and strong, $\Gamma/t_{\rm sc}= 0.7$, coupling values used throughout this work; here $\mu_{\rm nw}/t_{\rm nw}=0.02$. For remaining parameters, see main text.}
\label{fig2}
\end{figure}
The strong coupling regime has gathered a large amount of attention lately mainly because it allows for large induced gaps, of similar size as in the parent SC, at $B=0$, see both Fig.~\ref{fig2} and e.g.~\cite{Deng16,gul2018ballistic}. However, as we discussed in the introduction, the strong coupling regime also brings unwanted effects such as renormalization of the normal-state NW parameters and the formation of TZES that can easily obscure an unambiguous identification of MBSs, see e.g.\,\cite{Awoga2019Supercurrent}.
\begin{figure*}[!t]
\centering
\includegraphics[width=1\textwidth]{Fig3.pdf}
\caption{(a) Topological phase diagram calculated using the Wilson loop $W$ as a function of coupling $\Gamma$ and Zeeman field $B$ for several different SC widths, using $\mu_{\rm nw}/t_{\rm nw}=0.02$ and $\mu_{\rm sc}/ t_{\rm sc}=0.5$. The curves denote the TPT, i.e.,~$B_{\rm c}$, for each $L_y$. Vertical lines denote weak (two leftmost lines) and strong (rightmost line) coupling.
(b) Critical field $B_{\rm c}$ as a function of $\mu_{\rm sc}$ for a thin SC, $L_y=11a$, at weak (red, yellow) and strong (black) coupling. (c,d) $W$ as a function of $\mu_{\rm nw}$ and $\mu_{\rm sc}$ for $L_y/a=11$, (purple curve in (a)), for weak coupling (c) and strong coupling (d).}
\label{fig3}
\end{figure*}
\section{Phase diagram}\label{sec:PhaseDiag}
As explained in the previous section, the setup modeled by Eq.~\eqref{eq:Hamilt} realizes a topological phase for large enough Zeeman fields with MBSs located at the ends of the NW. To proceed, we first analyze how the phase diagram, which shows the appearance of trivial and topological phases, depends on properties of the SC, in particular $L_y$ and $\mu_{\rm sc}$. To characterize the phase diagram, we calculate the topological invariant using the Wilson loop $W$\,\cite{PhysRevB.89.155114,PhysRevB.95.241101,Mashkoori2019Majorana,Mashkoori2020Identification}. For this purpose we use the setup in Fig.~\ref{fig1}(a), and also assume that $L_x$ and $L_{\rm nw}$ are infinitely long, such that the wave vector along $x$, $k_x$, is a good quantum number. Then $W$ is obtained as~\cite{PhysRevB.89.155114,PhysRevB.95.241101},
\begin{equation}\label{eq:TopInv}
\begin{split}
W &= \det\left[\hat{U}_{\rm o}(-\pi)^\dagger \hat{U}_{\rm o}(-\pi+(n-1)\delta k_x) \right. \\
&\left. \times\prod_{i=1}^{n-2}\lbrace \hat{U}_{\rm o}(-\pi+(i+1)\delta k_x)^\dagger \hat{U}_{\rm o}(-\pi+i\delta k_x) \rbrace \right. \\
&\left. \times \hat{U}_{\rm o}(-\pi+\delta k_x)^\dagger \hat{U}_{\rm o}(-\pi) \right] \\
&= e^{i\gamma}\,,
\end{split}
\end{equation}
where $W=+1(-1)$ dictates that the system is in the topologically trivial (nontrivial) phase. Here, $\hat{U}_{\rm o}$ is the matrix of occupied states and a function of $k_x$, $\delta k_x$ the discretization of $k_x$, $n$ the number of discretized points, and $\gamma$ the Berry phase. Note that $\hat{U}_{\rm o}(-\pi)$ is used instead of $\hat{U}_{\rm o}(\pi)$, since the wave functions are the same at the boundaries of the Brillouin zone and this trick makes $W$ gauge invariant. The quantity $W$ in Eq.~\eqref{eq:TopInv} provides the same information as the Pfaffian but is simpler to calculate, see \cite{Kobialka2021Majorana,Maiani2021Topological} for related Pfaffian studies.
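To illustrate Eq.~\eqref{eq:TopInv} concretely, the sketch below evaluates $W$ for the standard effective Rashba-wire BdG Bloch Hamiltonian, rather than the full NW-SC model: the two negative-energy (occupied) bands at each discretized $k_x$ form $\hat{U}_{\rm o}$, and the product of overlap determinants converges to $\pm 1$. Parameter values are illustrative.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def h_normal(k, t, mu, alpha, B):
    """Normal-state Bloch Hamiltonian of an effective Rashba wire."""
    return (2*t*(1 - np.cos(k)) - mu)*s0 + B*sx + alpha*np.sin(k)*sy

def h_bloch(k, t=1.0, mu=0.0, alpha=0.5, B=0.0, Delta=0.3):
    """4x4 BdG Bloch Hamiltonian with singlet s-wave pairing Delta."""
    D = 1j*Delta*sy
    return np.block([[h_normal(k, t, mu, alpha, B), D],
                     [D.conj().T, -h_normal(-k, t, mu, alpha, B).T]])

def wilson_loop(n=400, **pars):
    """Closed-loop product of overlap determinants of the occupied bands."""
    ks = -np.pi + 2*np.pi*np.arange(n)/n
    U = [np.linalg.eigh(h_bloch(k, **pars))[1][:, :2] for k in ks]
    W = np.linalg.det(U[0].conj().T @ U[-1])   # closes the loop at the zone edge
    for i in range(n - 1):
        W *= np.linalg.det(U[i + 1].conj().T @ U[i])
    return W

print(wilson_loop(B=0.1).real)  # close to +1: trivial
print(wilson_loop(B=0.6).real)  # close to -1: topological
```

With these (assumed) parameters the bulk gap closes at $B=\sqrt{\mu^2+\Delta^2}=0.3$, and the computed $W$ flips sign from $+1$ to $-1$ across this field; the arbitrary phase of each eigenvector cancels between consecutive overlaps, making the product gauge invariant.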
In Fig.\,\ref{fig3}(a) we plot $W$ as a function of $B$ and $\Gamma$ for several different values of $L_y$ and fixed $\mu_{\rm sc}/t_{\rm sc}=0.5$, where each curve represents the topological phase transition (TPT) separating the trivial and topological regimes. This TPT corresponds to a critical Zeeman field denoted $B_{\rm c}$. The general observation is that the TPT curves exhibit a strong dependence on $L_y$ when the SC is not in the bulk regime. When reaching the bulk regime, $L_y/a\geq 41$ in our case, this dependence saturates and the TPT curves appear superimposed.
Most importantly, each TPT curve strongly depends on the values of $\Gamma$, where larger Zeeman fields are needed to reach the topological phase when $\Gamma$ is large, whereas notably lower Zeeman fields are enough at weak $\Gamma$. There is thus an interplay between the size of the SC and the coupling to the NW which strongly affect the TPT.
This effect can be understood to arise from an effective energy shift induced in the NW when the coupling $\Gamma$ is strong, which both renormalizes the NW chemical potential and makes it strongly dependent on $L_y$~\cite{Reeg2017Finite,Awoga2019Supercurrent}. This, in turn, moves the TPT to higher $B$ values, possibly even making it difficult to reach the topological phase without destroying superconductivity at strong coupling. In contrast, the renormalization of the chemical potential in the weak coupling regime is negligibly small and, hence, the TPT does not considerably depend on $L_y$ in this regime. Moreover, as noted above, the weak coupling regime requires relatively small Zeeman fields to reach the TPT for essentially all reasonable widths of the SC.
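This interpretation can be made quantitative in the effective single-wire limit, where the critical field takes the well-known form \cite{PhysRevLett.105.077001,PhysRevLett.105.177002}
\begin{equation*}
B_{\rm c}=\sqrt{\Delta_{\rm ind}^{2}+\tilde{\mu}_{\rm nw}^{2}}\,,
\end{equation*}
where $\tilde{\mu}_{\rm nw}$ denotes the (possibly renormalized) NW chemical potential. A coupling- and $L_y$-dependent shift of $\tilde{\mu}_{\rm nw}$ at strong coupling therefore directly pushes $B_{\rm c}$ to larger values, whereas at weak coupling $\tilde{\mu}_{\rm nw}\approx\mu_{\rm nw}$ and $B_{\rm c}$ remains small.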
As elucidated above, the TPT separating the trivial and topological regimes is highly dependent on the coupling strength and SC thickness. Given fixed coupling and thickness, which is the realistic setup, we next explore the possibility to control the TPT by tuning the chemical potentials in the NW and SC, $\mu_{\rm nw}$ and $\mu_{\rm sc}$, which are experimentally tunable by means of voltage gates. In Fig.\,\ref{fig3}(b) we present the critical Zeeman fields $B_{\rm c}$ needed to reach the TPT as a function of $\mu_{\rm sc}$ in the weak, $\Gamma/t_{\rm sc}=0.2,\, 0.3$, and strong, $\Gamma/t_{\rm sc}=0.7$, coupling regimes, for a thin SC with $L_y/a=11$ and $\mu_{\rm nw}=0.02t_{\rm nw}$. The focus on a thin SC is motivated by the thin SCs currently employed in several experiments, see e.g.~\cite{Deng16,gul2018ballistic}. For completeness, we also provide the corresponding results for a bulk SC with $L_y/a=41$ in Appendix \ref{Appendix}.
We observe that the TPT in Fig.\,\ref{fig3}(b) is largely insensitive to $\mu_{\rm sc}$ at weak coupling (red and yellow), but very sensitive at strong coupling (black). This result is qualitatively unchanged for a bulk SC, see Appendix \ref{Appendix}.
Moreover, the critical fields $B_{\rm c}$ are much larger for strong coupling compared to weak coupling, implying that $B_{\rm c}$ could even be experimentally unreachable for some values of $\mu_{\rm sc}$, as superconductivity might be destroyed before reaching $B_{\rm c}$. In stark contrast, in the weak coupling regime, a low Zeeman field is enough for the system to reach $B_{\rm c}$ and thus become topological, highlighting again a clear advantage for weakly coupled hybrid systems.
Having seen that the weak coupling regime needs lower Zeeman fields to reach the topological phase, we finally present in Fig.\,\ref{fig3}(c,d) the phase diagram, calculated using $W$, as a function of $\mu_{\rm nw}$ and $\mu_{\rm sc}$ for $L_y/a=11$ in the weak and strong coupling regimes, respectively, and for fixed but different $B$. In the weak coupling case, Fig.\,\ref{fig3}(c), the topological phase emerges at small NW doping and is notably largely insensitive to the SC doping. The latter is a result of the negligible renormalization of the NW chemical potential at weak coupling.
For strong coupling, a substantially larger $B$ is needed to produce a phase diagram with a reasonably sized topological region, see Fig.\,\ref{fig3}(d), and even then there is a strong dependence on the properties of the SC. We have verified that the phase diagrams remain qualitatively the same when changing $B$ or $L_y$ or both.
To summarize the results above, the topological phase in strongly coupled NW-SC hybrid structures is very sensitive to the properties of the SC and notably also needs strong Zeeman fields, which can easily be detrimental for superconductivity. In stark contrast, the topological phase in the weak coupling regime is not sensitive to the properties of the SC and instead mainly requires that the NW is lightly doped, which opens a promising route for low Zeeman field topological superconductivity and MBSs.
\section{Low-energy spectrum and induced gap}
\label{sec:ZeemanDep}
Having established that a sizable topological phase regime emerges at low Zeeman fields in the weak coupling regime of NW-SC hybrid structures, we next investigate the possibility to produce appreciable induced energy gaps, $\Delta_{\rm ind}$, defined in Eq.~\eqref{Deltain}. The need for a large induced gap in the topological phase, often simply called the topological gap, is motivated by the fact that this gap separates the discrete MBSs from the quasi-continuum, thus protecting the operation of MBSs against quasiparticle poisoning, see e.g.~\cite{Rainis2012Majorana,Higginbotham2015Parity}.
The induced gap is naively set by the proximity-induced superconductivity in the NW. As a consequence, stronger coupling between NW and SC is expected to generate a larger energy gap. However, as we established in the previous section, strong coupling also requires larger Zeeman fields to reach the topological regime and additionally renormalizes the properties of the NW, and it is a priori not clear how these effects influence the topological gap. In this section, we therefore investigate the induced gap for both strong and weak coupling across the TPT and into the topological phase.
\begin{figure}[!t]
\centering
\includegraphics[width=0.49\textwidth]{Fig4.pdf}
\caption{(a-c) Low-energy spectrum as a function of normalized Zeeman field, $B/B_{\rm c}$, for weak and strong coupling $\Gamma$ at different SC chemical potentials $\mu_{\rm sc}$ for the geometry depicted in Fig.\,\ref{fig1}(a). (d-f) Induced gap, $\Delta_{\rm ind}$, extracted from (a-c) using Eq.~\eqref{Deltain} as a function of $\mu_{\rm sc}$ for weak and strong coupling $\Gamma$ at different Zeeman fields $B$. Here $ L_y/a=11$ and $\mu_{\rm nw}/t_{\rm nw}=0.02$.
}
\label{fig4}
\end{figure}
We start by obtaining the low-energy spectrum in the setup schematically shown in Fig.~\ref{fig1}(a) with both SC and NW considered finite and the NW terminated within the SC to avoid boundary effects from the SC. In Figs.~\ref{fig4}(a-c) we plot the low-energy spectrum as a function of Zeeman field $B$ (renormalized by $B_{\rm c}$) both in the weak (red, yellow curves) and strong (black) coupling regimes for several different values of $\mu_{\rm sc}$. Note that only the lowest positive energy levels are shown for visualization purposes. In general, for all $\mu_{\rm sc}$ and $\Gamma$, a substantial induced energy gap is opened at $B=0$. In this zero field limit, the induced gap is particularly large in the strong coupling regime and it represents proximity-induced superconductivity in the NW with effective order parameter $\Delta_{\rm ind}$.
By increasing the Zeeman field, $\Delta_{\rm ind}$ overall becomes smaller due to Zeeman depairing and eventually even vanishes at $B=B_{\rm c}$ (black arrows), since the bulk gap necessarily closes at the TPT. Beyond the TPT, the induced gap $\Delta_{\rm ind}$, now the topological gap, again acquires a finite value in the topological phase, but notably it is now the energy gap separating the MBSs from the first excited state.
As a side note, we have verified that the MBSs spatially reside in the NW (SC) in the weak (strong) coupling regime, thus conditioning the regions where they have to be probed, for details see Appendix \ref{AppA0}.
What is most remarkable in Figs.~\ref{fig4}(a-c) is that the topological gap is generally very similar in the weak and strong coupling regimes. In particular, the topological gap is not much smaller, but instead sometimes even larger, at weak coupling compared to strong coupling. This is very different from the behavior at low Zeeman fields, where strong coupling always gives the larger gap.
Moreover, the topological gap is also varying with $\mu_{\rm sc}$, which enables an experimental tunable level of control.
The surprising similarity in topological gap sizes in the weak and strong coupling regimes can be explained by an interplay of effects. First of all, strong coupling should generate stronger induced superconductivity in the NW, which should naively give a larger induced gap compared to weakly coupled structures. But strong coupling also renormalizes the NW normal-state properties, in particular it reduces the SOC strength, see e.g.~\cite{Awoga2019Supercurrent}, and the topological gap is known to be proportional to the SOC~\cite{Alicea:RPP12}. Thus, the topological gap is directly reduced by this SOC renormalization, always present in strongly coupled structures. On the other hand, at weak coupling, the SOC is not renormalized (or only slightly renormalized in the worst case), resulting in a sizable topological gap, despite the initially smaller $\Delta_{\rm ind}$ at $B=0$ in this regime. Moreover, strong coupling also requires larger Zeeman fields to reach the TPT, which further suppresses the induced gap compared to the weak coupling regime. Taken together, we find that the interplay of these effects results in very similar induced gaps in the topological phase for weakly and strongly coupled NW-SC hybrid structures.
To further elucidate the behavior of the induced gap, $\Delta_{\rm ind}$, and particularly its tunability, we plot in Fig.~\ref{fig4}(d-f) $\Delta_{\rm ind}$ as a function of $\mu_{\rm sc}$ for both weak and strong coupling and at several different $B$. At $B=0$, $\Delta_{{\rm ind}}$ is substantially larger in the strong coupling regime compared to weak coupling for all $\mu_{\rm sc}$, albeit hole doping does not favor the proximity effect as much and generates a smaller $\Delta_{{\rm ind}}$, see Fig.~\ref{fig4}(d). As the Zeeman field increases but still $B<B_{\rm c}$, $\Delta_{{\rm ind}}$ is reduced due to the detrimental effect of magnetism on superconductivity, see Fig.~\ref{fig4}(e). This suppression of $\Delta_{{\rm ind}}$ is larger in the strong coupling regime for a fixed ratio of $B/B_{\rm c}$, as $B_{\rm c}$ is then also larger. In the topological regime, $B>B_{\rm c}$, the situation is notably different from that at zero field: overall, the induced gap $\Delta_{{\rm ind}}$ is similar in the weakly and strongly coupled regimes. We also observe that, by tuning $\mu_{\rm sc}$, $\Delta_{{\rm ind}}$ can easily be even larger in a weakly coupled NW-SC hybrid structure than in the strongly coupled regime. This is both a surprising and highly useful result, as it implies that weakly coupled NW-SC hybrid structures can achieve a similar or even larger topological gap than strongly coupled structures, and that the gap is also tunable. We have verified that these findings remain robust for a larger bulk-like SC (see Appendix \ref{Appendix}) and also in the presence of weak to moderate scalar disorder in the superconductor (results to be published elsewhere).
In summary, weakly coupled NW-SC hybrid structures can achieve robust topological superconductivity with a large topological gap and stable MBSs. In contrast, the large induced gap in the trivial phase of strongly coupled NW-SC hybrid structures does not translate into a large induced gap in the topological phase, due to the combined detrimental effects of large magnetic fields and significant reduction of the SOC.
\section{Trivial zero-energy states}
\label{sec:NoTZES}
Hitherto we have focused on the setup in Fig.~\ref{fig1}(a), where the whole NW is in contact with the SC. As a final part, we study the setup presented in Fig.~\ref{fig1}(b), where part of the NW is left uncovered by the SC, thus forming an effective SN junction. This type of junction is experimentally relevant in transport experiments but has been shown to host TZES in the strong coupling regime, with properties similar to those of MBSs, see e.g.\,\cite{Reeg2017Finite,Awoga2019Supercurrent}. Here we are interested in exploring whether TZES emerge, or not, in SN junctions in weakly coupled NW-SC hybrid structures. To address this question, we plot in Fig.~\ref{fig5} the low-energy spectrum obtained by solving Eq.~\eqref{eq:Hamilt} for the setup in Fig.~\ref{fig1}(b) as a function of coupling, SC chemical potential, and Zeeman field.
To start, we display in Fig.~\ref{fig5}(a,b) the low-energy spectrum as a function of the Zeeman field for two different values of $\Gamma$. In the case of strong coupling, Fig.~\ref{fig5}(a), the low-energy spectrum has a finite induced gap at $B=0$, as expected, but this gap is then reduced as $B$ increases, and TZES form already for $B<B_{\rm c}$, well before the TPT. After the TPT, the system hosts a pair of MBSs at zero energy, which exhibit similar spectral properties as the TZES. The appearance of the TZES is a consequence of the renormalization of the NW chemical potential in the S part of the NW. Because the NW chemical potential in the uncoupled N region is left unchanged, the full NW develops an effective potential that resembles that of a quantum dot forming in the N part of the junction. This quantum dot region favors the formation of bound states, which can easily appear at zero energy. The quantum-dot TZES are also located at the wire end point, just like the topologically protected MBSs, and, therefore, they are very challenging to distinguish from MBSs.
In stark contrast to the strong coupling regime, we find for weak coupling that the SN junction does not host any TZES below $B_{\rm c}$, but only MBSs for $B>B_{\rm c}$, see Fig.~\ref{fig5}(b). Following the same argument as above, this stems from the fact that the NW chemical potential profile in the weak coupling regime is not overly affected by the SC, thereby avoiding the creation of an unwanted quantum dot with TZES.
\begin{figure}[!t]
\centering
\includegraphics[width=0.49\textwidth]{Fig5.pdf}
\caption{(a,b) Low-energy spectrum as a function of the Zeeman field $B$ for strong (a) and weak (b) coupling $\Gamma$ at fixed $\mu_{\rm sc}/t_{\rm sc} = 0.5$ for the geometry depicted in Fig.\,\ref{fig1}(b). Points 1 and 2 correspond to the same points in (e). (c,d) Low-energy spectrum as a function of the SC chemical potential $\mu_{\rm sc}$ for strong (c) and weak (d) coupling at fixed magnetic field $B/t_{\rm sc}=0.5$. (e) Lowest positive energy plotted in a color scale as a function of $\mu_{\rm sc}$ and $\Gamma$ for fixed $B/t_{\rm sc} = 0.5$. Dashed vertical lines indicate weak and strong coupling, while the dashed green curve denotes the TPT, with the topological phase hosting MBSs to its left. The trivial phase hosts TZES between the green (TPT) and dashed red curves. Here $ L_y/a=11$ and $\mu_{\rm nw}/t_{\rm nw}=0.02$.
}
\label{fig5}
\end{figure}
The results above can be further confirmed by obtaining the low-energy spectrum as a function of the SC chemical potential in the weak and strong coupling regimes at a fixed magnetic field, shown in Fig.~\ref{fig5}(c,d). While the strong coupling regime allows for both TZES and topological MBSs, indicated by red and green arrows in (c), the weak coupling regime interestingly only permits the formation of MBSs in (d).
The robust emergence of the TZES over a wide range of parameters at strong coupling is clearly a property that might challenge experimental interpretation. To further illustrate this issue, we plot in color scale in Fig.~\ref{fig5}(e) the lowest positive energy level as a function of $\mu_{\rm sc}$ and $\Gamma$ at fixed magnetic field. Here, the TPT is denoted by a dashed green curve, obtained by calculating the Wilson loop in Eq.~\eqref{eq:TopInv}. We have also checked that each point on this curve coincides with a bulk gap closing in our real space calculations, as it should. The left side of the TPT curve corresponds to the topological phase with $E_0$ being the energy of the MBSs, while the right side is the trivial phase, which hosts TZES within the region enclosed by the TPT and the dashed red curve. The most relevant feature of this plot is the very large region with TZES for all larger couplings $\Gamma$, which are energy-wise impossible to distinguish from MBSs. In contrast, in the weak coupling regime, TZESs do not even emerge and this complication is altogether avoided. We have verified that this conclusion also holds in the presence of weak to moderate scalar disorder.
\section{Conclusions}
\label{sec:concl}
In this work we have studied the realization of topological superconductivity in a nanowire-superconductor hybrid structure in the presence of an external magnetic field. We have shown that, when the coupling between nanowire and superconductor is strong, the topological phase transition point is very sensitive to the finite size of the superconductor and, importantly, requires strong magnetic fields to reach the topological phase, a situation that can easily be detrimental for superconductivity. In contrast, in the weak coupling regime, we have found that the topological transition point is largely insensitive to the finite size of the superconductor and can also be reached by relatively small magnetic fields.
Moreover, and importantly for practical applicability, the induced energy gap in the topological phase in the weakly coupled regime easily acquires values as large as in the strong coupling regime. This is a result of the induced gap being heavily suppressed in the strong coupling regime, due to both the renormalization of the nanowire spin-orbit coupling and the larger magnetic fields needed to reach the topological phase. As a consequence, it is not necessary to use a system with strong coupling between nanowire and superconductor to achieve a large topological gap; the weak coupling regime is in fact more advantageous, as it has a large and tunable topological gap, which is of great importance for the topological protection of Majorana bound states.
Furthermore, we have also demonstrated that the weak coupling regime does not allow for the formation of topologically trivial zero-energy states, easily present in strongly coupled superconductor-semiconductor hybrid structures. This stems from the fact that the nanowire chemical potential does not get renormalized in the weak coupling regime, leading to a homogeneous potential profile in the wire, which cannot accommodate trivial zero-energy states.
Our findings thus show clear and multiple advantages of the weak coupling regime for the realization of low Zeeman field topological superconductivity and Majorana bound states in semiconductor-superconductor hybrid structures.
\begin{acknowledgements}
We acknowledge financial support from the Swedish Research Council (Vetenskapsr\aa{det} Grants No.~2018-03488 and 2021-04121) and the Knut and Alice Wallenberg Foundation through the Wallenberg Academy Fellows program, as well as the EU-COST Action CA-16218 Nanocohybri. Simulations were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at the Uppsala Multidisciplinary Center for Advanced Computational Science (UPPMAX), partially funded by the Swedish Research Council through Grant No. 2018-05973.
\end{acknowledgements}
\section{Introduction}
\label{sec:intro}
\setcounter{equation}{0}
The unification of the standard model (SM) gauge groups into a larger
group, like in $SU(5)$ grand unified theories (GUTs)
\cite{Georgi:1974sy, Georgi:1974yf, Buras:1977yy}, is an attractive
possibility of a new physics beyond the SM. One of the important
check points of GUTs is the gauge coupling unification which predicts
that the gauge coupling constants of the SM become equal at the
unification scale up to threshold corrections. It is well known that,
in the SM, there is no strong indication of the unification of the
gauge coupling constants. In supersymmetric (SUSY) models, on the
other hand, the situation changes because of the existence of the
superparticles as well as up- and down-type Higgses. In particular,
with the renormalization group equations (RGEs) of the minimal SUSY SM
(MSSM), three gauge coupling constants more-or-less meet at the GUT
scale $M_{\rm GUT}\sim 10^{16}\ {\rm GeV}$ if the mass scale of the
superparticles is $O(1-10)\ {\rm TeV}$ \cite{Dimopoulos:1981yj,
Einhorn:1981sx, Marciano:1981un, Amaldi:1991cn, Ellis:1990wk,
Langacker:1991an}.
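This approximate meeting of the couplings can be illustrated with a quick one-loop estimate. The sketch below runs the inverse couplings analytically with SM beta functions from $M_Z$ up to an assumed superparticle threshold (2 TeV, a purely illustrative choice) and with MSSM beta functions above it; the input values of $\alpha_a^{-1}(M_Z)$ are approximate, and two-loop and threshold effects are ignored.

```python
import math

M_Z, M_SUSY = 91.19, 2000.0        # GeV; the SUSY threshold is an illustrative choice
ALPHA_INV_MZ = [59.0, 29.6, 8.45]  # approximate (alpha_1^-1, alpha_2^-1, alpha_3^-1)
                                   # at M_Z, with GUT-normalized U(1)_Y
B_SM = [41 / 10, -19 / 6, -7]      # one-loop SM beta coefficients
B_MSSM = [33 / 5, 1, -3]           # one-loop MSSM beta coefficients

def run(alpha_inv, b, Q0, Q1):
    """One-loop running: alpha_a^-1(Q1) = alpha_a^-1(Q0) - b_a/(2 pi) ln(Q1/Q0)."""
    t = math.log(Q1 / Q0) / (2 * math.pi)
    return [ai - ba * t for ai, ba in zip(alpha_inv, b)]

def gut_scale():
    """Scale where alpha_1 = alpha_2 in the MSSM run above M_SUSY."""
    inv_S = run(ALPHA_INV_MZ, B_SM, M_Z, M_SUSY)
    t = (inv_S[0] - inv_S[1]) * 2 * math.pi / (B_MSSM[0] - B_MSSM[1])
    return M_SUSY * math.exp(t)

M_GUT = gut_scale()
a1, a2, a3 = run(run(ALPHA_INV_MZ, B_SM, M_Z, M_SUSY), B_MSSM, M_SUSY, M_GUT)
print(f"M_GUT ~ {M_GUT:.1e} GeV; alpha^-1 = ({a1:.1f}, {a2:.1f}, {a3:.1f})")
```

With these inputs $M_{\rm GUT}$ comes out near $10^{16}\ {\rm GeV}$ with $\alpha_3^{-1}$ within a few percent of the other two. Note also that a complete $SU(5)$ multiplet shifts all three $b_a$ by the same amount, so this crossing scale is unchanged at one loop.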
In simple GUT models based on $SU(5)$, quarks and leptons are embedded
into full multiplets of $SU(5)$. In particular, the right-handed
down-type quarks and the left-handed lepton doublets are embedded into
the anti-fundamental representations of $SU(5)$, resulting in the
unification of the down-type and charged lepton Yukawa coupling
constants. Notably, the unification of the Yukawa coupling
constants of the $b$-quark and the $\tau$-lepton is an interesting check point
of (SUSY) GUTs. Indeed, the $b$-$\tau$ unification based on SUSY GUTs
has been extensively studied in the literature \cite{Tobe:2003bc,
Baer:2012by, Baer:2012cp, Badziak:2012mm, Joshipura:2012sr,
Elor:2012ig, Baer:2012jp, Anandakrishnan:2012tj, Ajaib:2014ana,
Miller:2014jza, Gogoladze:2015tfa, Chigusa:2016ody}.
The renormalization group behaviors of the coupling constants are
sensitive to the existence of new particles. If full multiplets of
$SU(5)$ are added at a single scale, the unification of the gauge
coupling constants is unaffected (at least at the one-loop level),
although the values of the gauge coupling constants depend on the
particle content. Contrary to the gauge coupling unification, the
unification of the $b$ and $\tau$ Yukawa coupling constants is
expected to be significantly affected by new particles, because the
renormalization group runnings of Yukawa coupling constants are
strongly dependent on the behaviors of the gauge coupling constants.
Importantly, there are various candidates of such new particles, like
new fermions (as well as their superpartners) to realize Peccei-Quinn
symmetry \cite{Peccei:1977hh}, extra chiral superfields in gauge
mediation models \cite{Dine:1993yw, Dine:1994vc, Dine:1995ag}, and so
on. In addition, the existence of extra matters at the mass scale of the
superparticles is required if there exists a non-anomalous discrete
$R$-symmetry \cite{Kurosawa:2001iq, Asano:2011zt}. Thus, their
effects on the renormalization group runnings of the $b$ and $\tau$
Yukawa coupling constants are of great interest in particular from the
point of view of the Yukawa unification based on SUSY GUTs.
In this paper, we study the $b$-$\tau$ unification in SUSY models with
extra matters which have gauge quantum numbers under the SM gauge
group. We will see that the existence of the extra matters may
significantly affect the renormalization group running of the $b$ and
$\tau$ Yukawa coupling constants, and hence modify the $b$-$\tau$
unification. As we will discuss, the ratio of the $b$ to $\tau$
Yukawa coupling constants may become very close to $1$ if the extra
matters have Yukawa couplings with MSSM particles, even though, in a
large fraction of the MSSM parameter space, the Yukawa coupling
constant of $b$ is sizably smaller than that of $\tau$ at the GUT
scale. We will also see that the ratio of the $b$ to $\tau$ Yukawa
coupling constants at the GUT scale becomes smaller in models with
extra matters if they do not have Yukawa interactions.
\section{Model: Brief Overview}
\label{sec:model}
\setcounter{equation}{0}
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Classification & Notation ($SU(5)$) & Notation & $SU(5)$ & $G_{\rm SM}$
\\ \hline
\multirow{5}{*}{MSSM 3rd generation}
& \multirow{2}{*}{$\bar{F}$}
& $b_R^c$ & \multirow{2}{*}{$\bar{\textbf{5}}$}
& $(\bar{\textbf{3}}, \textbf{1}, \frac{1}{3})$
\\ \cline{3-3} \cline{5-5}
& & $l_L$ & & $(\textbf{1}, \textbf{2}, -\frac{1}{2})$
\\ \cline{2-5}
& \multirow{3}{*}{$T$}
& $q_L$ & \multirow{3}{*}{$\textbf{10}$}
& $(\textbf{3}, \textbf{2},\frac{1}{6})$
\\ \cline{3-3} \cline{5-5}
& & $t_R^c$ & & $(\bar{\textbf{3}}, \textbf{1}, -\frac{2}{3})$
\\ \cline{3-3} \cline{5-5}
& & $\tau_R^c$ & & $(\textbf{1}, \textbf{1}, 1)$
\\ \hline
\multirow{10}{*}{Extra Matters}
& \multirow{2}{*}{$\bar{F}'$}
& $D'$
& \multirow{2}{*}{$\bar{\textbf{5}}$}
& $(\bar{\textbf{3}}, \textbf{1}, \frac{1}{3})$
\\ \cline{3-3} \cline{5-5}
& & $L'$ & & $(\textbf{1}, \textbf{2}, -\frac{1}{2})$
\\ \cline{2-5}
& \multirow{3}{*}{$T'$}
& $Q'$ & \multirow{3}{*}{$\textbf{10}$}
& $(\textbf{3}, \textbf{2}, \frac{1}{6})$
\\ \cline{3-3} \cline{5-5}
& & $U'$ & & $(\bar{\textbf{3}}, \textbf{1}, -\frac{2}{3})$
\\ \cline{3-3} \cline{5-5}
& & $E'$ & & $(\textbf{1}, \textbf{1}, 1)$
\\ \cline{2-5}
& \multirow{2}{*}{$F'$}
& $\bar{D}'$ & \multirow{2}{*}{$\textbf{5}$}
& $(\textbf{3}, \textbf{1}, -\frac{1}{3})$
\\ \cline{3-3} \cline{5-5}
& & $\bar{L}'$ & & $(\textbf{1}, \textbf{2}, \frac{1}{2})$
\\ \cline{2-5}
& \multirow{3}{*}{$\bar{T}'$}
& $\bar{Q}'$ & \multirow{3}{*}{$\overline{\textbf{10}}$}
& $(\bar{\textbf{3}}, \textbf{2}, -\frac{1}{6})$
\\ \cline{3-3} \cline{5-5}
& & $\bar{U}'$ & & $(\textbf{3}, \textbf{1}, \frac{2}{3})$
\\ \cline{3-3} \cline{5-5}
& & $\bar{E}'$ & & $(\textbf{1}, \textbf{1}, -1)$
\\ \hline
\end{tabular}
\end{center}
\caption{Notations of chiral superfields used throughout this paper.
Each column denotes, from left to right, the classification between
MSSM particles and extra matters, the notation of the $SU(5)$
multiplet, the notation of the multiplet of the SM gauge group
$G_{\rm SM} \equiv SU(3)_C \times SU(2)_L \times U(1)_Y$, the
representation under the unified gauge group $SU(5)$, and the
representation under $G_{\rm SM}$.} \label{notations}
\end{table}
We first introduce the model we consider. The present analysis is
based on \cite{Chigusa:2016ody}, in which $b$-$\tau$ unification in
the MSSM was studied using appropriate effective theories. In the
present study, we include the effects of extra matters into the
analysis of \cite{Chigusa:2016ody}. We study the model with extra
matters which can be embedded into complete $SU(5)$ representations.
We concentrate on the case where the extra matters are embedded into
$\textbf{5}+\bar{\textbf{5}}$ or $\textbf{10}+\overline{\textbf{10}}$
representations. The notations for the chiral superfields are
summarized in Tab.\ \ref{notations}.
In the model of our interest, the superpotential is given
by\footnote{Hereafter, the $SU(3)_C$ and $SU(2)_L$ indices are omitted
for notational simplicity.}
\begin{align}
W = W_{\rm Yukawa} + \mu H_u H_d + W_{\bf 5} + W_{\bf 10},
\label{Superpot}
\end{align}
with $H_u$ and $H_d$ being the up- and down-type Higgses,
respectively. Here, $W_{\rm Yukawa}$ contains the Yukawa interactions, while
$W_{\bf 5}$ and $W_{\bf 10}$ are SUSY invariant mass terms for extra
matters. If the Yukawa interactions of the extra matters are
negligible, the relevant part of $W_{\rm Yukawa}$ is given by
\begin{align}
W_{\rm Yukawa} = y_b H_d q_L b_R^c + y_\tau H_d l_L \tau_R^c + y_t H_u
q_L t_R^c + \cdots .
\label{Wyukawa0}
\end{align}
When there exist Yukawa couplings involving extra matters, $W_{\rm
Yukawa}$ is modified as described below in Sec.\
\ref{sec:extra_Yukawa}. We consider $N_{\bf 5}$ pairs of ${\bf
5}+{\bf \bar{5}}$ (or $N_{\bf 10}$ pairs of ${\bf 10}+\overline{\bf
10}$), and hence
\begin{align}
W_{\bf 5} &= \sum_{i=1}^{N_{\bf 5}}
(\mu_D D_i' \bar{D_i}' + \mu_L L_i' \bar{L_i}'), \\
W_{\bf 10} &= \sum_{i=1}^{N_{\bf 10}}
(\mu_Q Q_i' \bar{Q_i}' + \mu_U U_i' \bar{U_i}' +
\mu_{E} E_i' \bar{E_i}'),
\end{align}
where $i$ is the label of each ${\bf 5}+{\bf \bar{5}}$ or ${\bf
10}+{\bf \overline{10}}$ pair. For simplicity, we assume that the
mass parameters for the extra matters in the same standard-model
representations are identical. Furthermore, the relevant part of the
soft SUSY breaking terms are given by
\begin{align}
{\cal L}^{\rm (soft)} &= {\cal L}^{\rm (soft)}_{\rm scalar\ mass} +
{\cal L}^{\rm (soft)}_{\rm trilinear} + \left( - B \mu H_u H_d
- \frac{1}{2} M_1 \tilde{B} \tilde{B} - \frac{1}{2} M_2 \tilde{W} \tilde{W}
- \frac{1}{2} M_3 \tilde{g} \tilde{g} + {\rm h.c.} \right) \notag \\
&\quad + {\cal L}^{\rm (soft)}_{\bf 5} + {\cal L}^{\rm (soft)}_{\bf 10} + \cdots ,
\label{soft}
\end{align}
where $\tilde{B}$, $\tilde{W}$ and $\tilde{g}$ are Bino, Wino and
gluino, respectively. (The ``tilde'' is used for SUSY particles.)
Here, ${\cal L}^{\rm (soft)}_{\rm scalar\ mass}$ contains the soft SUSY
breaking scalar mass terms. Furthermore, ${\cal L}^{\rm (soft)}_{\rm
trilinear}$ denotes trilinear couplings; when the trilinear couplings
of the extra matters are negligible, it is given by
\begin{align}
{\cal L}^{\rm (soft)}_{\rm trilinear} =
- A_b H_d \tilde{q}_L \tilde{b}_R^c
- A_\tau H_d \tilde{l}_L \tilde{\tau}_R^c
- A_t H_u \tilde{q}_L \tilde{t}_R^c + {\rm h.c.} + \cdots .
\end{align}
(The effects of the trilinear couplings of the extra matters are
discussed in Sec.\ \ref{sec:extra_Yukawa}.) ${\cal L}^{\rm
(soft)}_{\bf 5}$ and ${\cal L}^{\rm (soft)}_{\bf 10}$ contain
bilinear terms of extra matters,
\begin{align}
{\cal L}^{\rm (soft)}_{\bf 5} &= \sum_{i=1}^{N_{\bf 5}} (B_D \mu_D \tilde{D}_i'
\tilde{\bar{D}}_i' + B_L \mu_L \tilde{L}_i' \tilde{\bar{L}}_i') +
{\rm h.c.}\ , \\
{\cal L}^{\rm (soft)}_{\bf 10} &= \sum_{i=1}^{N_{\bf 10}} (B_Q \mu_Q \tilde{Q}_i'
\tilde{\bar{Q}}_i' + B_U \mu_U \tilde{U}_i' \tilde{\bar{U}}_i' +
B_{E} \mu_{E} \tilde{E}_i' \tilde{\bar{E}}_i') + {\rm h.c.}\ .
\end{align}
As for the SUSY invariant masses of extra matters, we assume that the
SUSY breaking bilinear terms are universal for extra matters with the
same standard-model representations.
Below the mass scale of the SUSY particles, the effective theory
contains only the SM particles (as well as the extra matter fermions
if the SUSY invariant masses of extra matters are smaller than the
masses of SUSY particles). We denote the Lagrangian of such an
effective theory as
\begin{align}
{\cal L} =&\ {\cal L}_{\rm kin}^{\rm (SM)}
+ \left( \tilde{y}_b \tilde{H}_{\rm SM} q_L b_R^c +
\tilde{y}_\tau \tilde{H}_{\rm SM} l_L \tau_R^c + \tilde{y}_t H_{\rm SM} q_L
t_R^c + {\rm h.c.} \right) \notag \\
&\quad - m_{H_{\rm SM}}^2 H_{\rm SM}^\dagger H_{\rm SM}
- \frac{\lambda}{2} (H_{\rm SM}^\dagger H_{\rm SM})^2
+ {\cal L}^{\rm (extra)} + {\cal L}^{\rm (\tilde{G})} + \cdots ,
\label{SMlagrangian}
\end{align}
where ${\cal L}_{\rm kin}^{\rm (SM)}$ is the kinetic terms of SM
fields and $H_{\rm SM}$ is the SM-like Higgs doublet (with
$\tilde{H}_{\rm SM}\equiv\epsilon H_{\rm SM}^{*}$). Yukawa coupling
constants for the effective theories below the mass scale of the SUSY
particles are denoted as $\tilde{y}_b$, $\tilde{y}_\tau$, and
$\tilde{y}_t$. Furthermore,\footnote{We use the same notations for the
${\rm SM_{ex}}$ fermions and the corresponding superfields.}
\begin{align}
{\cal L}^{\rm (extra)} &= {\cal L}_{\rm kin}^{\rm (extra)}
- \sum_{i=1}^{N_{\bf 5}} (\mu_D \bar{D}_i' D_i' + \mu_L \bar{L}_i'
L_i' + {\rm h.c.})
- \sum_{i=1}^{N_{\bf 10}} (\mu_Q \bar{Q}_i' Q_i' + \mu_U \bar{U}_i' U_i'
+ \mu_E \bar{E}_i' E_i' + {\rm h.c.}),
\end{align}
and
\begin{align}
{\cal L}^{(\tilde{G})} = {\cal L}_{\rm kin}^{(\tilde{G})}
- \frac{1}{2}
\left(
M_1 \tilde{B} \tilde{B} + M_2 \tilde{W} \tilde{W} + M_3 \tilde{g}
\tilde{g} + \mbox{h.c.}
\right),
\end{align}
where ${\cal L}_{\rm kin}^{\rm (extra)}$ and ${\cal L}_{\rm kin}^{\rm
(\tilde{G})}$ are kinetic terms of extra matter fermions and
gauginos, respectively. In Eq.\ \eqref{SMlagrangian}, ${\cal L}^{\rm
(extra)}$ and ${\cal L}^{\rm (\tilde{G})}$ should be omitted below
the mass scale of the extra matter fermions and gauginos,
respectively.
Some of the Lagrangian parameters are related to each other at the GUT
scale $M_{\rm GUT}$. (In our analysis, we define $M_{\rm GUT}$ as the
scale at which $U(1)_Y$ and $SU(2)_L$ gauge coupling constants become
equal.) For simplicity, we assume that the SUSY breaking scalar mass
parameters are degenerate at the GUT scale for scalars with same
$SU(5)$ representations. For the bilinear terms and the soft SUSY
breaking parameters, we neglect the threshold corrections at the GUT
scale. Then, we parametrize the Lagrangian parameters at $Q=M_{\rm
GUT}$ (with $Q$ being the renormalization scale) as
\begin{align}
& \mu_{D} (M_{\rm GUT}) = \mu_{L} (M_{\rm GUT}) \equiv \mu_{\bf 5}, \\
& \mu_{Q} (M_{\rm GUT}) = \mu_{U} (M_{\rm GUT}) = \mu_{E} (M_{\rm
GUT}) \equiv \mu_{\bf 10}, \\ &
m_{\tilde{D}}^2 (M_{\rm GUT}) = m_{\tilde{L}}^2 (M_{\rm GUT}) \equiv
m_{\bf \bar{5}}^2, \\ &
m_{\tilde{D'}}^2 (M_{\rm GUT}) = m_{\tilde{L'}}^2 (M_{\rm GUT}) =
m_{\bf \bar{5}}^2, \\ &
m_{\tilde{Q}}^2 (M_{\rm GUT}) = m_{\tilde{U}}^2 (M_{\rm GUT}) =
m_{\tilde{E}}^2 (M_{\rm GUT}) \equiv m_{\bf 10}^2, \\ &
m_{\tilde{Q'}}^2 (M_{\rm GUT}) = m_{\tilde{U'}}^2 (M_{\rm GUT}) =
m_{\tilde{E'}}^2 (M_{\rm GUT}) = m_{\bf 10}^2, \\ &
m_{H_u}^2 (M_{\rm GUT}) \equiv m_{H{\bf 5}}^2, \\ &
m_{H_d}^2 (M_{\rm GUT}) \equiv m_{H\bar{\bf 5}}^2, \\ &
A_b (M_{\rm GUT}) = A_\tau (M_{\rm GUT}) \equiv a_d,\\ &
A_t (M_{\rm GUT}) \equiv a_u, \\ &
B_D (M_{\rm GUT}) = B_L (M_{\rm GUT}) = m_{\bf \bar{5}}, \\ &
B_Q (M_{\rm GUT}) = B_U (M_{\rm GUT}) = B_E (M_{\rm GUT}) = m_{\bf 10},
\end{align}
where $m_{\tilde{D}}^2$, $m_{\tilde{L}}^2$, $m_{\tilde{Q}}^2$,
$m_{\tilde{U}}^2$ and $m_{\tilde{E}}^2$ are soft SUSY breaking mass
squared parameters of $\tilde{b}_R^c$, $\tilde{l}_L$, $\tilde{q}_L$,
$\tilde{t}_R^c$, and $\tilde{\tau}_R^c$, respectively. In addition,
we impose the same boundary condition for $m_{\tilde{\bar{\Phi}}'}^2$
as $m_{\tilde{\Phi}'}^2$ ($\Phi = D,L,Q,U,E$). For gaugino masses, we
adopt the simple GUT relation:
\begin{align}
M_1 (M_{\rm GUT}) = M_2 (M_{\rm GUT}) = M_3 (M_{\rm GUT}) \equiv m_{1/2}.
\end{align}
The mass spectrum of the SUSY particles (including those in the extra
matter sector) is determined by solving the RGEs with the boundary
conditions given above. Importantly, the masses of the scalars are
comparable to or larger than the gaugino masses in the model of our
interest because of the renormalization group effects. We also
comment here that the gaugino masses may be much smaller than the
scalar masses, as suggested by several models of SUSY breaking
\cite{Ibe:2006de, Ibe:2011aa, ArkaniHamed:2012gw}. Thus, we do not
exclude the possibility that the gaugino masses are hierarchically
smaller than the scalar masses.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|}
\hline
Effective theories & Particle content & RGEs \\ \hline
SM & SM particles & two-loop \\ \hline
${\rm SM_{ex}}$ & SM particles and extra fermions & two-loop \\ \hline
${\rm \tilde{G} SM}$ & SM particles and gauginos & two-loop \\ \hline
${\rm \tilde{G} SM_{ex}}$ & SM particles, gauginos and extra fermions & two-loop \\ \hline
MSSM & MSSM particles & two-loop \\ \hline
${\rm MSSM_{ex}}$ & MSSM particles and extra matters
& two-loop \\ \hline
\end{tabular}
\caption{Effective theories used in our analysis.}
\label{tab:effectiveth}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline
Effective theories & $M_{\rm ex} < M_{\tilde{G}} < M_S$ &
$M_{\tilde{G}} < M_{\rm ex} < M_S$ & $M_{\tilde{G}} < M_S <
M_{\rm ex}$ \\ \hline
SM & $m_t < Q < M_{\rm ex}$ & $m_t < Q < M_{\tilde{G}}$ & $m_t < Q <
M_{\tilde{G}}$ \\ \hline
${\rm SM_{ex}}$ & $M_{\rm
ex} < Q < M_{\tilde{G}}$ & & \\ \hline
${\rm \tilde{G} SM}$ & & $M_{\tilde{G}} < Q < M_{\rm ex}$ &
$M_{\tilde{G}} < Q < M_S$ \\ \hline
${\rm \tilde{G} SM_{ex}}$ & $M_{\tilde{G}} < Q < M_S$ & $M_{\rm ex} <
Q < M_S$ & \\ \hline
MSSM & & & $M_S < Q < M_{\rm ex}$ \\ \hline
${\rm MSSM_{ex}}$ & $M_S < Q < M_{\rm GUT}$ & $M_S < Q < M_{\rm GUT}$
& $M_{\rm ex} < Q < M_{\rm GUT}$ \\ \hline
\end{tabular}
\caption{Effective theories for each renormalization scale $Q$.}
\label{tab:effective}
\end{center}
\end{table}
In order to calculate the renormalization group running of coupling
constants from the weak scale to the GUT scale $M_{\rm GUT}$, we
consider several effective theories; particle contents of all the
effective theories used in our analysis are summarized in Tab.\
\ref{tab:effectiveth}. In the present analysis, there are three
important mass scales, i.e., the gaugino mass scale $M_{\tilde{G}}$,
the sfermion mass scale $M_S$, and the extra fermion mass scale
$M_{\rm ex}$, at which the effective theory changes from one to
another. As we have mentioned, the scalar masses are comparable to or
larger than the gaugino masses, and hence $M_{\tilde{G}}<M_S$. In
addition, the masses of the scalars in the extra matter sector have
two contributions, i.e., SUSY invariant mass parameters and soft SUSY
breaking masses (denoted as $\mu_\Phi$ and $m_{\tilde{\Phi}}^2$,
respectively); the scalar masses in the extra matter sector is
$\sim\sqrt{\mu_\Phi^2+m_{\tilde{\Phi}}^2}$. Thus, we assume that the
scalars in the extra matter sector decouple from the effective theory
at the renormalization scale $Q=\mbox{max} (M_S,M_{\rm ex})$. The
relevant effective theory for each renormalization scale depends on
$M_{\tilde{G}}$, $M_S$, and $M_{\rm ex}$, as summarized in Tab.\
\ref{tab:effective}.
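The bookkeeping of Tab.\ \ref{tab:effective} reduces to a simple rule: gauginos are present in the effective theory above $M_{\tilde{G}}$, the MSSM scalars above $M_S$, and the extra fermions above $M_{\rm ex}$. A minimal sketch of this selection logic (the function and theory labels are ours, not from any public code):

```python
def effective_theory(Q, M_gaugino, M_S, M_ex):
    """Effective theory at renormalization scale Q (all arguments in GeV).

    Gauginos are kept above M_gaugino, the MSSM scalars above M_S, and the
    extra matter fermions above M_ex; the extra matter scalars enter only
    above max(M_S, M_ex), which does not change the naming below.
    """
    base = "MSSM" if Q > M_S else ("GtildeSM" if Q > M_gaugino else "SM")
    return base + ("_ex" if Q > M_ex else "")
```

For instance, with $M_{\rm ex} < M_{\tilde{G}} < M_S$ this reproduces, from bottom to top, the sequence SM, ${\rm SM_{ex}}$, ${\rm \tilde{G}SM_{ex}}$, ${\rm MSSM_{ex}}$ of the first column of the table.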
For all the effective theories mentioned above, we use two-loop RGEs;
for SUSY models, we use the Susyno package \cite{Fonseca:2011sy},
while the RGEs for non-SUSY theories are calculated based on
\cite{Machacek:1983tz, Machacek:1983fi, Machacek:1984zw}. In
addition, at each energy threshold, one effective theory is matched to
another, taking into account one-loop threshold corrections to
Lagrangian parameters. In the following, we summarize the important
effects.
At the sfermion mass scale $M_S$, two Higgs doublets in ${\rm
MSSM_{(ex)}}$ are matched to the SM-like Higgs as
\begin{align}
H_{\rm SM} = H_u \sin \beta + H_d^{*} \cos \beta,
\end{align}
where $\tan \beta$ is the ratio of the vacuum expectation value of
$H_u^0$ to that of $H_d^0$. The boundary condition for the Higgs
quartic coupling $\lambda$ at $M_S$ is
\begin{align}
\lambda (M_S) = \frac{g_1^2 (M_S) + g_2^2 (M_S)}{4} \cos^2 2\beta +
\delta \lambda,
\end{align}
where $\delta \lambda$ is the threshold correction due to heavy scalar
particles (in particular, stops). In addition, the mass of the
pseudo-scalar Higgs, which is a component of the heavy Higgs multiplet
$H_{\rm heavy} = H_u \cos \beta - H_d^{*} \sin \beta$, is determined at
this scale as
\begin{align}
m_A^2 = [m_{H_u}^2 + m_{H_d}^2 + 2\mu^2 - m_{H_{\rm SM}}^2]_{Q = M_S},
\end{align}
where $\mu^2$ is determined from the following radiative electroweak
symmetry breaking condition:
\begin{align}
\mu^2 &= - m_{H_{\rm SM}}^2 - m_{H_u}^2 \sin^2 \beta - m_{H_d}^2 \cos^2
\beta + B \mu \sin 2\beta,\label{muSq}\\
B \mu &= \frac{1}{2} (m_{H_u}^2 - m_{H_d}^2) \tan 2\beta.
\end{align}
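For orientation, the tree-level part of these matching conditions can be transcribed directly. In the sketch below the stop threshold correction $\delta\lambda$ is set to zero by default, all numerical inputs are illustrative, and $g_1$, $g_2$ are the couplings exactly as they appear in the $\lambda$ formula above:

```python
import math

def higgs_matching(g1, g2, tan_beta, m_Hu2, m_Hd2, m_HSM2, delta_lambda=0.0):
    """Tree-level matching at Q = M_S following the conditions above.

    Mass-squared parameters are in GeV^2; returns (lambda, mu^2, B*mu, m_A^2).
    delta_lambda stands in for the stop threshold correction, omitted here.
    """
    beta = math.atan(tan_beta)
    lam = (g1**2 + g2**2) / 4 * math.cos(2 * beta) ** 2 + delta_lambda
    B_mu = 0.5 * (m_Hu2 - m_Hd2) * math.tan(2 * beta)
    mu2 = (-m_HSM2 - m_Hu2 * math.sin(beta) ** 2 - m_Hd2 * math.cos(beta) ** 2
           + B_mu * math.sin(2 * beta))
    m_A2 = m_Hu2 + m_Hd2 + 2 * mu2 - m_HSM2
    return lam, mu2, B_mu, m_A2
```

A consistent electroweak-symmetry-breaking point requires $\mu^2 > 0$ and $m_A^2 > 0$; otherwise the chosen soft parameters do not admit the assumed vacuum.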
The Yukawa coupling constants $y_f$ (with $f=t$, $b$, and $\tau$) are
matched to $\tilde{y}_f$ at $Q=M_S$, using the mixing angle $\beta$.
In our analysis, the threshold correction to the bottom Yukawa
coupling constant at $Q = M_S$ is important. The correction
$\Delta_b$ is defined by
\begin{align}
\tilde{y}_b (M_S) = y_b (M_S) \cos \beta (1 + \Delta_b),
\end{align}
where $\tilde{y}_b$ and $y_b$ are the bottom quark Yukawa coupling
constants in the effective theory used just below and just above
$M_S$, respectively. The most important contributions to $\Delta_b$
come from the sbottom-gluino and stop-chargino diagrams
\cite{Hall:1993gn, Carena:1994bv, Blazek:1995nv}; at the leading order
of the mass-insertion approximation, these contributions are given by
\begin{align}
\Delta_b \simeq
\left[
\frac{g_3^2}{6\pi^2} M_3 I(m_{\tilde b_1}^2,m_{\tilde b_2}^2,M_3^2)
+ \frac{y_t^2}{16\pi^2} A_t I(m_{\tilde t_1}^2,m_{\tilde t_2}^2,\mu^2)
\right] \mu \tan\beta,
\label{eq:deltab}
\end{align}
where $m_{\tilde b_1}$ and $m_{\tilde t_1}$ ($m_{\tilde b_2}$ and
$m_{\tilde t_2}$) are the masses of the lighter (heavier) sbottom and stop,
respectively, and
\begin{align}
I(a,b,c) = -\frac{ab\ln (a/b)+bc\ln (b/c)+ca\ln (c/a)}{(a-b)(b-c)(c-a)}.
\end{align}
(In our numerical analysis, we use the full one-loop expression of
$\Delta_b$.) The important point is that $\Delta_b$ is approximately
proportional to $\mu \tan \beta$, resulting in the large correction to
the bottom Yukawa coupling constant in the models with heavy Higgsinos
or those with large $\tan \beta$.
We also include threshold corrections to the Wino and Bino masses at $Q
= M_S$ due to the Higgs-Higgsino loop diagram \cite{Giudice:1998xp}:
\begin{align}
\delta M_1 = \frac{g_1^{2} (M_{\rm S})}{16 \pi^{2}} L, ~~~
\delta M_2 =&\, \frac{g_2^{2} (M_{\rm S})}{16 \pi^{2}} L,
\end{align}
where
\begin{eqnarray}
L \equiv \mu \sin2\beta
\frac{m_{A}^{2}}{\mu^{2}-m_{A}^{2}} \ln \frac{\mu^{2}}{m_{A}^{2}}.
\end{eqnarray}
At $Q = M_{\tilde{G}}$ and $Q = M_{\rm ex}$, we take into account
one-loop threshold corrections to gauge coupling constants, gaugino
masses, and scalar masses due to loop diagrams involving gauginos and
extra matters. Then, at $Q = m_t$ the SM-like Higgs mass is evaluated
as
\begin{align}
m_h^2 = 2 \lambda(m_t) v^2 + \delta m_h^2,
\end{align}
where $v \simeq 174\ {\rm GeV}$ is the vacuum expectation value of the
SM-like Higgs boson and $\delta m_h^2$ is the threshold correction.
\section{Numerical Results}
\label{sec:numerical}
\setcounter{equation}{0}
In this section, we show the results of our numerical study. In
addition to the SM parameters, the present model contains ten new
parameters, $\tan\beta$, $m_{\bf \bar{5}}^2$, $m_{\bf 10}^2$,
$m_{H{\bf{5}}}^2$, $m_{H{\bf \bar{5}}}^2$, $m_{1/2}$, $\mu$, $B$,
$\mu_{\bf 5}$, and $\mu_{\bf 10}$, ignoring the Yukawa and the
trilinear couplings related to the extra matters. Among them, $\mu$
and $B$ are determined at the sfermion mass scale $M_S$ to fix the
vacuum expectation value of the SM-like Higgs boson $v$ and
$\tan\beta$.
We numerically solve RGEs from the weak scale to the GUT scale. Our
numerical calculation is based on the SOFTSUSY package
\cite{Allanach:2001kg}, in which three-loop RGEs for the effective
theory below the electroweak scale and two-loop RGEs for the MSSM are
implemented. We have added to the SOFTSUSY package two-loop RGEs
for the other effective theories listed in Tab.\ \ref{tab:effective},
i.e., SM, ${\rm SM_{ex}}$, $\tilde{G}$SM, ${\rm \tilde{G}SM_{ex}}$,
and ${\rm MSSM_{ex}}$. In addition, one-loop threshold corrections
due to the diagrams with SUSY particles or extra matters in the loop
are included at relevant thresholds. In our numerical calculation,
$M_{\rm S}$ is taken to be the geometric mean of the stop masses,
while we take $M_{\tilde{G}}=M_3$. $M_{\rm ex}$ is set to the
bottom-like extra fermion mass $\mu_D$ for models with $N_{\bf 5}
> 0$, and is set to the geometric mean of top-like extra fermion
masses for models with $N_{\bf 10} > 0$. The gauge and Yukawa
coupling constants are determined based on \cite{Agashe:2014kda}. In
particular, we use the bottom quark mass of $m_b^{(\overline{\rm
MS})}(m_b)=4.18\ {\rm GeV}$, the top quark mass of $m_t=173.21\
{\rm GeV}$, and $\alpha_{3} (M_Z)=0.1185$ (with
$\alpha_{3}=g_3^2/4\pi$).
\subsection{Extra matters without Yukawa couplings}
\label{sec:extra_noYukawa}
Let us now study the effects of extra matters on the $b$-$\tau$
unification in SUSY GUTs. We first consider the case where the extra
matters interact with the MSSM particles only through gauge
interactions.
Because the boundary conditions for the Yukawa coupling constants are
fixed by using the fermion masses, $y_b(M_{\rm GUT})$ and $y_{\tau}
(M_{\rm GUT})$ may differ in the present analysis. To quantify the
difference, we define
\begin{align}
R_{b\tau} \equiv \frac{y_b (M_{\rm GUT})}{y_\tau (M_{\rm GUT})}.
\end{align}
We calculate $R_{b\tau}$ as a function of model parameters, and study
how it is affected by extra matters. If there is no source of the GUT
scale threshold corrections other than the splitting of the masses of
GUT scale particles, then $(R_{b\tau} - 1) \sim O(1) \%$. Thus, if
$R_{b\tau}$ is (much) larger than $\sim O(1) \%$, it indicates a
sizable threshold correction at the GUT scale and/or a non-trivial
flavor physics at the GUT scale or below.
If the Yukawa interactions of the extra matters are negligible, the
main effect of extra matters on the Yukawa unification is through the
enhancement of the gauge coupling constants at high energy scales.
With extra matters, the coupling constants of $SU(3)_C \times SU(2)_L
\times U(1)_Y$ become larger. This can be understood by examining the
RGEs of the gauge coupling constants; in SUSY models, they are given
by
\begin{align}
\frac{d}{d \ln Q} g_a &= \frac{g_a^3}{16\pi^2}
\left[b_a + (N_{\bf 5} + 3 N_{\bf 10})\right] + \cdots,
\end{align}
where $(b_1,b_2,b_3)=(\frac{33}{5}, 1, -3)$, and $\cdots$ in the above
equation denotes higher order effects. One can see that, with
non-vanishing $N_{\bf 5}$ or $N_{\bf 10}$, the beta-function
coefficients become larger, resulting in the enhancement of the gauge
coupling constants at higher scales. In particular, the enhancement of
$g_3$ is the most important because the coupling constant $g_3$ itself
is large. The enhanced gauge coupling constants affect
the renormalization group running of $y_b$ and $y_\tau$, whose RGEs
are given by
\begin{align}
\frac{d}{d \ln Q} y_b &= \frac{y_b}{16\pi^2}
\left(
y_t^2 + 6 y_b^2 + y_\tau^2 - \frac{7}{15} g_1^2
- 3 g_2^2 - \frac{16}{3} g_3^2
\right) + \cdots,
\label{RGEyb}
\\
\frac{d}{d \ln Q} y_\tau &= \frac{y_\tau}{16\pi^2}
\left(
3 y_b^2 + 4 y_\tau^2 -
\frac{9}{5} g_1^2 - 3 g_2^2
\right) + \cdots,
\label{RGEytau}
\end{align}
where the mixings between different generations are neglected. With
the low-scale values of the Yukawa coupling constants being fixed to
realize the observed fermion masses, the above equations indicate that
the Yukawa coupling constants at $M_{\rm GUT}$ are more suppressed as
the gauge coupling constants become larger. Due to this effect, $y_b$
is more suppressed than $y_\tau$ because, of the two, only the running
of $y_b$ is affected by $g_3$.
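This mechanism can be made quantitative with a crude numerical experiment: integrate the one-loop RGEs above (together with the standard one-loop MSSM equation for $y_t$) from a common low scale to the GUT scale, once without and once with extra matters, keeping the low-scale inputs fixed. All input values below are illustrative, the tower of effective theories is collapsed to a single MSSM-like run, and two-loop and threshold effects are ignored:

```python
import math

def run_yukawas(y_b, y_tau, y_t, g, n_eff, t_span, steps=4000):
    """Euler integration of the one-loop MSSM RGEs from Q0 to Q0*exp(t_span).

    g = [g1, g2, g3] with GUT-normalized g1; n_eff = N_5 + 3*N_10 counts
    complete extra SU(5) multiplets, which shift every b_a by n_eff."""
    b = [33 / 5 + n_eff, 1 + n_eff, -3 + n_eff]
    k, dt = 1 / (16 * math.pi**2), t_span / steps
    for _ in range(steps):
        g1s, g2s, g3s = (x * x for x in g)
        dyb = y_b * k * (y_t**2 + 6 * y_b**2 + y_tau**2
                         - 7 / 15 * g1s - 3 * g2s - 16 / 3 * g3s)
        dytau = y_tau * k * (3 * y_b**2 + 4 * y_tau**2 - 9 / 5 * g1s - 3 * g2s)
        dyt = y_t * k * (6 * y_t**2 + y_b**2
                         - 13 / 15 * g1s - 3 * g2s - 16 / 3 * g3s)
        dg = [x**3 * k * ba for x, ba in zip(g, b)]
        y_b, y_tau, y_t = y_b + dyb * dt, y_tau + dytau * dt, y_t + dyt * dt
        g = [x + dx * dt for x, dx in zip(g, dg)]
    return y_b, y_tau, y_t, g

def R_btau(n_eff):
    # illustrative inputs at Q0 ~ 3 TeV (tan(beta) ~ 10); t_span = ln(M_GUT/Q0)
    yb, ytau, _, _ = run_yukawas(0.14, 0.10, 0.85, [0.47, 0.64, 1.02],
                                 n_eff, math.log(2e16 / 3e3))
    return yb / ytau
```

With these inputs the ratio comes out below unity already for $n_{\rm eff}=0$ and is further suppressed for $n_{\rm eff}>0$, in line with the trend discussed below.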
\begin{figure}[t]
\begin{minipage}{0.48\hsize}
\begin{center}
\includegraphics[width=0.98\hsize]{figure/MexVsR_lowTanb.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.48\hsize}
\begin{center}
\includegraphics[width=0.98\hsize]{figure/MexVsR_highTanb.eps}
\end{center}
\end{minipage}
\caption{$R_{b\tau}$ as a function of the mass scale of the extra
matters for models with low $\tan \beta$ (left) and high $\tan
\beta$ (right), taking $(N_{\bf 5},N_{\bf 10}) = (1,0)$ (red),
$(2,0)$ (green), and $(0,1)$ (blue). The horizontal axis denotes
the value of $\mu_{\bf 5}$ for red and green lines and that of
$\mu_{\bf 10}$ for blue lines. We consider both signs of the SUSY
invariant Higgs mass: $\mu > 0$ (dotted) and $\mu < 0$ (solid).
The boundary conditions used in the left figure are $m_{\bf
\bar{5}} = m_{\bf 10} = m_{1/2} = 100\ {\rm TeV}$, $m_{H{\bf 5}}
= m_{H{\bf \bar{5}}} = 80\ {\rm TeV}$, and $a_d = a_u = 0$. $\tan
\beta$ is determined so that $m_h = 125.09\ {\rm GeV}$, which
results in $2.9 < \tan \beta < 3.1$. The boundary conditions for
the right figure are $\tan \beta = 50$, $m_{\bf \bar{5}} = m_{\bf
10} = m_{H{\bf 5}} = m_{H{\bf \bar{5}}} = 0$, and $a_d = a_u =
0$. $m_{1/2}$ is determined so that $m_h = 125.09\ {\rm GeV}$,
which results in $4.5\ {\rm TeV} < m_{1/2} < 6.5\ {\rm TeV}$.}
\label{MexVsR}
\end{figure}
In Fig.\ \ref{MexVsR}, in order to investigate how these effects
affect $R_{b\tau}$, we show $R_{b\tau}$ as a function of the mass
scale of extra matters. The red, green, and blue lines correspond to
the models with $(N_{\bf 5},N_{\bf 10}) = (1,0)$, $(2,0)$, and
$(0,1)$, respectively. (Thus, the horizontal axis corresponds to
$\mu_{\bf 5}$ for red and green lines and $\mu_{\bf 10}$ for blue
lines.) The dotted and solid lines are for models with $\mu > 0$ and
$\mu < 0$, respectively. For the left figure, we take mSUGRA-like
boundary conditions, $m_{\bf \bar{5}} = m_{\bf 10} = m_{1/2} = 100\
{\rm TeV}$, $m_{H{\bf 5}} = m_{H{\bf \bar{5}}} = 80\ {\rm TeV}$, and
$a_d = a_u = 0$. Here, $\tan \beta$ is determined so that the SM-like
Higgs mass is given by the observed value $m_h = 125.09\ {\rm GeV}$;
then, it takes values in the range $2.9 < \tan \beta < 3.1$. The
right figure shows the results for the model with gaugino mediation
boundary conditions \cite{Kaplan:1999ac, Chacko:1999mi}, taking $\tan
\beta = 50$, $m_{\bf \bar{5}} = m_{\bf 10} = m_{H{\bf 5}} = m_{H{\bf
\bar{5}}} = 0$, and $a_d = a_u = 0$. In this case, $m_{1/2}$ is tuned
so that $m_h$ is equal to the observed Higgs mass, which gives $4.5\
{\rm TeV} < m_{1/2} < 6.5\ {\rm TeV}$.
As can be easily understood, the effects of extra matters on the
runnings of $y_b$ and $y_\tau$ are more enhanced as the masses of the
extra matters become smaller. We can see that $R_{b\tau}$ is
suppressed by $\sim 10\ \%$ when the mass scale of the extra matters
is around the TeV scale, while $R_{b\tau}$ approaches the MSSM
value when the mass scale $M_{\rm ex}$ becomes close to the GUT scale.
In the MSSM, it is often the case that $R_{b\tau}$ is significantly
smaller than 1, in particular when $\tan \beta$ is small. As we have
seen, the effects of extra matters make $R_{b\tau}$ smaller if extra
matters interact with MSSM particles only through gauge interactions.
We also note here that the deviation between a solid line and the
corresponding dotted line approximately shows (twice) the size of the
threshold correction $\Delta_b$, since the sign of $\Delta_b$ is
determined by that of $\mu$. The size of $\Delta_b$ is also affected
by the change of $M_{\rm ex}$ because the mass spectrum of the MSSM
particles also depends on $M_{\rm ex}$. With the model parameters for
the solid lines in Fig.\ \ref{MexVsR} (right), for example,
$|\Delta_b|$ is enhanced as $M_{\rm ex}$ becomes smaller. However,
the suppression of $R_{b\tau}$ due to the enhancement of $g_3$ at
higher scales is more significant; consequently, $R_{b\tau}$ becomes
smaller as $M_{\rm ex}$ decreases as shown in Fig.\ \ref{MexVsR}.
\subsection{Extra matters with Yukawa couplings}
\label{sec:extra_Yukawa}
So far, we have considered the case where the Yukawa interactions of
the extra matters are negligibly small. However, the extra matters
may couple to MSSM chiral multiplets via Yukawa couplings. With such
a new interaction, the renormalization group runnings of the coupling
and mass parameters may be changed, affecting the unification of
Yukawa coupling constants. In this subsection, we show that this is
indeed the case. Among several possibilities, we introduce the Yukawa
interaction for the extra matter in ${\bf 10}$ representation of
$SU(5)$. We will see that, in such a model, the extra matters may
help to make the $b$-$\tau$ unification successful.
Here, we consider the case with $(N_{\bf 5},N_{\bf 10}) = (0,1)$, and
study the effect of the Yukawa interactions of extra matters. We
concentrate on the case where the extra matter scale is comparable to
or higher than $M_{\rm S}$ so that all the extra matters (i.e.,
fermions and scalars) simultaneously decouple from the effective
theory at a single scale $M_{\rm ex}$.
For the study of such a case, it is instructive to use the fact that
the Yukawa interaction above the GUT scale can be written in the
following form:
\begin{align}
W_{\rm Yukawa}^{(SU(5))} = \eta_{b,\tau} \bar{\bf 5}_H {\bf 10}_Y \bar{F}
+
{\bf 5}_H
\left( \begin{array}{cc}
{\bf 10}_Y & {\bf 10}_0
\end{array} \right)
\left( \begin{array}{cc}
\eta_t & \eta_t^{(1)} \\
\eta_t^{(1)} & \eta_t^{(2)} \\
\end{array} \right)
\left( \begin{array}{c}
{\bf 10}_Y \\
{\bf 10}_0 \\
\end{array} \right),
\end{align}
where $\eta$'s are coupling constants. Here, ${\bf 10}_Y$ and ${\bf
10}_0$ are chiral multiplets in the ${\bf 10}$ representation of
$SU(5)$, and are given by linear combinations of $T$ and $T'$. Notice
that, in this basis, only ${\bf 10}_Y$ couples to $\bar{F}$ through the
Yukawa interaction. In addition, ${\bf 5}_H$ and $\bar{\bf 5}_H$ are
chiral multiplets containing up- and down-type Higgses, respectively.
In order to make our point clearer, we take
$\eta_t^{(1)}=\eta_t^{(2)}=0$ in the following analysis. With such an
assumption, the Yukawa interaction below the GUT scale is given by
\begin{align}
W_{\rm Yukawa} = &\,
y'_b H_d Q_Y b_R^c
+ y'_\tau H_d l_L E_Y
+ y'_t H_u Q_Y U_Y,
\label{W_yukawa}
\end{align}
where $Q_Y$, $U_Y$, and $E_Y$ are chiral superfields embedded into
${\bf 10}_Y$. (Chiral multiplets embedded into ${\bf 10}_0$ are
denoted as $Q_0$, $U_0$, and $E_0$.) Denoting the SUSY invariant mass
terms for the extra matters as
\begin{align}
W_{\bf 10} = \sum_{\Phi=Q,U,E}
(\mu_{\Phi_Y} \Phi_Y \bar{\Phi}' + \mu_{\Phi_0} \Phi_0 \bar{\Phi}'),
\end{align}
we obtain
\begin{align}
\left( \begin{array}{c}
Q_Y \\
Q_0 \\
\end{array} \right) &=
\left( \begin{array}{cc}
\cos \theta_Q & \sin \theta_Q \\
- \sin \theta_Q & \cos \theta_Q \\
\end{array} \right)
\left( \begin{array}{c}
q_L \\
Q'
\end{array} \right),
\label{mixingQ}
\end{align}
where
\begin{align}
\cos \theta_Q &= \frac{\mu_{Q_0}}{\mu_Q},\quad \sin \theta_Q =
\frac{\mu_{Q_Y}}{\mu_Q},
\end{align}
with $\mu_Q=\sqrt{\mu_{Q_Y}^2+\mu_{Q_0}^2}$. Similar relations hold
for $(U_Y,U_0)$ and $(E_Y,E_0)$, with the mixing angles
$\theta_{U,E}=\cos^{-1}(\mu_{U_0,E_0}/\sqrt{\mu_{U_Y,E_Y}^2+\mu_{U_0,E_0}^2})$.
At the mass scale of the extra matters, MSSM Yukawa coupling constants
are given by
\begin{align}
y_b (M_{\rm ex}) = &\, y_b' (M_{\rm ex}) \cos\theta_Q,
\label{ybprime} \\
y_\tau (M_{\rm ex}) = &\, y_\tau' (M_{\rm ex}) \cos\theta_E,
\label{ytauprime} \\
y_t (M_{\rm ex}) = &\, y_t' (M_{\rm ex}) \cos\theta_Q \cos\theta_U.
\label{ytprime}
\end{align}
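The mixing and matching relations above can be illustrated numerically. The following Python sketch evaluates the mixing angles of Eq.\ (\ref{mixingQ}) and the matching conditions of Eqs.\ (\ref{ybprime})--(\ref{ytprime}); the input values (including the primed couplings) are hypothetical and chosen only for illustration, not taken from our analysis:

```python
import math

def mixing(mu_Y, mu_0):
    """cos(theta) and sin(theta) from Eq. (mixingQ)."""
    mu = math.hypot(mu_Y, mu_0)        # mu_Phi = sqrt(mu_Y^2 + mu_0^2)
    return mu_0 / mu, mu_Y / mu

# Hypothetical inputs with mu_{Phi_Y} = X * mu_{Phi_0} and X = 1.4.
X = 1.4
cos_Q, sin_Q = mixing(X, 1.0)
cos_U, _ = mixing(X, 1.0)
cos_E, _ = mixing(X, 1.0)

# Matching at Q = M_ex, Eqs. (ybprime)-(ytprime); primed couplings
# below are illustrative placeholders.
yb_p, ytau_p, yt_p = 0.20, 0.20, 0.85
yb = yb_p * cos_Q
ytau = ytau_p * cos_E
yt = yt_p * cos_Q * cos_U
```

Since $\cos\theta_Q\cos\theta_U<1$, the sketch makes explicit that $y'_t$ exceeds $y_t$ for any nonzero mixing.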
In the present set up, the Yukawa structure is like that of the MSSM
as Eq.\ \eqref{W_yukawa} is obtained from the Yukawa interaction of
the MSSM by replacing $y_{t,b,\tau}\rightarrow y'_{t,b,\tau}$ and
$(q_L, t_R^c, \tau_R^c) \to (Q_Y, U_Y, E_Y)$. Numerically, however,
such a replacement may have significant effects on the Yukawa
unification. This is because, as shown in Eq.\ \eqref{ytprime},
$y'_t$ can be significantly larger than $y_t$ since
$\cos\theta_Q\cos\theta_U<1$. Such an enhancement of the coupling
constant may have the following consequences:
\begin{enumerate}
\item $y_b' (M_{\rm GUT})$ is enhanced through the renormalization
group effect while $y_\tau' (M_{\rm GUT})$ is not (see Eqs.\
\eqref{RGEyb} and \eqref{RGEytau}).
\item $m_{H_u}^2 (M_S)$ is suppressed through the renormalization
group effect. This can be understood from the RGE of $m_{H_u}^2$,
which is given by
\begin{align}
\frac{d}{dt} m_{H_u}^2 =
6 {y_t'}^2 (m_{H_u}^2 + m_{\tilde{Q}}^2 + m_{\tilde{U}}^2 + A_t^2)
+ \cdots,
\label{mhubeta}
\end{align}
where only the $y'_t$-dependence of the beta-function at the
one-loop level is shown in the above equation. This may result in
the enhancement of $|\mu|$ and $|\Delta_b|$.
\end{enumerate}
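The second point can be illustrated with a crude one-loop integration of Eq.\ (\ref{mhubeta}). The Python sketch below keeps only the $y'_t$ term, writes the loop factor $1/16\pi^2$ explicitly, and uses hypothetical boundary values (in units of ${\rm TeV}^2$); it is a qualitative toy, not our numerical setup:

```python
import math

def run_mhu2(mhu2, mQ2, mU2, At2, yt_prime, t_span, steps=1000):
    """Euler integration of d(m_Hu^2)/d(ln Q), y'_t term only,
    running downward from the high scale over t_span = ln(Q_high/Q_low)."""
    dt = t_span / steps
    for _ in range(steps):
        beta = (6.0 * yt_prime**2 / (16.0 * math.pi**2)
                * (mhu2 + mQ2 + mU2 + At2))
        mhu2 -= beta * dt   # run down toward the low scale
    return mhu2

# Hypothetical inputs: m_Hu^2 = m_Q^2 = m_U^2 = 1 TeV^2, A_t = 0,
# run over ln(M_GUT / M_S) ~ 30.
low = run_mhu2(1.0, 1.0, 1.0, 0.0, yt_prime=1.2, t_span=30.0)
lower = run_mhu2(1.0, 1.0, 1.0, 0.0, yt_prime=1.5, t_span=30.0)
```

The larger $y'_t$ drives $m_{H_u}^2$ further down (`lower < low`), which through the electroweak symmetry breaking conditions enhances $|\mu|$ and hence $|\Delta_b|$.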
In order to study the effects of the Yukawa couplings of the extra
matters on the $b$-$\tau$ unification, we solve the RGEs numerically,
taking into account the effects of extra Yukawa couplings. We assume
that the threshold corrections to the SUSY invariant masses of extra
matters are negligible, and that they are unified at the GUT scale; we
parameterize their boundary conditions as
\begin{align}
&
\mu_{Q_0} (M_{\rm GUT}) = \mu_{U_0} (M_{\rm GUT}) = \mu_{E_0} (M_{\rm GUT})
\equiv \mu_{\bf 10},
\\
&
\mu_{Q_Y} (M_{\rm GUT}) = \mu_{U_Y} (M_{\rm GUT}) = \mu_{E_Y} (M_{\rm GUT})
\equiv X \mu_{\bf 10}.
\end{align}
Here, $X^{-1}$ is approximately equal to $\cos\theta_\Phi$ (with
$\Phi=Q,U,E$), although the two slightly differ because of the
renormalization group effects from the GUT scale to the extra matter
scale. (In our numerical analysis, we have taken into account the
effects of the renormalization group running of SUSY invariant mass
parameters.) In addition, for $Q>M_{\rm ex}$, the scalars in the
extra matter sector have trilinear interactions. We assume that, at the
GUT scale, the trilinear couplings are proportional to the corresponding
Yukawa coupling constants, and that the trilinear interactions above the
extra matter scale are given by
\begin{align}
{\cal L}_{\rm trilinear}^{\rm (soft)} =
- A'_b H_d \tilde{Q}_Y \tilde{b}_R^c
- A'_\tau H_d \tilde{l}_L \tilde{E}_Y
- A'_t H_u \tilde{Q}_Y \tilde{U}_Y,
\end{align}
with the boundary conditions $A'_b (M_{\rm GUT}) = A'_\tau (M_{\rm
GUT}) \equiv a'_d$ and $A'_t (M_{\rm GUT}) \equiv a'_u$.
Furthermore, the SUSY breaking mass parameters above the extra
matter scale can be written as
\begin{align}
{\cal L}^{\rm (soft)}_{\rm scalar\ mass} =
\sum_{\Phi = Q,U,E}
(m_{\Phi_Y}^2 | \tilde{\Phi}_Y |^2 + m_{\Phi_0}^2 | \tilde{\Phi}_0 |^2)
+ \cdots.
\label{softmassEx}
\end{align}
Notice that, with the present choice of parameters, there is no mixing
term between $\tilde{\Phi}_Y$ and $\tilde{\Phi}_0$. Soft SUSY
breaking parameters defined above and below the extra matter scale are
matched at $Q=M_{\rm ex}$ using Eq.\ \eqref{mixingQ} (as well as the
mixing angles for $(U_Y,U_0)$ and $(E_Y,E_0)$).
We show examples of the running of Yukawa coupling constants in Fig.\
\ref{fig:YukawaRunning}, which demonstrates the possibility of making
the unification of the Yukawa coupling constants successful due to the
effects of the Yukawa interactions of extra matters. Solid lines show
the results for the present model while dotted lines denote the result
for the MSSM as a reference. Here, we take $\tan \beta = 27$, $m_{\bf
\bar{5}} = m_{{\bf 5}_H} = m_{{\bf \bar{5}}_H} = m_{1/2} = 3\ {\rm
TeV}$, $a_d = a_u = 0$, $\mu_{\bf 10} = 10^{10}\ {\rm GeV}$, and $X
= 1.4$. $m_{\bf 10}$ is used to adjust the SM-like Higgs mass in each
model to be the observed value, which gives $m_{\bf 10} = 14\ {\rm
TeV}$ for the model with $(N_{\bf 5}, N_{\bf 10}) = (0,1)$ and $12\
{\rm TeV}$ for the MSSM. The vertical dotted lines denote the
matching scales in the model with extra matters: from left to right,
they correspond to $Q = m_t, M_{\tilde{G}}, M_S, M_{\rm ex}$, and
$M_{\rm GUT}$, respectively. The large ``jumps'' of solid lines in
Fig.\ \ref{fig:YukawaRunning} at $Q = M_S$ are due to the threshold
corrections, while those at $Q = M_{\rm ex}$ are mainly due to the
mixing effect represented in Eqs.\ (\ref{ybprime}) and
(\ref{ytauprime}), since we plot $y_b' \cos \beta$ and $y_\tau' \cos
\beta$ instead of $y_b \cos \beta$ and $y_\tau \cos \beta$ by solid
lines in the range $Q > M_{\rm ex}$. We can see from the figure that
the enhancement of $|\Delta_b|$ significantly modifies the prediction
for Yukawa unification. Together with the change in the running of
$y_b$ discussed before, the mixing with $X > 1$ increases the
prediction for $R'_{b\tau}$, which is defined as
\begin{align}
R'_{b\tau} \equiv \frac{y_b' (M_{\rm GUT})}{y_\tau' (M_{\rm GUT})}.
\label{Rbtaunew}
\end{align}
It becomes almost $1$ for the present choice of parameters, while
$R_{b\tau} \simeq 0.91$ in the case of the MSSM with the same choice
of GUT scale boundary conditions for SUSY breaking parameters.
\begin{figure}
\begin{minipage}{0.48\hsize}
\begin{center}
\includegraphics[width=0.98\hsize]{figure/YtRunning.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.48\hsize}
\begin{center}
\includegraphics[width=0.98\hsize]{figure/BTauRunning.eps}
\end{center}
\end{minipage}
\caption{Runnings of $\tilde{y}_t$ and $y^{(\prime)}_t \sin \beta$ (left)
and those of $\tilde{y}_b$, $y^{(\prime)}_b \cos \beta$,
$\tilde{y}_\tau$ and $y^{(\prime)}_\tau \cos \beta$ (right) in the
model with $(N_{\bf 5},N_{\bf 10}) = (0,1)$ (solid lines) and in
the MSSM (dotted lines). Here, we take $\tan \beta = 27$, $m_{\bf
\bar{5}} = m_{{\bf 5}_H} = m_{{\bf \bar{5}}_H} = m_{1/2} = 3\
{\rm TeV}$, $a_d = a_u = 0$, $\mu_{\bf 10} = 10^{10}\ {\rm GeV}$,
and $X = 1.4$. Here, $m_{\bf 10}$ is tuned to adjust the SM-like
Higgs mass to be $m_h = 125.09\ {\rm GeV}$, which gives $m_{\bf
10} = 14\ {\rm TeV}$ for the model with $(N_{\bf 5}, N_{\bf 10})
= (0,1)$ and $m_{\bf 10} = 12\ {\rm TeV}$ for the MSSM. The
vertical dotted lines denote the matching scales in the model with
extra matters: $Q = m_t$, $M_{\tilde{G}}$, $M_S$, $M_{\rm ex}$,
and $M_{\rm GUT}$ from left to right. For $M_S < Q < M_{\rm ex}$
($Q > M_{\rm ex}$), the solid lines denote $y_t \sin \beta$ ($y'_t
\sin \beta$) or $y_f \cos \beta$ ($y'_f \cos \beta$)
with $f=b$, or $\tau$.}
\label{fig:YukawaRunning}
\end{figure}
In Fig.\ \ref{mixingVsR}, we show the $X$ dependence of $R'_{b\tau}$,
taking $\tan \beta = 27$, $m_{\bf \bar{5}} = m_{{\bf 5}_H} = m_{{\bf
\bar{5}}_H} = m_{1/2} = 3\ {\rm TeV}$, and $a_d = a_u = 0$.
$m_{\textbf{10}}$ is tuned for each value of $X$ to adjust the SM-like
Higgs mass to be the observed value; as a result, $m_{\textbf{10}}$
takes values in the range between $11.5\ {\rm TeV}$ and $14\ {\rm
TeV}$. The red line shows the result for the case with
$\mu_{\textbf{10}} = 10^{10}\ {\rm GeV}$ and the green one shows that
with $\mu_{\textbf{10}} = 10^4\ {\rm GeV}$. We can see from
the figure that $R'_{b\tau}$ is enhanced for larger $X$. In the
present choice of parameters with $\mu_{\textbf{10}} = 10^{10}\ {\rm
GeV}$, $R'_{b\tau}=1$ is possible. On the other hand, for
$\mu_{\textbf{10}} = 10^4\ {\rm GeV}$, the enhancement is not so
significant since, in this case, the suppression of $y_b$ due to the
enhancement of the gauge coupling constants is so large (see Fig.\
\ref{MexVsR}) that it cancels the advantage of the mixing effect.
Notice that the lines in Fig.\ \ref{mixingVsR} terminate at some value
of $X$. This is because, for larger values of $X$, $m_{\textbf{10}}$
becomes larger in order to fix the SM-like Higgs mass, which in turn
may make the right-handed sbottom mass squared or the left-handed
slepton mass squared negative at $Q=M_S$, causing the tachyonic
sfermion problem.
\begin{figure}[t]
\begin{center}
\includegraphics[width=90mm]{figure/mixingVsR.eps}
\end{center}
\caption{Mixing parameter $X$ dependence of $R'_{b\tau}$ defined in
Eq.\ (\ref{Rbtaunew}). The parameters used are the same as in Fig.\
\ref{fig:YukawaRunning}. $m_{\textbf{10}}$ is again used to adjust
the SM-like Higgs mass to be $m_h = 125.09\ {\rm GeV}$ at each value
of $X$. The red and the green lines denote the model with
$\mu_{\textbf{10}} = 10^{10}\ {\rm GeV}$ and $\mu_{\textbf{10}} =
10^4\ {\rm GeV}$, respectively.} \label{mixingVsR}
\end{figure}
\section{Summary}
\setcounter{equation}{0}
In this letter, we have studied the $b$-$\tau$ unification in SUSY
$SU(5)$ models with extra matters. We have assumed that the extra
matters are embedded into full $SU(5)$ multiplets. We have seen that
the extra matters may significantly affect the $b$-$\tau$ unification
in particular when the mass scale of the extra matters is much lower
than the GUT scale.
We have first considered the case where the extra matters interact
with the MSSM particles only through gauge interactions. In such a
case, the ratio of $y_b$ and $y_\tau$ at the GUT scale, which we
called $R_{b\tau}$, becomes suppressed as the mass scale of the extra
matters becomes smaller. This is because, with the extra matters, the
$SU(3)_C$ gauge coupling constant is enhanced at higher scales,
resulting in the suppression of the bottom Yukawa coupling constant at
the GUT scale. The suppression of $R_{b\tau}$ has been found to be
$\sim 10\ \%$.
We have also studied the effects of the Yukawa couplings of the extra
matters with MSSM particles. In the case we have studied, the Yukawa
couplings above the mass scale of the extra matters are effectively
enhanced, resulting in a change of the ratio of the (effective) $b$
and $\tau$ Yukawa coupling constants. In particular, we have shown
that a simple Yukawa unification (i.e., $R'_{b\tau}=1$) can be
realized via the effects of extra matters with Yukawa interactions even
though $R_{b\tau}$ is significantly smaller than $1$ for the case
without extra matters with the same GUT scale boundary conditions.
\vspace{5mm}
\noindent {\it Acknowledgements}: This work was supported by the
Grant-in-Aid for Scientific Research C (No.26400239), and Innovative
Areas (No.16H06490). The work of S.C. was also supported in part by
the Program for Leading Graduate Schools, MEXT, Japan.
\section{Introduction}
Planetary transits are one of the many methods by which extrasolar planets
have been discovered, and are also the one that provides the most complete set
of information about the planetary system. Only planets with very
specific orbital characteristics have a transit visible from Earth,
because the orbital plane has to be aligned to within a few degrees of
the line of sight. Therefore transiting planets are
rare. Nevertheless, a transiting extrasolar planet offers the
opportunity to determine the mass of the planet --- when combined with radial velocity (RV)
measurements --- since the inclination is now measurable, as well as the
planetary radius, the density, the composition of the planetary
atmosphere, the thermal emission from the planet, and many other
properties (see \cite[Charbonneau et al. (2007)]{charbonneau2007} for a review). Additionally,
and unlike RV surveys, transiting planets should be readily detectable
down to $1 R_\oplus$ and beyond, even for relatively long periods.
Having accurate predictions of the number of detectable transiting
planets is immediately important for the evaluation and design of
current and future transit surveys. For the current surveys,
predictions allow the operators to judge how efficient their
data-reduction and transit detection algorithms are. Future surveys can
use the general prediction method that we describe here to optimize
their observing set-ups and strategies. More generally, such
predictions allow us to test different statistical models of
extrasolar planet distributions. As more transiting planets are discovered, these statistical properties are increasingly becoming the frontier of research, shifting the focus of the field away from individual detections.
\section{The Failure of Simple Estimates}
Using straightforward estimates it appears that observing a planetary
transit should not be too difficult, presuming that one observes a
sufficient number of stars with the requisite precision during a given
photometric survey. Specifically, if we assume that the probability of
a short-period giant planet (as an example) transiting the disk of its
parent star is 10\%, and take the results of RV surveys which indicate
the frequency of such planets is about 1\% (\cite[Cumming et al. 2008]{cumming08}),
together with the assumption that typical transit depths are also
about 1\%, the number of detections should be $\approx 10^{-3}N_{\leq
1\%}$, where $N_{\leq 1\%}$ is the number of surveyed stars with a
photometric precision better than 1\%.
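In code, this back-of-the-envelope estimate is a one-line product; the Python snippet below simply restates the arithmetic with the numbers quoted in the text (the 70,000-star figure is the TrES count cited in the next paragraph):

```python
# Naive yield: N_det ~ (transit probability) x (planet frequency) x N_stars
p_transit = 0.10    # geometric transit probability for a short-period giant
f_planet = 0.01     # frequency of such planets from RV surveys
n_stars = 70_000    # stars observed with better than 1% precision (TrES)

n_detections = p_transit * f_planet * n_stars   # ~ 70 expected
```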
Unfortunately, this simple and appealing calculation fails. Using this
estimate, we would expect the TrES survey, which has examined
approximately 70,000 stars with better than 1\% precision, to have
discovered 70 transiting short period planets. But, at the date of
this writing, they have found four. Indeed, overall only 51 transiting
planets have been found at this time by photometric surveys
specifically designed to find planets around bright stars\footnote{As
of June 2008. See the Extrasolar Planets Encyclopedia at
http://exoplanet.eu for an up-to-date list.}. This is almost one
hundred times less than what was originally predicted by somewhat more
sophisticated estimates (\cite[Horne 2003]{horne2003}).
Clearly then, there is something amiss with this method of estimating
transiting planet detections. Several other authors have developed
more complex models to predict the expected yields of transit
surveys. \cite[Pepper et al. (2003)]{pepper2003} examined the potential of all-sky surveys,
which was expanded upon and generalized for photometric searches in
clusters \cite[Pepper \& Gaudi (2005)]{pepper2005}. \cite[Gould et al. (2006)]{gould2006} and \cite[Fressin et al. (2007)]{fressin2007} tested whether the OGLE planet detections are statistically consistent
with radial velocity planet distributions. \cite[Brown (2003)]{brown2003} was the
first to make published estimates of the rate of false positives in
transit surveys, and \cite[Gillon et al. (2005)]{gillon2005} model transit detections to
estimate and compare the potential of several ground- and space-based
surveys.
As has been recognized by these and other authors, there are four
primary reasons why the simple way outlined above of estimating
survey yields fails.
First, the frequency of planets in close orbits about their parent
stars (the planets most likely to show transits) is likely lower than
RV surveys would indicate. Recent examinations of the results from the
OGLE-III field by \cite{gould2006} and \cite{fressin2007} indicate
that the frequency of short-period Jovian-worlds is on the order of
$0.45\%$, not $1.2\%$ as is often assumed by extrapolating from RV
surveys \cite{marcy2005}. \cite{gould2006} point out that most
spectroscopic planet searches are usually magnitude limited, which
biases the surveys toward more metal-rich stars, which are brighter
at fixed color. These high metallicity stars are expected
to have more planets than solar-metallicity stars \cite{santos2004,
fischer2005}.
Second, a substantial fraction of the stars within a survey field that
show better than 1\% photometric precision are either giants or early
main-sequence stars that are too large to show detectable transit
dips from a Jupiter-sized planet \cite{gould2003, brown2003}.
Third, robust transit detections usually require more than one transit
in the data. This fact, coupled with the small fraction of the orbit a
planet actually spends in transit, and the typical observing losses at
single-site locations due to factors such as weather, create low
window probabilities for the typical transit survey in the majority of
orbital period ranges of interest \cite{vonbraun2007}.
Lastly, requiring better than 1\% photometric precision in the data is
not a sufficient condition for the successful detection of transits:
identifiable transits need to surpass some kind of a detection
threshold, such as a signal-to-noise ratio (S/N) threshold. The S/N of
the transit signal depends on several factors in addition to the
photometric precision of the data, such as the depth of the transit
and the number of data points taken during the transit
event. Additionally, ground-based photometry typically exhibits
substantial auto-correlation in the time series data points, on the
timescales of the transits themselves. This additional red noise,
which can come from a number of environmental and instrumental
sources, substantially reduces the statistical power of the data
\cite{pont2006}.
\section{Approaches to Modeling}
There are, generally speaking, two different approaches to the problem of statistically modeling transit surveys: forward and backwards modeling. Forward modeling is the more general of the two, since it strives to predict the number of detections a survey should see by working ``forward'' from the parameters of the survey telescopes and target fields. As a result, forward modeling schemes are dependent not only on the uncertainties in extrasolar planet statistics, but also on the uncertainties in the stellar and galactic properties that must be used to model the observed star field.
Backwards modeling avoids these uncertainties, at the cost of some generality, by taking a known set of stars and survey parameters. This differs from forward modeling in that the properties of the target stars are known \emph{a priori}, usually in the form of an input catalog or something similar. This knowledge considerably increases the accuracy of the predictions compared to the forward modeling case, since one no longer needs to be concerned with statistically modeling the target stars. Nevertheless, only a few transit surveys, planned or operating, have devoted the telescope time to constructing thorough input catalogs.
\subsection{Forward Modeling}
Thinking about the forward modeling problem in the most general sense, we can describe the average number of planets that a transit survey should detect as the probability of
detecting a transit multiplied by the local stellar mass function,
integrated over mass, distance, and the size of the observed field
(described by the Galactic coordinates $(l,b)$):
\begin{equation}\label{eq:10}
\frac{d^6 N_{det}}{dR_p\ dp\ dM\ dr\ dl\ db} = \rho_*(r,l,b)\ r^2 \cos b\ \frac{dn}{dM}\ \frac{df(R_p,p)}{dR_p\ dp}\ P_{det}(M,r,R_p,p),
\end{equation}
where $P_{det}(M,r,R_p,p)$ is the probability that a given star of
mass $M$ and distance $r$ orbited by a planet with radius $R_p$ and
period $p$ will present a detectable transit to the observing
set-up. $\frac{df(R_p,p)}{dR_p\ dp}$ is the probability that a star
will possess a planet of radius $R_p$ and period $p$. $dn/dM$ is the
present day mass function in the local solar neighborhood, and
$\rho_*$ is the local stellar density for the three-dimensional
position defined by ($r,l,b$). We use $r^2 \cos b$ instead of the
usual volume element for spherical coordinates, $r^2 \sin \phi$,
because $b$ is defined opposite to $\phi$: $b=90^\circ$ occurs at the
pole.
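Equation \ref{eq:10} can be estimated by Monte Carlo integration. The Python sketch below does so with toy stand-ins for $\rho_*$, $dn/dM$, $df/dR_p\,dp$, and $P_{det}$; all of these are hypothetical placeholders, since the real inputs require stellar, Galactic, and survey models:

```python
import math
import random

# Toy stand-ins for the inputs of Eq. (3.1); all purely illustrative.
def rho_star(r, l, b):                 # stellar density [stars/pc^3]
    return 0.1 * math.exp(-r * abs(math.sin(b)) / 300.0)

def dn_dM(M):                          # present-day mass function (toy)
    return M ** -2.35

def df_dRp_dp(Rp, p):                  # planet frequency density (toy)
    return 4.5e-3

def P_det(M, r, Rp, p):                # detection probability (toy)
    return max(0.0, min(1.0, (Rp / M) ** 2 * math.sqrt(100.0 / r) / p))

# Monte Carlo estimate over one small field.
random.seed(0)
N_MC, total = 20_000, 0.0
for _ in range(N_MC):
    M = random.uniform(0.3, 1.5)       # stellar mass [M_sun]
    r = random.uniform(50.0, 500.0)    # distance [pc]
    Rp = random.uniform(0.08, 0.15)    # planet radius [R_sun]
    p = random.uniform(1.0, 5.0)       # orbital period [days]
    l = random.uniform(0.0, 0.1)       # field extent in longitude [rad]
    b = random.uniform(0.2, 0.3)       # field extent in latitude [rad]
    total += (rho_star(r, l, b) * r**2 * math.cos(b)
              * dn_dM(M) * df_dRp_dp(Rp, p) * P_det(M, r, Rp, p))

volume = 1.2 * 450.0 * 0.07 * 4.0 * 0.1 * 0.1  # product of sampling ranges
n_det = total / N_MC * volume
```

A real calculation would replace each toy function with tabulated stellar models and the survey's measured noise properties, but the integration structure is the same.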
As an example of forward modeling, \cite{beatty2008} (hereafter B\&G) simulated the TrES and XO transit surveys under various magnitude and signal-to-noise (S/N) limits using the planet frequencies of \cite{gould2006}. We found that both surveys have discovered about the number of transits that one would expect, though the exact numbers are highly sensitive to the assumed magnitude and S/N cut-offs, as well as to the amount of red-noise.
For the TrES survey, we simulated 13 of the TrES target fields. The locations and observation times for each were collected from the Sleuth observing website\footnote{http://www.astro.caltech.edu/$\sim$ftod/tres/sleuthObs.html}. Figure 1 shows the expected distribution of detections, in terms of host star mass and radius, for the Lyr1 field under three different scenarios. Lyr1 is the field containing TrES-1, and is adjacent to the field containing TrES-2; both stars are marked in the figure, and both lie close to the predicted maximum number of detections.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{tres2x1.eps}
\caption{Predicted TrES detections in the Lyr1 field. The three scenarios demonstrate the effect that uncertainties in the magnitude cut-off and amount of red-noise can have on survey predictions.}
\label{fig1}
\end{center}
\end{figure}
In terms of raw numbers, B\&G find that with cut-offs of $m_R\leq13$ and S/N $\geq30$, TrES should detect 13 planets, as compared to the 4 planets that have been found (as of June 2008). We selected these cut-offs as those used by the TrES team in their publications. Interestingly, however, none of the TrES planets have host stars dimmer than $m_R=12$. When we re-simulate the TrES fields using a limit of $m_R\leq12$, the number of detections drops to 8.16. Alternatively, if we keep the $m_R\leq13$ limit --- but add in 3 mmag of red-noise (\cite[Pont et al. 2006]{pont2006}) --- the predicted number of detections drops again, to 7.68.
All of this serves to demonstrate the sensitivity of survey predictions to the statistical cut-offs used.
In the case of XO, which uses a drift-scanning technique, B\&G simulated each of the six XO fields, spaced evenly in right ascension around the sky. The first planet discovered by the XO survey, XO-1b (\cite[McCullough et al. 2006]{mccullough2006}), is located in the field at 16 hrs, and so Figure 2 plots the distribution of detections within this field over the masses and radii of the host stars.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{xo2x1.eps}
\caption{Predicted XO detections in the survey's field at RA = 16 hrs. As with the TrES survey example, several different cut-off scenarios are shown.}
\label{fig2}
\end{center}
\end{figure}
Similar to TrES, Figure 2 shows several different cut-off scenarios. The fiducial case examined in B\&G is for $m_V\leq12$ and S/N $\geq30$, and yields 2.02 predicted detections over all fields (of which only one is displayed in Figure 2). XO has actually detected 3 planets. If the S/N limit is lowered to S/N $\geq20$, the number of predicted detections increases to 5.86. Also similar to TrES, the actual magnitude limit of the XO survey may be brighter than what is in the literature. Indeed, all three XO host stars are at $m_V\leq11.5$ (XO-1, the dimmest, is at $m_V=11.3$). In the case of $m_V\leq11.5$ and S/N $\geq20$, the predicted detections are 4.21.
Again, all of this shows not only that it is possible to make reasonable predictions for the number of planets that a photometric survey will find using a forward modeling approach, but also how much those predictions, and by extension the real numbers, depend upon the statistical cut-offs used by the transit surveys.
\subsection{Backwards Modeling}
In the case that a survey has a known set of target stars, it is possible to use the properties of those stars directly in the simulations of the survey, and work ``backwards'' from that starting point to a set of predicted detections. Therefore, and unlike Equation 3.1, for backwards modeling the statistical treatment of the target stars is abandoned, and replaced by a direct summation over all the target stars,
\begin{equation}\label{eq:20}
\frac{d^2 N_{det}}{dR_p\ dp} = \sum_i \frac{df(R_p,p)}{dR_p\ dp}\ P_{det}(M_i,r_i,R_p,p).
\end{equation}
Here we use the same definitions as Equation 3.1, but sum over $i$ targets.
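With a concrete catalog in hand, Equation \ref{eq:20} reduces to a loop over targets. The following sketch shows the structure, using made-up star properties and a placeholder $P_{det}$ rather than real catalog entries:

```python
# Backwards modeling: sum over catalog stars instead of integrating
# over a statistical stellar population. All entries are hypothetical.
catalog = [
    {"mass": 1.0, "dist": 150.0},   # mass [M_sun], distance [pc]
    {"mass": 0.8, "dist": 90.0},
    {"mass": 1.2, "dist": 300.0},
]

def planet_frequency(Rp, p):        # df/(dRp dp), toy constant
    return 4.5e-3

def P_det(M, r, Rp, p):             # placeholder detection probability
    return min(1.0, (Rp / M) ** 2 * (100.0 / r))

Rp, p = 0.1, 3.0                    # one (radius, period) bin
n_det = sum(planet_frequency(Rp, p) * P_det(s["mass"], s["dist"], Rp, p)
            for s in catalog)
```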
Kepler is one of the very few surveys that has put together a comprehensive input catalog of all the stars in their target field. I was able to use the Kepler Input Catalog (KIC) as a demonstration of backwards modeling after David Latham kindly provided access to an early version of the KIC for use in these predictions. The target stars were pulled from the KIC by selecting all the entries with a spectroscopic $T_{eff}\leq10,000$K, $m_{Kepler}\leq15$, and spectroscopic $\log(g)\geq3.5$. These criteria yielded 109,400 targets. I then used the mass-temperature and mass-radius relations from B\&G to estimate masses and radii for all of the selected stars. Table 1 shows the spectral distribution of the targets from the KIC, as well as the distribution predicted by B\&G using forward modeling.
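The selection cuts just described are straightforward to express in code. The sketch below applies them to a few made-up catalog rows; the dictionary field names are hypothetical, not the actual KIC column names:

```python
# Target cuts from the text: T_eff <= 10,000 K, m_Kepler <= 15,
# spectroscopic log(g) >= 3.5. Field names here are hypothetical.
def is_target(star):
    return (star["teff"] <= 10_000
            and star["m_kepler"] <= 15.0
            and star["logg"] >= 3.5)

kic = [
    {"teff": 5800, "m_kepler": 13.2, "logg": 4.4},   # Sun-like dwarf: kept
    {"teff": 4800, "m_kepler": 14.1, "logg": 2.8},   # giant: fails log(g)
    {"teff": 12000, "m_kepler": 11.0, "logg": 4.0},  # too hot: fails T_eff
]
targets = [s for s in kic if is_target(s)]           # only the dwarf survives
```

The $\log(g)$ floor is what removes giants; as noted below, setting it low admits some slightly evolved stars.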
\begin{table}[h]
\begin{center}
\caption{Spectral Distribution of the Kepler Field}
\label{tab1}
{\scriptsize
\begin{tabular}{cccccc|c}
\hline
& A & F & G & K & M & All\\
\hline
KIC: & 3549 & 55720& 45216& 4735& 180& 109400\\
B\&G: & 5066 & 41094& 45568& 13351& 942& 106342\\
\hline
\end{tabular}}
\end{center}
\end{table}
Compared to ground-based transit surveys, Kepler will be able to discover many more close-in giant planets. B\&G divide these planets into Very Hot Jupiters (VHJs) and Hot Jupiters (HJs) depending on their orbital period (1-3 and 3-5 days, respectively). Using the planet frequencies from \cite{gould2006}, Table 2 shows the results of using the KIC to predict the number of VHJs and HJs that Kepler will discover, and also the results of the similar forward modeling simulations done in B\&G. The numbers are very similar, which reflects the similarity of the total star counts in both the KIC and B\&G: Jupiter-sized planets are so easy for Kepler to detect that it will find essentially all of them. The only question is exactly how many target stars Kepler will observe.
\begin{table}[h]
\begin{center}
\caption{Kepler's Transiting VHJs (1-3 days) and HJs (3-5 days)}
\label{tab2}
{\scriptsize
\begin{tabular}{cccccc|c}
\hline
& A & F & G & K & M & All\\
\hline
VHJs:& 1.31& 16.35& 10.31& 0.88& 0.02& 28.86\\
HJs:& 1.75& 21.91& 13.81& 1.17& 0.03& 38.67\\
\hline
Total:& 3.06& 38.26& 24.12& 2.05& 0.05& 67.53\\
B\&G:& 4.40& 28.60& 23.50& 5.80& 0.33& 62.90\\
\hline
\end{tabular}}
\end{center}
\end{table}
The number of habitable\footnote{By habitable, I mean a planet in a circular orbit whose semimajor axis is scaled so that it will have the same blackbody equilibrium temperature as the Earth.} Earth-like planets that Kepler will detect is another quantity of interest. Using the KIC, and assuming no red-noise and that every star possesses a habitable Earth, Kepler should detect 36.17 planets. This is slightly less than half the number predicted in B\&G, which also assumes that every star possesses an Earth and that there is no red noise in the data, and predicts 78.89 detections. The difference is attributable to the estimated spectral distribution of the KIC stars, which is skewed more towards earlier-type stars than B\&G. The skew results from the low $\log(g)$ cut-off, which would select some slightly evolved stars.
In any event, it should be stressed that both predictions (KIC and B\&G) are extreme upper limits to the number of habitable Earths that Kepler will detect. In reality, it is improbable that every star will have a habitable Earth, and that Kepler will have no red-noise in its data. Indeed, B\&G considers the effect of varying amounts of red-noise on Kepler's habitable planet detections, and finds that even small amounts could have potentially large effects on the final detection numbers.
\section{Conclusion}
The field of transiting planets is transitioning from pure discovery, wherein every new transiting planet is newsworthy, to the details. Individual planets, aside from the smallest ones being discovered, are becoming less interesting in and of themselves, and more interesting for what they can tell us about the statistical properties of planets. Given that transiting planets are so far one of the only ways that we may infer the radius of extrasolar planets and their exact orbital properties, the statistical characteristics of this group of objects are one of the only foreseeable ways that the areas of planetary interiors, system dynamics, migration, and formation will acquire more data.
That being said, it is often hard to draw specific statistical conclusions from the results of the transit surveys. In B\&G we were able to adopt hard cut-offs on our simulated detections, but the transit surveys typically do not adopt the strict detection criteria that we have used. Promising planet candidates are often followed up even if they are beyond the stated S/N or magnitude limits of the survey. Understanding and quantifying how the survey teams select candidates is vital to appropriately deriving the statistical properties of extrasolar planets. Indeed, the results of calculations shown here and in B\&G demonstrate that the actual predictions depend crucially on the specific magnitude and S/N threshold used.
In the future, besides using the results of B\&G to design more efficient transit surveys, I would hope that a more systematic approach towards transit surveys will allow the model to be used to make more specific statistical comparisons. As the number of known transiting planets grows, we would also expect that the model will be used to test different distributions of planet frequencies, periods, and radii against those observational results. This will allow us to better understand the statistics of extrasolar planetary systems, improve our ability to find new planets, and help to understand the implications of the ones that have already been detected.
\section{Introduction\label{sec1}}
The flow properties of a solution of polymers have attracted the
interest of physicists for a long time. One side of the problem
concerns how the concentration of dissolved polymers influences
e.g. the viscous (or viscoelastic) properties of the fluid. The other
side concerns how fluid flow influences the behavior of polymers. Here
we restrict ourselves to the latter. Specifically, we consider a
dilute solution of double-stranded DNA (dsDNA) segments in water under
shear. Double-stranded DNA is a semiflexible polymer, since it
preserves mechanical rigidity over a range, characterized by the
persistence length $l_p\approx40$ nm \cite{dnapersist,wang}, along its
contour.
That a polymer will go through a ``coil-stretch transition'' under the
influence of a shear flow was originally predicted by de Gennes
\cite{degennes}, although it would be more than two decades before
the coil-stretch transition would be put to experimental
verification. Interestingly however, the first key experiment along
this line --- combining fluid flow and fluorescence microscopy
techniques (the latter in order to visually track polymers) --- was
performed to determine the force-extension curve of dsDNA, wherein
uniform water flow was used to stretch (end-tethered) polymers
\cite{perkins95}. Extending that experimental setup to include more
complicated flow patterns, such as elongational flow
\cite{perkins97,smith98} and shear flow \cite{smith99,leduc99} soon
followed, driven by the quest to understand how flow-induced
conformational changes take place in polymers (see e.g.,
Ref. \cite{shaqfeh} for a review).
An intriguing by-product of the experiments with shear flow was
the tumbling motion of the chains, which can be tracked by, e.g., the
relative orientation of the polymer's end-to-end vector with respect to
the direction of the flow \cite{smith99,leduc99}. Although irregular
at short time-scales, a tumbling frequency could be defined based
on the long-time statistics of the chain's orientation. The tumbling
behavior soon started to receive further attention from researchers:
over the last decade and a half, a number of models have been constructed
\cite{turit1,turit2,lang}
and further experiments have been performed
\cite{doyle,teix,schroeder2,harasim} to characterize and
quantify the tumbling behavior, in particular the dependence of the
tumbling frequency on the shear strength. The subject of this paper,
too, is tumbling behavior in a shear flow, specifically for a dsDNA
chain that is smaller than its persistence length.
As stated earlier, the shear strength $\dot\gamma$ is customarily
expressed by the dimensionless Weissenberg number Wi
$=\dot\gamma\tau$, where $\tau$ is a characteristic time-scale for the
polymer. At one extreme, for flexible polymers (polymer segments that
are many times longer than their persistence length, assuming coil
configurations in the absence of shear), which many of the above
studies focus on, the natural choice for $\tau$ is the polymer's
terminal relaxation time. For them there is good theoretical,
numerical and experimental evidence that the tumbling frequency $f$
scales with Wi as $f\propto$ Wi$^{2/3}$
\cite{doyle,teix,schroeder2,turit1,turit2}. At the other extreme, for
semiflexible polymer segments (segments shorter than their persistence
length, which resemble rigid rods in the absence of shear; the natural
choice of $\tau$ is then the time-scale for rotational diffusion of a
rigid rod of the same length), one expects the rigid rod result,
namely that the tumbling frequency
scales as $f\propto$ Wi$^{2/3}$ \cite{jeffery,harasim,bloete}. (Given
that the physics of tumbling is different for flexible and
semiflexible polymers, the similarity in the scaling behavior of $f$
is striking.)
Recently, Harasim {\it et al.\/} \cite{harasim} experimented with
tumbling f-actin segments of several lengths ($\sim$ 3-40 $\mu$m) in a
shear flow. They found that the tumbling frequency $f$ follows the law
$f \propto$ Wi$^{2/3}$ for small Weissenberg numbers. A closer
inspection of their data reveals significant deviations from the $f
\propto$ Wi$^{2/3}$ power-law around and above the persistence length
($\approx$ 16 $\mu$m). Images and movies out of the experiments have
revealed that f-actin segments of lengths smaller than the persistence
length can strikingly buckle into J and U-shapes, broadly known as
Euler buckling.
These issues of buckling and the tumbling frequency were taken up by Lang
et al. \cite{lang} in an extensive modeling study, using the inextensible
wormlike chain Hamiltonian. They discussed the tumbling frequency
over the whole range spanning the two extremes, i.e., from flexible to
semiflexible polymer segments, and reported, in the intermediate regime,
the dependence $f \propto$ Wi$^{3/4}$.
The present paper has been inspired by the experiment of Harasim {\it
et al}. \cite{harasim}. Our focus is to provide a {\it
quantitative\/} characterization of the Euler buckling, and the
corresponding shapes of a tumbling semiflexible polymer segment in a
shear flow. To this end, we take advantage of a recently developed
bead-spring model for semiflexible polymers \cite{leeuwen,leeuwen1}
and its highly efficient implementation on a computer
\cite{leeuwen2}. We simulate dsDNA segments of lengths
$\lesssim20$ nm, and analyze their dynamics in terms of the Rouse
modes \cite{rouse}. The persistence length of dsDNA is $\approx 40$
nm, corresponding to $\approx 120$ beads with the average intra-bead
distance $\approx0.33$ nm, the length of a dsDNA basepair. We show
that the tumbling frequency adheres to the rigid rod results
at low Wi and that for high Wi, semiflexible polymer segments tumble much faster.
This difference quickly leads us to
issues related to (Euler) buckling of the chain under the influence of
shear. We first analyze Euler buckling in terms of the oriented
deterministic state (ODS), which results from turning off the
stochastic (thermal) forces in polymer dynamics at a fixed orientation
of the chain. In this state the internal forces, tending to keep the
chain straight, balance the shear forces. Below a critical
Weissenberg number Wi$_{\text c}$, the ODS shows a slightly bent
S-shape. Above Wi$_{\text c}$ a symmetry breaking takes place,
analogous to a pitchfork bifurcation, in which the ODS strongly deviates
from a rigid rod.
We follow up the ODS analysis with simulations, demonstrate the
symmetry breaking in computer experiments, and show that, similar to
the experimental snapshots of f-actin filaments in
Ref. \cite{harasim}, shear can cause strong deformation, even for a
chain that is shorter than its persistence length.
The structure of the paper is as follows. In Sec. \ref{sec2} we
introduce the model. In Sec. \ref{sec3} we describe the polymer
dynamics in terms of the Rouse modes. In Sec. \ref{sec4} we analyze
the time evolution of the orientation of the polymer, from which we
determine the tumbling frequency. In Sec. \ref{sec5} we analyze Euler
buckling, identify the critical Weissenberg number Wi$_{\text c}$ and
solve for the shapes of the polymer in the ODS. We follow up the
theory of Sec. \ref{sec5} with simulations in Sec. \ref{sec6}, and end
the paper with a discussion in Sec. \ref{sec7}. A movie of a tumbling
dsDNA segment can be found in the ancillary files --- details on the
movie are provided in Sec. \ref{sec6}.
\section{The model \label{sec2}}
The Hamiltonian for our bead-spring model for semiflexible polymers,
the details of which can be found in our earlier works
\cite{leeuwen,leeuwen1,leeuwen2}, reads
\begin{equation}
{\cal H} =\frac{\lambda}{2} \sum^N_{n=1} (|{\bf u}_n|-d)^2 -
\kappa \sum^{N-1}_{n=1} {\bf u}_n \cdot {\bf u}_{n+1},
\label{a1}
\end{equation}
with stretching and bending parameters $\lambda$ and $\kappa$
respectively. Here ${\bf u}_n$ is the bond vector between the
$(n-1)$-th and the $n$-th beads
\begin{equation}
{\bf u}_n = {\bf r}_n - {\bf r}_{n-1},
\label{a2}
\end{equation}
and ${\bf r}_n$ is the position of the $n$-th bead
($n=0,1,\ldots,N$). The parameter $d$ provides a length-scale, by
the use of which we reduce the Hamiltonian to
\begin{eqnarray}
\frac{\cal
H}{k_BT}\!=\!\frac{1}{T^*}\!\left[\sum^N_{n=1} (|{\bf
u}_n|\!-\!1)^2\!-\! 2 \nu\!\! \sum^{N-1}_{n=1}\! {\bf u}_n\! \cdot\!
{\bf u}_{n+1}\! \right]\!\!,
\label{e02}
\end{eqnarray}
with dimensionless $\nu=\kappa/\lambda$ and $T^*=k_BT/(\lambda d^2)$
parametrizing the Hamiltonian. In this formulation the persistence
length of the polymer is given by $l_p=(\nu/T^*)d/(1-2\nu)$. The model
is a discrete version of the polymer with $N$ discretization units
(i.e., of length $N$). From the analysis of the ground-state of the
Hamiltonian (\ref{e02}) \cite{leeuwen,leeuwen1,leeuwen2}, each
discretization unit can be shown to have a length $a=d/(1-2\nu)$.
The parameters of the model --- $T^*$ and $\nu$ --- are determined by
matching to the force-extension curve. For dsDNA, our semiflexible
polymer of choice in this paper, we use $a=0.33$ nm, the length of a
dsDNA basepair, which leads to $T^*=0.034$ and $\nu=0.353$, meaning
that one persistence length corresponds to $N\approx 120$
\cite{leeuwen,leeuwen1,leeuwen2}.
\section{Polymer dynamics\label{sec3}}
\subsection{Construction of the Rouse modes\label{sec3a}}
We analyze the dynamics of the polymer by its Rouse modes, since they
turn out to be a convenient scheme for solving the equations of motion
with a sizable time step, without introducing large errors
\cite{leeuwen2}.
The representation of the configurations of a polymer chain in terms
of its fluctuation modes uses basis functions. The well-known Rouse
modes employ the basis functions
\begin{equation}
\phi_{n,p}= \left(\frac{2}{N+1}\right)^{1/2} \cos
\left[\frac{(n+1/2)p\pi}{N+1} \right],
\label{b1}
\end{equation}
such that the conversion of positions ${\bf r}_n$ to Rouse modes ${\bf
R}_p$ and vice versa is given by
\begin{equation}
{\bf R}_p = \sum_n {\bf r}_n \phi_{n,p}, \quad \quad
\quad {\bf r}_n = \sum_p \phi_{n,p} {\bf R}_p.
\label{b2}
\end{equation}
The mode with $p=0$ corresponds to the location of the center-of-mass,
the dynamics of which can be rigorously separated from that of the
other modes. We eliminate the center-of-mass motion by always
measuring the bead positions with respect to the center-of-mass.
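The basis (\ref{b1}) is, up to the treatment of the $p=0$ column, an orthogonal transform of the discrete-cosine type. A minimal numerical check of the orthogonality and of the round trip (\ref{b2}) for center-of-mass-free positions (this is a sketch, not the efficient FFT-based implementation of Ref. \cite{leeuwen2}):

```python
import numpy as np

N = 15                                    # beads n = 0, ..., N
n = np.arange(N + 1)
p = np.arange(N + 1)
# Basis functions phi_{n,p} of Eq. (b1).
phi = np.sqrt(2.0 / (N + 1)) * np.cos(
    (n[:, None] + 0.5) * p[None, :] * np.pi / (N + 1))

# Orthonormality of the p >= 1 columns (p = 0 carries a different norm).
gram = phi[:, 1:].T @ phi[:, 1:]
print(np.abs(gram - np.eye(N)).max())     # zero up to rounding

# Round trip of Eq. (b2) for positions measured from the center of mass.
rng = np.random.default_rng(0)
r = rng.standard_normal((N + 1, 3))
r -= r.mean(axis=0)                       # eliminate the center of mass
R = phi.T @ r                             # R_p = sum_n r_n phi_{n,p}
r_back = phi @ R                          # r_n = sum_p phi_{n,p} R_p
print(np.abs(r_back - r).max())           # zero up to rounding
```

Removing the center of mass makes the $p=0$ mode vanish, so the uniform normalization above reconstructs the positions exactly.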
\subsection{The equations for the Rouse modes under
shear\label{sec3b}}
We consider the situation where water flows in the $\hat{\bf
x}$-direction, with a shear gradient $\dot\gamma$ in the $\hat{\bf
y}$-direction. The Langevin equation for the motion of the bead
position ${\bf r}_n$ then reads
\begin{equation}
\frac{d {\bf r}_n}{ dt} = -\frac{1}{\xi}\frac{\partial{\cal H}}{\partial
{\bf r}_n} + \dot{\gamma} \, (y_n -Y_{cm}) \, \hat{\bf x} +\bm{k}_n.
\label{c1}
\end{equation}
The Hamiltonian ${\cal H}$ is given in Eq. (\ref{a1}), and $\xi$ is
the friction coefficient due to the viscous drag, acting on each
bead. The first term on the right hand side of the equation represents
the internal force, which tends to keep the chain straight. The
second term is the shear force due to the flow, where $\dot{\gamma}$
is the shear rate, $Y_{cm}$ is the $y$ co-ordinate of the
center-of-mass of the chain and $y_n$ is the $y$ co-ordinate of monomer
$n$. As mentioned earlier, we measure the bead positions with respect
to the location of the chain's center-of-mass, leading to the term $\propto
(y_n -Y_{cm})$. The last term in Eq. (\ref{c1}) gives the influence
of the random thermal force $\bm{k}_n$, which has the correlation
function
\begin{equation}
\langle k^\alpha_n (t) \, k^\beta_m (t' ) \rangle = (2 \, k_B \, T /\xi) \,
\delta^{\alpha,\beta} \, \delta_{n,m} \delta (t-t').
\label{c2}
\end{equation}
In order to work with dimensionless units we replace the time $t$ by
\begin{equation}
\tau =\lambda t / \xi.
\label{c3}
\end{equation}
The ratio $\xi/ \lambda$ then becomes the microscopic time scale, such
that $\tau$ is dimensionless. In the same spirit we combine the shear
rate $\dot{\gamma}$ with this time scale, leading to the dimensionless
constant $g$ as the shear strength
\begin{equation}
g = \dot{\gamma} \, \frac{\xi}{\lambda}.
\label{c4}
\end{equation}
The shear strength is customarily expressed in terms of the Weissenberg
number, which we define as
\begin{equation}
\mbox{Wi} = \frac{\dot{\gamma}}{2 D_r} \quad \quad \quad {\rm with} \quad \quad
D_r = \frac{k_B T}{I \xi}.
\label{c5}
\end{equation}
Here $D_r$ is the rotational diffusion constant with $I$ as the moment
of inertia of the polymer segment in its ground-state (of the
Hamiltonian). The relation between the two dimensionless quantities Wi
and $g$ is then given by
\begin{equation} \label{cn}
\mbox{Wi} = g \frac{I_0}{2 T^*}
\end{equation}
where $I_0=I/d^2$, the dimensionless moment of inertia of the polymer
segment in the ground-state.
Using the orthogonal transformation converting positions into modes
the dynamic equations for the Rouse modes can be cast in the form
\cite{leeuwen2}
\begin{equation}
\frac{d {\bf R}_p}{ d \tau} =-\zeta_p {\bf R}_p +{\bf F}_p+{\bf H}_p +
\bm{K}_p.
\label{c6}
\end{equation}
For the decay constant we use the expression
\begin{equation}
\zeta_p = 4 \nu \left[1 - \cos \left(\frac{p \pi}{N+1}\right)\right]^2.
\label{c7}
\end{equation}
This spectrum follows from a subtraction in the coupling
force ${\bf H}_p$, which derives from the contour length term in the
Hamiltonian \cite{leeuwen2}
\begin{equation}
{\bf H}_p = \left(\frac{2}{N+1}\right)^{1/2} \sum_n
\sin \left( \frac{p n \pi}{N+1} \right) {\bf u}_n \left(
\frac{1}{u_n} - 1+2 \nu\right).
\label{c8}
\end{equation}
The subtraction $1- 2\nu$ within the last brackets changes the Rouse
spectrum from longitudinal to the transverse form Eq. (\ref{c7}).
Finally, ${\bf F}_p$ is the shear force given by
\begin{equation}
{\bf F}_p = g \, \hat{\bf x} \, (\hat{\bf y} \cdot
{\bf R}_p).
\label{c9}
\end{equation}
The fluctuating thermal force $K^\alpha_p$ is the orthogonal
transform of the $\bm{k}_n$ in Eq. (\ref{c2})
\begin{equation}
\langle K^\alpha_p (\tau) K^\beta_q (\tau') \rangle = 2 T^* \delta^{\alpha, \beta} \,
\delta_{p,q} \, \delta (\tau - \tau').
\label{c10}
\end{equation}
Although the Rouse modes turn out to be a convenient scheme for
solving the equations of motion with a sizable time step without
large errors, we do pay a computational penalty in the
calculation of the coupling force, which requires a transformation
(\ref{b2}) from the Rouse modes to the bond vectors and the
transformation (\ref{c8}) back to the modes. The penalty can be kept
to a minimum by the use of the fast Fourier transform (FFT) to switch
between modes and bead positions; it keeps the number of operations of the
order $N \log N$.
\subsection{Body-fixed co-ordinate system to analyze tumbling
dynamics\label{sec3c}}
One of the major quantities of interest in the tumbling process is the
dynamics of the orientation of the polymer. The orientation can be
defined in several ways. The most common one is the direction of the
end-to-end vector. Since the ends of the chain fluctuate substantially
over short time-scales, this is not a slow variable. We prefer to use
as orientation the direction of the first Rouse mode ${\bf R}_1$,
being the slowest decaying mode. We therefore define the orientation
$\hat{\bf n}$ of the polymer as
\begin{equation}
\hat{\bf n} = \hat{\bf R}_1.
\label{d1}
\end{equation}
We refer to the components of the Rouse modes in the direction of
$\hat{\bf n}$ as longitudinal components
\begin{equation}
R^l_p = \hat{\bf n} \cdot {\bf R}_p.
\label{d2}
\end{equation}
The perpendicular directions are transverse to $\hat{\bf n}$. The
first Rouse mode has, by definition, only a longitudinal component.
In practice the so-defined orientation does not differ much from the
direction of the end-to-end vector.
Further, it is convenient to discuss the temporal behavior of the
polymer not only in the lab-frame co-ordinate system, with co-ordinate
axes $(\hat{\bf x}, \hat{\bf y},\hat{\bf z})$, but also in the
body-fixed co-ordinate system, which we define as follows. Along with
the unit vector $\hat{\bf n}$, one of the two transverse axes,
$\hat{\bf m}$, is taken perpendicular to $\hat{\bf n}$ and $\hat{\bf
x}$, namely
\begin{equation}
\hat{\bf m}= \hat{\bf n} \times \hat{\bf x}/r, \quad \quad r=[n^2_y
+n^2_z]^{1/2}.
\label{d3}
\end{equation}
The mode component in this direction, being perpendicular to $\hat{\bf
x}$, is not influenced by the shear force. The other transverse
direction is then naturally obtained as
\begin{equation}
\hat{\bf s} =\hat{\bf m} \times \hat{\bf n}.
\label{d4}
\end{equation}
Vector components along $\hat{\bf s}$ are maximally sheared. The
system $(\hat{\bf n}, \hat{\bf s}, \hat{\bf m})$ forms an orthogonal
basis set. For later use, below we list the Cartesian components of
the vectors $(\hat{\bf n}, \hat{\bf s}, \hat{\bf m})$:
\begin{equation}
\left\{ \begin{array}{lll}
n_x = \sin \theta \cos \phi, \quad & s_x=r, & m_x = 0, \\*[2mm]
n_y = \sin \theta \sin \phi, & s_y = - n_x n_y/r, \quad & m_y = n_z /r, \\*[2mm]
n_z = \cos \theta, & s_z = - n_x n_z /r, & m_z = - n_y /r.
\end{array} \right.
\label{d5}
\end{equation}
We denote the components of the modes generically with the index
$\alpha$, which alternatively runs through $\alpha=(x,y,z)$ or
$\alpha=(n,s,m)$.
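The components listed in Eq. (\ref{d5}) can be checked directly; a short sketch constructs $(\hat{\bf n}, \hat{\bf s}, \hat{\bf m})$ from $(\theta,\phi)$ and verifies that they form an orthonormal triad:

```python
import math

def body_frame(theta, phi):
    """Axes (n, s, m) of Eq. (d5) for the orientation (theta, phi)."""
    n = (math.sin(theta) * math.cos(phi),
         math.sin(theta) * math.sin(phi),
         math.cos(theta))
    r = math.hypot(n[1], n[2])                 # r = (n_y^2 + n_z^2)^(1/2)
    s = (r, -n[0] * n[1] / r, -n[0] * n[2] / r)
    m = (0.0, n[2] / r, -n[1] / r)             # m = (n x x_hat) / r
    return n, s, m

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

n, s, m = body_frame(0.3 * math.pi, 0.75 * math.pi)
# All three axes are unit vectors and mutually perpendicular.
checks = [dot(n, n) - 1, dot(s, s) - 1, dot(m, m) - 1,
          dot(n, s), dot(n, m), dot(s, m)]
print(max(abs(c) for c in checks))   # zero up to rounding
```

The construction fails only when $\hat{\bf n}$ is parallel to $\hat{\bf x}$ ($r=0$), in which case the transverse axes are degenerate anyway.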
\section{Time evolution of the orientation of the polymer\label{sec4}}
The evolution of the orientation is given by the dynamics of the two
transverse components of ${\bf R}_1$
\begin{equation}
\frac{d \, \hat{\bf n}}{d \tau} =\frac{d}{d \tau} \frac{{\bf R}_1}{R^l_1} =
\frac{ d {\bf R}_1}{d \tau} \frac{1}{R^l_1} + {\bf R}_1 \frac{ d }{d \tau} \frac{1}{R^l_1} =
\frac{1} {R^l_1} \frac{d \, {\bf R}_1}{d \tau},
\label{e1}
\end{equation}
wherein the third equality follows from the fact that the transverse
components of ${\bf R}_1$ vanish by definition, so the temporal
derivative of $1/R^l_1$ is multiplied by vanishing transverse
components. We then use Eq. (\ref{c6}) and get
\begin{equation}
\frac{d \, \hat{\bf n}}{d \tau} = g \, \hat{\bf x} \, (\hat{\bf y} \cdot \hat{\bf n})
+ ({\bf H}_1 + \bm{K}_1)/R^l_1.
\label{e2}
\end{equation}
Obviously, in the right hand side of the equations only the transverse
components of the vectors are relevant.
For the interpretation of Eq. (\ref{e2}) we note that $R^l_1$ is
closely related to the moment of inertia $I$ of the chain, which is
defined as
\begin{equation}
\frac{I}{d^2} = \sum_n (r^l_n)^2 = \sum_p (R^l_p)^2 = \sum_p I_p,
\label{e3}
\end{equation}
where $I_p$ is the contribution of the $p$-th Rouse mode to the moment of inertia.
When the chain is (relatively) straight, the sum over the modes is
heavily dominated by the first component $I_1$. So it is an
indicative approximation to replace $I$ by $I_1$.
In the (relatively) straight state the configuration of the chain
resembles that of a straight rod. In order to make a connection with
the equation of tumbling for a rigid rod, we rewrite the equation
using a different scaling of the time $\tau$. We define the variable
$\tilde{\tau}$, linked to the Weissenberg number, as
\begin{equation}
\tilde{\tau} = g \tau /{\text{Wi}}, \quad \quad {\rm with} \quad \quad g/{\text{Wi}}= 2 T^*/I_1,
\label{e4}
\end{equation}
where we have used $I_1$ as measure for the moment of inertia.
In terms of $\tilde{\tau}$, Eq. (\ref{e2}) then becomes
\begin{equation}
\frac{d \, \hat{\bf n}}{d \tilde{\tau}} ={\text{Wi}}\,\,[\hat{\bf x} \, (\hat{\bf y}
\cdot \hat{\bf n})]+ (\tilde{\bf H}_1 + \tilde{\bm{K}}_1),
\label{e5}
\end{equation}
with $\tilde{\bf H}_1$ and $\tilde{\bm{K}}_1$ defined as
\begin{equation}
\tilde{\bf H}_1 = \frac{\sqrt{I_1}}{2 T^*} \,
{\bf H}_1, \quad \quad \quad \tilde{\bm{K}}_1 (\tilde{\tau}) =
\frac{\sqrt{I_1}}{2 T^*} \, \bm{K}_1 (\tau).
\label{e6}
\end{equation}
The new random force has a correlation function
\begin{equation}
\langle \tilde{K}^\alpha_1 (\tilde{\tau}) \tilde{K}^\beta_1 (\tilde{\tau}')
\rangle = \delta^{\alpha, \beta} \delta (\tilde{\tau} - \tilde{\tau}').
\label{e7}
\end{equation}
Apart from the mode-coupling term $\tilde{\bf H}_1$, Eq. (\ref{e5}) is
the same as that of an infinitely thin rigid rod
\cite{bloete}. Therefore it makes sense to compare the tumbling
frequency $f$ with that of the rigid rod, given by \cite{bloete}
\begin{equation}
f = \frac{{\text{Wi}}}{4 \pi (1 +0.65{\text{Wi}}^2)^{1/6}}.
\label{e8}
\end{equation}
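Eq. (\ref{e8}) interpolates between $f\propto$ Wi at small Wi and $f\propto$ Wi$^{2/3}$ at large Wi; a quick numerical check of both limits:

```python
import math

def rigid_rod_frequency(wi):
    """Tumbling frequency of an infinitely thin rigid rod, Eq. (e8)."""
    return wi / (4.0 * math.pi * (1.0 + 0.65 * wi ** 2) ** (1.0 / 6.0))

# Small Wi: the denominator tends to 4*pi, so f is linear in Wi.
print(rigid_rod_frequency(1e-3) * 4.0 * math.pi / 1e-3)   # close to 1

# Large Wi: f grows as Wi^(2/3), so multiplying Wi by 8 multiplies f by 4.
ratio = rigid_rod_frequency(8e4) / rigid_rod_frequency(1e4)
print(ratio)   # close to 4
```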
For comparison we show in Fig. \ref{raw} the tumbling frequency of
dsDNA chains for several lengths shorter than the persistence length
(which corresponds to $N \approx 120$), as found from simulations,
together with the rigid rod expression, Eq.~(\ref{e8}). The
simulations have been performed using an efficient implementation of
semiflexible polymer dynamics \cite{leeuwen2} of the bead-spring model
\cite{leeuwen,leeuwen1}. At any given value of the Weissenberg number,
obtained by using the rotational inertia of a rigid rod that has the
same configuration as the ground-state of the Hamiltonian (\ref{e02}),
snapshots of the polymer have been used to calculate its orientation
$[\theta(t),\phi(t)]$ in the laboratory frame. The $\phi(t)$ data is
then fitted by a straight line to obtain the tumbling frequency.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.55\linewidth]{freq}
\end{center}
\caption{(color online) The tumbling frequency as a function of the
Weissenberg number Wi for a series of chain lengths $N=7,15,31$ and
63. The conversion to Weissenberg numbers is based on the
ground-state moment of inertia, see Eq. (\ref{cn}).\label{raw}}
\end{figure}
We point out that the simulations follow the rigid rod formula for a
surprisingly large range of Weissenberg numbers, clearly indicating
that up to Wi = 100 the mode-coupling force $\tilde{\bf H}_1$ is
unimportant. In order to see what this implies for the shear rate
$\dot{\gamma}$, using the expression $I/d^2=N^3/12$ for the moment of
inertia, we write the relation between Wi and the shear rate
$\dot{\gamma}$ as
\begin{equation}
{\rm Wi} = \dot{\gamma} \frac{a^2 \xi}{k_B T} \frac{N^3}{24}.
\label{e9}
\end{equation}
Note that in Eq. (\ref{e9}) the molecular time scale equals \cite{leeuwen2}
\begin{equation}
\frac{a^2 \xi}{k_B T} = 52 \times 10^{-12}\, {\rm s}.
\label{e10}
\end{equation}
Commercially available rheometers at present are limited to shear
rates $\dot{\gamma} < 10^6$ s$^{-1}$. This implies, for dsDNA
fragments of the order of the persistence length, say $N=100$, that
only the range Wi $< 2$ is presently achievable in the lab; i.e., the
differences from the rigid rod behavior in Fig. \ref{raw} lie outside
the reach of present day experiments. Nevertheless, the origin of the
deviations from the rigid rod behavior is theoretically interesting;
we will address this issue in the Sec. \ref{sec6}.
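Plugging numbers into Eq. (\ref{e9}) with the molecular time scale (\ref{e10}) makes the rheometer limit explicit; a small sketch (the $10^6$ s$^{-1}$ ceiling is the value quoted above):

```python
TAU_MOL = 52e-12   # a^2 * xi / (k_B T) in seconds, Eq. (e10)

def weissenberg(shear_rate, n):
    """Wi of a dsDNA segment of N discretization units, Eq. (e9)."""
    return shear_rate * TAU_MOL * n ** 3 / 24.0

def required_shear_rate(wi, n):
    """Invert Eq. (e9): shear rate (1/s) needed to reach a given Wi."""
    return wi * 24.0 / (TAU_MOL * n ** 3)

# At the rheometer ceiling of 1e6 1/s, an N = 100 segment reaches Wi ~ 2.2.
print(weissenberg(1e6, 100))
# Driving an N = 63 chain past Wi_c ~ 19.4 would need a far larger rate.
print(required_shear_rate(19.4, 63))
```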
\section{Shapes of semiflexible polymer in the oriented
deterministic state\label{sec5}}
In order to further analyze the tumbling process, it is useful to note
that the orientation changes at a slower rate than all the other
modes. This prompts us to focus on the configuration which is obtained
as the steady-state solution of the dynamical equations by turning
off the stochastic (thermal) forces at a fixed orientation of the
chain. We call this configuration the oriented deterministic state
(ODS). We use the properties of the ODS as indicative for the
configurations of the chain at the given orientation.
\subsection{The approach to the oriented deterministic state (ODS) \label{sec5a}}
The ODS configuration of the chain is obtained from Eq. (\ref{c6})
by dropping the thermal force and following the decay of the equation
\begin{equation}
\frac{d {\bf R}_p}{ d \tau} = -\zeta_p {\bf R}_p +{\bf F}_p+{\bf
H}_p.
\label{f1}
\end{equation}
The constraint of a fixed orientation is imposed by leaving out the
transverse components of the mode ${\bf R}_1$ and setting them equal
to zero in the other mode equations. Asymptotically the configuration
obeying Eq. (\ref{f1}) will turn into the ODS. So for the ODS the
l.h.s. of Eq. (\ref{f1}) vanishes. The approach to the ODS
configuration via Eq. (\ref{f1}) is, however, slow.
A further simplification of finding the asymptotic state of
Eq. (\ref{f1}) follows by considering the ODS in the body-fixed
system. As there are no shear forces in the $\hat{\bf m}$ direction,
the ODS shape has no component in that direction. For the two other
equations in the $(\hat{\bf n}, \hat{\bf s})$ plane we get in detail
\begin{equation}
\left\{ \begin{array}{rcl}
(\zeta_p - g n_x n_y) R^n_p - g n_x s_y R^s_p & = & H^n_p \\*[4mm]
g s_x n_y R^n_p + (\zeta_p - g s_x s_y) R^s_p & = & H^s_p
\end{array} \right.
\label{f2}
\end{equation}
For $p=1$ we have only the first equation since the second refers to
the transverse component $R^s_1$, which we keep equal to zero.
Solving this set of non-linear equations is delicate. We found that,
under normal circumstances, iteration is a stable and quick way to the
solution. For a given orientation of the chain, we start with an
arbitrary configuration (in fact, for the starting configuration, we
use the ground-state configuration of the chain
\cite{leeuwen,leeuwen1}). We then compute the coupling forces $H^n_p$
and $H^s_p$, solve the two-by-two equations (\ref{f2}) for $R^n_p$ and
$R^s_p$ and construct a new set of bond vectors. We repeat the
calculation of $H^n_p$ and $H^s_p$ for the new configuration and
continue the process until the iterative process converges. Iteration
leads faster to the ODS than the evolution of the equations
Eq. (\ref{f1}). The results of the two approaches, in any case,
coincide.
\subsection{Symmetry breaking in the oriented deterministic
state \label{sec5b}}
The iterative solution of Eq.~(\ref{f2}), as well as the decay towards
the ODS on the basis of Eq.~(\ref{f1}), reveals an interesting
phenomenon. To show this, we note that configurations that are
invariant under reversal of the chain have vanishing even Rouse
modes. It is easy to see that the equations (\ref{f2}) preserve this
symmetry under iteration. The bond vectors ${\bf u}_n$ change sign
under the operation
\begin{equation}
n \leftrightarrow N-n.
\label{g1}
\end{equation}
Changing the summation variable from $n$ to $N-n$ in the definition
Eq. (\ref{c8}) of the coupling force shows that ${\bf H}_p$ changes
sign for even $p$, but not for odd $p$. This means that if we start
the iteration with a configuration that is invariant under reversal,
i.e., we start the iteration with only odd ${\bf H}_p$ on the rhs of
Eq.~(\ref{f2}), it leads to a solution that has, once again, only odd
Rouse mode components.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.55\linewidth]{equi2Wei}
\end{center}
\caption{The squared value of the transverse component $R^s_2$ as
function of the Weissenberg number for a dsDNA chain of length
$N=63$. The critical Weissenberg number is Wi$_{\text
c}\approx19.4$.}
\label{Weiss}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=0.465\linewidth]{equiphi}
\hspace{5mm}
\includegraphics[width=0.475\linewidth]{equie2e}
\end{center}
\caption{(color online) (a) The appearance of the first two even Rouse
modes in a window around the optimal value $\phi = 3 \pi/4$ for
$\theta=\pi/2$ at Wi$=21.3$. (b) The end-to-end distance as function
of $\phi$ for $\theta=\pi/2$ for some values of Wi around Wi$_{\text
c}\approx19.4$.}
\label{equator}
\end{figure*}
The above does not, however, exclude solutions which break the
reversal symmetry. The best way to solve for the ODS is therefore to
start the iteration with a configuration containing a
(perturbatively) small even mode, e.g. ${\bf R}_2$. The perturbation
may grow or decrease under successive iterations. We find that for low
Weissenberg numbers the perturbation decays to zero, while beyond a
critical Weissenberg number Wi$_{\text c}$, the reversal symmetry is
broken, i.e., the perturbation grows and saturates at a non-zero
value, much like the classic case of a pitchfork bifurcation.
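The scenario just described, a perturbation that decays below Wi$_{\text c}$ and saturates at a finite value above it, is the supercritical pitchfork normal form $\dot x=\mu x - x^3$, with $\mu$ playing the role of Wi $-$ Wi$_{\text c}$ (the mapping is only schematic). A minimal sketch:

```python
def relax(mu, x0=1e-3, dt=1e-2, steps=200_000):
    """Integrate dx/dt = mu * x - x**3 with forward Euler; return x(end)."""
    x = x0
    for _ in range(steps):
        x += dt * (mu * x - x ** 3)
    return x

below = relax(mu=-0.5)   # below threshold: the perturbation dies out
above = relax(mu=+0.5)   # above threshold: it saturates at sqrt(mu)
print(below, above)
```

The saturated amplitude grows as $\sqrt{\mu}$, consistent with the roughly linear growth of $(R^s_2)^2$ just above Wi$_{\text c}$ in Fig. \ref{Weiss}.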
As an example, for a dsDNA chain of length $N=63$ (note: a dsDNA
segment of one persistence length corresponds to $N\approx120$), we
plot the squared value of the transverse component $R^s_2$ as function
of the Weissenberg number in Fig. \ref{Weiss}. At Wi $=$ Wi$_{\text
c}$ the first non-zero even Rouse modes in the chain appear for
$\theta=\pi/2$ and $\phi=3 \pi/4$. The coefficient of $R^s_2$ in
Eq. (\ref{f2})
\begin{equation}
\zeta_p - g s_x s_y=\zeta_p + g (\sin \theta)^2 \sin \phi \cos \phi
\label{g2}
\end{equation}
reaches its smallest value for $\theta=\pi/2$ and $\phi=3 \pi/4$,
thus leading to the largest value of $R^s_2$ in the case of symmetry
breaking.
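That the coefficient (\ref{g2}) is smallest at $\theta=\pi/2$, $\phi=3\pi/4$ follows because $\sin^2\theta$ is maximal there while $\sin\phi\cos\phi=\frac{1}{2}\sin 2\phi$ attains its minimum $-1/2$; a quick grid search confirms it:

```python
import math

def coefficient(theta, phi, zeta_p=0.1, g=1.0):
    """Coefficient of R^s_p in Eq. (f2), written out in Eq. (g2)."""
    return zeta_p + g * math.sin(theta) ** 2 * math.sin(phi) * math.cos(phi)

# Scan a grid of orientations and locate the minimum.
grid = [0.05 * math.pi * k for k in range(1, 20)]
_, theta_min, phi_min = min((coefficient(t, p), t, p) for t in grid for p in grid)
print(round(theta_min / math.pi, 2), round(phi_min / math.pi, 2))   # 0.5 0.75
```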
From Fig. \ref{Weiss} we see that the critical Weissenberg number for
$\theta=\pi/2$ and $\phi=3 \pi/4$ equals Wi$_{\text c}\approx19.4$ for
a dsDNA chain of length $N=63$.
\subsection{Shapes of the chain and Euler buckling\label{sec5c}}
The shape of the chain depends on the orientation of the polymer,
which enters the solution through the components of the axes
$\hat{\bf n}$ and $\hat{\bf s}$. The shear is most effective in the
$x$-$y$ plane, i.e., for $\theta=\pi/2$. In Fig. \ref{equator}(a), for
$N=63$, we show the value of $|R^s_2|$ and $|R^s_4|$ as a function of
$\phi$ in the neighborhood of the most effective value $\phi=3 \pi/4$
for Wi$=21.3$ and $\theta=\pi/2$ (note: Wi$_{\text c}\approx19.4$). The
non-zero value of $|R^s_p|$ disappears at $\phi=3 \pi/4$ when Wi
approaches Wi$_{\text c}$ from above.
Further, in order to see the magnitude of the effect we plot in
Fig. \ref{equator}(b) the behavior of the end-to-end distance of the
chain for $N=63$ as a function of $\phi$ for $\theta=\pi/2$ and for
some values of Wi around the critical Weissenberg number Wi$_{\text
c}$. One observes that the end-to-end distance varies only slightly
as a function of orientation below Wi$_{\text c}$. Above
Wi$_{\text c}$ a large dip develops around $\phi=3 \pi/4$,
demonstrating that the symmetry breaking goes hand-in-hand with the
so-called Euler buckling of the chain, i.e., the chain folds, which
reduces its end-to-end distance.
To give the reader a visual impression of Euler buckling, we
provide a number of snapshots of the chain in the ODS for $N=63$ and
Wi $=100$, confined to the $x$-$y$ plane in Fig. \ref{shapes}. This
large Weissenberg number is well above Wi$_{\text c}$. The range
$\pi/2 <\phi \leq \pi$ is the interesting region, for which we
plot the polymer configurations.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.55\linewidth]{shapes}
\end{center}
\caption{Shapes of the dsDNA chain of length $N=63$ at Wi$=100$,
demonstrating Euler buckling in the ODS. The shapes are shown on the
$x$-$y$ plane (i.e., $\theta=\pi/2$) for $0.6\pi\leq\phi<\pi$. In
the direction of the arrow the shapes correspond to $\phi=0.60\pi,
0.65\pi, 0.70\pi, 0.75\pi, 0.80\pi, 0.85\pi, 0.90\pi$ and $0.95\pi$
respectively. Interestingly, the orientation here defined by the first Rouse
mode corresponds closely to the direction of the end-to-end vector.}
\label{shapes}
\end{figure}
To conclude: the ODS configuration of the chain resembles a rigid rod
below a critical value Wi$_{\text c}$ of the Weissenberg number. Above
this critical value, {\it even though the length of the chain is only
about half that of the persistence length}, it breaks the
reversal symmetry, much like the classic case of a pitchfork
bifurcation. This leads to the development of a region around
$\theta=\pi/2$ and $\phi=3 \pi/4$, where the chain (Euler)
buckles. The buckling gives a large dip in the end-to-end distance.
Finally we note that the critical Wi$_{\text c}$ depends on the length $N$ of
the chain (roughly inversely proportional) and on $\nu$ (decreasing with $\nu$).
Converting it to a critical shear rate $\dot{\gamma}_c$ involves also $T^*$
[see Eq.~(\ref{e4})].
\section{Shapes of a tumbling semiflexible polymer:
simulations \label{sec6}}
Our simulations have been performed using an efficient implementation
of semiflexible polymer dynamics \cite{leeuwen2} of the bead-spring
model \cite{leeuwen,leeuwen1}.
\begin{figure*}
\includegraphics[width=0.49\linewidth]{scatter10}
\includegraphics[width=0.49\linewidth]{scatter20}
\includegraphics[width=0.49\linewidth]{scatter50}
\includegraphics[width=0.49\linewidth]{scatter100}
\caption{(color online) Scatterplots of $R^s_2$ as a function of
$\phi$ around $\theta=\pi/2$ at four values of Wi for a chain of length
$N=63$. In between the solid lines, representing
$0.35\pi<\phi<3\pi/4$, we see that the probability distribution
$P(R^s_2)$ of $R^s_2$ changes with increasing Wi. We take these
issues up in Fig. \ref{probdist}. See also the text for
details. \label{scatter}}
\end{figure*}
Before we discuss the details of the simulation results, we make the
readers aware of the differences between the ODS in Sec. \ref{sec5}
and the simulations. Thermal noise plays no role in the ODS while in
simulations it does. This implies that although for Wi $<$ Wi$_{\text
c}$ the amplitude of $R^s_2$ is identically zero in the ODS, we
should not expect to find the same in simulations at low Wi-values,
since in simulations the second Rouse mode will always be kicked up by
noise. This calls into question the relevance of the ODS for
simulations --- in particular, whether the tumbling of the chain is
sufficiently slow such that the simulation can explore the
neighborhood of the ODS, and thereby follow the characteristics of the
ODS. Since the answer to this question is not clear a priori, we used
the ODS as a guide for the simulations: to be more precise, we
focused on the values of Wi in the range 10 $\le$ Wi $\le$ 100 for
$N=63$ and sampled the probability distribution of $R^s_2$ as function
of the orientation.
In simulations for dsDNA of length $N=63$ we recorded, at regular
intervals of time and for several values of Wi, 16 million
consecutive snapshots of ${\bf R}_1$ (which determines the
orientation of the chain) and ${\bf R}_2$. The angles
$(\theta,\phi)$ for the chain's orientation are determined from the
values of ${\bf R}_1$. We then selected the snapshots within the
slice $|\theta-\pi/2|\leq0.1$ radians, leaving us with 2-3 million
snapshots depending on the value of Wi. In Fig. \ref{scatter} we show
the corresponding scatterplots of $R^s_2$ as a function of $\phi$ in
this slice at four values of Wi. In between the solid lines,
representing $0.35\pi<\phi<3\pi/4$, we see two lobes of empty regions
developing, signaling that the probability distribution $P(R^s_2)$ of
$R^s_2$ for these values of $\phi$ changes with increasing Wi. We note
that the values of $\phi$ corresponding to the two solid lines in
Fig. \ref{scatter} are chosen solely by visual inspection, and that
the locations of the empty lobes are shifted with respect to the region in $\phi$
for which the ODS exhibits Euler buckling.
\begin{figure}[h]
\includegraphics[width=0.49\linewidth]{probdist}
\includegraphics[width=0.49\linewidth]{binder}
\caption{(color online) (a) Probability distribution $P(R^s_2)$ of
$R^s_2$ corresponding to $0.35\pi<\phi<3\pi/4$ in Fig. \ref{scatter}
for $N=63$, showing that the unimodal distribution for Wi $=$ 10
gradually transforms into a bimodal distribution symmetric around
$R^s_2=0$ for higher Wi. (b) The corresponding Binder cumulant $B$,
as defined in Eq. (\ref{bc}), which changes from $\approx0.3$ at Wi
$=$ 10 to $\approx0.53$ at Wi $=$ 100. \label{probdist}}
\end{figure}
In order to further study the change in the probability distribution
$P(R^s_2)$ of $R^s_2$, we selected out the data points corresponding
to $0.35\pi<\phi<3\pi/4$ in Fig. \ref{scatter}, leaving us with
50,000-100,000 data points. From them we constructed the probability
distribution $P(R^s_2)$. The distributions, corresponding to Wi $=$
10, 15, 20, 25, 50 and 100 are shown in Fig. \ref{probdist}. In
Fig. \ref{probdist}(a) we see that the unimodal distribution for Wi
$=$ 10 gradually transforms into a bimodal distribution symmetric
around $R^s_2=0$ for higher Wi. This development is the telltale sign
of symmetry breaking, which can also be tracked by the development of
the Binder cumulant $B$, defined as
\begin{eqnarray}
B=1-\frac{\langle(R^s_2)^4\rangle}{3\langle(R^s_2)^2\rangle^2},
\label{bc}
\end{eqnarray}
and shown in Fig.~\ref{probdist}(b). The Binder cumulant,
originally introduced to study symmetry breaking, attains the value
zero when the probability distribution is Gaussian, and reaches the
value $2/3$ when the symmetry is fully broken, changing the
probability distribution into a combination of two symmetric
$\delta$-peaks. For the data in Fig.~\ref{probdist}(a) we see that the
value of $B$ changes from $\approx0.3$ at Wi $=$ 10 to $\approx0.53$
at Wi $=$ 100.
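The two limiting values quoted above are easy to verify numerically. The sketch below computes the cumulant for Gaussian samples ($B\to0$) and for a symmetric two-$\delta$-peak distribution ($B=2/3$); as is standard for the Binder cumulant, the denominator contains the squared second moment.

```python
import random

def binder_cumulant(samples):
    """B = 1 - <x^4> / (3 <x^2>^2), the Binder cumulant of a sample."""
    m2 = sum(x**2 for x in samples) / len(samples)
    m4 = sum(x**4 for x in samples) / len(samples)
    return 1.0 - m4 / (3.0 * m2**2)

rng = random.Random(1)
gaussian = [rng.gauss(0.0, 1.0) for _ in range(200_000)]
two_peaks = [-1.0, 1.0]          # fully broken symmetry: two delta peaks

b_gauss = binder_cumulant(gaussian)   # close to 0, since <x^4> = 3<x^2>^2
b_peaks = binder_cumulant(two_peaks)  # 1 - 1/3 = 2/3
```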
The symmetry breaking is certainly not confined to
$N=63$. The same analysis on the simulation data (again, all data
points within $|\theta-\pi/2|\leq0.1$ and $0.35\pi<\phi<3\pi/4$, with the
corresponding figures, analogous to Fig.~\ref{probdist}, presented
in Fig.~\ref{probdist31}) reveals symmetry breaking taking place also
for $N=31$.
\begin{figure}[h]
\includegraphics[width=0.49\linewidth]{probdist31.pdf}
\includegraphics[width=0.49\linewidth]{binder31.pdf}
\caption{(color online) (a) Probability distribution $P(R^s_2)$ of
$R^s_2$ corresponding to $0.35\pi<\phi<3\pi/4$ in Fig. \ref{scatter}
for $N=31$, showing that the unimodal distribution for Wi $=$ 10
slowly transforms into a bimodal distribution symmetric around
$R^s_2=0$ for higher Wi. (b) The corresponding Binder cumulant $B$,
as defined in Eq. (\ref{bc}), which changes from $\approx0.23$ at Wi
$=$ 10 to $\approx0.58$ at Wi $=$ 200. \label{probdist31}}
\end{figure}
We note that the center of the region where the symmetry breaking
takes place lies around $\phi \approx 0.55 \pi$, as can be
observed from the scatterplots. This is substantially different from
the value $\phi=0.75 \pi$ where the onset of buckling takes place in
the ODS. The chain tumbles in the direction from $\phi=\pi$ towards
$\phi=0$. So the buckling in the simulation lags behind with respect
to the ODS. This is likely the result of the slowness with which the
buckled state forms and breaks down. Using Eq. (\ref{f1}) we
estimated the time $\Delta \tau$ needed to evolve from the
ground state (in which the transverse component of ${\bf R}_2$ vanishes) to 50\% of its
asymptotic value (the ODS) to be of the order $\Delta \tau \simeq
10^6$. This translates to a time $\Delta \tilde{\tau}\approx0.3$ [see
Eq. (\ref{e4})]. In order to put this estimate in perspective, we
compare it with the tumbling period $1/f$ of the rigid rod, which is
1.7 for Wi=20 according to Eq. (\ref{e8}). In other words, the chain
indeed travels a sizable fraction of the period in the building-up
phase of the buckling, the more so since it rotates faster for the
buckling orientations than in the position aligned with the flow.
Thus, to summarize this section: using the theoretical analysis of
symmetry breaking as a guide we have computed the probability
distribution $P(R^s_2)$ of $R^s_2$ by simulations of a tumbling dsDNA
segments of lengths $N=63$ and $N=31$. The simulation data confirm
that symmetry breaking takes place, showing up as a transition from
a unimodal probability distribution $P(R^s_2)$ of $R^s_2$ at Wi $=$
10 to a bimodal distribution symmetric around
$R^s_2=0$ at higher Wi, as well as in the associated Binder cumulants.
\begin{figure}[h]
\includegraphics[width=0.49\linewidth]{Ushape_sim}
\includegraphics[width=0.49\linewidth]{Sshape_sim}
\caption{Simulation snapshots of a tumbling dsDNA chain of length $N=63$
at Wi $=$ 100, projected on the $x$-$y$ plane: (a) U-shape (b)
S-shape. \label{snapshots}}
\end{figure}
To supplement the above analysis of the simulation data we show, in
Fig.~\ref{snapshots}, two simulation snapshots of a tumbling dsDNA
chain of length $N=63$ at Wi $=$ 100, projected on the $x$-$y$ plane,
in order to showcase that, akin to the experimental snapshots shown
for f-actin in Ref. \cite{harasim}, shear can cause strong deformation
even for a chain that is shorter than its persistence length. A movie
of this tumbling chain (that includes both configurations of
Fig.~\ref{snapshots}) can be found in the ancillary files. In the
movie the center-of-mass of the chain always remains at the origin
of the co-ordinate system. The movie contains 3,000 snapshots, with
consecutive snapshots being $\Delta\tau=560$ apart in time. With
$\Delta\tau=1$ representing $0.16$ ps \cite{leeuwen2}, the full
duration of the movie spans $\approx2.7$ $\mu$s in real time.
\section{Conclusion\label{sec7}}
Our study focuses on fragments of dsDNA, which are fairly extensible
semiflexible polymers. The extensibility of dsDNA translates into parameters
in our Hamiltonian that admit mode dynamics with a large time
step. The usual workhorse for theoretical studies is the inextensible
wormlike chain model for the Hamiltonian, the computer implementation
of which is confined to significantly smaller time steps.
Our simulations of a semiflexible polymer (dsDNA fragments smaller
than the persistence length) show that their tumbling frequency is
given, for the accessible range of Weissenberg numbers (Wi $<$ 2), by the
thin rigid-rod formula. Deviations of the tumbling frequency from
this formula (Fig.~\ref{raw}) occur at higher Weissenberg
numbers. It is theoretically interesting to speculate about the nature
of the deviations from the rigid-rod formula, also in view of the
observation that the accessible range of Weissenberg numbers is much
larger for stiffer and longer polymers, e.g. f-actin. The Weissenberg
number for a polymer chain is a product of the shear rate $\dot\gamma$
and the rotational diffusion time-scale of a rigid rod of the same
length as the chain, i.e., $L$. Consequently, the Weissenberg number
$\propto \dot\gamma L^3$. The persistence length of f-actin is about
200 times that of dsDNA. For chains of a fixed number of persistence
lengths, the same shear rate thus yields orders of
magnitude higher Weissenberg numbers for f-actin than for dsDNA.
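In numbers: for chains of one persistence length each, the scaling Wi $\propto\dot\gamma L^3$ with the length ratio of about 200 quoted above gives a factor $200^3=8\times10^6$ in Wi at equal shear rate, as the short sketch makes explicit.

```python
def weissenberg_ratio(length_ratio):
    """Ratio of Weissenberg numbers at equal shear rate, using Wi ~ gamma_dot * L^3."""
    return length_ratio**3

# f-actin persistence length is roughly 200x that of dsDNA
ratio = weissenberg_ratio(200)
# ratio = 8_000_000, i.e. nearly seven orders of magnitude
```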
In this respect we note that the Wi$^{2/3}$ law for rigid thin rods
originates from a singularity that develops in the probability
distribution for the orientation at the points $\theta=\pi/2$ and
$\phi=0$ or $\pi$ \cite{bloete}. The reason is that a thin rigid rod
does not feel a torque from the shear in the aligned orientation and
only a fluctuation can pull the rod over this stagnation point. A
semiflexible polymer, however, always feels a torque due to
fluctuations of the other modes (either thermal or buckling), which
communicate with the orientation through the coupling force ${\bf
H}_1$. These fluctuations enable Jeffery-like orbits which are
characteristic for ellipsoids with a finite aspect ratio in the
moments of inertia \cite{burgers}. The deviations from the thin rigid
rod formula that we see in Fig.~\ref{raw} do not substantiate the $f
\propto {\rm Wi}^{3/4}$ law reported for inextensible wormlike chains
\cite{lang}.
Using our Hamiltonian we have made a quantitative analysis of the
phenomenon of Euler buckling. Fixing the orientation and searching
for the configuration that results when the thermal noise is turned off
yields the oriented deterministic state (ODS). In the ODS we see a
sharply defined critical Wi$_c$ above which the buckling occurs. It is
a form of symmetry breaking through the occurrence of even modes in the
ODS above Wi$_c$.
In the simulations we observe correspondingly a transition in the
probability distribution for the even modes, in particular $R^s_2$,
changing gradually from a unimodal distribution to a bimodal
distribution. The simulations show that the formation of the buckled
state is a slow process. Therefore the orientation where the two peaks
of the bimodal distribution are most pronounced lags behind the orientation where
the ODS gives the maximum buckling. The buckling is illustrated by
characteristic configurations and a movie of the tumbling process.
\section{Introduction}
The wave equation of the following form
\begin{eqnarray}\label{main}
\begin{cases}
u_{tt}-\Delta u+a|u_t|^{q-2}u_t=b|u|^{p-2}u,\ \ \ &(x,t)\in D \times (0,T),\\
u(x,t)=0, &(x,t)\in\partial D\times (0,T),\\
u(x,0)=u_{0}(x),\ \ \ u_t(x,0)=u_1(x),\ \ \ &x\in D,
\end{cases}
\end{eqnarray}
where $D$ is a bounded domain in $\mathbb{R}^d$ with a smooth
boundary $\partial D$, $a,\ b>0$ are constants, has been
extensively studied and results concerning existence, blow-up and
asymptotic behavior of smooth, as well as weak solutions have been
established by several authors over the past three decades. For
$b=0$, it is well known that the damping term assures global
existence and decay of the solution energy for arbitrary initial
data (see \cite{HZ} and \cite{K}). For $a=0$, the source term
causes finite time blow-up of solutions with large initial data
(negative initial energy), see \cite{B} and \cite{KL}. The
interaction between the damping term $a|u_t|^{q-2}u_t$ and the
source term $b|u|^{p-2}u$ makes the problem more interesting. This
situation was first considered by Levine \cite{L,L1} in the linear
damping case ($q=2$), where he showed that solutions with negative
initial energy blow up in finite time. In \cite{GT}, Georgiev and
Todorova extended Levine's result to the nonlinear damping case
($q>2$). In their work, the authors introduced a new method and
determined relations between $q$ and $p$ for which there is finite
time blow-up. Specifically, they showed that solutions with
negative energy continue to exist globally in time if $q\geq p\geq
2$ and blow up in finite time if $p>q \geq 2$ and the initial
energy is sufficiently negative. Messaoudi \cite{M1} extended the
blow-up result of \cite{GT} to solutions with only negative
initial energy. For related results, we refer the reader to Levine
and Serrin \cite{LS}, Levine and Ro Park \cite{LR}, Vitillaro
\cite{V} and Messaoudi and Said-Houari \cite{MS}.
In reality, the driving force may be randomly perturbed by the
environment. In view of this, we consider the following stochastic
wave equation
\begin{eqnarray}\label{smain}
\begin{cases}
u_{tt}-\Delta u+|u_t|^{q-2}u_t=|u|^{p-2}u+\varepsilon
\sigma (u,\nabla u,x,t)\partial_t W(t,x),\ \ \ &(x,t)\in D \times (0,T),\\
u(x,t)=0, &(x,t)\in\partial D\times (0,T),\\
u(x,0)=u_{0}(x),\ \ \ u_t(x,0)=u_1(x),\ \ \ &x\in D,
\end{cases}
\end{eqnarray}
where $q\geq2,\ p>2$, $\varepsilon$ is a given positive constant
which measures the strength of the noise, and $W(t,x)$ is a Wiener
random field, which will be defined precisely later, and the
initial data $u_0(x)$ and $u_1(x)$ are given functions.
To motivate our work, let us recall some results regarding
stochastic wave equations with linear damping ($q=2$). For the
blow-up results, Chow \cite{C} discussed a class of
non-dissipative stochastic wave equations with polynomial
nonlinearity in $\mathbb{R}^d$ with $d\leq 3$. Using the energy
inequality the author demonstrated the blow-up in finite time with
a positive probability or explosive in $L^2$ norm for an example
and studied the global existence of the solutions for the
equation. This blow-up result has been later generalized by the
same author in \cite{C3}. In a recent paper, using the energy
inequality, Bo et al. \cite{BTW} proposed sufficient conditions
under which the solutions of a class of stochastic wave equations blow up
with positive probability or are explosive in the $L^2$ sense. In those
papers, the main tool for proving explosion/blow-up is the
``concavity method'', whose basic idea is to
construct a positive definite functional $F(t)$ of the solution via
the energy inequality and show that $F^{-\alpha}(t)$ is a concave
function of $t$. Unfortunately, this method fails in the case of a
nonlinear damping term ($q>2$). For the global existence and
invariant measure, Chow \cite{C1,C2} studied properties of the
solution of (\ref{smain}) with $q=2$ such as asymptotic stability
and invariant measures, and Brze\'{z}niak et al. \cite{BM}
studied global existence and stability of solutions for the
stochastic nonlinear beam equations. There are also many other
works on stochastic wave equations concerning global existence and
invariant measures in the linear damping case; see the references in \cite{CN,
HC, DF, MM}.
Nonlinear stochastic wave equations with nonlinearity on the
damping were first studied by Pardoux \cite{EP}, but little progress
has been made in nearly three decades. Recently, J.U. Kim \cite{JUK}
and V. Barbu et al. \cite{BPT} considered initial boundary
value problems for stochastic wave equations with nonlinear damping and
dissipative damping, respectively. They proved the existence of an
invariant measure. However, to our knowledge,
explosion/blow-up results with nonlinearity in the damping seem
to be studied here for the first time. Since the existence and
uniqueness of a solution for the deterministic equation
($\varepsilon=0$) is well known under some assumptions with
nonlinearity on the damping, we may anticipate similar results for
the stochastic equation. However, the methods used in earlier
works on the stochastic wave equation with linear damping do not
work. Hence, we will employ the Galerkin approximation method to
establish the local existence and uniqueness of the solution for
(\ref{smain}). For multiplicative noise, i.e., when $\sigma$
depends on $u$ and $\nabla u$, we need to obtain mean energy
estimates, but this presents some technical difficulty. This is also
here we consider only additive noise, i.e. $\sigma(u,\nabla u,
x,t)=\sigma(t,x,\omega)$ so that the stochastic integral may be
well defined as an $L^2(D)$-valued continuous martingale. We will
prove the global solution of (\ref{smain}) for $q\geq p$.
Concerning explosive/blow-up results, we use the technique of
\cite{GT} with a modification in the energy functional due to the
different nature of the problems for $p>q$.
This paper is organized as follows. In Section $2$ we present some
assumptions and definitions needed for our work. In Section $3$,
we show the local existence and uniqueness of the solution of
(\ref{smain}) and prove that the solution is global for $q\geq p$.
Section $4$ is devoted to proving that solutions of
(\ref{smain}) are explosive for $p>q$.
\section{Preliminaries}
Firstly, let us introduce some notation used throughout this
paper. We set $H=L^2(D)$ with the inner product and norm denoted
by $(\cdot,\cdot)$ and $||\cdot||_2$, respectively. Denote by
$||\cdot||_q$ the $L^q(D)$ norm for $1\leq q\leq\infty$ and by
$||\nabla\cdot||_2$ the Dirichlet norm in $V=H^1_0(D)$ which is
equivalent to the $H^1(D)$ norm. We also assume that $q$ and $p$ satisfy
\begin{eqnarray}\label{condition}
\begin{cases}
q\geq2,\ \ \ p>2,\ \ \ \max\{p,\ q\}\leq \displaystyle\frac{2(d-1)}{d-2}, \ \ \ &{\rm if}\ d\geq3,\\
q\geq2,\ \ \ p>2,&{\rm if} \
d=1,2,
\end{cases}
\end{eqnarray}
which implies that $H_0^1(D)$ is continuously embedded into
$L^{2(p-1)}(D)$. Hence, we have the Sobolev inequality
\begin{equation}\label{sobolev}
||u||_{2(p-1)}\leq c||\nabla u||_2,\ \ \ \forall u\in H_0^1(D),
\end{equation}
where $c$ is the constant of the embedding $H_0^1(D)\subseteq L^{2(p-1)}(D)$.
Using (\ref{sobolev}), we have the following inequality
\begin{equation}\label{sobolev1}
||u^{p-2}v||_2\leq c^{p-1}||\nabla u||_2^{p-2}||\nabla v||_2, \ \ \
\forall u,\ v\in H_0^1(D).
\end{equation}
In fact, when $d=1,\ 2$, letting $q>1$ and $k=\frac{q}{q-1}$, by the
H\"{o}lder inequality and (\ref{sobolev}) we have
\begin{equation}\label{sobolev2}
||u^{p-2}v||_2\leq ||u||^{p-2}_{2(p-2)q}||v||_{2k}\leq c^{p-1}||\nabla
u||_2^{p-2}||\nabla v||_2.
\end{equation}
When $d>2$, set $q=\frac{d}{(d-2)(p-2)}>1$. Then
$k=\frac{d}{d-(d-2)(p-2)}\leq \frac{d}{d-2}$, so that (\ref{sobolev2}) is also
valid for $d>2$.
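For completeness, the H\"{o}lder step behind the first inequality in (\ref{sobolev2}) reads, with $\frac{1}{q}+\frac{1}{k}=1$,

```latex
\|u^{p-2}v\|_2^2=\int_D |u|^{2(p-2)}|v|^2\,dx
\leq\left(\int_D |u|^{2(p-2)q}\,dx\right)^{\frac{1}{q}}
\left(\int_D |v|^{2k}\,dx\right)^{\frac{1}{k}}
=\|u\|_{2(p-2)q}^{2(p-2)}\,\|v\|_{2k}^{2},
```

after which one takes square roots and applies the Sobolev embedding to each factor.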
Let $(\Omega,P,\mathcal{F})$ be a complete probability space for
which a filtration $\{\mathcal{F}_t,\ t\geq0\}$ of sub-$\sigma$-fields of
$\mathcal{F}$ is given. A point of $\Omega$ will be denoted by
$\omega$ and $\textbf{E}(\cdot)$ stands for expectation with
respect to probability measure $P$. When $\mathcal{O}$ is a
topological space, $\mathcal{B}$ denotes the Borel
$\sigma$-algebra over $\mathcal{O}$. Suppose that
$\{W(t,x):t\geq0\}$ is a $V$-valued $R$-Wiener process on the
probability space with the covariance operator $R$ satisfying
$TrR<\infty$. Moreover, we can assume that $R$ has the following
form
\[
Re_i=\lambda_ie_i,\ \ \ i=1,2,\cdots,
\]
where $\{\lambda_i\}$ are the eigenvalues of $R$ satisfying
$\sum_{i=1}^\infty\lambda_i<\infty$ and $\{e_i\}$ are the
corresponding eigenfunctions with
$c_0:=\sup_{i\geq1}||e_i||_\infty<\infty$ (where
$||\cdot||_\infty$ denotes the super-norm). To simplify the
computations, we assume that the covariance operator $R$ and
$-\Delta$ with homogeneous Dirichlet boundary condition have a
common set of eigenfunctions, i.e., $\{e_i\}_{i=1}^\infty$ satisfy
\begin{eqnarray}\label{ef}
\begin{cases}
-\Delta e_i=\mu_ie_i,\ \ \ &x\in D,\\
e_i=0,\ \ \ &x\in\partial D,
\end{cases}
\end{eqnarray}
and form an orthonormal base of $V$. In this case,
\[
W(t,x)=\sum_{i=1}^\infty\sqrt{\lambda_i}B_i(t)e_i,
\]
where $\{B_i(t)\}$ is a sequence of independent copies of standard
Brownian motions in one dimension. Let $\mathcal{H}$ be the set of
$L^0_2=L^2(R^{\frac{1}{2}}V,V)$-valued processes with the norm
\[
||\Psi(t)||_{\mathcal{H}}=\Big(\textbf{E}\int_0^t||\Psi(s)||_{L^0_2}^2ds\Big)^{\frac{1}{2}}=
\Big(\textbf{E}\int_0^tTr\big(\Psi(s)R
\Psi^*(s)\big)ds\Big)^{\frac{1}{2}}<\infty,
\]
where $\Psi^*(s)$ denotes the adjoint operator of $\Psi(s)$. Let
$\{t_k\}_{k=1}^n$ be a partition on $[0,T]$ such that
$0=t_0<t_1<\cdots<t_n=T$. For a process $\Psi(t)\in \mathcal{H}$,
define the stochastic integral with respect to the $R$-Wiener
process as
\begin{equation}\label{condition2}
\int_0^t
\Psi(s)dW(s)=\lim_{n\rightarrow\infty}\sum_{k=0}^{n-1}\Psi(t_k)(W(t_{k+1}\wedge t)-W(t_k\wedge
t)),
\end{equation}
where the sequence converges in $\mathcal{H}$-sense. It is not
difficult to check that the integral process $\int_0^t
\Psi(s)dW(s)$ is a martingale for any $\Psi(t)\in \mathcal{H}$, and the quadratic variation process is given by
\[
\Big\langle\Big\langle\int_0^t
\Psi(s)dW(s)\Big\rangle\Big\rangle=\int_0^t
Tr\big(\Psi(s)R
\Psi^*(s)\big)ds.
\]
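For a deterministic scalar integrand this quadratic-variation formula reduces to the It\^{o} isometry $\textbf{E}\big(\int_0^T\Psi\,dB\big)^2=\int_0^T\Psi^2(s)\,ds$. The toy sketch below checks it for a single one-dimensional Brownian motion with the hypothetical choice $\Psi(s)=s$, using a left-endpoint partition sum as in the definition above.

```python
import random

def ito_integral(psi, T, n_steps, rng):
    """Left-endpoint (Ito) partition sum of int_0^T psi(s) dB(s) for one path."""
    dt = T / n_steps
    total = 0.0
    for k in range(n_steps):
        dB = rng.gauss(0.0, dt ** 0.5)   # independent increment, variance dt
        total += psi(k * dt) * dB        # evaluate psi at the left endpoint t_k
    return total

rng = random.Random(42)
T = 1.0
vals = [ito_integral(lambda s: s, T, 200, rng) for _ in range(5000)]
second_moment = sum(v * v for v in vals) / len(vals)
# Ito isometry: E[(int_0^1 s dB)^2] = int_0^1 s^2 ds = 1/3
```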
For more details about the infinite dimension Wiener process and
the stochastic integral, we refer to \cite{PZ}.
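In practice such an $R$-Wiener process is sampled by truncating the eigenfunction expansion $W(t,x)=\sum_i\sqrt{\lambda_i}B_i(t)e_i(x)$ given above. The sketch below, with a hypothetical summable spectrum $\lambda_i=i^{-2}$ chosen for illustration only, samples the mode coefficients and checks ${\rm Var}\big(\sqrt{\lambda_i}B_i(t)\big)=\lambda_i t$.

```python
import random

def sample_coefficients(n_modes, t, n_steps, rng):
    """Sample the mode coefficients sqrt(lambda_i) * B_i(t) of a truncated
    R-Wiener expansion W(t,x) = sum_i sqrt(lambda_i) B_i(t) e_i(x).

    Placeholder spectrum lambda_i = 1/i^2 (summable, as Tr R < infinity requires)."""
    dt = t / n_steps
    lams = [1.0 / i**2 for i in range(1, n_modes + 1)]
    coeffs = [0.0] * n_modes
    for _ in range(n_steps):
        for i in range(n_modes):
            # independent Brownian increments of variance dt per mode
            coeffs[i] += (lams[i] ** 0.5) * rng.gauss(0.0, dt ** 0.5)
    return lams, coeffs

rng = random.Random(7)
t = 2.0
# Monte Carlo estimate of Var(sqrt(lambda_1) B_1(t)); expected lambda_1 * t = 2.0
samples = [sample_coefficients(1, t, 50, rng)[1][0] for _ in range(4000)]
var_est = sum(c * c for c in samples) / len(samples)
```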
Finally, we give the definition of solution to (\ref{smain}). For
the definition of a solution, we assume that
\begin{equation}\label{mild}
(u_0,u_1)\in
H_0^1(D)\times L^2(D),
\end{equation}
and that $\sigma(x,t)$ is $L^2(D)$-valued progressively measurable
such that
\begin{equation}\label{mild1}
\textbf{E}\int_0^T
||\sigma(t)||_2^2dt<\infty.
\end{equation}
\begin{defn}\label{defn}
Under the assumption (\ref{mild}) and (\ref{mild1}), $u$ is said
to be a solution of (\ref{smain}) on the interval $[0,T]$ if
\begin{equation}\label{define}
(u,u_t)\ \rm {is\ } H^1_0(D)\times L^2(D)\rm{-valued\ progressively\
measurable},
\end{equation}
\begin{equation}\label{define1}
(u,u_t)\in L^2(\Omega; C([0,T];H_0^1(D)\times L^2(D))),\ \ \
u_t\in L^q((0,T)\times D),\ \ \
\rm{for\ almost\ all }\ \omega,
\end{equation}
\begin{equation}\label{define2}
u(0)=u_0,\ \ \ u_t(0)=u_1,
\end{equation}
\begin{equation}\label{define3}
u_{tt}-\Delta u+|u_t|^{q-2}u_t=|u|^{p-2}u+\varepsilon \sigma
(x,t)\partial_t W(t,x)
\end{equation}
holds in the sense of distributions over $(0,T)\times D$ for
almost all $\omega$.
\end{defn}
\begin{rem}
(\ref{define1}) and (\ref{define3}) imply that
\begin{eqnarray}\label{define5}
&&\big(u_t(t),\phi\big)=\big(u_1,\phi\big)-
\int_0^t\big(\nabla u,\nabla \phi\big)ds-\int_0^t\big(|u_s|^{q-2}u_s, \phi\big)ds\nonumber\\
&&\quad\quad\quad\quad\quad\ +\int_0^t\big(|u|^{p-2}u, \phi\big)ds+\int_0^t\big(
\phi,\varepsilon\sigma(x,s)dW_s\big),
\end{eqnarray}
for all $t\in[0,T]$ and all $\phi\in H_0^1(D)$. In fact,
(\ref{define5}) is a conventional form for the definition of
solution to stochastic differential equations. Here we say $u$ is
a strong solution of the equation (\ref{smain}).
\end{rem}
\section{ Existence and uniqueness of solution}
In this section, we deal with the local existence and uniqueness
of solution for problem (\ref{smain}) and prove that the solution
of (\ref{smain}) is global for $q\geq p$. Let $f(u)=|u|^{p-2}u$.
For each $N\geq1$, define a $C^1$ function $\chi_N$ by
\begin{eqnarray*}
\chi_N(x)=\left\{
\begin{array}{ll}
1, \ \ \ & {\rm if}\ x\leq N,\\
\in (0,1), \ \ \ & {\rm if}\ N<x<N+1,\\
0 , \ \ \ & {\rm if}\ x\geq N+1,
\end{array}
\right.
\end{eqnarray*}
and further assume that $||\chi'_N||_\infty\leq2$. We define
\[
f_N(u)= \chi_N(||\nabla u||_2)f(u), \ \ \ u\in H_0^1(D).
\]
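One concrete choice of $\chi_N$ with the required properties (a cubic smoothstep interpolation, given purely for illustration; any $C^1$ function with these properties works) is:

```python
def chi_N(x, N):
    """A C^1 cutoff: 1 for x <= N, 0 for x >= N+1, cubic smoothstep in between.

    On (N, N+1) the derivative is -6t(1-t) with t = x - N, so |chi'| <= 3/2 <= 2,
    and the derivative vanishes at both junctions, making the function C^1."""
    if x <= N:
        return 1.0
    if x >= N + 1:
        return 0.0
    t = x - N
    return 1.0 - 3.0 * t**2 + 2.0 * t**3
```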
Then, it follows from (\ref{sobolev1}) that
\begin{equation}\label{condition6}
||f_N(u)-f_N(v)||_2\leq C_N||\nabla u-\nabla v||_2, \ \ \ u,\ v\in
H_0^1(D),
\end{equation}
where $C_N$ is a constant dependent only on $N$. Let
$g(x)=|x|^{q-2}x$. For any $\lambda>0$, let
\[
g_\lambda(x)=\frac{1}{\lambda}\big(x-(I+\lambda g)^{-1}(x)\big)=g(I+\lambda g)^{-1}(x),\ \ \ x\in \mathbb{R},
\]
where $g_\lambda$ is the Yosida approximation of the mapping $g$.
Since $g(x)$ is maximal monotone and
$g'(x)=(q-1)|x|^{q-2}\geq0$ for any $x\in \mathbb{R}$, then
$g_\lambda\in C^1(\mathbb{R})$ and satisfies (see Pazy \cite{AP})
\begin{equation}\label{local}
0\leq g'_\lambda\leq\frac{1}{\lambda},\ \ \ |g_\lambda (x)|\leq
|g(x)|,\ \ \ |g_\lambda(x)|\leq \frac{1}{\lambda}|x|, \ {\rm\ for
\ any
}\ x\in\mathbb{R}.
\end{equation}
\begin{lem}\label{lem:regularize}
Let $\{\lambda_n\}$ be a sequence of positive numbers, and
$\{x_n\}$ be a sequence of real numbers such that
$\lambda_n\rightarrow0$ and $x_n\rightarrow x$. Then
\[
\lim_{n\rightarrow\infty}g_{\lambda_n}(x_n)=g(x).
\]
\end{lem}
\begin{proof}
There is some $L>0$ such that $|x_n|\leq L$ for all $n\geq 1$.
Since $g(x)$ is maximal monotone, for each $n\geq1$ we may let $y_n$ be the unique number
such that $y_n+\lambda_n g(y_n)=x_n$. Then we
have
\[
|y_n|\leq |x_n|\leq L,\ \ \ |x_n-y_n|\leq \lambda_n C,
\]
for each $n\geq1$, where $C=\sup_{|z|\leq L}|g(z)|$. Now the above
assertion follows from
\[
|g(x)-g_{\lambda_n}(x_n)|\leq|g(x)-g(x_n)|+|g(x_n)-g(y_n)|.
\]
\end{proof}
\begin{lem}[See Lemma 1.3 in Lions \cite{LL}]\label{lem:regularize1}
Let $D$ be a bounded domain in $\mathbb{R}^d,\ d\geq1$,
$\{\varphi_k\}$, $\varphi\in L^q(D),\ 1<q<\infty$. If
\[
||\varphi_k||_q\leq C\ \ \ {\rm and}\ \ \ \varphi_k(x)\rightarrow
\varphi(x)\ {\rm for\ almost\ all}\ x\in D,
\]
where $C$ is a constant, then $\varphi_k\rightarrow \varphi$
weakly in $L^q(D)$.
\end{lem}
In order to obtain the local existence and uniqueness of solution
for problem (\ref{smain}), we will first establish a lemma for the
regularized problem. Fixing $\lambda>0$ and $N>0$, we will work on the
following initial boundary value problem
\begin{eqnarray}\label{regularize}
\begin{cases}
u_{tt}-\Delta u+g_\lambda(u_t)=f_N(u)+
\varepsilon \sigma (x,t)\partial_t W(t,x),\ \ \ &(x,t)\in D \times (0,T),\\
u(x,t)=0, &(x,t)\in\partial D\times (0,T),\\
u(x,0)=u_{0}(x),\ \ \ u_t(x,0)=u_1(x),\ \ \ &x\in D,
\end{cases}
\end{eqnarray}
where we suppose that
\begin{equation}\label{regularize1}
(u_0,u_1)\in (H_0^1(D)\cap H^2(D))\times H_0^1(D)
\end{equation}
and that $\sigma(x,t)$ is $H_0^1(D)\cap L^\infty(D)$-valued
progressively measurable such that
\begin{equation}\label{regularize2}
\textbf{E}\int_0^T(||\nabla \sigma(t)||_2^2+||\sigma
(t)||_\infty^2)dt<\infty.
\end{equation}
\begin{lem}\label{lem:regularize2}
Assume (\ref{condition}), (\ref{regularize1}) and
(\ref{regularize2}) hold. Then there is a pathwise unique solution
$u$ of (\ref{regularize}) such that
\[
u\in L^2\big(\Omega; L^\infty (0,T;H_0^1(D)\cap H^2(D))\big)\cap
L^2\big(\Omega; C([0,T];H_0^1(D))\big)
\]
and
\[
u_t\in L^2\big(\Omega; L^\infty (0,T;H_0^1(D))\big)\cap
L^2\big(\Omega; C([0,T];L^2(D))\big).
\]
Moreover, it holds that
\[
\textbf{E}\left(||u_t||^2_{L^\infty (0,T;H_0^1(D))}+||u||^2_{L^\infty (0,T;H_0^1(D)\cap H^2(D))}+\int_0^T\int_D
g_\lambda(u_t)u_tdxdt\right)\leq C_N,
\]
where $C_N$ denotes a positive constant independent of $\lambda$.
\end{lem}
\begin{proof}
Let
\[
u_m(t,x)=\sum_{j=1}^m a_{m,j}(t)e_j(x),
\]
where $\{e_j\}_{j=1}^\infty$ is a complete orthonormal base of
$H_0^1(D)$ satisfying (\ref{ef}) and $a_{m,j}$ form a solution of
the following system of stochastic differential equations
\begin{eqnarray}\label{regularize3}
\begin{cases}
a_{m,j}''=-\mu_ja_{m,j}-\Big(g_\lambda\big(\sum_{k=1}^m
a_{m,k}'e_k\big),e_j\Big)+\Big(f_N\big(\sum_{k=1}^m
a_{m,k}e_k\big),e_j\Big) +\big(e_j,\varepsilon\sigma (x,t)dW_t\big),\\
a_{m,j}(0)=(u_0,e_j),\ \ \ a_{m,j}'(0)=(u_1,e_j),
\end{cases}
\end{eqnarray}
for $1\leq j\leq m$. By the It\^{o} formula, we have
\begin{eqnarray}\label{regularize4}
&&||u'_m(t)||^2_2+||\nabla u_m(t)||^2_2\leq||u'_m(0)||^2_2+
||\nabla u_m(0)||^2_2-2\int_0^t\int_D g_\lambda \big(u_m'(s)\big)u_m'(s)dxds\nonumber\\
&& +2\int_0^t\int_D f_N \big(u_m\big)u_m'dx ds+2\int_0^t\big(
u_m',\varepsilon\sigma\big)dW_s+c_0^2Tr R \sum_{j=1}^m\int_0^t |\big(e_j,\varepsilon\sigma\big )|^2ds,
\end{eqnarray}
and
\begin{eqnarray}\label{regularize5}
&&||\nabla u'_m(t)||^2_2+||\Delta u_m(t)||^2_2\leq ||\nabla
u'_m(0)||^2_2+||\Delta u_m(0)||^2_2+
2\int_0^t\int_D g_\lambda \big(u_m'(s)\big)\Delta u_m'(s)dxds\nonumber\\
&& -2\int_0^t\int_D f_N \big(u_m(s)\big)
\Delta u_m'dx ds+2\int_0^t\big(
\nabla u_m',\varepsilon\nabla (\sigma dW_s)\big)\nonumber\\
&& +2c_0^2Tr R \sum_{j=1}^m\int_0^t|\big(\nabla e_j,\varepsilon\nabla\sigma\big
)|^2ds+2\sum_{j=1}^m\sum_{i=1}^\infty \lambda_i\int_0^t
\big|(e_j, \sigma\nabla e_i)\big|^2ds
\end{eqnarray}
for all $t\in[0,T]$ and almost all $\omega$, where
\[
Tr R=\sum_{i=1}^\infty \lambda_i, \ \ \
c_0:=\sup_{i\geq1}||e_i||_\infty.
\]
From (\ref{sobolev}), (\ref{sobolev1}) and (\ref{local}), we get
\begin{equation}\label{regularize6}
\int_D f_N \big(u_m\big)u_m'dx\leq \int_D\chi_N (||\nabla u_m||_2)|u_m|^{p-1}|u_m'(s)|dx\leq C_N||\nabla
u_m||_2 ||u_m'||_2,
\end{equation}
\begin{eqnarray}\label{regularize7}
&&-\int_D f_N \big(u_m\big)\Delta u_m'dx=
(p-1)\int_D \chi_N(||\nabla u_m||_2)|u_m|^{p-2}\nabla u_m \cdot \nabla
u_m'dx\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad\quad \ \leq C_N(p-1)|||u_m|^{p-2}\nabla u_m||_2||\nabla
u_m'||_2\leq C_N||\Delta u_m||_2||\nabla
u_m'||_2,\quad\quad
\end{eqnarray}
and
\begin{equation}\label{regularize8}
\int_D g_\lambda \big(u_m'(s)\big)\Delta u_m'(s)dx=-\int_D g_\lambda' \big(u_m'(s)\big)|\nabla
u_m'(s)|^2dx\leq0.
\end{equation}
By the Burkholder-Davis-Gundy inequality, we have
\begin{eqnarray}\label{regularize9}
&&\textbf{E}\left(\sup_{t\in[0,T]}\left|\int_0^t\big(
u_m'(s),\varepsilon\sigma\big)dW_s\right|\right)
\leq C\textbf{E}\Big(\sup_{t\in[0,T]}||u_m'||_2\Big(\varepsilon^2\sum_{i=1}^\infty\int_0^T
\big(\sigma(x,t)Re_i, \sigma(x,t)e_i\big)dt\Big)^{\frac{1}{2}}\Big)\nonumber\\
&&
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\
\ \leq \alpha\textbf{E}\Big(\sup_{t\in[0,T]}||u_m'||_2^2\Big)
+\frac{C\varepsilon^2c_0^2}{\alpha}Tr R\textbf{E}\Big(\int_0^T
||\sigma(t)||^2_2 dt\Big),\quad\quad
\end{eqnarray}
and
\begin{eqnarray}\label{regularize10}
&&\textbf{E}\left(\sup_{t\in[0,T]}\left|\int_0^t\big(
\nabla u_m',\nabla (\sigma dW_s)\big)\right|\right)
\leq\alpha\textbf{E}\Big(\sup_{t\in[0,T]}||\nabla
u_m'||_2^2\Big)\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\
\
+\frac{C\varepsilon^2c_0^2}{\alpha}Tr R\textbf{E}\Big(\int_0^T
\big(||\nabla \sigma(t)
||_2^2+||\sigma(t)||_\infty^2\big)dt\Big).\quad\quad\quad
\end{eqnarray}
Here and below, $C$ and $C_N$ denote positive constants
independent of $m$ and $\lambda$. From (\ref{regularize1}),
(\ref{regularize2}) and (\ref{regularize4})--(\ref{regularize10}),
by Gronwall's inequality, we have
\begin{equation}\label{regularize11}
\textbf{E}\left(\sup_{t\in[0,T]}||\nabla u_m'||_2^2+\sup_{t\in[0,T]}|| u_m||^2_{H^1_0(D)\cap H^2(D)}
+\int_0^T\int_D g_\lambda
\big(u_m'(s)\big)u_m'(s)dxds\right)\leq C_N.
\end{equation}
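For the reader's convenience, we sketch the Gronwall step leading to (\ref{regularize11}); the functional $\Phi_m$ below is introduced only for this sketch. Writing
\[
\Phi_m(t)=\textbf{E}\Big(\sup_{s\in[0,t]}\big(||u_m'(s)||_2^2+||\nabla u_m'(s)||_2^2+||u_m(s)||^2_{H_0^1(D)\cap H^2(D)}\big)\Big),
\]
the estimates (\ref{regularize4})--(\ref{regularize10}), with $\alpha$ chosen small enough that the terms $\alpha\textbf{E}\big(\sup_{t}||u_m'||_2^2\big)$ and $\alpha\textbf{E}\big(\sup_{t}||\nabla u_m'||_2^2\big)$ can be absorbed into the left-hand side, yield
\[
\Phi_m(t)\leq C_N+C_N\int_0^t \Phi_m(s)ds,\ \ \ t\in[0,T],
\]
so that Gronwall's inequality gives $\Phi_m(T)\leq C_Ne^{C_NT}$, which is (\ref{regularize11}).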
Define
\begin{equation}\label{regularize12}
\mathcal{A}_\lambda(v)=||v||^2_{L^\infty(0,T;H^1_0(D)\cap
H^2(D))}+||v'||^2_{L^\infty(0,T;H^1_0(D))}+\int_0^T\int_D g_\lambda
\big(v'(s)\big)v'(s)dxds.
\end{equation}
It follows from (\ref{regularize11}) that
\begin{equation}\label{regularize13}
P\Big(\bigcup_{L=1}^{\infty}\bigcap_{j=1}^{\infty}\bigcup_{m=j}^{\infty}\{\mathcal{A}_\lambda(u_m)\leq
L\}\Big)=1.
\end{equation}
Let $\mathcal{P}_m$ be the orthogonal projection of $L^2(D)$ onto
the subspace spanned by $\{e_1,\cdots, e_m\}$, i.e.,
\[
\mathcal{P}_m\varphi=\sum_{j=1}^m(\varphi,e_j)e_j.
\]
From (\ref{regularize3}), we have
\begin{equation}\label{regularize14}
\partial_t(u_m'-\varepsilon\mathcal{P}_m M(t))=\Delta u_m-\mathcal{P}_m
g_\lambda(u_m')+\mathcal{P}_mf_N(u_m)
\end{equation}
in the sense of distributions over $(0,T)\times D$ for almost all
$\omega$, where $M(t)$ is defined by (\ref{condition2}) with
(\ref{regularize2}). Since $\sigma(x,t)$ is $H_0^1(D)\cap
L^\infty(D)$-valued progressively measurable and
$\{W(t,x):t\geq0\}$ is a $V$-valued process, there is a subset
$\Omega_1\subset \Omega$ with $P(\Omega\setminus\Omega_1)=0$ such
that for each $\omega\in \Omega_1$,
\begin{equation}\label{regularize15}
M\in C([0,T];H_0^1(D)),\ \rm{and \ (\ref{regularize14})\ holds \
for \ all }\ m\geq1.
\end{equation}
From (\ref{regularize11}), for each $\omega\in \Omega_1$ there is
a subsequence $\{u_{m_k}\}_{k=1}^\infty$ such that
\begin{equation}\label{regularize16}
\mathcal{A}_\lambda(u_{m_k})\leq L_\omega,\ \rm{for\ all }\ k\geq1
\ \rm{and\ for \ some\ constant }\ L_\omega>0,
\end{equation}
\begin{equation}\label{regularize17}
u_{m_k}\rightarrow u\ \ \ \rm{weak \ star\ in}\ L^\infty(0,T;
H_0^1(D)\cap H^2(D)),
\end{equation}
\begin{equation}\label{regularize18}
u_{m_k}\rightarrow u\ \ \ \rm{strongly \ in }\ C([0,T];
H_0^1(D)),
\end{equation}
and
\begin{equation}\label{regularize19}
u_{m_k}'\rightarrow u'\ \ \ \rm{weak \ star\ in}\ L^\infty(0,T;
H_0^1(D)),
\end{equation}
for some function $u=u(\omega)$. It follows from (\ref{local})
that
\[
|g_\lambda(x)|^{\frac{q}{q-1}}\leq C g_\lambda(x)x,\ \ \ \rm{for \ all
}\ x \in \mathbb{R} \ \rm{and} \ \lambda>0.
\]
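In the model case of the unregularized damping $g(x)=|x|^{q-2}x$, this bound can be checked directly, since
\[
g(x)x=|x|^{q}\ \ \ {\rm and}\ \ \ |g(x)|^{\frac{q}{q-1}}=\big(|x|^{q-1}\big)^{\frac{q}{q-1}}=|x|^{q},
\]
so that the inequality holds with $C=1$; the displayed estimate asserts that the same bound survives the regularization, with $C$ independent of $\lambda$.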
From (\ref{condition}), we have the embedding
$L^{\frac{q}{q-1}}(D)\subset H^{-1}(D)$. Thus, by
(\ref{regularize12}) and (\ref{regularize16}), we have
\begin{equation}\label{regulariz21}
||g_\lambda(u_{m_k}')||^{\frac{q}{q-1}}_{L^{\frac{q}{q-1}}(0,T;
H^{-1}(D))}\leq C L_\omega,
\end{equation}
which, combined with (\ref{regularize14}), yields
\begin{equation}\label{regularize20}
||u_{m_k}'-\varepsilon\mathcal{P}_{m_k}M||_{W^{1,\frac{q}{q-1}}(0,T;
H^{-1}(D))}\leq C L_\omega
\end{equation}
for all $k\geq1$. By (\ref{regularize19}) and
(\ref{regularize20}), we get
\begin{equation}\label{regularize21}
u_{m_k}'-\varepsilon\mathcal{P}_{m_k}M\rightarrow u'-\varepsilon
M\ \ \ \rm{strongly\ in}\ C([0,T];L^2(D)).
\end{equation}
This implies that there is a subsequence still denoted by
$\{u_{m_k}\}$ such that
\begin{equation}\label{regulariz20}
u_{m_k}'(t,x)\rightarrow u'(t,x),\ \ \ \rm{for\ almost\ all}\
(t,x)\in (0,T)\times D.
\end{equation}
It follows from (\ref{regulariz21}), (\ref{regulariz20}) and Lemma
\ref{lem:regularize1} that
\[
g_\lambda(u_{m_k}')\rightarrow g_\lambda(u')\ \ \ \rm{weakly\ in}\
L^{\frac{q}{q-1}}((0,T)\times D).
\]
Thus, $u=u(\omega)$ satisfies (\ref{regularize}) in the sense of
distributions over $(0,T)\times D$. Here the choice of the above
subsequence may depend on $\omega\in \Omega_1$. If there is
another subsequence which converges to
$\widetilde{u}=\widetilde{u}(\omega)$ in the above sense, then
$w=u(\omega)-\widetilde{u}(\omega)$ satisfies
\[
w''-\Delta w
+g_\lambda(u'(\omega))-g_\lambda(\widetilde{u}'(\omega))=f_N(u(\omega))-f_N(\widetilde{u}(\omega)),
\]
\[
w(0)=0,\ \ \ w'(0)=0,
\]
\[
w\in L^\infty (0,T;H_0^1(D)\cap H^2(D))\cap C([0,T);H_0^1(D)),
\]
\[
w'\in L^\infty (0,T;H_0^1(D))\cap C([0,T);L^2(D)).
\]
Thus, we have
\begin{equation}\label{regularize22}
\frac{1}{2}\frac{d}{dt}(||w'(t)||^2_2+||\nabla w(t)||_2^2)+\int_D
\big(g_\lambda(u')-g_\lambda(\widetilde{u}')\big)w'dx=\int_D\big(f_N(u)-f_N(\widetilde{u})\big)w'dx.
\end{equation}
From (\ref{local}), we get
\[
\int_D
\big(g_\lambda(u'(\omega))-g_\lambda(\widetilde{u}'(\omega))\big)w'dx\geq0.
\]
By the H\"{o}lder inequality, it follows from (\ref{condition})
that
\begin{eqnarray}\label{regularize23}
&&\left|\int_D\big(f_N(u)-f_N(\widetilde{u})\big)w'dx\right|=
\left|\int_D\big(\chi_N(||\nabla u||_2)|u|^{p-2}u-
\chi_N(||\nabla \widetilde{u}||_2)|\widetilde{u}|^{p-2}\widetilde{u}\big)w'dx\right|\nonumber\\
&&\leq C_N(p-1)\int_D \sup \{|u|^{p-2},|\widetilde{u}|^{p-2}\}|w||w'|dx
\leq C_N(||u||_{(p-2)d}^{(p-2)}+||\widetilde{u}||_{(p-2)d}^{(p-2)})||w||_{\frac{2d}{d-2}}
||w'||_2\nonumber\\
&&\leq C_N ||\nabla w(t)||_2 ||w'||_2.
\end{eqnarray}
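We record the H\"{o}lder exponents used in the second inequality of (\ref{regularize23}): the three factors $\sup \{|u|^{p-2},|\widetilde{u}|^{p-2}\}$, $|w|$ and $|w'|$ are estimated in $L^{d}(D)$, $L^{\frac{2d}{d-2}}(D)$ and $L^{2}(D)$ respectively, which is admissible since
\[
\frac{1}{d}+\frac{d-2}{2d}+\frac{1}{2}=\frac{2+(d-2)+d}{2d}=1,
\]
together with the identity $|||u|^{p-2}||_{d}=||u||_{(p-2)d}^{p-2}$.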
Combining (\ref{regularize22}) with (\ref{regularize23}), we
have
\[
||w'(t)||^2_2+||\nabla w(t)||_2^2\leq 2C_N\int_0^t
(||w'(s)||^2_2+||\nabla w(s)||_2^2 )ds,
\]
which implies $w=0$, i.e., $u(\omega)=\widetilde{u}(\omega)$.
Hence, for each $\omega\in \Omega_1$, $u=u(\omega)$ is
well-defined.
We shall also show that $(u,u_t)$ is $(H_0^1(D)\cap H^2(D))\times
H^1_0(D)$-valued progressively measurable for any $0\leq t\leq T$.
Let $\textbf{B}_r(z)$ be a closed ball in $C([0,T];H_0^1(D)\times
L^2(D))$ with radius $r>0$ and center at $z$. Then by virtue of
the way $u$ has been obtained, it holds that
\begin{eqnarray}\label{regularize24}
\{(u,u_t)\in
\textbf{B}_r(z)\}\cap\Omega_1=\Omega_1\cap
\bigcup_{L=1}^\infty\bigcap_{\nu=1}^\infty\bigcap_{j=1}^\infty\bigcup_{m=j}^\infty
\left\{\big((u_m,u_m')\in \textbf{B}_{r+1/\nu}(z)\big)\cap\big(\mathcal{A}_\lambda(u_{m})\leq
L\big)\right\}.
\end{eqnarray}
Since $(u,u_t)\in C([0,T];H_0^1(D)\times L^2(D))$ for almost all
$\omega$, and the right-hand side of (\ref{regularize24}) belongs
to $\mathcal{F}_T$, it holds that
\begin{eqnarray}\label{regularize25}
\big\{(t,\omega)|0\leq t\leq T, (u(t,\omega),u_t(t,\omega))\in
A\big\}\in \mathcal{B}([0,T])\otimes \mathcal{F}_T,
\end{eqnarray}
for every $A\in \mathcal{B}(H_0^1(D)\times L^2(D))$. Since every
closed ball of finite radius in $(H_0^1(D)\cap H^2(D))\times
H^1_0(D)$ is closed in $ H^1_0(D)\times L^2(D)$, we have
$\mathcal{B}((H_0^1(D)\cap H^2(D))\times H^1_0(D))\subset
\mathcal{B}(H^1_0(D)\times L^2(D))$. Thus, (\ref{regularize25})
holds for every $\mathcal{B}((H_0^1(D)\cap H^2(D))\times
H^1_0(D))$. By the pathwise uniqueness, we may replace $T$ in
(\ref{regularize25}) by any $0\leq t\leq T$ and $(u,u_t)$ is
$(H_0^1(D)\cap H^2(D))\times H^1_0(D)$-valued progressively
measurable.
Next we show that for each $\omega\in\Omega_1$,
\begin{eqnarray}\label{regularize26}
\mathcal{A}_\lambda(u)\wedge K\leq
\underline{\lim}_{m\rightarrow\infty}\mathcal{A}_\lambda(u_m)\wedge
K
\end{eqnarray}
for each $K>0$. If
$\underline{\lim}_{m\rightarrow\infty}\mathcal{A}_\lambda(u_m)\wedge
K=K$, then the inequality is obvious. If
$\underline{\lim}_{m\rightarrow\infty}\mathcal{A}_\lambda(u_m)\wedge
K=\delta<K$, then there is a subsequence
$\{u_{m_k}\}_{k=1}^\infty$ such that
\[
\lim_{k\rightarrow\infty}\mathcal{A}_\lambda(u_{m_k})=\delta,
\]
and $\{u_{m_k}(\omega)\}$ converges to $u(\omega)$ in the sense of
(\ref{regularize16})-(\ref{regularize19}) and
(\ref{regularize21}). It follows that
\[
||u||_{L^\infty(0,T;H_0^1(D)\cap H^2(D))}\leq
\underline{\lim}_{k\rightarrow\infty}||u_{m_k}||_{L^\infty(0,T;H_0^1(D)\cap
H^2(D))},
\]
\[
||u'||_{L^\infty(0,T;H_0^1(D))}\leq
\underline{\lim}_{k\rightarrow\infty}||u_{m_k}'||_{L^\infty(0,T;H_0^1(D))},
\]
and
\[
\int_0^T\int_D g_\lambda
\big(u'(s)\big)u'(s)dxds\leq \underline{\lim}_{k\rightarrow\infty}\int_0^T\int_D g_\lambda
\big(u_{m_k}'(s)\big)u_{m_k}'(s)dxds,
\]
which yield
\[
\mathcal{A}_\lambda(u)\leq\delta.
\]
Thus, (\ref{regularize26}) is valid. By (\ref{regularize11}),
(\ref{regularize26}) and Fatou's lemma, we have
\[
\textbf{E}(\mathcal{A}_\lambda(u)\wedge K)\leq C_N,
\]
for some constant $C_N$ independent of $K$ and $\lambda$. By
letting $K\uparrow \infty$, we get
\begin{equation}\label{vary0}
\textbf{E}(\mathcal{A}_\lambda(u))\leq C_N.
\end{equation}
\end{proof}
Next we still fix $N>0$ and consider the following equation
\begin{eqnarray}\label{vary}
\begin{cases}
u_{tt}-\Delta u+g(u_t)=f_N(u)+\varepsilon \sigma (x,t)\partial_t W(t,x),\ \ \ &(x,t)\in D \times (0,T),\\
u(x,t)=0, &(x,t)\in\partial D\times (0,T),\\
u(x,0)=u_{0}(x),\ \ \ u_t(x,0)=u_1(x),\ \ \ &x\in D.
\end{cases}
\end{eqnarray}
\begin{lem}\label{lem:vary}
Assume (\ref{condition}), (\ref{regularize1}) and
(\ref{regularize2}) hold. Then there is a pathwise unique solution
$u$ of (\ref{vary}) such that
\[
u\in L^2\big(\Omega; L^\infty (0,T;H_0^1(D)\cap H^2(D))\big)\cap
L^2\big(\Omega; C([0,T];H_0^1(D))\big),
\]
\[
u_t\in L^2\big(\Omega; L^\infty (0,T;H_0^1(D))\big)\cap
L^2\big(\Omega; C([0,T];L^2(D))\big),
\]
and
\[
u_t\in L^q((0,T)\times D).
\]
\end{lem}
\begin{proof}
We denote by $u_\lambda$ the solution of (\ref{regularize}) under
the conditions (\ref{regularize1}) and (\ref{regularize2}). Since
$\textbf{E}(\mathcal{A}_\lambda(u_\lambda))\leq C_N$ for all
$\lambda>0$, we can repeat the same argument as above by
considering $\lambda=\frac{1}{m}$, $m=1,2,\cdots$. Then there is
$\Omega_2\subset \Omega$ with $P(\Omega\setminus\Omega_2)=0$ satisfying
the following properties. For each $\omega\in\Omega_2$,
\begin{equation}\label{vary1}
M\in C([0,T];H_0^1(D)),\ \rm{and \ for \ all }\
\lambda=\frac{1}{m},\ m\geq1,
\end{equation}
\[
( u_\lambda'-\varepsilon M(t))'-\Delta u_\lambda+
g_\lambda(u_\lambda')=f_N(u_\lambda)
\]
holds in the sense of distributions over $(0,T)\times D$, and
there is a subsequence satisfying the following.
\begin{equation}\label{vary2}
\mathcal{A}_{\lambda_k}(u_{\lambda_k})\leq L_\omega,\ \rm{for\ all
}\ k\geq1 \ \rm{and\ for \ some\ constant }\ L_\omega>0,
\end{equation}
\begin{equation}\label{vary3}
u_{\lambda_k}\rightarrow u\ \ \ \rm{weak \ star\ in}\
L^\infty(0,T; H_0^1(D)\cap H^2(D)),
\end{equation}
\begin{equation}\label{vary4}
u_{\lambda_k}\rightarrow u\ \ \ \rm{strongly \ in }\ C([0,T];
H_0^1(D)),
\end{equation}
\begin{equation}\label{vary5}
u_{\lambda_k}'\rightarrow u'\ \ \ \rm{weak \ star\ in}\
L^\infty(0,T; H_0^1(D)),
\end{equation}
\begin{equation}\label{vary5a}
u_{\lambda_k}'\rightarrow u'\ \ \ \rm{strongly \ in }\ C([0,T];
L^2(D)),
\end{equation}
and
\begin{equation}\label{vary5b}
u_{\lambda_k}'\rightarrow u'\ \ \ \rm{for \ almost\ all }\ (x,t)\in (0,T)\times
D,
\end{equation}
for some function $u=u(\omega)$. By Lemma \ref{lem:regularize},
\[
g_{\lambda_k}(u_{\lambda_k}')\rightarrow g( u')\ \ \ \rm{for \
almost\ all }\ (x,t)\in (0,T)\times D.
\]
It follows from (\ref{vary2}) and Lemma \ref{lem:regularize1} that
\[
g_{\lambda_k}(u_{\lambda_k}')\rightarrow g(u')\ \ \ \rm{weakly\
in}\ L^{\frac{q}{q-1}}((0,T)\times D).
\]
Thus, $u=u(\omega)$ satisfies (\ref{vary}) in the sense of
distributions over $(0,T)\times D$ for $\omega\in\Omega_2$. Suppose
that for $\omega\in \Omega_2$, there is another subsequence which
converges to $\widetilde{u}=\widetilde{u}(\omega)$ in the sense of
(\ref{vary2})-(\ref{vary5}). Similarly to the proof of Lemma
\ref{lem:regularize2}, we can show that
$u(\omega)=\widetilde{u}(\omega)$ follows from the equation
\[
u_{tt}(\omega)-\widetilde{u}_{tt}(\omega)-\Delta
(u(\omega)-\widetilde{u}(\omega))
+g(u_t(\omega))-g(\widetilde{u}_t(\omega))=f_N(u(\omega))-f_N(\widetilde{u}(\omega)),
\]
and the regularity
\[
u(\omega), \widetilde{u}(\omega)\in L^\infty (0,T;H_0^1(D)\cap
H^2(D))\cap C([0,T);H_0^1(D)),
\]
\[
u_t(\omega), \widetilde{u}_t(\omega)\in L^\infty
(0,T;H_0^1(D))\cap C([0,T);L^2(D)),
\]
\[
g(u_t(\omega)), g(\widetilde{u}_t(\omega))\in
L^{\frac{q}{q-1}}((0,T)\times D).
\]
Again by the same argument as Lemma \ref{lem:regularize2},
$(u,u_t)$ is $(H_0^1(D)\cap H^2(D))\times H_0^1(D)$-valued
progressively measurable. Next we define
\begin{equation}\label{vary6}
\mathcal{A}(u)=||u||^2_{L^\infty(0,T;H^1_0(D)\cap
H^2(D))}+|| u_t||^2_{L^\infty(0,T;H^1_0(D))}+\int_0^T\int_D g
(u_t)u_tdxdt.
\end{equation}
Then by the same argument as (\ref{vary0}), we have
\begin{equation}\label{vary7}
\textbf{E}\big(\mathcal{A}(u)\big)\leq C_N.
\end{equation}
\end{proof}
Now we consider the local existence and uniqueness of solution for
problem (\ref{smain}) under the assumption (\ref{mild}).
\begin{thm}\label{thm local}
Under the assumptions (\ref{condition}), (\ref{mild}) and
(\ref{mild1}), there is a pathwise unique local solution $u$ of
(\ref{smain}) according to Definition \ref{defn} such that the
energy equation holds:
\begin{eqnarray}\label{ienery}
&&||\nabla u(t)||_2^2+||u_t(t)||^2_2+2\int_0^t\int_D |u_t(s)|^qdxds-2\int_0^t\int_D |u(s)|^{p-2}u(s)u_t(s)dxds\nonumber\\
&&=||\nabla u_0||^2_2+||u_1||^2_2+2\int_0^t\big(u_t(s),\varepsilon\sigma(x,s)\big)dW_s+\varepsilon^2\sum_{i=1}^\infty\int_0^t
\int_D \lambda_ie_i^2(x)\sigma^2(x,s)dxds.
\end{eqnarray}
\end{thm}
\begin{proof}
Let us choose sequences $\{u_{0,m}\}$, $\{u_{1,m}\}$ and
$\{\sigma_m(x,t,\omega)\}$ such that
\[
u_{0,m}\in H^1_0(D)\cap H^2(D),\ \ \ u_{1,m}\in H^1_0(D),\ \ \ \sigma_m(x,t,\omega)\in L^2(\Omega;L^2(0,T;H^1_0(D)\cap L^\infty(D)))
\]
\[
\textbf{E}\int_0^T(||\nabla \sigma_m(t)||_2^2+||\sigma_m
(t)||_\infty^2)dt<\infty,
\]
and as $m\rightarrow\infty$,
\begin{equation}\label{exist1}
u_{0,m}\rightarrow u_0\ \ \ \rm{strongly\ in} \ \ \ H^1_0(D),
\end{equation}
\begin{equation}\label{exist2}
u_{1,m}\rightarrow u_1\ \ \ \rm{strongly\ in} \ \ \ L^2(D),
\end{equation}
\begin{equation}\label{exist3}
\textbf{E}\int_0^T ||\sigma_m(x,t)-\sigma(x,t)||_2^2 dt\rightarrow
0.
\end{equation}
For each $m\geq1$, let $u_m$ be the solution of
\begin{eqnarray}\label{exist4}
\begin{cases}
u_{tt}-\Delta u+g(u_t)=f_N(u)+\varepsilon \sigma_m (x,t)\partial_t W(t,x),\ \ \ &(x,t)\in D \times (0,T),\\
u(x,t)=0, &(x,t)\in\partial D\times (0,T),\\
u(x,0)=u_{0,m}(x),\ \ \ u_t(x,0)=u_{1,m}(x),\ \ \ &x\in D.
\end{cases}
\end{eqnarray}
By Lemma \ref{lem:vary}, we have
\begin{equation}\label{exist5}
u_m\in L^2\big(\Omega; L^\infty (0,T;H_0^1(D)\cap H^2(D))\big)\cap
L^2\big(\Omega; C([0,T];H_0^1(D))\big),
\end{equation}
\begin{equation}\label{exist6}
u_m'\in L^2\big(\Omega; L^\infty (0,T;H_0^1(D))\big)\cap
L^2\big(\Omega; C([0,T];L^2(D))\big),
\end{equation}
and the energy equation
\begin{eqnarray}\label{ienery1}
&&||\nabla u_m||_2^2+||u_m'||^2_2+2\int_0^t\int_D |u_m'|^qdxds
-2\int_0^t\int_D \chi_N(||\nabla u_m||_2)|u_m|^{p-2}u_mu_m'(s)dxds\nonumber\\
&&=||\nabla u_{0,m}||^2_2+||u_{1,m}||^2_2+2\int_0^t\big(u_m',\varepsilon\sigma_m\big)dW_s+\varepsilon^2\sum_{i=1}^\infty\int_0^t
\int_D \lambda_ie_i^2(x)\sigma_m^2(x,s)dxds.
\end{eqnarray}
Let
\[
M_m(t,x)=\int_0^t \sigma_m(x,s)dW(s,x),\ \ \ t>0,\ x\in D.
\]
Then, for any $m_1$, $m_2$
\begin{equation}\label{exist7}
(u_{m_1}''-u_{m_2}'')-\Delta
(u_{m_1}-u_{m_2})+g(u_{m_1}')-g(u_{m_2}')=f_N(u_{m_1})-f_N(u_{m_2})+\varepsilon(M_{m_1}-M_{m_2})'
\end{equation}
holds in the sense of distributions over $(0,T)\times D$ for
almost all $\omega$. For the damping term, we use the following
elementary inequality
\begin{equation}\label{iexist}
(|a|^{q-2}a-|b|^{q-2}b)(a-b)\geq c|a-b|^{q}
\end{equation}
for $a,\ b\in \mathbb{R}$, $q\geq2$, where $c$ is a positive
constant; indeed, for $a>b$ one may write
$|a|^{q-2}a-|b|^{q-2}b=(q-1)\int_b^a|s|^{q-2}ds$ and note that
$|s|\geq\frac{a-b}{4}$ on a subset of $(b,a)$ of measure at least
$\frac{a-b}{2}$, which gives the admissible choice
$c=(q-1)2^{3-2q}$. By inequality (\ref{iexist}) and the regularity
(\ref{exist5}) and (\ref{exist6}), we can derive from (\ref{exist7}) that
\begin{eqnarray}\label{exist8}
&&||u_{m_1}'(t)-u_{m_2}'(t)||_2^2+||\nabla u_{m_1}(t)-\nabla u_{m_2}(t)||^2_2+2c\int_0^t
||u_{m_1}'-u_{m_2}'||^q_qds\nonumber\\
&&\leq ||\nabla u_{0,m_1}-\nabla u_{0,m_2}||^2_2+|| u_{1,m_1}-u_{1,m_2}||^2_2+2\int_0^t\big(f_N(u_{m_1})-f_N(u_{m_2}),
u_{m_1}'-u_{m_2}'\big)ds\nonumber\\
&&\ \ \ \ +2\varepsilon\int_0^t\big(\sigma_{m_1}-\sigma_{m_2},
u_{m_1}'-u_{m_2}'\big)dW_s+\varepsilon^2 c_0^2Tr R\int_0^t||\sigma_{m_1}-\sigma_{m_2}||_2^2ds
\end{eqnarray}
for all $t\in [0,T]$. For the third term on the right of
(\ref{exist8}), it follows from (\ref{condition6}) that
\begin{eqnarray}\label{exist9}
&&2\Big|\int_0^t\big(f_N(u_{m_1})-f_N(u_{m_2}),
u_{m_1}'-u_{m_2}'\big)ds\Big|\leq 2\int_0^t ||f_N(u_{m_1})-f_N(u_{m_2})||_2||u_{m_1}'-u_{m_2}'||_2ds\nonumber\\
&&\leq 2C_N\int_0^t||\nabla u_{m_1}-\nabla u_{m_2}||_2||u_{m_1}'-u_{m_2}'||_2ds\nonumber\\
&&\leq C_N\int_0^t||\nabla u_{m_1}-\nabla
u_{m_2}||_2^2dt+C_N\int_0^t||u_{m_1}'-u_{m_2}'||_2^2ds,
\end{eqnarray}
where $C_N$ is a positive constant independent of $m_1$ and $m_2$.
In view of (\ref{exist8}) and (\ref{exist9}), it follows that
\begin{eqnarray}\label{exist13}
&&\textbf{E}\sup_{0\leq t\leq T}\Big(||u_{m_1}'(t)-u_{m_2}'(t)||_2^2+
||\nabla u_{m_1}(t)-\nabla u_{m_2}(t)||^2_2\Big) \nonumber\\
&&\leq ||\nabla u_{0,m_1}-\nabla
u_{0,m_2}||^2_2+C_N\int_0^T\textbf{E}\sup_{0\leq t\leq
T}\Big(||\nabla u_{m_1}-\nabla
u_{m_2}||_2^2+||u_{m_1}'-u_{m_2}'||_2^2\Big)ds\nonumber\\
&&\ \ \ + ||
u_{1,m_1}-u_{1,m_2}||^2_2+\varepsilon^2 c_0^2Tr R\textbf{E}\int_0^T||\sigma_{m_1}-\sigma_{m_2}||_2^2ds\nonumber\\
&&\ \ \ +2\varepsilon\textbf{E}\sup_{0\leq t\leq
T}\Big|\int_0^t\big(\sigma_{m_1}-\sigma_{m_2},
u_{m_1}'-u_{m_2}'\big)dW_s\Big| .
\end{eqnarray}
For the last term on the right of (\ref{exist13}), by the
Burkholder-Davis-Gundy inequality we have
\begin{eqnarray}\label{exist14}
&&\textbf{E}\Big(\sup_{t\in[0,T]}\Big|\int_0^t\big(\sigma_{m_1}-\sigma_{m_2},
u_{m_1}'-u_{m_2}'\big)dW_s\Big|\Big)\nonumber\\
&&\leq C\textbf{E}\Big(\sup_{t\in[0,T]}||u_{m_1}'-u_{m_2}'||_2\Big(\sum_{i=1}^\infty\int_0^T
\big((\sigma_{m_1}-\sigma_{m_2})Re_i,(\sigma_{m_1}-\sigma_{m_2})e_i\big)dt
\Big)^{\frac{1}{2}}\Big)\nonumber\\
&&\leq
\alpha\textbf{E}\Big(\sup_{t\in[0,T]}||u_{m_1}'-u_{m_2}'||_2^2\Big)
+\frac{Cc_0^2}{\alpha}Tr R\textbf{E}\int_0^T||\sigma_{m_1}-\sigma_{m_2}||^2_2dt,
\end{eqnarray}
where $\alpha$ and $C$ are some positive constants. By
taking (\ref{exist13}) and (\ref{exist14}) into account and invoking
the Gronwall inequality again, we get
\begin{eqnarray}\label{exist13'}
&&\textbf{E}\Big(\sup_{t\in[0,T]}||u_{m_1}'-u_{m_2}'||_2^2+\sup_{t\in[0,T]}||\nabla u_{m_1}-\nabla
u_{m_2}||_2^2\Big)\nonumber\\
&&\leq C_N \Big(||\nabla u_{0,m_1}-\nabla u_{0,m_2}||^2_2+||
u_{1,m_1}-u_{1,m_2}||^2_2+\varepsilon^2 c_0^2Tr
R\textbf{E}\int_0^T||\sigma_{m_1}-\sigma_{m_2}||_2^2ds\Big).
\end{eqnarray}
Moreover, it can be derived from (\ref{exist8}) and
(\ref{exist13'}) that
\begin{eqnarray}\label{exist12}
&&\textbf{E}\Big(\int_0^T
||u_{m_1}'-u_{m_2}'||^q_qdt\Big)\nonumber\\
&&\leq C_N \Big(||\nabla u_{0,m_1}-\nabla
u_{0,m_2}||^2_2
+||
u_{1,m_1}-u_{1,m_2}||^2_2 +\varepsilon^2 c_0^2Tr
R\textbf{E}\int_0^T||\sigma_{m_1}-\sigma_{m_2}||_2^2ds\Big).\quad
\end{eqnarray}
It follows from (\ref{exist1})-(\ref{exist3}) and (\ref{exist13'})
that $\{u_m\}$ and $\{u_m'\}$ are Cauchy sequences in
$L^2(\Omega;H_0^1(D))$ and $L^2(\Omega;L^2(D))$, respectively.
Thus,
\begin{equation}\label{exist11}
(u_m, u_m')\rightarrow (u_N,u_N') \ \ \ \rm{strongly \ in }\ \ \
L^2(\Omega;C([0,T];H_0^1(D)\times L^2(D)))
\end{equation}
for some function $u_N$ dependent on $N$. Also, by
(\ref{exist12}), $\{u_m'\}$ is a Cauchy sequence in
$L^q((0,T)\times D)$ and hence converges strongly in
$L^q((0,T)\times D)$. Then there exists a subsequence of $\{u_m'\}$,
still denoted by $\{u_m'\}$, such that
\begin{equation}\label{exist15}
u_m'\rightarrow u_N'\ \ \ \rm {for \ almost\ all}\ (x,t)\in
(0,T)\times D.
\end{equation}
It follows from (\ref{vary7}), (\ref{exist15}) and Lemma
\ref{lem:regularize1} that
\begin{equation}\label{exist16}
|u_m'|^{q-2}u_m'\rightarrow |u_N'|^{q-2}u_N'\ \ \ \rm{weakly\ in}\
L^{\frac{q}{q-1}}((0,T)\times D).
\end{equation}
Therefore, using (\ref{exist11}), (\ref{exist15}), the convergence
of the initial data and $\sigma_m (x,t)$, $u_N$ is the solution of
the following equation
\begin{eqnarray}\label{exist17}
\begin{cases}
u_{tt}-\Delta u+|u_t|^{q-2}u_t=f_N(u)+\varepsilon \sigma (x,t)\partial_t W(t,x),\ \ \ &(x,t)\in D \times (0,T),\\
u(x,t)=0, &(x,t)\in\partial D\times (0,T),\\
u(x,0)=u_{0}(x),\ \ \ u_t(x,0)=u_{1}(x),\ \ \ &x\in D,
\end{cases}
\end{eqnarray}
which satisfies the requirements of Definition \ref{defn}, where
$u_0$, $u_1$ and $\sigma (x,t)$ satisfy condition (\ref{mild}).
For the uniqueness of (\ref{exist17}), the proof is similar to that
of Lemma \ref{lem:regularize2}, so we omit it here.
To obtain the energy equation of (\ref{exist17}), we proceed by
taking the termwise limit in the approximate equation
(\ref{ienery1}). It is easy to show that
\[
||\nabla u_m||_2^2\rightarrow ||\nabla u_N||_2^2,\ \ \ || u_m'||_2^2\rightarrow ||
u_N'||_2^2, \ \ \ ||\nabla u_{0,m}||_2^2\rightarrow ||\nabla
u_0||_2^2, \ \ \ || u_{1,m}||_2^2\rightarrow ||
u_1||_2^2
\]
in the mean and
\[
\int_0^t\int_D |u_m'|^{q}\rightarrow \int_0^t\int_D |u_N'|^{q}
\]
by (\ref{exist16}). By the dominated convergence theorem, the term
$\sum_{i=1}^\infty\int_0^t
\int_D \lambda_ie_i^2(x)\sigma_m^2(x,s)dxds$ converges in the mean to
$\sum_{i=1}^\infty\int_0^t
\int_D \lambda_ie_i^2(x)\sigma^2(x,s)dxds$. For the remaining two terms in
(\ref{ienery1}), we first consider
\begin{eqnarray}\label{exist18}
&&\Big|\int_0^t\int_D \chi_N(||\nabla
u_m||_2)|u_m|^{p-2}u_mu_m'(s)dxds-\int_0^t\int_D \chi_N(||\nabla
u_N||_2)|u_N|^{p-2}u_Nu_N'(s)dxds\Big|\nonumber\\
&&\leq\int_0^t\Big|\big(f_N(u_m)-f_N(u_N),u_N'\big)\Big|ds
+\int_0^t\Big|\big(f_N(u_m),u_m'-u_N'\big)\Big|ds.
\end{eqnarray}
From (\ref{condition6}) and (\ref{sobolev}), we get
\begin{equation}\label{exist19}
\Big|\big(f_N(u_m)-f_N(u_N),u_N'\big)\Big|\leq
||f_N(u_m)-f_N(u_N)||_2||u_N'||_2\leq C_N ||\nabla u_m-\nabla
u_N||_2||u_N'||_2,
\end{equation}
and
\begin{equation}\label{exist20}
\Big|\big(f_N(u_m),u_m'-u_N'\big)\Big|\leq C_N ||\nabla
u_m||_2||u_m'-u_N'||_2.
\end{equation}
Substituting (\ref{exist19}) and (\ref{exist20}) into
(\ref{exist18}), we obtain
\begin{eqnarray*}
&&\Big|\int_0^t\int_D \chi_N(||\nabla
u_m||_2)|u_m|^{p-2}u_mu_m'(s)dxds-\int_0^t\int_D \chi_N(||\nabla
u_N||_2)|u_N|^{p-2}u_Nu_N'(s)dxds\Big|\nonumber\\
&&\leq C_N\int_0^t(||\nabla
u_m||_2+||u_N'||_2)(||\nabla u_m-\nabla u_N||_2+||u_m'-u_N'||_2)ds.
\end{eqnarray*}
Therefore
\begin{eqnarray*}
&&\textbf{E}\Big|\int_0^t\int_D \chi_N(||\nabla
u_m||_2)|u_m|^{p-2}u_mu_m'(s)dxds-\int_0^t\int_D \chi_N(||\nabla
u_N||_2)|u_N|^{p-2}u_Nu_N'(s)dxds\Big|^2\nonumber\\
&&\leq 2C_N\Big(\textbf{E}\int_0^T(||\nabla
u_m||_2^2+||u_N'||_2^2)ds\Big)\Big(\textbf{E}\int_0^T(||\nabla u_m-\nabla
u_N||_2^2+||u_m'-u_N'||_2^2)ds\Big),
\end{eqnarray*}
which converges to zero as $m\rightarrow\infty$. Finally, for the
stochastic integral term, we have
\begin{eqnarray*}
&&\textbf{E}\Big|\int_0^t\big(u_m',\sigma_m\big)dW_s-\int_0^t\big(u_N',\sigma\big)dW_s\Big|\\
&&\leq
\textbf{E}\Big|\int_0^t\big(u_m'-u_N',\sigma_m\big)dW_s\Big|
+\textbf{E}\Big|\int_0^t\big(u_N',\sigma_m-\sigma\big)dW_s\Big|.
\end{eqnarray*}
Now, by the Burkholder-Davis-Gundy inequality, we have
\begin{eqnarray*}
&& \textbf{E}\sup_{0\leq t\leq
T}\Big|\int_0^t\big(u_m'-u_N',\sigma_m\big)dW_s\Big|\leq
C\textbf{E}\Big(\sup_{0\leq t\leq
T}||u_m'-u_N'||_2\Big(\sum_{i=1}^\infty\int^T_0(\sigma_{m}R e_i,\sigma_{m}e_i)dt\Big)^{\frac{1}{2}}\Big)\quad\\
&&\leq Cc_0Tr R\Big(\textbf{E}\sup_{0\leq t\leq
T}||u_m'-u_N'||_2^2\Big)^{\frac{1}{2}}\Big(\textbf{E}
\int^T_0||\sigma_{m}||^2_2dt\Big)^{\frac{1}{2}}\rightarrow0,\
\ \ \rm{as}\ m\rightarrow\infty.
\end{eqnarray*}
Similarly,
\[
\textbf{E}\sup_{0\leq t\leq
T}\Big|\int_0^t\big(u_N',\sigma_m-\sigma\big)dW_s\Big| \leq Cc_0Tr
R\Big(\textbf{E}\sup_{0\leq t\leq
T}||u_N'||_2^2\Big)^{\frac{1}{2}}\Big(\textbf{E}
\int^T_0||\sigma_m-\sigma||_2^2dt\Big)^{\frac{1}{2}},
\]
which also tends to zero as $m\rightarrow\infty$ by (\ref{exist3}).
The above three inequalities imply that
\[
\int_0^t\big(u_m',\sigma_m\big)dW_s\rightarrow\int_0^t\big(u_N',\sigma\big)dW_s,\
\ \ \rm{as}\ m\rightarrow\infty.
\]
Hence, we obtain the energy equation of (\ref{exist17})
\begin{eqnarray}\label{ienery2}
&&||\nabla u_N||_2^2+||u_N'||^2_2+2\int_0^t\int_D |u_N'|^qdxds
-2\int_0^t\int_D \chi_N(||\nabla u_N||_2)|u_N|^{p-2}u_Nu_N'(s)dxds\nonumber\\
&&=||\nabla u_{0}||^2_2+||u_{1}||^2_2+2\int_0^t\big(u_N',\varepsilon\sigma\big)dW_s+\varepsilon^2\sum_{i=1}^\infty\int_0^t
\int_D \lambda_ie_i^2(x)\sigma^2(x,s)dxds.
\end{eqnarray}
For each $N$, introduce the stopping time $\tau_N$ by
\[
\tau_N=\inf\{t>0;||\nabla u_N||_2\geq N\}.
\]
By the uniqueness of the solution of (\ref{exist17}), for
$t\in[0,\tau_N\wedge T)$, $u(t)=u_N(t)$ is the local solution of
(\ref{smain}). As $\tau_N$ is increasing in $N$, let
$\tau_\infty=\lim_{N\rightarrow\infty}\tau_N$. Hence, we construct
a unique continuous local solution $u(t)=\lim_{N\rightarrow\infty}
u_N(t)$ to (\ref{smain}) on $[0,T\wedge\tau_\infty)$, which
satisfies the requirements of Definition \ref{defn} and the energy
equation (\ref{ienery}).
\end{proof}
To obtain a global solution, it is necessary to consider the
interaction between the damping term $|u_t|^{q-2}u_t$ and the
source term $|u|^{p-2}u$ such that a certain energy bound can be
established to prevent unbounded growth. To state the next
theorem, we define
\[
e\big(u(t)\big)=||u_t(t)||_2^2+||\nabla
u(t)||_2^2+\frac{2}{p}||u||_p^p.
\]
\begin{thm}\label{thm global}
Suppose (\ref{condition}), (\ref{mild}) and (\ref{mild1}) hold.
If $q\geq p$, then for any $T>0$, there is a unique solution $u$
of (\ref{smain}) according to Definition \ref{defn} on the
interval $[0,T]$ such that
\begin{equation}\label{global0}
\textbf{E}\sup_{0\leq t\leq T}e\big(u(t)\big)<\infty.
\end{equation}
\end{thm}
\begin{proof}
For any $T>0$, we will show that
$u_N(t)=u(t\wedge\tau_N)\rightarrow u$ a.s. as
$N\rightarrow\infty$ for any $t\leq T$, so that the local solution
becomes a global one. To this end, it suffices to show that
$\tau_N\rightarrow\infty$ as $N\rightarrow\infty$ with probability
one.
Recall that, for $t\in[0,\tau_N\wedge T)$,
$u(t)=u_N(t)=u(t\wedge\tau_N)$ is the local solution of
(\ref{smain}). By Theorem~\ref{thm local}, the following
energy equation holds:
\begin{eqnarray}\label{global}
&&e\big(u(t\wedge\tau_N)\big)=e(u_0)+
4\int_0^{t\wedge \tau_N}\int_D |u|^{p-2}uu_t(s)dxds-2\int_0^{t\wedge \tau_N}\int_D |u_t(s)|^qdxds\nonumber\\
&&\quad\quad\quad\quad\ \ +2\int_0^{t\wedge \tau_N}\big(u_t(s),\varepsilon\sigma\big)dW_s+\varepsilon^2\sum_{i=1}^\infty\int_0^{t\wedge \tau_N}
\Big(\sigma(x,s)R e_i(x),\sigma(x,s)e_i(x)\Big)ds.
\end{eqnarray}
Using the H\"{o}lder inequality and Young's inequality, we get
\begin{equation}\label{global1}
\Big|\int_D |u|^{p-2}uu_t(s)dx\Big|\leq
||u||_{p}^{p-1}||u_t||_p\leq \beta ||u_t||_p^p+C_\beta
||u||_{p}^p,
\end{equation}
where $\beta>0$ and $C_\beta$ is a constant depending on $\beta$.
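For completeness, we note that the second inequality in (\ref{global1}) is Young's inequality
\[
ab\leq \frac{(\delta a)^{p}}{p}+\frac{1}{p'}\Big(\frac{b}{\delta}\Big)^{p'},\ \ \ p'=\frac{p}{p-1},\ \ \delta>0,
\]
applied with $a=||u_t||_p$ and $b=||u||_p^{p-1}$; choosing $\delta=(\beta p)^{\frac{1}{p}}$ shows that one admissible constant is $C_\beta=\frac{p-1}{p}(\beta p)^{-\frac{1}{p-1}}$.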
Since $q\geq p$ and (\ref{condition}) holds, the embedding
inequality yields
\begin{equation}\label{global2}
||u_t||_p^p\leq C||u_t||_q^p,
\end{equation}
where $C$ is the embedding constant. Therefore, from
(\ref{global}), (\ref{global1}) and (\ref{global2}), we get
\begin{eqnarray}\label{global3}
&&e\big(u(t\wedge\tau_N)\big)\leq 4C\beta\int_0^{t\wedge \tau_N}
||u_t(s)||_q^pds-2\int_0^{t\wedge \tau_N} ||u_t(s)||_q^qds
+4C_\beta\int_0^{t\wedge \tau_N} ||u||_p^{p}ds\nonumber\\
&&\quad\quad\quad\quad\ \ +e(u_0)+2\int_0^{t\wedge \tau_N}\big(u_t(s),\varepsilon\sigma\big)dW_s
+\varepsilon^2c_0^2Tr R\int_0^{t\wedge \tau_N}
||\sigma(s)||_2^2ds.
\end{eqnarray}
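The constant $C$ in (\ref{global2}) can be made explicit on the bounded domain $D$: when $q>p$, H\"{o}lder's inequality with the exponents $\frac{q}{p}$ and $\frac{q}{q-p}$ gives
\[
||u_t||_p^p=\int_D |u_t|^p\cdot 1\,dx\leq \Big(\int_D|u_t|^{q}dx\Big)^{\frac{p}{q}}|D|^{\frac{q-p}{q}},
\]
so one may take $C=|D|^{\frac{q-p}{q}}$ (the case $q=p$ being trivial with $C=1$).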
Using $q\geq p$, at this point we distinguish two cases:
$(i)$ Either $||u_t||_q^q>1$; since $||u_t||_q>1$ and $p\leq q$
imply $||u_t||_q^p\leq||u_t||_q^q$, any choice
$\beta\leq\frac{1}{2C}$ gives
$-2||u_t||_q^q+4C\beta||u_t||_q^p\leq0$.
$(ii)$ Or $||u_t||_q^q\leq1$; in this case we have
$-2||u_t||_q^q+4C\beta||u_t||_q^p\leq 4C\beta$.\\
Therefore in either case, we have
\begin{eqnarray}\label{global4}
&&e\big(u(t\wedge\tau_N)\big)\leq e(u_0)+4C\beta(t\wedge \tau_N)
+4C_\beta\int_0^{t\wedge \tau_N} ||u||_p^{p}ds\nonumber\\
&&\quad\quad\quad\quad\ \ +2\int_0^{t\wedge \tau_N}\big(u_t(s),\varepsilon\sigma\big)dW_s+
\varepsilon^2c_0^2Tr R\int_0^{t\wedge \tau_N}
||\sigma(s)||_2^2ds.
\end{eqnarray}
By taking the expectation of (\ref{global4}), we obtain
\[
\textbf{E}e\big(u(t\wedge\tau_N)\big)\leq e(u_0)+4C\beta t+
\varepsilon^2c_0^2Tr R\int_0^{t}
\textbf{E}||\sigma(s)||_2^2ds+K\int_0^{t}\textbf{E}e\big(u(s\wedge\tau_N)\big)ds,
\]
where $K>0$ is a constant, which, by the Gronwall inequality and
(\ref{mild}), implies that
\begin{equation}\label{global5}
\textbf{E}e\big(u(T\wedge\tau_N)\big)\leq
\big(e(u_0)+CT\big)e^{KT}\leq C_T.
\end{equation}
On the other hand, we have
\[
\textbf{E}e\big(u(T\wedge\tau_N)\big)\geq \textbf{E}
\Big(I(\tau_N\leq T)e\big(u(\tau_N)\big)\Big)\geq
C\textbf{E}\Big(||\nabla u(\tau_N)||^2_2I(\tau_N\leq T)\Big)\geq CN^2
P(\tau_N\leq T),
\]
where $I$ is the indicator function. In view of (\ref{global5}),
the above inequality gives
\[
P(\tau_\infty\leq T)\leq P(\tau_N\leq T)\leq \frac{C_T}{N^2},
\]
which, with the aid of the Borel--Cantelli lemma, implies that
\[
P(\tau_\infty\leq T)=0,
\]
or equivalently
\[
\lim_{N\rightarrow\infty} \tau_N=\infty\ \ \ a.s.
\]
Hence, on $[0,\tau_\infty\wedge T)=[0,T)$,
$u=\lim_{N\rightarrow\infty}u_N(t)$ is the global solution as
announced. Since $T>0$ was chosen arbitrarily, we may replace
$[0,T)$ by $[0,T]$.
To verify the energy bound (\ref{global0}), by the energy equation
(\ref{ienery}), (\ref{global1}), (\ref{global2}) and (\ref{mild}),
we have
\[
e\big(u(t)\big)\leq e(u_0)+(4C\beta+\varepsilon^2C_1)t
+4KC_\beta\int_0^{t} e\big(u(s)\big)ds
+2\int_0^{t}\big(u_t(s),\varepsilon\sigma\big)dW_s,
\]
where $C_1$ and $K$ are positive constants. The above inequality
yields
\begin{equation}\label{global6}
\textbf{E}\sup_{0\leq t\leq T}e\big(u(t)\big)\leq
e(u_0)+(4C\beta+\varepsilon^2C_1)T+4KC_\beta\int_0^{T}\textbf{E}\sup_{0\leq
r\leq s}e\big(u(r)\big)ds+2\textbf{E}\sup_{0\leq t\leq
T}\int_0^{t}\big(u_t,\varepsilon\sigma\big)dW_s.
\end{equation}
By the Burkholder-Davis-Gundy inequality, we have
\begin{eqnarray}\label{global7}
&& \textbf{E}\sup_{0\leq t\leq
T}\Big|\int_0^t\big(u_t,\varepsilon\sigma\big)dW_s\Big|\leq
C_2\textbf{E}\Big(\sup_{0\leq t\leq
T}||u_t||_2\Big(\varepsilon^2\sum_{i=1}^\infty\int^T_0\big(\sigma R e_i,\sigma e_i\big)dt\Big)^{\frac{1}{2}}\Big)\\
&&\leq \frac{1}{4}\textbf{E}\sup_{0\leq t\leq
T}||u_t||_2^2+C_3c_0^2\varepsilon^2TrR
\int^T_0\textbf{E}||\sigma(t)||_2^2dt
\end{eqnarray}
for some constants $C_2$, $C_3>0$. In view of (\ref{mild}),
(\ref{global6}) and (\ref{global7}), there exist positive
constants $C_4$ and $C_5$ depending on $\beta,\ T$ etc. such that
\[
\textbf{E}\sup_{0\leq t\leq T}e\big(u(t)\big)\leq
C_4+C_5\int_0^{T}\textbf{E}\sup_{0\leq r\leq s}e\big(u(r)\big)ds.
\]
By applying the Gronwall inequality, the above gives
\[
\textbf{E}\sup_{0\leq t\leq T}e\big(u(t)\big)\leq C_4e^{C_5T},
\]
which implies the energy bound (\ref{global0}).
\end{proof}
\section{Explosive solution of (\ref{smain})}
In this section, we turn to the explosion of the
solution to (\ref{smain}) for $p>q$. Throughout this section, we
suppose that $\sigma(x,t,\omega)\equiv \sigma(x,t)$ satisfies
\begin{equation}\label{blowup}
\int_0^\infty \int_D \sigma^2(x,t)dxdt<\infty.
\end{equation}
As is well known, equation (\ref{smain}) is equivalent to the
following It\^{o} system
\begin{eqnarray}\label{emain}
\begin{cases}
du_{t}=v_tdt,\\
dv_t=\Big(\Delta u_t- |v_t|^{q-2}v_t+|u_t|^{p-2}u_t\Big)dt+\varepsilon\sigma(x,t)d W(t,x),\\
u(x,t)=0, \ \ \ x\in\partial D,\\
u(x,0)=u_{0}(x),\ \ \ v(x,0)=u_1(x),
\end{cases}
\end{eqnarray}
where $(u_0,u_1)\in H_0^1(D)\times L^2(D)$. Define the energy
functional $\mathcal{E}(t)$ associated with our system by
\begin{equation*}
\mathcal{E}(t)=\frac{1}{2}||v_t(t)||^2_2+\frac{1}{2}||\nabla
u_t(t)||^2_2 -\frac{1}{p}||u_t||_{p}^{p}.
\end{equation*}
Before we state and prove our explosion result, we need the
following lemmas.
\begin{lem}\label{lem blowup}
Assume (\ref{condition}) and (\ref{blowup}) hold. Let $(u_t,v_t)$
be a solution of system (\ref{emain}) with initial data
$(u_0,u_1)\in H_0^1(D)\times L^2(D)$. Then we have
\begin{equation}\label{blowup3}
\frac{d}{dt}\textbf{E}\mathcal{E}(t)=-\textbf{E}||v_t||_q^q+\frac{1}{2}\varepsilon^2
\sum_{i=1}^\infty \int_D \lambda_ie_i^2(x)\sigma^2(x,t)dx,
\end{equation}
where $\lambda_i$ and $e_i(x)$ are defined in Section $2$, and
\begin{eqnarray}\label{blowup2}
&&\textbf{E}\big( u_t(t), v_t(t)\big)=\big(
u_0(x),v_0(x)\big)-\int_0^t \textbf{E}||\nabla
u_{s}||_2^2ds+\int_0^t\textbf{E}||v_s(s)||_2^2ds
\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\
-\int_0^t\textbf{E}\big(|v_s|^{q-2}v_s, u_s\big)
ds+\int_0^t\textbf{E}||u_s(s)||_{p}^{p}ds.
\end{eqnarray}
\end{lem}
\begin{proof}
Applying the It\^{o} formula to $||v_t||_2^2$, we have
\begin{eqnarray}\label{blowup4}
&&||v_t||_2^2=||v_0||_2^2+2\int_0^t( v_s, d v_s)+\int_0^t( dv_s, dv_s)\nonumber\\
&&=||v_0||^2_2-2\int_0^t( \nabla u_s, \nabla v_s)
ds-2\int_0^t||v_s||_q^qds+2\int_0^t( v_s, |u_s|^{p-2}u_s)
ds\nonumber\\
&&\quad +2\int_0^t ( v_s, \varepsilon\sigma (x,s)dW(s))
+\varepsilon^2 \sum_{i=1}^\infty\int_0^t
\big(\sigma(x,s)Re_i,\sigma(x,s)e_i\big)ds\nonumber\\
&&=2\mathcal{E}(0)-||\nabla
u_t(t)||^2_2-2\int_0^t||v_s||_q^qds +\frac{2}{p}||u_t(t)||_{p}^{p}\nonumber\\
&&\quad +2\int_0^t ( v_s, \varepsilon\sigma (x,s)dW(s))
+\varepsilon^2 \sum_{i=1}^\infty \int_0^t \int_D
\lambda_ie_i^2(x)\sigma^2(x,s)dxds.
\end{eqnarray}
(\ref{blowup3}) follows from (\ref{blowup4}) by taking the
expectation and then differentiating in $t$. Next we turn to the
proof of (\ref{blowup2}). By the It\^{o} product rule,
\begin{eqnarray}\label{blowup5}
&&\big( u_t(t), v_t(t)\big)=\big( u_0, v_0\big)+\int_0^t\big( u_s(s),d v_s(s)\big)
+\int_0^t\big(v_s(s),d u_s(s)\big)\nonumber\\
&&=\big( u_0, v_0\big)-\int_0^t ||\nabla u_{s}(s)||^2_2ds-
\int_0^t\big( |v_s|^{q-2}v_s, u_s(s)\big) ds\nonumber\\
&&\quad+\int_0^t\big( u_s, |u_s|^{p-2}u_s\big) ds +\int_0^t \big(
u_s(s), \varepsilon\sigma (x,s)dW(s)\big) +\int_0^t
||v_s(s)||_2^2ds.
\end{eqnarray}
Then (\ref{blowup2}) follows from (\ref{blowup5}) by taking the expectation.
\end{proof}
Let
\[F(t)=\frac{1}{2}\varepsilon^2 \sum_{i=1}^\infty \int_0^t \int_D
\lambda_ie_i^2(x)\sigma^2(x,s)dxds.
\]
From (\ref{blowup}), we have
\begin{equation}\label{blowup0}
F(\infty)=\frac{1}{2}\varepsilon^2 \sum_{i=1}^\infty \int_0^\infty
\int_D \lambda_ie_i^2(x)\sigma^2(x,s)dxds\leq
\frac{1}{2}\varepsilon^2c^2_0Tr R \int_0^\infty
\int_D\sigma^2(x,s)dxds= E_1<\infty.
\end{equation}
Denote
\[
H(t)=F(t)-\textbf{E}\mathcal{E}(t).
\]
Then, by (\ref{blowup3}), we get
\begin{equation}\label{blowup9'}
H'(t)=F'(t)-\frac{d}{dt}\textbf{E}\mathcal{E}(t)=\textbf{E}||v_t||_q^q\geq0.
\end{equation}
\begin{lem}\label{lem blowup1}
Let $(u_t, v_t)$ be a solution of (\ref{emain}). Assume
(\ref{condition}) holds. Then there exists a positive constant
$C>1$ such that
\begin{equation}\label{blowup6}
\textbf{E}||u_t||_p^s\leq
C(F(t)-H(t)-\textbf{E}||v_t||^2_2+\textbf{E}||u_t||^p_p)
\end{equation}
for any $2\leq s\leq p$.
\end{lem}
\begin{proof}
If $||u_t||^p_p\leq1$ then $||u_t||^s_p\leq ||u_t||^2_p\leq
C||\nabla u_t||^2_2$ by the Sobolev embedding. If $||u_t||^p_p\geq1$
then $||u_t||^s_p\leq ||u_t||^p_p$. Therefore, it follows that
\begin{equation}\label{blowup7}
\textbf{E} ||u_t||^s_p\leq C(\textbf{E}||\nabla
u_t||^2_2+\textbf{E}||u_t||^p_p).
\end{equation}
By the definition of the energy functional, we have
\begin{equation}\label{blowup7'}
\frac{1}{2}\textbf{E}||\nabla
u_t||^2_2=\textbf{E}\mathcal{E}(t)-\frac{1}{2}\textbf{E}||v_t||^2_2+\frac{1}{p}\textbf{E}||u_t||^p_p
=F(t)-H(t)-\frac{1}{2}\textbf{E}||v_t||^2_2+\frac{1}{p}\textbf{E}||u_t||^p_p.
\end{equation}
Then, (\ref{blowup6}) follows from (\ref{blowup7}) and
(\ref{blowup7'}).
\end{proof}
We are now in a position to state our explosion result for
(\ref{smain}) with $p>q$.
\begin{thm}\label{thm blowup}
Assume (\ref{condition}) and (\ref{blowup}) hold. Let $(u_t,v_t)$
be the solution of (\ref{emain}) with initial data $(u_0,u_1)\in
H_0^1(D)\times L^2(D)$ satisfying
\begin{equation}\label{blowup8}
\mathcal{E}(0)\leq -(1+\beta)E_1,
\end{equation}
where $\beta>0$ is any constant and $E_1$ is defined in
(\ref{blowup0}). If $p>q$, then, with $\tau_\infty$ the lifespan
defined in Section $3$ with respect to
the $L^2$ norm, either\\
(1) $\textbf{P}(\tau_\infty<\infty)>0$, i.e., $u_t(t)$ in the $L^2$
norm blows up in finite time with positive probability, or\\
(2) there exists a positive time $T^*\in(0,T_0]$ such that
\[
\lim_{t\rightarrow T^*}\textbf{E}\mathcal{E}(t)=-\infty,
\]
with
\[
T_0= \frac{1-\alpha}{\alpha K L^\frac{\alpha}{1-\alpha}(0)},
\]
where $\alpha$, $K$ and $L(0)$ are given in the proof.
\end{thm}
\begin{proof}
For the lifespan $\tau_\infty$ of the solution $\{u_t(t);\
t\geq0\}$ of (\ref{smain}) with $L^2$ norm, let us consider the
case when $\textbf{P}(\tau_\infty=+\infty)=1$. Then, for
sufficiently large $T>0$, by (\ref{blowup9'}) and (\ref{blowup8}),
we have
\begin{equation}\label{blowup9}
0< (1+\beta)E_1\leq -\mathcal{E}(0)=H(0)\leq H(t)\leq
F(t)+\frac{1}{p}\textbf{E}||u_t||^p_p\leq
E_1+\frac{1}{p}\textbf{E}||u_t||^p_p.
\end{equation}
Define
\[
L(t):=H^{1-\alpha}(t)+\mu\textbf{E}(u_t,v_t),
\]
for small $\mu$ to be chosen later and for
\begin{equation}\label{blowup10}
0<\alpha<\min\Big\{\frac{1}{2},\frac{p-q}{pq}\Big\}.
\end{equation}
Taking a derivative of $L(t)$ and using (\ref{blowup2}) and
(\ref{blowup9'}), we obtain
\begin{eqnarray}\label{blowup11}
&&L'(t)=(1-\alpha)H^{-\alpha}(t)H'(t)+\mu\Big(-\textbf{E}||\nabla u_t||_2^2-\textbf{E}(|v_t|^{q-2}v_t,u_t)
+\textbf{E}||u_t||^p_p+\textbf{E}||v_t||^2_2\Big)\nonumber\\
&&\quad\quad\ =(1-\alpha)H^{-\alpha}(t)\textbf{E}||v_t||_q^q+\mu p
H(t)+\mu(\frac{p}{2}+1)\textbf{E}||v_t||^2_2\nonumber\\
&&\quad\quad\ \ \ +\mu(\frac{p}{2}-1)\textbf{E}||\nabla
u_t||_2^2-\mu\textbf{E}(|v_t|^{q-2}v_t,u_t)-\mu p F(t).
\end{eqnarray}
Exploiting the inequality $\textbf{E}||u_t||_q^q\leq C
\textbf{E}||u_t||_p^q$ and the assumption $q<p$, we obtain
\begin{eqnarray}\label{blowup11'}
&&\Big|\textbf{E}(|v_t|^{q-2}v_t,u_t)\Big|\leq\big(\textbf{E}||v_t||^q_q\big)^{\frac{q-1}{q}}
\big(\textbf{E}||u_t||^q_q\big)^{\frac{1}{q}}\leq
C\big(\textbf{E}||v_t||^q_q\big)^{\frac{q-1}{q}}\big(\textbf{E}||u_t||^q_p\big)^{\frac{1}{q}}\nonumber\\
&&\leq
C\big(\textbf{E}||v_t||^q_q\big)^{\frac{q-1}{q}}\big(\textbf{E}||u_t||^p_p\big)^{\frac{1}{p}}\leq
C\big(\textbf{E}||v_t||^q_q\big)^{\frac{q-1}{q}}\big(\textbf{E}||u_t||^p_p\big)^{\frac{1}{q}}
\big(\textbf{E}||u_t||^p_p\big)^{\frac{1}{p}-\frac{1}{q}}.
\end{eqnarray}
The Young's inequality gives
\begin{equation}\label{blowup10'}
\big(\textbf{E}||v_t||^q_q\big)^{\frac{q-1}{q}}\big(\textbf{E}||u_t||^p_p\big)^{\frac{1}{q}}\leq
\frac{q-1}{q}k\textbf{E}||v_t||^q_q+\frac{k^{1-q}}{q}\textbf{E}||u_t||^p_p.
\end{equation}
In view of (\ref{blowup9}), $F(t)\leq E_1\leq H(t)/(1+\beta)$, and
hence
\[
\textbf{E}||u_t||^p_p\geq p\big(H(t)-F(t)\big)\geq \kappa H(t),
\]
where $\kappa=p\beta/(1+\beta)$. Choosing $\alpha$ satisfying
(\ref{blowup10}) and assuming $H(0)>1$, we have
\begin{equation}\label{blowup01}
\big(\textbf{E}||u_t||^p_p\big)^{\frac{1}{p}-\frac{1}{q}}\leq
\kappa^{\frac{1}{p}-\frac{1}{q}} H^{\frac{1}{p}-\frac{1}{q}}(t)
\leq \kappa^{\frac{1}{p}-\frac{1}{q}}H^{-\alpha}(t) \leq \kappa^{\frac{1}{p}-\frac{1}{q}}H^{-\alpha}(0).
\end{equation}
Substituting (\ref{blowup10'}) and (\ref{blowup01}) into
(\ref{blowup11'}), we obtain
\begin{equation}\label{blowup02}
\Big|\textbf{E}(|v_t|^{q-2}v_t,u_t)\Big|\leq C_1
\frac{q-1}{q}k\textbf{E}||v_t||^q_q H^{-\alpha}(t)+C_1\frac{k^{1-q}}{q}\textbf{E}||u_t||^p_p
H^{-\alpha}(0),
\end{equation}
where $C_1=C\kappa^{\frac{1}{p}-\frac{1}{q}}$. Thus, from
(\ref{blowup11}) and (\ref{blowup02}) it follows that
\begin{eqnarray}\label{blowup12}
&&L'(t)\geq\Big((1-\alpha)-C_1\frac{q-1}{q}\mu
k\Big)H^{-\alpha}(t)\textbf{E}||v_t||_q^q+\mu p
H(t)+\mu(\frac{p}{2}+1)\textbf{E}||v_t||^2_2-\mu p F(t)\nonumber\\
&&\quad\quad\ \ \ +\mu(\frac{p}{2}-1)\textbf{E}||\nabla u_t||_2^2
-\mu C_1\frac{k^{1-q}}{q}H^{-\alpha}(0) \textbf{E}||u_t||^p_p.
\end{eqnarray}
We now use Lemma \ref{lem blowup1} with $s=p$ to deduce from
(\ref{blowup12})
\begin{eqnarray}\label{blowup15}
&&L'(t)\geq\Big((1-\alpha)-C_1\frac{q-1}{q}\mu
k\Big)H^{-\alpha}(t)\textbf{E}||v_t||_q^q+\mu p
H(t)+\mu(\frac{p}{2}+1)\textbf{E}||v_t||^2_2-\mu p F(t)\nonumber\\
&&\quad\quad\ \ \ +\mu(\frac{p}{2}-1)\textbf{E}||\nabla u_t||_2^2
-\mu k^{1-q}C_2\Big(
F(t)-H(t)-\textbf{E}||v_t||^2_2+\textbf{E}||u_t||^p_p\Big)\nonumber\\
&&\quad\quad \ \geq \Big((1-\alpha)-C_1\frac{q-1}{q}\mu
k\Big)H^{-\alpha}(t)\textbf{E}||v_t||_q^q+\mu(\frac{p}{2}+1+k^{1-q}C_2)\textbf{E}||v_t||^2_2
+\mu(\frac{p}{2}-1)\textbf{E}||\nabla u_t||_2^2\nonumber\\
&&\quad\quad\ \ \ +\mu(p+k^{1-q}C_2)H(t)-\mu
k^{1-q}C_2\textbf{E}||u_t||^p_p-\mu(p+k^{1-q}C_2)F(t),
\end{eqnarray}
where $C_2=C_1H^{-\alpha}(0)/q$. Noting that
\[
H(t)=F(t)+\frac{1}{p}\textbf{E}||u_t||^p_p-\frac{1}{2}\textbf{E}||\nabla u_t||^2_2-\frac{1}{2}\textbf{E}||v_t||^2_2
\]
and writing $p=2C_3+(p-2C_3)$, where $C_3<(p-2)/2$, the estimate
(\ref{blowup15}) implies
\begin{eqnarray}\label{blowup16}
&&L'(t)\geq \Big((1-\alpha)-C_1\frac{q-1}{q}\mu
k\Big)H^{-\alpha}(t)\textbf{E}||v_t||_q^q+\mu(\frac{p}{2}+1+k^{1-q}C_2-C_3)\textbf{E}||v_t||^2_2\nonumber\\
&&\quad\quad\ \ \ +\mu(\frac{p}{2}-1-C_3)\textbf{E}||\nabla u_t||_2^2+\mu(p-2C_3+k^{1-q}C_2)H(t)\nonumber\\
&&\quad\quad\ \ \ +\mu
\big(\frac{2C_3}{p}-k^{1-q}C_2\big)\textbf{E}||u_t||^p_p-\mu(p-2C_3+k^{1-q}C_2)F(t).
\end{eqnarray}
In view of (\ref{blowup8}) and (\ref{blowup9}), we get
\[
(p-2C_3+k^{1-q}C_2)F(t)\leq (p-2C_3+k^{1-q}C_2)E_1\leq
\frac{(p-2C_3+k^{1-q}C_2)}{1+\beta}H(t).
\]
Substituting the above inequality into (\ref{blowup16}), we get
\begin{eqnarray*}
&&L'(t)\geq \Big((1-\alpha)-C_1\frac{q-1}{q}\mu
k\Big)H^{-\alpha}(t)\textbf{E}||v_t||_q^q+\mu\Big((\frac{p}{2}+1+k^{1-q}C_2-C_3)\textbf{E}||v_t||^2_2\nonumber\\
&&\quad\quad\ +(\frac{p}{2}-1-C_3)\textbf{E}||\nabla
u_t||_2^2+(p-2C_3+k^{1-q}C_2)\frac{\beta}{1+\beta}H(t) +
\big(\frac{2C_3}{p}-k^{1-q}C_2\big)\textbf{E}||u_t||^p_p\Big).
\end{eqnarray*}
At this point, we choose $k$ large enough so that the above
inequality becomes
\begin{equation}\label{blowup17}
L'(t)\geq \Big((1-\alpha)-C_1\frac{q-1}{q}\mu
k\Big)H^{-\alpha}(t)\textbf{E}||v_t||_q^q+\mu\gamma\big(H(t)+\textbf{E}||\nabla
u_t||_2^2+\textbf{E}||v_t||^2_2+\textbf{E}||u_t||^p_p\big),
\end{equation}
where $\gamma>0$ is the minimum of the coefficients of $H(t)$,
$\textbf{E}||\nabla u_t||_2^2,\ \textbf{E}||v_t||^2_2,\
\textbf{E}||u_t||^p_p$ in (\ref{blowup17}). Once $k$ is fixed, we
pick $\mu$ small enough so that
\[
(1-\alpha)-C_1\frac{q-1}{q}\mu k\geq0
\]
and
\[
L(0)=H^{1-\alpha}(0)+\mu (u_0,u_1)>0.
\]
Therefore, (\ref{blowup17}) takes on the form
\begin{equation}\label{blowup18}
L'(t)\geq \mu\gamma\big(H(t)+\textbf{E}||\nabla
u_t||_2^2+\textbf{E}||v_t||^2_2+\textbf{E}||u_t||^p_p\big)\geq0.
\end{equation}
Consequently, we have
\[
L(t)\geq L(0)>0,\ \ \ \forall t\geq0.
\]
By H\"{o}lder inequality, we get
\[
\Big|\textbf{E}\big(u_t,v_t\big)\Big|\leq
\big(\textbf{E}||u_t||_2^2\big)^{\frac{1}{2}}\big(\textbf{E}||v_t||_2^2\big)^{\frac{1}{2}}
\leq
C\big(\textbf{E}||u_t||_p^2\big)^{\frac{1}{2}}\big(\textbf{E}||v_t||_2^2\big)^{\frac{1}{2}},
\]
which, by Young's inequality, implies
\begin{eqnarray}\label{blowup19}
&&\Big|\textbf{E}\big(u_t,v_t\big)\Big|^{\frac{1}{1-\alpha}}
\leq C \big(\textbf{E}||u_t||_p^2\big)^{\frac{1}{2(1-\alpha)}}
\big(\textbf{E}||v_t||_2^2\big)^{\frac{1}{2(1-\alpha)}}\nonumber\\
&&\quad\quad\quad\quad\quad\ \ \ \ \leq
C\Big(\big(\textbf{E}||u_t||_p^2\big)^{\frac{\theta}{2(1-\alpha)}}+
\big(\textbf{E}||v_t||_2^2\big)^{\frac{\eta}{2(1-\alpha)}}\Big),
\end{eqnarray}
for $1/\theta+1/\eta=1$. We take $\eta=2(1-\alpha)$. Then, by
(\ref{blowup10}),
\[
\frac{\theta}{2(1-\alpha)}=\frac{1}{1-2\alpha}=\frac{pq}{pq-2p+2q}\leq\frac{p}{2},
\]
i.e., $2/(1-2\alpha)\leq p$. Using $\alpha<1/2$, (\ref{blowup19})
becomes
\[
\Big|\textbf{E}\big(u_t,v_t\big)\Big|^{\frac{1}{1-\alpha}}\leq
C\Big(\big(\textbf{E}||u_t||_p^2\big)^{\frac{1}{1-2\alpha}}+
\textbf{E}||v_t||_2^2\Big)\leq
C\Big(\textbf{E}||u_t||_p^{\frac{2}{1-2\alpha}}+
\textbf{E}||v_t||_2^2\Big).
\]
Using Lemma \ref{lem blowup1} with $s=2/(1-2\alpha)$, we get
\begin{equation}\label{blowup20}
\Big|\textbf{E}\big(u_t,v_t\big)\Big|^{\frac{1}{1-\alpha}}\leq
C\big(H(t)+\textbf{E}||\nabla
u_t||_2^2+\textbf{E}||v_t||^2_2+\textbf{E}||u_t||^p_p\big), \ \ \
\forall t\geq0.
\end{equation}
Therefore, we have
\begin{eqnarray}\label{blowup21}
&&L^{\frac{1}{1-\alpha}}(t)=\Big(H^{1-\alpha}(t)+\mu\textbf{E}\big(u_t,v_t\big)\Big)^{\frac{1}{1-\alpha}}
\leq 2^{\frac{1}{1-\alpha}} \Big(H(t)+\mu\Big|\textbf{E}\big(u_t,v_t\big)\Big|^{\frac{1}{1-\alpha}}\Big)\nonumber\\
&&\quad\ \ \ \ \ \leq C\big(H(t)+\textbf{E}||\nabla
u_t||_2^2+\textbf{E}||v_t||^2_2+\textbf{E}||u_t||^p_p\big), \ \ \
\forall t\geq0.
\end{eqnarray}
Combining (\ref{blowup18}) and (\ref{blowup21}), we obtain
\begin{equation}\label{blowup22}
L'(t)\geq K L^{\frac{1}{1-\alpha}}, \ \ \ \forall t\geq0,
\end{equation}
where $K$ is a positive constant. A simple integration of
(\ref{blowup22}) over $(0,t)$ then yields
\begin{equation}\label{blowup23}
L^{\frac{\alpha}{1-\alpha}}(t)\geq
\frac{1-\alpha}{(1-\alpha)L^{-\frac{\alpha}{1-\alpha}}(0)-\alpha
Kt}.
\end{equation}
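Indeed, (\ref{blowup23}) can be checked by the standard Bernoulli-type substitution:

```latex
\[
y(t)=L^{-\frac{\alpha}{1-\alpha}}(t),\qquad
y'(t)=-\frac{\alpha}{1-\alpha}\,L^{-\frac{1}{1-\alpha}}(t)\,L'(t)
\leq-\frac{\alpha K}{1-\alpha},
\]
\[
\text{so that}\quad
L^{-\frac{\alpha}{1-\alpha}}(t)=y(t)\leq
y(0)-\frac{\alpha K}{1-\alpha}\,t
=L^{-\frac{\alpha}{1-\alpha}}(0)-\frac{\alpha K}{1-\alpha}\,t,
\]
```

and solving the last inequality for $L^{\frac{\alpha}{1-\alpha}}(t)$ gives (\ref{blowup23}).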
Let
\[
T_0= \frac{1-\alpha}{\alpha K L^\frac{\alpha}{1-\alpha}(0)}.
\]
Then $L(t)\rightarrow+\infty$ as $t\rightarrow T_0^-$. This means
that there exists a positive time $T^*\in(0,T_0]$ such that
\[
\lim_{t\rightarrow T^*}\textbf{E}\mathcal{E}(t)=-\infty.
\]
As for the case when $\textbf{P}(\tau_\infty=+\infty)<1$ (i.e.,
$\textbf{P}(\tau_\infty<+\infty)>0$), the $L^2$ norm of $u_t(t)$
blows up in finite time with positive probability.
\end{proof}
\begin{rem}
In the classical (deterministic) case $\varepsilon=0$, it is
well known that, for $(u_0,v_0)\in H^1_0(D)\times L^2(D)$, the
condition $\mathcal{E}(0)\leq0$ already implies finite-time blowup
of (\ref{smain}) (see e.g. \cite{M1}). If $\varepsilon>0$, our
results show that, to balance the influence of $W(t,x)$ so that the
local solution of (\ref{smain}) blows up with positive probability
or explodes in the $L^2$ sense, the initial energy should satisfy
$\mathcal{E}(0)\leq -\frac{1}{2}(1+\beta)\varepsilon^2
c_0^2 Tr R\int_0^\infty \int_D \sigma^2(x,t)dxdt$.
\end{rem}
\section{Introduction}
The hallmark of topological states of matter is the exact quantization of a physical observable in terms of a conserved quantity, the topological invariant~\cite{thouless_quantized_1982,thouless_quantization_1983}.
A paradigmatic example is the quantum Hall conductance, which is quantized as integer (or fractional) multiples of $e^2/h$ with a precision exceeding one part in a billion~\cite{klitzing_new_1980,laughlin_quantized_1981}.
Moreover, this quantization is robust against perturbations, i.e., it persists in the presence of disorder, defects, impurities, or imperfections of the experimental sample.
This led to an extremely precise definition of the electrical resistance standard and experimental determination of the fine-structure constant~\cite{klitzing_quantum_2017}.
A topologically equivalent state is the Thouless pump~\cite{thouless_quantization_1983,niu_towards_1990,shindou_quantum_2005,fu_time_2006,wei_anomalous_2015,roux_quasiperiodic_2008,wang_topological_2013,marra_fractional_2015,marra_fractional_2017,matsuda_two-dimensional_2019}, which can be engineered, e.g., with ultracold atoms~\cite{lewenstein_ultracold_2007,bloch_many-body_2008,zhang_topological_2018,cooper_topological_2019} in a superlattice created by the superposition of two optical lattices with different wavelengths~\cite{nakajima_topological_2016,lohse_thouless_2016,taddia_topological_2017,das_realizing_2019}.
When the superlattice is adiabatically and periodically varied in time $t$, the charge pumped through the atomic cloud is quantized in terms of the topological invariant, i.e., the Chern number~\cite{thouless_quantization_1983}.
However, the charge is quantized only when the duration of the pumping process is an integer multiple of the full adiabatic cycle, and deviations from the quantized value are linear in time.
In this sense, the quantization of the pumped charge is not exact:
This constitutes a fundamental hindrance to the realization of metrological standards.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{Fig1.png}%
\setlength{\belowcaptionskip}{-14pt}\setlength{\abovecaptionskip}{2pt}
\caption{%
The superposition of two stationary lattices in a tilted direction produces a quasiperiodic one-dimensional lattice when $\alpha=\lambda_\mathrm{S}/(\lambda_{\mathrm{L}}\cos{\theta})$ is an irrational number.
}
\label{fig1}
\end{figure}
In this Rapid Communication, we will show that the quantization of the pumped \emph{current} can be indeed realized by Thouless pumps in the \emph{quasiperiodic} regime and, most importantly, that this quantization is exact.
In ultracold atomic systems, quasiperiodicity~\cite{kraus_topological_2012-1,kraus_topological_2012-2,kraus_quasiperiodicity_2016,ozawa_topological_2019,valiente_super_2019,kuno_disorder-induced_2019,yao_critical_2019} is realized using a superposition of two optical lattices with incommensurate lattice constants, i.e., their ratio $\alpha$ is an irrational number.
In this regime, the translational symmetry is completely broken, the familiar concept of Brillouin zone (BZ) becomes ill-defined, and the usual definition of the Chern number as an integral of the Berry curvature breaks down.
In order to consider a realistic experimental setup, we will derive an effective tight-binding (TB) model describing an atomic gas in a bichromatic potential~\cite{das_realizing_2019,roux_quasiperiodic_2008}, which coincides with a generalized Aubry-Andr\'{e}-Harper-Hofstadter (AAHH) model~\cite{harper_single_1955,hofstadter_energy_1976,aubry_analyticity_1980,hatsugai_energy_1990,osadchy_hofstadter_2001,hatsuda_hofstadters_2016,ikeda_hofstadters_2018} with an extra spatially dependent tunneling term.
Furthermore, we will operatively define the Chern number by taking the limit of an ensemble of periodic and topologically equivalent states which progressively approximate quasiperiodicity.
In this limit, the Bloch bands and Berry curvatures become asymptotically flat, as already known~\cite{kraus_topological_2012-1,harper_perturbative_2014}.
Finally, we describe the experimental fingerprint of the quasiperiodic topological state, which reveals itself in the charge transport and adiabatic evolution of the center of mass of the atomic cloud.
Whereas in the commensurate (nonquasiperiodic) case the current is not constant and the pumped charge is quantized only at exact multiples of the pumping cycle, we find that the quasiperiodic nontrivial state is characterized by a steady and topologically quantized pumping current, independently from the duration of the pumping process.
Most importantly, we find that this quantization is exact up to exponentially small corrections, that it is robust against perturbations which do not break the symmetries of the system, and that it does not depend on the details of the model considered.
This exact quantization is a direct consequence of quasiperiodicity, and may contribute to a more accurate definition of current standards~\cite{kaneko_review_2016}.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{Fig2.png}%
\setlength{\belowcaptionskip}{-4pt}\setlength{\abovecaptionskip}{2pt}
\caption{%
Energy spectra of the TB Hamiltonian \eqref{Hk} calculated for $V=J$ and $K=0.25J$.
The large central gap has Chern number $C=1$ and is topologically equivalent to the RM model ($\alpha=1/2$).
For $K\to0$ (not shown) the central gaps close at $\alpha=p/q$ for $q$ even.
Different color shades correspond to different Chern numbers.
}
\label{fig2}
\end{figure}
Experimentally, Thouless pumps are realized by ultracold Fermi gases loaded into dynamically controlled bichromatic lattices~\cite{nakajima_topological_2016,lohse_thouless_2016}.
Using a tilted setup~\cite{nakajima_disorder_2020,matsuda_two-dimensional_2019} as in \cref{fig1}, two sets of counterpropagating laser beams produce two standing waves with wavelengths $\lambda_\mathrm{S}$ and $\lambda_\mathrm{L}>\lambda_\mathrm{S}$ which intersect at an angle $\theta$.
For an atomic cloud confined in the $x$ direction, the total dipole potential is
\begin{equation}
V(x,\phi)=
V_\mathrm{S}\cos^2\left(\frac{\pi x}{d_\mathrm{S}}\right)
+
V_\mathrm{L}\cos^2\left(\frac{\pi x}{d_\mathrm{L}}-\frac\phi2 \right),
\label{CH}
\end{equation}
where
$d_\mathrm{S}=\lambda_\mathrm{S}$ and $d_\mathrm{L}=\lambda_\mathrm{L}\cos{\theta}$ are respectively the short and long lattice constants,
$V_\mathrm{S,L}$ the lattice depths, and $\phi$ the phase difference between the two lattices, which varies
in time as $\phi=\nu t$ with instantaneous frequency $\nu$.
The commensuration $\alpha=d_\mathrm{S}/d_\mathrm{L}=\lambda_\mathrm{S}/(\lambda_\mathrm{L}\cos{\theta})$ between the two lattices is controlled by the tilting angle $\theta$.
We assume a deep lattice regime $V_\mathrm{S}> E_r$ (here, $E_r=h^2/(8 M d_\mathrm{S}^2)$ is the recoil energy of the short lattice~\cite{bloch_many-body_2008}).
If $V_\mathrm{S}> V_\mathrm{L}$, the continuum Hamiltonian ${\cal H}=p^2/2M +V(x,\phi)$ can be discretized using localized states at the short lattice minima and treating the long lattice as a perturbation~\cite{roux_quasiperiodic_2008}.
This leads to an effective low-energy TB Hamiltonian corresponding to a generalized Harper equation which reads
\begin{align}
&
[-J
\!+\!
2K \alpha\sin{(\pi\alpha)} \cos(2\pi\alpha (n +1) \!-\! \phi) ] (\psi_{n-1} \!+\! \psi_{n+1})
\,+\nonumber\\
&\qquad
+ 2V\cos(2\pi\alpha (n+1/2)- \phi) \psi_{n} = E \psi_{n}.
\label{HHe}
\end{align}
This is a generalization of the AAHH model, which includes an extra site-dependent tunneling term $K\propto V_\mathrm{L}$.
Moreover, for $\alpha=1/2$ (staggered field), \cref{HHe} reduces to the Rice-Mele (RM) model~\cite{rice_elementary_1982,rice_mele_shen_topological_2017}
\begin{align}
&
[-J \!-\! K (-1)^n \cos\phi ] (\psi_{n-1} \!+\! \psi_{n+1})
\nonumber\\
&\qquad
+ 2V(-1)^n\sin\phi \,\psi_{n} = E \psi_{n},
\label{eq:RM}
\end{align}
which has an energy gap $\Delta E_\mathrm{RM}=4\min(|J|,|V|,|K|)$.
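As a numerical check (ours, not part of the original analysis), the spectrum of \cref{HHe} can be obtained by direct diagonalization for rational $\alpha=p/q$; the hopping is taken bond-resolved, $t_{n,n+1}=-J+2K\alpha\sin(\pi\alpha)\cos(2\pi\alpha(n+1)-\phi)$, an assumption made here to render the discretization Hermitian. At $\alpha=1/2$ one recovers the Rice-Mele gap $\Delta E_\mathrm{RM}$:

```python
# Direct diagonalization of the generalized AAHH chain for commensurate
# alpha = p/q with periodic boundary conditions. The hopping is taken
# bond-resolved (an assumption, to make the matrix Hermitian).
import numpy as np

def aahh_hamiltonian(p, q, phi, J=1.0, V=1.0, K=0.25, n_cells=50):
    a = p / q
    N = q * n_cells                      # an integer number of unit cells
    H = np.zeros((N, N))
    for n in range(N):
        H[n, n] = 2 * V * np.cos(2 * np.pi * a * (n + 0.5) - phi)
        t = -J + 2 * K * a * np.sin(np.pi * a) * np.cos(2 * np.pi * a * (n + 1) - phi)
        H[n, (n + 1) % N] = t            # hopping on the bond (n, n+1)
        H[(n + 1) % N, n] = t
    return H

def central_gap(H):
    E = np.linalg.eigvalsh(H)
    return E[E > 0].min() - E[E < 0].max()

# At alpha = 1/2, phi = 0 the chain is an SSH-like dimer with gap 4K
# (here J = V = 1, K = 0.25), consistent with 4 min(|J|, |V|, |K|).
```

At $\phi=\pi/2$ the same chain has uniform hopping and staggered on-site energies, and the gap is set by $4V$ instead.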
In the commensurate case, i.e., $\alpha=p/q$ with $p, q$ coprime integers, one can verify that \cref{CH,HHe} are invariant up to translations $n\to n+q$, and consequently the superlattice unit cell has length $q d_\mathrm{S}$.
In momentum space,
\begin{align}
&\qquad
H=
\sum_k -2J\cos{k}\, c_k^\dag c_k
+
e^{\ii (\pi \alpha-\phi)}
\nonumber\\\times
&
\left[
V \!+\! 2K \alpha\sin{(\pi\alpha)}
\cos{(k\!+\!\pi\alpha)}
\right]\!
c^\dag_{k} c_{k+2\pi\alpha}
\!+\! \text{H.~c.},
\label{Hk}
\end{align}
where $k$ is restricted to the first BZ $[0,2\pi/q]$.
\Cref{fig2} shows the energy spectra of the TB model, which are a deformed version of the Hofstadter butterfly~\cite{hofstadter_energy_1976,avila_ten_2009}.
Indeed, whereas the Hofstadter butterfly ($K=0$) is symmetric with respect to the transformations $\alpha\to1-\alpha$ and $E\to -E$ (corresponding to $k\to k+\pi$), the spatially dependent tunneling term breaks these symmetries.
For small $K$, one can assume that the intraband gaps remain open for $K\to0$ and are thus homeomorphic to the gaps of the Hofstadter butterfly.
Thus, the intraband gaps are topologically nontrivial with Chern number $C\neq0$ satisfying the diophantine equation $p C \equiv j \mod q$ (analogously to the Hofstadter butterfly $K=0$).
Unlike the original Hofstadter butterfly, the energy spectrum is gapped at $E=0$ for $\alpha=p/q$ with $q$ even.
Intraband gaps with low Chern numbers are generally wide and remain open for a broad range of the commensuration $\alpha$.
In particular, the large central gap in \cref{fig2} is open for any value of $\alpha$ and is topologically equivalent to the RM model:
It can be continuously deformed into $\alpha\to1/2$, where \cref{HHe} reduces to \cref{eq:RM}.
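The diophantine equation above can be solved explicitly for the gap Chern numbers via the modular inverse of $p$; the helper below is an illustration of ours, adopting the customary branch $|C|<q/2$:

```python
# Gap Chern numbers of the commensurate model at alpha = p/q from the
# diophantine equation p*C = j (mod q), folded into the branch |C| < q/2.
def gap_chern(p, q):
    p_inv = pow(p, -1, q)            # modular inverse (p, q coprime)
    cherns = []
    for j in range(1, q):            # j labels the gap below the j-th band
        c = (p_inv * j) % q
        if c > q / 2:
            c -= q                   # fold into the window |C| < q/2
        cherns.append(c)
    return cherns

# gap_chern(1, 3) -> [1, -1];  gap_chern(2, 5) -> [-2, 1, -1, 2]
```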
In the commensurate case, assuming homogeneously populated bands below the Fermi level $E_\mathrm{F}$ and at zero temperature, the total charge pumped during an adiabatic evolution $\phi\to \phi+ 2\pi $ is quantized and equal to the Chern number $C$ of the filled Bloch bands~\cite{thouless_quantization_1983}
$
Q=C=
(1/2\pi)
\int_{\phi}^{\phi+2\pi} \dd \phi
\int_\mathrm{0}^{2\pi/q} \dd k
\Omega
$.
Here,
$\Omega
=\sum_i
\Theta(E_\mathrm{F}-E_i)
\omega_i
$
is the total Berry curvature at the Fermi level $E_\mathrm{F}$,
with
$\Theta(E)$ the Heaviside step function and
$\omega_i
=2\Im
\braket{\partial_\phi u_i | \partial_k u_i}$
the Berry curvature of the $i$-th band, defined in terms of the Bloch wavefunctions
$\ket{\psi_{i}(k,x)}=e^{\ii k x}\ket{u_i(k,x)}$.
Moreover, the current
$
I=\partial_\phi Q=(1/2\pi)
\int_\mathrm{0}^{2\pi/q} \dd k
\Omega
$
is not quantized and not constant during the pumping process, oscillating around an average value
$\langle I \rangle=
\langle\Omega
\rangle/{q}$
with maximum variation
$\delta I \leq
\delta\Omega
/{q}$
where $\delta\Omega
=\max \Omega
-\min \Omega
$.
Due to translational invariance, Hamiltonian \eqref{Hk} is periodic in the momentum $k\to k+2\pi/q$, but not in the phase since $H(\phi+2\pi/q)\neq H(\phi)$.
One can show that a phase shift $\phi\to\phi+2\pi m/q$ in \cref{CH,HHe} is equivalent to a translation $n\to n-c$, where $c$ satisfies the diophantine equation $p c\equiv m\mod q$.
Thus, the Hamiltonian is ``unitarily'' periodic~\cite{marra_fractional_2015,marra_fractional_2017} in the phase $\phi$ up to lattice translations, i.e., it is periodic up to unitary transformations (translations),
\begin{equation}
\label{translation}
H(\phi+2\pi m/q)= {T}^{-c} H(\phi) {T}^{c},
\end{equation}
where ${T}$ is defined by $T V(x,\phi)T^{-1}= V(x+d_\mathrm{S},\phi)$.
Consequently, energies and Berry curvatures are periodic in the phase $\phi\to\phi+2\pi/q$, and the pumped charge at well-defined fractions of the pumping cycle $\Delta\phi=2\pi m/q$ is quantized as fractions of the Chern number~\cite{marra_fractional_2015,marra_fractional_2017}
$
Q=m C/q=
(1/2\pi)
\int_{\phi}^{\phi+2\pi m/q} \dd \phi
\int_\mathrm{0}^{2\pi/q} \dd k\,
\Omega
$.
Moreover, the energy bands, Berry curvatures, and total Berry curvature become flat in the limit of large denominators $q$.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{Fig3.pdf}%
\setlength{\belowcaptionskip}{-4pt}\setlength{\abovecaptionskip}{2pt}
\caption{%
Pumped charge and current for the central gap with Chern number $C=1$ calculated with the continuous model (a,b) and the effective TB model (c,d) respectively.
Different curves correspond to successive rational approximations of $\alpha=1/\Phi^2$, corresponding to tilting angles $\theta=\arccos{(1/(4\alpha))}$ in the range between $\ang{60}$ and $\ang{49}$ for typical laser wavelengths $\lambda_\mathrm{S}=\SI{266}{nm}$ and $\lambda_\mathrm{L}=\SI{1064}{nm}$.
The pumped charge is quantized as $m C/q$ for pumping periods $\Delta\phi=2\pi m/q$.
In the limit $\alpha_n\to\alpha$, the charge has a linear dependence $Q=\Delta \phi C_\alpha/2\pi$.
The current shows large fluctuations but becomes steady for large denominators $q$, reaching its quantized value $I_\alpha=C/2\pi$.
We use $V_\mathrm{S}=2 E_r$, $V_\mathrm{L}=0.5 E_r$.
(e)
The current approaches its quantized value exponentially as $\delta I=|I-I_\alpha|\propto\exp(-q/\xi)$.
Different data sets correspond to $V=J, 1.25 J$ and $K=0.25 J, 0.375 J, 0.5 J$.
}
\label{fig3}
\end{figure}
We now consider the incommensurate quasiperiodic case $\alpha\in\mathbb R-\mathbb Q$.
Every irrational number $\alpha$ can be written uniquely as an infinite continued fraction~\cite{continued_fractions_hardy}
$\alpha=[a_0; a_1, a_2, \,\ldots ] =
a_0+
1/(a_1 +
1/(a_2 +
\dots))$
with $a_i$ integers.
Successive approximations obtained by truncating the continued fraction representation $\alpha_n=[a_0; a_1, a_2, \,\ldots, a_n]$ are rational numbers, and converge to $\alpha$.
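As an illustration (ours, not part of the original text), the approximants $\alpha_n=p_n/q_n$ can be generated from the partial quotients with the standard convergent recurrence $p_n=a_np_{n-1}+p_{n-2}$, $q_n=a_nq_{n-1}+q_{n-2}$; for $\alpha=1/\Phi^2$ (used in \cref{fig3}) the convergents are ratios of consecutive Fibonacci numbers:

```python
# Rational approximants alpha_n = p_n/q_n of an irrational alpha, obtained
# by truncating its continued fraction [a0; a1, a2, ...].
import math
from fractions import Fraction

def continued_fraction(x, n_terms):
    """First n_terms partial quotients a_i of x."""
    a = []
    for _ in range(n_terms):
        ai = math.floor(x)
        a.append(ai)
        if x == ai:
            break
        x = 1.0 / (x - ai)
    return a

def convergents(a):
    """Convergents p_n/q_n built from the partial quotients a."""
    p, p_prev = a[0], 1
    q, q_prev = 1, 0
    out = [Fraction(p, q)]
    for ai in a[1:]:
        p, p_prev = ai * p + p_prev, p
        q, q_prev = ai * q + q_prev, q
        out.append(Fraction(p, q))
    return out

alpha = (3 - math.sqrt(5)) / 2        # = 1/Phi^2 = 0.381966..., cf. Fig. 3
approx = convergents(continued_fraction(alpha, 10))
# -> 0, 1/2, 1/3, 2/5, 3/8, 5/13, ... (Fibonacci ratios F_n/F_{n+2})
```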
We thus consider the ensemble of Hamiltonians $H^{(\alpha_n)}$ describing commensurate systems with $\alpha_n= p_n/q_n=[a_0; a_1, a_2, \,\ldots, a_n]$.
We assume that the insulating gap at the Fermi level remains open, such that the Hamiltonians $H^{(\alpha_n)}$ are topologically equivalent.
As the denominator $q_n$ increases for $n\to\infty$, the BZ $[0,2\pi/q_n]$ shrinks and becomes ill-defined in the quasiperiodic limit.
Thus, the usual definition of the Chern number as an integral of the Berry curvature in the BZ needs to be reformulated.
However, energy bands and Berry curvatures become constant in the quasiperiodic limit ($q_n\to\infty$).
Hence, if the gap remains open for $\alpha_n\to\alpha$, the Berry integral converges for $n\to\infty$, and we can define the Chern number in the quasiperiodic limit as
\begin{equation}
C_\alpha=\!\!\lim_{\alpha_n\to\alpha}
\frac1{2\pi} \!
\int_{0}^{2\pi} \!\!\!\! \!\!\! \dd \phi \!
\int_\mathrm{0}^{{2\pi}/{q_n}} \!\!\! \!\!\! \dd k\,
\Omega^{(\alpha_n)}
=\!\!
\lim_{\alpha_n\to\alpha}
\frac{2\pi}{q_n}\, \Omega^{(\alpha_n)}
.
\label{chern}
\end{equation}
In this limit, the Chern number is simply proportional to the total Berry curvature, which diverges asymptotically as $\Omega^{(\alpha_n)}\sim q_n C_\alpha/2\pi$.
Moreover, since the total Berry curvature is flat, the charge pumped during adiabatic transformations, for any initial and final values of the phase $\phi\to\phi+\Delta\phi$, becomes
\begin{equation}
\label{charge}
Q_\alpha=
\!\!\lim_{\alpha_n\to\alpha}
\frac1{2\pi} \!
\int_{\phi}^{\phi+\Delta\phi} \!\!\!\! \!\!\! \dd \phi \!
\int_\mathrm{0}^{{2\pi}/{q_n}} \!\! \dd k\,
\Omega^{(\alpha_n)}
=
\frac{\Delta\phi}{2\pi} C_\alpha,
\end{equation}
whereas the instantaneous charge current becomes
\begin{equation}
\label{current}
I_\alpha=
\!\!\lim_{\alpha_n\to\alpha}
\frac1{2\pi}
\int_\mathrm{0}^{{2\pi}/{q_n}} \!\! \dd k\,
\Omega^{(\alpha_n)}
=
\frac{C_\alpha}{2\pi}.
\end{equation}
In the quasiperiodic limit, the pumped charge becomes linear in the phase difference $\Delta\phi$, whereas the current $I=\partial_\phi Q$ becomes constant and proportional to the Chern number.
Notice that, in order to observe the effects of quasiperiodicity, the system size $L$ must be larger than the unit cell $q d_\mathrm{S}$.
In this sense, the limit $\alpha_n\to\alpha$ corresponds to the infinite-size limit $L\to\infty$.
These effects are robust against perturbations which do not break translational symmetry.
In fact, adding a perturbation $\lambda V$ in \cref{translation}, one can verify that the perturbed Hamiltonian satisfies
\begin{equation}
\label{brokentranslation}
H'(\phi+2\pi m/q)= {T}^{-c} (H'(\phi)+c \lambda [T,V] T^{-1}) {T}^{c}.
\end{equation}
If translational symmetry is unbroken, this equation reduces to \cref{translation}.
In this case, energy levels and Berry curvatures are still periodic and become flat in the quasiperiodic limit, and the current remains quantized.
However, if $[V,T]\neq0$, from \cref{brokentranslation} one can expect polynomial corrections $O(\lambda)$ to the energy levels and Berry curvatures.
Thus, spatial disorder is expected to break the exact quantization of the current.
However, in contrast to solid-state systems, disorder is usually negligible in optical lattices.
We now consider the continuous Hamiltonian $
{\cal H}=
{p^2}/{2M}
+V(x,t)
+ V_\mathrm{T} x^2
$
describing an ultracold atomic cloud in a bichromatic potential, confined by a shallow harmonic trap $\propto V_\mathrm{T}$.
The pumped current $I=\partial_\phi Q=\partial_t Q/\nu$ is related to a simple physical observable, i.e., the center of mass of the atomic cloud.
The variation of the center of mass $\langle x (t)\rangle=(1/N) \sum_{i=1}^N \int_{-\infty}^\infty |\Psi_i(x,t)|^2 x \dd x$ is proportional to the pumped charge~\cite{marra_fractional_2015,wang_topological_2013}, i.e.,
$Q=\rho [\langle x (t+\Delta t)\rangle - \langle x (t)\rangle]$,
where $\rho=j/(q d_\mathrm{S})$ is the number of atoms $j$ per unit cell.
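As a toy sanity check (the Gaussian profile and all numbers below are invented for illustration, not the paper's simulation), rigidly displacing a normalized cloud by one unit cell $q\,d_\mathrm{S}$ pumps exactly $Q=\rho\, q\, d_\mathrm{S}=j$ atoms:

```python
import numpy as np

d_s, q, j = 1.0, 5, 2              # lattice constant, denominator of alpha, filling
rho = j / (q * d_s)                # atoms per unit length

x = np.linspace(-80.0, 80.0, 32001)

def cloud(x, shift=0.0):
    """Toy atomic-cloud profile (Gaussian), rigidly displaced by `shift`."""
    return np.exp(-0.5 * ((x - shift) / 10.0) ** 2)

def center_of_mass(n):
    return np.sum(x * n) / np.sum(n)

# displacing the cloud by one unit cell q*d_s transports j atoms
delta_x = center_of_mass(cloud(x, q * d_s)) - center_of_mass(cloud(x))
Q = rho * delta_x
```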
Assuming the number of filled bands to be $j \equiv p C \mod{q}$, the total length of a cloud of $N$ atoms is given by $N/j$ unit cells (of length $q d_\mathrm{S}$).
Hence the number of atoms $N$ must be a multiple of the filling factor $j$, and the system length $L$ must be tuned such that
\begin{equation}
\label{condition}
d_\mathrm{S}\frac{N}{L} \equiv \alpha C\mod q.
\end{equation}
Moreover, in order to minimize thermal and nonadiabatic effects, one should consider a filling factor $j=p$ corresponding to the large central gap in \cref{fig2} with Chern number $C=1$.
This gap $\Delta E$ has the same order of magnitude for a wide range of values of the commensuration $\alpha$, including $\alpha=1/2$ where the system is equivalent to the RM model, i.e., $\Delta E \approx \Delta E_\mathrm{RM}$.
This fixes the temperature and timescales to $T<\Delta E_\mathrm{RM}/k_\mathrm{B}$ and $\nu<\Delta E_\mathrm{RM}/\hbar$.
Note that the RM quantum pump has been already realized experimentally~\cite{nakajima_topological_2016,lohse_thouless_2016}.
Note also that the experimental errors in measuring the center of mass can be reduced by averaging over a large number of cycles~\cite{nakajima_topological_2016,lohse_thouless_2016}.
Figure~\ref{fig3} shows the pumped charge $Q$ and the current $I=\partial_\phi Q$ obtained by calculating the center of mass of the continuous system in the adiabatic limit and, alternatively, using the effective TB Hamiltonian \eqref{Hk}.
Different curves correspond to successive rational approximations of $\alpha=1/\Phi^2\in \mathbb{R}-\mathbb{Q}$, where $\Phi$ is the golden ratio.
We tune the trapping potential such that the length $L$ satisfies \cref{condition}.
The pumped charge is quantized as integer fractions of the Chern number $(m/q) C$ for well-defined fractions of the pumping period $\Delta\phi=2\pi m/q$.
For increasing denominators $q$, the pumped charge is approximately $Q=\Delta \phi C_\alpha/2\pi$, whereas the current approaches its quantized value $I_\alpha=C_\alpha/2\pi$ for $\alpha_n\to\alpha$.
Hence, the pumped current $I_\alpha$ in the quasiperiodic limit is quantized and equal to the Chern number (in elementary units).
We will now determine the asymptotic behavior of the current approaching the quasiperiodic limit.
For $K=0$, \cref{Hk} reduces to the AAH model:
In this case, it has been shown numerically and perturbatively~\cite{harper_perturbative_2014} that the total Berry curvature takes the form
$
\Omega^{(p/q)}\approx F+G e^{- q/\xi} [\cos{(q k)}+\cos{(q \phi)}]
$.
It is reasonable to extrapolate this result also to $K\neq0$.
\Cref{chern} gives $F=q C/2\pi$, whereas $G\propto q^2$~\cite{harper_perturbative_2014}.
Hence, the flattening of the total Berry curvature is exponential, and
$\delta\Omega^{(q)}\approx g q^2 e^{- q/\xi}$ asymptotically for large $q$, where $g>0$ is a constant.
Thus, the current approaches its quantized value as
\begin{equation}
\delta I=|I-I_\alpha|\lesssim
g q_n \,
e^{- q_n/\xi}
\approx
\frac{g \ e^{- \sfrac{1}{\xi \sqrt{D|\alpha-\alpha_n|}}}}{\sqrt{D |\alpha-\alpha_n|}},
\label{scaling}
\end{equation}
where $|\alpha-\alpha_n|\sim 1/{D q_n^2}$ with $D<\sqrt{5}$, due to the Dirichlet's approximation theorem and Hurwitz's theorem~\cite{continued_fractions_hardy}.
Thus, \cref{scaling} describes the scaling behavior of the current in the quasiperiodic limit, in terms of the difference $|\alpha-\alpha_n|$ between the irrational commensuration $\alpha$ and its successive rational approximations $\alpha_n=p_n/q_n$.
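As a numerical aside (not part of the paper's argument), for $\alpha=1/\Phi^2$ the convergents are ratios of Fibonacci numbers $F_n/F_{n+2}$, and the effective constant $D_n=1/(q_n^2|\alpha-\alpha_n|)$ indeed approaches the Hurwitz bound $\sqrt5$:

```python
import math

phi = (1 + math.sqrt(5)) / 2
alpha = 1 / phi**2            # irrational commensuration used in the text

# Fibonacci numbers: the convergents of alpha are F_n / F_{n+2}
F = [1, 1]
for _ in range(30):
    F.append(F[-1] + F[-2])

Ds = []
for n in range(2, 15):
    p, q = F[n], F[n + 2]
    Ds.append(1.0 / (q**2 * abs(alpha - p / q)))   # D_n -> sqrt(5) as n grows
    print(q, Ds[-1])
```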
The denominator $q_n$ determines the length scale $L_n=q_n d_\mathrm{S}$ where the effects of quasiperiodicity become relevant.
Consequently, \cref{scaling} implies that corrections to the quantized value of the current are exponentially small in the system size $L$.
This is a distinctive fingerprint of topological quantization, and is analogous to the case of, e.g., the quantum Hall effect, where corrections to the quantized conductance are exponentially small in the linear dimensions of the system~\cite{niu_quantum_1987,exponentially_small_topological_thouless_1998}.
\Cref{fig3}(e) shows the variations $\delta I$ calculated numerically via \cref{current} using the effective TB Hamiltonian \eqref{Hk}.
As expected, the current approaches its quantized value $I_\alpha=C_\alpha/2\pi$ exponentially for $\alpha_n\to\alpha$.
In summary, we have shown how a quasiperiodic and topologically nontrivial Thouless pump can be realized by an atomic gas confined in a quasiperiodic optical lattice, which is a superposition of two harmonic potentials with incommensurate periodicities.
This system is characterized by a topological invariant defined as the limit of the Chern numbers of an ensemble of topologically equivalent and periodic Hamiltonians.
The distinctive fingerprint of this quasiperiodic and topologically nontrivial state is the exact quantization of the current, which is a consequence of the flattening of the Bloch bands and of the Berry curvatures.
This exact quantization is measurable in a typical experimental setting of ultracold atomic gases in optical lattices, and may open new perspectives for a more accurate definition of current standards.
\begin{acknowledgments}
P.~M. thanks Yoshihito Kuno, Michael Lohse, Shuta Nakajima, Yoshiro Takahashi, and Nobuyuki Takei for useful discussions.
The work of P.~M. is supported by the Japan Science and Technology Agency (JST) of the Ministry of Education, Culture, Sports, Science and Technology (MEXT), JST CREST Grant~No.~JPMJCR19T2, by the (MEXT)-Supported Program for the Strategic Research Foundation at Private Universities ``Topological Science'' (Grant No.~S1511006), and by JSPS Grant-in-Aid for Early-Career Scientists (Grant No.~20K14375).
The work of M.~N.~is partially supported by the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (KAKENHI) Grants No.~16H03984 and No.~18H01217 and by a Grant-in-Aid for Scientific Research on Innovative Areas ``Topological Materials Science'' (KAKENHI Grant No.~15H05855) from MEXT of Japan.
\end{acknowledgments}
\section{Introduction}
The results from the \textsc{Planck}\ satellite have recently demonstrated the consistency between the temperature and the polarization data \citep{planck2014-a15}.
Adding the information coming from the velocity gradients of the photon--baryon fluid through the polarization power spectra to the measurement of the temperature fluctuations improves the constraints on cosmological parameters and helps break some degeneracies.
One of the best examples is the measurement of the reionization optical depth using the large-scale signature that reionization leaves in the $EE$ polarization power spectrum \citep{planck2014-a25}.
Moreover, as suggested in~\citet{galli:2014}, for a cosmic variance limited experiment, polarization power spectra alone can provide tighter constraints on cosmological parameters than the temperature power spectrum, while for an experiment with \textsc{Planck}-like noise, constraints should be comparable.
In this paper, we discuss in greater detail the constraints on cosmological parameters obtained with the \textsc{Planck}\ 2015 polarization data (including foregrounds and systematic residuals). We find that the level of instrumental noise allows for an accurate reconstruction of cosmological parameters using temperature-polarization cross-correlation $C^{TE}_\ell$ only.
Constraints from \textsc{Planck}\ $EE$ polarization spectrum are dominated by instrumental noise.
In addition, we investigate the robustness of the cosmological interpretation with respect to astrophysical residuals.
In the \textsc{Planck}\ analysis \citep{planck2013-p08,planck2014-a13}, the foreground contamination is mitigated using masks which are adapted to each frequency, reducing the sky fraction to the region where the foreground emission is low. The residuals of diffuse foreground emission are then taken into account using models at the spectrum level in the likelihood. Most of the results presented in \citet{planck2014-a15} are based on $TT$ angular power spectra, which have the highest signal-to-noise ratio.
However, foreground residuals in temperature combine several different components which are difficult to model in the power spectra domain as they are both non-homogeneous and non-Gaussian. Any mismatch between the foreground model and the data can thus result in a bias on the estimated cosmological parameters and, in all cases, will increase their posterior width.
On the contrary, in polarization, even though the signal-to-noise ratio is lower, the only foreground that affects the \textsc{Planck}\ data is the polarized emission of the Galactic dust.
As we show, this allows for a precise reconstruction of the cosmological parameters (especially with $TE$ spectra) with less impact from foreground uncertainties.
The cosmological parameters reconstructed with $TT$ spectra are compared to those obtained independently with $TE$ and $EE$. In each case, we detail the foreground modelling and the propagation of its uncertainties.
We use the High-$\ell$ Likelihood on Polarized Power spectra ({HiLLiPOP}) likelihood which is based on the \textsc{Planck}\ data in temperature and polarization. {HiLLiPOP}\ is one of the four high-$\ell$ likelihoods developed within the \textsc{Planck}\ consortium for the 2015 release and is briefly presented and compared to others in \citet{planck2014-a15}. It is a full temperature+polarization likelihood based on cross-spectra from \textsc{Planck}\ maps at 100, 143, and 217\ifmmode $\,GHz$\else \,GHz\fi. It is based on a Gaussian approximation of the $C_\ell$ likelihood which is well suited for multipoles above $\ell=30$.
In contrast to the \textsc{Planck}\ public likelihood \citep{planck2014-a15}, the foreground description in {HiLLiPOP}\ relies directly on the \textsc{Planck}\ astrophysical measurements.
For the {$\rm{\Lambda CDM}$}\ cosmology, using a $\tau$ prior, it gives results very compatible with the \textsc{Planck}\ public likelihood, except for the $(\tau,A_{\rm s})$ pair which is more consistent with the low-$\ell$ data. Consequently, it also shows a better lensing amplitude $A_{\rm L}$ \citep[see the discussion in][]{couchot:2015}.
The paper is organized as follows. In Sect.~\ref{sec:data}, we describe the power spectra used in this analysis. We discuss the \textsc{Planck}\ maps and the sky region for the power spectra estimation. Section~\ref{sec:lik} presents the likelihood functions both in temperature and in polarization, and details the model of each associated foreground emission. We then present in Sect.~\ref{sec:results} the results for the {$\rm{\Lambda CDM}$}\ cosmological model and check the impact of priors on the astrophysical parameters. Section~\ref{sec:lcdm+} gives the results on the $A_{\rm L}$ parameter considered as an internal cross-check of the CMB likelihoods. Finally, in Sect.~\ref{sec:systematics}, we demonstrate the impact of the foreground parameters for the temperature likelihood and the $TE$ likelihood in terms of both the bias and the precision of the cosmological parameters.
\section{Data set}
\label{sec:data}
\subsection{Maps and masks}
\label{sec:data:maps}
The maps used in this analysis are taken from the \textsc{Planck}\ 2015 data release\footnote{Planck PLA: \url{http://pla.esac.esa.int}} and described in detail in \citet{planck2014-a09}. We use two maps per frequency ($A$ and $B$, one for each \emph{half-mission}) at 100, 143, and 217\ifmmode $\,GHz$\else \,GHz\fi.
The beam associated with each map is provided by the \textsc{Planck}\ collaboration \citep{planck2014-a08}.
Figure~\ref{fig:signal_vs_noise} compares the signal with the noise of the \textsc{Planck}\ maps for each mode $TT$, $EE$, and $TE$.
\begin{figure}[!ht]
\includegraphics[draft=false,width=\columnwidth]{cl_model_vs_noise}
\caption{Signal ({\it solid line}) versus noise ({\it dashed line}) for the \textsc{Planck}\ cross-spectra for each mode $TT$, $EE$, and $TE$ (in {\it red}, {\it blue}, and {\it green}, respectively).}
\label{fig:signal_vs_noise}
\end{figure}
Frequency-dependent apodized masks are applied to these maps in order to limit the foreground contamination in the power spectra.
We use the same masks in temperature and polarization.
The masks are constructed first by thresholding the total intensity maps of diffuse Galactic dust to exclude strong dust emission. In addition, we also remove regions with strong Galactic CO emission, nearby galaxies, and extragalactic point sources.
Diffuse Galactic dust emission is the main contaminant for CMB measurements in both temperature and polarization at frequencies above 100\ifmmode $\,GHz$\else \,GHz\fi.
We build Galactic masks using the \textsc{Planck}\ 353\ifmmode $\,GHz$\else \,GHz\fi\ map as a tracer of the thermal dust emission in intensity.
In practice, we smoothed the \textsc{Planck}\ 353\ifmmode $\,GHz$\else \,GHz\fi\ map to increase the signal-to-noise ratio before applying a threshold which depends on the frequency considered.
Masks are then apodized using an $8^\circ$ Gaussian taper for power spectra estimation.
For polarization, \textsc{Planck}\ dust maps show that the diffuse emission is strongly related to the Galactic magnetic field at large scales \citep{planck2014-XIX}. However, at the smaller scales which matter here ($\ell > 50$), the orientation of dust grains is driven by local turbulent magnetic fields which produce a polarization intensity proportional to the total intensity dust map. We thus use the same Galactic mask for polarization as for temperature.
Molecular lines from CO produce diffuse emission in star-forming regions. Two major CO lines at 115\ifmmode $\,GHz$\else \,GHz\fi\ and 230\ifmmode $\,GHz$\else \,GHz\fi\ enter the \textsc{Planck}\ bandwidths at 100 and 217\ifmmode $\,GHz$\else \,GHz\fi,\ respectively \citep{planck2013-p03a}.
We smoothed the \textsc{Planck}\ reconstructed CO map to 30~arcmin before applying a threshold at 2~K.km/s. The resulting masks are then apodized at 15~arcmin.
In practice, the CO masks are almost completely included in the Galactic masks, decreasing the accepted sky fraction only by a few percentage points.
For point sources, the \textsc{Planck}\ 2013 and 2015 analyses mask the sources detected with a signal-to-noise ratio above 5 in the \textsc{Planck}\ point-source catalogue \citep{planck2014-a35} at each frequency \citep{planck2013-p11,planck2014-a13}.
On the contrary, the masks used in our analysis rely on a more refined procedure that preserves Galactic compact structures and ensures the completeness level at each frequency, but with a higher flux cut (340, 250, and 200 mJy at 100, 143, and 217\ifmmode $\,GHz$\else \,GHz\fi, respectively). The consequence is that these masks leave a slightly greater number of unmasked extragalactic sources, but preserve the power spectra of the dust emission \citep[as described in][]{planck2014-XXX}.
For each frequency, we mask a circular area around each source using a radius of three times the effective Gaussian beam width ($\sigma = \mathrm{FWHM}/\sqrt{8\ln2}$) at that frequency. We apodize these masks with a Gaussian taper of FWHM = 15~arcmin.
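For concreteness, with the Gaussian width $\sigma=\mathrm{FWHM}/\sqrt{8\ln 2}$ the masking radius $3\sigma$ evaluates as follows; the beam FWHM values below are indicative numbers for the Planck channels, not quoted in this paper:

```python
import math

# Approximate Planck effective beam FWHM in arcmin (indicative values only)
fwhm_arcmin = {100: 9.7, 143: 7.3, 217: 5.0}

for freq, fwhm in fwhm_arcmin.items():
    sigma = fwhm / math.sqrt(8 * math.log(2))   # Gaussian width from FWHM
    radius = 3 * sigma                          # masking radius around each source
    print(f"{freq} GHz: sigma = {sigma:.2f}', mask radius = {radius:.2f}'")
```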
Finally, we also mask strong extragalactic objects including both point sources and nearby extended galaxies.
The masked galaxies include the LMC and SMC and also M31, M33, M81, M82, M101, M51, and CenA.
The combined masks used are named M80, M70, and M55 (corresponding to effective $f_{\mathrm{sky}}=72\%,62\%,48\%$), associated with the 100, 143, and 217~GHz channels, respectively (Fig.~\ref{fig:masks}).
Tests have been carried out using more conservative Galactic masks (with $f_{\rm sky}$ = 65\%, 55\%, and 40\% for 100, 143, and 217~GHz, respectively) showing perfectly compatible results with those of the smaller masks.
Compared to the masks used in the \textsc{Planck}\ 2015 analysis, the retained sky fraction is almost identical. Indeed, the Galactic masks used in \citet{planck2014-a13} retain 70\%, 60\%, and 50\% respectively.
\begin{figure}[!ht]
\includegraphics[draft=false,width=\columnwidth]{mask_M72_Hillipop.png}
\includegraphics[draft=false,width=\columnwidth]{mask_M62_Hillipop.png}
\includegraphics[draft=false,width=\columnwidth]{mask_M48_Hillipop.png}
\caption{ M80, M70, and M55 masks. A combination of an apodized Galactic mask and a compact object mask is used at each frequency (see text for details).}
\label{fig:masks}
\end{figure}
\subsection{Power spectra}
We use \emph{Xpol}\ \citep[an extension to polarization of][]{tristram:2005} to compute the cross-power spectra in temperature and polarization ($TT$, $EE$, and $TE$). \emph{Xpol}\ is a pseudo-$C_{\ell}$ method which also computes an analytical approximation of the $C_\ell$ covariance matrix directly from data.
Using the six maps presented in Sect.~\ref{sec:data:maps}, we derive the 15 cross-power spectra for each CMB mode: one each for 100$\times$100, 143$\times$143, and 217$\times$217; four each for 100$\times$143, 100$\times$217, and 143$\times$217 as outlined below.
From the coefficients of the spherical harmonic decomposition of the ($I$,$Q$,$U$) masked maps $\vec{\tilde a}_{\ell m}^X = \{\tilde a^T_{\ell m},\tilde a^E_{\ell m},\tilde a^B_{\ell m}\}$, we form the pseudo cross-power spectra between map $i$ and map $j$,
\begin{equation}
\tilde{\vec C}_\ell^{ij} = \frac{1}{2\ell+1} \sum_{m} \vec{\tilde a}^{i*}_{\ell m} \vec{\tilde a}^{j}_{\ell m} \, ,
\end{equation}
where the vector $\vec{\tilde C}_\ell$ includes the four modes $\{\tilde C^{TT}_\ell,\tilde C^{EE}_\ell,\tilde C^{TE}_\ell,\tilde C^{ET}_\ell\}$.
We note that the $TE$ and $ET$ cross-power spectra do not carry exactly the same information, since computing $T$ from map $i$ and $E$ from map $j$ is not the same as computing $T$ from map $j$ and $E$ from map $i$. They are computed independently and averaged afterwards using their relative weights for each cross-frequency.
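Schematically, the pseudo cross-spectrum is the $m$-average of the product of the harmonic coefficients of the two maps. A pure-numpy toy (the $a_{\ell m}$ here are synthetic Gaussian draws, not Planck data; a real analysis would use a dedicated library such as healpy):

```python
import numpy as np

rng = np.random.default_rng(0)
lmax = 64

def cross_cl(alm_i, alm_j, lmax):
    """Pseudo cross-spectrum C_l = 1/(2l+1) sum_m a*_lm b_lm.
    Toy convention: arrays are indexed [l, m] with m = 0..l, and the
    m > 0 terms are doubled to stand in for the m < 0 modes."""
    cl = np.zeros(lmax + 1)
    for ell in range(lmax + 1):
        terms = (np.conj(alm_i[ell, :ell + 1]) * alm_j[ell, :ell + 1]).real
        cl[ell] = (terms[0] + 2.0 * terms[1:].sum()) / (2 * ell + 1)
    return cl

# two maps of the same Gaussian sky with independent noise:
# the noise bias cancels on average in the cross-spectrum
shape = (lmax + 1, lmax + 1)
alm_sky = rng.normal(size=shape) + 1j * rng.normal(size=shape)
alm_A = alm_sky + 0.1 * rng.normal(size=shape)
alm_B = alm_sky + 0.1 * rng.normal(size=shape)
cl_AB = cross_cl(alm_A, alm_B, lmax)
```

This illustrates why cross-spectra between independent data splits are preferred: the noise of the two maps is uncorrelated and does not bias the estimate.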
The pseudo-spectra are then corrected for the beam and the sky fraction using
\begin{equation}
\tilde{\vec C}_\ell^{ij} = \sum_{\ell^\prime} (2\ell^\prime+1)\, \tens{M}^{ij}_{\ell \ell^\prime}\, \vec{C}^{ij}_{\ell^\prime}
\label{eq:master}
,\end{equation}
where the coupling matrix $\tens{M}$ depends on the masks used for each set of maps \citep{peebles:1973} and includes beam transfer functions usually extracted from Monte Carlo simulations~\citep{hivon2002}.
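Given the coupling matrix, recovering $\vec{C}_\ell$ from the pseudo-spectrum amounts to a linear solve. In the numpy toy below the coupling matrix is invented (with the multipole factors absorbed into it), standing in for the true mask- and beam-dependent kernel:

```python
import numpy as np

nl = 200

# Toy mode-coupling matrix: dominant diagonal with short-range leakage
M = np.eye(nl)
for off in (1, 2, 3):
    M += 0.05 / off * (np.eye(nl, k=off) + np.eye(nl, k=-off))
M /= M.sum(axis=1, keepdims=True)            # row-normalized toy kernel

cl_true = 1000.0 / (np.arange(nl) + 10.0) ** 2   # smooth input spectrum
cl_pseudo = M @ cl_true                          # masked-sky pseudo-spectrum

cl_recovered = np.linalg.solve(M, cl_pseudo)     # invert the mode coupling
```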
The multipole ranges used in the likelihood analysis have been chosen to limit the contamination of the Galactic dust emission at low-$\ell$ and the noise at high-$\ell$. Table~\ref{tab:multipoles} gives the multipole ranges, $[\ell_{\rm min},\ell_{\rm max}]$, considered for each of the six cross-frequencies in TT, TE, and EE.
The spectra are cosmic-variance limited up to $\ell \simeq 1500$ in $TT$ and $\ell \simeq 700$ in $TE$ (outside the troughs of the CMB signal). The $EE$ mode is dominated by instrumental noise.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{l|ccc}
\hline
\hline
& TT & EE & TE\\
\hline
100$\times$100 & [~50,1200] & [100,1000] & [100,1200] \\
100$\times$143 & [~50,1500] & [100,1250] & [100,1500] \\
100$\times$217 & [500,1500] & [400,1250] & [200,1500] \\
143$\times$143 & [~50,2000] & [100,1500] & [100,1750] \\
143$\times$217 & [500,2500] & [400,1750] & [200,1750] \\
217$\times$217 & [500,2500] & [400,2000] & [200,2000] \\
$n_\ell$ & $9\,556$ & $7\,256$ & $8\,806$ \\
\hline
\end{tabular}
\caption{Multipole ranges used in the analysis and corresponding number of multipoles available ($n_{\ell}=\ell_{\rm max}-\ell_{\rm min}+1$). The total number of multipoles is $25\,618$.}
\label{tab:multipoles}
\end{center}
\end{table}
\begin{figure*}[!ht]
\center
\includegraphics[width=0.9\textwidth]{covmat_hillipop}
\caption{ Full {HiLLiPOP}\ covariance matrix including all correlations in multipoles between cross-frequencies and power spectra.}
\label{fig:CovMat}
\end{figure*}
\section{ Likelihood function}
\label{sec:lik}
On the full-sky, the distribution of auto-spectra is a scaled-$\chi^2$ with $2\ell+1$ degrees of freedom. The distribution of the cross-spectra is slightly different \citep[see Appendix A in][]{mangilli:2015}; however, for $\ell \geqslant 50$ the number of modes is large enough that we can safely assume the $C_\ell$ to be Gaussian distributed.
When considering only a part of the sky, the values of $C_\ell$ are correlated so that for high multipoles, the resulting distribution can be approximated by a multi-variate Gaussian taking into account $\ell$-by-$\ell$ correlations
\begin{equation}
\label{eq:likelihood}
-2 \ln \mathcal{L} = \sum_{\substack{i \leqslant j \\ i' \leqslant j'}} \sum_{\ell\ell'}
\vec{R}_\ell^{ij} \,
\left[\tens{\Sigma}^{-1}\right]_{\ell\ell^\prime}^{ij,{i'}{j'}} \,
\vec{R}_{\ell^\prime}^{{i'}{j'}}
+ \ln | \tens{\Sigma} |
,\end{equation}
where $\vec{R}^{ij}_\ell = \vec{C}^{ij}_\ell - \vec{\hat C}^{ij}_\ell$ denotes the residual of the estimated cross-power spectrum $\vec{C}_\ell$ with respect to the model $\vec{\hat C}_\ell$ for each polarization mode considered ($TT$, $EE$, $TE$) and each frequency ($\{i,j\} \in [100,143,217]$). The matrix $\tens{\Sigma} = \left< \vec{R} \vec{R}^T \right>$ is the full covariance matrix which includes the instrumental variance from the data as well as the cosmic variance from the model. The latter is directly proportional to the model so that the matrix $\tens{\Sigma}$ should, in principle, depend on the model.
In practice, given our current knowledge of the cosmological parameters, the theoretical power spectra typically differ from each other at each $\ell$ by less than they differ from the observed $C_\ell$ so that we can expand $\tens{\Sigma}$ around a reasonable fiducial model. As described in~\citet{planck2013-p08}, the additional terms in the expansion are small if the fiducial model is accurate and its absence does not bias the likelihood. Using a fixed covariance matrix $\tens{\Sigma}$, we can drop the constant term $\ln|\tens{\Sigma}|$.
We therefore expect the likelihood to be $\chi^2$-distributed with a mean equal to the number of degrees of freedom $n_{\rm dof} = n_\ell - n_{\rm p}$ ($n_\ell$ is given in Table~\ref{tab:multipoles} and $n_{\rm p}$ is the number of fitted parameters) and a variance equal to $2n_{\rm dof}$.
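The resulting quadratic form is cheap to evaluate once $\tens{\Sigma}^{-1}$ is precomputed. A minimal numpy sketch with synthetic data and a diagonal covariance (the sizes and numbers are illustrative, not the actual {HiLLiPOP}\ inputs):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500                                   # stand-in for the 25 618 multipoles

# Synthetic fixed covariance (diagonal here for simplicity) and data
sigma2 = 1.0 + 0.1 * rng.random(n)
Sigma_inv = np.diag(1.0 / sigma2)
model = np.zeros(n)
data = model + rng.normal(scale=np.sqrt(sigma2))

def chi2(data, model, Sigma_inv):
    """-2 ln L up to a constant, for a fixed covariance matrix."""
    r = data - model
    return r @ Sigma_inv @ r

val = chi2(data, model, Sigma_inv)
# For a correct model, chi2 ~ n_dof with standard deviation sqrt(2 n_dof)
print(val, n)
```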
We define several likelihood functions based on the information used: \emph{hlp} T for TT cross-spectra, \emph{hlp} E for EE cross-spectra, \emph{hlp} X for TE cross-spectra, and \emph{hlp} TXE for the combination of all cross-spectra. The \emph{hlp} X likelihood combines information from TE and ET cross-spectra.
The next two sections describe the computation of the covariance matrix and the building of the model, focusing on the differences with the \textsc{Planck}\ public likelihood.
\subsection{Semi-analytical covariance matrix}
We use a semi-analytical estimation of the $C_\ell$ covariance matrix computed using \emph{Xpol}. The matrix encloses the $\ell$-by-$\ell$ correlations between all the power spectra involved in the analysis. The computation relies directly on data estimates. It follows that contributions from noise (correlated and uncorrelated), sky emission (from astrophysical and cosmological origin), and the cosmic variance are implicitly taken into account in this computation without relying on any model or simulations.
The covariance matrix $\tens\Sigma$ of cross-power spectra is directly related to the covariance $\tens{\tilde\Sigma}$ of the pseudo cross-power spectra through the coupling matrices:
\begin{eqnarray}
\Sigma_{\ell\ell^\prime}^{ab,cd}
\equiv \left<\Delta C_{\ell}^{ab}\Delta C_{\ell^\prime}^{cd*}\right>
= \sum_{\ell_1\ell_2} \left(M_{\ell\ell_1}^{ab}\right)^{-1} \tilde\Sigma_{\ell_1\ell_2}^{ab,cd} \left(M_{\ell^\prime\ell_2}^{cd*}\right)^{-1}
\end{eqnarray}
with $(a,b,c,d) \in \{T,E\}$ for each map $A,B,C,D$.
We compute $\tens{\tilde\Sigma}$ for each cross-spectra block independently, including the $\ell$-by-$\ell$ correlations and the correlations between the four spectral modes $\{TT,EE,TE,ET\}$.
The TE and ET blocks are both computed individually and finally averaged.
The matrix $\tens{\tilde\Sigma}$, which gives the correlations between the pseudo cross-power spectra ($ab$) and ($cd$), is an N-by-N matrix (where $N=n^{TT}_\ell+n^{EE}_\ell+n^{TE}_\ell+n^{ET}_\ell$) and reads
\begin{eqs}
\label{eq:correlation}
\tilde\Sigma_{\ell\ell^\prime}^{ab,cd} &\equiv& \left<\Delta\tilde{C}_{\ell}^{ab}\Delta\tilde{C}_{\ell^\prime}^{cd*}\right>
= \left<\tilde{C}_{\ell}^{ab}\tilde{C}_{\ell^\prime}^{cd*}\right>-\tilde{C}_{\ell}^{ab}\tilde{C}_{\ell^\prime}^{cd*} \nonumber \\
&=& \sum_{mm^\prime} \frac{
\left<\tilde{a}_{\ell m}^{a}\tilde{a}_{\ell^\prime m^\prime}^{c*}\right>\left<\tilde{a}_{\ell m}^{b*}\tilde{a}_{\ell^\prime m^\prime}^{d}\right>+
\left<\tilde{a}_{\ell m}^{a}\tilde{a}_{\ell^\prime m^\prime}^{d*}\right>\left<\tilde{a}_{\ell m}^{b*}\tilde{a}_{\ell^\prime m^\prime}^{c}\right>
}{(2\ell+1)(2\ell^\prime+1)} \nonumber
\end{eqs}
by expanding the four-point Gaussian correlation using Isserlis' formula (or Wick's theorem).
Each two-point correlation of pseudo-$\ifmmode {\vec{a}_{\ell m}} \else $\vec{a}_{\ell m}$\fi$ can be expressed as the convolution of $\vec C_\ell$ with a kernel which depends on the polarization mode considered
\begin{eqnarray*}
\VEV{ \tilde a^{T_a*}_{{\ell m}}\tilde a^{T_b}_{{\ell'm'}}} &=& \sum_{{\ell_1m_1}} \Cl{\ell_1}{T_aT_b} \W{0}{T_a}{{\ell m}}{{\ell_1m_1}} \W{0}{T_b*}{{\ell'm'}}{{\ell_1m_1}}
\\
\VEV{ \tilde a^{E_a*}_{{\ell m}} \tilde a^{E_b}_{{\ell'm'}}} &=& \frac{1}{4} \sum_{\ell_1m_1}
\left\{
\Cl{\ell_1}{E_aE_b} \W{+}{E_a*}{{\ell m}}{{\ell_1m_1}} \W{+}{E_b}{{\ell'm'}}{{\ell_1m_1}}
+ \Cl{\ell_1}{B_aB_b} \W{-}{E_a*}{{\ell m}}{{\ell_1m_1}} \W{-}{E_b}{{\ell'm'}}{{\ell_1m_1}}
\right\}
\\
\VEV{\tilde a^{T_a*}_{{\ell m}} \tilde a^{E_b}_{{\ell'm'}}} &=& \frac{1}{2} \sum_{\ell_1m_1}
\Cl{\ell_1}{T_a E_b} \W{0}{T_a*}{{\ell m}}{{\ell_1m_1}} \W{+}{E_b}{{\ell'm'}}{{\ell_1m_1}}
\end{eqnarray*}
where the kernels $W^{0}$, $W^{+}$, and $W^{-}$ are defined as linear combination of products of $Y_{\ell m}$ of spin 0 and $\pm 2$ (see Appendix~\ref{ann:xpol_covariance}).
As suggested in \citet{efstathiou:2006}, neglecting the gradients of the window function and applying the completeness relation for spherical harmonics \citep{varshalovich:1988}, we can reduce the products of four $W$ into kernels similar to the coupling matrix $\tens{M}$ defined in Eq.~\ref{eq:master}.
In the end, the blocks of the $\tens{\Sigma}$ matrix read
\begin{eqnarray*}
\Sigma^{T_aT_b,T_cT_d}
&\simeq
\Cl{\ell\ell'}{T_aT_c}\Cl{\ell\ell'}{T_bT_d} \tens{M}_{TT,TT} &+\ \Cl{\ell\ell'}{T_aT_d}\Cl{\ell\ell'}{T_bT_c} \tens{M}_{TT,TT}
\\
\Sigma^{E_aE_b,E_cE_d}
&\simeq
\Cl{\ell\ell'}{E_aE_c}\Cl{\ell\ell'}{E_bE_d} \tens{M}_{EE,EE} &+\ \Cl{\ell\ell'}{E_aE_d}\Cl{\ell\ell'}{E_bE_c} \tens{M}_{EE,EE}
\\
\Sigma^{T_aE_b,T_cE_d}
&\simeq
\Cl{\ell\ell'}{T_aT_c}\Cl{\ell\ell'}{E_bE_d} \tens{M}_{TE,TE} &+\ \Cl{\ell\ell'}{T_aE_d}\Cl{\ell\ell'}{E_bT_c} \tens{M}_{TT,TT}
\\
\Sigma^{T_aT_b,T_cE_d}
&\simeq
\Cl{\ell\ell'}{T_aT_c}\Cl{\ell\ell'}{T_bE_d} \tens{M}_{TT,TT} &+\ \Cl{\ell\ell'}{T_aE_d}\Cl{\ell\ell'}{T_bT_c} \tens{M}_{TT,TT}
\\
\Sigma^{T_aT_b,E_cE_d}
&\simeq
\Cl{\ell\ell'}{T_aE_c}\Cl{\ell\ell'}{T_bE_d} \tens{M}_{TT,TT} &+\ \Cl{\ell\ell'}{T_aE_d}\Cl{\ell\ell'}{T_bE_c} \tens{M}_{TT,TT}
\\
\Sigma^{E_aE_b,T_cE_d}
&\simeq
\Cl{\ell\ell'}{E_aT_c}\Cl{\ell\ell'}{E_bE_d} \tens{M}_{TE,TE} &+\ \Cl{\ell\ell'}{E_aE_d}\Cl{\ell\ell'}{E_bT_c} \tens{M}_{TE,TE}
\end{eqnarray*}
which are thus directly related to the measured auto- and cross-power spectra (see Appendix~\ref{ann:xpol_covariance} for details). In practice, to avoid any correlation between $C_\ell$ estimates and their covariance, we use a smoothed version of each measured power spectrum (using a Gaussian filter with $\sigma_\ell=5$) to estimate the covariance matrix.
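The smoothing step can be sketched as a one-dimensional Gaussian convolution; the kernel implementation below is ours (not \emph{Xpol}'s) and the spectrum is a toy:

```python
import numpy as np

def gaussian_smooth(cl, sigma_l=5):
    """Smooth a power spectrum with a Gaussian filter of width sigma_l,
    as done before inserting the measured C_l into the covariance."""
    half = int(4 * sigma_l)
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_l) ** 2)
    kernel /= kernel.sum()
    # reflect-pad so the edges are not biased low by the convolution
    padded = np.pad(cl, half, mode="reflect")
    return np.convolve(padded, kernel, mode="valid")

ell = np.arange(30, 2001)
cl = 1e4 / ell**2 * (1 + 0.05 * np.sin(ell / 5.0))   # toy spectrum with wiggles
cl_smooth = gaussian_smooth(cl, sigma_l=5)
```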
The analytical full covariance matrix (Fig.~\ref{fig:CovMat}) has $25\,618\times25\,618$ elements, is symmetric and positive definite. Its condition number is $\sim 10^8$.
This semi-analytical estimation has been tested against Monte Carlo simulations. In particular, we tested the accuracy of the approximations in the presence of a non-ideal Gaussian signal (due to small foreground residuals), of a realistic (low) level of \textsc{Planck}\ pixel-pixel correlated noise, and for the apodization length used for the mask.
We have found no deviation to the sample covariance estimated from the 1000 realizations of the full focal plane Planck simulations \citep[FFP8, see][]{planck2014-a14} including anisotropic correlated noise and foreground residuals. To go further and to check the detailed impact from the sky mask (including the choice of the apodization length), we simulated CMB maps from the \textsc{Planck}\ 2015 best-fit $\Lambda$CDM angular power spectrum, to which we added realistic anisotropic Gaussian noise (but without correlation) corresponding to each of the six data set maps. We then computed their cross-power spectra using the same foreground masks as for the data. A total of $15\,000$ sets of cross-power spectra have been produced.
When comparing the diagonal of the covariance matrix from the analytical estimation with the corresponding simulated variance, a precision better than a few per cent is found (Fig.~\ref{fig:MatrixPrecision}).
The residuals show some oscillations, essentially in temperature, which are introduced by the compact objects mask. Indeed, the large number of small holes with short apodization length induces structures in the harmonic window function which break the hypothesis used in the semi-analytical estimation of the $C_\ell$ covariance matrix.
However, the refined procedure used to construct our specific point source mask keeps the level of this impact below a few per cent.
Since we are using a Gaussian approximation of the likelihood, the uncertainty of the covariance matrix will not bias the estimation of the cosmological parameters. The per cent precision obtained here will then only propagate into a per cent error on the variance of the recovered cosmological model.
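The quadratic form entering such a Gaussian likelihood is best evaluated with a Cholesky solve rather than an explicit matrix inverse, which matters for a covariance with a condition number of $\sim 10^8$. The following is a minimal sketch with a toy $3\times3$ covariance (the actual matrix is $25\,618\times25\,618$); variable names are ours, not pipeline code:

```python
import numpy as np

def gauss_chi2(c_data, c_model, cov):
    """-2 ln L (up to a constant) for a Gaussian likelihood,
    computed with a Cholesky solve instead of an explicit inverse,
    which is numerically safer for a badly conditioned covariance."""
    r = c_data - c_model
    L = np.linalg.cholesky(cov)   # cov = L L^T
    z = np.linalg.solve(L, r)     # z = L^{-1} r
    return float(z @ z)           # equals r^T cov^{-1} r

# Toy 3-bin example; the real matrix is 25 618 x 25 618.
cov = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.5, 0.2],
                [0.0, 0.2, 1.0]])
c_data = np.array([1.0, 2.0, 3.0])
c_model = np.array([1.1, 1.9, 3.2])
chi2 = gauss_chi2(c_data, c_model, cov)
```

The Cholesky factorization also gives $\ln\det\tens{\Sigma}$ for free (as twice the sum of the log-diagonal of $L$), although that term is constant here since the covariance is fixed.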
\begin{figure}[!ht]
\centering
\includegraphics[draft=false,width=\columnwidth]{mll_sim_TTEETE}
\includegraphics[draft=false,width=\columnwidth]{mll_simx_TTEETE}
\caption{Diagonals of the $C_\ell$ covariance matrix $\tens{\Sigma}$ for the block 143A$\times$143B computed using the semi-analytical estimation ({\it coloured lines}) compared with the Monte Carlo ({\it black line}). {\it Top:} spectra auto-correlation. {\it Bottom:} spectra cross-correlation.}
\label{fig:MatrixPrecision}
\end{figure}
\subsection{Model}
\label{sec:lik:model}
We now present the model ($\vec{\hat C}_\ell$) used in the likelihood (Eq.~\ref{eq:likelihood}). The foreground emissions are mitigated by applying the masks (defined in Sect.~\ref{sec:data:maps}) and by an appropriate choice of multipole range. However, our likelihood function explicitly takes into account the residuals of foreground emission in the power spectra, together with the CMB model and instrumental systematic effects. The model reads:
\begin{equation}
\vec{\hat C}_\ell^{ij} = A_{\rm pl}^2 c_i c_j \left(1+\beta^{ij}\mu_\ell^{ij}\right)^2 \left( \vec{C}_\ell^{\rm CMB} + \sum_{\rm fg} {A}^{ij}_{\rm fg} \vec{C}_\ell^{ij,\rm fg} \right)
\label{eq:model}
\end{equation}
where $A_{\rm pl}$ is an absolute calibration factor, $c$ represents the inter-calibration of each map (normalized to the 143A map), $\beta$ is the amplitude of the beam uncertainty $\mu_\ell$, and $\tens{A}_{\rm fg}$ are the amplitudes of the foreground components $\vec{C}_\ell^{\rm fg}$.
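As an illustration, Eq.~(\ref{eq:model}) can be evaluated in a few lines. This is a toy sketch with made-up spectra, not pipeline code; the function and variable names are ours:

```python
import numpy as np

def model_cl(cl_cmb, fg_templates, fg_amps, c_i, c_j,
             A_pl=1.0, beta=0.0, mu=None):
    """Sketch of the model: hatC_l^{ij} =
    A_pl^2 c_i c_j (1 + beta mu_l)^2 (C_l^CMB + sum_fg A_fg C_l^fg)."""
    mu = np.zeros_like(cl_cmb) if mu is None else mu
    fg = sum(A * t for A, t in zip(fg_amps, fg_templates))
    return A_pl**2 * c_i * c_j * (1.0 + beta * mu)**2 * (cl_cmb + fg)

cl_cmb = np.array([5000.0, 2500.0, 1000.0])   # toy CMB spectrum
dust = np.array([50.0, 20.0, 10.0])           # toy dust residual
hat_cl = model_cl(cl_cmb, [dust], [1.0], c_i=1.0, c_j=1.0)
```

With unit calibrations, $\beta=0$ (as adopted in Sect.~\ref{sssec:instru_syste}), and a single foreground template, this reduces to the sum of the CMB spectrum and the rescaled template.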
The model for the CMB, $\vec{C}^{\rm CMB}_\ell$, is computed by numerically solving the background and perturbation equations for a specific cosmological model. In this paper, we consider a {$\rm{\Lambda CDM}$}\ model with six free parameters describing the current density of baryons ($\Omega_b$) and cold dark matter ($\Omega_{cdm}$); the angular size of the sound horizon at recombination ($\theta$); the reionization optical depth ($\tau$); and the index and amplitude of the primordial scalar spectrum ($n_{\rm s}$ and $A_{\rm s}$).
We include in the sum of the foregrounds for the temperature likelihood contributions from Galactic dust, cosmic infrared background (CIB), thermal (tSZ) and kinetic (kSZ) Sunyaev-Zel'dovich components, Poisson point sources (PS), and the correlation between infrared galaxies and the tSZ effect (tSZxCIB).
Only Galactic dust is considered in polarization. Synchrotron emission is known to be significantly polarized, but it is subdominant in the \textsc{Planck}-HFI channels and we can neglect its contribution in power spectra above $\ell=50$. The contribution from polarized point sources is also negligible in the $\ell$ range considered for polarized spectra~\citep{tucci:2012}.
In {HiLLiPOP}, we use physically motivated templates of foreground emission power spectra, based on \textsc{Planck}\ measurements. We assume a $C_{\ell}$ template for each foreground with a fixed frequency spectrum and rescale it using a free parameter $\tens{A}^{\rm fg}$ normalized to one.
The model is a function of the cosmological ($\mathbf{\Omega}$) and nuisance ($p$) parameters: $\vec{\hat C}_{\ell}^{\rm model}(\mathbf{\Omega}, p)$.
The latter include instrumental parameters accounting for instrumental uncertainties and scaling parameters for each astrophysical foreground model as described in the following sections.
In the end, we have a total of 6 instrumental parameters (only calibration is considered, see Sect.~\ref{sssec:instru_syste}), 9 astrophysical parameters (7 for $TT$, 1 for $TE$, 1 for $EE$), and $6+$ cosmological parameters ($\Lambda$CDM and extensions), i.e. a total of $21+$ free parameters in the full likelihood function (see Appendix~\ref{hlp_params}).
We note that the \textsc{Planck}\ public likelihood depends on more nuisance parameters: 15 for $TT$ (compared to 13 for \emph{hlp} T), 9 for $TE$ (compared to 7 for \emph{hlp} X), and 9 for $EE$ (compared to 7 for \emph{hlp} E).
\subsubsection{Instrumental systematics}
\label{sssec:instru_syste}
The instrumental parameters of the {HiLLiPOP}\ likelihood are the inter-calibration coefficients ($c$, which are measured relative to the 143A map), and the amplitudes ($\beta$) of the beam error modes ($\mu_\ell$).
In practice, we have linearized Eq.~\ref{eq:model} for the coefficients $c$ and fit for small deviations around zero ($c \rightarrow 1+c$), while fixing $c_{\rm 143A}=0$ for normalization.
The uncertainty in the absolute calibration is propagated through a global rescaling factor $A_{\rm pl}$.
The effective beam window functions $B_{\ell}$ account for the scanning strategy and the weighted sum of individual detectors performed to obtain the combined maps \citep{planck2014-a08}. They are constructed from Monte Carlo simulations of the CMB convolved with the measured beam on each time-ordered data sample.
The uncertainties in the determination of the HFI effective beams come directly from simulations and are described in terms of the Monte Carlo eigenmodes $\mu_\ell$~\citep{planck2013-p08}.
In the \textsc{Planck}\ 2013 analysis, it was found that, in practice, only the first beam eigenmode for the 100$\times$100 spectrum was relevant \citep{planck2013-p11}. For the 2015 analysis, \cite{planck2014-a13} found no evidence of beam error in their multipole range thanks to higher accuracy in the beam estimation, which reduced the amplitude of the beam uncertainty. As a consequence, in our analysis, we fixed their contribution to zero ($\beta=0$).
\subsubsection{Galactic dust}
\label{sec:dust_model}
The $TT$, $EE$, and $TE$ Galactic dust $C_\ell$ templates are obtained from the cross-power spectra between half-mission maps at 353\ifmmode $\,GHz$\else \,GHz\fi\ \citep[as in][]{planck2014-XXX}. This is repeated for each mask combination associated with the map data set. The estimated power spectra are then rescaled accordingly to each of the six cross-frequencies considered in this analysis.
We compute the 353\ifmmode $\,GHz$\else \,GHz\fi\ cross-spectra $\vec{\hat C}_\ell^{M_iM_j}$ for each pair of masks $(M_i,M_j)$ associated with the cross-spectra $i \times j$ (Fig.~\ref{fig:dust353}). We then subtract the \textsc{Planck}\ best-fit CMB power spectrum. For $TT$, we also subtract the CIB power spectrum \citep{planck2013-pip56}.
In addition to Galactic dust, unresolved point sources contribute to the $TT$ power spectra at 353\ifmmode $\,GHz$\else \,GHz\fi. To construct the dust templates $\vec{C}_\ell^{M_iM_j,\rm dust}$ for our analysis, we thus fit a power law plus a free constant, $A\ell^\alpha+B$, in the range $\ell=[50,2500]$ for $TT$, while a simple power law is used to fit the $EE$ and $TE$ power spectra in the range $\ell=[50,1500]$.
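The $A\ell^\alpha+B$ model is linear in $(A,B)$ once $\alpha$ is fixed, so the fit can be reduced to a one-dimensional scan over $\alpha$ with a linear least-squares solve at each step. A sketch on noiseless synthetic data (not the actual 353\,GHz spectra):

```python
import numpy as np

def fit_powerlaw_const(ell, cl, alphas=np.linspace(-3.0, 0.0, 301)):
    """Fit C_l = A * ell^alpha + B by scanning alpha and solving the
    linear least-squares problem in (A, B) at each grid point."""
    best = None
    for a in alphas:
        X = np.column_stack([ell**a, np.ones_like(ell)])
        coef, *_ = np.linalg.lstsq(X, cl, rcond=None)
        sq = float(((X @ coef - cl) ** 2).sum())
        if best is None or sq < best[0]:
            best = (sq, float(coef[0]), float(coef[1]), float(a))
    _, A, B, alpha = best
    return A, B, alpha

# Recover known parameters from noiseless synthetic data.
ell = np.arange(50.0, 2500.0)
A, B, alpha = fit_powerlaw_const(ell, 3.0 * ell**-2.5 + 0.01)
```

On real, noisy spectra the scan would of course be weighted by the measurement uncertainties; this sketch only illustrates the structure of the fit.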
\begin{figure}[!ht]
\includegraphics[draft=false,width=\columnwidth]{dust_template.pdf}
\caption{Dust power spectra at 353\ifmmode $\,GHz$\else \,GHz\fi\ for $TT$ ({\it top}), $TE$ ({\it middle}), and $EE$ ({\it bottom}). The power spectra are computed from cross-correlation between half-mission maps for different sets of masks as defined in Sect.~\ref{sec:data:maps} and further corrected for CMB power spectrum ({\it solid black line}) and CIB power spectrum ({\it dashed black line}).}
\label{fig:dust353}
\end{figure}
Thanks to the use of the point source mask (described in Sect.~\ref{sec:data:maps}), our Galactic dust residual power spectrum is much simpler than in the case of the \textsc{Planck}\ official likelihood.
Indeed, the masks used in the \textsc{Planck}\ analysis remove some Galactic structures and bright cirrus, which induces an artificial knee in the residual dust power spectra around $\ell \sim 200$ \citep[Sect. 3.3.1 in][]{planck2014-a13}. In contrast, our Galactic dust power spectra are directly comparable to those derived in~\citet{planck2014-XXX}. Moreover, here we do not assume that the dust power spectra have the same spatial dependence across masks.
For each polarization mode ($TT$, $EE$, $TE$), we then extrapolate the dust templates at 353\ifmmode $\,GHz$\else \,GHz\fi\ for each cross-mask to the cross-frequency considered:
\begin{equation}
\vec{C}_{\ell}^{ij,{\rm dust}} = A_{\rm dust} \, a^{\rm dust}_{\nu_i} a^{\rm dust}_{\nu_j} \vec{C}_\ell^{M_iM_j,{\rm dust}}
\label{eq:dust_model}
,\end{equation}
where the extrapolation factors $a^{dust}_{\nu} = f^{dust}(\nu) / f^{dust}(353\ifmmode $\,GHz$\else \,GHz\fi)$ are estimated for intensity or polarization maps. We use a greybody emission law with a mean dust temperature of $19.6$~K and spectral indices $\beta^T=1.59$ and $\beta^P=1.51$ as measured in \citet{planck2014-XXII}. The resulting $a^{dust}_\nu$ factors are $(0.0199,0.0387,0.1311)$ for total intensity and $(0.0179,0.0384,0.1263)$ for polarization at 100, 143, and 217~GHz, respectively.
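These factors can be reproduced from the greybody law and the thermodynamic (K$_{\rm CMB}$) unit conversion alone. The sketch below is monochromatic, so it omits the integration over the instrumental bandpass and recovers the quoted intensity values only to within several per cent; constants and structure are standard, the function name is ours:

```python
import math

def dust_ratio(nu_ghz, beta, T=19.6, nu0_ghz=353.0, T_cmb=2.7255):
    """Monochromatic greybody ratio f(nu)/f(353 GHz) in K_CMB units:
    nu^beta * B_nu(T_dust), rescaled by the inverse of the
    thermodynamic conversion factor dB_nu/dT evaluated at T_CMB."""
    h_over_k = 6.62607015e-34 / 1.380649e-23      # h/k_B in K*s

    def x(nu, temp):                # dimensionless h*nu/(k_B*T)
        return h_over_k * nu * 1e9 / temp

    def greybody(nu):               # nu^(3+beta)/(e^x - 1); constants cancel
        return nu**(3.0 + beta) / math.expm1(x(nu, T))

    def g(nu):                      # dB/dT shape: x^4 e^x / (e^x - 1)^2
        xc = x(nu, T_cmb)
        return xc**4 * math.exp(xc) / math.expm1(xc)**2

    return (greybody(nu_ghz) / greybody(nu0_ghz)) * (g(nu0_ghz) / g(nu_ghz))

a_T = [dust_ratio(nu, beta=1.59) for nu in (100.0, 143.0, 217.0)]
```

For $\beta^T=1.59$ this yields $(0.019, 0.040, 0.129)$, within $\sim$5\% of the quoted bandpass-integrated values $(0.0199, 0.0387, 0.1311)$.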
In {HiLLiPOP}, this results in three free parameters ($A_{\rm dust}^{TT}$, $A_{\rm dust}^{EE}$, $A_{\rm dust}^{TE}$) describing the amplitude of the dust residuals in each mode. This model based on \textsc{Planck}\ internal measurements is simpler than the one used in the \textsc{Planck}\ official likelihood, which allows the amplitude of each cross-frequency to vary (ending with a total of 16 free parameters) and puts constraints on the dust SED through the use of strong priors.
\subsubsection{Cosmic infrared background}
\label{sec:CIB_model}
The thermal radiation of dust heated by UV emission from young stars produces an extragalactic infrared background whose emission law is very close to the Galactic dust emission.
The Planck Collaboration has studied the CIB in detail in \citet{planck2013-pip56} and provides templates based on a model that associates star forming galaxies with dark matter halos and their sub-halos, using a parametrized relation between the dust-processed infrared luminosity and (sub-)halo mass.
This model provides an accurate description of the Planck and IRAS CIB spectra from 3000\ifmmode $\,GHz$\else \,GHz\fi\ down to 217\ifmmode $\,GHz$\else \,GHz\fi. We extrapolate this model here, assuming it remains appropriate when describing the 143\ifmmode $\,GHz$\else \,GHz\fi\ and 100\ifmmode $\,GHz$\else \,GHz\fi\ data.
The halo model formalism, which is also used for the tSZ and the tSZ$\times$CIB models (see Sects.~\ref{sec:SZ_model} and \ref{sec:tSZxCIB_model}), has the general expression \citep{planck2014-a29}
\begin{equation}
C_{\ell} = C^{{\rm AB, 1h}}_\ell + C^{{\rm AB, 2h}}_\ell,
\end{equation}
where A and B stand for tSZ effect or CIB emission, $C^{{\rm AB, 1h}}_\ell$ is the one-halo contribution, and $C^{{\rm AB, 2h}}_\ell$ is the two-halo term.
The one-halo term $C^{{\rm AB, 1h}}_\ell$ is computed as
\begin{equation}
C_{\ell}^{\rm AB,{\rm 1h}} = 4 \pi \int {\rm d}z \frac{{\rm d}V}{{\rm d}z {\rm d}\Omega}\int{\rm d}M \frac{{\rm d^2N}}{{\rm d}M {\rm d}V} W^{\rm 1h}_{\rm A} W^{\rm 1h}_{\rm B} ,
\end{equation}
where $\frac{{\rm d^2N}}{{\rm d}M {\rm d}V}$ is the dark matter halo mass function from \citet{tinker:2008}, $\frac{{\rm d}V}{{\rm d}z {\rm d}\Omega}$ is the comoving volume element, and $W^{\rm 1h}_{\rm A,B}$ are the window functions that account for selection effects and the total halo signal.
The two-halo term, $C^{{\rm AB, 2h}}_\ell$, in turn accounts for correlations in the spatial distribution of halos over the sky.
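The structure of the one-halo integral (a redshift integral of a mass integral weighted by the mass function and the window functions) can be sketched with trapezoidal quadrature. The ingredients below are deliberately toy shapes, not the Tinker mass function nor the real tSZ/CIB windows; only the integral structure is faithful:

```python
import numpy as np

def trap(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Toy ingredients: illustrative shapes only.
def dndm(M, z):                 # halo mass function (toy)
    return M**-2.0 * np.exp(-M / 1e15) / (1.0 + z) ** 1.5

def dvdz(z):                    # comoving volume element (toy)
    return 1e9 * z**2 / (1.0 + z)

def window(M, z, ell):          # harmonic-space halo profile (toy)
    return (M / 1e13) * np.exp(-1e-3 * ell * (1.0 + z))

def one_halo(ell, zgrid, mgrid):
    """C_l^(1h) = 4 pi Int dz dV/(dz dOmega) Int dM d2N/(dM dV) W_A W_B,
    here with W_A = W_B, i.e. an auto-spectrum."""
    inner = np.array([trap(dndm(mgrid, z) * window(mgrid, z, ell) ** 2, mgrid)
                      for z in zgrid])
    return 4.0 * np.pi * trap(dvdz(zgrid) * inner, zgrid)

zgrid = np.linspace(0.01, 4.0, 200)
mgrid = np.logspace(11.0, 16.0, 200)
cl_200 = one_halo(200.0, zgrid, mgrid)
cl_2000 = one_halo(2000.0, zgrid, mgrid)
```

With any window function that decays with multipole, the one-halo power decreases from $\ell=200$ to $\ell=2000$, as in this toy example.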
For the CIB, the two-halo term (i.e. the term that considers galaxies belonging to two different halos) is dominant at low and intermediate multipoles and is very well constrained by \textsc{Planck}. The one-halo term is flat in $C_\ell$ and not well measured, as it is degenerate with the shot noise. Hence, in \citet{planck2013-pip56} strong priors on the shot noise were used to constrain the one-halo term. In {HiLLiPOP}, we did not include any shot noise term in the CIB template to avoid degeneracies with the amplitude of infrared sources (see Sect.~\ref{sec:ps_model}).
The power spectrum templates for each cross-frequency in $\mathrm{Jy}^2\mathrm{sr}^{-1}$ (with the IRAS convention $\nu I(\nu)=$cst) are then converted into $\mu {\rm K}_{\rm CMB}^2$ using a slightly revised version of Table~6 in \citet{planck2013-p03d}: $a^{\rm conv}_{100} = 1/244.06$, $a^{\rm conv}_{143} = 1/371.66$, and $a^{\rm conv}_{217} = 1/483.48$~K$_{\rm CMB}$/MJy.sr$^{-1}$ at 100, 143, and 217~GHz, respectively. Those coefficients account for the integration of the CIB emission law in the \textsc{Planck}\ bandwidth.
The CIB templates used in {HiLLiPOP}\ (Fig.~\ref{fig:CIB}) are then rescaled with a free single parameter $A_{\rm CIB}$:
\begin{equation}
\vec{C}_{\ell}^{ij,{\rm CIB}}=A_{\rm CIB} \, a^{\rm conv}_{\nu_i} a^{\rm conv}_{\nu_j} C_{\ell}^{\nu_i\nu_j,{\rm temp}} \, .
\label{eq:CIB_model}
\end{equation}
The same parametrization was finally adopted in the \textsc{Planck}\ official analysis for the 2015 release.
\begin{figure}[!ht]
\centering
\includegraphics[draft=false,width=\columnwidth]{cib_template.pdf}
\caption{CIB power spectra templates. The SED and the angular dependence are given by \citet{planck2013-pip56}. The CMB $TT$ power spectrum is plotted in black.}
\label{fig:CIB}
\end{figure}
\subsubsection{Sunyaev-Zel'dovich effect}
\label{sec:SZ_model}
The thermal Sunyaev-Zel'dovich emission (tSZ) is also parameterized by a single amplitude and a fixed template measured in \citet{planck2013-p05b} at 143\ifmmode $\,GHz$\else \,GHz\fi,
\begin{equation}
\vec{C}^{ij,\rm tSZ}_{\ell} = A_{\rm tSZ} \, a^{\rm tSZ}_{\nu_i} a^{\rm tSZ}_{\nu_j} C_{\ell}^{\rm tSZ} \,,
\label{eq:tSZ_model}
\end{equation}
where $a^{\rm tSZ}_{\nu} = f^{\rm tSZ}(\nu)/f^{\rm tSZ}(143)$ is the thermal Sunyaev-Zel'dovich spectrum normalized at 143~GHz.
We recall that, ignoring the bandpass corrections, the tSZ spectrum is given by
\begin{equation}
f^{\rm tSZ}(\nu) = \left(x\coth\left(\frac{x}{2}\right)-4 \right) \quad \mbox{with}\ x=\frac{h\nu}{k_\mathrm{B} T_\mathrm{cmb}}.
\label{eq:SZ_spectrum}
\end{equation}
After integrating over the instrumental bandpass, we obtain $f^{\rm tSZ} = -4.031, -2.785$, and $0.187$ at 100, 143, and 217~GHz, respectively \citep[see Table 1 in][]{planck2014-a28}.
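Interpreted as $T_{\rm CMB}\, f(x)$, i.e. in K$_{\rm CMB}$ per unit Compton $y$, the monochromatic spectrum of Eq.~(\ref{eq:SZ_spectrum}) already reproduces the quoted bandpass-integrated values to about 2\% at 100 and 143~GHz; the 217~GHz channel sits on the tSZ null, where the bandpass correction dominates. A sketch (monochromatic, no bandpass integration):

```python
import math

T_CMB = 2.7255  # K

def f_tsz(nu_ghz):
    """Monochromatic tSZ spectral function f(x) = x*coth(x/2) - 4,
    with x = h*nu / (k_B * T_cmb)."""
    x = 6.62607015e-34 * nu_ghz * 1e9 / (1.380649e-23 * T_CMB)
    return x / math.tanh(x / 2.0) - 4.0

# In K_CMB per unit Compton y: T_cmb * f(x)
dT = {nu: T_CMB * f_tsz(nu) for nu in (100.0, 143.0, 217.0)}
```

The function is negative below the null (decrement), crosses zero near 217~GHz, and is positive above (increment), e.g. at 353~GHz.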
The \textsc{Planck}\ official likelihood uses the same parametrization but with an empirically motivated template power spectrum \citep{efstathiou:2012}.
The kinetic Sunyaev-Zel'dovich (kSZ) effect is produced by the peculiar velocities of the clusters containing hot electron gas. We use power spectra extracted from reionization simulations. We assume that the kSZ follows the same SED as the CMB and fit only a global free amplitude, $A_{\rm kSZ}$.
We choose a combination of templates coming from homogeneous and patchy reionization:
\begin{equation}
\vec{C}_{\ell}^{ij,\rm kSZ} = A_{\rm kSZ} \, \left( C_{\ell}^{\rm hKSZ} + C_{\ell}^{\rm pKSZ} \right) \, .
\label{eq:kSZ_model}
\end{equation}
For the homogeneous kSZ, we use a template power spectrum given by \citet{Shaw12} calibrated with a ``cooling and star formation'' simulation. For the patchy reionization kSZ we use the fiducial model of \citet{Battaglia13}.
Both templates are shown in Fig.~\ref{fig:tSZ}. The \textsc{Planck}\ official likelihood considers a template from homogeneous reionization only, but the impact on the cosmology is completely negligible.
\begin{figure}[!ht]
\includegraphics[draft=false,width=\columnwidth]{tsz_template.pdf}
\includegraphics[draft=false,width=\columnwidth]{ksz_template.pdf}
\caption{{\it Top:} tSZ power spectra templates at each cross-frequency. Dashed lines are negative. SEDs are fixed and we fit the overall amplitude $A_{\rm tSZ}$. {\it Bottom:} frequency-independent kSZ template. The black line is the CMB power spectrum.}
\label{fig:tSZ}
\end{figure}
\subsubsection{tSZxCIB correlation}
\label{sec:tSZxCIB_model}
The halo model can naturally account for the correlation between two different source populations, each tracing the underlying dark matter, but having different dependence on host halo properties \citep{addison:2012}.
An angular power spectrum can thus be extracted for the correlation between unresolved clusters contributing to the tSZ effect and the dusty sources that make up the CIB. While the latter has a peak in redshift distribution between $z \simeq 1$ and $z \simeq 2$, and is produced by galaxies in dark matter halos of $10^{11}$-$10^{13}$ ${\rm M_\odot}$, tSZ is mainly produced by local ($z < 1$) and massive dark matter halos (above $10^{14}$ ${\rm M_\odot}$). This implies that the CIB and tSZ distributions present a very small overlap for the angular scales probed by \textsc{Planck}, and their correlation is thus hard to detect \citep{planck2014-a29}.
We use the templates shown in Fig.~\ref{fig:tSZxCIB}, computed using a tSZ power spectrum template based on \citet{efstathiou:2012} and a CIB template as described in Sect.~\ref{sec:CIB_model}.
The power spectra templates in~$\mathrm{Jy}^2\mathrm{sr}^{-1}$ (with the convention $\nu I(\nu)$=cst) are then converted to~$\mu {\rm K}_{\rm CMB}^2$ using the same coefficients as for the CIB (Sect.~\ref{sec:CIB_model}).
As for the other foregrounds, we then allow for a global free amplitude, $A_{\rm tSZxCIB}$, and write
\begin{equation}
\vec{C}_{\ell}^{ij,\rm tSZxCIB} = A_{\rm tSZxCIB} \, a^{\rm conv}_{\nu_i} a^{\rm conv}_{\nu_j} C_{\ell}^{\nu_i\nu_j,\rm temp} \, .
\label{eq:tSZxCIB_model}
\end{equation}
\begin{figure}[!ht]
\centering
\includegraphics[draft=false,width=\columnwidth]{szxcib_template.pdf}
\caption{ tSZxCIB power spectra templates. The SED and the angular dependence are fixed. Dashed lines are negative.}
\label{fig:tSZxCIB}
\end{figure}
\subsubsection{Unresolved PS}
\label{sec:ps_model}
At \textsc{Planck}\ frequencies, the unresolved point source signal incorporates the contribution from extragalactic radio and infrared dusty galaxies \citep{tucci:2005}. We use a specific mask for each frequency to mitigate the impact of strong sources (see Sect.~\ref{sec:data:maps}).
\citet{planck2014-a13} gives the expected amplitudes for the Poisson shot noise from theoretical models that predict number counts $dN/dS$ for each frequency.
Their analyses take into account the details of the construction for the point source masks, such as the fact that the flux cut varies across the sky or the ``incompleteness'' of the catalogue from which the masks are built at each frequency.
We computed the expectation at each cross-frequency for the point source amplitudes ($a_{\nu_i,\nu_j}^{radio}$ for the radio sources and $a_{\nu_i,\nu_j}^{IR}$ for the infrared sources), based on the flux cut considered for our own point source masks (see Sect.~\ref{sec:data:maps}), using the model from \citet{tucci:2011} for the radio sources and that from \citet{bethermin:2012} for dusty galaxies (see Table~\ref{tab:shot_noise}). We note that our predicted numbers for radio galaxies differ from those reported in Table 17 of \citet{planck2014-a13}.
We consider a flat Poisson-like power spectrum for each component and rescale by two free amplitudes $A^{\rm radio}_{\rm PS}$ and $A^{\rm IR}_{\rm PS}$:
\begin{equation}
\vec{C}_{\ell}^{ij,\rm PS} = A_{\rm PS}^{\rm radio} \, a_{\nu_i,\nu_j}^{radio} + A_{\rm PS}^{\rm IR} \, a_{\nu_i,\nu_j}^{IR} \, .
\label{eq:ps_model}
\end{equation}
In polarization, we neglect the point source contribution from both components~\citep{tucci:2004}.
It is important to note that building a reliable multi-frequency model for the unresolved sources is difficult. Indeed, it depends on the flux cut used to construct each mask, but also on the procedure used to identify spurious detections of high-latitude Galactic cirrus as point sources in the catalogue. The uncertainty on the flux-cut estimation is particularly important in the case of radio sources, as the flux cuts considered for CMB analyses (typically around 200\,mJy) are close to the peak of the number counts.
That is the main reason why the \textsc{Planck}\ public likelihood analysis considers one amplitude for point sources per cross-spectrum.
\begin{figure*}[!ht]
\center
\includegraphics[width=0.9\textwidth]{cosmo_TT_EE_TE_tauprior}
\caption{Posterior distribution for the six cosmological {$\rm{\Lambda CDM}$}\ parameters for {HiLLiPOP}\ and a prior on $\tau$ ($0.058 \pm 0.012$).}
\label{fig:cosmo}
\end{figure*}
\subsection{Additional priors}
\label{sec:lik:priors}
The various parameters considered in the model described in this section are not all well constrained by the CMB data themselves.
We complement our model with additional priors coming from external knowledge.
For the instrumental nuisances, Gaussian priors are applied on the calibration coefficients based on the uncertainty estimated in \citet{planck2014-a09}: $c_0 = c_1 = c_3 = 0\pm0.002$, $c_4 = c_5 = 0.002\pm0.004$ (Table 5), and $A_{\rm pl}=1\pm0.0025$.
Given its angular resolution, \textsc{Planck}\ is not equally able to constrain the different astrophysical emissions.
We choose to apply Gaussian priors on the dominant ones: Galactic dust, CIB, thermal SZ, and point sources. The width of the priors is driven by the uncertainty of the foreground modelling. We recall that this model tries to capture residuals from highly non-Gaussian and anisotropic emission using templates in $C_\ell$ with fixed spectral energy distributions (SEDs). As a consequence, it is difficult to derive an accurate estimate of the expected amplitudes.
We use a Gaussian prior centred on one with a 20\% width ($1.0 \pm 0.2$) for each of the five foreground rescaling amplitudes ($A_{\rm dust}$, $A_{\rm CIB}$, $A_{\rm tSZ}$, $A_{\rm PS}^{\rm radio}$, and $A_{\rm PS}^{\rm IR}$).
The \textsc{Planck}\ collaboration suggests the addition of a 2D prior on both amplitudes of tSZ and kSZ in order to mimic the constraints from the high-resolution experiments ACT and SPT~\citep[see][]{planck2014-a13}. As demonstrated in \citet{couchot:2015}, this is not strictly equivalent, in particular for results on $A_{\rm L}$. We choose to leave the correlation free.
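Schematically, such Gaussian priors simply add quadratic penalty terms to $-2\ln\mathcal{L}$. The penalty form below is standard practice rather than a description of the actual implementation; the central values and widths are those quoted above:

```python
def prior_chi2(params, priors):
    """Gaussian priors as quadratic penalties added to -2 ln L:
    sum_i ((p_i - mu_i) / sigma_i)^2."""
    return sum(((params[k] - mu) / sig) ** 2
               for k, (mu, sig) in priors.items())

priors = {
    "A_dust":     (1.0, 0.2),
    "A_CIB":      (1.0, 0.2),
    "A_tSZ":      (1.0, 0.2),
    "A_PS_radio": (1.0, 0.2),
    "A_PS_IR":    (1.0, 0.2),
    "A_pl":       (1.0, 0.0025),
}
central = {k: mu for k, (mu, _) in priors.items()}
```

A parameter sitting at its central value contributes nothing; a one-sigma displacement adds exactly one unit of $\chi^2$.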
\section{{HiLLiPOP}\ Results}
\label{sec:results}
This section is dedicated to the results derived with the {HiLLiPOP}\ likelihood functions (\emph{hlp} TXE, \emph{hlp} T, \emph{hlp} E, and \emph{hlp} X).
We discuss the cosmological parameters as well as the astrophysical foregrounds and instrumental nuisance.
We pay particular attention to the difference between the results obtained with $TT$ spectra (\emph{hlp} T) and those obtained with $TE$ spectra (\emph{hlp} X).
We choose not to use any low-$\ell$ information and prefer to apply a simple prior on the optical reionization depth ($\tau = 0.058 \pm 0.012$) as given by the \emph{lollipop} likelihood in \citet{planck2014-a25}. We have checked that, for the {$\rm{\Lambda CDM}$}\ model, the parameters are indistinguishable when using the corresponding \textsc{Planck}\ low-$\ell$ likelihood.
We use the Gaussian priors on the inter-calibration coefficients and on astrophysical rescaling factors (dust, CIB, tSZ, and point sources) as discussed in Sect.~\ref{sec:lik:priors}.
The results described here were obtained using the adaptive-MCMC algorithm implemented in the CAMEL toolbox\footnote{Available at \href{camel.in2p3.fr}{camel.in2p3.fr}}. We use the \texttt{CLASS}\footnote{\href{http://class-code.net}{http://class-code.net}} software to compute the model power spectra for a given cosmology.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{lccc}
\hline
\hline
Likelihood & $\chi^2$ & $n_{\rm dof}$ & $\chi^2$/$n_{\rm dof}$ \\
\hline
\emph{hlp} TXE & 27888.3 & 25597 & 1.090 \\
\emph{hlp} T & 9995.9 & 9543 & 1.047 \\
\emph{hlp} X & 9319.9 & 8799 & 1.059 \\
\emph{hlp} E & 7304.5 & 7249 & 1.008 \\
\hline
\end{tabular}
\caption{$\chi^2$ values compared to the number of degrees of freedom ($n_{\rm dof} = n_\ell - n_{\rm p}$) for each of the {HiLLiPOP}\ likelihoods.}
\label{tab:chi2}
\end{center}
\end{table}
The $\chi^2$ values of the best fit for each {HiLLiPOP}\ likelihood are given in Table~\ref{tab:chi2}. Using our simple foreground model, we are able to fit the \textsc{Planck}\ data with reasonable $\chi^2$ values and reduced-$\chi^2$ comparable to the Planck public likelihood (the absolute values are not directly comparable since the Planck public likelihood uses binned cross-power spectra and different foreground modelling). We note that \emph{hlp} T and \emph{hlp} X show comparable $\chi^2$ with a similar number of degrees of freedom.
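As a rough cross-check of these numbers, a reduced $\chi^2$ can be converted into an approximate number of Gaussian standard deviations with the Wilson-Hilferty cube-root transformation. This is an approximation rather than an exact probability to exceed, and the function name is ours:

```python
import math

def chi2_nsigma(chi2, ndof):
    """Wilson-Hilferty approximation: (chi2/k)^(1/3) is approximately
    normal with mean 1 - 2/(9k) and variance 2/(9k)."""
    mean = 1.0 - 2.0 / (9.0 * ndof)
    sd = math.sqrt(2.0 / (9.0 * ndof))
    return ((chi2 / ndof) ** (1.0 / 3.0) - mean) / sd

sig_T = chi2_nsigma(9995.9, 9543)   # hlp T
sig_E = chi2_nsigma(7304.5, 7249)   # hlp E
```

In this approximation the \emph{hlp} T excess corresponds to a few standard deviations while \emph{hlp} E is well below one, keeping in mind that the absolute values are not directly comparable across likelihoods with different binning and foreground modelling.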
\subsection{{$\rm{\Lambda CDM}$}\ cosmological results}
\label{sec:results:cosmo}
\begin{table*}
\begin{center}
\begin{tabular}{lcccc}
\hline
\hline
Parameters & \emph{hlp} T & \emph{hlp} X & \emph{hlp} E & \emph{hlp} TXE \\
\hline
$\Omega_\mathrm{b}h^2$ & $0.02212 \pm 0.00021$ & $0.02210 \pm 0.00024$ & $0.02440 \pm 0.00106$ & $0.02227 \pm 0.00014$ \\
$\Omega_\mathrm{c}h^2$ & $0.1209 \pm 0.0021$ & $0.1204 \pm 0.0020$ & $0.1130 \pm 0.0043$ & $0.1191 \pm 0.0012$ \\
$100\theta_\mathrm{s}$ & $1.04164 \pm 0.00043$ & $1.04184 \pm 0.00047$ & $1.04101 \pm 0.00074$ & $1.04179 \pm 0.00028$ \\
$\tau$ & $0.062 \pm 0.011$ & $0.059 \pm 0.012$ & $0.059 \pm 0.012$ & $0.067 \pm 0.011$ \\
$\mathrm{n}_\mathrm{s}$ & $0.9649 \pm 0.0058$ & $0.9631 \pm 0.0108$ & $0.9939 \pm 0.0158$ & $0.9672 \pm 0.0037$ \\
$\ln(10^{10}A_\mathrm{s})$ & $3.058 \pm 0.022$ & $3.046 \pm 0.027$ & $3.061 \pm 0.027$ & $3.065 \pm 0.022$ \\
\hline
\end{tabular}
\caption{Central value and 68\% confidence limit for the base $\Lambda$CDM model with {HiLLiPOP}\ likelihoods with a prior on $\tau$ ($0.058 \pm 0.012$).}
\label{tab:lcdm}
\end{center}
\end{table*}
Figure~\ref{fig:cosmo} shows the posterior distributions of the six {$\rm{\Lambda CDM}$}\ parameters reconstructed from each likelihood and their combinations, which are summarized in Table~\ref{tab:lcdm}.
We find very consistent results for cosmology between all the likelihoods.
For \emph{hlp} E, we find a $\sim2\sigma$ tension on both $n_s$ and $\Omega_b$, which is related neither to the foregrounds nor to the multipole range or the sky fraction.
Almost all parameters are compatible with the \textsc{Planck}\ results \citep{planck2014-a15} within 0.5$\sigma$ when considering the temperature data only or the full likelihood.
Error bars from the \textsc{Planck}\ public likelihood and {HiLLiPOP}\ as presented in this paper are nearly identical.
As discussed in detail in~\citet{couchot:2015}, the difference in $\tau$ and $A_{\rm s}$ can be understood as a preference of the {HiLLiPOP}\ likelihood for a lower $A_{\rm L}$ (Sect.~\ref{sec:lcdm+}). In both cases, the shifted value for $A_{\rm L}$ comes from a tension between the high-$\ell$ and the $\tau$ constraint (either from lowTEB or from the prior), the likelihood for {HiLLiPOP}\ alone showing almost no constraint on $\tau$ when $A_{\rm L}$ is free.
The results are compatible with those presented in~\citet{couchot:2015}, where we used low-$\ell$ data from \textsc{Planck}-LFI (instead of a tighter prior on $\tau$ from the last results of \textsc{Planck}-HFI). We also now impose a model for the point source frequency spectrum (radio sources and infrared sources) which increased the sensitivity in $n_s$ by $\sim$15\%.
The \emph{hlp} X likelihood is almost as sensitive as \emph{hlp} T to {$\rm{\Lambda CDM}$}\ parameters, although the signal-to-noise ratio is lower in the $TE$ spectra. As we discuss in Sect.~\ref{sec:systematics}, this comes from the uncertainties on the foreground parameters which increase the width of the \emph{hlp} T posteriors.
This is also the case for the Hubble parameter $H_0$ for which we find
\begin{eqs}
H_0 &=& (67.09 \pm 0.86)~{\rm km\,s^{-1}\,Mpc^{-1}} \quad \text{(\emph{hlp} T)}\\
H_0 &=& (67.16 \pm 0.89)~{\rm km\,s^{-1}\,Mpc^{-1}} \quad \text{(\emph{hlp} X)},
\end{eqs}
compatible with the low value reported by the \textsc{Planck}\ collaboration \citep{planck2014-a15}.
The only parameter which is significantly less constrained by the $TE$ data is $n_{\rm s}$. Indeed, for a cosmic-variance limited experiment, $TT$ and $TE$ show comparable sensitivity for $n_{\rm s}$; instead, the \textsc{Planck}\ instrumental noise on $TE$ spectra increases the posterior width by a factor of almost 2~\citep{galli:2014}.
As expected, the results based on the \emph{hlp} E likelihood are even less accurate.
\subsection{Instrumental nuisances}
\label{sec:results:instru}
In the likelihood function, the calibration uncertainties are modelled using an absolute rescaling $A_{\rm pl}$ and inter-calibration factors $c$. The parameter $A_{\rm pl}$ allows the propagation of an overall calibration error at the cross-spectra level (which principally translates into a larger error on the amplitude of the primordial power spectrum $A_s$). We apply the same calibration factors for temperature and polarization.
The constraints on inter-calibration coefficients from the \textsc{Planck}\ CMB data are much weaker than the external priors. Without priors, we found that the coefficients are recovered without any bias in all cases with posterior width of typically 1.5\%, 2\%, 7\%, and 5\% for \emph{hlp} TXE, \emph{hlp} T, \emph{hlp} E, and \emph{hlp} X, respectively.
Figure~\ref{fig:calib_calpriors} shows the posterior distributions for the inter-calibration factors (including external priors described in Sect~\ref{sec:lik:priors}).
We found a slight tension (less than $2\sigma$) between the calibration factors recovered from temperature and those from polarization.
The relatively poor $\chi_{\rm min}^2$ value of the full likelihood configuration (Table~\ref{tab:chi2}) is likely due, in part, to this disagreement between calibrations.
To account for a possible difference between temperature and polarization calibrations, we added new parameters $\epsilon$ (corresponding to the polarization efficiency) through the redefinition $c \rightarrow c(1+\epsilon)$ for the polarization maps. We checked the results with the \emph{hlp} X and \emph{hlp} TXE likelihoods, keeping the temperature calibrations fixed and leaving the $\epsilon_i$ free in the analysis. We did not see any improvement in the $\chi_{\rm min}^2$ of the full likelihood.
The level of the calibration shifts is of the order of one per mille. We have checked that it has a negligible impact on both the cosmology and the astrophysical parameters.
\begin{figure}[!ht]
\center
\includegraphics[draft=false,width=\columnwidth]{nuisance_TT_EE_TE_tauprior}
\caption{Posterior distribution of the five inter-calibration parameters for each of the {HiLLiPOP}\ likelihoods (\emph{hlp} TXE, \emph{hlp} T, \emph{hlp} E, and \emph{hlp} X).}
\label{fig:calib_calpriors}
\end{figure}
\subsection{Astrophysical results}
\label{sec:results:astro}
We recall that the foregrounds in the {HiLLiPOP}\ likelihoods are modelled using fixed spectral energy distributions (SEDs) and that, for each emission, the only free parameter is an overall rescaling amplitude (which should be one if the correct SED is used).
\begin{figure}[!ht]
\center
\includegraphics[draft=false,width=\columnwidth]{astro_nofg}
\caption{Astrophysical foreground amplitude posterior distributions for the {HiLLiPOP}\ likelihoods: \emph{hlp} TXE ({\it black}), \emph{hlp} T ({\it red}), \emph{hlp} E ({\it blue}) and \emph{hlp} X ({\it green}). Priors are also plotted (grey dashed line).}
\label{fig:astro}
\end{figure}
The compatibility with one for all foreground amplitudes is thus a good test for the consistency of the internal \textsc{Planck}\ templates. Figure~\ref{fig:astro} shows the posterior distributions for the astrophysical foreground amplitudes.
We discuss the results in detail in the following sections. We check the stability of the cosmological results with respect to foreground parameters in Sect.~\ref{sec:systematics}.
\subsubsection*{Dust}
The emission of galactic dust is the dominant residual foreground in the power spectra considered in this analysis.
The recovered amplitudes for each likelihood (with the values for the full likelihood given in parentheses) are
\begin{eqs}
A^{TT}_{\rm dust} = 0.97 \pm 0.09 \quad (0.99 \pm 0.08)\\
A^{TE}_{\rm dust} = 0.86 \pm 0.12 \quad (0.80 \pm 0.11)\\
A^{EE}_{\rm dust} = 1.14 \pm 0.13 \quad (1.20 \pm 0.11)
\end{eqs}
The dust amplitude in temperature is recovered perfectly.
The amplitude for the $EE$ polarization mode is found to be slightly high at 1.5$\sigma$, while the $TE$ polarization mode is low at about 1.5$\sigma$.
When using the full {HiLLiPOP}\ likelihood, the tension on the dust polarization modes $EE$ and $TE$ reaches 2$\sigma$ which is directly related to the small tension on calibration discussed in Sect.~\ref{sec:results:instru}.
\subsubsection*{CIB}
The second emission to which \textsc{Planck}\ $TT$ CMB power spectra are sensitive is the CIB.
The $A_{\rm CIB}$ amplitudes recovered for \emph{hlp} T and (in parentheses) \emph{hlp} TXE are
\begin{align}
A_{\rm CIB} = 0.84 \pm 0.15 \quad (1.01 \pm 0.13) \, ,
\end{align}
which is perfectly compatible with the astrophysical measurement from \textsc{Planck}\ for \emph{hlp} TXE and within $1\sigma$ for \emph{hlp} T.
\subsubsection*{SZ}
\textsc{Planck}\ data are only mildly sensitive to SZ components.
In particular, we have no constraint at all on the amplitude of the kSZ effect ($A_{\rm kSZ}$) and the correlation coefficient between SZ and CIB ($A_{\rm tSZxCIB}$).
When using astrophysical foreground information, the external prior on $A_{\rm tSZ}$ drives the final posterior:
\begin{align}
A_{\rm tSZ} = 1.00 \pm 0.20 \quad (0.94 \pm 0.19) \, .
\end{align}
\subsubsection*{Point Sources}
\label{sec:res:ps}
\begin{table*}[!ht]
\center
\begin{tabular}{lccc|cc}
\hline
\hline
& Radio & IR & Total & \emph{hlp} T & \emph{hlp} TXE \\
\hline
100$\times$100 & $7.8 \pm 1.6$ & ~~$0.2 \pm 0.0$ & ~~$7.9 \pm 1.6$ & $15.5 \pm 1.4$ & $15.8 \pm 0.9$ \\
100$\times$143 & $5.4 \pm 1.1$ & ~~$0.5 \pm 0.1$ & ~~$5.8 \pm 1.1$ & $10.4 \pm 1.5$ & $10.5 \pm 1.0$ \\
100$\times$217 & $4.3 \pm 0.9$ & ~~$1.9 \pm 0.4$ & ~~$6.2 \pm 1.0$ & $10.1 \pm 1.7$ & $10.0 \pm 1.4$ \\
143$\times$143 & $4.8 \pm 1.0$ & ~~$1.2 \pm 0.2$ & ~~$6.1 \pm 1.0$ & ~~$6.3 \pm 1.7$ & ~~$5.9 \pm 1.2$ \\
143$\times$217 & $3.6 \pm 0.8$ & ~~$5.1 \pm 1.0$ & ~~$8.7 \pm 1.3$ & ~~$6.2 \pm 1.8$ & ~~$5.3 \pm 1.5$ \\
217$\times$217 & $3.2 \pm 0.8$ & $21.0 \pm 3.8$ & $24.2 \pm 3.8$ & $16.7 \pm 2.2$ & $15.0 \pm 2.1$ \\
\hline
\end{tabular}
\caption{Poisson amplitudes for radio galaxies \citep[model from][]{tucci:2011} and dusty galaxies \citep[model from][]{bethermin:2012} compared to {HiLLiPOP}\ results. Units: Jy$^2\,$sr$^{-1}$ ($\nu I_\nu = \mathrm{const.}$).}
\label{tab:shot_noise}
\end{table*}
We find more power in the \textsc{Planck}\ power spectra for radio sources than expected, and slightly less for IR sources:
\begin{eqs}
A_{\rm PS}^{\rm radio} &=& 1.61 \pm 0.09 \quad (1.62 \pm 0.09) \\
A_{\rm PS}^{\rm IR} &=& 0.78 \pm 0.07 \quad (0.71 \pm 0.07) \, ,
\end{eqs}
with no impact on cosmology (see Sect.~\ref{sec:systematics}). We have identified that the tension comes essentially from the 100\ifmmode $\,GHz$\else \,GHz\fi\ map, which dominates the constraints on the radio source amplitude. Table~\ref{tab:shot_noise} shows the results when we fit one amplitude for each cross-spectrum, compared to the model expectations. The posterior distributions for the point-source amplitudes are plotted in Fig.~\ref{fig:ps_distrib}. We find relatively good agreement between the predictions from source counts and the {HiLLiPOP}\ results, with the exception of 100$\times$100, where the measurement differs from the prediction by up to 4$\sigma$.
This is coherent with the results from the \textsc{Planck}\ collaboration \citep[discussed in Sect.~4.3 of][]{planck2014-a13}. It could be a sign of residual systematics in the data, but we recall that accurate point-source modelling is very hard to obtain for a large sky coverage with inhomogeneous noise such as that of \textsc{Planck}. This is particularly important for the estimation of the radio source amplitudes, which are sensitive to both the catalogue completeness and the flux-cut estimation.
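The level of the quoted tension can be cross-checked directly from the numbers in Table~\ref{tab:shot_noise}, combining the fit and model-count uncertainties in quadrature (a sketch; the helper name is ours):

```python
import math

def tension(measured, sigma_m, predicted, sigma_p):
    """Measurement-vs-prediction difference in units of the
    quadrature-combined uncertainty."""
    return abs(measured - predicted) / math.hypot(sigma_m, sigma_p)

# 100x100 Poisson amplitude (Jy^2/sr): counts model gives 7.9 +/- 1.6
print(round(tension(15.5, 1.4, 7.9, 1.6), 1))  # hlp T:   3.6 sigma
print(round(tension(15.8, 0.9, 7.9, 1.6), 1))  # hlp TXE: 4.3 sigma
```

The same helper applied to the other rows of the table confirms that only the 100$\times$100 cross-spectrum shows a sizeable discrepancy.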
\begin{figure}[!ht]
\includegraphics[draft=false,width=\columnwidth]{ps_TXE_TT_tauprior}
\caption{Posterior distributions for the six point-source amplitudes for \emph{hlp} TXE ({\it black line}) and \emph{hlp} T ({\it red line}) compared to the model prediction ({\it dashed line}). Units: Jy$^2\,$sr$^{-1}$ ($\nu I_\nu = \mathrm{const.}$).}
\label{fig:ps_distrib}
\end{figure}
\section{$A_{\rm L}$ as a robustness test}
\label{sec:lcdm+}
As discussed in \citet{couchot:2015}, the measurement of the lensing effect in the angular power spectra of the CMB anisotropies provides a good internal consistency check for high-$\ell$ likelihoods. The \textsc{Planck}\ public likelihood shows an $A_{\rm L}$ different from one by up to 2.6$\sigma$.
\begin{figure}[!ht]
\center
\includegraphics[width=0.8\columnwidth]{alens_hlpTXE}
\caption{Posterior distribution for the $A_{\rm L}$ parameter for the temperature likelihood \emph{hlp} T ({\it red line}) and the temperature-polarization likelihood \emph{hlp} X ({\it green line}).}
\label{fig:prof_alens}
\end{figure}
With {HiLLiPOP}\ and the $\tau$-prior, the best fits for $A_{\rm L}$ (Fig.~\ref{fig:prof_alens}) are
\begin{eqs}
A_{\rm L} &=& 1.12 \pm 0.09 \quad \text{(\emph{hlp} T + $\tau$ prior)} \\
A_{\rm L} &=& 1.07 \pm 0.21 \quad \text{(\emph{hlp} X + $\tau$ prior)} \, ,
\end{eqs}
compatible with the standard expectation.
While the relative variation of the theoretical power spectra with $A_{\rm L}$ is larger for $TE$ than for $TT$, we find a weaker constraint from $TE$. This illustrates the fact that the noise level in the \textsc{Planck}\ $TE$ power spectrum prevents it from capturing the information from the lensing of the CMB $TE$ at high multipoles.
In \citet{couchot:2015}, we have shown that the \textsc{Planck}\ tension on $A_{\rm L}$ is directly related to the constraint on $\tau$. Indeed, the $\tau$ constraints from the {HiLLiPOP}\ likelihoods (Fig.~\ref{fig:prof_tau}) are less in tension with the \textsc{Planck}\ low-$\ell$ likelihoods.
The {HiLLiPOP}\ only likelihoods give
\begin{eqs}
\tau &=& 0.122 \pm 0.036 \quad \text{(\emph{hlp} T)}\\
\tau &=& 0.103 \pm 0.081 \quad \text{(\emph{hlp} X)} \, ,
\end{eqs}
which is, for \emph{hlp} T, at 1.7$\sigma$ from the HFI low-$\ell$ analysis $\tau = 0.058 \pm 0.012$ \citep{planck2014-a25}. The difference with the $\tau$ estimation derived in \citet{couchot:2015} comes directly from the additional constraints in the point source sector. For \emph{hlp} X, the $\tau$ distribution is compatible with the \textsc{Planck}\ low-$\ell$ constraint, but the constraint is weaker.
\begin{figure}[!ht]
\center
\includegraphics[width=0.8\columnwidth]{tau_hlpTXE}
\caption{Posterior distribution for the reionization optical depth $\tau$ for \emph{hlp} T ({\it red line}) and \emph{hlp} X ({\it green line}) compared to the prior from \citet{planck2014-a25} used throughout this analysis ({\it dashed dark blue line}).}
\label{fig:prof_tau}
\end{figure}
We note that when adding the information from the measurement of the power spectrum of the lensing potential \citep[using the \textsc{Planck}\ lensing likelihood described in ][]{planck2014-a17} the constraints on $\tau$ from \emph{hlp} T and \emph{hlp} X become comparable
\begin{eqs}
\tau &=& 0.077 \pm 0.028 \quad \text{(\emph{hlp} T + lensing)}\\
\tau &=& 0.056 \pm 0.027 \quad \text{(\emph{hlp} X + lensing)},
\end{eqs}
and compatible with low-$\ell$-only results from both \textsc{Planck}-HFI \citep[$\tau=0.058 \pm 0.012$,][]{planck2014-a25} and \textsc{Planck}-LFI \citep[$\tau = 0.067 \pm 0.023$,][]{planck2014-a13}.
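These compatibility levels are simple Gaussian comparisons of two independent estimates; for instance, the 1.7$\sigma$ difference quoted above for \emph{hlp} T, and its reduction once the lensing likelihood is added, can be reproduced as follows (a sketch; the helper name is ours):

```python
import math

def n_sigma(x1, s1, x2, s2):
    # Gaussian compatibility of two independent measurements
    return abs(x1 - x2) / math.hypot(s1, s2)

tau_lowl = (0.058, 0.012)   # Planck-HFI low-ell constraint on tau
print(round(n_sigma(0.122, 0.036, *tau_lowl), 1))  # hlp T alone:     1.7
print(round(n_sigma(0.077, 0.028, *tau_lowl), 1))  # hlp T + lensing: 0.6
```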
\section{Foreground robustness: TT vs. TE}
\label{sec:systematics}
In this section, we investigate the impact of foregrounds on the recovery of the {$\rm{\Lambda CDM}$}\ cosmological parameters.
We focus on the results from \emph{hlp} T and \emph{hlp} X.
First, we show in Fig.~\ref{fig:cosmo_nofg} the posteriors for the parameters with and without the external foreground priors. These results show that the priors have no impact on the final results, and suggest a low level of correlation between the foreground and cosmological parameters in the likelihood. Indeed, the statistics reconstructed from the MCMC samples (Fig.~\ref{fig:param_corr}) exhibit less than 15\% correlation between the two sets of parameters.
In the case of temperature, we see strong correlations between the instrumental parameters on the one hand, and between the astrophysical parameters on the other hand. This is not the case for \emph{hlp} X, which, apart from the cosmological sector, exhibits less than 10\% correlation.
\begin{figure}[!hb]
\includegraphics[draft=false,width=1.025\columnwidth]{cosmo_nofg}
\caption{Posterior distributions for the six {$\rm{\Lambda CDM}$}\ parameters with ({\it solid lines}) and without ({\it dashed lines}) astrophysical foregrounds priors in the case of \emph{hlp} T ({\it red}) and \emph{hlp} X ({\it green}).}
\label{fig:cosmo_nofg}
\end{figure}
\begin{figure}[!ht]
\center
\includegraphics[draft=false,width=\columnwidth]{hlpT_lcdm_correlation}
\includegraphics[draft=false,width=\columnwidth]{hlpX_lcdm_correlation}
\caption{Correlation matrix of the likelihood parameters including {$\rm{\Lambda CDM}$}\ and nuisance parameters for \emph{hlp} T ({\it top}) and \emph{hlp} X ({\it bottom}). The colour scale is saturated at 50\%.}
\label{fig:param_corr}
\end{figure}
In a second step, we have estimated the contribution of the foreground parameters to the error budget of the cosmological parameters (Table~\ref{tab:errors}). This analysis assesses the degree to which our uncertainties on the nuisance parameters impact the cosmological error budget. A parameter estimation is performed to assess the full error for each parameter. Then another parameter estimation is performed with the foreground parameters fixed to their best-fit values. The confidence intervals recovered in this last case give the \emph{statistical} uncertainties which are essentially driven by noise and cosmic variance (and which correspond to the errors on parameters if we knew the nuisance parameters perfectly).
Finally the \emph{foreground} error is deduced by quadratically subtracting the statistical uncertainty from the total error following what was done in \citet{planck2014-a13}.\\
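The quadratic subtraction can be sketched as follows, using the \emph{hlp} T entries of Table~\ref{tab:errors} for $\Omega_\mathrm{b}h^2$ and $n_\mathrm{s}$ (the function name is ours):

```python
import math

def foreground_error(sigma_full, sigma_stat):
    """Quadratic subtraction: sigma_fg^2 = sigma_full^2 - sigma_stat^2
    (clipped at zero when the two errors are equal within rounding)."""
    return math.sqrt(max(sigma_full ** 2 - sigma_stat ** 2, 0.0))

# hlp T: omega_b h^2 and n_s, full vs. statistical errors from the table
print(foreground_error(0.00020, 0.00018))  # ~9e-5, the tabulated 0.00009
print(foreground_error(0.0058, 0.0052))    # ~0.0026, vs. tabulated 0.0025
```

The small differences with the tabulated values come from the rounding of the table entries.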
In the temperature case, we see a strong impact of the nuisance parameters on the errors on $\Omega_b h^2$ and $n_{s}$.
The posterior width of the reionization optical depth $\tau$ is strongly dominated by the prior, so it is only marginally affected by foreground uncertainties.
Finally, even though the statistical uncertainty is larger in the case of $TE$, foreground uncertainties are negligible in the total error budget, which makes the $TE$ constraints competitive with those from $TT$ (except for $n_{\rm s}$).
\begin{table}[!ht]
\begin{center}
\begin{tabular}{p{1.2cm}p{1.2cm}p{1.2cm}p{1.2cm}p{0.8cm}p{0.4cm}}
\hline
\hline
Parameter & Estimate & \multicolumn{3}{c}{Error} & \\
& & Full & Statistical & Foreground & \\
\hline
\multicolumn{5}{c}{\emph{hlp} T parameters} \\
\hline
$\Omega_\mathrm{b}h^2$ & $0.02212$& $0.00020$ & $0.00018$ & $0.00009$& ($27\%$) \\
$\Omega_\mathrm{c}h^2$ & $0.1210$& $0.0021$ & $0.0021$ & $0.0003$& ($ 3\%$) \\
$100\theta_\mathrm{s}$ & $1.04164$& $0.00043$ & $0.00044$ & $0.00000$& ($ 0\%$) \\
$\tau$ & $0.062$& $0.011$ & $0.011$ & $0.002$& ($ 5\%$) \\
$\mathrm{n}_\mathrm{s}$ & $0.9649$& $0.0058$ & $0.0052$ & $0.0025$& ($24\%$) \\
$\ln(10^{10}A_\mathrm{s})$ & $3.058$& $0.022$ & $0.022$ & $0.003$& ($ 2\%$) \\
\hline
\multicolumn{5}{c}{\emph{hlp} X parameters} \\
\hline
$\Omega_\mathrm{b}h^2$ & $0.02209$& $0.00024$ & $0.00024$ & $0.00004$& ($ 3\%$) \\
$\Omega_\mathrm{c}h^2$ & $0.1204$& $0.0020$ & $0.0020$ & $0.0005$& ($ 6\%$) \\
$100\theta_\mathrm{s}$ & $1.04184$& $0.00047$ & $0.00047$ & $0.00003$& ($ 0\%$) \\
$\tau$ & $0.058$& $0.012$ & $0.012$ & $0.000$& ($ 0\%$) \\
$\mathrm{n}_\mathrm{s}$ & $0.9630$& $0.0111$ & $0.0107$ & $0.0026$& ($ 6\%$) \\
$\ln(10^{10}A_\mathrm{s})$ & $3.046$& $0.026$ & $0.027$ & $0.000$& ($ 0\%$) \\
\hline
\end{tabular}
\caption{Errors on cosmological parameters within the {$\rm{\Lambda CDM}$}\ model for \emph{hlp} T and \emph{hlp} X. The full error is split between statistical and foreground errors. Errors are given at 68\,\% confidence level.}
\label{tab:errors}
\end{center}
\end{table}
Beyond increasing the error budget, nuisance uncertainties can also bias the cosmological parameters.
Figure~\ref{fig:fgbias} shows the results on the {$\rm{\Lambda CDM}$}\ parameters for \emph{hlp} T and \emph{hlp} X when the nuisance parameters are fixed either to their best-fit values or to the values expected from the astrophysical constraints (i.e. scaling parameters fixed to 1). This corresponds to the extreme case for the potential bias, in which we assume exact knowledge of the complex spatial distribution of the foregrounds and of their spectra. The aim here is to give an idea of the impact of foreground uncertainties on the cosmological parameters.
Once again, we see a stronger impact on \emph{hlp} T than on \emph{hlp} X. In temperature, almost all parameters are shifted when changing the nuisance values, the strongest effect being for $\Omega_b h^2$, $\Omega_c h^2$, and $n_s$.
By contrast, we see no impact of the shift in the $A_{\rm dust}^{TE}$ parameter, even though its best-fit value is $0.86$ rather than $1$.
\begin{figure}[!ht]
\includegraphics[width=\columnwidth]{cosmo_fg_bias}
\caption{Posterior distribution for the cosmological parameters when foregrounds are fixed either to their best-fit value ({\it solid lines}) or to the expected astrophysical value ({\it dashed lines}) for \emph{hlp} T ({\it red}) and \emph{hlp} X ({\it green}).}
\label{fig:fgbias}
\end{figure}
\section{Discussion}
With the currently available CMB measurements, the sensitivity to {$\rm{\Lambda CDM}$}\ cosmological parameters is dominated by the \textsc{Planck}\ data in the $\ell$-range typically below $\ell=2000$ both in $TT$ and $TE$.
For $TE$, when adding higher multipoles from the measurements of the South Pole Telescope~\citep{crites:2015} or the Atacama Cosmology Telescope~\citep{naess:2014}, we find almost identical results for the {$\rm{\Lambda CDM}$}\ cosmological parameters, without any reduction of the parameter uncertainties.
This is different from the temperature data for which high-resolution experiments help to reduce the uncertainty on foreground parameters, which indirectly reduces the posterior width for cosmological parameters through their correlation~\citep{couchot:2015}.
On the low-$\ell$ side, measurements of $TE$ at $\ell < 20$ give information on the reionization optical depth $\tau$ (although not equivalent to the low-$\ell$ $EE$ data) and a longer lever arm for $n_s$.
We checked the results of the temperature-polarization cross-correlation likelihood for some basic extensions of the {$\rm{\Lambda CDM}$}\ model: essentially $A_{\rm L}$, $N_{\rm eff}$, and $\sum m_\nu$. Given the \textsc{Planck}\ sensitivity, we do not find any competitive constraints compared to the temperature likelihood.
For example, we find an effective number of relativistic species $N_{\rm eff} = 2.45 \pm 0.45$ for \emph{hlp} X compared to $N_{\rm eff} = 2.95 \pm 0.32$ for \emph{hlp} T. Adding data from high-resolution experiments, we find $N_{\rm eff} = 2.84 \pm 0.43$, which does not help reduce the error to the level of temperature data.
We also combined the CMB $TE$ data with complementary information on the late-time evolution of the Universe geometry, coming from the baryon acoustic oscillation scale evolution~\citep{alam:2016} and the SNIa magnitude-redshift measurements~\citep{betoule:2014}. We find fully compatible results, with a significantly better accuracy only for $\Omega_{\rm c} h^2$.
\section{Conclusion}
Building a coherent likelihood for CMB data given \textsc{Planck}\ sensitivity is difficult owing to the complexity of the foreground emissions modelling. In this paper, we have presented a full temperature and polarization likelihood based on cross-spectra (including $TT$, $TE$, and $EE$) over a wide range of multipoles (from $\ell=50$ to 2500).
We have described in detail the foreground parametrization which relies on the \textsc{Planck}\ measurements for astrophysical modelling.
We found results on the {$\rm{\Lambda CDM}$}\ cosmological parameters consistent between the different likelihoods (\emph{hlp} T, \emph{hlp} X, \emph{hlp} E).
The cosmological constraints from this work are directly comparable to the \textsc{Planck}\ 2015 cosmological analysis \citep{planck2014-a15} despite the differences in the foreground modelling adopted in {HiLLiPOP}.
Both instrumental and astrophysical nuisance parameters are compatible with expectations, with the exception of the point-source amplitudes in temperature, for which we found a small tension with the astrophysical expectations. This tension may be the sign of residual systematics in the \textsc{Planck}\ data and/or of uncertainties in the temperature foreground model (especially in the dust SED or in the $\ell$-dependence of the foreground templates).
We investigated the robustness of the results with respect to the foreground and nuisance parameters. In particular, we demonstrated the impact of foreground uncertainties on the temperature power spectrum likelihood. We compared these data to the results from the likelihood based on temperature-polarization cross-correlation which involves fewer foreground components, but is statistically less sensitive.
We found that foreground uncertainties have a stronger impact on $TT$ than on $TE$, with comparable final errors (except for $n_s$). Moreover, the \emph{hlp} X likelihood includes fewer nuisance parameters (only seven, compared to 13 for \emph{hlp} T) and shows less correlation in the nuisance/foreground sector, which, in practice, allows much faster sampling.
This work illustrates the fact that $TE$ spectra provide estimates of the cosmological parameters that are as accurate as those from $TT$, while being more robust with respect to foreground contamination. The \textsc{Planck}\ results in polarization are still limited by the instrumental noise in $TE$ but, as suggested in \citet{galli:2014}, future experiments limited only by cosmic variance over a wider range of multipoles will be able to constrain cosmology with $TE$ even better than with $TT$.
\begin{acknowledgement}
The authors thank G. Lagache for the work on the point source mask and the estimation of the infrared sources amplitude, M. Tucci for the estimation of the radio point source amplitude, and P. Serra for the work on the CIBxtSZ power spectra model.
\end{acknowledgement}
\bibliographystyle{aat}
\section{Introduction}
The permitted O\emissiontype{I} lines at 1304, 8446, and 11287 \AA\ are common features in
AGN (active galactic nuclei) and
have been used to study the physical properties of BLR (broad-line
region) clouds (e.g., \cite{Grandi80}; \cite{Kwan81};
\cite{Rudy89}; \cite{Laor}).
As shown in the partial Grotrian diagram of O\emissiontype{I} atom in figure \ref{fig:energydiagram},
the transition from the ground state
$2p\ ^3P$ to the excited state $3d\ ^3D^0$ has an excitation energy whose wavelength is
$\lambda$ = 1025.77 \AA, falling within the Doppler core of Ly$\beta$ (1025.72 \AA)
for gas at 10$^4$ K. The excited state $3d\ ^3D^0$ decays either back to the ground state,
or by the emission of a $\lambda$11287 photon, to the intermediate
excited state $3p\ ^3P$.
The latter decays by a $\lambda$8446 photon to the lower excited state
$3s\ ^3S^0$,
which finally decays to the ground state by a $\lambda$1304 photon. If Ly$\beta$
pumping is the dominant process of O\emissiontype{I} line formation, the photon flux ratio of these three lines should be 1:1:1, and this ratio, especially that between O\emissiontype{I} $\lambda$1304 and $\lambda$8446, can be used as a reddening indicator (\cite{Rudy89}; \cite{Laor};
Rodr{\usefont{T1}{ppl}{m}{n}\'{i}}guez-Ardila et al. 2002b).
It can also be expected that, if the O\emissiontype{I} line emission is dominated by Ly$\beta$ pumping, the O\emissiontype{I} line intensities could be used to measure the microturbulence parameter in BLR clouds, because Ly$\beta$ pumping would become more efficient in a larger microturbulent velocity field.
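The wavelength coincidence underlying Ly$\beta$ pumping can be checked numerically: at $10^4$ K, the 0.05 \AA\ offset between the O\emissiontype{I} $\lambda$1025.77 transition and Ly$\beta$ corresponds to about one thermal Doppler width of hydrogen. A short sketch (standard constants; the Doppler parameter is taken as $b=\sqrt{2kT/m_{\rm H}}$):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
M_H = 1.6735575e-27   # hydrogen atom mass, kg
C_KMS = 299792.458    # speed of light, km/s

T = 1.0e4                                   # gas temperature, K
b = math.sqrt(2 * K_B * T / M_H) / 1e3      # Doppler parameter, km/s

lam_lyb, lam_oi = 1025.72, 1025.77          # wavelengths, Angstrom
dv = (lam_oi - lam_lyb) / lam_lyb * C_KMS   # velocity offset, km/s

print(f"b = {b:.1f} km/s, offset = {dv:.1f} km/s ({dv/b:.1f} b)")
# b ~ 12.8 km/s and offset ~ 14.6 km/s: about one Doppler width,
# i.e. inside the Doppler core of Ly-beta
```

Any microturbulence broadens the line further, which is why the pumping efficiency grows with the microturbulent velocity.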
Several mechanisms that could modify the 1:1:1 photon flux ratio
have been suggested, such as the collisional excitation of
O\emissiontype{I} $\lambda$8446 and the Balmer continuum absorption of
O\emissiontype{I} $\lambda$1304
(\cite{Grandi83}).
Rodr{\usefont{T1}{ppl}{m}{n}\'{i}}guez-Ardila et al. (2002b) found that in six of
their seven low-luminosity AGNs, Ly$\beta$ pumping is not the only
mechanism responsible for the O\emissiontype{I} line emission, and the
1:1:1 photon flux ratio is altered. Their sample consisted of six NLS1s (narrow line
Seyfert 1s) and one Seyfert 1.
Before Rodr{\usefont{T1}{ppl}{m}{n}\'{i}}guez-Ardila et al. (2002b), I Zw 1, classified as
an NLS1, was the only object in which the three O\emissiontype{I} lines
had been studied (Rudy et al. 1989, 2000;
\cite{Laor}). It is thus not clear whether the
properties of the O\emissiontype{I} line emission found in
low-luminosity AGNs can be
applied to quasars.
The O\emissiontype{I} line emission in quasars would differ from that of NLS1s, because
it is generally considered that the gas density in BLR clouds is
higher in NLS1s than in
quasars, and thus the collisional processes are more dominant in
NLS1s
(Baldwin et al. 1988, 1996; \cite{Laor}; \cite{Wilkes};
\cite{Kuras}).
Investigating the O\emissiontype{I} line formation mechanisms would also provide knowledge
about the physical properties of the Fe\emissiontype{II} emitting
region, since the Fe\emissiontype{II} and
O\emissiontype{I} lines are considered to be emitted in the same
portion of BLR clouds
(Rodr{\usefont{T1}{ppl}{m}{n}\'{i}}guez-Ardila et al. 2002a).
Since Fe\emissiontype{II} emission is the strongest coolant in BLR clouds, it would give an
important clue to better understand the radiative mechanisms in BLR gas.
In order to study the O\emissiontype{I} line formation mechanism in quasars,
we have started a program of
taking NIR (near-infrared) spectra of quasars.
Combining the NIR spectra with the HST/FOS ultraviolet spectra,
we intend to analyze the three O\emissiontype{I}
lines, i.e., O\emissiontype{I} $\lambda$1304,
O\emissiontype{I} $\lambda$8446, and O\emissiontype{I}
$\lambda$11287. We present the ratios
of the three O\emissiontype{I} lines of the quasar PG 1116+215 at $z=0.176$.
This is the first quasar in which the three O\emissiontype{I}
lines have been analyzed.
\section{Observation and Reduction}
\subsection{Near-Infrared Spectroscopy}
The NIR spectrum of PG 1116+215, covering 0.9 -- 1.8$\micron$, including the
redshifted O\emissiontype{I} $\lambda$8446 and
O\emissiontype{I} $\lambda$11287 lines,
was obtained using
the longslit of FLAMINGOS (Florida Multi-object Imaging Near-IR Grism
Observational Spectrometer; \cite{Elston}) on the KPNO (Kitt Peak National
Observatory) 2.1m telescope on 2005 February 28.
The detector is the Hawaii II 2048$\times$2048 HgCdTe science-grade array,
divided into four quadrants with 8 amplifiers each.
A 4-pixel slit with a scale of \timeform{0.606''} pixel$^{-1}$ was placed on the object
in the north-south direction under a seeing of \timeform{1''}.
The first order of the {\it JH} grism was used, which gave a spectral
resolution of 430 km s$^{-1}$.
The object was shifted along the slit by \timeform{20''} between exposures.
The total integration time was 2400s, which consisted of eight 300s exposures.
{\it J}-band photometry of the object was also performed for photometric
calibration.
Data reduction was performed using IRAF (Image Reduction
and Analysis Facility).\footnote{IRAF is distributed
by the National Optical Astronomy Observatories,
which are operated by the Association of Universities for Research
in Astronomy, Inc., under cooperative agreement with the National
Science Foundation, USA.}
OH airglow lines were used to calibrate the wavelength scale.
The flux calibration was twofold: (1) the A-type star SAO 81808 was observed to
calibrate the relative response within the {\it J}-band, and (2) the standard star
AS 20-0, having {\it J} = 9.55 $\pm$ 0.01 mag (\cite{Hunt}), was used to
determine the {\it J}-band
magnitude. A 9500 K blackbody was fitted to the spectrum of SAO 81808 to obtain
the sensitivity curve within the {\it J}-band. Both standard stars were observed
at an airmass of 1.1, which is similar to that of the object.
The total uncertainty in flux calibration is 5\%.
The final spectrum was re-binned into a 10 \AA\ step, corresponding to a spectral
resolution of 300 km s$^{-1}$.
\subsection{Ultraviolet Spectroscopy}
The HST (Hubble Space Telescope) ultraviolet spectrum of PG 1116+215 was observed on
1993 February 19 and 20 by using the FOS (Faint Object Spectrograph) with
three gratings (G130H, G190H, and G270H), which cover 1200 -- 3200 \AA\ in the observed
frame. The integration times were 8724s for G130H, 1878s for G190H, and 751s
for G270H.
The HST spectrum used in this work was obtained from \citet{Evans}.
They recalibrated the raw archival spectra using the latest algorithms and
calibration data.
Spectral data contaminated by intermittent noisy
diodes and cosmic-ray events were identified manually and eliminated.
They combined multiple observations of the same source, if available, in such a way
that the resultant spectrum would have the highest possible signal-to-noise ratio.
We removed prominent geocoronal and galactic absorption lines from
the spectrum of PG 1116+215 by interpolating from both sides of the feature.
The spectrum of PG 1116+215 was re-binned into a 1 \AA\ step, corresponding to a spectral
resolution of 300 km s$^{-1}$. The total uncertainty in the flux
calibration was less than 5\%.
\subsection{Galactic and Internal Extinction}
In order to correct the galactic reddening, we adopted
$E_{B-V}$ = 0.095 $\pm$ 0.015 mag from the extinction map of the Milky Way based on the
far-infrared emission observed by IRAS and COBE/DIRBE (\cite{Schlegel}).
The resolution of their extinction map is \timeform{6.1'}.
De-reddening of our spectrum was performed by using a galactic extinction curve
presented by \citet{Pei}.
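The de-reddening step amounts to multiplying the observed flux by $10^{0.4A_\lambda}$, with $A_\lambda = E_{B-V}\,k(\lambda)$ read off the adopted extinction curve. A minimal sketch (the numerical $k$ value below is a generic $R_V = 3.1$ placeholder for illustration, not the \citet{Pei} curve itself):

```python
def deredden(flux_obs, ebv, k_lambda):
    """Correct an observed flux for extinction:
    F_corr = F_obs * 10^(0.4 * A_lambda), A_lambda = E(B-V) * k(lambda),
    with k(lambda) taken from the adopted extinction curve."""
    return flux_obs * 10 ** (0.4 * ebv * k_lambda)

# Illustration with E(B-V) = 0.095 and a V-band-like k ~ 3.1 (= R_V):
print(round(deredden(1.0, 0.095, 3.1), 2))  # 1.31 -> ~31% upward correction
```

In practice $k(\lambda)$ is evaluated at each wavelength of the spectrum, so the correction is wavelength dependent.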
We also examined the effect of intrinsic extinction in PG 1116+215.
\citet{Popovic} analyzed the HST/FOS spectrum of
PG 1116+215, and found a flux ratio of H$\alpha$/H$\beta$ = 2.94 $\pm$
0.74.
When corrected for galactic reddening, as described above, the ratio
became 2.70.
\citet{Dong} showed that the flux ratio of the broad
components of H$\alpha$ to H$\beta$ is
H$\alpha$/H$\beta$ = 2.97 $\pm$ 0.36 for their 94 blue AGN samples.
These AGN samples are considered to be free of intrinsic extinction.
Judging from the H$\alpha$/H$\beta$ ratio of PG 1116+215, which is similar to those of these blue AGNs,
the intrinsic extinction in PG 1116+215 appears to be negligible.
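This reasoning can be made quantitative. Using approximate Galactic extinction-curve values at H$\alpha$ and H$\beta$ (CCM-like numbers for $R_V = 3.1$, standing in for the \citet{Pei} curve actually used), the galactic correction of the observed ratio and the implied intrinsic reddening are:

```python
import math

# Approximate A_lambda / E(B-V) at H-alpha and H-beta (R_V = 3.1 curve)
K_HA, K_HB = 2.53, 3.61

def deredden_ratio(r_obs, ebv):
    """Apply a reddening correction E(B-V) to an observed Ha/Hb ratio."""
    return r_obs * 10 ** (0.4 * ebv * (K_HA - K_HB))

r_corr = deredden_ratio(2.94, 0.095)
print(f"corrected Ha/Hb = {r_corr:.2f}")   # ~2.68, close to the quoted 2.70

# Intrinsic E(B-V) implied relative to the blue-AGN mean of 2.97:
ebv_int = 2.5 / (K_HB - K_HA) * math.log10(r_corr / 2.97)
print(f"implied intrinsic E(B-V) = {ebv_int:+.3f}")  # negative, i.e. zero
```

The formally negative value simply means the corrected decrement is already below the blue-AGN mean, consistent with no intrinsic extinction.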
\subsection{Variability}
It is widely known that Seyferts and quasars are highly variable.
Since the observation dates of the UV and NIR spectra are separated by
more than
12 yrs, a possible variability effect should be carefully examined before comparing
the O\emissiontype{I} lines in the UV and NIR wavelengths.
{\it J} = 13.59 $\pm$ 0.03 mag is found in the 2MASS
(Two Micron All Sky Survey) database,\footnote{This
publication makes use of data products from the Two Micron All Sky
Survey, which is a joint project of the University of Massachusetts and
the Infrared Processing and Analysis Center/California Institute of
Technology, funded by the National Aeronautics and Space Administration
and the National Science Foundation, USA.}
while {\it J} = 12.96 $\pm$ 0.03 mag was obtained for our observation.
It is thus clear that PG 1116+215 varied its brightness by about 0.6 mag
since the 2MASS observation on 1998 February 2.
However, because the O\emissiontype{I} lines are formed in the outermost portion of the BLR,
it is expected that
their variability is sufficiently small to be ignored, even if the power-law continuum,
which is supposed to be direct light from the central source, shows significant
variability.
In fact, a monitoring observation of the well-known variable Seyfert
NGC 5548 revealed that when the continuum varied in brightness by about a factor of three,
times, the Mg\emissiontype{II} line, which is emitted in the same region
as the O\emissiontype{I} lines, varied by less than 6\% (\cite{Dietrich}).
These arguments suggest that the variability has little effect on
the O\emissiontype{I} line fluxes of our data.
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure1.eps}
\end{center}
\caption{Partial Grotrian diagram of O\emissiontype{I}. The solid lines are used for
permitted transitions, while the dashed lines are for semi-forbidden
transitions.}
\label{fig:energydiagram}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure2.eps}
\end{center}
\caption{UV spectrum taken with the HST/FOS shown on the top
panel and the NIR spectrum on the bottom panel in the observed
frame. The positions of the three O\emissiontype{I} lines,
Ca\emissiontype{II} lines, and Si\emissiontype{II} $\lambda$1264 line
are indicated by vertical lines.}\label{fig:spectra}
\end{figure}
\section{Results and Analysis}
Figure \ref{fig:spectra} shows the UV spectrum of PG 1116+215 on the
top panel and the NIR spectrum on the bottom. The three O\emissiontype{I} lines are clearly detected.
Note that the signal-to-noise
ratio of the NIR spectrum in the wavelength range shorter than 1$\micron$
is low because of aberration at the edge of the detector.
The O\emissiontype{I} $\lambda$11287 line flux, which is free from blending with other lines,
was measured by integrating all counts
above the estimated continuum level in the wavelength range between
1.319$\micron$ and 1.338$\micron$ in the observed frame.
The line width was measured by fitting a single
Gaussian to the observed line.
The FWHM of O\emissiontype{I} $\lambda$11287
is 2000 $\pm$ 200 km s$^{-1}$,
significantly narrower than that of Ly$\alpha$ ($\sim$ 5500 km s$^{-1}$).
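As an illustration of the width measurement, the sketch below recovers the FWHM in velocity units from the half-maximum crossings of a synthetic Gaussian line placed at the observed wavelength of O\emissiontype{I} $\lambda$11287 ($z=0.176$); the actual measurement used a single-Gaussian fit, so this is only a simplified stand-in:

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def fwhm_kms(lam, flux):
    """FWHM in km/s from the half-maximum crossings of a
    continuum-subtracted line profile (linear interpolation)."""
    half = flux.max() / 2.0
    above = np.where(flux >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the blue- and red-side crossings
    lam_lo = np.interp(half, [flux[i0 - 1], flux[i0]], [lam[i0 - 1], lam[i0]])
    lam_hi = np.interp(half, [flux[i1 + 1], flux[i1]], [lam[i1 + 1], lam[i1]])
    lam0 = lam[np.argmax(flux)]
    return (lam_hi - lam_lo) / lam0 * C_KMS

# Synthetic Gaussian line at the observed O I 11287 wavelength (z = 0.176),
# with the 2000 km/s FWHM measured for PG 1116+215
lam = np.linspace(13100.0, 13450.0, 701)   # Angstrom
lam0 = 13273.0
sigma = 2000.0 / C_KMS * lam0 / 2.3548     # FWHM -> Gaussian sigma
flux = np.exp(-0.5 * ((lam - lam0) / sigma) ** 2)
print(round(fwhm_kms(lam, flux)))          # recovers ~2000
```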
Rodr{\usefont{T1}{ppl}{m}{n}\'{i}}guez-Ardila et al. (2002a) showed that
the line widths and profiles of Fe\emissiontype{II},
O\emissiontype{I}, and Ca\emissiontype{II}
in their NLS1 samples
are very similar, while they are significantly narrower than that of Pa$\beta$.
Their results are consistent with the Fe\emissiontype{II},
O\emissiontype{I}, and Ca\emissiontype{II}
lines occurring in the partly ionized
regions,\footnote{Note that O\emissiontype{I}, Fe\emissiontype{II}, and
Ca\emissiontype{II} emitting regions should be
heavily overlapped because of the similar ionization potentials,
i.e., 13.6eV for O\emissiontype{I}, 16.2eV for
Fe\emissiontype{II}, and 11.9eV for Ca\emissiontype{II}.}
which are farther from the central ionizing source and have smaller
velocities. Our narrow O\emissiontype{I} $\lambda$11287 line width relative to Ly$\alpha$
implies that this geometrical configuration found in the NLS1s also applies
to quasars.
Although O\emissiontype{I} $\lambda$8446 is blended with the Ca\emissiontype{II} lines in
the red wing, the blueward part of the line is almost free from its
contamination. We thus fitted a single Gaussian to the line in the blueward
part where
the contamination by Ca\emissiontype{II} is not severe,
and then measured the flux within the fitted Gaussian.
The broad feature at 1304 \AA\ results from the blending of the O\emissiontype{I} triplet at
$\lambda$1302.17, $\lambda$1304.86, and $\lambda$1306.0 with the Si\emissiontype{II} doublet at
$\lambda$1304 and $\lambda$1309. The total blend flux of the $\lambda$1304 feature
was measured by integrating all counts above the estimated continuum level between
1524 \AA\ and 1548 \AA\ in the observed frame. The O\emissiontype{I}
and Si\emissiontype{II}
lines are severely blended,
and de-blending them is hopeless for quasars even with high spectral resolution.
However, the Si\emissiontype{II} doublet blended in the 1304 \AA\ feature
is expected to be accompanied by considerable Si\emissiontype{II} line emission at 1264 \AA\
(\cite{Constantin}).
Actually, Si\emissiontype{II} $\lambda$1264 is clearly detected in the UV
spectrum of PG 1116+215 (figure \ref{fig:spectra}) and
its flux is 44 $\pm$ 5\% of the blend flux of
the 1304 \AA\ feature.
The photoionization models by \citet{Kwan79} and
\citet{Netzer80} predict that the Si\emissiontype{II} doublet at 1304
\AA\ and 1309 \AA\ has
16 -- 28\% of the flux of Si\emissiontype{II} $\lambda$1264
(\cite{Dumont}),
implying that 7 -- 12\% of
the blend flux comes
from the Si\emissiontype{II} doublet and 88 -- 93\% from
the O\emissiontype{I} triplet.
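The de-blending bookkeeping of this paragraph is simple enough to verify directly; the numbers below are the ones quoted above.

```python
# Si II 1264 carries 44% of the 1304 blend flux, and the photoionization
# models put Si II 1304 + 1309 at 16--28% of Si II 1264.
si_1264_over_blend = 0.44
si_frac_lo = 0.16 * si_1264_over_blend   # fraction of the blend from Si II, low end
si_frac_hi = 0.28 * si_1264_over_blend   # ... high end
oi_frac_lo = 1.0 - si_frac_hi            # fraction from the O I triplet, low end
oi_frac_hi = 1.0 - si_frac_lo            # ... high end
# Si II: 7--12% of the blend flux; O I: 88--93%
```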
\citet{Laor} found that 50 -- 56\% of the blend flux is due to
the O\emissiontype{I}
triplet in I Zw 1 and Rodr{\usefont{T1}{ppl}{m}{n}\'{i}}guez-Ardila et
al. (2002b) found that the average
O\emissiontype{I} contribution to the blend flux is 75\% in their three NLS1s.
In a sample of high-redshift quasars, \citet{Constantin}
concluded that the
$\lambda$1304 feature is due only to O\emissiontype{I} lines, because
Si\emissiontype{II} lines,
which are expected to accompany Si\emissiontype{II} $\lambda$1304 and
$\lambda$1309, are not seen.
From the discussions presented above, we assumed that 90\% of the
$\lambda$1304 feature of PG 1116+215 is due to O\emissiontype{I} lines.
We emphasize that assuming the O\emissiontype{I} contribution to the blend to be
75\%
(Rodr{\usefont{T1}{ppl}{m}{n}\'{i}}guez-Ardila et al. 2002b) or 100\%
(\cite{Constantin}) does not change our conclusions.
We list the measured fluxes of O\emissiontype{I} $\lambda$1304, $\lambda$8446, and
$\lambda$11287 in table \ref{tab:flux}.
\begin{table}
\caption{Measured flux of O\emissiontype{I} $\lambda$1304, $\lambda$8446, and
$\lambda$11287 for PG 1116+215.}
\label{tab:flux}
\begin{center}
\begin{tabular}{cc}
\hline\hline
Line & Flux ($10^{-14}$erg s$^{-1}$ cm$^{-2}$)\\\hline
O\emissiontype{I} $\lambda$1304 & 27.4$\pm$2.1\\
O\emissiontype{I} $\lambda$8446 & 8.1$\pm$1.7\\
O\emissiontype{I} $\lambda$11287 & 4.6$\pm$0.6\\\hline
\end{tabular}
\end{center}
\end{table}
\section{Mechanism of O\emissiontype{I} Line Formation}
The photon flux ratios for PG 1116+215 of O\emissiontype{I}
$\lambda$11287/O\emissiontype{I} $\lambda$8446
(hereafter $\mathrm{ROI_{ir}}$) and O\emissiontype{I}
$\lambda$1304/O\emissiontype{I} $\lambda$8446
(hereafter $\mathrm{ROI_{uv}}$) are given in table \ref{tab:photonratio} along with
those for a Seyfert 1 and six NLS1s.
The O\emissiontype{I} $\lambda$1304 flux of I Zw 1 was obtained from \citet{Laor},
while $\lambda$8446 and $\lambda$11287 were obtained from \citet{Rudy00}.
$\mathrm{ROI_{ir}}$ and $\mathrm{ROI_{uv}}$ of the other five NLS1s and the Seyfert 1 were
obtained from Rodr{\usefont{T1}{ppl}{m}{n}\'{i}}guez-Ardila et al. (2002b).
Note that while the quoted errors for PG 1116+215 and I Zw 1 are at the 1$\sigma$
level, those for the other objects are at the 2$\sigma$ level.
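The ratios quoted below for PG 1116+215 follow from the energy fluxes of table \ref{tab:flux} on dividing each flux by the photon energy $hc/\lambda$, so that the photon flux scales as $F\lambda$; the following quick check reproduces the tabulated values (errors omitted).

```python
# Energy fluxes from the flux table above, in units of 10^-14 erg s^-1 cm^-2,
# keyed by wavelength in Angstrom.
flux = {1304: 27.4, 8446: 8.1, 11287: 4.6}

def photon_ratio(lam1, lam2):
    # photon flux = energy flux / (hc / lambda), so the ratio is F1*lam1 / (F2*lam2)
    return (flux[lam1] * lam1) / (flux[lam2] * lam2)

roi_uv = photon_ratio(1304, 8446)    # ~ 0.52
roi_ir = photon_ratio(11287, 8446)   # ~ 0.76
```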
\begin{table*}
\caption{Measured $\mathrm{ROI_{uv}}$ and
$\mathrm{ROI_{ir}}$ for PG 1116+215,
one Seyfert 1, and six NLS1s.}
\label{tab:photonratio}
\begin{center}
\begin{tabular}{llcccc}
\hline\hline
Object & Type & M$_B$\footnotemark[$\ast$] & $\mathrm{ROI_{uv}}$\footnotemark[$\dagger$,$\ddagger$]
& $\mathrm{ROI_{ir}}$\footnotemark[$\dagger$,$\ddagger$] & References\footnotemark[$\amalg$]\\
\hline
PG 1116+215 & quasar & $-$24.7 & 0.52 $\pm$ 0.12 & 0.76 $\pm$ 0.19 & --\\
NGC 863 & Seyfert1 & $-$20.9 & 0.26 $\pm$ 0.04 & 0.55 $\pm$ 0.08 & 1\\
1H 1934-063 & NLS1 & $-$17.1 & $<$ 0.19 & 0.64 $\pm$ 0.05 & 1\\
Ark 564 & NLS1 & $-$20.3 & 0.07 $\pm$ 0.01 & 0.82 $\pm$ 0.03 & 1\\
Mrk 335 & NLS1 & $-$21.0 & 0.20 $\pm$ 0.02 & 0.64 $\pm$ 0.05 & 1\\
Mrk 1044 & NLS1 & $-$19.5 & 0.43 $\pm$ 0.07 & 0.42 $\pm$ 0.05 & 1\\
Ton S180 & NLS1 & $-$22.6 & 0.91 $\pm$ 0.15 & 1.08 $\pm$ 0.16 & 1\\
I Zw 1\footnotemark[$\star$] & NLS1 & $-$22.7 & 0.19 $\pm$ 0.02 & 0.76 $\pm$ 0.11 & 2, 3\\
\hline
\end{tabular}
\end{center}
\footnotemark[$\ast$] These values were obtained by modifying the entries in the catalog
presented by \citet{Veron},
assuming the cosmological
parameters $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{M}$ = 0.3, and
$\Omega_{\Lambda}$ = 0.7.
The originally assumed values were $H_0$ = 50 km s$^{-1}$ Mpc$^{-1}$ and $q_0$ = 0.
\par\noindent
\footnotemark[$\dagger$] $\mathrm{ROI_{uv}}$ and $\mathrm{ROI_{ir}}$ are defined as the photon flux ratios
$\lambda$1304/$\lambda$8446 and $\lambda$11287/$\lambda$8446, respectively.
\par\noindent
\footnotemark[$\ddagger$] The quoted errors are at the 1$\sigma$ level for PG 1116+215 and I
Zw 1, while those of the other objects are at the 2$\sigma$ level.
\par\noindent
\footnotemark[$\star$] We assumed that 53\% of the $\lambda$1304 feature is due to
O\emissiontype{I} lines, which is the average of the optically
thick and thin cases presented by \citet{Laor}.
\par\noindent
\footnotemark[$\amalg$] References.--- (1) Rodr{\usefont{T1}{ppl}{m}{n}\'{i}}guez-Ardila et
al. (2002b). (2) \citet{Laor}. (3) \citet{Rudy00}.
\end{table*}
The average value of $\mathrm{ROI_{ir}}$ for the NLS1s is 0.73 $\pm$ 0.22, where the 1$\sigma$
value, 0.22, reflects only the scatter from
object to object, not including the errors measured for each object.
$\mathrm{ROI_{ir}}$ of PG 1116+215 falls within this 1$\sigma$ scatter
of the NLS1 average.
Even if we neglect Ton S180, which has a significantly larger value of
$\mathrm{ROI_{ir}}$ than the other five NLS1s, the average value of
$\mathrm{ROI_{ir}}$ for the NLS1s
is 0.66 $\pm$ 0.15 and that of PG 1116+215 falls within the
1$\sigma$ scatter.
Thus, $\mathrm{ROI_{ir}}$ of PG 1116+215 is not significantly different
from those of other NLS1 samples.
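The averages and scatters used in this paragraph follow from table \ref{tab:photonratio}, with the sample standard deviation over objects as the scatter; a quick check:

```python
import statistics as st

# ROI_ir of the six NLS1s in the table
roi_ir = {"1H 1934-063": 0.64, "Ark 564": 0.82, "Mrk 335": 0.64,
          "Mrk 1044": 0.42, "Ton S180": 1.08, "I Zw 1": 0.76}

mean_all = st.mean(roi_ir.values())        # ~ 0.73
scatter_all = st.stdev(roi_ir.values())    # ~ 0.22 (object-to-object scatter only)

no_ton = [v for name, v in roi_ir.items() if name != "Ton S180"]
mean_cut, scatter_cut = st.mean(no_ton), st.stdev(no_ton)   # ~ 0.66, ~ 0.15

pg = 0.76   # PG 1116+215
within = abs(pg - mean_all) < scatter_all and abs(pg - mean_cut) < scatter_cut
```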
In the same way, we can see no significant difference between the
$\mathrm{ROI_{ir}}$ of the Seyfert 1 galaxy NGC 863 and those of the NLS1s.
$\mathrm{ROI_{ir}}$ of an AGN was first measured for I Zw 1 by \citet{Rudy89}.
They found that $\mathrm{ROI_{ir}}$ of this object is equal to unity, and
suggested that O\emissiontype{I} lines are formed by Ly$\beta$ pumping.
However, Rodr{\usefont{T1}{ppl}{m}{n}\'{i}}guez-Ardila et al. (2002b) found that
$\mathrm{ROI_{ir}}$ of I Zw 1 falls below unity, using the spectrum with higher
resolution and wider spectral coverage published by \citet{Rudy00}.
They also examined the spectra of six NLS1s including I Zw 1, and one Seyfert 1, finding
that in six of their seven objects $\mathrm{ROI_{ir}}$ is significantly below unity.
Our result for PG 1116+215 indicates that this trend is also present in
quasars.
They argued that
collisional excitation to the upper level of the O\emissiontype{I}
$\lambda$8446 transition
enhances the strength of O\emissiontype{I} $\lambda$8446
relative to that of O\emissiontype{I} $\lambda$11287, so that $\mathrm{ROI_{ir}}$ falls below
unity.
This conclusion was drawn from the presence of O\emissiontype{I} $\lambda$7774 in 1H
1934$-$063, whose relative strength to O\emissiontype{I} $\lambda$8446 is consistent with the
combination of Ly$\beta$ pumping and collisional excitation mechanisms.
Rodr{\usefont{T1}{ppl}{m}{n}\'{i}}guez-Ardila et al. (2002b) ruled out continuum
fluorescence as the additional mechanism
from the lack or weakness of other O\emissiontype{I} lines, such as
$\lambda$7002, $\lambda$7254, and $\lambda$13165, which should be
present if continuum fluorescence is at work (\cite{Grandi80}).
Measuring the strengths of O\emissiontype{I} $\lambda$6048,
$\lambda$7774, and $\lambda$7990, which are very weak relative to that of
$\lambda$8446, they also suggested that the contribution to
O\emissiontype{I} $\lambda$8446 by recombination is no larger than a few percent.
The average value of $\mathrm{ROI_{uv}}$ for the NLS1s is 0.33 $\pm$
0.30, adopting the upper
limit value for 1H 1934$-$063.
$\mathrm{ROI_{uv}}$ of PG 1116+215 and NGC 863 fall within this
1$\sigma$ scatter of the NLS1
average, which means that the $\mathrm{ROI_{uv}}$ values of these two objects are
not significantly different from those of other NLS1 samples.
These values are significantly below unity.
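The same arithmetic applies to $\mathrm{ROI_{uv}}$, with the upper limit of 1H 1934$-$063 treated as a measurement:

```python
import statistics as st

# ROI_uv of the six NLS1s; the first entry is the 1H 1934-063 upper limit.
roi_uv = [0.19, 0.07, 0.20, 0.43, 0.91, 0.19]

mean_uv = st.mean(roi_uv)        # ~ 0.33
scatter_uv = st.stdev(roi_uv)    # ~ 0.30

# PG 1116+215 (0.52) and NGC 863 (0.26) both lie within mean +/- 1 sigma.
consistent = all(abs(x - mean_uv) < scatter_uv for x in (0.52, 0.26))
```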
\citet{Kwan81} showed that the intrinsic $\mathrm{ROI_{uv}}$ is 0.76, rather
than 1, due to the collisional excitation of
O\emissiontype{I} $\lambda$8446 and the Balmer
continuum absorption of O\emissiontype{I} $\lambda$1304.
\citet{Grandi83} found that O\emissiontype{I} $\lambda$1304 could also be destroyed by
collisional de-excitation of the O\emissiontype{I} $\lambda$1304 transition and
de-excitation of the upper term of O\emissiontype{I} $\lambda$1304 via O\emissiontype{I}]
$\lambda$1641 and $\lambda$2324 line emission.
He calculated the fraction of populations of $3s\ ^3S^0$ that result in
the observable $\lambda$1304, and found that up to half of the
$\lambda$1304 photons could be
converted to O\emissiontype{I}] $\lambda$1641 before they leave the emission-line cloud.
However, \citet{Laor} found that no strong line is detected at
1641 \AA\ for I Zw 1, and that such a line cannot add more than
$\sim$30\% to the observed $\lambda$1304 flux.
Such an investigation was possible for I Zw 1
because the He\emissiontype{II} $\lambda$1640 line, which is usually the dominant
feature around 1641 \AA, is blueshifted by about 10 \AA\ in the
I Zw 1 spectrum.
Thus, they ruled out this mechanism as the main contributor to destroy the
O\emissiontype{I} $\lambda$1304 photon.
Consequently, the major mechanism of O\emissiontype{I} $\lambda$1304
destruction is the Balmer
continuum absorption or collisional de-excitation, or both.
\section{Discussion}
\citet{Laor} showed that the relative strength of C\emissiontype{III}]
$\lambda$1909 of
I Zw 1 is significantly lower than that observed in the typical AGN.
They argued that this suppression of C\emissiontype{III}] $\lambda$1909 may imply that
the typical BLR density in I Zw 1 is about an order of magnitude
larger than in the typical AGN.
This suppression of C\emissiontype{III}] $\lambda$1909 is also seen in
other narrow-line quasars (Baldwin et al. 1988, 1996; \cite{Wilkes};
\cite{Kuras}), suggesting that the high density
of the BLR gas is common in this type of AGN.
However, it is not clear whether these arguments are applicable to the
region where Fe\emissiontype{II} and O\emissiontype{I} lines are emitted.
\citet{Kuras} showed that while line flux ratios of
Ly$\alpha$, C\emissiontype{IV} $\lambda$1549,
Si\emissiontype{IV} $\lambda$1440,
C\emissiontype{III}] $\lambda$1909,
and Si\emissiontype{III}] $\lambda$1892 observed in the NLS1s can be
explained by a 10-times lower ionization
parameter and a few-times ($<$10) higher density than in normal AGNs,
the Mg\emissiontype{II} line strength cannot be explained by these parameter values.
Since the Fe\emissiontype{II} and O\emissiontype{I} lines are
considered to be formed in the same region as the
Mg\emissiontype{II} line (see, e.g., \cite{Kwan81}),
the physical properties in the Fe\emissiontype{II} emitting region could be quite
different from the C\emissiontype{III}] emitting region.
The three O\emissiontype{I} lines presented here
($\lambda$1304, $\lambda$8446, and $\lambda$11287)
provide a good indicator of the gas density in the Fe\emissiontype{II} emitting region.
This is because these O\emissiontype{I} line emissions are seriously affected by
several mechanisms that are sensitive to the gas density, as described
above.
Our observation of these three O\emissiontype{I} lines in a quasar enables us, for the
first time, to investigate whether there is a significant difference in
the gas density of the Fe\emissiontype{II} emitting clouds between NLS1s and
quasars.
The results are quite different from the situation indicated by
the C\emissiontype{III}] $\lambda$1909 line.
As described above, there are no significant differences in
$\mathrm{ROI_{ir}}$ and $\mathrm{ROI_{uv}}$ among the NLS1s, the
Seyfert 1, and the quasar.
This indicates that the physical properties of the O\emissiontype{I} emitting
clouds affect the O\emissiontype{I} line formation in a similar way
in these three types of AGN.
In other words, the collisional processes are working to a similar
extent in the NLS1s, the Seyfert 1, and the quasar.
Thus, our O\emissiontype{I} observations did not find any significant differences among
these types of AGN in the gas density
in the outermost portion of the BLR where the Fe\emissiontype{II}
and O\emissiontype{I} lines are emitted.
This result would provide some clues for modeling the environment of
the Fe\emissiontype{II} emitting
cloud in NLS1s, Seyfert 1s, and quasars.
NLS1s are known as strong Fe\emissiontype{II} emitters.
The efficiency of Fe\emissiontype{II} emission is sensitive to several physical
parameters other than the gas density, of which the most sensitive
are the ionization parameter, microturbulence, and the
input spectral energy distribution (\cite{Netzer83}; \cite{Wills};
Verner et al. 1999, 2003).
Our results suggest that the Fe\emissiontype{II} emission enhancement in
the NLS1s may not be caused by the high density of the BLR gas.
If this is the case, some of the other parameters should be quite different
between NLS1s and quasars.
\section{Summary}
We performed NIR spectroscopy of the quasar PG 1116+215.
By combining the NIR spectrum with the UV spectrum taken with the HST/FOS, we
obtained three O\emissiontype{I} lines ($\lambda$1304, $\lambda$8446, and
$\lambda$11287).
We found that the line width of O\emissiontype{I} $\lambda$11287 is narrower than that of
Ly$\alpha$, which is consistent with O\emissiontype{I} and
Fe\emissiontype{II} emission occurring in the
partly ionized regions at the outermost portion of the BLR.
We also found that the photon flux ratios of the three O\emissiontype{I} lines
significantly deviate from 1:1:1, the ratio expected in the
case of pure Ly$\beta$ pumping.
This strongly suggests the contribution of mechanisms other than Ly$\beta$ pumping
to the O\emissiontype{I} line formation/destruction, for which the best candidates are
the density-sensitive processes.
Furthermore, the obtained photon flux ratio for PG 1116+215 is not significantly
different from those of the NLS1s and the Seyfert 1.
This indicates that the gas densities in the Fe\emissiontype{II} and O\emissiontype{I} emitting regions
are not significantly different among NLS1s, Seyfert 1s, and quasars,
although a larger gas density in the NLS1s is indicated by
the C\emissiontype{III}] $\lambda$1909 strengths.
We also suggest that physical parameters other than the gas density should be
quite different between NLS1s and quasars to account for NLS1s being
strong Fe\emissiontype{II} emitters.
\bigskip
We are grateful to the referee, Jack Baldwin, for useful comments that
improved this manuscript.
We thank the staff of KPNO for technical support and
assistance with the observation.
We would also like to thank H. Fukushi for her help.
The trip of YM and SO to Tucson was financially supported by
Research Center for the Early Universe, The University of Tokyo.
\section{\textbf{Introduction}}\label{S:intro}
The study of minimal surfaces has been one of the prime drivers of
the study of geometry and calculus of variations in the twentieth
century and, in particular, the Bernstein problem has played a
central role. In 1915 Bernstein proved his theorem \cite{Bernstein}, that a
$C^2$ minimal graph in $\mathbb R^3$ must necessarily be an affine plane,
and, almost fifty years later, a new insight of Fleming
\cite{Fle} generated renewed interest in the problem. The work of
De Giorgi, \cite{DG3}, Almgren, \cite{Al}, Simons, \cite{Sim}, and
Bombieri-De Giorgi-Giusti, \cite{BDG}, culminated in the complete
solution to the Bernstein problem:
\begin{Thm}\label{T:classicalB}
Let $S = \{(x,u(x)) \in \mathbb R^{n+1}| x\in \mathbb R^n , x_{n+1} = u(x)\}$ be
a $C^2$ minimal graph in $\mathbb R^{n+1}$, i.e., let $u\in C^2(\mathbb R^n)$ be a
solution of the minimal surface equation
\begin{equation}\label{ms}
div\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right) = 0,
\end{equation}
in the whole space. If $n\leq 7$, then there exist $a\in \mathbb R^n$,
$\beta \in \mathbb R$ such that $u(x) = \langle a,x\rangle + \beta$, i.e., $S$ must
be an affine hyperplane. If instead $n\geq 8$, then there exist
non-affine (real analytic) functions on $\mathbb R^n$ which solve
\eqref{ms}.
\end{Thm}
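The minimal surface equation \eqref{ms} is easy to probe numerically. The sketch below (plain Python with nested central differences; purely illustrative and not part of any argument) checks that an affine function annihilates the minimal surface operator, and does the same for the helicoidal graph $u = \arctan(y/x)$, a classical solution of \eqref{ms} for $n=2$ which, not being entire, does not contradict the theorem.

```python
import math

H = 1e-5

def ms_operator(u, x, y):
    # div(Du / sqrt(1 + |Du|^2)) for n = 2, by nested central differences
    def w(x, y):
        ux = (u(x + H, y) - u(x - H, y)) / (2 * H)
        uy = (u(x, y + H) - u(x, y - H)) / (2 * H)
        norm = math.sqrt(1.0 + ux * ux + uy * uy)
        return ux / norm, uy / norm
    return ((w(x + H, y)[0] - w(x - H, y)[0]) / (2 * H)
            + (w(x, y + H)[1] - w(x, y - H)[1]) / (2 * H))

affine = lambda x, y: 2.0 * x - 3.0 * y + 1.0
helicoid = lambda x, y: math.atan2(y, x)   # defined away from the origin

pts = [(1.0, 0.3), (-0.7, 1.2), (2.0, -1.5)]
res_affine = max(abs(ms_operator(affine, x, y)) for x, y in pts)
res_helicoid = max(abs(ms_operator(helicoid, x, y)) for x, y in pts)
# both residuals vanish to finite-difference accuracy
```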
Roughly a decade later, Fischer-Colbrie and Schoen, \cite{S-FC}, and do
Carmo and Peng, \cite{DoC-P}, imposing a stability condition,
independently proved
a far-reaching generalization of the Bernstein property:
\begin{Thm}\label{T:classicalS}
Every stable complete minimal surface $S\subset \mathbb R^3$ must be a plane.
\end{Thm}
Here, stable means that $S$ minimizes area up to second order on
every compact set. We note in passing that, thanks to
the strict convexity of the area functional $\mathcal A(u) =
\int_\Omega \sqrt{1+|Du|^2} dx$, where $\Omega\subset\subset \mathbb R^n$, for Euclidean graphs on $\mathbb R^n$ the
stability assumption is automatically satisfied.
The purpose of this paper is to prove an analogue of Theorem
\ref{T:classicalS} in the sub-Riemannian Heisenberg group $\mathbb H^1$ (for
the relevant definitions we refer the reader to the next section).
The study of the Bernstein problem in this setting has received
increasing attention over the last decade. The existence of minimal
surfaces in sub-Riemannian spaces was established by two of us in
\cite{GarNh} by developing in that setting the methods of
geometric measure theory. The study of minimal graphs in the
Heisenberg group was first approached by one of us in
\cite{Pauls:minimal}, by Cheng, Hwang, Malchiodi and Yang
\cite{CHMY} (who studied the problem in a more general class of
pseudohermitian $3$-manifolds), by three of us in
\cite{DGN:minimal}, and by two of us in \cite{GP}.
Henceforth in this paper, following a perhaps unfortunate but old
tradition, by \emph{minimal} we mean a $C^2$ surface $S\subset
\mathbb H^1$ whose sub-Riemannian, or horizontal, mean curvature $\mathcal H$
(see Proposition \ref{P:mc} below for its expression) vanishes
identically on $S$. In these initial investigations, a number of
nonplanar minimal graphs over the $xy$-plane are produced
(\cite{Pauls:minimal,CHMY,GP}) and indeed are classified (first in
\cite{CHMY}, with an alternate proof in \cite{GP}). A prototypical
example is given by the surface $t=xy/2$ which is an entire minimal
graph over the $xy$-plane. However, this example and all other
entire minimal graphs over the $xy$-plane must have nonempty
characteristic locus (this fact was proved independently in
\cite{CHMY} and \cite{GP}). We recall that the latter is defined as
the set of points of the surface at which the two bracket generating
vector fields $X_1, X_2$ become tangent to the surface itself.
In some of these same papers, new examples were discovered of entire
minimal graphs over some plane, but with an empty characteristic
locus. In \cite{GP}, two of us first produced infinitely many examples of
such graphs, one of which is given by
\begin{equation}\label{ce1}
x = y\ \tan(\tanh(t)).\end{equation}
Moreover, as announced in
\cite{CHMY} (this and many other examples are shown in more detail
in \cite{CH}), the surface
\begin{equation}\label{ce2}
x = y\ t
\end{equation}
is also noncharacteristic and minimal. From the point of view of
the Bernstein problem, these examples would indicate a failure of
the property: there exists a rich reservoir of graphs over the
$xy$-plane which are minimal (although they have characteristic
points) and an equally rich reservoir of nonplanar noncharacteristic minimal
graphs over the $yt$-plane (or the $xt$-plane). In the positive
direction, the work \cite{GP} shows that graphs over vertical planes
must have a specific structure indicating some kind of rigidity (see
also \cite{CH} for other classification results).
In \cite{DGN:stable} the first three authors continued the
investigation into noncharacteristic graphs by asking a more refined
question: are surfaces such as \eqref{ce1} or \eqref{ce2} local
minima? Just as in the classical case, sub-Riemannian minimal
surfaces are shown to merely be critical points of the relative area
functional (the so-called horizontal perimeter). Since this
functional is shown to lack the fundamental convexity property which
guarantees in the flat case that critical points are global
minimizers, the question of stability becomes central. It could
happen in fact that minimal surfaces such as \eqref{ce1},
\eqref{ce2} might fail to be locally area minimizing. Using a basic second
variation formula discovered in \cite{DGN:minimal}, in \cite{DGN:stable} the following surprising theorem is proved.
\begin{Thm}\label{T:DGNstable} Let $\alpha
>0,\beta \in \mathbb R$, then the surfaces
\[x\ =\ y \ (\alpha t+\beta)\ ,\ \ \;\; y\ =\ x\ (-\alpha t+\beta),\]
are unstable noncharacteristic entire minimal graphs.
\end{Thm}
We emphasize that these surfaces are also global intrinsic graphs in the sense of \cite{FSSC2}, \cite{FSSC3}, see Definition
\ref{D:intgraph} below. We also note that
Theorem \ref{T:DGNstable} shows that an analogue of the Bernstein
property cannot hold unless we assume that the surface is
noncharacteristic and stable.
The second variation formula in \cite{DGN:minimal} reduces to a
stability inequality of Hardy type on the surface. Another major
tool in the proof of Theorem \ref{T:DGNstable} is the reduction of
such a Hardy type inequality to a one-dimensional integral inequality
of Carleman-Wirtinger type; the instability is then confirmed by explicitly
constructing a variation which decreases the perimeter. In \cite{DGNP},
we continued this line of investigation and provided a positive
answer to the following version of the Bernstein problem.
\begin{Thm}[\textbf{Bernstein Theorem 1, \cite{DGNP}}]\label{T:bern1}
In $\mathbb H^1$ the only stable $C^2$ minimal entire graphs, with empty
characteristic locus, are the vertical planes \eqref{vp0}.
\end{Thm}
To illustrate the strategy behind this result, we recall a
definition from \cite{DGNP}.
\begin{Def}\label{D:gs}
We say that a $C^2$ surface $S\subset \mathbb H^1$ is a \emph{graphical
strip} if there exist an interval $I\subset \mathbb R$, and $G \in C^2(I)$,
with $G'\geq 0$ on $I$, such that, after possibly a left-translation
and a rotation about the $t$-axis, then either
\begin{equation}\label{ceI}
S\ =\ \{(x,y,t)\in \mathbb H^1 \mid (y,t) \in \mathbb R \times I , x = y G(t)\},
\end{equation}
or
\begin{equation}\label{ceII}
S\ =\ \{(x,y,t)\in \mathbb H^1 \mid (x,t) \in \mathbb R \times I , y = - x
G(t)\}.
\end{equation}
If there exists $J\subset I$ such that $G'>0$ on $J$, then we call
$S$ a \emph{strict graphical strip}.
\end{Def}
It should be immediately clear to the reader that the surfaces in \eqref{ce1} or \eqref{ce2}
are examples of strict graphical strips in which one can take $J = I=\mathbb R$. Here is one of the two central results of \cite{DGNP}.
\begin{Thm}\label{T:DGNP1}
Any strict graphical strip is an unstable minimal surface with empty
characteristic locus. As a consequence, any minimal surface
containing a strict graphical strip is unstable.
\end{Thm}
The proof of Theorem \ref{T:DGNP1} involves, among other things, an
adaptation of the technique in \cite{DGN:stable} which leads to the
construction of an explicit variation along which the horizontal
perimeter strictly decreases. Our second main result in \cite{DGNP}
consists in proving, using the techniques of \cite{GP}, that every
noncharacteristic minimal graph over some plane which is not itself
a vertical plane
\begin{equation}\label{vp0}
\Pi_\gamma\ =\ \{(x,y,t)\in \mathbb H^1\mid a x + b y = \gamma\},
\end{equation}
contains a strict graphical strip. Combining this result with
Theorem \ref{T:DGNP1}, we obtain Theorem \ref{T:bern1}.
Another approach to the sub-Riemannian Bernstein problem arises when
considering an intrinsic notion of graph. Observe that in flat
$\mathbb R^3$ a graph of the type $S = \{x = \phi(y,z)\mid(y,z)\in \Omega\}$
can be written as $S= \{(0,u,v) + \phi(u,v) e_1\mid (u,v)\in \Omega\}$, where we have let $e_1 = (1,0,0)$.
Inspired by this observation Franchi, Serapioni and Serra Cassano
proposed the following notion of intrinsic graph adapted to the
non-Abelian group structure of $\mathbb H^1$.
\begin{Def}\label{D:intgraph}
A $C^2$ surface $S$ is an \emph{intrinsic $X_1$-graph} if there exist a
domain $\Omega \subset \mathbb R^2_{uv}$ and $\phi \in C^2(\Omega)$, such
that $S=\{(0,u,v) \circ \phi(u,v)e_1 | (u,v) \in \Omega\}$.
\end{Def}
We note that one basic consequence of this definition is that $S$
has empty characteristic locus. This follows from the fact that the
vector field $X_1$ is always transversal to the surface.
Interestingly, if we assume that $\Omega$ is bounded, then the
horizontal perimeter of $S$ is given by the formula
\begin{equation}\label{igper} \mathscr{P}_H(S)\ =\ \int_\Omega \sqrt{1
+ \mathcal B_\phi(\phi)^2}\ du\ dv,
\end{equation}
where we have denoted by $\mathcal B_\phi(f) = f_u + \phi f_v$ the
linearized Burgers operator. Notice that $\mathcal B_\phi(\phi) =
\phi_u + \phi \phi_v$ is the nonlinear inviscid Burgers operator.
Definition \ref{D:intgraph} was first introduced in \cite{FSSC2} and
developed further in \cite{FSSC3,AS,BSV,GS}. In \cite{BSV}, Barone
Adesi, Serra Cassano and Vittone prove the following Bernstein theorem for
these types of graphs.
\begin{Thm}[\textbf{Bernstein Theorem 2, \cite{BSV}}]\label{T:bern2} The only $C^2$, stable minimal
entire intrinsic $X_1$-graphs are the vertical planes.
\end{Thm}
The proof of Theorem \ref{T:bern2} relies on a clever choice of
coordinates, suggested by the study of the characteristic curves of
the solutions of the minimal surface equation, which for an
intrinsic graph becomes
\begin{equation}\label{mseig}
\mathcal B_\phi(\mathcal B_\phi(\phi))\ =\ 0\ .
\end{equation}
Such a change of coordinates allows the authors to reduce to a case which
can again be solved using the one dimensional reduction techniques used in
\cite{DGN:stable} to prove Theorem \ref{T:DGNstable}.
We are now in a position to discuss the results of this paper.
First, we introduce a definition which is related to Definition
\ref{D:gs} and that is suggested by the analysis of the double
Burgers equation \eqref{mseig}. Suppose we are given some interval $J
= (-4\epsilon,4\epsilon)\subset \mathbb R$, $\epsilon >0$, and functions
$F, G, \sigma\in C^2(J)$ satisfying the condition
\begin{equation}\label{nd}
F'(s)^2 < 2 \sigma'(s) G'(s), \ \ \ \text{for every}\ s \in J.
\end{equation}
We note explicitly that \eqref{nd} implies, in particular, that
$\sigma'(s) G'(s)>0$ for every $s \in J$. If we consider the mapping
$\Psi: \mathbb R \times J \to \mathbb R^2$ from the $(u,s)$ to the $(u,v)$ plane
defined by $\Psi(u,s) = (u,v)$, where $v$ is defined by the equation
\begin{equation}\label{if}
v = v(u,s) = G(s) \frac{u^2}{2} + F(s) u + \sigma(s),
\end{equation}
then we see that the Jacobian determinant of $\Psi$ is given by
\begin{align}\label{jac}
\det J_\Psi(u,s) & = \det \begin{pmatrix} 1 & 0 \\ G(s)u + F(s) &
G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s)
\end{pmatrix}
\\
& = G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s). \notag\end{align}
Thanks to \eqref{nd} the Jacobian of $\Psi$ is always different from
zero. We emphasize at this moment that the continuity of the first
derivatives of the functions $F, G$ and $\sigma$, along with the
assumption \eqref{nd}, guarantee that, possibly restricting the
interval $J = (-4\epsilon,4\epsilon)$, we can always force the map
$\Psi$ to be globally one-to-one, hence a $C^2$ diffeomorphism of
$\mathbb R\times J$ onto its image $\Psi(\mathbb R\times J)$. We denote by
$\Psi^{-1}(u,v) = (u,s(u,v))$ the inverse $C^2$ diffeomorphism.
When we write $s(u,v)$ we mean the function specified by this
inverse diffeomorphism.
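Condition \eqref{nd} states precisely that the quadratic polynomial in $u$ appearing in \eqref{jac} has negative discriminant $F'(s)^2 - 2 G'(s)\sigma'(s)$, which is why the Jacobian never vanishes. A throwaway numerical illustration, with sample data of our own choosing:

```python
# Sample data (ours, not from the text): F(s) = G(s) = sigma(s) = s, so that
# F'^2 = 1 < 2 = 2 sigma' G' and the nondegeneracy condition holds.
Fp, Gp, sigp = 1.0, 1.0, 1.0   # F', G', sigma' (constants here)

def det_J(u):
    # Jacobian determinant: G'(s) u^2/2 + F'(s) u + sigma'(s)
    return Gp * u * u / 2.0 + Fp * u + sigp

# Discriminant F'^2 - 2 G' sigma' = -1 < 0, so det_J has no real root;
# its minimum sigma' - F'^2/(2 G') = 1/2 is attained at u = -F'/G' = -1.
min_det = min(det_J(k * 0.01) for k in range(-1000, 1001))
```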
\begin{Def}\label{intdeltastrip} Let $\epsilon >0$,
$J=(-4\epsilon,4\epsilon)$. A $C^2$ surface $S\subset \mathbb H^1$ is an \emph{intrinsic
graphical strip} on $J$ if there exist functions $F,G, \sigma \in
C^2(J)$ satisfying $(F')^2 \leq 2 \sigma' G'$ such that, if
\[\Omega= \Psi(\mathbb R\times J) = \{(u,v) | u \in \mathbb R, v=G(s) \frac{u^2}{2} + F(s) u + \sigma(s) \text{\; \;for
$s \in J$}\},\] then with $\phi\in C^2(\Omega)$ defined by
\[
\phi(u,v) = F(s(u,v)) + u G(s(u,v)),
\]
we have
\[S = \{(0,u,v)\circ (\phi(u,v),0,0) | (u,v) \in \Omega\} = \{(\phi(u,v),u,v-\frac{u}{2} \phi(u,v))\mid (u,v)\in \Omega\}.\]
We say that $S$ is a \emph{strict intrinsic graphical strip} on $J$
if $F, G, \sigma$ satisfy the strict inequality \eqref{nd}, and if
the map $\Psi:\mathbb R\times J \to \Omega$ is globally one-to-one, hence a
diffeomorphism of $\mathbb R\times J$ onto $\Psi(\mathbb R\times J) = \Omega$.
\end{Def}
\begin{Rem}\label{R:H-m}
A strict intrinsic graphical strip is necessarily a minimal surface.
To see this, we observe that the function $\phi$ in the above
definition satisfies \eqref{mseig}. The reader will find most of the
computations to achieve this in the proof of Corollary
\ref{T:varstrip}, see formula \eqref{sigmaderivs} below, and the
computations following that formula.
\end{Rem}
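A concrete sanity check, with data of our own choosing: $F(s) = 0$, $G(s) = \sigma(s) = s$ satisfy \eqref{nd}, equation \eqref{if} inverts to $s = 2v/(2+u^2)$, and therefore $\phi(u,v) = u\, G(s(u,v)) = 2uv/(2+u^2)$; the resulting surface is precisely the entire minimal graph $x = yt$ of \eqref{ce2}, with $t = s$. The sketch below verifies numerically, by central differences, that this $\phi$ solves \eqref{mseig}.

```python
def phi(u, v):
    # intrinsic X1-graph of the surface x = y t (here F = 0, G(s) = sigma(s) = s)
    return 2.0 * u * v / (2.0 + u * u)

H = 1e-4
def burgers(f):
    # linearized Burgers operator B_phi(f) = f_u + phi f_v, by central differences
    def Bf(u, v):
        fu = (f(u + H, v) - f(u - H, v)) / (2 * H)
        fv = (f(u, v + H) - f(u, v - H)) / (2 * H)
        return fu + phi(u, v) * fv
    return Bf

pts = [(-2.0, -3.0), (-0.5, 0.4), (0.3, 2.0), (1.7, -1.1)]
# the surface point is (phi, u, v - u*phi/2), and indeed x = y t there:
graph_err = max(abs(phi(u, v) - u * (v - u * phi(u, v) / 2.0)) for u, v in pts)
# minimal surface equation for intrinsic graphs: B_phi(B_phi(phi)) = 0
residual = max(abs(burgers(burgers(phi))(u, v)) for u, v in pts)
```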
\begin{Rem}\label{R:G'>0}
In the case of a strict intrinsic graphical strip, without loss of
generality we can assume that $G'(s) > 0$ on $J$ (this property is
needed in the proof of Lemmas \ref{RHS}, \ref{LHS} and Theorem
\ref{I:unstable}). We can justify this claim as follows. As
observed earlier, the condition \eqref{nd} implies $\sigma'(s)G'(s)
> 0$. Since $\sigma'(s)$ does not change sign, if $\sigma'(s) > 0$,
this forces $G'(s) > 0$. If instead we have $G'(s) < 0$ on $J$, we
replace $F,G,\sigma$ by $\tilde F(s) = F(-s)$, $\tilde G(s) =
G(-s)$, $ \tilde \sigma(s) = \sigma(-s)$. The newly defined
functions satisfy \eqref{nd}. We also have $\tilde G'(s) > 0$. Now
we take $\phi(u,v) = \tilde F(-s(u,v)) + u \tilde G(-s(u,v))$. We
see that the surface parameterized by this new $\phi$ has the same
trace as the one with the original $\phi$.
\end{Rem}
\noindent
\begin{Rem} We emphasize here that any vertical plane such as \eqref{vp0} is an
intrinsic graphical strip, but not a strict intrinsic graphical
strip. In fact, if $a\not= 0$, one has $\phi(u,v) =
\frac{\gamma}{a} - \frac{b}{a} u$, so that $F(s) \equiv
\frac{\gamma}{a}$, $G(s) \equiv - \frac{b}{a}$, $\sigma(s) \equiv 0$.
Therefore, $2 \sigma' G' - (F')^2 \equiv 0$.
\end{Rem}
Notice that, as a consequence of the smoothness hypothesis on $F,
G$, and $\sigma$, an intrinsic graphical strip is a surface of class $C^2$.
Definition \ref{intdeltastrip} takes advantage of the coordinates
introduced in \cite{BSV} discussed above, and the motivation behind
it will be explained in Section \ref{S:strips}. With this definition
in place and the second variation formula written in terms of
intrinsic graphical strips, in Section \ref{S:explicit} we use
techniques from \cite{DGN:stable} (and modifications from
\cite{DGNP}) to construct a variation on an intrinsic graphical
strip which decreases the horizontal perimeter, proving the
following basic result.
\begin{MThm}\label{I:unstable}
Let $S$ be a strict intrinsic graphical strip as in Definition
\ref{intdeltastrip}.
There exists a $\psi\in C^2_0(S)$ such that
\[
\mathcal V^H_{II}(S,\psi X_1) < 0.
\]
As a consequence, $S$ is unstable.
\end{MThm}
The relevance of Theorem \ref{I:unstable} is in the following
theorem, which we prove in Section \ref{S:strips}.
\begin{MThm}\label{T:existstrip0} Every $C^2$ complete noncompact embedded
minimal surface without boundary with empty characteristic locus and
which is not itself a vertical plane contains a strict intrinsic
graphical strip.
\end{MThm}
Our proof of Theorem \ref{T:existstrip0} hinges on a close analysis
of the representation results of \cite{GP}. Theorems
\ref{I:unstable} and \ref{T:existstrip0} are the main novel
technical points of the present paper. From them, the following
theorem of Bernstein type will follow.
\begin{MThm}[of Bernstein type]\label{I:main} The vertical planes are the only stable $C^2$ complete embedded
minimal surfaces in $\mathbb H^1$ without boundary and with empty
characteristic locus.
\end{MThm}
We note that Theorem \ref{I:main} is not contained in either of the
cited Theorems \ref{T:bern1} or \ref{T:bern2}. For instance, the
sub-Riemannian \emph{catenoids} in $\mathbb H^1$ (the reader should note
that these surfaces are just the classical hyperboloids of
revolution)
\begin{equation}\label{cat0}
(t-a)^2\ =\ \frac{4}{b^2}\left(\frac{b}{4}(x^2+y^2)-1\right),\ \ \ \
a,b \in \mathbb R, b > 0,
\end{equation}
are complete embedded minimal surfaces with empty characteristic
locus which are not graphs over any plane, nor are they entire intrinsic
graphs. Theorem \ref{I:main} shows that such minimal surfaces are
unstable. These surfaces are a model of special interest. For this
reason, and also for making transparent to the reader our more
general constructions, we discuss them in detail in Section
\ref{SS:catenoid}.
In closing, we note that the representation results of this paper
require that the surface be $C^2$: the complete regularity theory
of minimal surfaces is currently an open problem which is being very
actively investigated.
This work was presented by the last named author at the ICM
Satellite Conference ``Geometric Analysis and PDE's'' in Naples,
Italy, September 2006, and by the third named author at the
Conference on Geometric Analysis and Applications, Univ. Illinois,
Urbana Champaign, July 2006. After its completion we were informed
of the preprint \cite{HRR} which addresses questions related to
those in this paper.
\section{\textbf{Definitions}}\label{S:def}
In this section we recall some definitions and known results which
will be needed in the paper. We recall that the Heisenberg group
$\mathbb H^n$ is the graded, nilpotent Lie group of step $2$ whose underlying
manifold is $\mathbb C^{n}\times \mathbb R \cong \mathbb R^{2n+1}$, and whose points
we indicate $g = (x,y,t)$, $g'=(x',y',t')$, etc., with non-Abelian
left-translation
\begin{equation}\label{Hn}
L_{g} (g') = g \circ g'\ =\ \left(x + x', y + y',t + t' +
\frac{1}{2} (x\cdot y' - x'\cdot y)\right),
\end{equation}
and non-isotropic dilations
\begin{equation}\label{Hn3}
\delta_\lambda (g) = (\lambda x, \lambda y, \lambda^2 t),\quad\quad\quad \lambda > 0.
\end{equation}
Here, and throughout the paper, we will use $v \cdot w$ to denote
the standard Euclidean inner product of two vectors $v$ and $w$ in
$\mathbb R^n$. The dilations \eqref{Hn3} provide a natural scaling
associated with the grading of the Heisenberg algebra $\mathfrak h_n
= V_1\oplus V_2$, where $V_1 = \mathbb R^{2n} \times \{0\}$, $V_2 =
\{0\}\times \mathbb R$. According to such scaling, elements of the
horizontal layer $V_1$ have degree one, whereas elements of the
vertical layer $V_2$ are assigned the degree two. The homogeneous
dimension associated with \eqref{Hn3} is $Q = 2n + 2$. We recall
that, identifying $\mathfrak h_n$ with $\mathbb R^{2n+1}$, we have
for the bracket
\[
[g,g']\ =\ (0,0, x\cdot y'- x'\cdot y).
\]
It is then clear that $[V_1,V_1] = V_2$, and that $V_2$ is the group
center.
Henceforth, we will focus on the first Heisenberg group $\mathbb H^1$.
Applying the differential $(L_g)_*$ of \eqref{Hn} to the standard basis $\{\partial_x,\partial_y,\partial_t\}$ of $\mathbb R^3$, we obtain
the three distinguished vector fields
\[X_1\ =\ (L_g)_*(\partial_x)\ =\ \partial_x -\frac{y}{2} \partial_t\ , \;\; X_2\ =\ (L_g)_*(\partial_y)\ =\ \partial_y
+\frac{x}{2} \partial_t\ ,\;\;T\ =\ (L_g)_*(\partial_t)\ =\ \partial_t\ .\]
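A direct computation on these vector fields shows that
\[
[X_1,X_2]\ =\ T,\ \ \ \ [X_1,T]\ =\ [X_2,T]\ =\ 0,
\]
in accordance with the facts, recalled above, that $[V_1,V_1] = V_2$ and that $V_2$ is the group center.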
The horizontal bundle $H\mathbb H^1$ is the subbundle of $T\mathbb H^1$ whose fiber at a point $g\in \mathbb H^1$ is given by
\[
H_g\ =\ span\{X_1(g),X_2(g)\} \ .
\]
We endow $\mathbb H^1$ with a left-invariant Riemannian metric $\{g_{ij}\}$,
whose inner product we will denote by $<\cdot,\cdot>$, with respect
to which $\{X_1,X_2,T\}$ constitute an orthonormal basis. If
$S\subset \mathbb H^1$ is a $C^2$ oriented surface we will indicate with
$\boldsymbol{N}$ a (non-unit) Riemannian normal with respect to
$<\cdot,\cdot>$, and with $\boldsymbol \nu = \boldsymbol{N}/|\boldsymbol{N}|$ the corresponding Gauss
map. We will let
\begin{equation}\label{pqw}
p = <\boldsymbol{N},X_1>,\ \ \ q = <\boldsymbol{N},X_2>,\ \ \ W = \sqrt{p^2 + q^2},\ \ \
\omega\ =\ <\boldsymbol{N},T>.
\end{equation}
The characteristic locus of $S$ is the closed subset of $S$ defined by
\[\Sigma(S) = \{g \in S|W(g)=0\}.\]
We notice explicitly that $\Sigma(S) = \{g\in S\mid T_gS = H_g\}$. We also set on $S\setminus \Sigma(S)$
\begin{equation}\label{bars}
\overline p = \frac{p}{W},\ \ \ \ \overline q = \frac{q}{W},\ \ \ \ \overline \omega =
\frac{\omega}{W}.
\end{equation}
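As a simple illustration of these notions, consider the plane $S = \{t = 0\}$, with defining function $f(g) = t$ and Riemannian normal $\boldsymbol{N} = (X_1f)\,X_1 + (X_2f)\,X_2 + (Tf)\,T$. Since $X_1 f = -\frac{y}{2}$, $X_2 f = \frac{x}{2}$ and $Tf = 1$, we find
\[
p\ =\ -\frac{y}{2},\ \ \ q\ =\ \frac{x}{2},\ \ \ \omega\ =\ 1,\ \ \ W\ =\ \frac{1}{2}\sqrt{x^2+y^2},
\]
so that $\Sigma(S)$ reduces to the single point $(0,0,0)$, the group identity.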
\begin{Def} Let $S\subset \mathbb H^1$ be a $C^2$ oriented surface. A horizontal normal of $S$ is defined as
\[
\boldsymbol{N}^H\ =\ p\ X_1\ +\ q\ X_2,\] whereas on $S\setminus \Sigma(S)$ the
horizontal Gauss map is defined as
\[\boldsymbol{\nu}^H = \frac{1}{W} \boldsymbol{N}^H\ =\ \overline p X_1 + \overline q X_2\ .\]
\end{Def}
The horizontal perimeter measure of $S$ has the following form.
\begin{Pro}\label{P:Hper} Let $S\subset \mathbb H^1$ be a $C^2$ oriented surface, then the horizontal perimeter of
$S$ is
\[\mathscr{P}_H(S)=\int_S \sqrt{<\boldsymbol \nu,X_1>^2 + <\boldsymbol \nu,X_2>^2}\ d\sigma = \int_S \frac{W}{|\boldsymbol{N}|} d\sigma,\]
where $d\sigma$ is the Riemannian surface area element associated to
$\langle \cdot,\cdot\rangle$.
\end{Pro}
To investigate minimal surfaces, we recall the notion of horizontal
mean curvature $\mathcal H$ introduced in \cite{DGN:minimal},
\cite{Pauls:minimal}, \cite{GP}. Such notion is obtained by
projecting the horizontal Levi-Civita connection onto the so-called
horizontal tangent bundle $HTS = TS \cap H\mathbb H^1$. If we assume, as we
may, that the horizontal normal $\boldsymbol{N}^H$ of $S$ can be
extended to a neighborhood of $S$, and continuing to denote by $\overline p,
\overline q$ the quantities introduced in \eqref{bars} relative to such
extension, then it has been shown in the above cited references that
$\mathcal H$ can be computed by the following proposition.
\begin{Pro}\label{P:mc}
For $g \in S \setminus \Sigma(S)$, the $H$-mean curvature of $S$ at $g$ is given by
\[\mathcal H(g)=X_1 \overline p(g) + X_2\overline q(g)\ .\]
\end{Pro}
For $g \in \Sigma(S)$, we define $\mathcal H(g)= \lim_{g' \rightarrow g, g'
\in S \setminus \Sigma(S)} \mathcal H(g')$, whenever the limit exists.
A surface $S$ is said to be \emph{minimal} if its horizontal mean
curvature vanishes identically.
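For instance, for a vertical plane $S = \{ax + by = \gamma\}$, with $a^2 + b^2 \not= 0$, one has $p \equiv a$, $q \equiv b$, $\omega \equiv 0$, hence $\overline p$ and $\overline q$ are constant and
\[
\mathcal H\ =\ X_1 \overline p\ +\ X_2 \overline q\ =\ 0,
\]
so that vertical planes are minimal surfaces with empty characteristic locus.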
It is now well known (\cite{CHMY,RR,GP,DGN:minimal,Pauls:minimal})
that critical points of the perimeter are characterized by having zero $H$-mean
curvature away from the characteristic locus. We mention that recent
work of Cheng, Hwang and Yang (\cite{CHY}) and of Ritor\'e and Rosales
(\cite{RR2}) has clarified the behavior of such critical points at
the characteristic locus. However, since we will be restricting to
the category of noncharacteristic surfaces, we will not discuss these
results here.
\section{\textbf{The second variation of the horizontal perimeter and the stability of minimal surfaces}}\label{S:var}
In this section, we recall the first and second variation of the
horizontal perimeter for intrinsic graphs. We mention that formulas
for the first and second variation of the horizontal perimeter have
been derived a number of times in various contexts
(\cite{RR,RR2,CHMY,DGN:minimal,GS,AS,BSV,HP2,HP4,BC}).
Let $S \subset \mathbb H^1$ be an oriented $C^2$ surface with empty
characteristic locus, and consider vector fields $\mathcal X = a X_1
+ b X_2 + k T$, with $a, b, k\in C^2_0(S)$. We define the
\emph{first variation} of the horizontal perimeter with respect to
the deformation of $S$,
\[
S^\lambda\ =\ S + \lambda \mathcal X,
\]
as
\[
\mathcal V^{H}_I(S;\mathcal X)\ =\ \frac{d}{d\lambda}~ \mathscr{P}_H(S^\lambda)\Bigl|_{\lambda = 0}.
\]
We say that $S$ is \emph{stationary} if $\mathcal V^{H}_I(S;\mathcal X) = 0$, for every
$\mathcal X$. We define the \emph{second variation} of the
horizontal perimeter as
\[
\mathcal V^{H}_{II}(S;\mathcal X)\ =\ \frac{d^2}{d\lambda^2}~ \mathscr{P}_H(S^\lambda)\Bigl|_{\lambda = 0}.
\]
We say that $S$ is \emph{stable} if $\mathcal V^{H}_{II}(S;\mathcal X)\geq 0$ for every $\mathcal
X$.
Henceforth, to simplify the formulas we introduce the following
notation
\begin{equation}\label{inner} F_{\mathcal X}\ \overset{def}{=}\ \overline p a + \overline q b +
\overline \omega k\ =\ \frac{<\mathcal X,\boldsymbol{N}>}{<\boldsymbol{\nu}^H,\boldsymbol{N}>}.
\end{equation}
The following result was proved independently by several people in
various contexts,
see \cite{RR,RR2,CHMY,DGN:minimal,GS,AS,BSV,HP2,HP4,BC}.
\begin{Thm}\label{T:variations}
Let $S\subset \mathbb H^1$ be an oriented $C^2$ surface with empty
characteristic locus, then
\begin{equation}\label{fvH}
\mathcal V^H_I(S;\mathcal X)\ =\
\int_{S}
\mathcal H\ F_{\mathcal X}\ d\sigma_H.
\end{equation}
In particular, $S$ is stationary if and only if it is minimal.
\end{Thm}
To state the next result we introduce some notation. Given the quantity
$\overline \omega$ we let
\[
\mathcal A\ =\ -\ \nabla^{H,S} \overline \omega.
\]
The following second variation formula was proved in
\cite{DGN:minimal}.
\begin{Thm}\label{T:svgeometric}
Let $S\subset \mathbb H^1$ be a minimal surface with empty characteristic
locus, then
\[
\mathcal V^{H}_{II}(S;\mathcal X)\ =\ \int_S \bigg\{|\nabla^{H,S} F_{\mathcal X}|^2\ +\ (2\mathcal A - \overline \omega^2)
F_{\mathcal X}^2\bigg\} d\sigma_H.
\]
As a consequence, $S$ is stable if and only if for any $\mathcal X$
one has
\[
\int_S (\overline \omega^2 - 2\mathcal A) F_{\mathcal X}^2 d\sigma_H\ \leq\ \int_S |\nabla^{H,S}
F_{\mathcal X}|^2\ d\sigma_H.
\]
\end{Thm}
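As an illustration of Theorem \ref{T:svgeometric}, consider a vertical plane $S = \{ax + by = \gamma\}$. In this case $\omega \equiv 0$, hence $\overline \omega \equiv 0$ and $\mathcal A = -\nabla^{H,S}\overline \omega \equiv 0$, so that for every $\mathcal X$
\[
\mathcal V^{H}_{II}(S;\mathcal X)\ =\ \int_S |\nabla^{H,S} F_{\mathcal X}|^2\ d\sigma_H\ \geq\ 0.
\]
This shows that vertical planes are stable, in accordance with Theorem \ref{I:main}.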
The following result is Corollary 15.4 in \cite{DGN:minimal}. Let
$\phi:\Omega \subset \mathbb R^2_{(u,v)} \rightarrow \mathbb R$ define an intrinsic
$X_1$-graph $S$, and recall the formula \eqref{igper} for the
horizontal perimeter of $S$.
\begin{Cor}\label{C:svig}
Let $S$ be a $C^2$ minimal, intrinsic $X_1$-graph, then for any
$\mathcal X$ one has
\[
\mathcal V^{H}_{II}(S;\mathcal X)\ =\ \int_\Omega \frac{\mathcal B_\phi(F_{\mathcal X})^2}{\sqrt{1 + \mathcal
B_\phi(\phi)^2}}\ du dv\ -\ \int_\Omega \frac{\phi_v^2 + 2 \mathcal
B_\phi(\phi_v)}{\sqrt{1 + \mathcal B_\phi(\phi)^2}}\ F_{\mathcal X}^2\ du dv,
\]
where $F_{\mathcal X}$ is as in \eqref{inner}.
\end{Cor}
We next derive the second variation formula for special deformations
of the intrinsic graph $S$. We consider compactly supported vector
fields on $S$ of the type $\mathcal X = \psi X_1$, where $\psi\in
C_0^2(S)$. For this family of deformations, Corollary
\ref{C:svig} yields the following result.
\begin{Thm}\label{T:svig}
Let $S$ be a $C^2$ minimal, intrinsic $X_1$-graph, given by a
function $\phi:\Omega \subset \mathbb R^2_{(u,v)} \rightarrow \mathbb R$, then for any
$\psi\in C^2_0(S)$ one has
\begin{align}\label{svig}
\mathcal V^H_{II}(S,\psi X_1)\ & =\ \int_\Omega \frac{\mathcal
B_\phi(\psi)^2}{(1 + \mathcal B_\phi(\phi)^2)^{3/2}}\ du dv
\\
& -\ \int_\Omega \frac{\psi^2}{(1 + \mathcal B_\phi(\phi)^2)^{3/2}}
\bigg(2 \big(\mathcal B_\phi(\phi)\big)_v - \phi_v^2\bigg)\ du dv.
\notag\end{align}
\end{Thm}
\begin{Rem}\label{R:abuse}
In the statement of the above result the function $\psi\in
C_0^2(S)$. Slightly abusing the notation, in the integral in the
right-hand side of \eqref{svig} we have continued to denote by
$\psi$ the function in $C^2_0(\Omega)$ obtained by composing the
original $\psi$ with the parametrization of the surface $S$
\[
\Omega \ni (u,v)\ \longmapsto\ \left(\phi(u,v),u,v -
\frac{u}{2}\phi(u,v)\right).
\]
\end{Rem}
\begin{proof}[\textbf{Proof}]
We note that with $\mathcal X = \psi X_1$, we have $a = \psi$, $b =
k = 0$. We also recall, see \cite{DGN:minimal}, that for an
intrinsic $X_1$-graph one has
\[
\overline p\ =\ \frac{1}{\sqrt{1 + \mathcal B_\phi(\phi)^2}}\ ,\ \ \ \overline q\ =\
-\ \frac{\mathcal B_\phi(\phi)}{\sqrt{1 + \mathcal B_\phi(\phi)^2}},
\]
and therefore from \eqref{inner} one has
\begin{equation}\label{Fig}
F_{\mathcal X}\ =\ \frac{\psi}{\sqrt{1 + \mathcal B_\phi(\phi)^2}}.
\end{equation}
From this formula a simple computation gives
\[
\mathcal B_\phi(F_{\mathcal X})\ =\ \frac{\mathcal B_\phi(\psi)}{\sqrt{1 +
\mathcal B_\phi(\phi)^2}}\
-\ \frac{\psi\,\mathcal B_\phi(\phi)\, \mathcal B_\phi\big(\mathcal B_\phi(\phi)\big)}{(1 + \mathcal B_\phi(\phi)^2)^{3/2}}.
\]
We now recall that the minimality of $S$ is equivalent to $\phi$
being a solution of the double Burgers equation
\[
\mathcal B_\phi(\mathcal B_\phi(\phi))\ =\ 0.
\]
We thus conclude that
\begin{equation}\label{BFig}
\mathcal B_\phi(F_{\mathcal X})\ =\ \frac{\mathcal B_\phi(\psi)}{\sqrt{1 +
\mathcal B_\phi(\phi)^2}}.
\end{equation}
Using \eqref{Fig} and the identity
\[
\big(\mathcal B_\phi(\phi)\big)_v\ -\ \mathcal B_\phi(\phi_v)\ =\
\phi_v^2,
\]
we thus obtain
\[
-\ \int_\Omega \frac{\phi_v^2 + 2 \mathcal B_\phi(\phi_v)}{\sqrt{1 +
\mathcal B_\phi(\phi)^2}}\ F_{\mathcal X}^2\ du dv\ =\ -\ \int_\Omega
\frac{\psi^2}{(1 + \mathcal B_\phi(\phi)^2)^{3/2}} \bigg(2
\big(\mathcal B_\phi(\phi)\big)_v - \phi_v^2\bigg)\ du dv.
\]
On the other hand, \eqref{BFig} gives
\[
\int_\Omega \frac{\mathcal B_\phi(F_{\mathcal X})^2}{\sqrt{1 + \mathcal
B_\phi(\phi)^2}}\ du dv\ =\ \int_\Omega \frac{\mathcal
B_\phi(\psi)^2}{(1 + \mathcal B_\phi(\phi)^2)^{3/2}}\ du dv.
\]
Combining the last two equations we reach the desired conclusion.
\end{proof}
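\begin{Rem}
For the reader's convenience we verify the identity $\big(\mathcal B_\phi(\phi)\big)_v - \mathcal B_\phi(\phi_v) = \phi_v^2$ used in the above proof. Since $\mathcal B_\phi(f) = f_u + \phi f_v$, we have
\[
\big(\mathcal B_\phi(\phi)\big)_v\ =\ (\phi_u + \phi\, \phi_v)_v\ =\ \phi_{uv} + \phi_v^2 + \phi\, \phi_{vv},\ \ \ \ \mathcal B_\phi(\phi_v)\ =\ \phi_{uv} + \phi\, \phi_{vv},
\]
and subtracting the two expressions proves the claim.
\end{Rem}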
Next, we apply Theorem \ref{T:svig} to the case of a strict
intrinsic graphical strip as in Definition \ref{intdeltastrip}. We
recall the diffeomorphism $\Psi:\mathbb R\times J\to \Omega = \Psi(\mathbb R\times J)
\subset \mathbb R^2_{u,v}$ given by $\Psi(u,s) = (u,v) =
(u,\frac{u^2}{2}G(s)+ F(s)u + \sigma(s))$, see \eqref{if}. As
before, in the statement of the next result, given a function
$\psi\in C_0^2(S)$ we will, with a slight abuse of notation, write
$\psi\in C^2_0(\Omega)$. What we mean by this is the composition of the
original $\psi$ with the parametrization of the surface $S$
\[
\Omega \ni (u,v)\ \longmapsto\ \left(\phi(u,v),u,v -
\frac{u}{2}\phi(u,v)\right)
\]
provided in Definition \ref{intdeltastrip}.
\begin{Cor}\label{T:varstrip} Let $S$ be a strict intrinsic
graphical strip defined by functions $F, G, \sigma\in C^2(J)$ and
$\phi(u,v) = F(s(u,v)) + u G(s(u,v))$, as in Definition
\ref{intdeltastrip}. One has for any $\psi\in C^2_0(S)$,
\begin{align}\label{T:strip}
\mathcal V^H_{II}(S,\psi X_1)&\ =\ \int_{\mathbb R \times J}
\bigg(\left(\frac{\partial}{\partial u} (\psi\circ\Psi)(u,s)
\right)^2 \frac{G'(s) \frac{u^2}{2} + F'(s)u +
\sigma'(s)}{(1+G(s)^2)^{\frac{3}{2}}}\\
\notag &\qquad\qquad\qquad\qquad
\ +\
\frac{(\psi\circ\Psi)(u,s)^2}{(1+G(s)^2)^{\frac{3}{2}}}\,
\frac{F'(s)^2-2\sigma'(s)G'(s)}{G'(s) \frac{u^2}{2} + F'(s)u +
\sigma'(s)} \bigg) du ds, \notag
\end{align}
where we have indicated with $\Psi:\mathbb R \times J \to \Omega$ the
diffeomorphism defined by \eqref{if}.
\end{Cor}
\begin{proof}[\textbf{Proof}]
We note that the proof of this theorem is similar to that of
equation (5.12) of \cite{BSV}. Since every strict intrinsic
graphical strip is an intrinsic $X_1$-graph, we can apply the second
variation formula \eqref{svig} in Theorem \ref{T:svig}. In this
formula we want to use the global diffeomorphism $\Psi:\mathbb R \times J
\to \Omega$ to convert the integral on $\Omega$ to an integral on
$\mathbb R\times J$. By \eqref{jac}
\begin{align*}
\det J_\Psi(u,s) & = \det \begin{pmatrix} 1 & 0
\\
v_u & v_s\end{pmatrix} = \det \begin{pmatrix} 1 & 0 \\
G(s)u + F(s) &
G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s)
\end{pmatrix}
\\
& = G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s).
\end{align*}
We emphasize that since we are assuming that $S$ is a strict
graphical strip, then \eqref{nd} is in force, and therefore the
Jacobian of $\Psi$ is always different from zero. Recall that we are
also assuming that $\Psi$ is globally one-to-one. The Inverse
Function Theorem gives at every point $(u,v) = \Psi(u,s)$
\[
J_{\Psi^{-1}}(u,v) = \begin{pmatrix} 1 & 0 \\
- \frac{G(s)u + F(s)}{G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s)} &
\frac{1}{G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s)}
\end{pmatrix}.
\]
We thus have
\begin{equation}\label{sigmaderivs}
s_u = - \frac{G(s)u + F(s)}{G'(s) \frac{u^2}{2} + F'(s)u +
\sigma'(s)},\;\; \ \ s_v = \frac{1}{G'(s) \frac{u^2}{2} + F'(s)u +
\sigma'(s)}.
\end{equation}
Using \eqref{sigmaderivs} and the assumption that $\phi(u,v)= F(s) +
u G(s)$, we thus find
\begin{equation*}
\begin{split}
\mathcal B_\phi(\phi) &= \phi_u + \phi \phi_v =
G(s)+(G'(s)u+F'(s))s_u+\phi(G'(s)
u+F'(s))s_v\\
&= G(s) - \frac{(F'(s)+uG'(s))(F(s)+uG(s))}{G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s)} +
\frac{(F'(s)+uG'(s))(F(s)+uG(s))}{G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s)}\\
&=G(s).
\end{split}
\end{equation*}
This gives,
\[
\big(\mathcal B_\phi(\phi)\big)_v = G'(s) s_v = \frac{G'(s)}{G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s)},
\]
\[
(\phi_v)^2= \left(F'(s) + u G'(s)\right)^2 s_v^2 = \left(\frac{F'(s) + u G'(s)}{G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s)}\right)^2.
\]
Combining these formulas yields
\[
2\big(\mathcal B_\phi(\phi)\big)_v - \phi_v^2 =
\frac{2\sigma'(s)G'(s)-F'(s)^2}{\left(G'(s) \frac{u^2}{2} + F'(s)u +
\sigma'(s)\right)^2}\ .
\]
Substituting this into \eqref{svig} gives
\[ \mathcal V^H_{II}(S,\psi X_1)\ =\ \int_\Omega
\frac{1}{(1+G(s)^2)^{\frac{3}{2}}} \left( \mathcal B_\phi(\psi)^2
+ \psi^2
\frac{F'(s)^2-2\sigma'(s)G'(s)}{\left(G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s)\right)^2} \right ) \; du \;dv\ .\]
Now, to complete the proof, we make the change of variable $(u,v) =
\Psi(u,s)$, with $(u,s)\in \mathbb R \times J$. The Jacobian of such
diffeomorphism is given by \eqref{jac} which gives \[ du dv =
\left(G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s)\right) du ds.
\] Observe furthermore that
\[
\mathcal B_\phi(\psi) = \psi_u+\phi\psi_v = \psi_u + (F + G
u)\psi_v = \psi_u + v_u \psi_v = \frac{\partial}{\partial u}
\psi(u,v(u,s)) = \frac{\partial }{\partial u} (\psi \circ \Psi)(u,s).
\]
Thus, we conclude that
\begin{align*}
\mathcal V^H_{II}(S,\psi X_1)\ & = \int_{\mathbb R \times J}
\bigg(\left(\frac{\partial }{\partial u} (\psi \circ \Psi)(u,s) \right)^2
\frac{G'(s) \frac{u^2}{2} + F'(s)u +
\sigma'(s)}{(1+G(s)^2)^{\frac{3}{2}}}
\\
&\qquad\qquad\qquad\qquad
+\ \frac{(\psi\circ
\Psi)(u,s)^2}{(1+G(s)^2)^{\frac{3}{2}}}
\frac{F'(s)^2-2\sigma'(s)G'(s)}{G'(s) \frac{u^2}{2} + F'(s)u +
\sigma'(s)} \bigg) du ds, \notag
\end{align*}
which proves \eqref{T:strip}.
\end{proof}
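\begin{Rem}
Observe that on a strict intrinsic graphical strip, since by \eqref{nd} the quantity $G'(s) \frac{u^2}{2} + F'(s)u + \sigma'(s)$ does not vanish (and is in fact positive once we assume, as we may, that $G', \sigma' > 0$ on $J$), while $F'(s)^2 - 2\sigma'(s)G'(s) < 0$, the second integrand in \eqref{T:strip} is strictly negative wherever $\psi \not= 0$. It is precisely this negative term which is exploited in Section \ref{S:explicit} to produce a variation with $\mathcal V^H_{II}(S,\psi X_1) < 0$.
\end{Rem}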
\section{\textbf{Proof of Theorem \ref{I:unstable}: Strict intrinsic graphical strips are unstable}}\label{S:explicit}
In this section, using the techniques of \cite{DGN:stable} and the
modifications of \cite{DGNP}, we construct a variation which
strictly decreases the horizontal area of a strict intrinsic
graphical strip (that is, we find a test function $\psi$ for which
$\mathcal V^H_{II}(S,\psi X_1) < 0$). To construct such a $\psi$
we start by constructing a sequence $\psi_k$. We will show that,
for large enough $k$, we have $\mathcal V^H_{II}(S,\psi_k X_1) < 0$.
This proves that such surfaces
are unstable, thus establishing Theorem \ref{I:unstable}.
For any given $\delta>0$, we fix a function $\chi \in
C^\infty_0(\mathbb R)$ so that $0 \le \chi(s) \le 1, \chi(s)=1$ for $|s|
\le \delta, \chi(s)=0$ for $|s|\ge 2\delta$, and $|\chi'|\le
C=C(\delta)$. For each $k \in \mathbb{N}$, we let
$\chi_k(s)=\chi(s/k)$ and hence
\begin{itemize}
\item $\chi_k(s) =0$ for $|s|\ge 2\delta k$;
\item $\chi_k(s)=1$ for $|s| \le \delta k$;
\item $|\chi_k'(s)| \le C/k$.
\end{itemize}
Next, fix a function $\zeta \in C^\infty_0(\mathbb{R})$ with $\zeta
\ge 0$, $supp(\zeta)=[-1,1]$ and $\int_\mathbb{R} \zeta \; ds =1$.
Letting $\zeta_k(s)=k\zeta(ks)$, we have that $supp({\zeta}_k) =
[-1/k,1/k]$ and $\int_\mathbb R {\zeta}_k(s)\; ds =1$. Let $F$, $G$ and
$\sigma$ be the functions in Definition \ref{intdeltastrip} with
\begin{equation}\label{E:str-cond}
F'(s)^2 - 2\sigma'(s)G'(s)\ <\ 0 \quad\quad \text{for every } s\in J.
\end{equation}
As we have mentioned in the introduction, without loss of generality
we assume that $G', \sigma'>0$ in $J$. We define $F_k = F\star
{\zeta}_k$, $G_k = G\star \zeta_k$, $\sigma_k = \sigma \star
\zeta_k$. Since $F$, $G$ and $\sigma$ are $C^2$ on $J$, shrinking $J$
slightly if necessary, we may assume that they, together with their
first derivatives, are uniformly continuous on $\bar J$. Therefore $F_k\to F$, $F'_k\to
F'$, $G_k\to G$, $G'_k\to G'$, $\sigma_k \to \sigma$ and
$\sigma_k'\to \sigma'$ uniformly on $\bar J$. The condition
\eqref{E:str-cond} now carries over to $F_k, G_k, \sigma_k$, that
is, there is a positive integer $k_o$ such that if $k > k_o$
(relabeling the sequence if necessary, we take $k_o = 1$) then for
every $s\in J$, $F_k'(s)^2 - 2\sigma_k'(s) G_k'(s) < 0$. The left
hand side of this inequality is precisely the discriminant of the
quadratic expression in the variable $u$:
\[
G'_k(s) \frac{u^2}{2} + F'_k(s)u +
\sigma'_k(s)\ .
\]
Since the discriminant is strictly negative, $G'_k(s) \frac{u^2}{2}+
F'_k(s)u + \sigma'_k(s)$ never vanishes for $u\in\mathbb R$ and $s\in J$.
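More precisely, since $G'_k = G' \star \zeta_k > 0$ (recall that $G' > 0$ in $J$ and $\zeta \geq 0$), completing the square gives the explicit lower bound
\[
G'_k(s) \frac{u^2}{2} + F'_k(s)u + \sigma'_k(s)\ =\ \frac{G'_k(s)}{2}\left(u + \frac{F'_k(s)}{G'_k(s)}\right)^2 + \frac{2\sigma'_k(s)G'_k(s) - F'_k(s)^2}{2 G'_k(s)}\ \geq\ \frac{2\sigma'_k(s)G'_k(s) - F'_k(s)^2}{2 G'_k(s)}\ >\ 0,
\]
valid for every $u\in\mathbb R$ and $s\in J$.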
Next, we construct a sequence of test functions $\psi_k$ to be used
in the formula \eqref{T:strip}. We let
\begin{equation}\label{E:tests}
\psi_k(u,s) \overset{def}{=} \frac{\chi(s)\chi_k(u)}{\left(G'_k(s)
\frac{u^2}{2} + F'_k(s)u + \sigma'_k(s)\right)^{\frac{1}{2}}}\ .
\end{equation}
We note that $\psi_k\in C^\infty_0(\mathbb R\times J)$ due to the above
considerations. With $\psi_k$ in hand, we analyze $\mathcal
V^H_{II}(S,\psi_k X_1)$. Before proceeding to the computations,
we remark that the function $\psi$ in \eqref{T:strip} is defined on
$\Omega = \Psi(\mathbb R\times J)$. Our $\psi_k$'s have already been defined
on the $(u,s)$ space, that is on $\mathbb R\times J$. Therefore,
occurrences of $\psi\circ\Psi$ in \eqref{T:strip} will be replaced
by $\psi_k$ in the proofs of the two subsequent lemmas. We start
with the second term in the right hand side of \eqref{T:strip}.
\begin{Lem}\label{RHS} We have
\begin{equation*}
\begin{split}
\lim_{k \rightarrow \infty} \int_{\mathbb R \times J} \frac{\psi_k(u,s)^2}{(1+G(s)^2)^{\frac{3}{2}}}
&\,\frac{F'(s)^2-2\sigma'(s)G'(s)}{G'(s) \frac{u^2}{2} + F'(s)u +
\sigma'(s)}\, du\;ds \\
&\ =\ -2\pi\int_J
\frac{\chi(s)^2}{(1+G(s)^2)^\frac{3}{2}}
\,\frac{G'(s)}{(2\sigma'(s)G'(s) - F'(s)^2)^{\frac{1}{2}}} \, ds
\end{split}
\end{equation*}
\end{Lem}
\begin{proof}[\textbf{Proof}]
Substituting the quantity $\psi\circ\Psi$ with $\psi_k$ in the second term
of the right hand side of \eqref{T:strip} and recalling the definition of
$\psi_k$ we have
\begin{align}\label{tmp0}
& \lim_{k \rightarrow \infty} \int_{\mathbb R \times J}
\frac{\psi_k(u,s)^2}{(1+G(s)^2)^{\frac{3}{2}}}\,
\frac{F'(s)^2-2\sigma'(s)G'(s)}{G'(s)\frac{u^2}{2} + F'(s) u +
\sigma'(s)}\, du \; ds \\ &\ =\ \lim_{k \rightarrow \infty} \int_J
\chi(s)^2\frac{F'(s)^2 -
2\sigma'(s)G'(s)}{(1+G(s)^2)^{\frac{3}{2}}} \notag\\
& \times \left(\int_{\mathbb R} \frac{\chi_k(u)^2}{(G_k'(s)\frac{u^2}{2} +
F_k'(s) u + \sigma_k'(s)) (G'(s)\frac{u^2}{2} + F'(s) u +
\sigma'(s))} \, du\right) ds \notag\\ &\ =\ \int_J
\chi(s)^2\frac{F'(s)^2-2\sigma'(s)G'(s)}{(1+G(s)^2)^{\frac{3}{2}}}
\left(\int_\mathbb R \frac{1}{(G'(s)\frac{u^2}{2} + F'(s) u +
\sigma'(s))^2}\, du \right ) ds.\notag
\end{align}
In the above, we have used the fact that, since for each $u\in\mathbb R$,
\[
G_k'(s)\frac{u^2}{2} + F_k'(s) u + \sigma_k'(s) \longrightarrow
G'(s)\frac{u^2}{2} + F'(s) u + \sigma'(s) \quad\text{as } k \to \infty
\]
uniformly for $s\in \bar J$, and the latter quantity never vanishes,
for all sufficiently large $k$ one has
\[
\frac{1}{2} \left|G'(s)\frac{u^2}{2} + F'(s) u + \sigma'(s) \right|
\ <\
\left|G_k'(s)\frac{u^2}{2} + F_k'(s) u + \sigma_k'(s)\right|
\ <\
2 \left|G'(s)\frac{u^2}{2} + F'(s) u + \sigma'(s)\right|\ .
\]
Hence, the Lebesgue dominated convergence theorem allows taking the
limit inside the integral. Next, we want to compute the integral
\[
\int_\mathbb R \frac{1}{(G'(s)\frac{u^2}{2} + F'(s) u + \sigma'(s))^2}\,
du.
\]
Using standard integration techniques we obtain
\[
\int \frac{1}{(Au^2 + Bu + C)^2} \,du
\ =\
\frac{2Au + B}{(4AC-B^2)(Au^2+Bu+C)}\ +\
\frac{4A}{(4AC - B^2)^\frac{3}{2}}\, \arctan\left(\frac{2Au+B}{\sqrt{4AC - B^2}}\right)\ .
\]
This implies that, if $A > 0$,
\[
\int_{\mathbb R} \frac{1}{(Au^2 + Bu + C)^2} \,du
\ =\
\frac{4\pi A}{(4AC - B^2)^\frac{3}{2}}\ .
\]
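We note that the value of this definite integral can also be verified directly: letting $\Delta = 4AC - B^2 > 0$, completing the square gives $Au^2 + Bu + C = \frac{\Delta}{4A}(1 + w^2)$, where $w = \frac{2Au + B}{\sqrt{\Delta}}$, and therefore
\[
\int_{\mathbb R} \frac{1}{(Au^2 + Bu + C)^2}\, du\ =\ \frac{16 A^2}{\Delta^2}\cdot \frac{\sqrt{\Delta}}{2A} \int_{\mathbb R} \frac{dw}{(1+w^2)^2}\ =\ \frac{8A}{\Delta^{3/2}}\cdot \frac{\pi}{2}\ =\ \frac{4 \pi A}{(4AC - B^2)^{3/2}}\ .
\]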
Since we have that $G'(s) > 0$, letting $A = G'(s)/2$, $B = F'(s)$ and $C = \sigma'(s)$ we have
\begin{equation}\label{E:tmp1}
\int_\mathbb R \frac{1}{(G'(s)\frac{u^2}{2} + F'(s) u + \sigma'(s))^2}\, du
\ =\ 2\,\pi \frac{G'(s)}{(2\sigma'(s)G'(s) - F'(s)^2)^\frac{3}{2}}\ .
\end{equation}
Substituting \eqref{E:tmp1} in \eqref{tmp0} we reach the desired
conclusion.
\end{proof}
Now we turn to the first term in the right hand side of \eqref{T:strip}.
\begin{Lem}\label{LHS} We have
\begin{align*}
& \lim_{k \rightarrow \infty}
\int_{\mathbb R \times J}
\left(\left(\frac{\partial
\psi_k(u,s)}{\partial u} \right )^2
\frac{G'(s)\frac{u^2}{2} + F'(s) u + \sigma'(s)}{(1+G(s)^2)^{\frac{3}{2}}}\right
)\,du \; ds \\
& \qquad\qquad
\ = \frac{\pi}{2}\int_J
\frac{\chi(s)^2}{(1+G(s)^2)^{\frac{3}{2}}}\,
\frac{G'(s)}{(2\sigma'(s)G'(s)-F'(s)^2)^{\frac{1}{2}}} \, ds
\end{align*}
\end{Lem}
\begin{proof}[\textbf{Proof}]
Again, we closely follow the development in \cite{DGNP}. By
recalling \eqref{E:tests} we first obtain
\[
\frac{\partial \psi_k}{\partial u}(u,s) \ =\
\frac{\chi(s)}{2}\left(\frac{2\chi_k'(u) Q_k(u,s) - \chi_k(u)
D_k(u,s)}{Q_k(u,s)^\frac{3}{2}}\right),
\]
where we have let
\[
Q_k(u,s) = G_k'(s)\frac{u^2}{2} + F_k'(s) u + \sigma_k'(s)
\quad\text{and}\quad D_k(u,s) = u G_k'(s) + F_k'(s).
\]
For the computations that follow, it is convenient to also let
\[
Q(u,s) = G'(s)\frac{u^2}{2} + F'(s) u + \sigma'(s)
\quad\text{and}\quad D(u,s) = \frac{\partial}{\partial u} Q(u,s) = u
G'(s) + F'(s).
\]
It follows that
\[
\left(\frac{\partial \psi_k}{\partial u}(u,s)\right)^2 =
\chi(s)^2\left(\frac{\chi_k'(u)^2}{Q_k(u,s)} -
\frac{1}{2}(\chi_k(u)^2)'\frac{D_k(u,s)}{Q_k(u,s)^2} +
\frac{1}{4}\chi_k(u)^2 \frac{D_k(u,s)^2}{Q_k(u,s)^3}\right).
\]
Substituting the quantity $\psi\circ\Psi$ in the first term of the
right hand side of \eqref{T:strip}, and using the above expression
for $\psi_{k,u}$, we have
\begin{equation*}
\begin{split}
\int_{\mathbb R\times J} \left(\frac{\partial
\psi_k(u,s)}{\partial u} \right )^2
\frac{G'(s)\frac{u^2}{2} + F'(s) u +
\sigma'(s)}{(1+G(s)^2)^{\frac{3}{2}}} \, du\;ds & = \int_J
\frac{\chi(s)^2}{(1+G(s)^2)^{\frac{3}{2}}}
\left(\,\fbox{1}+\fbox{2}+\fbox{3}\, \right )\,ds
\end{split}
\end{equation*}
where,
\begin{align*}
\fbox{1} = \int_\mathbb R \chi_k'(u)^2 &\frac{Q(u,s)}{Q_k(u,s)}\, du,
\qquad
\fbox{2} = -\frac{1}{2}\int_\mathbb R (\chi_k^2(u))'Q(u,s)\frac{D_k(u,s)}{Q_k(u,s)^2}\, du, \\
& \fbox{3} = \frac{1}{4} \int_\mathbb R \chi_k(u)^2 Q(u,s)
\frac{D_k(u,s)^2}{Q_k(u,s)^3} \, du.
\end{align*}
Since $|\chi_k'(u)| \le \frac{C}{k}$, $\chi_k'$ is supported in
$\{|u|\le 2\delta k\}$, and $Q/Q_k \le 2$ for $k$ large, we have
\begin{equation}\label{LHS0}
\lim_{k \rightarrow \infty} \fbox{1} =0.
\end{equation}
In addition, since $D_k(u,s) \rightarrow D(u,s)$, $Q_k(u,s) \rightarrow Q(u,s)$, and
$\chi_k(u) \rightarrow 1$ as $k \rightarrow \infty$, we obtain
\begin{align}\label{LHS3}
\lim_{k \rightarrow \infty} \fbox{3}
&\ =\ \frac{1}{4} \int_\mathbb R \frac{D(u,s)^2}{Q(u,s)^2}\, du
\ =\
-\,\frac{1}{4} \int_\mathbb R \frac{\partial}{\partial u} Q(u,s)\,\frac{\partial}{\partial u}\left(\frac{1}{Q(u,s)}\right)\,du \\
\notag
&\ =\ \frac{1}{4}\int_\mathbb R\frac{\partial^2 Q(u,s)}{\partial u^2}\,\frac{1}{Q(u,s)}\,du
\ =\ \frac{1}{4}\int_\mathbb R \frac{G'(s)}{G'(s)\frac{u^2}{2} + F'(s) u + \sigma'(s)}\,du \\
\notag
&\ =\ \frac{\pi\,G'(s)}{2\,(2\sigma'(s)G'(s) - F'(s)^2)^\frac{1}{2}}\ .
\end{align}
The third equality above is obtained by integration by parts, whereas
in the last equality we have used the fact that $G'(s)>0$, along with
the identity $\int_\mathbb R (Au^2 + Bu + C)^{-1}\, du = 2\pi(4AC - B^2)^{-\frac{1}{2}}$,
valid for $A>0$ and $4AC - B^2 > 0$. Now we turn to the quantity
$\fbox{2}$.
\begin{align}\label{LHS4}
\lim_{k \rightarrow \infty}\fbox{2}
&\ =\ -\lim_{k \rightarrow \infty} \frac{1}{2}
\int_\mathbb R \left(\chi_k(u)^2\right)' Q(u,s)\frac{D_k(u,s)}{Q_k(u,s)^2}\,du \notag\\
& =
- \lim_{k \rightarrow \infty}\frac{1}{2}\int_\mathbb R \chi_k(u)^2 \frac{\partial}{\partial u}\left(\frac{Q(u,s)\,D_k(u,s)}{Q_k(u,s)^2}\right)\,du
\notag\\ & = - \lim_{k \rightarrow \infty}\frac{1}{2}\int_\mathbb R \chi_k(u)^2
\bigg(\frac{Q_u(u,s)\,D_k(u,s)}{Q_k(u,s)^2}
\notag\\
& + \frac{Q(u,s)\,D_{k,u}(u,s)}{Q_k(u,s)^2} -
2\,\frac{Q(u,s)\,D_k(u,s)\,Q_{k,u}(u,s)}{Q_k(u,s)^3}\bigg)\,du
\notag\\ & = - \frac{1}{2}\int_\mathbb R \left(\frac{Q_u(u,s)\,D(u,s)}{Q(u,s)^2}
+ \frac{D_u(u,s)}{Q(u,s)} - 2\,\frac{D(u,s)Q_u(u,s)}{Q(u,s)^2}\right)du
\notag\\ & =
-\frac{1}{2}\int_\mathbb R \frac{G'(s)}{Q(u,s)}\,du -
\frac{1}{2}\int_\mathbb R \frac{\partial}{\partial u} Q(u,s)\,\frac{\partial}{\partial u}\left(\frac{1}{Q(u,s)}\right)\,du \notag\\
& = -\frac{1}{2}\int_\mathbb R \frac{G'(s)}{Q(u,s)}\,du +
\frac{1}{2}\int_\mathbb R \frac{Q_{uu}(u,s)}{Q(u,s)}\,du = 0, \notag
\end{align}
since $Q_{uu}(u,s) = G'(s)$. Combining \eqref{LHS0}, \eqref{LHS3}
and \eqref{LHS4}, we obtain the desired conclusion.
\end{proof}
Combining \eqref{T:strip} with Lemmas \ref{RHS} and \ref{LHS} we can
now prove Theorem \ref{I:unstable} in the introduction.
\vspace{.2in}
\noindent
\begin{proof}[\textbf{Proof of Theorem \ref{I:unstable}.}]
Let $\psi_k$ be the function constructed in \eqref{E:tests} and
consider $\psi_k\circ \Psi^{-1}\in C^2_0(\Omega)$, where $\Psi$ is the
diffeomorphism in \eqref{if}. If we lift this function to the
surface, continuing by abuse of notation to denote the lifted
function by $\psi_k$, we obtain a function in $C^2_0(S)$.
From Corollary \ref{T:varstrip}, Lemmas \ref{RHS}, \ref{LHS} and the
fact that $G'(s) > 0$ on $J$ we deduce that
\[
\lim_{k \rightarrow \infty}\mathcal V^H_{II}(S,\psi_k X_1)
=
\left(\frac{\pi}{2}-2\pi\right)\int_J
\frac{\chi(s)^2}{(1+G(s)^2)^\frac{3}{2}}
\,\frac{G'(s)}{(2\sigma'(s)G'(s) - F'(s)^2)^{\frac{1}{2}}} \, ds
< 0.
\]
Therefore, for large enough $k$ we have $\mathcal V^H_{II}(S,\psi_k
X_1) < 0$. This completes the proof.
\end{proof}
\section{\textbf{Proof of Theorem \ref{T:existstrip0}: Existence of strict intrinsic graphical strips}}\label{S:strips}
The main objective of this section is to establish the crucial
Theorem \ref{T:existstrip0} in the introduction. The proof of this
result will be accomplished in several steps. Before we turn to the
general discussion it will be helpful for the understanding of
Definition \ref{intdeltastrip} to analyze directly the situation of
the surfaces introduced in \eqref{cat0}.
\subsection{The sub-Riemannian catenoid is unstable}\label{SS:catenoid}
In what follows we illustrate the construction of a strict intrinsic
graphical strip for the hyperboloids of revolution in $\mathbb H^1$
described by \eqref{cat0}. This is an interesting example of a
complete embedded minimal surface in $\mathbb H^1$ which has empty
characteristic locus and which is neither a graph over any plane,
nor an intrinsic graph in the sense of \cite{FSSC2}, \cite{FSSC3}.
Such a surface should be considered as the sub-Riemannian analogue of
the \emph{catenoid} in the classical theory of minimal surfaces. We
emphasize that \eqref{cat0} does not contain any strict graphical
strip in the sense of \cite{DGNP}, and therefore the results in that
paper do not apply to it. Instead, as a consequence of the following
calculations and Theorem \ref{I:unstable} we are able to conclude
that the surface \eqref{cat0} is unstable. To fix the ideas we will
focus on the case $a=0, b=4$, in which case we have from
\eqref{cat0}
\begin{equation}\label{cat}
t^2\ =\ \frac{1}{4}\left((x^2+y^2)-1\right).
\end{equation}
A local parametrization of $S$ as a ruled surface is given by
\begin{equation}\label{theta}
\theta(r,s) = \left(r \sin s + \cos s, r \cos s - \sin s,
\frac{r}{2}\right),\ \ \ r\in \mathbb R, -\pi< s < \pi.
\end{equation}
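One can check directly that \eqref{theta} parametrizes \eqref{cat}: from \eqref{theta} we have
\[
x^2 + y^2 = (r \sin s + \cos s)^2 + (r \cos s - \sin s)^2 = r^2 + 1,
\]
whereas $t = \frac{r}{2}$ gives $t^2 = \frac{r^2}{4} = \frac{1}{4}\left((x^2+y^2)-1\right)$.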
Clearly, if we consider the open set $U = \mathbb R \times (-\pi,\pi)$,
then $\theta(U)$ does not cover the whole catenoid, but this fact is
inconsequential for what follows. We now consider the projection
mapping $\Pi:\mathbb R^3 \to \mathbb R^2\times \{0\}$ given by
\[
\Pi(x,y,t) = (0,y,t+\frac{xy}{2}).
\]
We thus have
\begin{align*}
\Pi(\theta(U)) & = \left(0,r \cos s - \sin s,\frac{r}{2} + \frac{(r
\sin s + \cos s)(r \cos s - \sin s)}{2}\right)
\\
& = \left(0,r \cos s - \sin s, \frac{r^2}{2} \sin s \cos s + r
\cos^2 s - \frac{\sin s \cos s}{2}\right).
\end{align*}
We now define a mapping from the $(r,s)$ to the $(u,s)$ plane by
setting \[ \Lambda(r,s) = (r \cos s - \sin s,s). \] With
$\epsilon\in (0,\pi/4)$ to be chosen later, and
\[
U_\epsilon = \mathbb R \times (-\epsilon,\epsilon),
\]
it is clear that $\Lambda$ is a $C^\infty$ diffeomorphism of
$U_\epsilon$ onto its image $\Lambda(U_\epsilon)$. Notice that,
since $\cos s \neq 0$ for $-\epsilon < s < \epsilon$, for each fixed
$s$ the map $r \mapsto r \cos s - \sin s$ is a bijection of $\mathbb R$
onto itself, and therefore $\Lambda(U_\epsilon) = U_\epsilon$. Let us
notice that the inverse diffeomorphism is given by
\[ (r,s) = \Lambda^{-1}(u,s) = \left(\frac{u+\sin s}{\cos
s},s\right) = (u \sec s + \tan s, s).
\]
Next, we define a mapping from the $(r,s)$ to the $(u,v)$ plane by
setting
\[
\Phi(r,s) = (u,v)
\]
with
\begin{equation}\label{1}
\begin{cases}
u = r \cos s - \sin s,
\\
v = \frac{r^2}{2} \sin s \cos s + r \cos^2 s - \frac{\sin s \cos
s}{2}.
\end{cases}
\end{equation}
We want to show that $\Phi$ is a diffeomorphism onto its image. To
see this we take the composition $\Psi = \Phi \circ \Lambda^{-1} :
U_\epsilon \to \mathbb R^2$, which maps the $(u,s)$ to the $(u,v)$ plane.
We obtain
\[
(u,v) = \Psi(u,s) = \Phi(\Lambda^{-1}(u,s)) = \left(u,G(s)
\frac{u^2}{2} + F(s) u + \sigma(s)\right),
\]
where
\[
\begin{cases}
G(s) = \tan s,
\\
F(s) = \sec s,
\\
\sigma(s) = \frac{\tan s}{2}.
\end{cases}
\]
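For the reader's convenience we indicate the computation behind these expressions. Substituting $r = u \sec s + \tan s$ in the second equation of \eqref{1}, and using twice the identity $\sin^2 s + \cos^2 s = 1$, one finds
\[
v = \frac{\tan s}{2}\, u^2 + \left(\frac{\sin^2 s}{\cos s} + \cos s\right) u + \frac{\sin^3 s}{2 \cos s} + \frac{\sin s \cos s}{2} = \frac{\tan s}{2}\, u^2 + \sec s\ u + \frac{\tan s}{2}.
\]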
Let us observe that the determinant of the Jacobian of $\Psi(u,s)$
at any point $(u,s)\in U_\epsilon$ is given by
\[
G'(s)\frac{u^2}{2} + F'(s)u + \sigma'(s) = \frac{\sec^2 s}{2}
\left[u^2 + 2 \sin s\ u + 1 \right].
\]
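Here we have used the derivatives
\[
G'(s) = \sec^2 s, \ \ \ F'(s) = \sec s \tan s = \sec^2 s\, \sin s, \ \ \ \sigma'(s) = \frac{\sec^2 s}{2}.
\]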
Since for the quadratic expression within the square brackets we
have
\[
\Delta = \sin^2 s - 1 < 0,
\]
it is clear that such determinant never vanishes. We next show that
$\Psi$ is globally one-to-one on $U_\epsilon$ provided that
$\epsilon>0$ is chosen sufficiently small. Suppose by contradiction
that $(u,s), (u',s')\in U_\epsilon$, $(u,s) \not= (u',s')$, and
$\Psi(u,s) = \Psi(u',s')$. We cannot have $u\not= u'$, since the
first component of $\Psi(u,s)$ is $u$ itself. We can thus suppose that $s\not=
s'$, but $u= u'$. Since $\tan s$ is strictly increasing, $s\not= s'$
implies $G(s)\not= G(s')$. But then we must have
\begin{equation}\label{2}
u^2 + 2 \frac{F(s) - F(s')}{G(s) - G(s')} u + 1 = 0.
\end{equation}
We would like to show that there exists $0<\epsilon <\pi/4$ such
that for every $s,s'\in (-\epsilon,\epsilon)$, with $s\not= s'$, one
has
\begin{equation}\label{3}
\left(\frac{F(s) - F(s')}{G(s) - G(s')}\right)^2 < 1.
\end{equation}
If this were the case then we would reach a contradiction since this
implies that the equation \eqref{2} has no real solutions. Now
\eqref{3} is equivalent to
\begin{equation}\label{4}
\left(\frac{\sec s - \sec s'}{\tan s - \tan s'}\right)^2 < 1,
\end{equation}
for every $s,s'\in (-\epsilon,\epsilon)$, with $s\not= s'$. Without
restriction we can assume $s<s'$, otherwise we reverse their role.
Using the mean value theorem we find that for some $\xi, \xi'\in
(s,s')\subset (-\epsilon,\epsilon)$
\[
\frac{\sec s - \sec s'}{\tan s - \tan s'} = \frac{\sec \xi \tan
\xi}{1 + \tan^2 \xi'} \to 0,\ \ \ \text{as}\ \epsilon \to 0^+.
\]
Therefore, we can achieve \eqref{3} provided that $\epsilon > 0$ is
sufficiently small. Having fixed $\epsilon$ in such a way, the map
$\Psi : U_\epsilon \to \mathbb R^2$ defines a $C^\infty$ diffeomorphism
from the $(u,s)$ plane onto its image $V_\epsilon \overset{def}{=}
\Psi(U_\epsilon)$, which is an open set of the $(u,v)$ plane. We now
claim that there exists $\delta = \delta(\epsilon)>0$ such that
\begin{equation}\label{strip}
\Omega \overset{def}{=} \mathbb R \times (-\delta,\delta) \subset V_\epsilon.
\end{equation}
To prove \eqref{strip} it suffices to show that, as $s$ ranges over
the interval $(-\epsilon,\epsilon)$, the $v$-coordinates of the
vertices of the parabolas $v = v(u,s) = \frac{\tan s}{2} u^2 + \sec
s\ u + \frac{\tan s}{2}$ remain uniformly bounded away from zero. Let
us notice that the line $s = 0$ in the $(u,s)$ plane is mapped to
the line $v=u$ of the $(u,v)$ plane. For $s\not= 0$ the $v$
coordinate of the vertex of the parabola is given by
\[
v(s) = - \frac{\sec^2 s(1-\sin^2 s)}{2 \tan s} = - \frac{\cot s}{2}.
\] Now on the interval $0<s<\epsilon$ we have $v(s)\to -\infty$ as
$s\to 0^+$, whereas on $(-\epsilon,0)$ we have $v(s) \to + \infty$
as $s\to 0^-$. Since $\cot s$ is strictly decreasing on each of the
intervals $(-\epsilon,0)$ and $(0,\epsilon)$, we have $|v(s)| >
\frac{\cot \epsilon}{2}$ for $0<|s|<\epsilon$, and we conclude that if we take
\[
\delta = \delta(\epsilon) = \frac{\cot \epsilon}{2},
\]
then \eqref{strip} is verified. Since the composition of
diffeomorphisms is a diffeomorphism as well, we conclude that
\[
\Phi \overset{def}{=} \Psi \circ \Lambda : U_\epsilon \to V_\epsilon
\subset \mathbb R^2_{u,v}
\] is also a diffeomorphism, and that $\Omega \subset V_\epsilon$. At this point, using the inverse
diffeomorphism $\Phi^{-1}: \Omega \to U_\epsilon$, we define
\[
\phi(u,v) = \theta_1(\Phi^{-1}(u,v)),\ \ \ (u,v)\in \Omega,
\]
where $\theta_1(r,s) = r \sin s + \cos s$ is the first component of
the map $\theta$ in \eqref{theta}. Notice that
\[
\phi(u,v) = F(s(u,v)) + G(s(u,v)) u,
\]
where $(r(u,v),s(u,v))$ is the inverse diffeomorphism of \eqref{1}.
With this definition of $\phi$ we now see that portion of the
catenoid which is parametrized by $\theta$ on the open set
$U_\epsilon = \mathbb R\times (-\epsilon,\epsilon)$ is in fact given as the
$X_1$-graph
\[
\left(\phi(u,v),u, v - \frac{u}{2} \phi(u,v)\right),
\]
for $(u,v)\in \Omega$. Finally, let us notice that such piece of the
surface is a strict intrinsic graphical strip in the sense of
Definition \ref{intdeltastrip} since the condition
\[
F'(s)^2 < 2 G'(s) \sigma'(s),\ \ \ \ s\in (-\epsilon,\epsilon),
\]
is verified.
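Indeed, with $G(s) = \tan s$, $F(s) = \sec s$ and $\sigma(s) = \frac{\tan s}{2}$, one has
\[
F'(s)^2 = \sec^2 s\, \tan^2 s\ <\ \sec^4 s\ =\ 2\, G'(s)\, \sigma'(s),
\]
since $\tan^2 s < \sec^2 s$ for every $s\in (-\epsilon,\epsilon)$.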
\subsection{Proof of Theorem \ref{T:existstrip0}}\label{SS:MT}
The above analysis should allow the reader a clear understanding of
the motivation behind the Definition \ref{intdeltastrip} of strict
intrinsic graphical strip. Our next objective is proving that,
similarly to the sub-Riemannian catenoid, every complete minimal
surface without boundary and with empty characteristic locus
contains a strict intrinsic graphical strip, unless the surface is a
vertical plane. In this general case the construction of the strict
graphical strip is more difficult. Our approach hinges on the
following basic representation theorem for minimal surfaces which is
a consequence of the results in \cite{GP}, and which has already
proved crucial in \cite{DGNP}.
\begin{Thm}\label{T:thma}
Let $S$ be a $C^2$ complete embedded non-characteristic minimal
surface without boundary and assume that it is not a vertical plane.
Let $g_0\in S$ be a point admitting a neighborhood (in $S$) that may
be written as a graph over the plane $t=0$. There exist a
neighborhood $U$ of $g_0$, an interval $J$, and functions $h_0 \in
C^2(J)$, $\gamma \in C^3(J,\mathbb R^2)$, with $|\gamma'(s)| = 1$ for $s\in
J$, such that $U$ is parameterized by $\mathscr{L}: \mathbb R \times J \to
\mathbb{H}$
\begin{equation}\label{seedrep}
\mathscr{L}(r,s)\ =\
\left (\gamma(s)+r{(\gamma')}^\perp(s),h_0(s)-\frac{r}{2}\gamma(s) \cdot
\gamma'(s)\right )
\end{equation}
for $s \in J, r \in \mathbb R$. Moreover, with $W_0(s)=h_0'(s)+\frac{1}{2}\gamma' \cdot
\gamma^\perp(s)$ and $\kappa(s)=\gamma'' \cdot (\gamma')^\perp$, we have that
\begin{equation}\label{cl}
1-2W_0(s)\kappa(s)\ < 0\ ,\ \ \ s\in J\ .
\end{equation}
\end{Thm}
The proof of Theorem \ref{T:thma} will be presented after Corollary
\ref{foliation} below. We first develop some preparatory results.
\begin{Lem}\label{L:ruled}
Let $D\subset \mathbb R^2$ be an open set, $g\in C^2(D)$, and consider the
$C^2$ map $G:D \rightarrow \mathbb{H}^1$ given by
$G(x,y)=(x,y,g(x,y))$. Suppose that $S=G(D)$ is a non-characteristic
minimal surface. Then $S$ is foliated by horizontal straight lines
which are the integral curves of $\boldsymbol \nu_H^\perp = \overline q X_1 - \overline p X_2$.
\end{Lem}
\begin{proof}[\textbf{Proof}]
Writing $S$ as the level set $\phi(x,y,t) =
g(x,y)-t=0$ we have that
\[\boldsymbol \nu_H = \overline p\;X_1+\overline q\;X_2,\]
where
\[\overline p=\frac{X_1 \phi}{\sqrt{(X_1 \phi)^2+(X_2 \phi)^2}},\; \overline q=\frac{X_2 \phi}{\sqrt{(X_1 \phi)^2+(X_2 \phi)^2}}.\]
The reader should keep in mind here that
\begin{equation}\label{pq}
p = X_1 \phi = X_1g +\frac{y}{2} = g_x + \frac{y}{2},\ \ \ q = X_2
\phi = X_2g -\frac{x}{2} = g_y - \frac{x}{2}.
\end{equation}
We emphasize that the assumption that $S$ be non-characteristic is
equivalent to
\[
W = \sqrt{(X_1 \phi)^2+(X_2 \phi)^2 }\neq 0\ \ \ \ \text{on}\ D.
\] By Proposition \ref{P:mc} we see that the assumption that $S$ be
minimal reads
\[X_1\overline p + X_2\overline q\ =\ 0,\]
which, using the fact that $\overline p, \overline q$ are independent of $t$, is
equivalent to
\begin{equation}\label{MSEp}
div V =\overline p_x+\overline q_y\ =\ 0.
\end{equation}
Here, we view $V=\overline p\partial_x+\overline q\partial_y$ as a vector field on
$D$ and $div$ is the Euclidean divergence. We now claim that if
$c(s)=(c_1(s),c_2(s))\subset D$ is an integral curve in $D$ of
$V^\perp = \overline q \partial_x - \overline p
\partial_y$, then $C(s)=(c_1(s),c_2(s),g(c_1(s),c_2(s)))$ must be an
integral curve of $\boldsymbol \nu_H^\perp$ on $S$. To see this suppose that
$c'(s) = V^\perp(c(s))$, which means $c_1'(s) = \overline q(c(s)), c_2'(s) =
- \overline p(c(s))$. Now from these equations and from \eqref{pq} one has
\begin{equation}\label{Tcomponent}
c_1'\left(g_x(c) + \frac{c_2}{2}\right) + c_2'\left(g_y(c) -
\frac{c_1}{2}\right) = \overline q(c) p(c) - \overline p(c) q(c) = \frac{q(c) p(c) -
p(c) q(c)}{W} = 0,
\end{equation}
where for simplicity we have omitted the variable $s$ when writing
$c$ instead of $c(s)$. Now,
\[
C'(s) = (c_1'(s),c_2'(s),g_x(c_1(s),c_2(s))c_1'(s) +
g_y(c_1(s),c_2(s)) c_2'(s)).
\]
Using the formula
\begin{equation}\label{passage}
a X_1 + b X_2 + c T = \left(a,b,c+ \frac{bx - ay}{2}\right),
\end{equation}
which allows to pass from the standard representation in terms of
the Cartesian coordinates in $\mathbb H^1$ to that with respect to the
orthonormal basis $\{X_1,X_2,T\}$, we find
\begin{align}\label{horiz}
C'(s)&= c_1'(s) X_1(c(s)) +c_2'(s) X_2(c(s)) \\
& +\left (\nabla g(c_1(s),c_2(s)) \cdot
(c_1'(s),c_2'(s))+\frac{c_1'(s)c_2(s)-c_1(s)c_2'(s)}{2} \right ) T.
\notag\end{align}
From \eqref{Tcomponent} we conclude that the component of $C'(s)$
with respect to $T$ is identically equal to zero, and therefore
\[
C'(s) = c_1'(s) X_1(c(s)) + c_2'(s) X_2(c(s)) = \overline q(c(s)) X_1(c(s))
- \overline p(c(s))X_2(c(s)) = \boldsymbol \nu_H^\perp(c(s)).
\]
This proves the claim.
Since $V$ is a unit vector field, we have that
\begin{equation}\label{uniteqs}
\overline p\ \overline p_x+ \overline q\ \overline q_x = \overline p\ \overline p_y + \overline q\ \overline q_y = 0.
\end{equation}
Combining equations \eqref{uniteqs} and \eqref{MSEp}, we conclude
that if $c(s) = (c_1(s),c_2(s))$ is an integral curve of $V^\perp$,
with $c(0) = z = (x,y)\in D$, then \[ \frac{d}{ds} V^\perp(c(s))
\equiv 0\ ,
\] and therefore $V^\perp(c(s)) \equiv V^\perp(c(0)) = V^\perp(z)$. It follows that
\[
c(s) = z + s V^\perp(z), \] i.e., $c(s)$ is a segment of straight
line in $D$ passing through $z$. If we write $c(s)=(x + as,y+
bs)$, then
\[
C(s)=(x+as,y+bs,g(x+as,y+bs)),
\]
and so
\[
C'(s) = (a,b,a g_x + b g_y).
\]
At this point we note that the vanishing of the $T$ component in
\eqref{Tcomponent} now implies that
\[
\frac{d}{ds} g(x+as,y+bs) = g_x a + g_y b = -
\frac{c_1'(s)c_2(s)-c_1(s)c_2'(s)}{2} = \frac{bx-ay}{2}.
\]
This gives
\[
g(x+as,y + bs) = g(z) + \frac{bx-ay}{2}s,
\]
and therefore,
\[
C(s)=\left(x+as,y+bs,g(z) + \frac{bx-ay}{2}s\right),
\]
i.e., $C(s)$ is a straight line segment in $\mathbb H^1$.
\end{proof}
\begin{Lem}\label{L:lemma2} Let $S$
be a $C^2$ non-characteristic minimal surface such that no open
subset of $S$ may be written as a graph over the $xy$-plane. Then,
$S$ is a piece of a vertical plane and, hence, is foliated by
horizontal straight lines which are the integral curves of
$\boldsymbol \nu_H^\perp$.
\end{Lem}
\begin{proof}[\textbf{Proof}]
Let $(x_0,y_0,t_0)\in S$ and let $U\subset \mathbb H^1$ be an open
neighborhood of $(x_0,y_0,t_0)$ such that $S\cap U = \{(x,y,t)\in
U\mid \phi(x,y,t) = 0\}$ for a $\phi\in C^2(U)$ having $\nabla
\phi\not= 0$ in $U$. By the assumption that no open subset of $S$
may be written as a graph over the $xy$-plane, we see that it must
be $\phi_t=0$ in $U$. Then, $\phi(x,y,t)=\phi_0(x,y)$ in $U$ and
therefore $S\cap U$ is a portion of a ruled surface over a curve
$c$ in the $xy$-plane. Furthermore, due to the special structure of
$\phi$ one easily recognizes that the assumption that $S$ be
$H$-minimal now translates into the requirement that the level
curves of $\phi_0$ have vanishing curvature, i.e.,
\begin{equation*} div \left(\frac{\nabla
\phi_0}{|\nabla \phi_0|}\right) = 0\ , \end{equation*} on
the open set $\tilde U = \pi(U)\subset \mathbb R^2$, where $\pi(x,y,t) =
(x,y)$. This equation is in fact equivalent to
\begin{equation}\label{lmse} \phi_{0,y}^2\ \phi_{0,xx} - 2
\phi_{0,x} \phi_{0,y} \phi_{0,xy} + \phi_{0,x}^2\ \phi_{0,yy}\
=\ 0\ .
\end{equation}
Since $\nabla \phi_0 \not= 0$ in $U$, by the Implicit Function
Theorem, we may locally describe the curve $c$ by either $y=g(x)$ or
$x=f(y)$. In the former case, we have $\phi_0(x,y) = y - g(x)$, and
thus \eqref{lmse} implies that $g''=0$. We conclude that there
exists an open set $V\subset \mathbb H^1$ containing $(x_0,y_0,t_0)$ such
that $S\cap V$ is a piece of a vertical plane. The second case leads
to the same conclusion. By the assumption that $S$ be $C^2$ we now
conclude that if for two such different open sets $V_1, V_2$ one has
$V_1\cap V_2\not= \varnothing$, then the two corresponding portions
of planes $S\cap V_1$ and $S\cap V_2$ must be part of the same
plane. This completes the proof.
\end{proof}
In the next lemma we combine into a single result the two different
situations considered in Lemmas \ref{L:ruled} and \ref{L:lemma2}.
\begin{Lem}\label{locfol} Let $S$ be a $C^2$ minimal
surface in $\mathbb{H}^1$ with empty characteristic locus, and let
$p$ be a point in the interior of $S$ (in the relative topology).
Then, there exists a neighborhood $\Delta$ of $p$ in $S$ which is
foliated by horizontal straight line segments which are integral
curves of $\boldsymbol \nu_H^\perp$.
\end{Lem}
\begin{proof}[\textbf{Proof}]
For every $p\in \overset{\circ}{S}$, there exists an open set
$U\subset \mathbb H^1$ and a $\phi\in C^2(U)$ such that $\nabla \phi\not= 0$
in $U$ and $\Sigma = S\cap U = \{(x,y,t)\in U\mid \phi(x,y,t) =
0\}$. Let $S_1=\{(x,y,t)\in \Sigma | \phi_t(x,y,t) \neq 0\}$,
$S_2=\{(x,y,t)\in \Sigma | \phi_t(x,y,t) = 0\}$. Notice that, either
$\phi_t\equiv 0$ on $\Sigma$ and in such case $S_2 = \Sigma$ is a
vertical cylinder over a curve in the $xy$ plane, or there exists an
open set $V\subset \mathbb H^1$ such that $S_2\cap V$ is a $C^1$ curve in
$\mathbb H^1$. In the former case we can invoke Lemma \ref{L:lemma2} to
conclude that $\Delta = \Sigma$ is foliated by horizontal straight
line segments which are integral curves of $\boldsymbol \nu_H^\perp$. We are thus
left with the case in which $S_1\not= \varnothing$. By shrinking
$\Sigma$ if necessary we can assume that $\Sigma=S_1 \cup S_2$,
where $S_2$ is a $C^1$ curve.
In our arguments, we consider integral curves of $\boldsymbol \nu_H^\perp$
passing through points on the surface $S$. To make this notion
precise, we recall that as $S$ is a $C^2$ submanifold of
$\mathbb{H}^1 =\mathbb{R}^3$, every point $p \in S$ is contained in
a coordinate chart $i: D \subset \mathbb R^2 \rightarrow S$ with $i \in C^2(D)$.
For any $C^1$ vector field, $U_0$, defined on $i(D)$, the integral
curve of $U_0$ passing through $q \in i(D)$ is simply $i(\gamma)$
where $\gamma \subset D$ is a solution to the initial value problem:
\begin{equation*}
\begin{split}
\gamma'(t)&=i^{-1}_*(U_0)(\gamma(t))\\
\gamma(0)&=i^{-1}(q).
\end{split}
\end{equation*}
Direct calculation then shows that
\[
\frac{d}{dt}i(\gamma)=i_*i^{-1}_*U_0(\gamma(t))=U_0(i(\gamma(t))),
\]
and $i(\gamma(0))=i(i^{-1}(q))=q$. As $U_0$ (and hence $i^{-1}_* U_0$)
is $C^1$, the standard theorems concerning solutions to ODEs apply to
the integral curves of $U_0$ on $S$. In particular, we may conclude
that given $q \in S$, there exists (at least for a short time) a
unique integral curve of $U_0$. Similarly, we conclude that
integral curves of $U_0$ on $S$ have continuous dependence on
parameters.
By Lemma \ref{L:ruled}, each point in $S_1$ is contained in a
neighborhood which is foliated by straight line segments which are
integral curves of $\boldsymbol \nu_H^\perp$. Thus, those portions of integral
curves of $\boldsymbol \nu_H^\perp$ contained in $S_1$ are at least piecewise
linear. By the fact that $\boldsymbol \nu_H^\perp$ is $C^1$ and the uniqueness
of solutions to ODEs, we must have that these portions of integral
curves are straight lines. We may extend each such line segment
maximally within $S_1$. If a limit point of a maximally extended
line segment were in $S_1$, we could apply Lemma \ref{L:ruled} to
extend it further, violating the assumption that we had extended
maximally. Thus we conclude that the limit points of the line
segment are in $\partial S_1 \cup S_2$.
Consider $p \in S_2$ and let $c$ be the integral curve of
$\boldsymbol \nu_H^\perp$ with $c(0)=p$. Let $B_\epsilon$ be the metric ball of
radius $\epsilon$ centered at $p$ and $c_\epsilon = c \cap
B_\epsilon$. Then, there exists an $\epsilon >0$ sufficiently small
so that one of the following possibilities occurs:
\begin{enumerate}
\item $c_\epsilon \cap S_2$ is closed and has no interior;
\item $c_\epsilon \cap S_2$ is closed with nonempty interior and $p$ is in the
interior;
\item $c_\epsilon \cap S_2$ is closed with nonempty interior and $p$ is contained in the boundary of the interior of $c_\epsilon \cap S_2$.
\end{enumerate}
In the first case, $c_\epsilon \cap S_1$ is open and dense in
$c_\epsilon$. By Lemma \ref{L:ruled}, every point in $c_\epsilon
\cap S_1$ is contained in an open line segment which is a subset of
$c_\epsilon$. As $c_\epsilon \cap S_2$ is closed and is contained
in the boundary of $c_\epsilon \cap S_1$, we conclude that
$c_\epsilon$ is piecewise linear. By the smoothness of $\boldsymbol \nu_H^\perp$
and the uniqueness of solutions to ODE, we conclude $c_\epsilon$ is
a single straight line segment.
In the second case, we may shrink $\epsilon$ so that $c_\epsilon
\cap S_2=c_\epsilon$ and $S_2$ divides $B_\epsilon \cap S$ into
exactly two pieces $N_1,N_2$. We next show that if $q \in N_1$ is
contained in a line segment, $L \subset N_1$, which reaches the
boundary of $N_1$ then the length of $L$ is at least
$2(\epsilon-\delta)$ where $\delta$ is the Euclidean distance from
$p$ to $q$. Observe that the endpoints of $L$ can not be in $S_2$.
If one were in $S_2$, then by the uniqueness of solutions of ODE, we
conclude that $L$ and $S_2$ coincide. This contradicts our
assumption that $q \not \in S_2$. Thus, $L$ must be a line segment
in $B_\epsilon$ which has both its boundary points in $\partial
B_\epsilon$. By construction, the Euclidean distance from $p$ to
the endpoints of $L$ is $\epsilon$. Denoting the Euclidean distance
from $p$ to $q$ by $\delta$, the triangle inequality implies that
the length of $L$ is at least $2(\epsilon -\delta)$.
Let $ q_i \in N_1$ be a sequence of points converging to $p$ and let
$L_i$ be the maximal line segment which is the integral curve of
$\boldsymbol \nu_H^\perp$ through $q_i$ which is contained in $N_1$. By the
continuous dependence on parameters of the solutions to an ODE and
the fact the $\boldsymbol \nu_H^\perp$ is $C^1$, we know $L=\lim_{i \rightarrow \infty}
L_i$ exists and is an integral curve of $\boldsymbol \nu_H^\perp$ passing through
$p$. Moreover, since $L$ is the limit of line segments whose
lengths are bounded below by $2(\epsilon - \delta_i)$ (where
$\delta_i$ is the Euclidean distance from $p$ to $q_i$), we conclude
$L$ is a line segment of length at least $2\epsilon$. Note that so
far, we have shown that every point in $S_1$ and every point in
$S_2$ that fall in cases one and two are contained in an open line
segment which is an integral curve of $\boldsymbol \nu_H^\perp$.
We are left with points of $S_2$ which fall into the third category.
The collection of such points in $S_2$ is, by construction, closed
and has empty interior. Thus, $c_\epsilon$ contains an open dense
set of points that are either in $S_1$ or fall in one of the first
two cases above. For each such point, Lemma \ref{L:ruled} or the
discussion of the first two cases yields an open line segment
containing the point which is a subset of $c_\epsilon$. Thus, as in
the argument for case one, $c_\epsilon$ is piecewise linear and, by
the smoothness of $\boldsymbol \nu_H^\perp$, must be a single straight line
segment.
Using the arguments above for points in $S_2$ and Lemma
\ref{L:ruled} for points in $S_1$, we see that the integral curve of
$\boldsymbol \nu_H^\perp$ through any point contains a line segment through that
point. Thus, all such integral curves are piecewise linear and, by
the smoothness of $\boldsymbol \nu_H^\perp$, must be straight lines. Combining
all of these arguments shows that $\Sigma$ is foliated by straight
line segments which are integral curves of $\boldsymbol \nu_H^\perp$.
\end{proof}
\begin{Cor}\label{foliation} Let $S$ be a $C^2$ connected complete
non-characteristic minimal surface without boundary in
$\mathbb{H}^1$. Then, $S$ is foliated by horizontal straight lines
which are integral curves of $\boldsymbol \nu_H^\perp$.
\end{Cor}
\noindent
\begin{proof}[\textbf{Proof}]
Since $S$ is assumed to have no boundary, for any $p \in S$ Lemma
\ref{locfol} implies that there exists an open neighborhood of $p$
which is foliated by such straight line segments. By the smoothness
of $\boldsymbol \nu_H^\perp$, we have that $S$ itself is foliated by such
straight line segments. It remains to show that the entirety of
each line is contained in $S$.
Let $L:(-\epsilon,\epsilon) \rightarrow S$ be a line segment with $L(0)=p
\in S$ and $L'(t)=\boldsymbol \nu_H^\perp(L(t))$ and let $\tilde{L}:\mathbb{R}
\rightarrow \mathbb{H}^1$ be the full line containing $L$ so that
$\tilde{L}(t)=L(t)$ for $t \in(-\epsilon,\epsilon)$. Let \[ I =
\{t\in \mathbb R\mid \tilde{L}(t)\in S\}\ . \] By construction, $I$ is not
empty since $0\in I$. Let $t_i \in I$ be a sequence of parameters so
that $t_i \rightarrow t_\infty$ where $t_\infty $ is a limit point of $I$.
By completeness of $S$, we must have that $\lim_{i \rightarrow
\infty}\tilde{L}(t_i) = \tilde{L}(t_\infty)$ is an element of $S$.
Thus, $I$ is closed as it must contain all of its limit points. But,
$I$ is open as well. To see this, consider $p=\tilde{L}(t)$ for a
fixed $t \in I$. As $\partial S = \varnothing$, $p$ is in the
interior of $S$ and so, by Lemma \ref{locfol}, $p$ is contained in a
neighborhood which is foliated by straight lines which are integral
curves of $\boldsymbol \nu_H^\perp$. Thus, $I$ must contain an open neighborhood
of $t$. Since $I$ is both open and closed, we conclude that $I =
\mathbb{R}$ and that $\tilde{L}(\mathbb R) \subset S$.
\end{proof}
\noindent
\begin{proof}[\textbf{Proof of Theorem \ref{T:thma}}]
By Corollary \ref{foliation}, we have that $S$ is foliated by
horizontal straight lines which are integral curves of $\boldsymbol \nu_H^\perp$.
Let $O$ be an open neighborhood of $g_0$ which may be written as a
graph $(x,y,h(x,y))$ with $h \in C^2$. Consider a unit tangential
vector field, $\mathcal{W}$, defined on $O$ which is perpendicular
(with respect to the fixed Riemannian metric) to $\boldsymbol \nu_H^\perp$. Let
$(\gamma_1(s),\gamma_2(s),h_0(s))$ be an integral curve of
$\mathcal{W}$, with domain $J$, so that $(\gamma_1(0),\gamma_2(0),h_0(0))=g_0$. Note that
$\gamma_1,\gamma_2,h_0 \in C^2(J)$ as $\boldsymbol \nu_H^\perp$ is $C^1$. Let $N$
be the collection of lines in the foliation which pass through points
of the curve $(\gamma_1(J),\gamma_2(J),h_0(J))$. Then, since for a
fixed $s_0 \in J$, we have from \eqref{passage}
\begin{align*}
\mathscr{L}_{s_0}'(r)& =(\gamma_2'(s_0),-\gamma_1'(s_0),
-\frac{1}{2}(\gamma_1(s_0),\gamma_2(s_0))\cdot(\gamma_1'(s_0),\gamma_2'(s_0)))
\\
& = \gamma_2'(s_0)\;X_1-\gamma_1'(s_0)\;X_2 = \boldsymbol \nu_H^\perp,
\end{align*}
the line of the foliation passing through
$(\gamma_1(s_0),\gamma_2(s_0),h_0(s_0))$ is given by
\[\mathscr{L}_{s_0}(r)=(\gamma_1(s_0)+r\gamma_2'(s_0),\gamma_2(s_0)-r\gamma_1'(s_0),h_0(s_0)-\frac{r}{2}(\gamma_1(s_0),\gamma_2(s_0))\cdot(\gamma_1'(s_0),\gamma_2'(s_0)))\]
Thus, $N$ may be parametrized by $\mathscr{L}:\mathbb R \times J \rightarrow
\mathbb{H}^1$ given by
\begin{equation}\label{T3.4-eq1}
\mathscr{L}(r,s) = (
\gamma_1(s)+r\gamma_2'(s),\gamma_2(s)-r\gamma_1'(s),h_0(s)-\frac{r}{2}\gamma(s)\cdot\gamma'(s)).
\end{equation}
It remains to show that $\gamma=(\gamma_1,\gamma_2) \in C^3(J)$. As $O$ is a graph over a region $\bar{O}$ of the $xy$-plane, $\bar{\mathscr{L}}(r,s)=(\gamma_1(s)+r\gamma_2'(s),\gamma_2(s)-r\gamma_1'(s))$ parametrizes a subset of $\bar{O}$ with $s \in J, r \in (-\epsilon,\epsilon)$ for $\epsilon$ sufficiently small. Under this parametrization, $V=\overline p \;\partial_x+\overline q \;\partial_y = \gamma_1'(s) \;\partial_x+\gamma_2'(s)\; \partial_y$. We
first observe that, for a fixed $r=r_0$, the curve $s\to
\bar{\mathscr{L}}(r_0,s)$ coincides with the integral curve of
$V$ through the point $\bar{\mathscr{L}}(r_0,0)$ on their
mutual domain of definition (we may assume, by shrinking $J$ if
necessary, that $J$ is the mutual domain of definition). To see
this, note that the definition of $\bar{\mathscr{L}}$ gives
\[
\bar{\mathscr L}_s(r,s) = (\gamma_1'(s) + r\gamma_2''(s),\gamma_2'(s) - r\gamma_1''(s))\ .
\]
This implies
\[\langle \bar{\mathscr{L}}_s(r_0,s),V^\perp \rangle= \gamma_2'\gamma_1'+r_0\gamma_2'\gamma_2''-\gamma_1'\gamma_2'+r_0 \gamma_1''\gamma_1'=0.\]
The last equality follows from the fact that $|\gamma'|\equiv 1$ on
$J$. Let $\bar{c}\subset \mathbb{R}^2$ be the integral curve of $V$ passing through
$\bar{\mathscr{L}}(r_0,0)$. We note that $\bar{c}$ is parameterized by
arc-length and, to avoid confusion, we will denote its parameter by
$\xi$. Since $V$ is $C^1$, we have that $\bar{c} \in C^2(\xi)$.
Moreover, since $O$ is given by $(x,y,h(x,y))$ with $h \in C^2$, we
see that $c(\xi) = h(\bar{c}(\xi))$ is $C^2(\xi)$ as well.
To facilitate our computations, we note that
\[
|\bar{\mathscr{L}}_s(r_0,s)|=|1-r_0 \kappa(s)|. \] This can be
verified as follows. Recalling that $|\gamma'| = 1$ and that $\kappa
= \gamma_1'' \gamma_2' - \gamma_2'' \gamma_1'$, one easily obtains
\[
|\bar{\mathscr{L}}_s(r_0,s)|^2 = 1 - 2 r_0 \kappa(s) +
r_0^2(\gamma_1''(s)^2 + \gamma_2''(s)^2).
\]
Now, the Lagrange identity, together with $|\gamma'(s)| = 1$ and
$\gamma'(s) \cdot \gamma''(s) = 0$, gives
\[
\kappa(s)^2 = (\gamma_1''(s)^2 + \gamma_2''(s)^2)|\gamma'(s)|^2 -
(\gamma'(s)\cdot \gamma''(s))^2 = \gamma_1''(s)^2 +
\gamma_2''(s)^2,
\]
and this implies the desired conclusion. Let now $\kappa_0=
\underset{s\in J}{\sup} |\kappa(s)|$. If $\kappa_0=0$, then $\gamma$
is a line segment and hence $\gamma$ is certainly $C^3$. Assuming
$\kappa_0
>0$, we pick $0< r_0< \min \{\kappa_0^{-1},\epsilon\}$, which implies that
$|\bar{\mathscr{L}}_s(r_0,s)|=|1-r_0 \kappa(s)|=1-r_0\kappa(s)$. We note
that $\xi$ is differentiable in $s$ as $\bar{c}(\xi)$ is the
reparameterization by arclength of $\bar{\mathscr{L}}(r_0,s)$ and that
$\frac{d\xi}{ds}=1-r_0\kappa(s)$. Similarly,
\[\frac{ds}{d\xi} = \frac{1}{1-r_0\kappa(s)}\]
which, by our choice of $r_0$, is equal to $\sum_{n=0}^\infty
(r_0\kappa(s))^n$. Next, we compute
\begin{equation*}
\begin{split}
c'(\xi) = \frac{d}{d \xi} h(\bar{c}(\xi)) &= \frac{\partial}{\partial s} ( h(\gamma_1(s)+r_0\gamma_2'(s),\gamma_2(s)-r_0\gamma_1'(s))) \frac{ds}{d\xi} \\
&= \frac{\partial}{\partial s} \left ( h_0(s) - \frac{r_0}{2}\gamma(s) \cdot \gamma'(s) \right ) \frac{1}{1-r_0\kappa(s)} \\
&= \left ( h_0'(s) -\frac{r_0}{2} -\frac{r_0}{2} \gamma(s)\cdot \gamma''(s) \right ) \frac{1}{1-r_0\kappa(s)}\\
&= \left ( h_0'(s) -\frac{r_0}{2} -\frac{r_0}{2} \gamma(s)\cdot \gamma''(s) \right ) \left ( \sum_{n=0}^\infty (r_0\kappa(s))^n \right )\\
&= h_0'(s) +r_0\alpha(s)+r_0^2\kappa(s)\alpha(s)+ r_0^3\kappa(s)^2\alpha(s) + \dots
\end{split}
\end{equation*}
where $\alpha(s)=-\frac{1}{2} -\frac{1}{2}\gamma(s)\cdot \gamma''(s) + \kappa(s) h_0'(s)$. At this point we can make some simplifications. First, we note that as $\kappa(s) = \gamma'' \cdot (\gamma')^\perp$, and $\gamma'\cdot \gamma'' =0$ (as $|\gamma'(s)|=1$), we have
\[
\gamma''(s)= \kappa(s) (\gamma'(s))^\perp\ .
\]
So, letting $\beta(s)= - \frac{1}{2}\gamma(s) \cdot (\gamma'(s))^\perp + h_0'(s)$, we rewrite $\alpha(s) = -\frac{1}{2} +\kappa(s) \beta(s)$. Moreover,
\begin{equation*}
\begin{split}
r_0\alpha(s)+r_0^2\kappa(s)\alpha(s)&+ r_0^3\kappa(s)^2\alpha(s) + \cdots
\ =\
r_0\alpha(s) \left ( \sum_{n=0}^\infty (r_0 \kappa(s))^n \right ) \\
&\ =\ \frac{r_0\alpha(s)}{1-r_0\kappa(s)} \\
& \ =\ -\,\left(\frac{r_0}{2} \frac{1}{1-r_0\kappa(s)} - \beta(s) \frac{r_0\kappa(s)}{1-r_0\kappa(s)}\right) \\
&\ =\ -\,\left(\frac{r_0}{2} \frac{1}{1-r_0\kappa(s)} + \beta(s) - \frac{\beta(s)}{1-r_0\kappa(s)}\right) \\
&\ =\
-\,\left(\beta(s) + \frac{1}{2}\,\frac{r_0 - 2\,\beta(s)}{1 - r_0\,\kappa(s)}\right)\ .
\end{split}
\end{equation*}
We conclude that
\[
c'(\xi)\ =\ h_0'(s) - \beta(s) - \frac{1}{2} \frac{r_0-2\beta(s)}{1-r_0\kappa(s)}\ .
\]
Since $c'(\xi)$ is again differentiable in $\xi$ and $\xi$ is differentiable in $s$, we conclude, by the chain rule, that $c'(\xi)$ is also differentiable in $s$. Noting that $h_0'(s)$ and $\beta(s)$ are once differentiable in $s$, we conclude that $(1-r_0\kappa(s))^{-1}$, and hence $\kappa(s)$, is differentiable in $s$. But, since $\gamma''(s) = \kappa(s) (\gamma'(s))^\perp$, $\gamma''(s)$ is differentiable and hence $\gamma \in C^3(s)$.
Lastly, we examine the impact of the assumption that $S$ contains no
characteristic points on the neighborhood $N$. Using the parametrization derived above, we
see that the tangent space is spanned by $\boldsymbol \nu_H^\perp$ and
\[
\hat{W}=(\gamma_1'(s)+r\gamma_2''(s))\;X_1+(\gamma_2'(s)-r\gamma_1''(s))\;X_2
+
(W_0(s)-r+\frac{r^2}{2}\kappa(s))\;T\]
where, as in the statement of the Theorem, we let
$W_0(s)=h_0'(s)+\frac{1}{2}\gamma'\cdot\gamma^\perp$ and
$\kappa(s)=\gamma''\cdot(\gamma')^\perp$. $S$ will have a
characteristic point when $\langle\hat{W},T\rangle=0$,
i.e., when $\frac{\kappa(s)}{2}\,r^2 - r + W_0(s) = 0$, which for
$\kappa(s)\not= 0$ happens when $r=\frac{1\pm\sqrt{1-2W_0(s)\kappa(s)}}{\kappa(s)}$. Thus, $S$ is
noncharacteristic if and only if $1-2W_0(s)\kappa(s)<0$.
\end{proof}
Note that, without loss of generality (by simply reparametrizing
$\gamma$), we may assume that any fixed $s \in J$ may be treated as
$s=0$. We will use such a normalization and assume that $J$ is a
neighborhood of $0$.
We wish to examine the behavior of this patch with respect to the notion
of an $X_1$ graph. Consider the following definitions.
\begin{Def}\label{D:pm} Let $C_1(x_0,y_0,t_0)$ denote the integral curve
of the vector field $X_1$ passing through the point $(x_0,y_0,t_0)$. In other words,
\[
C_1(x_0,y_0,t_0)=\left \{\left(x_0+r,y_0,t_0-\frac{y_0}{2}\,r\right)\,\Bigl|\, r \in \mathbb R\right \}\ .
\]
\end{Def}
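To justify this formula it suffices to observe that, by \eqref{passage}, in Cartesian coordinates one has $X_1 = \left(1,0,-\frac{y}{2}\right)$, and that differentiating the curve $r \mapsto \left(x_0+r,y_0,t_0-\frac{y_0}{2}\,r\right)$ gives precisely $\left(1,0,-\frac{y_0}{2}\right)$ at each of its points.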
Using Definition \ref{D:pm} we next introduce the notion of intrinsic projection of a point to the plane $x=0$.
\begin{Def}\label{D:ipm} We define the \emph{intrinsic projection map}
\[\Pi(x_0,y_0,t_0)=\{(0,y,t)\} \cap C_1(x_0,y_0,t_0)=(0,y_0,t_0+y_0x_0/2)\ .
\]
\end{Def}
The following equation follows directly from the definition.
\begin{equation}\label{parabola}
\Pi \circ \mathscr{L}(r,s)\ =\ (0,\gamma_2(s)-r\gamma_1'(s),h_0(s)+\frac{1}{2}\gamma_1(s)\gamma_2(s)-r\gamma_1(s)\gamma_1'(s)-\frac{r^2}{2}\gamma_1'(s)\gamma_2'(s))
\end{equation}
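Identity \eqref{parabola} can be confirmed with a short computer-algebra sketch. Here sympy is assumed available, and the explicit form of the parametrization \eqref{seedrep}, $\mathscr{L}(r,s)=(\gamma_1+r\gamma_2',\ \gamma_2-r\gamma_1',\ h_0-\tfrac r2\,\gamma\cdot\gamma')$, is reconstructed from the computations appearing later in the proof of Lemma \ref{lemma3}.

```python
import sympy as sp

r, s = sp.symbols('r s')
g1, g2, h0 = (sp.Function(f)(s) for f in ('gamma1', 'gamma2', 'h0'))

# seed-curve parametrization L(r,s) (assumed form of \eqref{seedrep})
x = g1 + r*g2.diff(s)
y = g2 - r*g1.diff(s)
t = h0 - sp.Rational(1, 2)*r*(g1*g1.diff(s) + g2*g2.diff(s))

# intrinsic projection of Definition \ref{D:ipm}: Pi(x0,y0,t0) = (0, y0, t0 + y0*x0/2)
t_proj = t + y*x/2

# third component on the right-hand side of \eqref{parabola}
rhs = (h0 + sp.Rational(1, 2)*g1*g2 - r*g1*g1.diff(s)
       - sp.Rational(1, 2)*r**2*g1.diff(s)*g2.diff(s))

print(sp.simplify(t_proj - rhs))  # 0
```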
\begin{Lem}\label{lemma1} Let $S$ be a portion of an $H$-minimal
surface parameterized by a seed curve/height function pair
$(\gamma(s),h_0(s))$ via \eqref{seedrep} with $r \in \mathbb R$, $s\in I$.
Let $P(s,r)=\Pi \circ \mathscr{L}(r,s)$ be given as in \eqref{parabola}. There exists
an interval $J\subset I$ so that $P:\mathbb R \times J \subset\mathbb R^2_{(r,s)} \rightarrow
\mathbb R^2_{(y,t)}$ is a one-to-one $C^2$ diffeomorphism onto its image.
\end{Lem}
\begin{proof}[\textbf{Proof}]
The following properties of the seed curve $\gamma: I \to \mathbb R^2$ are essential to our proof. We gather them here for the sake of convenience.
\begin{itemize}
\item[(i)] $|\gamma'(s)| = 1$.
\item[(ii)] $1 - 2W_0(s)\kappa(s) < 0$.
\item[(iii)]There exists an interval $J\subset I$ such that for all $s\in J$, $\gamma_1'(s) \neq 0$.
\end{itemize}
Properties (i), (ii) and the definitions of $W_0$ and $\kappa$ were established in Theorem \ref{T:thma}.
Suppose (iii) were not true. Then the zero set of $\gamma_1'$ would be dense in $I$ and hence, by continuity, $\gamma_1' \equiv 0$ on $I$; together with (i) this would give $\gamma'(s) = (0,\pm 1)$ for all $s\in I$. This would imply that $\kappa(s) = \gamma''(s)\cdot \gamma'(s)^\perp$ vanishes identically on $I$, contradicting (ii). Therefore there is a sub-interval $J$ of $I$ on which $\gamma_1'(s) \neq 0$. To continue, we define two auxiliary functions $\zeta$ and $\Psi$ in terms of $\gamma$ as follows.
\begin{align*}
\zeta:\mathbb R\times J \to \mathbb R^2\ , \qquad \zeta(r,s) &\ =\ (\gamma_2(s) - r\,\gamma_1'(s), s)\ , \\
\Psi: \zeta(\mathbb R\times J)\to \mathbb R^2\ ,\qquad (u,v) = \Psi(u,s) &\ =\ \left(u,\sigma(s) + F(s)\,u + \frac{G(s)}{2}\,u^2\right)\ .
\end{align*}
where $F,G,\sigma : J \to \mathbb R$ are given by
\begin{align}\label{FGsigma}
F(s) &\ =\ \gamma_1(s) + \frac{\gamma_2(s)\gamma_2'(s)}{\gamma_1'(s)} = \frac{\gamma \cdot \gamma'}{\gamma_1'}\\
\notag
G(s) &\ =\ -\,\frac{\gamma_2'(s)}{\gamma_1'(s)} \\
\notag
\sigma(s) &\ =\ h_0(s) - \frac{1}{2} \gamma_2(s)\,F(s)\ .
\notag
\end{align}
Due to property (iii) above and the fact that $\gamma \in C^3(I)$, the functions $\zeta, \Psi, F, G, \sigma$ are well defined and $C^2$ on $J$. One can verify by a straightforward computation that
\[
\Pi \circ \mathscr{L}(r,s) \ =\ \Psi \circ \zeta(r,s)\ .
\]
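The factorization $\Pi\circ\mathscr{L} = \Psi\circ\zeta$ can likewise be checked symbolically; the sketch below (sympy assumed) compares the second component of $\Psi\circ\zeta$ against the third component of $\Pi\circ\mathscr{L}$ as given in \eqref{parabola}.

```python
import sympy as sp

r, s = sp.symbols('r s')
g1, g2, h0 = (sp.Function(f)(s) for f in ('gamma1', 'gamma2', 'h0'))
g1p, g2p = g1.diff(s), g2.diff(s)

# the functions of \eqref{FGsigma}
F = g1 + g2*g2p/g1p
G = -g2p/g1p
sigma = h0 - sp.Rational(1, 2)*g2*F

u = g2 - r*g1p                    # first component of zeta(r,s)
psi_v = sigma + F*u + G*u**2/2    # second component of Psi

# third component of Pi∘L, from \eqref{parabola}
pil_t = (h0 + sp.Rational(1, 2)*g1*g2 - r*g1*g1p
         - sp.Rational(1, 2)*r**2*g1p*g2p)

print(sp.simplify(psi_v - pil_t))  # 0
```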
Therefore, if we show that $\Psi \circ \zeta: \mathbb R \times J \to \mathbb R^2$ is one-to-one, then $\Pi\circ \mathscr{L}$ is also one-to-one. To this end, we will show separately that both $\zeta$ and $\Psi$ are one-to-one. The fact that $\zeta$ is one-to-one is easy to verify and follows from the fact that $\gamma_1'(s) \neq 0$ on $J$. We also note that
\[
\zeta(\mathbb R\times J)\ =\ \mathbb R\times J\ .
\]
To show that $\Psi$ is one to one, we first consider its second component:
$v(u,s) = \sigma(s) + F(s)u + \frac{G(s)}{2}u^2$. We have
\[
\frac{\partial}{\partial s} v(u,s)\ =\
\sigma'(s) + F'(s)\,u + \frac{G'(s)}{2}\,u^2\ .
\]
Although tedious, one can verify by a straightforward computation that the following identity holds for any $s\in J$:
\[
F'(s)^2 - 2\sigma'(s)G'(s)\ =\ \frac{1 - 2W_0(s)\kappa(s)}{\gamma_1'(s)^2}\ <\ 0\ .
\]
The strict inequality above is due to properties (ii) and (iii) of $\gamma$ (property (i) enters in the derivation of the identity). This in turn implies that the quadratic expression in $u$
\[
\frac{\partial}{\partial s} v(u,s)\ =\
\sigma'(s) + F'(s)\,u + \frac{G'(s)}{2}\,u^2
\]
does not vanish for any $u\in\mathbb R$ and any $s\in J$. Hence we have
\[
\left|\frac{\partial}{\partial s} v(u,s)\right| > 0\ ,\ s\in J
\]
that is, $v(u,s)$ is strictly monotone in $s$ for any fixed $u\in\mathbb R$.
We infer from this fact and the definition of $\Psi$ that $\Psi$ is one-to-one. This completes the proof.
\end{proof}
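The sign condition $F'^2 - 2\sigma'G' < 0$ used in the proof can be probed with a computer-algebra sketch. The script below checks the relation $F'^2 - 2\sigma'G' = (1-2W_0\kappa)/\gamma_1'^2$, where the overall factor $1/\gamma_1'^2$ (harmless for the sign, since $\gamma_1'\neq 0$ on $J$) and the orientation convention $(x,y)^\perp = (y,-x)$ are our assumptions; the unit-speed constraint (i) is imposed via a tangent angle $\theta$, and the result is evaluated at arbitrary numerical data.

```python
import sympy as sp

s = sp.symbols('s')
th = sp.Function('theta')(s)   # tangent angle: gamma' = (cos th, sin th)
a, b, h = (sp.Function(f)(s) for f in ('gamma1', 'gamma2', 'h0'))

F = a + b*b.diff(s)/a.diff(s)
G = -b.diff(s)/a.diff(s)
sigma = h - sp.Rational(1, 2)*b*F

# kappa = gamma''.(gamma')^perp and W_0, with the convention (x,y)^perp = (y,-x)
kappa = a.diff(s, 2)*b.diff(s) - b.diff(s, 2)*a.diff(s)
W0 = h.diff(s) + sp.Rational(1, 2)*(a.diff(s)*b - b.diff(s)*a)

expr = (F.diff(s)**2 - 2*sigma.diff(s)*G.diff(s)
        - (1 - 2*W0*kappa)/a.diff(s)**2)

# impose |gamma'| = 1, then substitute arbitrary numerical data
reps = [(a.diff(s, 2), -sp.sin(th)*th.diff(s)),
        (b.diff(s, 2),  sp.cos(th)*th.diff(s)),
        (a.diff(s), sp.cos(th)), (b.diff(s), sp.sin(th)),
        (th.diff(s), sp.Rational(7, 10)), (h.diff(s), sp.Rational(13, 10)),
        (th, sp.Rational(2, 5)), (a, sp.Rational(4, 5)), (b, -sp.Rational(1, 2))]
val = expr
for old, new in reps:
    val = val.subs(old, new)
print(abs(sp.N(val)) < sp.Float(1e-12))  # True
```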
Several important facts about the functions $F,G,\sigma,\Psi$ were established in the proof of Lemma \ref{lemma1}; we single them out here for reference.
\begin{Pro}\label{P:Plemma1}
The functions $F,G,\sigma$ satisfy
\begin{equation}\label{E:nonchar}
F'(s)^2 - 2\sigma'(s)G'(s)\ <\ 0\ .
\end{equation}
The function $\Psi:\mathbb R\times J\to \mathbb R^2$ is invertible on its image.
We let $(u,s) = \Psi^{-1}(u,v)$.
In particular, $s = s(u,v)$ is the second component of $\Psi^{-1}$.
\end{Pro}
These two lemmas show that every $C^2$ noncharacteristic complete
noncompact embedded $H$-minimal surface which is not itself a
vertical plane contains a subsurface which can be written as an
intrinsic graph. To make the presentation as clean as possible, we
prove an intermediate lemma.
\begin{Lem}\label{lemma3}
Let $S$ be a $C^2$ noncharacteristic complete noncompact embedded
$H$-minimal surface which is not itself a vertical plane and let
$J$ and the functions
$F,G,\sigma,\Psi$ be the ones from the proof of Lemma \ref{lemma1}
and $s$ as in Proposition \ref{P:Plemma1}.
Let $\phi:\Psi(\mathbb R\times J)\to \mathbb R$ be given by
\[
\phi(u,v)\ =\ F(s(u,v))+u G(s(u,v))\quad \text{for } (u,v) \in \Omega = \Psi(\mathbb R\times J)\ .
\]
Then
\[
S_0\ =\ \{(0,u,v)\circ(\phi(u,v),0,0)\, |\, (u,v) \in \Omega\}
\]
is a subsurface of $S$.
\end{Lem}
\begin{proof}[\textbf{Proof}]
With the functions $\Psi, \phi, s, F, G, \sigma$ and $\Omega$ as in the
statement of the Lemma, we define $\Phi:\Omega \to \mathbb{H}^1$ as follows
\[
\Phi(u,v) \ =\ \left(\phi(u,v), u, v - \frac{1}{2}\,u\,\phi(u,v)\right)\ .
\]
Our intention is to show that $\Phi(\Omega) = \mathscr{L}(\mathbb R\times J)$. We begin by comparing the second components of $\Phi$ and $\mathscr{L}$. Note that if
\begin{equation}\label{E:u_id}
u\ = \ \gamma_2(s) - r\,\gamma_1'(s)\ ,
\end{equation}
then
\begin{align}\label{phi_id}
\phi(u,v) &\ =\ F(s(u,v)) + u\,G(s(u,v)) \\
\notag
&\ =\
F(s) + (\gamma_2(s) - r\,\gamma_1'(s))\,G(s) \\
\notag
\text{(by \eqref{FGsigma})}
&\ =\
\gamma_1(s) + \frac{\gamma_2(s)\gamma_2'(s)}{\gamma_1'(s)} - \Bigl(\gamma_2(s) - r\,\gamma_1'(s)\Bigr)\frac{\gamma_2'(s)}{\gamma_1'(s)} \\
\notag
&\ =\
\frac{\gamma_1(s)\gamma_1'(s) + \gamma_2(s)\gamma_2'(s) - \gamma_2(s)\gamma_2'(s) + r\,\gamma_1'(s)\gamma_2'(s)}{\gamma_1'(s)} \\
\notag
&\ =\ \gamma_1(s) + r\,\gamma_2'(s)\ ,
\notag
\end{align}
which is the first component of $\mathscr{L}$. We now turn to the third component of $\Phi$. Keeping in mind that for $(u,v)\in\Omega = \Psi(\mathbb R\times J)$ we
have
\[
v\ =\ \sigma(s) + F(s) u + \frac{G(s)}{2}\,u^2
\]
we compute
\begin{align*}
v - \frac{1}{2}\,u\,\phi(u,v) & =
\sigma(s) + F(s) u + \frac{G(s)}{2}\,u^2 - \frac{1}{2}\,u\,\phi(u,v) \\
\text{(by \eqref{E:u_id}, \eqref{FGsigma} and \eqref{phi_id})} & =
h_0(s) - \frac{1}{2} \gamma_2(s)\left(\gamma_1(s) +
\frac{\gamma_2(s)\gamma_2'(s)}{\gamma_1'(s)}\right) \\
& + \left(\gamma_1(s) +
\frac{\gamma_2(s)\gamma_2'(s)}{\gamma_1'(s)}\right)(\gamma_2(s) -
r\,\gamma_1'(s)) \\ & -
\frac{1}{2}\frac{\gamma_2'(s)}{\gamma_1'(s)}(\gamma_2(s) -
r\,\gamma_1'(s))^2 - \frac{1}{2}(\gamma_2(s) -
r\,\gamma_1'(s))(\gamma_1(s) + r\,\gamma_2'(s)) \\ & = h_0(s) -
\frac{r}{2} \gamma(s)\cdot\gamma'(s) \notag
\end{align*}
which is the third component of $\mathscr{L}$.
\end{proof}
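Both component identities established in this proof, \eqref{phi_id} and the third-component computation, can be verified at once with a sympy sketch (sympy assumed available; the third component of $\mathscr{L}$ is taken to be $h_0 - \tfrac r2\,\gamma\cdot\gamma'$, as in the final line of the display above).

```python
import sympy as sp

r, s = sp.symbols('r s')
g1, g2, h0 = (sp.Function(f)(s) for f in ('gamma1', 'gamma2', 'h0'))
g1p, g2p = g1.diff(s), g2.diff(s)

F = g1 + g2*g2p/g1p
G = -g2p/g1p
sigma = h0 - sp.Rational(1, 2)*g2*F

u = g2 - r*g1p            # \eqref{E:u_id}
v = sigma + F*u + G*u**2/2
phi = F + u*G             # phi(u,v), pulled back to the (r,s) variables

first = sp.simplify(phi - (g1 + r*g2p))                  # \eqref{phi_id}
third = sp.simplify(v - u*phi/2
                    - (h0 - sp.Rational(1, 2)*r*(g1*g1p + g2*g2p)))
print(first, third)  # 0 0
```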
Finally, we turn to the
\begin{proof}[\textbf{Proof of Theorem \ref{T:existstrip0}}]
Since $S$ is not itself a vertical plane, Lemma \ref{L:lemma2} guarantees the
existence of a point $g_o\in S$ and a neighborhood $N$ of $g_o$ such that $N$ can be written as a graph over the plane $t = 0$. Theorem \ref{T:thma} then provides the necessary parameterization of such a neighborhood by the map $\mathscr{L}$ whose domain is $\mathbb R\times J$. Lemmas \ref{lemma1}, \ref{lemma3} and Proposition \ref{P:Plemma1} then show that the portion $\mathscr{L}(\mathbb R\times J)\subset S$ can
be reparameterized to conform to Definition \ref{intdeltastrip}, hence establishing the required $\delta$-graphical strip.
\end{proof}
Combining this with Theorem \ref{T:existstrip0}, we can now easily
prove the main Theorem.
\vspace{.2in}
\noindent
\begin{proof}[\textbf{Proof of Theorem \ref{I:main}}]
Suppose $S$ is a $C^2$ complete embedded noncharacteristic
$H$-minimal surface without boundary which is not a vertical plane.
Then, Theorem \ref{T:existstrip0} shows that $S$ contains an
intrinsic graphical strip, $S_0$, and thus, by Theorem
\ref{I:unstable}, $S_0$, and hence $S$, is not stable.
\end{proof}
\section{Introduction}
The turbulent cascade, a mechanism for the nonlinear transfer of energy across scales, is a key idea for understanding kinetic magnetized plasma turbulence. By considering simplified models, in uniform magnetic geometry, one can obtain a theoretical prediction for the spectrum of fluctuations, valid across an ``inertial range'' of scales, free from energy sources and sinks. Though such a theory is not able to fully describe the behavior of realistic turbulence, which hosts instabilities, damped modes, complicated magnetic geometries, {\em etc.}, it nevertheless constitutes a quantitative prediction of nonlinear behavior of the underlying gyrokinetic equation, an equation which generally governs actual systems of practical interest. The existence of such theoretical test cases is valuable for validating the solution methods employed by gyrokinetic codes, and as a foundation for physical interpretation of the volumes of data they produce.
Here, a novel quasi-two-dimensional fluid system is derived from the electrostatic gyrokinetic system, to describe fluctuations that predominantly vary in the directions perpendicular to the mean magnetic field, \ie in the ``drift plane'', at scales $\ell$ larger than the Larmor radius $\rho$, corresponding to a species of interest. The notion that quasi-two-dimensional behavior might underly magnetized plasma turbulence is intuitively justified by the fact that the magnetic guide field renders the dynamics inherently anisotropic. Furthermore, instabilities that drive electrostatic turbulence in fusion plasmas, \eg the ion-temperature-gradient (ITG) and electron-temperature-gradient (ETG) modes, exhibit a kind of localization along the field line, accompanied by the domination of perpendicular dynamics over parallel dynamics. The fluid limit $\ell \gg \rho$ is of particular importance, because the energy of the fluctuations is predominantly found at such scales -- these are the scales of importance, most directly affecting the performance of fusion devices. Furthermore, the reduction of complexity afforded by fluid limits can reveal important features of the dynamics, that do not manifest in the analysis of the general gyrokinetic equations.
Although similar systems as the one presented here have been proposed and studied in the past, most notably the Hasegawa-Mima (HM) equation, \cite{hasegawa}, the present derivation takes special care in considering the consequences of the appearance of nonlinear finite-Larmor-radius (FLR) terms that appear in the dynamical equation for the electrostatic potential -- \ie the ``vorticity'' equation. Such terms introduce a closure problem in the fluid moment hierarchy, where lower moments are coupled to ever higher ones, generally without end. This motivates the cold ion limit that underlies the HM equation, which eliminates the inconvenient terms, but is however not generally appropriate for application to fusion plasmas. In the present work, it is noted that the presence of these terms introduces rapid dynamics, and a multiple-scales analysis is proposed in which the fluid moment hierarchy closes at the pressure moment, without using an ad hoc closure scheme, leading to a relatively simple system involving only two fields.
What is immediately apparent is that the presence of the additional field (the pressure perturbation) breaks the nonlinear conservation of enstrophy that is famously satisfied by the HM equation, and there causes an ``inverse cascade'' of energy to large scales. The new system, we argue, should exhibit distinct nonlinear behavior, including a shallower energy spectrum when the effect of the nonlinear FLR terms is sufficiently strong. Direct numerical simulation of the fluid model gives some confidence in these predictions. The results of this work may help to interpret observations of turbulence in parameter regimes where the dynamics tend toward the quasi-two-dimensional limit. We discuss possible examples, including cases explored in previous gyrokinetic turbulence simulations in tokamak and stellarator geometries.
\section{Equations and definitions}\label{eqns-sec}
We assume uniform magnetic geometry, where the magnetic guide field is constant and points in the $\hat{z}$-direction. One species is assumed to be kinetic, with the other species satisfying a simple Boltzmann response model. We begin with a nondimensional form of the gyrokinetic system \citep{plunk-jfm}, normalized relative to the kinetic species: $v_{\perp}/v_{\mathrm{th}} \rightarrow v$ (with $v_{\mathrm{th}} = \sqrt{T/m}$, where $T$ and $m$ are the temperature and mass of the kinetic species) is the normalized perpendicular velocity, and the normalized wavenumber is $k_{\perp}\rho \rightarrow k$, where the thermal Larmor radius of the kinetic species is $\rho = v_{\mathrm{th}}/\Omega_c$ and $\Omega_c = qB/m$. The two-dimensional gyrokinetic equation is written as follows in terms of the perturbed gyrocenter distribution function $g({\bf R}, v, t)$, where ${\bf R} = \hat{\bf x} X + \hat{\bf y} Y$ is the gyrocenter position:
\begin{equation}
\frac{\partial g}{\partial t} + \poiss{\gyroavg{\varphi}}{g} = \CollisionOp{h}.
\label{gyro-g}
\end{equation}
\noindent where $\CollisionOp{h}$ is the collision operator (not treated here explicitly); the Poisson bracket is $\poiss{A}{B} = \hat{\bf z}\times\bnabla A \cdot \bnabla B = \partial_x A \partial_y B - \partial_y A \partial_x B$ and the gyro-average is defined $\gyroavg{A({\bf r})} = \frac{1}{2\pi}\int_0^{2\pi} d\vartheta A({\bf R} + \mbox{\boldmath $\rho$} (\vartheta))$, where the Larmor radius vector is $ \mbox{\boldmath $\rho$} (\vartheta) = {\bf {\hat z}}\times{\bf v} = v_{\perp}({\bf {\hat y}}\cos{\vartheta} - {\bf {\hat x}}\sin{\vartheta})$ and $\vartheta$ is the gyro-angle. (Note that the quantity inside of the collision operator is $h = g + \gyroavg{\varphi}F_0$. Note also that the spatial coordinate is ${\bf R}$ in the gyrokinetic equation and, formally, the spatial derivatives are to be interpreted in this variable, but for simplicity we avoid making the distinction explicit.) We mostly ignore the collision operator but note that some mechanism of coarse-graining will be necessary to get sensible solutions out of the equation. Quasi-neutrality yields the electrostatic potential $\varphi({\bf r}, t)$, where ${\bf r} = \hat{\bf x} x + \hat{\bf y} y$ is the position-space coordinate:
\begin{equation}
2\pi\int_0^{\infty} v dv \angleavg{g} = (1 + \tau)\varphi - \Gamma_0\varphi,
\label{qn-g}
\end{equation}
\noindent where $g$ is implicitly assumed to be integrated over parallel velocity so that $2\pi\int_0^{\infty} v dv$ completes the integration over three-dimensional velocity space. The angle average is defined $\angleavg{A({\bf R})} = \frac{1}{2\pi}\int_0^{2\pi} d\vartheta A({\bf r} - \mbox{\boldmath $\rho$} (\vartheta))$, the term $\tau\varphi$ is the adiabatic density response, and $\tau = T_i/(Z T_e)$ for the case of ion scales and $\tau = Z T_e/ T_i$ for the case of electron scales. For the ion case, this Boltzmann response might be considered reasonable if zonal flows are strongly suppressed. The operator $\Gamma_0\phi = 2\pi\int_0^{\infty} v dv \;F_0(v)\angleavg{\gyroavg{\phi}}$ is more naturally expressed in Fourier space, assuming a Maxwellian background $F_0 = \exp[-v^2/2]/(2\pi)$, \ie $\Gamma_0\varphi = \sum_{\bf k} \exp(i {\bf k}\cdot{\bf r}) \hat{\Gamma}_0 \hat{\varphi}$, with
\begin{equation}
\hat{\Gamma}_0(k) = \int_0^{\infty} v dv \; \mbox{$\mathrm{e}$} ^{-v^2/2}J_0^2(kv) = I_0(k^2)e^{-k^2},\label{gamma0-def}
\end{equation}
\noindent where $I_0$ is the zeroth-order modified Bessel function.
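The closed form in \eqref{gamma0-def} can be spot-checked numerically; the sketch below (scipy assumed available) compares direct quadrature of the velocity integral against $I_0(k^2)e^{-k^2}$ at a few sample wavenumbers.

```python
import numpy as np
from scipy import integrate, special

def gamma0(k):
    """Direct quadrature of the velocity integral in the Gamma_0 definition."""
    val, _ = integrate.quad(
        lambda v: v*np.exp(-v**2/2)*special.j0(k*v)**2, 0.0, np.inf)
    return val

for k in (0.3, 0.7, 1.5):
    closed = special.ive(0, k**2)   # ive(0, x) = I0(x) exp(-x)
    print(f"k={k}: quad={gamma0(k):.10f}  I0(k^2)e^-k^2={closed:.10f}")
```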
\section{Fluid limit}\label{fluid-sec}
We expand in the limit
\begin{equation}
\delta = k^2 \ll 1,
\end{equation}
\ie we assume that the scales of interest are larger than the Larmor radius of the species of interest. For electron scales, the limit is considered subsidiary to the adiabatic ion limit, so scales of the turbulence must remain much smaller than the ion Larmor scale, \ie, $\rho_e/\rho_i \ll k \ll 1$. Note that there may also be a minimum applicable $k$ imposed by dynamics parallel to the magnetic field, but treating this explicitly is outside the scope of this work. We will only need the first two orders of the expansion in $\delta$. The gyrokinetic equation, henceforth omitting explicit collisional effects, is written as
\begin{equation}
\frac{\partial g}{\partial t} + \poiss{\left(1+ \frac{v^2}{4} \nabla^2\right)\varphi}{g} \approx 0,\label{gk-lw-eqn}
\end{equation}
\noindent and Eqn.~\ref{qn-g} becomes
\begin{equation}
\tau\varphi - \nabla^2\varphi \approx 2\pi \int_0^{\infty} vdv \left(1+ \frac{v^2}{4} \nabla^2\right) g.\label{qn-lw-eqn}
\end{equation}
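The long-wavelength forms above can be cross-checked against the exact kernels: $J_0$ expands as $1 - x^2/4 + \mathcal O(x^4)$ with $x = kv$, reproducing the operator $1 + (v^2/4)\nabla^2$ in Eqn.~\ref{gk-lw-eqn}, while expanding $(1+\tau) - \hat\Gamma_0$ in $x = k^2$ recovers $\tau + k^2$, \ie the left-hand side of Eqn.~\ref{qn-lw-eqn}. A sympy sketch:

```python
import sympy as sp

x, tau = sp.symbols('x tau', positive=True)   # x stands for (k v), or k^2 below

# gyroaverage kernel: J0(x) ~ 1 - x^2/4
print(sp.besselj(0, x).series(x, 0, 4))       # 1 - x**2/4 + O(x**4)

# quasineutrality: Gamma0 = I0(x) exp(-x) with x = k^2
gamma0 = sp.series(sp.besseli(0, x)*sp.exp(-x), x, 0, 3).removeO()
lhs = sp.expand((1 + tau) - gamma0)
print(lhs)   # equals tau + x - 3*x**2/4, i.e. tau*phi - nabla^2*phi at the orders kept
```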
We will denote $v$-moments of $g$ as
\begin{equation}
G_n = 2\pi \int vdv \left(\frac{v}{2}\right)^n g.
\end{equation}
\subsection{Naive expansion}\label{naive-sec}
To give a taste for the issues that arise in the expansion, let us take an initial informal look at the moments of the gyrokinetic equation. We first examine the density moment. We include only the ostensibly dominant nonlinear terms. We stress that this equation is given only for illustrative purposes, and is not to be taken as a basis for the later derivations of the paper:
\begin{equation}
\frac{\partial}{\partial t}\left(\tau \varphi -\nabla^2\varphi - \nabla^2 G_2\right) + \poiss{\varphi}{-\nabla^2 \varphi -\nabla^2 G_2 } + \poiss{G_2}{ -\nabla^2 \varphi} = 0.\nonumber
\end{equation}
Note that the term $-\partial_t \nabla^2 \varphi$, which appears in the HM equation, should be neglected here because it is formally smaller than $\partial_t \varphi$ by one power of the ordering parameter $\delta$. Likewise, the term $-\partial_t \nabla^2 G_2$ must be considered negligible if the ordering $G_2 \sim \varphi$ and $\partial_t G_2 \sim \partial_t \varphi$ holds. The situation is, however, a bit more subtle. The above equation couples to the $v^2$ moment of $g$, $G_2$, and the equation for this and other such moments can be written, neglecting higher-order FLR terms, as
\begin{equation}
\frac{\partial G_n}{\partial t} + \poiss{\varphi}{G_n} = 0.\label{Gn-nonresonant-eqn}
\end{equation}
What we now notice, examining these two equations, is that the density equation is driven by nonlinear terms that appear to be much smaller than those controlling the higher moments of the distribution function -- that is, the equations for $G_n$ have dominant contributions from the $E\times B$ nonlinearity, while the potential evolves under the influence of terms like the ``polarization drift'' nonlinearity, which is smaller by a factor of $\delta = k^2$. One possible resolution of this apparent imbalance is to consider $G_n$ itself to be large, as for instance in the non-resonant limit of the ITG/ETG mode, \ie $G_n \sim \delta^{-1} \varphi$. In this case, the polarization drift nonlinearity can be neglected, and we see the justification for retaining the additional time derivative term above, since $\partial_t\varphi \sim \partial_t \nabla^2 G_2$. This term can be evaluated from the Laplacian of Eqn.~\ref{Gn-nonresonant-eqn}, yielding
\begin{equation}
\tau \frac{\partial \varphi}{\partial t} + \nabla^2\poiss{\varphi}{G_2} + \poiss{\varphi}{-\nabla^2 G_2 } + \poiss{G_2}{ -\nabla^2 \varphi} = 0.\label{phi-nonresonant-eqn}
\end{equation}
Eqns.~\ref{Gn-nonresonant-eqn}-\ref{phi-nonresonant-eqn} demonstrate a consistent fluid limit, but cannot describe ITG or ETG turbulence in the resonant limit, where $\varphi \sim G_2$. This corresponds to the more physically reasonable scenario of a modest turbulence drive -- \ie not very far from the linear critical gradient, or considering the weakly unstable, large-scale modes that dominate the turbulence spectrum. To treat this limit properly, we must account for the fact that $\varphi$ evolves much more slowly than $G_n$. Physically, it can be argued that, in a turbulent state, Eqn.~\ref{Gn-nonresonant-eqn} will then describe rapid mixing of $G_n$ by $E\times B$ vortices, so that any initial variation along streamlines of constant $\varphi$ will decay on a fast timescale (with the help of some explicit dissipation), leaving $G_n$ constant along those streamlines \citep{cowley-private}. To account for such processes more carefully, we abandon conventional perturbation theory in favor of the method of multiple scales (see, \eg \cite{bender}). We will henceforth disregard the equations presented here, in section \ref{naive-sec}, and proceed to derive equations that contain only terms justified by a set of explicitly stated ordering assumptions.
\subsection{Multiscale expansion}
We introduce the fast and slow time variables $t_\mathrm{f}$, and $t_\mathrm{s}$, such that $\partial_{t_\mathrm{f}} \sim \poiss{\varphi}{ .}$ and $\partial_{t_\mathrm{s}} \sim \poiss{\nabla^2\varphi}{ .} \sim \delta \partial_{t_\mathrm{f}}$ and expand the fields as
\begin{eqnarray}
\varphi = \varphi^{(0)}(t_\mathrm{s}, t_\mathrm{f}, x, y) + \varphi^{(1)}(t_\mathrm{s}, t_\mathrm{f}, x, y) + \dots,\\
G_n = G_n^{(0)}(t_\mathrm{s}, t_\mathrm{f}, x, y) + G_n^{(1)}(t_\mathrm{s}, t_\mathrm{f}, x, y) + \dots,
\end{eqnarray}
where $\varphi^{(m+1)}/\varphi^{(m)} \sim {\cal O}(\delta)$, {\em etc}. We reiterate that the assumptions we have made are $\delta \ll 1$, the above multi-scale expansion, and the quasi-two-dimensional approximation, whereby variation in the direction along the magnetic field is neglected, and the non-kinetic species is assumed to follow a Boltzmann distribution, implying Eqn.~\ref{qn-g}; no further approximations will be made in this section. We proceed to examine the moments of gyrokinetic equation, order by order. The density moment at dominant order in $\delta$ is
\begin{equation}
\frac{\partial \varphi^{(0)}}{\partial t_\mathrm{f}} = 0,
\end{equation}
from which we formally establish that $\varphi^{(0)}$ depends only on the slow time variable. At next order in $\delta$ we obtain
\begin{equation}
\tau \frac{\partial \varphi^{(0)}}{\partial t_\mathrm{s}} + \tau\frac{\partial \varphi^{(1)}}{\partial t_\mathrm{f}} - \frac{\partial}{\partial t_\mathrm{f}}\nabla^2 G_2^{(0)} + \poiss{\varphi^{(0)}}{-\nabla^2 \varphi^{(0)} -\nabla^2 G_2^{(0)} } + \poiss{G_2^{(0)}}{ -\nabla^2 \varphi^{(0)}} = 0.\label{phi-0-eqn}
\end{equation}
We introduce a time-average operator to extract the smooth-time behavior from this equation
\begin{equation}
\tAvg{A} = \frac{1}{\Delta t} \int_{t_\mathrm{f}-\Delta t/2}^{t_\mathrm{f}+\Delta t/2} dt_\mathrm{f}^\prime A(t_\mathrm{f}^\prime).\label{tAvg-def}
\end{equation}
This time average extends over a period of time much longer than the short timescale ($\Delta t^{-1} \ll \poiss{\varphi}{.}$). Applying this average to Eqn.~\ref{phi-0-eqn}, we obtain
\begin{equation}
\tau \frac{\partial \varphi^{(0)}}{\partial t_\mathrm{s}} + \poiss{\varphi^{(0)}}{-\nabla^2 \varphi^{(0)} -\nabla^2 \tAvg{G}_2^{(0)} } + \poiss{\tAvg{G}_2^{(0)}}{ -\nabla^2 \varphi^{(0)}} = 0.\label{phi-0-avg-eqn}
\end{equation}
The dominant-order equation for $G_n$ is
\begin{equation}
\frac{\partial G_n^{(0)}}{\partial t_\mathrm{f}} + \poiss{\varphi}{G_n^{(0)}} = 0,\label{G0-fast-eqn}
\end{equation}
from which, upon time averaging, we conclude that $\tAvg{G}_n^{(0)}$ is constant along closed streamlines of constant $\varphi$. Informally, we will say that $\tAvg{G}_n^{(0)}$ is a function of $\varphi$, although it can be multi-valued. For $n = 2$ we adopt the notation
\begin{equation}
\tAvg{G}_2^{(0)} = \chi(\varphi, t_\mathrm{s}).
\end{equation}
Note that, formally, we must exclude special points and lines where $\bnabla \varphi = 0$ (o-points, and the ``separatrices'' that include x-points) but these should occupy negligible volume in the $x$-$y$ plane. At the next order, we will obtain the smooth evolution of $G_n$,
\begin{equation}
\frac{\partial G_n^{(0)}}{\partial t_\mathrm{s}} + \frac{\partial G_n^{(1)}}{\partial t_\mathrm{f}} + \poiss{\varphi^{(0)}}{G_n^{(1)}} + \poiss{\varphi^{(1)}}{G_n^{(0)}} + \poiss{\nabla^2 \varphi^{(0)}}{G_{n+ 2}^{(0)}} = 0.\label{G0-eqn}
\end{equation}
To this equation we apply two averages, the time average, and also an average along streamlines of constant $\varphi$. To define this average, we introduce a coordinate $s$ which parameterizes these streamlines and satisfies $\hat{\bf z}\times\bnabla \varphi \cdot \bnabla s = 1$ for convenience. Then we define
\begin{equation}
\sAvg{A} = \frac{\oint ds A(s)}{\oint ds}.
\end{equation}
The integral over $s$ is either closed in the sense that the streamlines are closed, or effectively closed by periodic boundary conditions. The second term of Eqn.~\ref{G0-eqn} is annihilated by the time average. Noting that $\poiss{F(\varphi)}{ A} = \partial_s(A F')$, the third term is zero under the $s$-average, as is the last term after time average, since $\tAvg{G}_{n + 2}^{(0)}$ is a function of $\varphi^{(0)}$ by Eqn.~\ref{G0-fast-eqn}. This is a crucial cancellation, since it closes the fluid moment hierarchy.
It is convenient to now introduce notation for the part of a field that varies on the fast timescale, \ie the ``fluctuating part'', complementing the mean component defined by Eqn.~\ref{tAvg-def}:
\begin{equation}
\nAvg{A} = A - \tAvg{A}.\label{nAvg-def}
\end{equation}
What results from the double average of Eqn.~\ref{G0-eqn} can then be expressed
\begin{equation}
\sAvg{\frac{\partial \tAvg{G}_n^{(0)}}{\partial t_\mathrm{s}}} + \sAvg{\tAvg{\poiss{\nAvg{\varphi}^{(1)}}{\nAvg{G}_n^{(0)}}}} = 0.\label{G0-avg-eqn}
\end{equation}
To evaluate the nonlinear term of Eqn.~\ref{G0-avg-eqn}, we must obtain dynamical equations for the fluctuating fields $\nAvg{\varphi}^{(1)}$ and $\nAvg{G}_n^{(0)}$. These come from Eqns.~\ref{phi-0-eqn} and \ref{G0-fast-eqn}, respectively. The fluctuating part of Eqn.~\ref{G0-fast-eqn} is
\begin{equation}
\frac{\partial \nAvg{G}_n^{(0)}}{\partial t_\mathrm{f}} + \poiss{\varphi^{(0)}}{\nAvg{G}_n^{(0)}} = 0.\label{G0-fast-eqn-final}
\end{equation}
Subtracting Eqn.~\ref{phi-0-avg-eqn} from Eqn.~\ref{phi-0-eqn}, and using the Laplacian of Eqn.~\ref{G0-fast-eqn-final} to evaluate $\partial_{t_\mathrm{f}}\nabla^2 \nAvg{G}_n^{(0)}$, we find
\begin{equation}
\tau\frac{\partial \nAvg{\varphi}^{(1)}}{\partial t_\mathrm{f}} + \nabla^2\poiss{\varphi^{(0)}}{\nAvg{G}_2^{(0)}} + \poiss{\varphi^{(0)}}{-\nabla^2 \nAvg{G}_2^{(0)} } + \poiss{\nAvg{G}_2^{(0)}}{ -\nabla^2 \varphi^{(0)}} = 0.\label{phi-1-navg-eqn}
\end{equation}
Finally, noting that $\sAvg{\partial_{t_\mathrm{s}}\varphi^{(0)}} = 0$ (from Eqn.~\ref{phi-0-avg-eqn}), we obtain, from Eqn.~\ref{G0-avg-eqn}, an expression determining the explicit time dependence of $\chi$:
\begin{equation}
\left(\frac{\partial \chi}{\partial t_\mathrm{s}}\right)_{\varphi} + \sAvg{\tAvg{\poiss{\nAvg{\varphi}^{(1)}}{\nAvg{G}_2^{(0)}}}} = 0,\label{G0-avg-eqn-2}
\end{equation}
where the partial time derivative is taken at constant $\varphi^{(0)}$. To summarize, Eqns.~\ref{G0-fast-eqn-final} and \ref{phi-1-navg-eqn} are the fast-time equations that determine $\nAvg{\varphi}^{(1)}$ and $\nAvg{G}_n^{(0)}$, which can be substituted into the slow-time equation \ref{G0-avg-eqn-2} for $\chi$, and coupled with the following equation (a repetition of Eqn.~\ref{phi-0-avg-eqn} written in terms of $\chi$) to close the system:
\begin{equation}
\tau \frac{\partial \varphi^{(0)}}{\partial t_\mathrm{s}} + \poiss{\varphi^{(0)}}{-\nabla^2 \varphi^{(0)} -\nabla^2 \chi } + \poiss{\chi}{ -\nabla^2 \varphi^{(0)}} = 0.\label{phi-0-avg-eqn-2}
\end{equation}
Noting that $\tAvg{\varphi} \approx \varphi^{(0)}$ and $\nAvg{\varphi} \approx \nAvg{\varphi}^{(1)}$, we can, without introducing ambiguity, simply drop the superscripts in what follows.
The final system, Eqns.~\ref{G0-fast-eqn-final}-\ref{phi-0-avg-eqn-2}, has some noteworthy features. First, Eqn.~\ref{G0-avg-eqn-2} has the appearance of a heat transport equation, where the flux is carried by the rapidly varying pressure perturbation $\nAvg{G}_2$ and the small amplitude fluctuating potential $\nAvg{\varphi}$. Note also how Eqns.~\ref{G0-fast-eqn-final} and \ref{phi-1-navg-eqn} bear a strong resemblance to the fluid system given by Eqns.~\ref{Gn-nonresonant-eqn}-\ref{phi-nonresonant-eqn}, where a similar ordering is satisfied, namely $\nAvg{\varphi} \ll \nAvg{G}_2$.
\section{Decaying turbulence}
We will avoid the complications introduced by the instabilities that physically drive turbulence, and instead now consider decaying turbulence. (We note that a linear instability could be added to this fluid system using the non-resonant limit of the toroidal branch of the ITG or ETG mode, but this would require some care to maintain consistency with the ordering assumptions, as discussed in section \ref{naive-sec}.) Let us consider periodic boundary conditions, and include explicit dissipation using a fourth-order hyperviscosity term. Without drive terms, Eqn.~\ref{G0-fast-eqn-final} implies the rapid decay of $\nAvg{G}_n$ to zero, implying $(\partial_{t_\mathrm{s}}\chi)_{\varphi} = 0$ (\ie it only depends on the time via its dependence on $\varphi$). The variation of $\chi$ in $\varphi$ (or more formally, its variation between distinct lines of constant $\varphi$) can be considered as an initial condition of our calculation. We need only then solve a single equation, which, neglecting superscripts for order and the subscripts of the time variable $t_\mathrm{s}$, becomes
\begin{equation}
\tau \frac{\partial \varphi}{\partial t} + \poiss{\varphi}{-\nabla^2 \varphi -\nabla^2 \chi } + \poiss{\chi}{ -\nabla^2 \varphi} = \nu_4 \nabla^4 \varphi.\label{phi-chi-eqn}
\end{equation}
The electrostatic energy
\begin{equation}
E = \frac{\tau}{2}\int dx dy \varphi^2
\end{equation}
is conserved by the nonlinearity for arbitrary $\chi$, which can be verified by multiplying the equation by $\varphi$ and integrating over the $x$-$y$ domain. Note that the resulting integral of the final nonlinear term of Eqn.~\ref{phi-chi-eqn} can be rewritten as $- \int dx dy\; {\bf v}_E\cdot\bnabla(\varphi \chi^\prime \nabla^2\varphi)$, with ${\bf v}_E = \hat{\bf z}\times\bnabla\varphi$, which is zero using $\bnabla\cdot{\bf v}_E = 0$ and periodicity.
Another quantity of interest is the enstrophy, which we will define here as
\begin{equation}
Z = \frac{\tau}{2}\int dx dy |\bnabla\varphi|^2.
\end{equation}
The enstrophy balance equation is found by multiplying Eqn.~\ref{phi-chi-eqn} by $-\nabla^2\varphi$ and integrating over $x$ and $y$. Note that the presence of $\chi$ in the equation breaks enstrophy conservation if $\chi$ is a nonlinear function of $\varphi$. The nonlinear invariance of $Z$ is associated with the inverse cascade of energy in Hasegawa-Mima turbulence. We thus expect to recover the spectra corresponding to the potential limit of the Hasegawa-Mima equation (\ie where $\nabla^2 \varphi \ll \varphi$; see \citet{plunk-jfm}), if $\chi$ is small, and a qualitatively different cascade when $\chi$ is sufficiently large.
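These conservation statements are easy to probe with a pseudo-spectral evaluation of the nonlinear terms of Eqn.~\ref{phi-chi-eqn} on a periodic grid (a sketch, not a full solver; the random band-limited field and the choices $\chi = \varphi^2$ and $\chi = \varphi/2$ are for illustration only):

```python
import numpy as np

N = 64
x = 2*np.pi*np.arange(N)/N
X, Y = np.meshgrid(x, x, indexing='ij')
k = np.fft.fftfreq(N, 1.0/N)                # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing='ij')

def dx(f): return np.real(np.fft.ifft2(1j*KX*np.fft.fft2(f)))
def dy(f): return np.real(np.fft.ifft2(1j*KY*np.fft.fft2(f)))
def lap(f): return np.real(np.fft.ifft2(-(KX**2 + KY**2)*np.fft.fft2(f)))
def pb(a, b): return dx(a)*dy(b) - dy(a)*dx(b)   # Poisson bracket

rng = np.random.default_rng(1)
phi = sum(rng.normal()*np.cos(m*X + n*Y + rng.uniform(0, 2*np.pi))
          for m in range(-3, 4) for n in range(-3, 4))

def rhs(phi, chi):   # tau * dphi/dt from the evolution equation, no dissipation
    return -pb(phi, -lap(phi) - lap(chi)) - pb(chi, -lap(phi))

r = rhs(phi, phi**2)            # nonlinear chi(phi)
dE = np.mean(phi*r)             # rate of change of energy (up to constant factors)
dZ = np.mean(-lap(phi)*r)       # rate of change of enstrophy
print(abs(dE)/np.mean(np.abs(phi*r)))         # tiny: E conserved by the nonlinearity
print(abs(dZ)/np.mean(np.abs(lap(phi)*r)))    # finite: Z broken for nonlinear chi

r2 = rhs(phi, 0.5*phi)          # linear chi: HM-type nonlinearity, Z conserved
print(abs(np.mean(-lap(phi)*r2))/np.mean(np.abs(lap(phi)*r2)))  # tiny
```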
The Hasegawa-Mima spectra can be derived in the rough ``phenomenological'' style, in terms of the fluctuation amplitude at scale $\ell$, denoted $\varphi_\ell$, by assuming constancy of nonlinear flux of its nonlinear invariants (see \eg \citet{frisch, plunk-jfm}). For the forward cascade, \ie at scales smaller than the scale of energy injection, the enstrophy flux, denoted $\varepsilon_Z$, is assumed constant (independent of scale $\ell$), which is expressed as follows:
\begin{equation}
\varepsilon_Z = \tau_{\mathrm{NL}}^{-1} \ell^{-2} \varphi_\ell^2 \sim \varphi_\ell^3\ell^{-6},
\end{equation}
with $\tau_{\mathrm{NL}}(\ell)$ denoting the nonlinear turnover time. This leads to the scaling $\varphi_\ell \sim \ell^2\varepsilon_Z^{1/3}$, implying a one-dimensional energy spectrum of $E(k) \sim k^{-5}$. The constancy of the scale-by-scale flux of energy, expected for the inverse cascade at scales larger than the injection scale, is expressed as
\begin{equation}
\varepsilon_E = \tau_{\mathrm{NL}}^{-1} \varphi_\ell^2 \sim \varphi_\ell^3\ell^{-4},
\end{equation}
implying $\varphi_\ell \sim \ell^{4/3}\varepsilon_E^{1/3}$ and a spectrum $E(k) \sim k^{-11/3}$.
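As a quick cross-check of these dimensional arguments (our illustration, not part of the derivation), an amplitude scaling $\varphi_\ell \sim \ell^a$ translates into the one-dimensional spectral exponent under the assumption $E(k)\,k \sim \varphi_\ell^2$ evaluated at $\ell \sim 1/k$:

```python
from fractions import Fraction

def spectral_exponent(a):
    """Map an amplitude scaling phi_ell ~ ell**a to the exponent p in E(k) ~ k**p.

    Assumes E(k)*k ~ phi_ell**2 at ell ~ 1/k, so p = -(2*a + 1).
    """
    return -(2 * Fraction(a) + 1)

# Forward (enstrophy) cascade: phi_ell ~ ell**2      ->  E(k) ~ k**(-5)
# Inverse (energy) cascade:    phi_ell ~ ell**(4/3)  ->  E(k) ~ k**(-11/3)
print(spectral_exponent(2), spectral_exponent(Fraction(4, 3)))  # prints: -5 -11/3
```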
Because the additional nonlinear terms of Eqn.~\ref{phi-chi-eqn}, henceforth called the ``$\chi$ nonlinearity'', formally break enstrophy conservation, we expect that if they are sufficiently strong, the inverse cascade should be eliminated and the forward cascade of $Z$ replaced with a direct cascade of $E$. If this flux is carried by the HM nonlinearity, one might expect to observe the spectrum $E(k) \sim k^{-11/3}$, as suggested by \citet{plunk-prl-2019}. On the other hand, balancing the $\chi$ nonlinearity with the HM nonlinearity, scale-by-scale, implies the linear relation $\chi_\ell \sim \varphi_\ell$, \ie $\chi \propto \varphi$, which would imply that enstrophy is actually a nonlinear invariant, preventing the forward cascade of $E$. For this reason, we may expect to observe an energy spectrum distinct from $k^{-11/3}$, whose steepness depends on the relationship between $\chi_\ell$ and $\varphi_\ell$, which itself depends on details of the turbulence.
Providing a definitive prediction of this relationship is beyond the scope of the present work, but a power law seems to be a reasonable possibility to explore, \ie $\chi_\ell \sim \varphi_\ell^\alpha$. Note that any super-linear scaling $\alpha > 1$ should lead to a spectrum shallower than $k^{-11/3}$, while a sub-linear scaling $\alpha < 1$ would imply $\chi$ is not analytic in $x$ and $y$. The fluctuating fields $\nAvg{G}_2$ and $\nAvg{\varphi}$ could be especially active in regions of low $E\times B$ shear (see Eqn.~\ref{G0-fast-eqn-final}), causing local extrema in the function $\chi$, via Eqn.~\ref{G0-avg-eqn-2}, so that a quadratic relationship prevails in such regions, $\chi_\ell \sim \varphi_\ell^2$. Whether or not this scenario holds in detail, assuming a simple nonlinear relationship allows us to make the discussion more concrete; qualitatively similar conclusions should apply for all $\alpha > 1$. Let us consider the following form for $\chi$:
\begin{equation}
\chi(\varphi) = \frac{\lambda}{2} \varphi^2.\label{chi-eqn}
\end{equation}
The nonlinear energy flux by the $\chi$ terms is then expressed as $\varepsilon_E \sim \lambda \varphi_\ell^4 \ell^{-4}$, implying $\varphi_\ell \sim \ell (\varepsilon_E/\lambda)^{1/4}$, and the corresponding energy spectrum
\begin{equation}
E(k) \sim k^{-3}.
\end{equation}
This spectrum should prevail in cases where the $\chi$-nonlinearity dominates (\eg large $\lambda$). At sufficiently low $\lambda$, one expects a return to the HM behavior, implying $E(k) \sim k^{-5}$ for the forward cascade.
Some sort of hybrid behavior may also be possible, although the broad scale range needed for clear observation of this may not be present for realistic conditions encountered in fusion plasmas. One might argue that, because the amplitude of fluctuations $\varphi_\ell$ is generally expected to decrease with decreasing scale, the cubic nonlinearity should be dominant at large scales, and subdominant at small scales. Thus, for sufficiently large $\lambda$, the energy cascade scaling $\varphi_\ell \sim \ell (\varepsilon_E/\lambda)^{1/4}$ should hold from the injection scale, down to a transition scale, which can be found by balancing the HM nonlinearity with the $\chi$-nonlinearity, \ie $\varphi_\ell^2\ell^{-4} \sim \lambda \varphi_\ell^3 \ell^{-4}$. Defining the outer scale $\ell_\mathrm{o}$ as the scale of energy injection (or initial energy containing scale), and $\varphi_{\mathrm{o}} = \varphi_{\ell_\mathrm{o}}$, we can write the $\varphi_\ell$ scaling as $\varphi_\ell \sim (\ell/\ell_\mathrm{o}) \varphi_{\mathrm{o}}$, so that the above balance occurs at the ``transition'' scale $\ell_t \sim \ell_\mathrm{o} / (\lambda \varphi_{\mathrm{o}})$. Thus, if $\lambda \varphi_{\mathrm{o}} \gtrsim 1$ one might expect $E \sim k^{-3}$ scaling for $\ell_{\mathrm{o}}^{-1} < k < \ell_t^{-1}$ followed by $E \sim k^{-5}$ for $k > \ell_t^{-1}$.
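The bookkeeping of this hybrid scenario can be sketched as follows (a schematic of the scalings just described; the function names and the sharp cutoff at the transition scale are our simplification):

```python
def transition_scale(ell_o, phi_o, lam):
    """Scale at which the HM and chi nonlinearities balance.

    Balancing phi_ell**2 * ell**-4 ~ lam * phi_ell**3 * ell**-4 gives
    lam * phi_ell ~ 1; with phi_ell ~ (ell / ell_o) * phi_o this yields
    ell_t ~ ell_o / (lam * phi_o).
    """
    return ell_o / (lam * phi_o)

def expected_slope(k, ell_o, phi_o, lam):
    """Predicted E(k) slope in the hybrid scenario (meaningful for lam*phi_o >= 1)."""
    if k < 1.0 / ell_o:
        return None  # above the injection scale: no cascade prediction
    return -3 if k < 1.0 / transition_scale(ell_o, phi_o, lam) else -5
```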
\subsection{Direct numerical simulations}
To explore the behavior of the model, Eqn.~\ref{phi-chi-eqn}, and test the theoretical predictions, we perform direct numerical simulations, assuming the simple quadratic form of $\chi(\varphi)$ in Eqn.~\ref{chi-eqn}. This introduces a nonlinearity that is cubic in $\varphi$, which can be treated pseudo-spectrally using a padding factor of $2$ for dealiasing; higher order nonlinearities require additional padding \citep{HOSSAIN1992}. The boundary conditions for the simulations are periodic in $x$ and $y$, and $\tau = 1$ for all simulations.
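For illustration, the padded (dealiased) evaluation of a cubic product can be sketched in one dimension as follows; this is a generic sketch of the technique, not the code used for the simulations:

```python
import numpy as np

def dealiased_cube(u_hat, pad=2):
    """Evaluate u**3 pseudo-spectrally without aliasing errors.

    u_hat holds the rfft coefficients of u on an N-point grid; for a cubic
    nonlinearity a padding factor of 2 (M = 2N physical points) suffices.
    """
    N = 2 * (len(u_hat) - 1)                    # original grid size
    M = pad * N                                 # padded grid size
    u_hat_pad = np.zeros(M // 2 + 1, dtype=complex)
    u_hat_pad[: len(u_hat)] = u_hat
    u = np.fft.irfft(u_hat_pad, n=M) * (M / N)  # rescale to the finer grid
    w_hat = np.fft.rfft(u ** 3) * (N / M)       # transform back and rescale
    return w_hat[: len(u_hat)]                  # truncate the padded modes
```

For $u=\cos x$ the routine reproduces $\cos^3 x = \tfrac34 \cos x + \tfrac14 \cos 3x$ exactly in the retained modes.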
Fig.~\ref{E-spectra-fig} compares the simulation results with the theoretical scaling laws. All simulations are initialized with randomly phased fluctuation amplitude of $\varphi \sim 1$ around $k = 1$, falling off exponentially at higher $k$. Note that although the model assumes $k \ll 1$ there is no conflict in using $k > 1$ for the simulations, as scaling symmetries of the model allow the results to be reinterpreted for $k \ll 1$. The spectrum found for the $\lambda = 0$ case is roughly consistent with the theoretical power law $k^{-5}$ expected for the potential limit of the HM equation. We note that similar results (not shown here) are encountered for $\lambda \lesssim 0.1$. At larger $\lambda$, the breaking of enstrophy conservation is indeed observed in the time trace of $Z$, as the energy fills in the spectrum at large $k$. For the case labeled $\lambda \rightarrow \infty$ in Fig.~\ref{E-spectra-fig}, the spectrum seems consistent with the theoretical $k^{-3}$ prediction at scales smaller than the injection scale. Note that this limit is obtained by actually setting $\lambda = 1$ and simply removing the HM nonlinearity (\ie the first term of Eqn.~\ref{phi-chi-eqn}) from the equation, as can be formally justified by rescaling Eqn.~\ref{phi-chi-eqn} in the limit $\lambda \rightarrow \infty$. Similar behavior is observed for $\lambda \gtrsim 1$. Intermediate values of $\lambda$ show intermediate behavior.
One example is shown in Fig.~\ref{lambda-mid-fig}, which seems to show evidence of a transition scale between the two theoretical power laws, giving some support to the hybrid scenario described theoretically in the previous section. A more extensive set of simulations would be needed to test the predictions in detail, for instance the dependence of the transition scale $\ell_t$ on system parameters. We stress that the results of the numerical simulations presented here come at very modest computational expense; a larger-scale computational effort, especially using a gyrokinetic code, could offer a more extensive test of the conclusions of this work.
\begin{figure}
\includegraphics[width=0.45\columnwidth]{fig1a.pdf}
\includegraphics[width=0.45\columnwidth]{fig1b.pdf}
\caption{Comparison of spectra exhibited by HM-type system ($\lambda = 0$) and our two-dimensional turbulence model ($\lambda \rightarrow \infty$).}
\label{E-spectra-fig}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\columnwidth]{fig2.pdf}
\end{center}
\caption{Energy spectrum for a case of intermediate strength of $\chi$ nonlinearity ($\lambda = 0.5$).}
\label{lambda-mid-fig}
\end{figure}
\section{Discussion}
A novel fluid system has been derived to describe the behavior of certain classes of quasi-two-dimensional electrostatic magnetized plasma turbulence. A possible application is to describe the energy cascade in cases of streamer-dominated ETG turbulence (note the spectrum, close to $k^{-11/3}$, in Fig.~5 of \citet{plunk-prl-2019}), where the nonlinear stability of elongated turbulent eddies is believed to stem from the two-dimensional character of the dominant instabilities, \eg the absence of sufficient variation of the mode structure in the direction along the magnetic field \citep{jenko-dorland-prl-2002}. This turbulent state is, however, sensitive to magnetic geometry, and seems to vanish when, for instance, the global magnetic shear is varied in such a way as to induce stronger parallel electron flow to the ETG mode. The ensuing dynamics then depends on kinetic physics involving the parallel streaming term, absent from two-dimensional models. A second possible application of the present model might be to describe ITG turbulence in cases where the zonal flows are suppressed. One candidate is a case observed with simulations of the HSX stellarator having surprisingly steep fluctuation spectra \citep{plunk-prl-2017}, found to be close to $k^{-10/3}$.
Although the presented model has limited application, it fills a significant gap in present theories describing gyrokinetic turbulence cascades, as it accounts for the essential nonlinear terms that arise when the cold ion approximation is invalid. These terms, it is found, alter the conservative properties of the nonlinearity, with significant consequences on the cascade, so that, even in the two-dimensional limit, the inverse cascade of energy can be shut down. The numerical simulations confirm that the size of the pressure perturbation ($\chi$) can control the cascade type, and HM-like behavior can be recovered if it is sufficiently small. This may underlie the slow secular growth of large-scale zonal flows \citep{Guttenfelder-Candy} and other coherent structures \citep{nakata-vortex-streets} in simulations of ETG turbulence, and the related appearance of a Dimits shift phenomenon in near-marginal cases \citep{Colyer_2017}.
{\bf Acknowledgements.} This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 and 2019-2020 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
\bibliographystyle{unsrtnat}
\section{System description}
We consider the system consisting of one queue of finite capacity $N$,
served by single server.
Customers arrive at the system according to the Poisson flow of rate $\lambda$.
If a customer sees the system full it is lost,
otherwise it occupies one place in the queue if the server
is busy and the server if it is idle.
Service times are constant, equal to $d>0$.
Thus the cumulative distribution function $B(x)$
of the random variable equal to customers service time is
the step function: it is $0$ if $x<d$ and $1$ otherwise.
Upon service completion one customer from the head
of the queue enters the server, i.e. the service discipline is FCFS.
General renovation mechanism is implemented in the system.
It works as follows. Define $N+1$ numbers, say $q_i\ge 0$,
$0 \le i \le N$, satisfying \mbox{$\sum_{i=0}^N q_i=1$}.
When the service is completed the served customer leaves the system
and additional $i$ customers are removed from the
queue with probability $q_i$. Such mechanism of removing
customers from the system is called renovation
in \cite{Zaryadov2010,Kreinin}.
For the purpose of comparison with RED
we need to introduce several refinements into the renovation procedure.
Firstly notice that after the renovation, the queue may become empty
and thus the server will be idle until the next arrival. From the practical point of view
it is more appealing to leave at least one customer in
the system after the renovation.
Secondly, it may happen that upon service completion it is required to remove
more customers than are actually waiting in the queue.
For such a conflict we will consider two resolution options separately:
\noindent
\textit{Option 1}.
If upon service completion there are $1 \le i \le N$ customers
waiting in the queue, then
\begin{itemize}
\item[--] with probability $q_0$ nothing happens;
\item[--] with probability $q_j$, $0<j<i$, exactly $j$ customers
from the queue leave the system and those customers
are chosen successively \textit{starting from the head of the queue};
\item[--] with probability $Q_i=q_i+q_{i+1}+\dots+ q_N$ exactly $(i-1)$
customers from the queue leave the system. Again those customers
are chosen successively \textit{starting from the head of the queue}.
\end{itemize}
\noindent \textit{Option 2}.
If upon service completion there are $1 \le i \le N$ customers
waiting in the queue, then
\begin{itemize}
\item[--] with probability $q_0+Q_i$ nothing happens, where
$Q_i=q_i+q_{i+1}+\dots+ q_N$;
\item[--] with probability $q_j$, $0<j<i$, exactly $j$ customers
from the queue leave the system and those customers
are chosen successively \textit{starting from the head of the queue}.
\end{itemize}
\noindent Throughout the paper, for the sake of brevity,
we use the convention that $\sum_{k=0}^{-1} \equiv 0$.
\section{Option 1}
\subsection{Stationary distribution}
Let $N(t)$ be the total number of customers at instant $t$
and $E(t)$ be the elapsed service time of the customer in server
(if there is one) in the $M/D/1/N$ queue with renovation as described above.
In order to compute the stationary queue size moments we need the distribution
\begin{equation}
\label{pn}
\lim_{t \rightarrow \infty} \mathbf{P}\{ N(t)=n \}=P_n,\ 0 \le n \le N+1,
\end{equation}
while for the loss rate, due to the PASTA property of Poisson arrivals,
it is sufficient to know the distribution
\begin{equation}
\label{pnx}
\lim_{t \rightarrow \infty} \mathbf{P}\{ N(t)=n, E(t)<x \}=P_n(x), \ 1\le n \le N, \ x \in [0,d].
\end{equation}
Since we are dealing with the finite-capacity
queue and work conserving service discipline, these
stationary distributions exist.
The analytic method for finding $P_n$
has been developed in \cite{Zaryadov2009,Zaryadov2010}.
Even though the renovation mechanism that we consider
differs from the one in \cite{Zaryadov2009,Zaryadov2010},
the method still works.
The distributions (\ref{pn}) and (\ref{pnx}) can be found as follows.
At first we find the stationary distribution \mbox{$\{P^+_n, \ 0 \le n \le N\}$}
of the Markov chain $\{ \nu(t), \ t \ge 0\}$ embedded at service completion
epochs.
Then, using well-known results for Markov regenerative processes (see \cite[Theorem 9.19]{kulk}),
we calculate \mbox{$\{P_n, \ 0 \le n \le N+1\}$} by $P_n=\sum_{i=0}^N P^+_i f_{in} / f^*$,
where $f_{in}$ is the mean time spent by the system in state $n$ between transitions of the embedded chain, starting
from state $i$, and $f^*$ is the mean time between transitions of $\{ \nu(t), \ t \ge 0\}$.
Finally, relations for the functions $P_n(x)$ are found from the results for
the $M/D/1/N$ queue.
Let $\beta_i=[{(\lambda d)^i / i!}]e^{-\lambda d}$ and
$B_0=1-\beta_0$, $B_i=B_{i-1}-\beta_i$, $i\ge 1$.
The entries of the transition probability matrix $P=(p_{ij})$
of the embedded Markov chain $\{ \nu(t), \ t \ge 0\}$
have the form
$$
p_{0j}=p_{1j}=
\begin{cases}
\beta_0, & j=0,\\
\sum_{k=1}^N \beta_k q_{k-1} + B_N q_{N-1}+
\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+ \sum_{k=1}^N \beta_k Q_{k} + B_N q_{N}, & j=1,\\
\sum_{k=j}^N \beta_k q_{k-j} + B_N q_{N-j}, & 2 \le j \le N,
\end{cases}
$$
$$
p_{ij}=
\begin{cases}
0, & j=0,\\
\sum_{k=0}^{N+1-i} \beta_k Q_{k+i-1} + B_{N+1-i} q_{N}
+
\\
\,\,\,\,\,\,\,\,\,\,\,+
\sum_{k=\max(0,2-i)}^{N+1-i} \beta_k q_{k+i-2} + B_{N+1-i} q_{N-1}, & j=1,\\
\sum_{k=\max(0,j-i+1)}^{N+1-i} \beta_k q_{k-j+i-1} + B_{N+1-i} q_{N-j}, & 2 \le j \le N,\\
\end{cases}
\ \ 2 \le i \le N.
$$
The matrix $P=(p_{ij})$ does not have any special structure
and so the values of $P_n^+$ are found by solving the system
of linear algebraic equations
$$
{\vec P}^+={\vec P}^+P, \ \
{\vec P}^+ {\vec 1} =1,
$$
where ${\vec P}^+= (P^+_0,\dots,P^+_N)$.
There are numerous methods for obtaining the solution
(for example, Gaussian elimination; for others one can
refer to \cite{stew}).
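For concreteness, here is a minimal numerical sketch (ours) of solving this system, appending the normalization condition to the balance equations:

```python
import numpy as np

def embedded_stationary(P):
    """Solve vec(P+) = vec(P+) P together with vec(P+) 1 = 1.

    P is the transition matrix of the embedded chain; the appended
    normalization row resolves the redundancy of the balance equations.
    """
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

# Two-state sanity check:
P_example = np.array([[0.9, 0.1],
                      [0.5, 0.5]])
print(embedded_stationary(P_example))  # close to [5/6, 1/6]
```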
In order to compute the values of
\mbox{$\{P_n, \ 0 \le n \le N+1\}$} using the relation
$P_n=\sum_{i=0}^N P^+_i f_{in} / f^*$,
we need expressions for $f_{in}$ and $f^*$.
For the stationary mean time $f^*$
between transitions of the embedded Markov chain $\{ \nu(t), \ t \ge 0\}$
it holds that
$$
f^*=P^+_0 \left ( {1\over \lambda} + d \right )
+ (1-P^+_0) d.
$$
\noindent Since customers that are waiting in the queue
can leave the queue \textit{only} at service completions,
we have
$$
f_{in}
=
\begin{cases}
0, & 0 \le n \le i-1,\\
f_{0,n-i+1}, & i \le n \le N+1,
\end{cases}
\ \ 1 \le i \le N.
$$
Clearly $f_{00}={1/ \lambda}$
and the other values of $f_{0n}$ are computed by conditioning on the number of arrivals
during one service time.
For $f_{01}$ we can write
$$
f_{01}=\int_0^\infty x e^{-\lambda x} dB(x)
+
\int_0^\infty dB(x) \int_0^x t \lambda e^{-\lambda t} dt,
$$
which, since $B(x)$ places all of its mass at $x=d$, so that
\mbox{$\int_0^\infty g(x) dB(x)=g(d)$} for any function $g$,
can be reduced to
$$
f_{01}= d e^{-\lambda d}
+
{1\over \lambda} \left ( 1- \sum_{k=0}^1 {(\lambda d)^k \over k!} e^{-\lambda d} \right ).
$$
It is straightforward to generalize this result for $1 \le n \le N$:
$$
f_{0n}= {1\over \lambda} {(\lambda d)^{n} \over n!} e^{-\lambda d}
+
{1\over \lambda} \left ( 1- \sum_{k=0}^n {(\lambda d)^k \over k!} e^{-\lambda d} \right )
=
{1\over \lambda} \left ( 1- \sum_{k=0}^{n-1} {(\lambda d)^k \over k!} e^{-\lambda d} \right ).
$$
For $f_{0,N+1}$ the expression will be different, which is due to the fact that
the system capacity is finite and at some instant (when the queue becomes full)
the state of the system stops changing due to new arrivals
and will change only when the service is completed.
One way to compute $f_{0,N+1}$ is to consider all possible events, which gives
$$
f_{0,N+1}=
\int_0^\infty x \left ( 1- \sum_{k=0}^{N-1} {(\lambda x)^k \over k!} e^{-\lambda x} \right ) dB(x)
+
\int_0^\infty
{N \over \lambda}
\left ( \sum_{k=0}^{N} {(\lambda x)^k \over k!} e^{-\lambda x} -1 \right )
dB(x)
=
$$
$$
=
d \left ( 1- \sum_{k=0}^{N-1} {(\lambda d)^k \over k!} e^{-\lambda d} \right )
+
{N \over \lambda}
\left ( \sum_{k=0}^{N} {(\lambda d)^k \over k!} e^{-\lambda d} -1 \right ).
$$
\noindent The other way is to recall that $\sum_{n=1}^{N+1}f_{0n}=d$
and thus once $f_{0n}$, $1 \le n \le N$, are computed, $f_{0,N+1}=d-\sum_{n=1}^{N} f_{0n}$.
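The agreement of the two routes to $f_{0,N+1}$ provides a convenient numerical cross-check. A small sketch (ours, with hypothetical parameter values used only in the check) of the closed forms above:

```python
import math

def mean_sojourn_times(lam, d, N):
    """f_{0,n}: mean time spent in state n per embedded transition, from state 0.

    Uses f_{0,n} = (1/lam)(1 - sum_{k<n} beta_k) for 1 <= n <= N, and
    f_{0,N+1} = d - (sum of the others), since states 1..N+1 share time d.
    """
    beta = [(lam * d) ** k / math.factorial(k) * math.exp(-lam * d)
            for k in range(N + 1)]
    f = [1.0 / lam]                              # f_{0,0}
    for n in range(1, N + 1):
        f.append((1.0 - sum(beta[:n])) / lam)
    f.append(d - sum(f[1:]))                     # f_{0,N+1}
    return f, beta
```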
Since the $P_n$ are found, the moments $\mathbf{E} N^m$ of the total number in the system
can be computed according to the definition, i.e.
$\mathbf{E} N^m
=
\sum_{k=0}^{N+1} k^m P_k$.
Coming back to the functions $P_n(x)$, notice that
the differential equations for the functions $p_n(x)=P'_n(x)$ coincide with those for
the classical $M/D/1/N$ queue. Thus
they have the form (see, for example, \cite[subsection 4.14]{Riordan1962}):
\begin{equation}
\label{eq3}
p_n(x)=e^{-\lambda x} [1-B(x)] \sum\limits_{k=0}^{n-1} p_{n-k}(0) {(\lambda x)^k \over k!}, \ 1 \le n \le N.
\end{equation}
Here $p_{n}(0)$ are the boundary conditions, which are in our case different
from the boundary conditions for the classical $M/D/1/N$ due to the presence of renovation.
One could follow the classical argument for obtaining the boundary conditions of
$M/G/1$-type queues, taking renovation into account.
But since the probabilities $P_n=\int_0^d p_n(x) dx$ have been found above,
we can integrate (\ref{eq3}) from $0$ to $\infty$ and find the
relation between $P_n$ and $p_{n}(0)$. This gives
\begin{eqnarray}
P_n &=& \sum\limits_{k=0}^{n-1} {\lambda^k \over k!} p_{n-k}(0)
\int_0^d e^{-\lambda x} [1-B(x)] x^k dx
=
\nonumber
\\
&=&
{1\over \lambda}
\sum\limits_{k=0}^{n-1} B_k p_{n-k}(0), \ 1 \le n \le N.
\label{s1}
\end{eqnarray}
This is a triangular system of $N$ linear algebraic equations
with $N$ unknowns $p_1(0), \dots, p_N(0)$, which can be solved
iteratively, starting from $n=1$:
\begin{eqnarray}
p_n(0)
=
{1 \over B_0}
\left (
\lambda P_n
-
\sum\limits_{k=1}^{n-1} B_k p_{n-k}(0)
\right ), \ 1 \le n \le N.
\end{eqnarray}
Since the values of $p_n(0)$, $1 \le n \le N$,
are now known, the functions $p_n(x)$ can be computed from (\ref{eq3}).
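The triangular recursion is straightforward to transcribe; the sketch below (ours) recovers the boundary values from given $P_n$:

```python
import math

def boundary_values(P, lam, d):
    """Recover p_n(0), 1 <= n <= N, from P_n via the triangular system (s1).

    P is indexed so that P[n] holds P_n for 1 <= n <= N (P[0] is unused).
    """
    N = len(P) - 1
    beta = [(lam * d) ** k / math.factorial(k) * math.exp(-lam * d)
            for k in range(N + 1)]
    B = [1.0 - beta[0]]
    for i in range(1, N + 1):
        B.append(B[i - 1] - beta[i])
    p0 = [0.0] * (N + 1)
    for n in range(1, N + 1):
        p0[n] = (lam * P[n]
                 - sum(B[k] * p0[n - k] for k in range(1, n))) / B[0]
    return p0
```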
\subsection{Loss probability}
Let $\pi$ be the probability that an arriving customer (the tagged customer)
will be lost. Due to the PASTA property of Poisson arrivals,
$p_n(x)$ is also the probability density that the arriving
customer sees $n$ customers in the system and sees the elapsed
service time equal to $x$.
Firstly, the arriving customer is lost
if it sees the system full, which happens with probability
$P_{N+1}$.
If the arriving customer sees the system busy but not full,
then the derivations become tricky.
Indeed, let the arriving customer see one customer in the system
and the elapsed service time equal to $x$ (the probability density of
this event is $p_1(x)$).
Then if no customers arrive until the service is completed
(i.e. during the time $d-x$), then the tagged customer will
not be lost. But if there was at least one arrival
during the remaining service time $d-x$, then
the tagged customer will be lost with probability $Q_1$.
Now assume the tagged customer sees one customer
waiting in the queue and the elapsed service time equal to $x$
(the probability density of this event is $p_2(x)$).
Then if no customers arrive until the current service is completed
(i.e. during the time $d-x$), then the tagged customer
will be lost with some (yet unknown) probability $r_0$.
But if there was at least one arrival during the remaining service time $d-x$, then
the tagged customer will be lost with some (also unknown) probability $r^*_0$.
Clearly $r_0$ ($r^*_0$) is the probability of the event ``the customer is lost if
there are 0 customers in front of it in the queue and no customers (at least one customer) behind it
and the elapsed service time of the customer in the server is 0''
and thus
$$
r_{0} = [1-e^{-\lambda d}] Q_1,
\ \
r^*_{0} = Q_1.
$$
For the tagged customer seeing $i$ customers
waiting in the queue and the elapsed service time equal to $x$,
we will have two probabilities $r_{i-1}$ and $r^*_{i-1}$,
which are computed recursively from $r_{j}$ and $r^*_{j}$, $0\le j<i-1$.
Putting it altogether,
we have the following expression for the loss probability $\pi$:
\begin{eqnarray}
\pi= P_{N+1}+ Q_1 \int_0^d p_1(x) [1-e^{-\lambda (d-x)}] dx
+
\sum\limits_{i=2}^{N-1}
\int_0^d p_i(x) e^{-\lambda (d-x)} dx \sum\limits_{j=0}^{i-2} q_j r_{i-2-j}
+
\nonumber
\\
+
\sum\limits_{i=2}^{N-1}
\int_0^d p_i(x) [1-e^{-\lambda (d-x)}] dx
\left (
\sum\limits_{j=0}^{i-2} q_j r^*_{i-2-j}+Q_i
\right )
+
P_N \sum\limits_{j=0}^{N-2} q_j r_{N-2-j},
\label{ploss}
\end{eqnarray}
where the probabilities $r_{i}$ and $r^*_{i}$ are computed
from relations
\begin{eqnarray}
r_{0} \!\!\!\!\!&=&\!\!\!\!\! [1-e^{-\lambda d}] Q_1,
\\
r^*_{0} \!\!\!\!\!&=&\!\!\!\!\! Q_1,
\\
r_{i} \!\!\!\!\!&=&\!\!\!\!\!
e^{-\lambda d} \sum\limits_{j=0}^{i-1} q_j r_{i-1-j}
\!+\!
[1\!-\!e^{-\lambda d}]
\left (
\sum\limits_{j=0}^{i-1} q_j r^*_{i-1-j}\!+\!Q_{i+1}
\right )\!\!, 1 \le i \le N\!-\!2,
\\
r^*_{i} \!\!\!\!\!&=&\!\!\!\!\!
\sum\limits_{j=0}^{i-1} q_j r^*_{i-1-j}\!+\!Q_{i+1}, 1 \le i \le N\!-\!2.
\end{eqnarray}
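A direct transcription of these recursions (our sketch; note that $r^*_i$ coincides with the bracketed factor in the $r_i$ update, which shortens the code):

```python
import math

def loss_chain_probs(q, lam, d, N):
    """r_i and r*_i from the recursions above; q = [q_0, ..., q_N].

    Q[i] = q_i + ... + q_N.  Since the bracket in the r_i recursion equals
    r*_i, we compute r*_i first and reuse it.
    """
    Q = [sum(q[i:]) for i in range(N + 1)]
    e = math.exp(-lam * d)
    r, rs = [0.0] * (N - 1), [0.0] * (N - 1)
    r[0], rs[0] = (1.0 - e) * Q[1], Q[1]
    for i in range(1, N - 1):
        rs[i] = sum(q[j] * rs[i - 1 - j] for j in range(i)) + Q[i + 1]
        r[i] = e * sum(q[j] * r[i - 1 - j] for j in range(i)) + (1.0 - e) * rs[i]
    return r, rs
```

As a sanity check, with $q_0=1$ (no renovation) all $r_i$ and $r^*_i$ vanish.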
Even though the expression (\ref{ploss}) can be simplified by computing
the integrals explicitly, we do not pursue this here.
For small and moderate values of $d$, $N$ and $\lambda$
the expression (\ref{ploss}) presents almost no computational
difficulties and can be directly used for numerical implementation.
\subsection{Consecutive losses}
We will not derive here the expression for the moments of the consecutive losses
and just notice the following. When comparing with RED-type
schemes, we are interested in consecutive losses between two accepted arrivals.
Due to the fact that losses of two types occur in the system (due to the full queue and due to the renovation)
this is a hard nut to crack.
One of the feasible solutions follows from the results for the distribution
of consecutive losses under a RED scheme. In order to make the exposition
simpler and the argument more transparent,
we change the deterministic service to exponential with rate $\mu$ (thus we deal
in this subsection with the $M/M/1/N$ queue).
Let $L$ be the random variable equal to a length of a series
of consecutive losses.
The probability $\mathbf{P}\{ L=k \}$ is equal to the fraction
\begin{equation}
\label{cl1}
{\mathbf{P} \{\mbox{``an arrival accepted, next $k$ arrivals lost, the next arrival accepted''} \}
\over
\mathbf{P} \{\mbox{``an arrival accepted, the next arrival lost''} \}
}.
\end{equation}
Given that the system is busy, the probability that an arrival occurs earlier than the
service completion is simply $\delta=\lambda/(\lambda+\mu)$.
Again by utilizing the PASTA property of Poisson arrivals
and the law of total probability we obtain the expression for
the denominator in (\ref{cl1}):
\begin{multline*}
\mathbf{P} \{\mbox{``an arrival accepted, the next arrival lost''} \}
=
\\
=
\sum_{n=0}^N P_n (1-d_{n})\sum_{i=1}^{n+1} \delta d_i (1-\delta)^{n+1-i}.
\end{multline*}
The expression for the numerator in (\ref{cl1})
can be obtained by simple recursion.
Denote by $l_{k,i}$ the probability
that, given $i$ customers in the system, the next $k$ consecutive arrivals are lost
and the subsequent arrival is accepted.
It holds
\begin{eqnarray*}
l_{0,i}=\sum_{k=1}^{i} \delta (1- d_k) (1-\delta)^{i-k}
+(1-\delta)^{i}(1- d_0), \ 1 \le i \le N+1,
\\
l_{n,1}=\delta d_1 l_{n-1,1},
\
l_{n,i}=\delta d_i l_{n-1,i}+(1-\delta)l_{n,i-1},
\ 2 \le i \le N+1, \ n \ge 1.
\end{eqnarray*}
Putting all together, we get the expression for
$\mathbf{P}\{ L=k \}$:
$$
\mathbf{P}\{ L=k \}
=
{
\sum\limits_{n=0}^N P_n (1-d_{n}) l_{k,n+1}
\over
\sum\limits_{n=0}^N P_n (1-d_{n})\sum\limits_{i=1}^{n+1} \delta d_i (1-\delta)^{n+1-i}
}, \ k \ge 1.
$$
Now the moments of the number of consecutive losses can be calculated according
to the definition.
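These recursions are easy to implement; the sketch below (ours) takes the stationary probabilities $P_n$ and the state-dependent drop probabilities $d_i$ (defined earlier in the paper and treated here as given inputs) and indexes the base case of the recursion as zero losses:

```python
def consecutive_loss_dist(P, drop, delta, kmax):
    """P{L = k}, k = 1..kmax, for the M/M/1/N queue with drop probabilities.

    P[n]   : stationary probability of n customers, n = 0..N;
    drop[i]: drop probability d_i for an arrival seeing i customers,
             i = 0..N+1 (d_{N+1} = 1 when the buffer is full);
    delta  : lam / (lam + mu).
    l[n][i] = probability that, with i customers present, the next n
    arrivals are lost and the following one is accepted.
    """
    N = len(P) - 1
    l = [[0.0] * (N + 2) for _ in range(kmax + 1)]
    for i in range(1, N + 2):
        l[0][i] = (sum(delta * (1 - drop[k]) * (1 - delta) ** (i - k)
                       for k in range(1, i + 1))
                   + (1 - delta) ** i * (1 - drop[0]))
    for n in range(1, kmax + 1):
        l[n][1] = delta * drop[1] * l[n - 1][1]
        for i in range(2, N + 2):
            l[n][i] = delta * drop[i] * l[n - 1][i] + (1 - delta) * l[n][i - 1]
    denom = sum(P[n] * (1 - drop[n])
                * sum(delta * drop[i] * (1 - delta) ** (n + 1 - i)
                      for i in range(1, n + 2))
                for n in range(N + 1))
    return [sum(P[n] * (1 - drop[n]) * l[k][n + 1] for n in range(N + 1)) / denom
            for k in range(1, kmax + 1)]
```

For the degenerate case $N=0$, $d_0=0$, $d_1=1$, $\delta=1/2$, the distribution is geometric, $\mathbf{P}\{L=k\}=2^{-k}$, as one can verify by hand.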
\section{Option 2}
\subsection{Stationary distribution}
The distributions $P_n$ and $P_n(x)$,
as defined by (\ref{pn}) and (\ref{pnx}),
can be found by following the same steps as
in the previous section. The only difference
will be in the expressions for $P_n^+$
of the embedded Markov chain
$\{ \nu(t), \ t \ge 0\}$.
It is straightforward to see that
the entries of the transition probability matrix $P=(p_{ij})$
of the embedded Markov chain $\{ \nu(t), \ t \ge 0\}$
under \textit{Option 2}
have the form
$$
p_{0j}=p_{1j}=
\begin{cases}
\beta_0, & j=0,\\
\beta_j Q_j + \sum_{k=j}^N \beta_k q_{k-j} + B_N q_{N-j}, & 1 \le j \le N-1,\\
(q_0 + q_N) B_{N-1}, & j=N,
\end{cases}
$$
$$
p_{ij}=
\begin{cases}
0, & j=0,\\
\sum_{k=j-1}^{N-1} \beta_k q_{k-j+1} + B_{N-1} q_{N-j}, & 1 \le j \le i-2,\\
\beta_{j-i+1} Q_j + \sum_{k=j-1}^{N-1} \beta_k q_{k-j+1} + B_{N-1} q_{N-j}, & i-1 \le j \le N-1,\\
(q_{0} + q_{N})B_{N-i} , & j=N,
\end{cases}
\ 2 \le i \le N.
$$
The matrix $P=(p_{ij})$, just like in the case of \textit{Option 1},
does not have any special structure.
So the probabilities $P_n^+$ are found by solving the system
of linear algebraic equations
$$
{\vec P}^+={\vec P}^+P, \ \
{\vec P}^+ {\vec 1} =1,
$$
where ${\vec P}^+= (P^+_0,\dots,P^+_N)$.
Now the distributions $P_n$ and $P_n(x)$,
and moments of the number in the system
can be computed using the relations in the previous section.
\subsection{Loss probability}
Let $\pi$ be the probability that the arriving customer (or tagged customer)
will be lost. The expression for $\pi$ under \textit{Option 2}
is more involved than under \textit{Option 1} given by (\ref{ploss}).
This is due to the fact that the accepted customer
may be lost either after the fist service completion or the second etc.
and the chance to be lost varies, depending on the number of
new customers, that arrived between successive service completions.
Let us introduce two quantities:
$\gamma_{ij}$, $1 \le i \le N$, $j \ge 0$, --- probability that the arriving customer
finds $i$ customers in the system and until the next
service completion exactly $j$ new customers arrive
at the system;
$r_{ij}$, $0\le j \le N-1$, $0 \le i \le N-j-1$, --- probability that the customer waiting in the queue
\textit{will not} be served, if there are $j$ customers in front of it in the queue (excluding the one in server)
and $i$ behind.
Given that $\gamma_{ij}$ and $r_{ij}$ are known,
the loss probability $\pi$ can be computed as
\begin{eqnarray}
\pi
=&& \!\!\!\!\!\!\!\!\!\!
P_{N+1}
+
\sum_{i=1}^{N}
\sum_{j=0}^{N-i}
\gamma_{ij}
\left (
\sum_{k=0}^{i-2}
q_k r_{j,i-2-k}
+
\sum_{k=i}^{i+j-1}
q_k
+
Q_{j+i}
r_{j,i-2}
\right )
+
\nonumber
\\
&&+
\sum_{i=1}^{N}
\sum_{j=N-i+1}^{\infty}
\gamma_{ij}
\left (
\sum_{k=0}^{i-2}
q_k r_{N-i,i-2-k}
+
\sum_{k=i}^{N-1}
q_k
+
Q_{N}
r_{N-i,i-2}
\right ).
\label{ploss2}
\end{eqnarray}
Due to the PASTA property of Poisson arrivals,
the expression for $\gamma_{ij}$ simply follows
from the law of total probability:
\begin{equation}
\gamma_{ij}
=
\int_0^d p_{i}(x) {(\lambda x)^j \over j!} e^{-\lambda x} dx,
\ 1 \le i \le N, \ j \ge 0.
\end{equation}
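Since $p_i(x)$ has the explicit series form (\ref{eq3}), each $\gamma_{ij}$ reduces to a one-dimensional integral that can be evaluated by quadrature; a sketch (ours, using Simpson's rule, with hypothetical boundary values in the check):

```python
import math

def gamma_ij(p0, lam, d, i, j, m=2000):
    """gamma_{ij} = int_0^d p_i(x) (lam x)^j / j! e^{-lam x} dx (Simpson rule).

    p_i(x) is taken from the series representation (eq3) with boundary
    values p0[1..i]; p0[0] is unused.  m must be even.
    """
    def p_i(x):
        return math.exp(-lam * x) * sum(
            p0[i - k] * (lam * x) ** k / math.factorial(k) for k in range(i))

    def f(x):
        return p_i(x) * (lam * x) ** j / math.factorial(j) * math.exp(-lam * x)

    h = d / m
    s = (f(0.0) + f(d)
         + 4 * sum(f((2 * t - 1) * h) for t in range(1, m // 2 + 1))
         + 2 * sum(f(2 * t * h) for t in range(1, m // 2)))
    return s * h / 3.0
```

For $i=1$, $j=0$ the integral is $p_1(0)(1-e^{-2\lambda d})/(2\lambda)$ in closed form, which the quadrature reproduces.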
Again by applying the law of total probability,
we get the relations for the recursive computation of $r_{ij}$, $0\le j \le N-1$, $0 \le i \le N-j-1$:
\begin{eqnarray}
r_{i0}
=
&& \!\!\!\!\!\!\!\!\!\!\!\!
\sum_{m=0}^{N-i-1}
\beta_m
\sum_{k=1}^{m+i}
q_k
+
\sum_{m=N-i}^{\infty}
\beta_m
\sum_{k=1}^{N-1}
q_k,
\\
r_{ij}=
&& \!\!\!\!\!\!\!\!\!\!\!\!
\sum_{m=i}^{N-1-j}
\beta_{m-i}
\left (
\sum_{k=0}^{j-1}
q_k r_{m,j-1-k}
\!+\!
\sum_{k=j+1}^{m+j}
q_k
\!+\!
Q_{j+m+1}
r_{j,j-1}
\right )
+
\nonumber
\\
&& \!\!\!\!\!\!\!\!\!\!\!\! +
\sum_{m=N-j-i}^{\infty}
\beta_m
\left (
\sum_{k=0}^{j-1}
q_k r_{N-j-1,j-1-k}
\!+\!
\sum_{k=j+1}^{N-1}
q_k
\!+\!
Q_{N}
r_{N-j-1,j-1}
\right )\!\!.
\end{eqnarray}
The expressions above can be further simplified by computing
the integrals explicitly, but we will not do it since
for small and moderate values of $d$, $N$ and $\lambda$
they can be directly used for numerical implementation.
\section{Conclusion}
Although the renovation mechanism is based on a completely different idea
than the RED-type AQMs, numerical experiments
show that it allows one to achieve
comparable system performance.
Yet the choice of the values of its
parameters $q_i$ presents certain difficulties.
We are unaware of any analytic way of choosing
$q_i$ and thus we have to resort to
special search algorithms.
Metaheuristics (like particle swarm optimization)
are also applicable here.
\section*{Acknowledgements}
This work was supported by the Russian Foundation for Basic Research (grant 15-07-03406).
\section*{References}
\section{Introduction}
\label{sec:intro}
Automatic verification of semantic properties of modern programming languages
is an important step toward reliable software systems.
For higher-order programming languages with inductive datatypes
or polymorphic instantiation, the main verification tool has been type systems,
which traditionally capture only coarse data-type properties (such as that $\mathtt{int}$s are
only added to $\mathtt{int}$s),
and require the programmer to explicitly annotate program invariants when
more precise properties of program computations are required.
For example, \emph{refinement} type systems \cite{XiPfenning99}
associate data types with refinement predicates that capture richer properties of
program computation.
Using refinement types, one can state, for instance, that a program variable $\mathtt{xs}$ has the refinement type
``non-zero integer," or that the integer division function has the refinement type
$\mathtt{int} \rightarrow \reftyp{\valu}{\mathtt{int}}{\valu \not = 0} \rightarrow \mathtt{int}$
which states that the second argument must be non-zero.
Then, if a program type-checks in the refinement type system, one can assert that there is no
division-by-zero error in the program.
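As an executable illustration (ours; a refinement type checker would discharge this check statically, rather than at run time as below), the invariant captured by the refinement $\{v:\mathtt{int} \mid v \neq 0\}$ corresponds to the following dynamic contract:

```python
def nonzero(v):
    """Dynamic stand-in for the refinement type {v : int | v != 0}."""
    if not (isinstance(v, int) and v != 0):
        raise TypeError("refinement violated: expected a non-zero int")
    return v

def div(x, y):
    """int -> {v:int | v != 0} -> int: division that cannot fault."""
    return x // nonzero(y)
```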
The idea of refinement types to express precise program invariants is
well-known~\cite{XiPfenning99,Ou2004,ATS,Dunfield,Flanagan06,GordonRefinement09}.
However, in each of the above systems, the programmer must provide refinements for
each program type, and the type system {\em checks} the provided type refinements for
consistency.
We believe that this burden of annotations has limited the widespread adoption of refinement
type systems.
For {\em imperative} programming languages, algorithms based on abstract interpretation
can be used to {\em automatically infer} many program invariants
\cite{SLAMPOPL02,HJMM04,CousotPLDI03}, thereby proving many semantic properties of practical interest.
However, these tools do not precisely model modern programming features such as closures
and higher-order functions or inductive datatypes, and so in practice, they
are too imprecise when applied to higher-order programs.
In this paper, we present an algorithm to {\em automatically}
verify properties of higher-order programs through
refinement type inference (RTI) by
combining refinement type systems for higher-order programs
with invariant synthesis techniques for first-order programs.
Our main technical contribution is a translation
from type constraints derived from a refinement type system for
higher-order programs to a first-order imperative program with assertions,
such that the assertions hold in the first-order program
iff there is a refinement type that makes the higher-order program
type-check.
Moreover, a suitable type refinement for the higher-order program
can be constructed from the invariants of the first-order program.
Thus, our algorithm replaces the manual annotation burden for refinement types with
automatically constructed invariants of the translated program,
enabling fully automatic verification of programs written
in modern languages.
\begin{figure}[t]
\vspace{1ex}
\centering
\begin{minipage}[t]{.8\columnwidth}
\tikzset{
state/.style={
rectangle,
rounded corners,
draw=black, thick,
minimum height=2em,
minimum width=10em,
inner sep=2pt,
text centered,
},
}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance=1.2cm,
semithick]
\tikzstyle{every state}=[draw=black]
\node[draw=none] (ML) {
\begin{minipage}[t]{.9\columnwidth}
\centering
OCaml Program\\(with assertions)
\end{minipage}
};
\node[state,below of=ML] (Gen) {Constraint Generation};
\path (ML) edge (Gen);
\node[state,below of=Gen,double] (RTI) {RTI Translation};
\path (Gen) edge node[right]{Subtyping Constraints} (RTI);
\node[state,below of=RTI] (AI) {Abs. Interpretation};
\path (RTI) edge node[right]{Simple IMP Program} (AI);
\node[draw=none,node distance=1.5cm,below left of=AI] (Safe) {Safe};
\path (AI) edge (Safe);
\node[draw=none,node distance=1.5cm,below right of=AI] (Unsafe) {Unsafe};
\path (AI) edge (Unsafe);
\node at (RTI.center) {
\fbox{
\begin{minipage}[t]{19em}
\mbox{}
\vspace{10.5em}
\end{minipage}}
};
\end{tikzpicture}
\end{minipage}
\caption{RTI algorithm.}
\label{fig:algo}
\end{figure}
\iffalse
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.7in]{figs/rti}
\caption{\textbf{\HMC Algorithm}}
\label{fig:algo}
\end{center}
\end{figure}
\fi
\noindent
The RTI algorithm (Figure~\ref{fig:algo})
proceeds in three steps.
\mypara{Step 1: Type-Constraint Generation.}
First, it performs Hindley-Milner type inference \cite{Milner82} to construct
\ML types for the program, and uses these types to generate
\emph{refinement templates}, \textit{i.e.,}\xspace types in which
\emph{refinement variables} $\kappa$ are used
to represent the unknown refinement predicates.
Then, the algorithm uses a standard syntax-directed procedure
to generate subtyping constraints over the templates
such that the program type checks (\textit{i.e.,}\xspace is safe) if
the subtyping constraints are satisfiable
\cite{XiPfenning99,Knowles07,LiquidPLDI08,GordonRefinement09}.
\mypara{Step 2: Translation.}
Second, it translates the set of type constraints
to a \emph{first-order, imperative program over base values}
such that the type constraints are satisfiable if and only if
the imperative program does not violate any assertions.
\mypara{Step 3: Abstract Interpretation.}
Finally, an abstract interpretation technique for first order
imperative programs is used to prove that the first order
program is safe.
The proof of safety produced by this analysis automatically
translates to solutions to the
refinement type variables, thus generating refinement types for the
original \ML program.
The main contribution of this paper is the RTI translation algorithm.
The advantage of the translation is that
it allows one to apply any of the well-developed semantic
imperative program analyses based on abstract interpretation (\textit{e.g.,}\xspace
polyhedra~\cite{CousotHalbwachs78} and octagons~\cite{CousotPLDI03},
counterexample-guided predicate abstraction refinement
(CEGAR)~\cite{SLAMPOPL02,HJMM04},
Craig interpolation~\cite{HJMM04,McMillan06},
constraint-based invariant
generation~\cite{Sankaranarayanan05,RybalchenkoVMCAI07},
random interpretation~\cite{Gulwani03},
{\it etc.})
to the verification of modern software with
polymorphism, inductive datatypes, and higher-order functions.
Instead of painstakingly reworking each semantic analysis
for imperative programs to the higher order setting,
possibly re-implementing them in the process, one
can use our translation, and apply any existing analysis as is.
In fact, using the translation, our implementation {\em directly}
uses a CEGAR and interpolation based safety verification tool
to verify properties of \ocaml programs.
In essence, our algorithm separates syntactic reasoning about function calls
and inductive data types (handled well by typing constraints) from
semantic reasoning about data invariants (handled well by abstract domains).
The translation from refinement type constraints to
imperative programs in Step~2 is the key enabler.
The translation, and the proof that the satisfiability of type constraints and
safety of the translated program are equivalent, are based on the following
observations.
The first observation is that refinement type
variables $\kappa$ define \emph{relations} over the value being
defined by the refinement type
and the finitely many variables that are in-scope at the
point where the type is defined.
In the imperative program, each finite-arity relation can be encoded
by a relation-valued program variable.
Each refinement type constraint can be encoded as a straight-line
sequence that reads tuples from and writes tuples to the relation variables,
and the set of constraints can be encoded as a non-terminating while-loop
that in each iteration, non-deterministically executes
one of the blocks.
Thus, the problem of determining the existence of appropriate relations
reduces to that of computing (overapproximations) of the set of tuples
in each relation variable in the translated
program~(Theorem~\ref{th:translate}).
Our second observation is that if the translated program is in a special
\emph{read-write-once} form, where within each straight-line block
a relation variable is read and written \emph{at most once},
then one can replace all relation-valued variables with variables whose
values range over tuples~(Theorem~\ref{th:rwo-equiv}).
Moreover, we prove that we can, without affecting satisfiability,
preprocess the refinement typing constraints so that the translated program is
a read-write-once program~(Theorem~\ref{th:clone}).
Together, the observations yield a simple and direct translation
from refinement type inference to simple imperative programs.
We have instantiated our algorithm in a verification tool for \ocaml programs.
Our implementation
generates refinement type constraints using the algorithm of~\cite{LiquidPLDI08},
and uses the \ARMC~\cite{PADL07} software model checker to verify the translated programs.
This allows fully automatic verification of a set of \ocaml benchmarks
for which previous approaches either required manual annotations
(either the refinement types \cite{XiPfenning99} or their constituent
predicates \cite{LiquidPLDI08}), or an elaborate customization and
adaptation of the counterexample-guided abstraction refinement
paradigm~\cite{TerauchiPOPL2010}.
Thus, we show, for the first time, how abstract interpretation can be
lifted ``as-is'' to the practical refinement type inference for
modern, higher-order languages.
While we have focused on the verification of
functional programs, our approach is language independent,
and requires only an appropriate refinement type system for the source
language.
\section{Overview}
\begin{figure}[t]
\begin{small}
\begin{center}
\begin{verbatim}
let rec iteri i xs f =
match xs with
| [] -> ()
| x::xs' -> f i x;
iteri (i+1) xs' f
let mask a xs =
let g j y = a.(j) <- y && a.(j) in
if Array.length a = List.length xs then
iteri 0 xs g
\end{verbatim}
\end{center}
\end{small}
\caption{\ML Example}
\label{ex:ml-abc}
\end{figure}
We begin with an example that illustrates how our refinement
type inference (\HMC) algorithm combines type
constraints and abstract interpretation to automatically verify safety properties of
\emph{functional} \ML programs with higher-order functions and
recursive structures.
We show that the combination of syntactic type constraints
and semantic abstract interpretation enables the automatic verification of properties
that are currently beyond the scope of either technique in isolation.
\mypara{An \ML Example. }
Figure~\ref{ex:ml-abc} shows a simple ML program that
updates an array $\mathtt{a}$ using the elements of the list $\mathtt{xs}$.
The program comprises two functions.
The first is a higher-order list \emph{indexed-iterator}, $\mathtt{iteri}$,
that takes as arguments a starting index $\mathtt{i}$,
a (polymorphic) list $\mathtt{xs}$,
and an iteration function $\mathtt{f}$.
The iterator goes over the elements of the list and invokes $\mathtt{f}$ on each element
and the index corresponding to the element's position in the list.
The second is a client, $\mathtt{mask}$, of the iterator $\mathtt{iteri}$ that takes as input a
boolean array $\mathtt{a}$ and a list of boolean values $\mathtt{xs}$, and if the
lengths match, calls the indexed iterator with an iteration function $\mathtt{g}$
that masks the $\mathtt{j}^{th}$ element of the array.
Suppose that we wish to statically verify the safety of the array reads and writes
in function $\mathtt{g}$; that is to prove that whenever $\mathtt{g}$ is invoked,
$0 \leq \mathtt{j} < {\mathtt{len}}\xspace(\mathtt{a})$.
As this example combines higher-order functions, recursion, data-structures, and
arithmetic constraints on array indices, it is difficult to analyze automatically
using either existing type systems or abstract interpretation implementations in isolation.
The former do not precisely handle arithmetic on indices, and the latter
do not precisely handle higher-order functions and are often imprecise on
data structures.
We show how our \HMC technique can automatically prove the correctness
of this program.
\mypara{Refinement Types.}
To verify the program, we compute program invariants that are expressed
as \emph{refinements} of \ML types with predicates over program
values \cite{Knowles07,GordonRefinement09,LiquidPLDI08}.
The predicates are additional constraints that must be satisfied by
every value of the type. A base value, say of type $\mathtt{int}$,
can be described by the refinement type
$\reftyp{\valu}{\mathtt{int}}{p}$
where $\valu$ is a special \emph{value variable} representing the type
being defined, and $p$ is a \emph{refinement predicate} which constrains
the range of $\valu$ to a subset of integers.
For example, the type
$\reftyp{\valu}{\mathtt{int}}{0 \leq \valu < {\mathtt{len}}\xspace(\mathtt{a})}$
denotes the set of integers $c$ that are between $0$ and the
value of the expression ${\mathtt{len}}\xspace(\mathtt{a})$.
Thus, the unrefined type $\mathtt{int}$ abbreviates $\reftyp{\valu}{\mathtt{int}}{{\it true}}$,
which does not constrain the set of integers.
Base types can be combined to construct \emph{dependent function types},
written $\ftyp{\mathtt{x}}{T_1} \rightarrow T_2$,
where $T_1$ is the type of the domain,
$T_2$ the type of the range,
and where the name $\mathtt{x}$ for the
formal parameter can appear in the refinement predicates
in $T_2$.
For example, the type
$$\ftyp{\mathtt{x}}{\reftyp{\valu}{\mathtt{int}}{\valu\geq 0}} \rightarrow \reftyp{\valu}{\mathtt{int}}{\valu = \mathtt{x}+1}$$
is the type of a function which takes a non-negative integer parameter and returns an
output which is one more than the input.
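For instance, the following OCaml function (a sketch of ours; the \texttt{assert} is merely a dynamic stand-in for the static precondition) inhabits this dependent type:

```ocaml
(* An inhabitant of x:{v:int | v >= 0} -> {v:int | v = x + 1};
   the assert is a dynamic stand-in for the static precondition,
   and the body makes the postcondition v = x + 1 immediate. *)
let succ_nonneg (x : int) : int =
  assert (x >= 0);
  x + 1
```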
In the following, we write $\tau$ for the type $\reftyp{\valu}{\tau}{{\it true}}$.
When $\valu$ and $\tau$ are clear from the context, we write
$\sreftyp{p}$ for $\reftyp{\valu}{\tau}{p}$.
\mypara{Safety Specification.}
Refinement types can be used to \emph{specify} safety properties by
encoding pre-conditions into primitive operations of the language.
For example, consider the array read $\mathtt{\mathtt{a}.(\mathtt{j})}$ (resp.\
write $\mathtt{\mathtt{a}.(\mathtt{j}) \leftarrow \mathtt{e}}$) in $\mathtt{g}$ which is an
abbreviation for ${\mathtt{get}\ \mathtt{a}\ \mathtt{j}}$ (resp.\ ${\mathtt{set}\ \mathtt{a}\ \mathtt{j}\
\mathtt{e}}$).
By giving $\mathtt{get}$ and $\mathtt{set}$ the refinement types
\begin{align*}
& \ftyp{\mathtt{a}}{\alpha \mathtt{intarray}} \rightarrow
{\reftyp{\valu}{\mathtt{int}}{0 \leq \valu < {\mathtt{len}}\xspace(\mathtt{a})}} \rightarrow
\alpha\ , \\
& \ftyp{\mathtt{a}}{\alpha \mathtt{intarray}} \rightarrow
{\reftyp{\valu}{\mathtt{int}}{0 \leq \valu < {\mathtt{len}}\xspace(\mathtt{a})}} \rightarrow
\alpha \rightarrow
\mathtt{unit}\ ,
\end{align*}
we can specify that in any program the array accesses must be within
bounds. More generally, arbitrary safety properties can be specified
by giving $\mathtt{assert}$ the appropriate refinement type \cite{LiquidPLDI08}.
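Operationally, these refinement types correspond to the following dynamic checks (a sketch of ours; OCaml's checked array primitives already perform this test at runtime, whereas the refinement types discharge it statically):

```ocaml
(* Dynamic analogues of the refinement-typed primitives: each assert
   mirrors the index refinement 0 <= j < len(a); a refinement type
   checker discharges the same check statically, at compile time. *)
let get (a : 'a array) (j : int) : 'a =
  assert (0 <= j && j < Array.length a);
  Array.unsafe_get a j

let set (a : 'a array) (j : int) (x : 'a) : unit =
  assert (0 <= j && j < Array.length a);
  Array.unsafe_set a j x
```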
\mypara{Safety Verification.}
The \ML type system is too imprecise to prove the safety
of the array accesses in our example as it infers that $\mathtt{g}$
has type
${\ftyp{\mathtt{j}}{\mathtt{int}} \rightarrow \ftyp{\mathtt{y}}{\mathtt{bool}} \rightarrow \mathtt{unit}}$,
\textit{i.e.,}\xspace that $\mathtt{g}$ can be called with \emph{any} integer $\mathtt{j}$.
If the programmer manually provides the refinement types
for all functions and polymorphic type instantiations,
refinement-type checking~\cite{XiPfenning99,Dunfield,GordonRefinement09}
can be used to verify that the provided types were consistent
and strong enough to prove safety.
This is analogous to providing pre- and post-conditions and loop-invariants
for verifying imperative programs.
For our example, the refinement type system could check the program
if the programmer provided the types:
$$
\begin{array}{rl}
\mathtt{iteri} :: & \ftyp{\mathtt{i}}{\mathtt{int}} \rightarrow
\ftyp{\mathtt{xs}}{\reftyp{\valu}{\alpha\ \ \mathtt{list}}{0 \leq {\mathtt{len}}\xspace(\valu)}} \rightarrow \\
& (\ftyp{\mathtt{j}}{\sreftyp{\mathtt{i} \leq \valu < \mathtt{i} + {\mathtt{len}}\xspace(\mathtt{xs})}} \rightarrow \alpha \rightarrow \mathtt{unit}) \rightarrow \mathtt{unit}\\
\mathtt{g} :: & \ftyp{\mathtt{j}}{\sreftyp{0 \leq \valu < {\mathtt{len}}\xspace(\mathtt{a})}} \rightarrow \mathtt{bool} \rightarrow \mathtt{unit}\\
\end{array}
$$
Here, we omitted refinement predicates that are equal to true, e.g.,
for \texttt{i} in the type of~\texttt{iteri}.
\mypara{Automatic Verification via \HMC.}
As even this simple example illustrates, the type annotation burden for
verification is extremely high.
Instead, we would like to verify the program without requiring the programmer
to provide every refinement type.
The \HMC algorithm proceeds in three steps.
First, we syntactically analyze the \emph{source} program to generate
subtyping constraints over refinement templates.
Second, we translate the constraints into an equivalent simple imperative
\emph{target} program.
Third, we semantically analyze the target program to determine whether
it is safe, from which we conclude that the constraints are satisfiable and
hence, the source program is safe.
Next, we illustrate these steps using Figure~\ref{ex:ml-abc} as
the source program.
\subsection{Step 1: Constraint Generation}
In the first step, we generate a system of refinement type constraints
for the source program \cite{Knowles07,LiquidPLDI08}.
To do so, we
(a)~build templates that refine the \ML types with
refinement variables that stand for the unknown refinements, and
(b)~make a syntax-directed pass over the program to generate subtyping
constraints that capture the flow of values.
For the functions $\mathtt{iteri}$ and $\mathtt{g}$ from Figure~\ref{ex:ml-abc},
with the respective \ML types
\begin{align*}
&\ftyp{\mathtt{i}}{\mathtt{int}}
\rightarrow \ftyp{\mathtt{xs}}{\alpha\ \ \mathtt{list}}
\rightarrow (\ftyp{\mathtt{j}}{\mathtt{int}} \rightarrow \alpha \rightarrow \mathtt{unit}) \rightarrow
\mathtt{unit} \\
& \ftyp{\mathtt{j}}{\mathtt{int}} \rightarrow \mathtt{bool} \rightarrow \mathtt{unit}
\intertext{we would generate the respective templates}
& \ftyp{\mathtt{i}}{\mathtt{int}}
\rightarrow \ftyp{\mathtt{xs}}{\sreftyp{0 \leq {\mathtt{len}}\xspace(\valu)}}
\rightarrow (\ftyp{\mathtt{j}}{\sreftyp{\kappa_1}} \rightarrow \alpha \rightarrow \mathtt{unit}) \rightarrow
\mathtt{unit} \\
& \ftyp{\mathtt{j}}{\sreftyp{\kappa_2}} \rightarrow \mathtt{bool} \rightarrow \mathtt{unit}
\end{align*}
Notice that these templates simply refine the \ML types with refinement
variables $\kappa_1, \kappa_2$ that stand for the unknown refinements.
For clarity of exposition, we have added the refinement ${\it true}$
for some variables (\textit{e.g.,}\xspace for the type $\alpha$ and $\mathtt{bool}$);
our system would automatically infer the unknown refinements.
We model the length of lists (resp.\ arrays) with an uninterpreted
function $\mathtt{len}$ from the lists (resp.\ arrays) to integers, and
(again, for brevity) add the refinement stating $\mathtt{xs}$
has a non-negative length in the type of $\mathtt{iteri}$.
After creating the templates, we make a syntax-directed pass over the
program to generate constraints that capture relationships
between refinement variables. There are two kinds of type constraints --
{\em well-formedness} and {\em subtyping}.
\mypara{Well-formedness Constraints }
capture scoping rules, and ensure that the
refinement predicate for a type can only refer to variables that are in scope.
Our example has two constraints:
\begin{align}
\ftyp{\mathtt{i}}{\mathtt{int}}; \ftyp{\mathtt{xs}}{\alpha\ \ \mathtt{list}} \vdash\ & \reftyp{\valu}{\mathtt{int}}{\kappa_1} \label{eq:w1} \tag{w1} \\
\ftyp{\mathtt{a}}{\mathtt{bool}\ \mathtt{intarray}}; \ftyp{\mathtt{xs}}{\alpha\ \ \mathtt{list}} \vdash\ & \reftyp{\valu}{\mathtt{int}}{\kappa_2} \label{eq:w2} \tag{w2}
\end{align}
The first constraint states that $\kappa_1$, which represents the unknown
refinement for the first parameter of the iteration function passed to the
higher-order iterator $\mathtt{iteri}$, can only refer to the two program
variables that are in scope at that point, namely $\mathtt{i}$ and $\mathtt{xs}$.
Similarly, the second constraint states that $\kappa_2$, which refines
the first argument of $\mathtt{g}$, can only refer to $\mathtt{a}$ and $\mathtt{xs}$, which
are in scope where $\mathtt{g}$ is defined.
\mypara{Subtyping Constraints }
reduce the flow of values within the program into subtyping
relationships that must hold between the source and target of the flow.
Each constraint is of the form
\begin{align}
G \vdash\ & T_1 <: T_2 \notag
\intertext{where $G$ is an \emph{environment} comprising a sequence of type bindings,
and $T_1$ and $T_2$ are refinement templates.
The constraint intuitively states that under the environment $G$, the
type $T_1$ must be a subtype of $T_2$.
The subtyping constraints are generated syntactically from the code.
First consider the function $\mathtt{iteri}$.
The call to $\mathtt{f}$ generates}
G \vdash\ & \sreftyp{\valu = \mathtt{i}} <: \set{\kappa_1} \label{eq:c1} \tag{c1}
\intertext{where the environment $G$ comprises the bindings}
G \doteq\ & \ftyp{\mathtt{i}}{\sreftyp{{\it true}}};\ \ftyp{\mathtt{xs}}{\sreftyp{0 \leq {\mathtt{len}}\xspace(\valu)}}; \notag \\
& \ftyp{\mathtt{x}}{\sreftyp{{\it true}}};\ \ftyp{\mathtt{xs'}}{\sreftyp{0 \leq {\mathtt{len}}\xspace(\valu) = {\mathtt{len}}\xspace(\mathtt{xs}) - 1}}
\notag
\intertext{the constraint ensures that at the callsite,
the type of the actual is a subtype of the formal.
The bindings in the environment
are simply the refinement templates for the variables in scope at the point
the value flow occurs. The type system yields the information
that the length of $\mathtt{xs'}$ is one less than $\mathtt{xs}$ as the former is the
tail of the latter \cite{XiPfenning99,LiquidPLDI09}.
Similarly, the recursive call to $\mathtt{iteri}$ generates}
G \vdash\ & \set{\ftyp{\mathtt{j}}{\kappa_1} \rightarrow \alpha \rightarrow \mathtt{unit}} <: \notag \\
& \set{(\ftyp{\mathtt{j}}{\kappa_1} \rightarrow \alpha \rightarrow
\mathtt{unit})[\mathtt{i}+1/\mathtt{i}][\mathtt{xs'}/\mathtt{xs}]}
\notag
\intertext{which states that type of the actual $\mathtt{f}$ is
a subtype of the third formal parameter of $\mathtt{iteri}$
after applying substitutions
$\SUBST{}{\mathtt{i}}{\mathtt{i}+1}$ and $\SUBST{}{\mathtt{xs}}{\mathtt{xs'}}$
that capture the passing in of the actuals
$\mathtt{i}+1$ and $\mathtt{xs'}$ for the first two parameters respectively.
By pushing the substitutions inside and applying the standard rules for
function subtyping, this constraint simplifies to}
G \vdash\ & \set{\SUBST{\SUBST{\kappa_1}{\mathtt{i}+1}{\mathtt{i}}}{\mathtt{xs'}}{\mathtt{xs}}} <:
\set{\kappa_1} \label{eq:c2} \tag{c2}
\end{align}
Next, consider the function $\mathtt{mask}$. The array accesses inside $\mathtt{g}$
generate the ``bounds-check" constraint
\begin{align}
G'; \ftyp{\mathtt{j}}{\set{\kappa_2}}; \ftyp{\mathtt{y}}{\sreftyp{{\it true}}} \vdash\ & \sreftyp{\valu = \mathtt{j}} <:
\sreftyp{0 \leq \valu < {\mathtt{len}}\xspace(\mathtt{a})} \label{eq:c3} \tag{c3}
\intertext{where $G' \doteq\ \ftyp{\mathtt{a}}{\mathtt{bool}\ \mathtt{intarray}}; \ftyp{\mathtt{xs}}{\sreftyp{0 \leq {\mathtt{len}}\xspace (\valu)}}$
has bindings for the other variables in scope.
Finally, the flow due to the third parameter for the call to $\mathtt{iteri}$ yields}
G'; \mathtt{len}(\mathtt{a}) = {\mathtt{len}}\xspace(\mathtt{xs}) \vdash\
& \set{\ftyp{\mathtt{j}}{\kappa_2} \rightarrow \tau} <: \notag \set{\SUBST{(\ftyp{\mathtt{j}}{\kappa_1} \rightarrow \tau)}{\mathtt{i}}{0}}
\intertext{where for brevity we write $\tau$ for $\mathtt{bool} \rightarrow
\mathtt{unit}$, and omit the trivial substitution $\SUBST{}{\mathtt{xs}}{\mathtt{xs}}$ due to
the second parameter.
The last conjunct in the environment captures the guard
from the $\mathtt{if}$ under whose auspices the call occurs.
By pushing the substitutions inside and
applying standard function subtyping, the above reduces to}
G'; \mathtt{len}(\mathtt{a}) = {\mathtt{len}}\xspace(\mathtt{xs}) \vdash\
& \set{\SUBST{\kappa_1}{\mathtt{i}}{0}} <: \set{\kappa_2}
\label{eq:c4} \tag{c4}
\end{align}
For brevity we omit trivial constraints like ${\cdot \vdash\ \mathtt{int} <: \mathtt{int}}$.
If the set of constraints constructed above is satisfiable,
then there is a valid refinement typing of the program~\cite{LiquidPLDI08},
and hence the program is safe.
\subsection{Step 2: Translation to Imperative Program}
Determining the satisfiability of the constraints
requires semantic analysis about program computations.
In the second step, our key technical contribution,
we show a translation that reduces the constraint
satisfiability problem to checking the safety of
a simple, imperative program.
Our translation is based on two observations.
\mypara{Refinements are Relations.}
The first observation is that type refinements are defined through
{\em relations}:
the set of values denoted by a refinement type
$\reftyp{\valu}{\tau}{p}$ where $p$ refers to the program
variables $\mathtt{x}_1,\ldots,\mathtt{x}_n$
of the respective base types $\tau_1,\ldots,\tau_n$
is equivalent to the set
$$
\set{t_0 \mid \exists (t_1,\ldots,t_n) \mbox{ s.t.\ }
\begin{array}[t]{@{}l}
(t_0, t_1,\ldots,t_n) \in R_p \mathrel{\wedge} \\
\quad t_1 = \mathtt{x}_1 \wedge \ldots \wedge t_n = \mathtt{x}_n
\end{array}
}
$$
where $R_p$ is an $(n+1)$-ary relation in $\tau \times \tau_1\times\ldots\times \tau_n$ defined by
$p$.
For example, the set of values denoted by
$\reftyp{\valu}{\mathtt{int}}{\valu \leq \mathtt{i}}$
is equivalent to the set:
$$\set{t_0 \mid \exists t_1 \mbox{ s.t.\ } (t_0, t_1) \in R_{\leq} \wedge t_1 = \mathtt{i}}\ ,$$
where $R_\leq$ is the standard $\leq$-ordering relation over the integers.
In other words, each refinement variable $\kappa$ can be seen as
the projection on the first coordinate of
an $(n+1)$-ary relation over the variables $(\valu, x_1,\ldots,x_n)$,
where $x_1,\ldots,x_n$ are the variables in the well-formedness
constraint for $\kappa$ (\textit{i.e.,}\xspace the variables in scope of $\kappa$).
Thus, the problem of determining the satisfiability of the constraints
is analogous to the problem of determining the existence of appropriate
relations.
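Over a small finite universe this relational reading can be executed directly (a toy sketch of ours; all names are illustrative):

```ocaml
(* The set denoted by {v:int | v <= i}, computed as the first
   projection of the relation R<= restricted to tuples whose second
   component equals i, over a tiny universe of integers. *)
let universe = [-2; -1; 0; 1; 2]

(* R<= as an explicit list of pairs (t0, t1) with t0 <= t1. *)
let r_leq =
  List.concat
    (List.map
       (fun t0 ->
         List.filter_map
           (fun t1 -> if t0 <= t1 then Some (t0, t1) else None)
           universe)
       universe)

(* First projection of R<= restricted to t1 = i. *)
let denotation i =
  List.filter_map (fun (t0, t1) -> if t1 = i then Some t0 else None) r_leq
```

For example, \texttt{denotation 1} is exactly the set of universe elements not exceeding $1$.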
\mypara{Relations are Records.}
The second observation is that the problem of finding appropriate relations
can be reduced to the problem of analyzing a simple imperative program
with variables ranging over relations.
In the imperative program, each refinement variable,
standing for an $n$-ary relation, is translated into a record variable with
$n$ fields.
Each subtyping constraint can be translated into a block of reads-from
and writes-to the corresponding records.
The set of all tuples that can be written into a
given record on some execution of the program defines the corresponding relation.
The entire program is an infinite loop, which in each iteration
non-deterministically chooses a block of reads and writes defined by
a constraint.
The arity of a relation,
and hence the number of fields of the corresponding record, is determined by
the well-formedness constraints.
For example, the constraint~\eqref{eq:w1} specifies that $\kappa_1$
corresponds to a ternary relation, that is, a set of triples
where the $0^{th}$ element (corresponding to $\valu$) is an integer,
the $1^{st}$ element (corresponding to $\mathtt{i}$) is an integer,
and the $2^{nd}$ element (corresponding to $\mathtt{xs}$) is a list.
We encode this in the imperative program via a record variable ${\kvar}_1$
with three fields ${\kvar}_1.0$, ${\kvar}_1.1$ and ${\kvar}_1.2$.
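In OCaml terms, ${\kvar}_1$ can be pictured as a record type with one field per component of the relation (illustrative only; the field names are ours):

```ocaml
(* Illustrative shape of the record variable kvar1: one field per
   component of the ternary relation (v, i, xs) fixed by the
   well-formedness constraint (w1). *)
type 'a kvar1 = {
  f0 : int;     (* v  : the value variable       *)
  f1 : int;     (* i  : first variable in scope  *)
  f2 : 'a list; (* xs : second variable in scope *)
}
```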
Figure~\ref{ex-ml-imperative} shows the imperative program translated from
the constraints for our running example.
We use the subtyping constraints to define the flow of tuples into records.
For example, consider the constraint~\eqref{eq:c2} which is translated
to the block marked \verb+/*c2*/+.
Each variable in the type environment is translated to a corresponding
variable in the program.
The block has a sequence of assignments that define the environment
variables.
For example, we know $\mathtt{i}$ has type $\mathtt{int}$,
so there is an assignment of an arbitrary integer to $\mathtt{i}$.
When there is a known refinement in the binding, the non-deterministic assignment is followed by an
assume operation (a conditional) that establishes that the value
assigned satisfies the given refinement.
For example $\mathtt{xs}$ gets assigned an arbitrary value, but then the assume
establishes the fact that the length of $\mathtt{xs}$ is non-negative.
Similarly $\mathtt{xs'}$ gets assigned an arbitrary value, that has non-negative
length and whose length is 1 less than that of $\mathtt{xs}$.
The LHS of \eqref{eq:c2} reads a tuple from ${\kvar}_1$ whose first and
second fields are assumed to equal $\mathtt{i}+1$ and $\mathtt{xs'}$
respectively.
Finally, the triple $(\valu, \mathtt{i}, \mathtt{xs})$ is written into the record
${\kvar}_1$ which is the RHS of \eqref{eq:c2}.
Next, consider the translated block for the bounds-check constraint
\eqref{eq:c3}. Here, the translation is as before but the
RHS is a known refinement predicate (that stipulates the integer be within
bounds). In this case, instead of writing into the record that defines the
RHS, the translation contains an assertion over the corresponding variables
that ensures that the refinement predicate holds.
\mypara{{Relational}\xspace vs. {Imperative}\xspace Semantics.}
There is a direct correspondence between the
refinement-relations and the record variables when the
translated program is interpreted under a {Relational}\xspace semantics,
where
(1)~the records range over (initially empty) \emph{sets of tuples},
(2)~each write adds a new tuple to the record's set, and,
(3)~each read non-deterministically selects some tuple from the record's set.
Under these semantics, we can show that the constraints
are satisfiable iff the imperative program is safe (\textit{i.e.,}\xspace no assert fails on any execution)
(Theorem~\ref{th:translate}).
Unfortunately, these semantics preclude the direct application of mature
invariant generation and safety verification techniques
\textit{e.g.,}\xspace those based on abstract interpretation or CEGAR-based
software model checking, as those techniques
do not deal well with set-valued variables.
We would like to have an imperative semantics where each record
contains a single value, the last tuple written to it.
We show that there is a syntactic subclass of programs for which
the two semantics coincide.
That is, a program in the subclass is safe under the imperative
semantics if and only if it is safe under the set-based semantics
(Theorem~\ref{th:rwo-equiv}).
Furthermore, we show a technique that ensures that the translated program
belongs to the subclass (Theorem~\ref{th:clone}).
The attractiveness of the translation is that the resulting
programs fall into a particularly pleasant subclass:
they contain no advanced language features like
higher-order functions, polymorphism, or recursive data structures,
and no variables over complex types such as sets,
which are the bane of semantic analyses.
Thus, the translation yields simple imperative programs to which
a wide variety of semantic analyses directly apply.
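As an executable illustration (ours, not part of the formal development), the following OCaml sketch saturates the tuple sets of ${\kvar}_1$ and ${\kvar}_2$ under the {Relational}\xspace semantics of blocks c1, c2 and c4 over a small universe, abstracting lists and arrays by their lengths, and then checks the assertion of block c3 on every tuple of ${\kvar}_2$ (whose components stand for $\valu$, $\mathtt{a}$ and $\mathtt{xs}$):

```ocaml
(* Toy model of the Relational semantics: lists and arrays are
   abstracted by their lengths, so kappa1 holds triples (v, i, len xs)
   and kappa2 holds triples (v, len a, len xs).  We saturate blocks
   c1, c2 and c4 to a fixpoint, then check the c3 bounds assertion. *)
let universe = [0; 1; 2; 3]

let saturate () =
  let k1 = ref [] and k2 = ref [] in
  let add r t = if List.mem t !r then false else (r := t :: !r; true) in
  let changed = ref true in
  while !changed do
    changed := false;
    List.iter (fun i ->
        List.iter (fun lxs ->
            (* c1: the cons branch guarantees len xs >= 1; write (i, i, lxs) *)
            if lxs >= 1 && add k1 (i, i, lxs) then changed := true;
            (* c2: read (t0, i+1, lxs-1) from kappa1, write (t0, i, lxs) *)
            if lxs >= 1 then
              List.iter (fun (t0, t1, t2) ->
                  if t1 = i + 1 && t2 = lxs - 1 && add k1 (t0, i, lxs)
                  then changed := true)
                !k1;
            (* c4: with len a = len xs, read (t0, 0, lxs), write to kappa2 *)
            List.iter (fun (t0, t1, t2) ->
                if t1 = 0 && t2 = lxs && add k2 (t0, lxs, lxs)
                then changed := true)
              !k1)
          universe)
      universe
  done;
  (!k1, !k2)

let () =
  let k1, k2 = saturate () in
  (* c3: every tuple read from kappa2 passes the bounds assertion *)
  assert (List.for_all (fun (v, la, _) -> 0 <= v && v < la) k2);
  assert (k1 <> [] && k2 <> [])
```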
\subsection{Step 3: Invariant Generation}
Together these results imply that we can run off-the-shelf
abstract interpretation and invariant generation tools on the
translated program, and use the result of the analysis to
determine whether the original \ML program is typable.
For the translated program shown in Figure~\ref{ex-ml-imperative},
the CEGAR-based software model checker \ARMC~\cite{PADL07}
finds that the assertion is never violated, and
computes the invariants:
\begin{align*}
& {\kvar}_1.1 \leq {\kvar}_1.0 \wedge {\kvar}_1.0 < {\kvar}_1.1 + \mathtt{len}({\kvar}_1.2) \\
& 0 \leq {\kvar}_2.0 < \mathtt{len} ({\kvar}_2.1)
\end{align*}
which, when plugging in $\valu$, $\mathtt{i}$ and $\mathtt{xs}$ for the
$0^{th},1^{st},2^{nd}$ fields of ${\kvar}_1$
and $\valu$, $\mathtt{a}$ for the $0^{th}, 1^{st}$ fields of ${\kvar}_2$
respectively, yields the refinements
\[
\kappa_1 \doteq\ \mathtt{i} \leq \valu < \mathtt{i} + \mathtt{len}(\mathtt{xs}) \ \ \
\kappa_2 \doteq\ 0 \leq \valu < \mathtt{len} (\mathtt{a})
\]
which suffice to typecheck the original \ML.
Indeed, these predicates for $\kappa_1$ and $\kappa_2$ are easily shown to satisfy the
constraints (c1), (c2), (c3), and (c4).
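These solutions can also be sanity-checked mechanically. The following OCaml sketch (ours; lists are again abstracted by their lengths, and $\kappa_1$ is read as $\mathtt{i} \leq \valu < \mathtt{i} + \mathtt{len}(\mathtt{xs})$) exhaustively tests the implications behind (c1), (c2), (c3), and (c4) over a small range:

```ocaml
(* Exhaustive check, over a small range, that the inferred predicates
   satisfy the implications behind constraints c1-c4; lists are
   abstracted by their lengths (a modeling choice of this sketch). *)
let k1 v i lxs = i <= v && v < i + lxs  (* kappa1 *)
let k2 v la = 0 <= v && v < la          (* kappa2 *)
let range = [0; 1; 2; 3]

let () =
  List.iter (fun v ->
      List.iter (fun i ->
          List.iter (fun lxs ->
              (* c1: env forces len xs' = len xs - 1 >= 0; premise v = i *)
              if lxs >= 1 && v = i then assert (k1 v i lxs);
              (* c2: kappa1[i+1/i][xs'/xs] must imply kappa1 *)
              if lxs >= 1 && k1 v (i + 1) (lxs - 1) then assert (k1 v i lxs);
              (* c3: with j drawn from kappa2 and v = j, the bounds hold *)
              if k2 v lxs then assert (0 <= v && v < lxs);
              (* c4: len a = len xs and kappa1[0/i] imply kappa2 *)
              if k1 v 0 lxs then assert (k2 v lxs))
            range)
        range)
    range
```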
\begin{comment}
\mypara{Outline.}
This concludes a high-level overview of the \HMC approach.
In the rest of the paper, we start by formalizing the
notions of refinement predicates, type constraints and constraint
satisfaction (Section~\ref{sec:constraints}).
Next, we describe the syntax and semantics of the target imperative
programs (Section~\ref{sec:imp}).
Next, we make precise our translation, and prove the equivalence of
the (source) type constraints and the (target) translated program
(Section~\ref{sec:equiv}).
After that, we describe our prototype implementation and preliminary
experiments using \HMC to verify small but complex examples
(Section~\ref{sec:experiments}), and we conclude with a discussion of
the ramifications of our technique and connections to related work
(Section~\ref{sec:discussion}).
\end{comment}
\begin{figure}[t]
\begin{small}
\[
\begin{array}{rl}
{{\mathtt{loop}}}\{ & \mathtt{{/*} c1 {*/}} \\
& \HAVOC{\mathtt{i}};\\
& \HAVOC{\mathtt{xs}};\ {{\mathtt{assume}}}(0 \leq \mathtt{len}(\mathtt{xs}));\\
& \HAVOC{\mathtt{xs'}};\ {{\mathtt{assume}}}(0 \leq \mathtt{len}(\mathtt{xs'}) = \mathtt{len}(\mathtt{xs})-1);\\
& \HAVOC{\valu};\ {{\mathtt{assume}}}(\valu = \mathtt{i});\\
& {{\mathtt{set}}}{{\kvar}_1}{(\valu,\mathtt{i},\mathtt{xs})}\\[4pt]
{\mathrm{[\!] }} & \mathtt{{/*} c2 {*/}} \\
& \HAVOC{\mathtt{i}};\\
& \HAVOC{\mathtt{xs}};\ {{\mathtt{assume}}}(0 \leq \mathtt{len}(\mathtt{xs}));\\
& \HAVOC{\mathtt{xs'}};\ {{\mathtt{assume}}}(0 \leq \mathtt{len}(\mathtt{xs'}) = \mathtt{len}(\mathtt{xs})-1);\\
& {{\mathtt{get}}}{{\kvar}_1}{(t_0,t_1,t_2)}; \\
& {{\mathtt{assume}}}(t_1 = \mathtt{i} +1);\\
& {{\mathtt{assume}}}(t_2 = \mathtt{xs'});\\
& \ASSIGN{\valu}{t_0};\\
& {{\mathtt{set}}}{{\kvar}_1}{(\valu, \mathtt{i}, \mathtt{xs})}\\[4pt]
{\mathrm{[\!] }} & \mathtt{{/*} c3 {*/}}\\
& \HAVOC{\mathtt{a}};\\
& \HAVOC{\mathtt{xs}};\ {{\mathtt{assume}}}(0 \leq \mathtt{len}(\mathtt{xs}));\\
& {{\mathtt{get}}}{{\kvar}_2}{(t_0, t_1, t_2)};\\
& \ASSIGN{\mathtt{j}}{t_0};\\
& {{\mathtt{assert}}}(0 \leq \mathtt{j} < \mathtt{len}(\mathtt{a}))\\[4pt]
{\mathrm{[\!] }} & \mathtt{{/*} c4 {*/}}\\
& \HAVOC{\mathtt{a}};\\
& \HAVOC{\mathtt{xs}};\ {{\mathtt{assume}}}(0 \leq \mathtt{len}(\mathtt{xs}));\\
& {{\mathtt{assume}}}(\mathtt{len}(\mathtt{a}) = \mathtt{len}(\mathtt{xs})); \\
& {{\mathtt{get}}}{{\kvar}_1}{(t_0, t_1, t_2)};\\
& {{\mathtt{assume}}}(t_1 = 0);\\
& {{\mathtt{assume}}}(t_2 = \mathtt{xs});\\
& \ASSIGN{\valu}{t_0};\\
& {{\mathtt{set}}}{{\kvar}_2}{(\valu, \mathtt{a}, \mathtt{xs})} \\
\} &
\end{array}
\]
\end{small}
\caption{Translated Program}
\label{ex-ml-imperative}
\end{figure}
\section{Constraints}\label{sec:constraints}
We start by formalizing constraints over types refined with predicates.
To this end, we make precise the notions of
refinement predicates (Section~\ref{sec:logic}),
refinement types
(Section~\ref{sec:reftypes}),
constraints over refinement types
and the notion of satisfaction
(Section~\ref{sec:refconstr}).
A discussion of how such constraints can be generated in a syntax-guided
manner from program source
is outside the scope of this paper; we refer the reader to the large body
of prior research that addresses this
issue~\cite{XiPfenning99,Knowles07,LiquidPLDI08,GordonRefinement09}.
\mypara{Notation.}
We use uppercase letters ($Z$) to denote sets, lowercase letters ($z$) to
denote elements, and $\mybar{Z}$ for a sequence of elements in~$Z$.
\subsection{Refinement Logic}
\label{sec:logic}
Figure~\ref{fig:logic} shows the syntax of refinement predicates.
In our discussion, we restrict the predicate language to the typed quantifier-free
logic of linear integer arithmetic and uninterpreted functions.
However, it is straightforward to extend the logic to include
other domains equipped with effective decision procedures
and abstract interpreters.
\mypara{Types and Environments.}
Our logic is equipped with a fixed
set of \emph{types} denoted $\tau$,
comprising the basic types
$\mathtt{int}$ for \emph{integer} values,
$\mathtt{bool}$ for \emph{boolean} values,
and $\mathtt{ui}$, a family of \emph{uninterpreted types} that are used to
encode complex source language types such as products, sums, polymorphic
type variables, recursive types {\it etc.}.
We assume there is a fixed set of uninterpreted functions.
Each uninterpreted function $\mathtt{f}$ has a fixed type
$\tau_\mathtt{f} \doteq\ \mybar{\tau^i_\mathtt{f}} \rightarrow \tau^o_\mathtt{f}$.
An \emph{environment} is a sequence of variable-type bindings.
\mypara{Expressions and Predicates.}
In our logic, \emph{expressions} $e$ comprise variables, linear arithmetic
(\textit{i.e.,}\xspace addition and multiplication by constants), and applications of
uninterpreted functions $\mathtt{f}$.
Note that, as is standard in semantic program analyses, complex
operations like division or non-linear multiplication can be modelled
using uninterpreted functions.
Finally, \emph{predicates} comprise atomic comparisons of expressions, or
boolean combinations of sub-predicates.
We write ${\it true}$ (resp. ${\it false}$) as abbreviations for $0=0$ (resp. $0=1$).
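To make the logic concrete, the following sketch represents expressions and predicates as nested tuples and evaluates them under an interpretation $\sigma$. The encoding and names are ours, not part of the formal development; the uninterpreted function $\mathtt{len}$ is given Python's built-in \texttt{len} as its interpretation.

```python
def eval_expr(e, sigma):
    """Evaluate an expression under an interpretation sigma.

    Expressions are nested tuples:
      ('var', x) | ('int', n) | ('add', e1, e2) | ('mul', n, e)
      | ('app', f, [e...])
    sigma maps variables to values and uninterpreted-function names to
    Python callables (the chosen interpretation of those symbols).
    """
    tag = e[0]
    if tag == 'var':
        return sigma[e[1]]
    if tag == 'int':
        return e[1]
    if tag == 'add':
        return eval_expr(e[1], sigma) + eval_expr(e[2], sigma)
    if tag == 'mul':                       # affine multiplication n * e
        return e[1] * eval_expr(e[2], sigma)
    if tag == 'app':
        return sigma[e[1]](*[eval_expr(a, sigma) for a in e[2]])
    raise ValueError(tag)

def eval_pred(p, sigma):
    """Predicates: ('le'/'lt'/'eq', e1, e2), ('not', p),
    ('and', p1, p2), ('implies', p1, p2)."""
    tag = p[0]
    if tag == 'le':
        return eval_expr(p[1], sigma) <= eval_expr(p[2], sigma)
    if tag == 'lt':
        return eval_expr(p[1], sigma) < eval_expr(p[2], sigma)
    if tag == 'eq':
        return eval_expr(p[1], sigma) == eval_expr(p[2], sigma)
    if tag == 'not':
        return not eval_pred(p[1], sigma)
    if tag == 'and':
        return eval_pred(p[1], sigma) and eval_pred(p[2], sigma)
    if tag == 'implies':
        return (not eval_pred(p[1], sigma)) or eval_pred(p[2], sigma)
    raise ValueError(tag)

# The predicate  i <= v < len(xs)  from the running example:
kappa1 = ('and',
          ('le', ('var', 'i'), ('var', 'v')),
          ('lt', ('var', 'v'), ('app', 'len', [('var', 'xs')])))
sigma = {'i': 0, 'v': 1, 'xs': [10, 20, 30], 'len': len}
print(eval_pred(kappa1, sigma))  # True: 0 <= 1 < 3
```

Validity in an environment then amounts to checking that such an evaluation returns true for every valid interpretation, which is where a decision procedure replaces this naive evaluator.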
\mypara{Well-formedness.}
We say that a predicate $p$ is \emph{well-formed} in an environment
$\Gamma$ if every variable appearing in $p$ is bound in $\Gamma$ and
$p$ is ``type correct'' in the environment $\Gamma$.
\mypara{Validity.}
For each type $\tau$, we write $\univ{\tau}$ to denote the set of
concrete values of~$\tau$.
An \emph{interpretation} $\sigma$ is a map from
variables $x$ to concrete values, and
functions $\mathtt{f}$ to maps from $\univ{\mybar{\tau^i_\mathtt{f}}}$ to $\univ{\tau^o_\mathtt{f}}$.
We say that $\sigma$ is \emph{valid} under $\Gamma$ if
for each $\ftyp{x}{\tau} \in \Gamma$, we have $\sigma(x) \in \univ{\tau}$.
We say that a predicate $p$ is \emph{valid} in an environment $\Gamma$,
if $\sigma(p)$ evaluates to ${\it true}$ for
every $\sigma$ valid under $\Gamma$.
\subsection{Refinement Types}
\label{sec:reftypes}
Figure~\ref{fig:constraints} shows the syntax of refinement types and
environments.
\mypara{Refinements.}
A \emph{refinement} $r$ is either a predicate $p$ drawn from our logic,
or a \emph{refinement variable with pending substitutions}
$\SUBST{\kappa}{x_1}{y_1}\ldots\SUBST{}{x_n}{y_n}$.
Intuitively, the former represent \emph{known} refinements (or invariants),
while the latter represent the \emph{unknown} invariants that hold
of different program values.
The notion of pending substitutions~\cite{AbadiCardelliCurienLevy,Knowles07} offers a
flexible way of capturing the value flow that arises in the context of
function parameter passing (in the functional setting), or assignment
(in the imperative setting), even when the underlying
invariants are unknown.
\mypara{Refinement Types and Environments.}
A \emph{refinement type} $\reftyp{\valu}{\tau}{r}$ is a triple
consisting of a \emph{value variable} $\valu$ denoting the value being
described by the refinement type, a type $\tau$ describing the
underlying type of the value, and a refinement $r$.
A \emph{refinement environment} $G$ is a sequence of refinement type
bindings.
The value variables are special variables distinct from the program
variables, and can occur inside the refinement predicates.
Thus, intuitively, the refinement type describes the set of
concrete values of the underlying type $\tau$ which additionally
satisfy the refinement predicate. For example, the refinement type:
$\reftyp{\valu}{\mathtt{int}}{\valu \not = 0}$
describes the set of non-zero integers, and
$\reftyp{\valu}{\mathtt{int}}{\valu = \mathtt{x} + \mathtt{y}}$
describes the set of integers whose value equals
the sum of the values of the (program) variables $\mathtt{x}$ and $\mathtt{y}$.
Note that path-sensitive branch information can be captured by adding
suitable bindings to the refinement environment.
For example, the fact that some expression is only evaluated under
the if-condition that $\mathtt{x} > 100$ can be captured in the
environment via a refinement type binding
$\ftyp{\mathtt{x}_b}{\reftyp{\valu}{\mathtt{bool}}{\mathtt{x} > 100}}$.
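The reading of a refinement type as a set of values can itself be sketched executably. In the following illustration (entirely our own encoding), a refinement type is a pair of a base-type check and a predicate over an environment extended with the value variable:

```python
def denotes(reftype, value, env):
    """Return True iff `value` lies in the denotation of `reftype`
    under the program-variable bindings in `env`."""
    base_check, pred = reftype
    if not base_check(value):
        return False
    scope = dict(env)
    scope['v'] = value          # bind the value variable
    return pred(scope)

is_int = lambda n: isinstance(n, int)

# {v:int | v != 0} -- the non-zero integers
nonzero = (is_int, lambda s: s['v'] != 0)

# {v:int | v = x + y} -- integers equal to the sum of x and y
sum_xy = (is_int, lambda s: s['v'] == s['x'] + s['y'])

print(denotes(nonzero, 3, {}))               # True
print(denotes(nonzero, 0, {}))               # False
print(denotes(sum_xy, 7, {'x': 3, 'y': 4}))  # True
```

The second type only makes sense relative to an environment binding $\mathtt{x}$ and $\mathtt{y}$, mirroring the role refinement environments play above.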
\subsection{Refinement Constraints and Solutions}
\label{sec:refconstr}
Figure~\ref{fig:constraints} shows the syntax of refinement constraints.
Our refinement type system has two kinds of constraints.
\mypara{Subtyping Constraints} are of the form
$${G \vdash\ \reftyp{\valu}{\tau}{r_1} <: \reftyp{\valu}{\tau}{r_2}}$$
Intuitively, a subtyping constraint states that when the program variables satisfy
the invariants described in $G$, the set of values described by
the refinement $r_1$ must be \emph{subsumed by}
the set of values described by the refinement type $r_2$.
\mypara{Well-formedness Constraints} are of the form
${\Gamma \vdash\ \reftyp{\valu}{\tau}{r}}$.
Intuitively, a well-formedness constraint states that the refinement $r$
must be a well-typed predicate in the environment $\Gamma$ extended with the
binding $\ftyp{\valu}{\tau}$ for the value variable.
\mypara{Embedding.}
To formalize the notions of constraint validity and satisfaction, we embed subtyping
constraints into our logic. We define the function $\embed{\cdot}$ that maps
refinement types, environments and subtyping constraints to predicates in
our logic.
\begin{displaymath}
\begin{array}{ll}
\embed{\reftyp{\valu}{\tau}{p}} & \doteq\ p \\
\embed{\EXT{\ftyp{x}{T}}{G}} & \doteq\ \SUBST{\embed{T}}{x}{\valu} \wedge \embed{G} \\
\embed{\emptyset} & \doteq\ {\it true} \\
\embed{G \vdash\ T_1 <: T_2} & \doteq\ \embed{G}
\Rightarrow \embed{T_1} \Rightarrow \embed{T_2}\\
\end{array}
\end{displaymath}
Similarly, we define the function $\shape{\cdot}$ that maps refinement
types and environments to types and environments in our logic.
\begin{displaymath}
\begin{array}{ll}
\shape{\reftyp{\valu}{\tau}{p}} & \doteq\ \tau \\
\shape{\EXT{\ftyp{x}{T}}{G}} & \doteq\ \EXT{\ftyp{x}{\shape{T}}}{\shape{G}}\\
\shape{\emptyset} & \doteq\ \emptyset \\
\end{array}
\end{displaymath}
\mypara{Validity.}
A subtyping constraint ${G \vdash\ T_1 <: T_2}$
that does not contain refinement variables
is \emph{valid} if the predicate
$\embed{G \vdash\ T_1 <: T_2}$
is valid under environment $\shape{G}$.
A well-formedness constraint ${\Gamma \vdash\ \reftyp{\valu}{\tau}{p}}$
that does not contain refinement variables
is \emph{valid} if the predicate $p$ is well-formed
in the environment $\Gamma$.
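When the underlying domain is cut down to a small finite slice of the integers, the embedding and the validity check can be carried out by brute force. The sketch below is only an illustration of the definitions (the paper's logic is decided by a solver, not by enumeration); predicates are modelled as Python callables over a dictionary of bindings, and the hypothetical constraint checked is $\ftyp{x}{\reftyp{\valu}{\mathtt{int}}{\valu > 0}} \vdash \reftyp{\valu}{\mathtt{int}}{\valu = x} <: \reftyp{\valu}{\mathtt{int}}{\valu \geq 0}$.

```python
from itertools import product

def embed_env(env):
    """env: list of (name, pred) pairs; the embedding is the conjunction
    of each type's predicate with its value variable 'v' renamed to the
    bound program variable."""
    def conj(s):
        return all(p({**s, 'v': s[x]}) for x, p in env)
    return conj

def embed_sub(env, p1, p2):
    """[[G |- T1 <: T2]]  =  [[G]] => [[T1]] => [[T2]]."""
    g = embed_env(env)
    return lambda s: (not g(s)) or (not p1(s)) or p2(s)

def valid(pred, names, lo=-3, hi=3):
    """Brute-force validity over all assignments of names to [lo, hi]."""
    return all(pred(dict(zip(names, vals)))
               for vals in product(range(lo, hi + 1), repeat=len(names)))

# x:{v | v > 0} |- {v | v = x} <: {v | v >= 0}   -- valid
c = embed_sub([('x', lambda s: s['v'] > 0)],
              lambda s: s['v'] == s['x'],
              lambda s: s['v'] >= 0)
print(valid(c, ['x', 'v']))  # True
```

Dropping the environment binding makes the same subtyping judgement invalid (take $x = v = -1$), which the enumeration also detects.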
\mypara{Relational Interpretations.}
We assume, without loss of generality, that each refinement variable $\kappa$
is associated with a unique well-formedness constraint
$\ftyp{x_1}{\tau_1};\ldots;\ftyp{x_n}{\tau_n} \vdash\ \reftyp{\valu}{\tau_0}{\kappa}$
called the well-formedness constraint for $\kappa$.
In this case, we say $\kappa$ has \emph{arity} $n+1$.
Furthermore, we assume that wherever a $\kappa$ of arity $n+1$ appears in
a subtyping constraint, it appears with a sequence of $n$ pending
substitutions $\SUBST{}{x_1}{y_1} \ldots \SUBST{}{x_n}{y_n}$.
This assumption is without loss of generality, as we can enforce it
with trivial substitutions of the form $\SUBST{}{x_i}{x_i}$.
A \emph{relational interpretation} for $\kappa$ of arity $n+1$ is
an $(n+1)$-ary relation in $\univ{\tau_0}\times \ldots\times\univ{\tau_n}$.
A \emph{relational model} is a map from refinement variables
$\kappa$ to relational interpretations.
\mypara{Constraint Satisfaction.}
A set of constraints $C$ is \emph{satisfiable} if
for all interpretations for uninterpreted functions $\mathtt{f}$,
there exists a relational model $S$ such that,
when each occurrence of a refinement type
$\reftyp{\valu}{\tau}{\SUBST{\kappa}{x_1}{y_1} \ldots \SUBST{}{x_n}{y_n}}$ in $C$
is substituted with
$$\reftyp{\valu}{\tau}{\exists
t_1,\ldots,t_n.S(\kappa)(\valu,t_1,\ldots,t_n) \wedge t_1 = y_1\wedge \ldots \wedge t_n = y_n}$$
every subtyping
constraint after the substitution is valid.
In this case, we say that $S$ is a \emph{solution} for $C$.
\begin{figure}[t]
\begin{displaymath}
\begin{array}{lrll}
\tau & ::= & & \textbf{Types:} \\
& \mid & \mathtt{int} & \mbox{base type of integers} \\
& \mid & \mathtt{bool} & \mbox{base type of booleans} \\
& \mid & \mathtt{ui} & \mbox{complex uninterpreted type} \\[4pt]
\Gamma & ::= & & \textbf{Environments:} \\
& \mid & \EXT{\ftyp{x}{\tau}}{\Gamma} & \mbox{binding} \\
& \mid & \emptyset & \mbox{empty} \\[4pt]
e & ::= & & \textbf{Expressions:} \\
& \mid & x & \mbox{variable} \\
& \mid & n & \mbox{integer} \\
& \mid & e_1 + e_2 & \mbox{addition} \\
& \mid & n \times e & \mbox{affine multiplication} \\
& \mid & \mathtt{f}(\mybar{e}) & \mbox{function application} \\[4pt]
p & ::= & & \textbf{Predicates:} \\
& \mid & e_1 \bowtie e_2 & \mbox{comparison} \\
& \mid & \neg p & \mbox{negation} \\
& \mid & p_1 \wedge p_2 & \mbox{conjunction} \\
& \mid & p_1 \Rightarrow p_2 & \mbox{implication} \\[4pt]
r & ::= & & \textbf{Refinements:} \\
& \mid & p & \mbox{predicate} \\
& \mid &
\SUBST{\kappa}{x_1}{y_1}\ldots\SUBST{}{x_n}{y_n}
& \mbox{ref.\ var.\ with substitutions} \\[4pt]
T & ::= & \reftyp{\valu}{\tau}{r} & \textbf{Refinement Types} \\[4pt]
G & ::= & & \textbf{Refinement Environments:} \\
& \mid & \EXT{\ftyp{x}{T}}{G}& \mbox{binding} \\
& \mid & \emptyset & \mbox{empty} \\[4pt]
c & ::= & G \vdash\ T_1 <: T_2
& \textbf{Subtype Constraints} \\[4pt]
w & ::= & \Gamma \vdash\ T & \textbf{WF Constraints}
\end{array}
\end{displaymath}
\caption{\textbf{Predicates, Refinements and Constraints.}}
\label{fig:logic}
\label{fig:constraints}
\end{figure}
\section{Imperative Programs}\label{sec:imp}
\HMC translates the satisfiability problem for refinement type constraints
to the question of checking the safety of an imperative program in a simple
imperative language \textsc{Imp}\xspace.
In this section, we formalize the syntax of \textsc{Imp}\xspace programs
and define the {Relational}\xspace semantics
and the {Imperative}\xspace semantics.
\subsection{Syntax}
\label{sec:impsyntax}
Figure~\ref{fig:impsyntax} shows the syntax of \textsc{Imp}\xspace programs.
An \emph{instruction} ($\mathtt{I}$) is a sequence of assignments, assumptions
and assertions.
A \emph{program} ($\mathtt{P}$) is an infinite loop over a block, whose body
is a non-deterministic choice between a finite number of instructions
$\mathtt{I}_1,\ldots,\mathtt{I}_n$.
Next, we describe the different kinds of instructions.
For ease of notation, we assume that there is only one base type $\tau$,
and let $V$ denote the set of values of type $\tau$.
\mypara{Variables.} \textsc{Imp}\xspace programs have two kinds of variables.
(1)~\emph{base} variables, denoted by
$\valu$, $x$, $y$ and $t$ (and subscripted versions thereof),
which range over values of type $\tau$.
(2)~\emph{relation\xspace} variables, denoted by ${\kvar}$,
each of which has a fixed arity $n$ and ranges over tuples of values
or sets of $n$-tuples of values, depending on the semantics.
\mypara{Base Assignments.}
\textsc{Imp}\xspace programs have two kinds of assignments to base variables.
Either
(1)~an expression over base variables (cf. Figure~\ref{fig:logic})
is evaluated and assigned to the base variable, or,
(2)~an arbitrary value of the appropriate base type is assigned to the base
variable, \textit{i.e.,}\xspace the variable is ``havoc-ed" with a non-deterministically
chosen value.
\mypara{Tuple Assignments.}
The operations \emph{get tuple} and \emph{set tuple}
respectively read a tuple from and write a tuple to a relation\xspace
variable.
\mypara{Assumes and Asserts.}
\textsc{Imp}\xspace programs have the standard assume and assert instructions using
predicates over the base variables (cf. Figure~\ref{fig:logic}).
We write ${{\mathtt{skip}}}$ as an abbreviation for ${{\mathtt{assume}}}(0=0)$.
\begin{figure}[t]
\begin{small}
\begin{displaymath}
\begin{array}{llll}
\mathtt{I} & ::= & & \textbf{Instructions:}\\
& \mid & \ASSIGN{x}{e} & \mbox{assign expr}\\
& \mid & \HAVOC{x} & \mbox{havoc}\\
& \mid & {{\mathtt{get}}}{{\kvar}}{(t_0,\ldots,t_n)} & \mbox{get tuple}\\
& \mid & {{\mathtt{set}}}{{\kvar}}{(x_0,\ldots,x_n)} & \mbox{set tuple}\\
& \mid & {{\mathtt{assume}}}(p) & \mbox{assume} \\
& \mid & {{\mathtt{assert}}}(p) & \mbox{assert} \\
& \mid & \mathtt{I}_1; \mathtt{I}_2 & \mbox{sequence} \\[4pt]
\mathtt{P} & ::= & {{\mathtt{loop}}} \{\mathtt{I}_1 {[\!]} \ldots {[\!]} \mathtt{I}_n \} & \mbox{Program}
\end{array}
\end{displaymath}
\end{small}
\caption{\textbf{Imperative Programs: Syntax}}
\label{fig:impsyntax}
\end{figure}
\subsection{{Relational}\xspace Semantics}
\label{sec:relsemantics}
We define the {Relational}\xspace semantics as a state transition system.
In this semantics, ${\kvar}$ variables range over
\emph{sets of} tuples over $V$.
\mypara{{Relational}\xspace States.}
A state $\istate^{\sharp}$ in the {Relational}\xspace semantics
is either the special \emph{error state} ${\mathcal{E}}$
or a map from program variables to values such that
every base variable is mapped to a value in $V$,
and every relation\xspace variable of arity $n$ is mapped
to a (possibly empty) set of tuples in $V^n$.
Let $\istates^{\sharp}$ be the set of all {Relational}\xspace-program states.
For a state $\istate^{\sharp}$ which is not ${\mathcal{E}}$, variable $x$ and value $v$
we write $\UPD{\istate^{\sharp}}{x}{v}$ for the map which maps $x$
to $v$ and every other key $x'$ to $\istate^{\sharp}(x')$.
We lift maps $\istate^{\sharp}$ from base variables to values to maps
from expressions (and predicates) to values in
the natural way.
\mypara{Initial State.}
The initial state $\istate^{\sharp}_0$ of an \textsc{Imp}\xspace program in the {Relational}\xspace semantics is a map in which
every base variable is mapped to a fixed value from $V$,
and every relation\xspace variable is mapped to the empty set.
\mypara{Transition Relation.}
The transition relation is defined through a $\mathsf{Post}^{\sharp}$ operator,
shown in Figure~\ref{fig:relsemantics},
which maps a state $\istate^{\sharp}$ and an instruction $\mathtt{I}$
to the \emph{set} of states that the program can be in
\emph{after} executing the instruction from the state $\istate^{\sharp}$.
We lift $\mathsf{Post}^{\sharp}$ to a set of states $\hat{\istates^{\sharp}}\subseteq \istates^{\sharp}$ in the natural way:
$$\mathsf{Post}^{\sharp}(\hat{\istates^{\sharp}}, \mathtt{I}) \doteq\ \bigcup \set{\mathsf{Post}^{\sharp}(\istate^{\sharp}, \mathtt{I})\mid \istate^{\sharp} \in \hat{\istates^{\sharp}}}$$
Notice that the program halts if
a get instruction is executed with an \emph{empty} relation\xspace variable, or
an ${{\mathtt{assume}}}(p)$ is executed in a state that does not satisfy $p$.
\mypara{Safety.}
Let $\mathtt{P}$ be the program ${{\mathtt{loop}}}\{ \mathtt{I}_1 {[\!]} \ldots {[\!]} \mathtt{I}_n \}$.
The set of \emph{{Relational}\xspace-reachable states} of $\mathtt{P}$, denoted $\mathsf{Reach}^{\sharp}(\mathtt{P})$ is
defined by induction as:
$$
\begin{array}{ll}
\mathsf{Reach}^{\sharp}(\mathtt{P}, 0) & \doteq\ \set{\istate^{\sharp}_0}\\
\mathsf{Reach}^{\sharp}(\mathtt{P}, m+1) & \doteq\ \bigcup \set{\mathsf{Post}^{\sharp}(\mathsf{Reach}^{\sharp}(\mathtt{P}, m), \mathtt{I}_j) \mid 1 \leq j \leq n}\\
\mathsf{Reach}^{\sharp}(\mathtt{P}) & \doteq\ \bigcup \set{\mathsf{Reach}^{\sharp}(\mathtt{P}, m) \mid 0 \leq m}
\end{array}
$$
A program $\mathtt{P}$ is \emph{{Relational}\xspace-safe} if ${\mathcal{E}} \not \in\mathsf{Reach}^{\sharp}(\mathtt{P})$.
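When $V$ is a small finite set, $\mathsf{Post}^{\sharp}$ and $\mathsf{Reach}^{\sharp}$ are directly computable, so the definitions above can be exercised on a toy program. The sketch below is an illustration only (the instruction encoding and names are ours); in general $V$ is infinite and the fixpoint must be over-approximated by an abstract interpreter.

```python
V = (0, 1)      # a tiny finite value domain, so the fixpoint terminates
ERR = 'ERR'

def frz(d):
    """States are immutable: frozensets of (variable, value) items."""
    return frozenset(d.items())

def post(states, op):
    """Post# for one atomic operation, lifted to a set of states."""
    out = set()
    for st in states:
        if st == ERR:
            out.add(ERR)
            continue
        s = dict(st)
        tag = op[0]
        if tag == 'havoc':                    # x := nondet value in V
            out |= {frz({**s, op[1]: c}) for c in V}
        elif tag == 'assume':                 # drop states violating p
            if op[1](s):
                out.add(frz(s))
        elif tag == 'assert':                 # violating states go to ERR
            out.add(frz(s) if op[1](s) else ERR)
        elif tag == 'get':                    # branch over the relation
            for t in s[op[1]]:
                out.add(frz({**s, **dict(zip(op[2], t))}))
        elif tag == 'set':                    # add the current tuple
            t = tuple(s[x] for x in op[2])
            out.add(frz({**s, op[1]: s[op[1]] | {t}}))
    return out

def reach(init, blocks):
    """Least set of states containing init, closed under every block."""
    seen, work = {frz(init)}, [frz(init)]
    while work:
        st = work.pop()
        for block in blocks:
            ss = {st}
            for op in block:
                ss = post(ss, op)
            for n in ss - seen:
                seen.add(n)
                work.append(n)
    return seen

# loop { havoc v; set k (v)   []   get k (t); assert(0 <= t <= 1) }
blocks = [
    [('havoc', 'v'), ('set', 'k', ('v',))],
    [('get', 'k', ('t',)), ('assert', lambda s: 0 <= s['t'] <= 1)],
]
init = {'v': 0, 't': 0, 'k': frozenset()}
print(ERR in reach(init, blocks))  # False: the assert never fails
```

Strengthening the assert to $\mathtt{t} = 0$ makes $\mathcal{E}$ reachable, since the first block eventually writes the tuple $(1)$ into ${\kvar}$.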
\begin{figure*}[t]
\begin{small}
$$\begin{array}{ll}
\multicolumn{2}{l}{\textbf{Common Operations}}\\
\mathsf{Post}^{\sharp}({\mathcal{E}}, \mathtt{I}) & \doteq\ \set{{\mathcal{E}}} \\
\mathsf{Post}^{\sharp}(\istate^{\sharp},\mathtt{I}_1;\mathtt{I}_2) & \doteq\ \mathsf{Post}^{\sharp}(\mathsf{Post}^{\sharp}(\istate^{\sharp}, \mathtt{I}_1), \mathtt{I}_2) \\
\mathsf{Post}^{\sharp}(\istate^{\sharp},\ASSIGN{x}{e}) & \doteq\ \set{\UPD{\istate^{\sharp}}{x}{\istate^{\sharp}(e)}} \\
\mathsf{Post}^{\sharp}(\istate^{\sharp},\HAVOC{x}) & \doteq\ \set{\UPD{\istate^{\sharp}}{x}{c} \mid c \in V}\\
\mathsf{Post}^{\sharp}(\istate^{\sharp},{{\mathtt{assume}}}(p)) & \doteq\ \begin{cases}
\set{\istate^{\sharp}} & \mbox{if }\istate^{\sharp}(p) = {\it true}\\
\emptyset & \mbox{otherwise}
\end{cases}\\
\mathsf{Post}^{\sharp}(\istate^{\sharp},{{\mathtt{assert}}}(p)) & \doteq\ \begin{cases}
\set{\istate^{\sharp}} & \mbox{if } \istate^{\sharp}(p) = {\it true}\\
\set{{\mathcal{E}}} & \mbox{otherwise}
\end{cases}\\[0.2in]
\multicolumn{2}{l}{\textbf{Tuple Operations: {Relational}\xspace Semantics}}\\
\mathsf{Post}^{\sharp}(\istate^{\sharp},{{\mathtt{get}}}{{\kvar}}{(t_0,\ldots,t_n)}) & \doteq\
\{\UPD{\istate^{\sharp}}{t_0}{v_0}\ldots\UPD{}{t_n}{v_n} \mid (v_0,\ldots,v_n) \in \istate^{\sharp}({\kvar})\}\\
\mathsf{Post}^{\sharp}(\istate^{\sharp},{{\mathtt{set}}}{{\kvar}}{(x_0,\ldots,x_n)}) & \doteq\
\set{\UPD{\istate^{\sharp}}{{\kvar}}{\istate^{\sharp}({\kvar}) \cup \set{(\istate^{\sharp}(x_0),\ldots,\istate^{\sharp}(x_n))}}}\\[0.2in]
\multicolumn{2}{l}{\textbf{Tuple Operations: {Imperative}\xspace Semantics}}\\
\mathsf{Post}(s,{{\mathtt{get}}}{{\kvar}}{(t_0,\ldots,t_n)})
& \doteq\ \begin{cases}
\set{\UPD{s}{t_0}{v_0}\ldots\UPD{}{t_n}{v_n}} & \mbox{if } s({\kvar}) = (v_0,\ldots,v_n) \\
\emptyset & \mbox{if } s({\kvar}) = \perp \\
\end{cases} \\
\mathsf{Post}(s, {{\mathtt{set}}}{{\kvar}}{(x_0,\ldots,x_n)}) & \doteq\ \set{\UPD{s}{{\kvar}}{(s(x_0),\ldots,s(x_n))}}
\end{array}$$
\end{small}
\caption{\textbf{{Relational}\xspace and {Imperative}\xspace Semantics: Other cases of $\mathsf{Post}$
identical to $\mathsf{Post}^{\sharp}$}}
\label{fig:impsemantics}
\label{fig:relsemantics}
\end{figure*}
\subsection{{Imperative}\xspace Semantics}
\label{sec:impsemantics}
Next, we define the {Imperative}\xspace semantics as a state
transition system. In this semantics, ${\kvar}$ variables
range over tuples over $V$.
\mypara{{Imperative}\xspace States.}
In the {Imperative}\xspace semantics,
each state $s$ is either the special \emph{error state} ${\mathcal{E}}$
or a map from program variables to values such that
every base variable is mapped to a value in $V$, and
every relation\xspace variable of arity $n$ is mapped either to
a tuple in $V^n$ or to the special \emph{undefined} value $\perp$.
Let $\Sigma$ denote the set of all {Imperative}\xspace-program states.
\mypara{Initial State.}
The initial state $s_0$ of an \textsc{Imp}\xspace program in the {Imperative}\xspace semantics is a map in which
every base variable is mapped to a fixed value from $V$,
and every relation\xspace variable is mapped to $\perp$.
\mypara{Transition Relation.}
The transition relation is defined using a $\mathsf{Post}$ operator,
which is identical to $\mathsf{Post}^{\sharp}$ in the {Relational}\xspace semantics except
for the tuple-get and tuple-set instructions.
Figure~\ref{fig:impsemantics} shows the operator $\mathsf{Post}$ for get and set operations.
Again, $\mathsf{Post}$ is lifted to a set of states in the natural way.
Notice that the program halts if a get instruction is executed with
an \emph{undefined} relation\xspace variable, or an ${{\mathtt{assume}}}(p)$ is executed in a state
that does not satisfy $p$.
\mypara{Safety.}
Let $\mathtt{P}$ be the program ${{\mathtt{loop}}}\{ \mathtt{I}_1 {[\!]} \ldots {[\!]} \mathtt{I}_n \}$.
The set of \emph{{Imperative}\xspace-reachable states} of $\mathtt{P}$, denoted $\mathsf{Reach}(\mathtt{P})$ is
defined by induction as:
$$
\begin{array}{ll}
\mathsf{Reach}(\mathtt{P}, 0) & \doteq\ \set{s_0}\\
\mathsf{Reach}(\mathtt{P}, m+1) & \doteq\ \bigcup \set{\mathsf{Post}(\mathsf{Reach}(\mathtt{P}, m), \mathtt{I}_j) \mid 1 \leq j \leq n}\\
\mathsf{Reach}(\mathtt{P}) & \doteq\ \bigcup \set{\mathsf{Reach}(\mathtt{P}, m) \mid 0 \leq m}
\end{array}
$$
A program $\mathtt{P}$ is \emph{{Imperative}\xspace-safe} if ${\mathcal{E}} \not \in \mathsf{Reach}(\mathtt{P})$.
\section{From Type Constraints to \textsc{Imp}\xspace Programs} \label{sec:equiv}
In this section we formalize the translation from type constraints
into \textsc{Imp}\xspace programs and prove that the constraints are satisfiable
if and only if the translated program is safe.
\subsection{Translation}\label{sec:translation}
\begin{figure}[t]
\begin{displaymath}
\begin{array}{rll}
\multicolumn{3}{l}{\mbox{\textbf{Refinement Type Translation}}} \\[4pt]
\translate{\reftyp{\valu}{\tau}{p}}_{get}
& \doteq\ & \HAVOC{\valu}; \\
& & {{\mathtt{assume}}}(p) \\[4pt]
\translate{\reftyp{\valu}{\tau}{p}}_{set}
& \doteq\ & {{\mathtt{assert}}}(p) \\[4pt]
\translate{\reftyp{\valu}{\tau}{\SUBST{\kappa}{x_1 \ldots x_n}{y_1 \ldots y_n}}}_{get}
& \doteq\ & {{\mathtt{get}}}{{\kvar}}{(t_0,\ldots,t_n)}; \\
& & {{\mathtt{assume}}}(y_1 = t_1); \\
& & \vdots \\
& & {{\mathtt{assume}}}(y_n = t_n); \\
& & \ASSIGN{\valu}{t_0} \\[4pt]
\translate{\reftyp{\valu}{\tau}{\SUBST{\kappa}{x_1 \ldots x_n}{y_1 \ldots y_n}}}_{set}
& \doteq\ & {{\mathtt{set}}}{{\kvar}}{(\valu,y_1,\ldots,y_n)} \\[8pt]
\multicolumn{3}{l}{\mbox{\textbf{Binding Translation}}} \\[4pt]
\translate{\EXT{\ftyp{x}{T}}{G}}
& \doteq\ & \translate{T}_{get};\ \ASSIGN{x}{\valu};\ \translate{G} \\[4pt]
\translate{\cdot}
& \doteq\ & {{\mathtt{skip}}} \\[8pt]
\multicolumn{3}{l}{\mbox{\textbf{Constraint Translation}}} \\[4pt]
\translate{G \vdash\ T_1 <: T_2}
& \doteq\ & \translate{G};\ \translate{T_1}_{get};\ \translate{T_2}_{set} \\[8pt]
\multicolumn{3}{l}{\mbox{\textbf{Constraint Set Translation}}} \\[4pt]
\translate{\set{c_1,\ldots,c_n}}
& \doteq\ & {{\mathtt{loop}}} \{ \translate{c_1} {\mathrm{[\!] }} \ldots {\mathrm{[\!] }} \translate{c_n} \} \\
\end{array}
\end{displaymath}
\caption{\textbf{Translating Constraints to \textsc{Imp}\xspace Programs}}
\label{fig:translate}
\end{figure}
Figure~\ref{fig:translate} formalizes the translation
from (a set of) refinement type constraints $C$ to an \textsc{Imp}\xspace program $\translate{C}$.
We use the WF constraints to translate each relation\xspace
variable $\kappa$ of arity $n+1$ into a corresponding
tuple variable ${\kvar}$ of arity $n+1$.
The translation is syntax-driven.
We translate each subtyping constraint
$G \vdash\ T_1 <: T_2$
into a straight-line block of instructions with three parts:
a sequence of instructions that establishes
the environment bindings
($\translate{G}$),
a sequence of instructions that ``gets" the
values corresponding to the LHS
($\translate{T_1}_{get}$)
and a sequence of instructions that ``sets" the (LHS) values
into the appropriate RHS
($\translate{T_2}_{set}$).
The translation for a set of constraints is an infinite loop
that non-deterministically chooses among the blocks for each constraint.
Each environment binding gets translated as a ``get".
Bindings with unknown refinements are translated into tuple-get operations,
followed by ${{\mathtt{assume}}}$ statements that establish the equalities
corresponding to the pending substitutions.
Bindings with known refinements are translated into non-deterministic assignments
followed by an ${{\mathtt{assume}}}$ that enforces that the refinement holds on the
non-deterministic value.
Each ``set" operation to an unknown refinement is translated into a
tuple-set instruction that writes the tuple corresponding to the pending
substitutions into the translated tuple variable.
Finally, each ``set" operation corresponding to a known refinement is
translated to an ${{\mathtt{assert}}}$ instruction; intuitively, in such constraints
the RHS defines an upper bound on the set of values populating the type,
and the ${{\mathtt{assert}}}$ serves to enforce the upper bound requirement in the
translated program.
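The translation can be sketched as a small recursive function that emits instruction text. In the following illustration (string encodings and tuple tags are ours, and types are elided), a refinement is either \texttt{('pred', p)} for a known predicate $p$, or \texttt{('kvar', k, [y1..yn])} standing for $\SUBST{\kappa}{x_1\ldots x_n}{y_1\ldots y_n}$; the constraint translated is a hypothetical one loosely resembling the array bounds check of the running example.

```python
def t_get(r, out):
    """[[T]]_get: havoc+assume for a predicate, tuple-get+assumes for a kvar."""
    if r[0] == 'pred':
        out += ['havoc v', 'assume(%s)' % r[1]]
    else:
        _, k, ys = r
        ts = ['t%d' % i for i in range(len(ys) + 1)]
        out.append('get %s (%s)' % (k, ', '.join(ts)))
        out += ['assume(%s = %s)' % (y, t) for y, t in zip(ys, ts[1:])]
        out.append('v := t0')
    return out

def t_set(r, out):
    """[[T]]_set: an assert for a predicate, a tuple-set for a kvar."""
    if r[0] == 'pred':
        out.append('assert(%s)' % r[1])
    else:
        _, k, ys = r
        out.append('set %s (%s)' % (k, ', '.join(['v'] + ys)))
    return out

def translate(env, lhs, rhs):
    """[[G |- T1 <: T2]]  =  [[G]]; [[T1]]_get; [[T2]]_set."""
    out = []
    for x, r in env:                # each binding: a "get", then x := v
        t_get(r, out)
        out.append('%s := v' % x)
    t_get(lhs, out)
    return t_set(rhs, out)

# G = j:{k2[a:=a]}   |-   {v = j}  <:  {0 <= v < len(a)}
prog = translate([('j', ('kvar', 'k2', ['a']))],
                 ('pred', 'v = j'),
                 ('pred', '0 <= v < len(a)'))
for line in prog:
    print(line)
```

The output block reads top to bottom exactly like the straight-line instruction sequences of Figure~\ref{ex-ml-imperative}: bindings first, then the LHS ``get'', then the RHS ``set''.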
The correctness of the procedure is stated by the following theorem.
\begin{theorem}{}\label{th:translate}
$C$ is satisfiable iff $\translate{C}$ is \emph{{Relational}\xspace-safe}.
\end{theorem}
The proof of this theorem follows from the properties of the
following function $\alpha$ that maps a set $\hat{\istates^{\sharp}}\subseteq\istates^{\sharp}$
of {Relational}\xspace-states to constraint solutions:
$$\alpha(\hat{\istates^{\sharp}}) \doteq\
\lambda \kappa. \bigcup \set{\istate^{\sharp}({\kvar}) \mid \istate^{\sharp} \in \hat{\istates^{\sharp}}}$$
The function $\alpha$ enjoys the following two properties, each of which
can be proven by induction on the construction of $\mathsf{Reach}^{\sharp}$;
together they relate the satisfying solutions of the constraints to the
{Relational}\xspace-reachable states of the translated program, and
Theorem~\ref{th:translate} follows directly.
If $S$ satisfies $C$ then
$\alpha(\mathsf{Reach}^{\sharp}(\translate{C}))(\kappa) \subseteq S(\kappa)$
for all $\kappa$.
If ${\mathcal{E}} \not \in \mathsf{Reach}^{\sharp}(\translate{C})$ then
$\alpha(\mathsf{Reach}^{\sharp}(\translate{C}))$ satisfies $C$.
\subsection{Read-Write-Once Programs}
\label{sec:rwo}
At this point, via Theorem~\ref{th:translate}, we have reduced
checking satisfiability of type constraints to the problem of
verifying assertions of \textsc{Imp}\xspace programs under the
(non-standard) {Relational}\xspace semantics.
Unfortunately, under these semantics, the program contains variables
(${\kvar}$) which range over \emph{sets} of tuples.
This makes it inconvenient to directly apply abstract-interpretation
based techniques for imperative programs which typically assume the
(standard) {Imperative}\xspace semantics; each technique has to be painstakingly
adapted to the non-standard semantics.
We would be home and dry if we could prove the equivalence of the
{Relational}\xspace and {Imperative}\xspace semantics; that is, if we could show that
an \textsc{Imp}\xspace program was {Relational}\xspace-safe if and only if it was {Imperative}\xspace safe.
Unfortunately, this is not true.
\smallskip\noindent\textbf{\emph{Example.}\xspace}
Consider the \textsc{Imp}\xspace program:
$$
\begin{array}{rlcll}
{{\mathtt{loop}}}\{ \quad &
\begin{array}{l}
\HAVOC{\valu};\\
{{\mathtt{set}}}{\kappa}{(\valu)}
\end{array}
&
{\mathrm{[\!] }} &
\begin{array}{l}
{{\mathtt{get}}}{\kappa}{(t_0)};\\
\ASSIGN{\valu}{t_0}; \ASSIGN{x}{\valu}; \\
{{\mathtt{get}}}{\kappa}{(t_0)};\\
\ASSIGN{\valu}{t_0}; \ASSIGN{y}{\valu}; \\
{{\mathtt{assert}}}{(x=y)}
\end{array}& \}
\end{array}
$$
This program \emph{is not} {Relational}\xspace-safe: over successive loop
iterations, the set-operation in the first instruction populates
${\kvar}$ with the set of all integers, and the two get-operations in the
second instruction can then assign different integer values to $x$ and $y$.
However, the program \emph{is} {Imperative}\xspace-safe: whenever the second
instruction executes, ${\kvar}$ is either undefined (so the get halts
the instruction) or holds a single arbitrary integer that is assigned
to both $x$ and $y$, which causes the assert to succeed.
This example pinpoints exactly why the two semantics differ.
In the {Relational}\xspace semantics, in any given loop iteration,
different gets on the same ${\kvar}$ can return
\emph{different} tuples, while in the {Imperative}\xspace
semantics the gets are correlated and return the same tuple.
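With $V$ restricted to $\{0,1\}$, both state spaces of the example are finite, so the divergence can be observed by computing full reachability under each semantics. The sketch below (encoding ours) interprets ${\kvar}$ as a set of tuples with branching gets in the relational case, and as a single tuple, or \texttt{None} for $\perp$, with deterministic gets in the imperative case.

```python
V = (0, 1)
ERR = 'ERR'

def frz(d):
    return frozenset(d.items())

def step(states, op, relational):
    out = set()
    for st in states:
        if st == ERR:
            out.add(ERR)
            continue
        s = dict(st)
        tag = op[0]
        if tag == 'havoc':
            out |= {frz({**s, op[1]: c}) for c in V}
        elif tag == 'copy':                        # x := y
            out.add(frz({**s, op[1]: s[op[2]]}))
        elif tag == 'assert':
            out.add(frz(s) if op[1](s) else ERR)
        elif tag == 'set':                         # set k (x)
            t = (s[op[1]],)
            new = (s['k'] | {t}) if relational else t
            out.add(frz({**s, 'k': new}))
        elif tag == 'get':                         # get k (x)
            if relational:                         # branch over the set
                for t in s['k']:
                    out.add(frz({**s, op[1]: t[0]}))
            elif s['k'] is not None:               # read the one tuple
                out.add(frz({**s, op[1]: s['k'][0]}))
    return out

def reach(init, blocks, relational):
    seen, work = {frz(init)}, [frz(init)]
    while work:
        st = work.pop()
        for block in blocks:
            ss = {st}
            for op in block:
                ss = step(ss, op, relational)
            for n in ss - seen:
                seen.add(n)
                work.append(n)
    return seen

blocks = [
    [('havoc', 'v'), ('set', 'v')],
    [('get', 't'), ('copy', 'v', 't'), ('copy', 'x', 'v'),
     ('get', 't'), ('copy', 'v', 't'), ('copy', 'y', 'v'),
     ('assert', lambda s: s['x'] == s['y'])],
]
base = {'v': 0, 't': 0, 'x': 0, 'y': 0}
rel = reach({**base, 'k': frozenset()}, blocks, True)
imp = reach({**base, 'k': None}, blocks, False)
print(ERR in rel, ERR in imp)  # True False
```

The error state is relationally reachable as soon as ${\kvar}$ contains both tuples, while imperatively the two gets always return the single stored tuple.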
\mypara{Read-Write-Once Programs.}
An \textsc{Imp}\xspace instruction is a \emph{read-write-once} instruction if
any relation\xspace variable ${\kvar}$ is read from and written
to at most once in the instruction.
That is, read-write-once means at most one write and at most one read
(and not at most one read or write).
An \textsc{Imp}\xspace program is a \emph{read-write-once} program if each instruction
in its loop is a read-write-once instruction.
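The read-write-once condition is purely syntactic and easy to check by a single scan of each instruction. A minimal sketch, with an instruction encoding of our own choosing:

```python
from collections import Counter

def read_write_once(block):
    """True iff every relation variable is read by `get` at most once
    and written by `set` at most once within the block."""
    gets, sets = Counter(), Counter()
    for op in block:
        if op[0] == 'get':
            gets[op[1]] += 1
        elif op[0] == 'set':
            sets[op[1]] += 1
    return all(c <= 1 for c in gets.values()) and \
           all(c <= 1 for c in sets.values())

def rwo_program(blocks):
    """A program is read-write-once iff every block is."""
    return all(read_write_once(b) for b in blocks)

ok = [('get', 'k1'), ('set', 'k1'), ('get', 'k2')]   # one read, one write each
bad = [('get', 'k'), ('get', 'k')]                    # k is read twice
print(rwo_program([ok]), read_write_once(bad))        # True False
```

The counterexample program above fails this check precisely because its second instruction gets ${\kvar}$ twice; the blocks produced by the translation of Section~\ref{sec:translation} pass it by construction.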
We can show that for read-write-once \textsc{Imp}\xspace programs
the {Relational}\xspace and {Imperative}\xspace semantics are equivalent.
\begin{theorem}{}\label{th:rwo-equiv}
If $\mathtt{P}$ is a \emph{read-write-once} $\textsc{Imp}\xspace$ program then
$\mathtt{P}$ is \emph{{Relational}\xspace-safe} iff $\mathtt{P}$ is \emph{{Imperative}\xspace-safe}.
\end{theorem}
To prove this theorem, we formalize the connection between the
reachable states under the two different semantics, using the function
${\sf Expand}$, which maps a {Relational}\xspace-state to a set of {Imperative}\xspace states:
\begin{align*}
{\sf Expand}(\istate^{\sharp}) \doteq\ &
\left\{ s \mid
\begin{array}{ll}
s(x) = \istate^{\sharp}(x) & \mbox{for base variables}\\
s({\kvar}) = \mybar{v} & \mbox{if }\mybar{v} \in \istate^{\sharp}({\kvar})\\
s({\kvar}) = \perp & \mbox{if }\istate^{\sharp}({\kvar}) = \emptyset\\
s = {\mathcal{E}} & \mbox{if }\istate^{\sharp} = {\mathcal{E}}
\end{array}
\right\} \\
\intertext{We lift the function to sets of {Relational}\xspace states in the natural way:}
{\sf Expand}(\hat{\istates^{\sharp}}) \doteq\ & \bigcup \set{{\sf Expand}(\istate^{\sharp}) \mid \istate^{\sharp} \in \hat{\istates^{\sharp}}}
\end{align*}
Next, we can show that read-write-once instructions enjoy the following
property, by case splitting on the form of $\mathtt{I}$.
\begin{lemma}{\textbf{[Step]}}\label{lemma:step-lemma}
If $\mathtt{I}$ is a read-write-once instruction then
${\sf Expand}(\mathsf{Post}^{\sharp}(\istate^{\sharp}, \mathtt{I})) = \mathsf{Post}({\sf Expand}(\istate^{\sharp}), \mathtt{I})$.
\end{lemma}
We use this property to show that the reachable states under the different
semantics are equivalent.
\begin{lemma}{}\label{lemma:expand}
If $\mathtt{P} = {{\mathtt{loop}}} \set{\mathtt{I}_1 {\mathrm{[\!] }} \ldots {\mathrm{[\!] }} \mathtt{I}_n}$
is a read-write-once program, then ${\sf Expand}(\mathsf{Reach}^{\sharp}(\mathtt{P})) = \mathsf{Reach}(\mathtt{P})$.
\end{lemma}
\includeProof{
\begin{proof}
To prove that $\mathsf{Reach}(\mathtt{P}) \subseteq {\sf Expand}(\mathsf{Reach}^{\sharp}(\mathtt{P}))$,
we show
$${\forall m:\; \mathsf{Reach}(\mathtt{P}, m) \subseteq {\sf Expand}(\mathsf{Reach}^{\sharp}(\mathtt{P}))}$$
by straightforward induction on $m$, noting that
$s_0 \in {\sf Expand}(\istate^{\sharp}_0)$, and
$\mathsf{Post}({\sf Expand}(\istate^{\sharp}),\mathtt{I})\subseteq {\sf Expand}(\mathsf{Post}^{\sharp}(\istate^{\sharp},\mathtt{I}))$ for
any {Relational}\xspace-state $\istate^{\sharp}\in\istates^{\sharp}$, instruction $\mathtt{I}$, and any
program $\mathtt{P}$ (not necessarily read-write-once).
To show inclusion in the other direction,
we prove
$${\forall m:\; {\sf Expand}(\mathsf{Reach}^{\sharp}(\mathtt{P}, m)) \subseteq \mathsf{Reach}(\mathtt{P})}$$
by induction on $m$.
For the base case,
$${\sf Expand}(\mathsf{Reach}^{\sharp}(\mathtt{P},0)) = \mathsf{Reach}(\mathtt{P}, 0) \subseteq \mathsf{Reach}(\mathtt{P})$$
by the definition of the initial states.
By induction, assume that
$${\sf Expand}(\mathsf{Reach}^{\sharp}(\mathtt{P}, m)) \subseteq \mathsf{Reach}(\mathtt{P})$$
Let ${s' \in {\sf Expand}(\mathsf{Reach}^{\sharp}(\mathtt{P}, m+1))}$.
By Lemma~\ref{lemma:step-lemma}, either
$s'$ is already in ${\sf Expand}(\mathsf{Reach}^{\sharp}(\mathtt{P}, m))$,
in which case the inductive hypothesis applies and
hence $s' \in \mathsf{Reach}(\mathtt{P})$, or
$$s' \in \mathsf{Post}({\sf Expand}(\mathsf{Reach}^{\sharp}(\mathtt{P}, m)), \mathtt{I}_j)$$
for some $j$. That is, there is an
$s \in {\sf Expand}(\mathsf{Reach}^{\sharp}(\mathtt{P}, m))$ such that
$s' \in \mathsf{Post}(s, \mathtt{I}_j)$.
From the induction hypothesis
$s \in \mathsf{Reach}(\mathtt{P})$.
As $\mathsf{Reach}(\mathtt{P})$ is closed under $\mathsf{Post}$,
we conclude $s' \in \mathsf{Reach}(\mathtt{P})$.
\end{proof}
}
\subsection{Cloning}
\label{sec:cloning}
At this point, we have shown that the {Imperative}\xspace semantics of
read-write-once programs are equivalent to the {Relational}\xspace semantics.
All that remains is to show that the translation procedure of
Figure~\ref{fig:translate} produces read-write-once programs.
Unfortunately, this is not true.
\smallskip\noindent\textbf{\emph{Example.}\xspace}
Consider the following constraints:
$$
\begin{array}{c}
\emptyset \vdash\ \sreftyp{\kappa}\ ,
\emptyset \vdash\ \sreftyp{{\it true}} <: \sreftyp{\kappa}\ ,
\EXT{\ftyp{x}{{\kappa}}}{\ftyp{y}{{\kappa}}} \vdash\ \sreftyp{{\it true}} <: \sreftyp{x = y}
\end{array}
$$
It is easy to check that on the above constraints, the translation
procedure yields the \textsc{Imp}\xspace program from the previous example, which is not
read-write-once.
The reason the translated program is not a read-write-once program is that
there can be constraints $G \vdash\ T_1 <: T_2$
in which $\kappa$ occurs in multiple places within $G$ and $T_1$.
To solve this problem, we simply \emph{clone} the $\kappa$ variables
that occur multiple times inside a constraint, and use a different clone
at each occurrence.
We formalize this as a procedure ${\sf Clone}$ that maps a finite set of
constraints to another finite set. The procedure works as follows.
For each $\kappa$ that is read up to $n$ times in some constraint, we
make $n$ clones, $\kappa^1,\ldots,\kappa^n$, and
\begin{enumerate}
\item for the $i^{th}$ occurrence of $\kappa$ within any constraint,
we use the $i^{th}$ clone $\kappa^i$ (instead of $\kappa$), and,
\item for each constraint where $\kappa$ appears on the right hand side,
we make $n$ clones of the constraints where in the $i^{th}$ cloned constraint,
we use $\kappa^i$ (instead of $\kappa$).
\end{enumerate}
The first step ensures that each $\kappa$ is read-once in any constraint, and
the second step ensures that the clones correspond to exactly the same set of tuples
as the original variable $\kappa$.
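The two steps above can be sketched as follows over a hypothetical constraint representation that records only the $\kappa$ occurrences read on the left-hand side and the $\kappa$ written on the right (a toy sketch; the actual procedure works on refinement type constraints):

```python
from collections import Counter

def clone(constraints):
    # each constraint is a hypothetical pair (reads, write): the list of
    # kappa occurrences on the left, and the kappa on the right (or None)
    n = Counter()  # n[k] = max number of reads of k within one constraint
    for reads, _ in constraints:
        for k, c in Counter(reads).items():
            n[k] = max(n[k], c)
    out = []
    for reads, write in constraints:
        # step 1: the i-th read of kappa uses the i-th clone kappa^i
        seen = Counter()
        new_reads = []
        for k in reads:
            seen[k] += 1
            new_reads.append(f"{k}^{seen[k]}" if n[k] > 1 else k)
        # step 2: a constraint writing a cloned kappa is itself cloned,
        # once per clone, so all clones denote the same set of tuples
        if write is not None and n[write] > 1:
            for i in range(1, n[write] + 1):
                out.append((new_reads, f"{write}^{i}"))
        else:
            out.append((new_reads, write))
    return out
```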
We can prove that ${\sf Clone}$ enjoys the following properties.
\begin{theorem}{}\label{th:clone}
Let $C$ be a finite set of constraints.
\begin{enumerate}
\item $\translate{{\sf Clone}(C)}$ is a read-write-once program.
\item ${\sf Clone}(C)$ is satisfiable iff $C$ is satisfiable.
\end{enumerate}
\end{theorem}
It is easy to verify that $\translate{{\sf Clone}(C)}$ is a read-write-once
program.
Furthermore, any satisfying solution for the original
constraints can be mapped directly to a solution for
the cloned constraints.
To go in the other direction, we must map a solution that satisfies the
cloned constraints to one that satisfies the original constraints.
This is trivial if the solution for the cloned constraints
maps each clone $\kappa^i$ to the same set of tuples.
We show that if the cloned constraints have a satisfying solution,
they have a solution that satisfies the above property.
To this end, we prove the following lemma that states
that for \emph{any} set of constraints, the satisfying
solutions are closed under intersection.
\begin{lemma}{}\label{lemma:solnintersect}
If $S_1$ and $S_2$ are solutions that satisfy $C$
then $S_1 \cap S_2 \doteq\ \lambda \kappa. S_1(\kappa) \cap S_2(\kappa)$
satisfies $C$.
\end{lemma}
Thus if $S$ satisfies the cloned constraints
then by symmetry and Lemma~\ref{lemma:solnintersect}
the solution that maps \emph{each} cloned variable
to $\cap_{i=1}^{n}S(\kappa^i)$ also satisfies
the cloned constraints, and hence, directly yields
a solution to the original constraints.
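The closure property of Lemma~\ref{lemma:solnintersect} can be sanity-checked over a toy Horn-style constraint encoding (the representation below is an assumption of this sketch, not the paper's formalism):

```python
def intersect(s1, s2):
    # pointwise intersection of two solutions (dicts kappa -> set of tuples)
    return {k: s1[k] & s2[k] for k in s1}

def satisfies(sol, constraints):
    # toy constraints: ("in", t, k) demands tuple t in sol[k];
    # ("sub", k1, k2) demands sol[k1] to be a subset of sol[k2]
    for c in constraints:
        if c[0] == "in" and c[1] not in sol[c[2]]:
            return False
        if c[0] == "sub" and not sol[c[1]] <= sol[c[2]]:
            return False
    return True
```

For such constraints, any two satisfying solutions have a satisfying intersection, mirroring the lemma.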
Finally, as a corollary of
Theorems~\ref{th:translate},\ref{th:rwo-equiv},\ref{th:clone}
we get our main result that reduces the question
of refinement type constraint satisfaction,
to that of safety verification.
\begin{theorem}{}\label{th:equiv}
$C$ is satisfiable iff $\translate{{\sf Clone}(C)}$ is {Imperative}\xspace-safe.
\end{theorem}
While we state Theorems \ref{th:translate} and \ref{th:clone} as
preserving satisfiability, the proof shows how the solutions can be
effectively mapped between $C$ and $\translate{C}$ (or
$\translate{{\sf Clone}(C)}$).
In particular, while the intersection of two non-trivial solutions can
be a trivial solution, it would be guaranteed that in that case, the
trivial solution satisfies~$C$.
Stated in terms of invariants, Lemma \ref{lemma:solnintersect} captures
the observation that there may be several non-comparable
inductive invariants proving a safety property, but in that case, the
intersection of all the inductive invariants is also an inductive
invariant.
\section{Experiments}\label{sec:experiments}
\newcommand{\invpage}[1]{
\begin{minipage}[h]{.35\linewidth}
\begin{displaymath}
\begin{array}{c}
#1\\[\jot]
\end{array}
\end{displaymath}
\end{minipage}
}
\newcommand{\typepage}[1]{
\begin{minipage}[h]{.4\linewidth}
\begin{displaymath}
\begin{array}{c@{\;\doteq\ \;}l}
#1\\[\jot]
\end{array}
\end{displaymath}
\end{minipage}
}
\newcommand{\mathpage}[1]{
\begin{minipage}[h]{.3\linewidth}
\begin{displaymath}
#1\\[\jot]
\end{displaymath}
\end{minipage}
}
\begin{comment}
\begin{table*}[t]
\begin{small}
\centering
\begin{tabular}{|l||l|l|l|}
\hline
\textbf{Program} & \textbf{Time(sec)} & \textbf{Invariant} & \textbf{Refinement Types} \\ \hline\hline
max & 0.091
& ${\kvar}_1.1\leq{\kvar}_1.0 \wedge {\kvar}_1.2\leq{\kvar}_1.0$
& ${\kvar}_x \doteq\ {\it true}, {\kvar}_y \doteq\ {\it true}, {\kvar}_1 \doteq\ x \leq v \mathrel{\wedge} y \leq v$ \\ \hline
sum & 0.071
& $0 \leq {\kvar}_2.0 \wedge {\kvar}_2.1 \leq {\kvar}_2.0$
& ${\kvar}_k \doteq\ {\it true}, {\kvar}_2 \doteq\ 0 \leq v \wedge k \leq v$ \\ \hline
foldn & 0.060 &
${0 \leq {\kvar}_i.0 \wedge 0 \leq {\kvar}_3.0 \wedge {\kvar}_3.0 < {\kvar}_3.2}$ &
\mathpage{{\kvar}_i \doteq\ 0 \leq v, {\kvar}_3 \doteq\ 0 \leq v \wedge v < n} \\ \hline
arraymax & 0.135 &
\invpage{
0\leq {\kvar}_4.0 \wedge 0\leq {\kvar}_5.0 \mathrel{\wedge}\\[\jot]
0\leq {\kvar}_6.0 \wedge {\kvar}_g.0 < \mathtt{len}({\kvar}_g.1)
}
&
\typepage{
{\kvar}_4 & 0 \leq v, {\kvar}_5 \doteq\ 0 \leq v,\\[\jot]
{\kvar}_6 & 0 \leq v, {\kvar}_g \doteq\ v < \mathtt{len(a)}
}
\\ \hline
mask & 0.098 &
\invpage{
{\kvar}_1.0 < {\kvar}_1.1+\mathtt{len}({\kvar}_1.4) \wedge {\kvar}_1.1 \leq {\kvar}_1.0 \mathrel{\wedge} \\[\jot]
0 \leq {\kvar}_2.0 \wedge {\kvar}_2.0 < \mathtt{len}({\kvar}_2.3)
}
&
\typepage{
{\kvar}_1 & v < i+\mathtt{len(xs)} \wedge i \leq v, \\[\jot]
{\kvar}_2 & 0 \leq v \wedge v < \mathtt{len(a)}
}
\\ \hline
samples & 0.117 &
\invpage{
0 \leq {\kvar}_2.0 \wedge {\kvar}_2.0 < \mathtt{len}({\kvar}_2.4) \mathrel{\wedge} \\[\jot]
0 \leq {\kvar}_3.0 \wedge {\kvar}_3.0 < \mathtt{len}({\kvar}_3.3) \mathrel{\wedge} \\[\jot]
0 \leq {\kvar}_6.0
}
&
\typepage{
{\kvar}_2 & 0 \leq v \wedge v < \mathtt{len(b)}, \\[\jot]
{\kvar}_3 & 0 \leq v \wedge v < \mathtt{len(a)}, \\[\jot]
{\kvar}_6 & 0 \leq v
}
\\ \hline
\end{tabular}
\caption{Experimental evaluation using a predicate abstraction-based verification tool. Time is given in seconds.
The third column presents the invariant computed by our tool for the translated program.
The last column shows the refinement types obtained from the computed invariants.}
\label{tab:experiments}
\end{small}
\end{table*}
\end{comment}
\begin{table}[t]
\begin{small}
\centering
\begin{tabular}{|l||c|c|}
\hline
\textbf{Program} & \textbf{Time} & \textbf{Invariant} \\ \cline{3-3}
& \textbf{(sec)}& \textbf{Refinement Types} \\ \hline\hline
%
max & 0.091 & ${\kvar}_1.1\leq{\kvar}_1.0 \wedge
{\kvar}_1.2\leq{\kvar}_1.0$ \\ \cline{3-3}
& & ${\kvar}_x \doteq\ {\it true}, {\kvar}_y \doteq\ {\it true}, {\kvar}_1
\doteq\ x \leq v \mathrel{\wedge} y \leq v$ \\ \hline
%
sum & 0.071 &
$0 \leq {\kvar}_2.0 \wedge {\kvar}_2.1 \leq {\kvar}_2.0 $ \\\cline{3-3}
& & ${\kvar}_k \doteq\ {\it true}, {\kvar}_2 \doteq\ 0 \leq v \wedge k \leq
v$ \\ \hline
foldn & 0.060 &
$0 \leq {\kvar}_i.0 \wedge 0 \leq {\kvar}_3.0 \wedge {\kvar}_3.0 <
{\kvar}_3.2$ \\ \cline{3-3}
&& ${\kvar}_i \doteq\ 0 \leq v, {\kvar}_3 \doteq\ 0 \leq v \wedge v <
n$ \\ \hline
%
arraymax & 0.135 &
$ 0\leq {\kvar}_4.0 \wedge 0\leq {\kvar}_5.0 \mathrel{\wedge} $\\
& & $0\leq {\kvar}_6.0 \wedge {\kvar}_g.0 <
\mathtt{len}({\kvar}_g.1)$ \\ \cline{3-3}
& & ${\kvar}_4 \doteq\ 0 \leq v, {\kvar}_5 \doteq\ 0 \leq v,$\\
& & ${\kvar}_6 \doteq\ 0 \leq v, {\kvar}_g \doteq\ v < \mathtt{len(a)}$ \\\hline
mask & 0.098 &
${\kvar}_1.0 < {\kvar}_1.1+\mathtt{len}({\kvar}_1.4) \wedge {\kvar}_1.1 \leq {\kvar}_1.0 \mathrel{\wedge}$ \\
& & $0 \leq {\kvar}_2.0 \wedge {\kvar}_2.0 < \mathtt{len}({\kvar}_2.3)$\\\cline{3-3}
& & ${\kvar}_1 \doteq\ v < i+\mathtt{len(xs)} \wedge i \leq v,$\\
& & ${\kvar}_2 \doteq\ 0 \leq v \wedge v < \mathtt{len(a)}$ \\\hline
%
samples & 0.117 &
$ 0 \leq {\kvar}_2.0 \wedge {\kvar}_2.0 < \mathtt{len}({\kvar}_2.4) \mathrel{\wedge}$\\
& & $0 \leq {\kvar}_3.0 \wedge {\kvar}_3.0 <
\mathtt{len}({\kvar}_3.3) \mathrel{\wedge} 0 \leq {\kvar}_6.0$ \\\cline{3-3}
& & ${\kvar}_2 \doteq\ 0 \leq v \wedge v < \mathtt{len(b)}, $ \\
& & ${\kvar}_3 \doteq\ 0 \leq v \wedge v < \mathtt{len(a)}, {\kvar}_6 \doteq\ 0 \leq v$\\\hline
\end{tabular}
\caption{Experimental evaluation using a predicate abstraction-based
verification tool on examples from~\cite{LiquidPLDI08}.
The third column presents the invariant for the translated
program, and the resulting refinement types.}
\label{tab:experiments}
\end{small}
\end{table}
We have implemented a verification tool for \ocaml programs based on \HMC.
We use the liquid types infrastructure implemented in \dsolve \cite{LiquidPLDI08}
to generate refinement type constraints from \ocaml programs.
We use \ARMC \cite{PADL07},
a software model checker using predicate abstraction and interpolation-based
refinement, as the verifier for the translated imperative program.
\iffalse
Our implementation exploits the following structure of the reachability set of the translated
programs.
Since the variables ${\kvar}$ corresponding to refinement predicates
are independent, the vector of predicates defining an abstract state can be partitioned
such that each partition only refers to one relation variable.
This {\em Cartesian} abstraction leads to convergence in time linear in the height
of the abstract domain and linear in the number of relation variables.
For refinement, base variables can be limited in scope to each block,
and the only information that flows between the blocks refers to
relation variables.
\fi
Table~\ref{tab:experiments} shows the results of running our tool on a
suite of small \ocaml examples from~\cite{LiquidPLDI08}.
For array-manipulating programs, the safety objective is to prove
that array accesses are within bounds.
For \textsc{max} we prove that the output is at least as large as both inputs.
For \textsc{sum} we prove that the sum is at least as large as the largest
summation term.
\begin{table}[t]
\begin{small}
\centering
\begin{tabular}{|l||c|c|c|}
\hline
\textbf{Program} & \textbf{Time} & \textbf{\# iterations}
& \textbf{\# predicates}\\ \hline\hline
boolflip.ml & 2.17s & 7 & 21 \\ \hline
sum.ml & 0.24s & 5 & 14 \\ \hline
sum-acm.ml & 0.11s & 1 & 3 \\ \hline
sum-all.ml & 3.51s & 10 & 26 \\ \hline
mult.ml & 4.67s & 10 & 25 \\ \hline
mult-cps.ml & 780.24s & 11 & 27 \\ \hline
mult-all.ml & 18.44s & 9 & 24 \\ \hline\hline
boolflip-e.ml & 0.65s & \multicolumn{2}{c|}{} \\ \cline{1-2}
sum-e.ml & 0.01s & \multicolumn{2}{c|}{} \\ \cline{1-2}
sum-acm-e.ml & 0.02s & \multicolumn{2}{c|}{} \\ \cline{1-2}
sum-all-e.ml & 0.79s & \multicolumn{2}{c|}{} \\ \cline{1-2}
mult-e.ml & 0.01s & \multicolumn{2}{c|}{} \\ \cline{1-2}
mult-cps-e.ml & 7.69s & \multicolumn{2}{c|}{} \\ \cline{1-2}
mult-all-e.ml & 144.93s & \multicolumn{2}{c|}{} \\ \hline
\end{tabular}
\caption{Experimental evaluation of our tool on Depcegar
benchmarks~\cite{TerauchiPOPL2010}.
The third column presents the number of abstraction refinement
iterations required by \ARMC.
The last column gives the number of predicates discovered by
\ARMC.
For the programs with suffix ``-e'', which are incorrect, we omit
the number of iterations and predicates and only show the time
required by \ARMC to find a counterexample. }
\label{tab-terauchi-popl2010}
\end{small}
\end{table}
Table~\ref{tab-terauchi-popl2010} presents the running time of our
tool on the benchmark programs for the Depcegar
verifier~\cite{TerauchiPOPL2010}.
We observe that despite our black-box treatment of \ARMC as a
constraint solver, we obtain running times competitive with
Depcegar on most of the examples (Depcegar uses a customized procedure
for unfolding constraints and creating interpolation queries that
yield refinement types).
Most of the predicates discovered by the interpolation-based
abstraction refinement procedure implemented in \ARMC fall into the
fragment ``two variables per inequality.''
The example \textsc{mask} required a predicate that refers to three
variables (see ${\kvar}_1$ in Table~\ref{tab:experiments}).
While our initial experiments used a CEGAR-based tool, we expect optimized
abstract interpreters for numerical domains to also work well for this class of properties.
\section{Extensions and Related Work}\label{sec:discussion}
\begin{comment}
\subsection{Verifying Imperative Programs}
While we have focused on the verification of functional
programs, as we show next, our approach is not limited to functional
programming languages: type constraints are a generic mechanism
to capture program flow information, and the combination of
types and invariant generation can be used to reason
about imperative programs as well.
We demonstrate this through an example.
Consider the \Java program in Figure~\ref{ex-java}.
The procedure $\mathtt{getValidSamples}$ filters
data from an array and puts the indices to valid data in a list.
The procedure $\mathtt{sumValidSamples}$
calls $\mathtt{getValidSamples}$ to get the indices to valid data
in a list, and traverse the list to sum the corresponding
elements in the array.
Our objective is to statically prove that accesses to the array
$\mathtt{a}$ on line 8 are within bounds.
The proof requires a universally quantified assertion on the elements
of the list $\mathtt{l}$: each element in the list in $\mathtt{getValidSamples}$
is between $0$ and the length of $\mathtt{b}$.
Using this invariant, and the flow from $\mathtt{l}$ to $\mathtt{vs}$ in
$\mathtt{sumValidSamples}$, we have the invariant that every
element in $\mathtt{vs}$ is between $0$ and the length of $\mathtt{a}$.
Thus, when an element $\mathtt{j}$ is extracted from the list in line 7,
we can assert that $0\leq \mathtt{j} < \mathtt{a}.\mathtt{length}$ (abbreviated to
$0 \leq \mathtt{j} < {\mathtt{len}}\xspace(\mathtt{a})$) so the array access is safe.
\begin{figure}[t]
\begin{small}
\begin{verbatim}
List<int> getValidSamples(float[] b){
0: int i = 0;
List<int> l = new ArrayList<int>();
while(i < b.length){
1: if (validSample(b[i])){
2: l.add(i);
}
3: i++;
}
4: return l;
}
float sumValidSamples(float[] a){
5: List<int> vs = getValidSamples(a);
6: Iterator<int> itr = vs.iterator();
float sum = 0.0;
while(itr.hasNext()){
7: int j = itr.next();
8: sum += a[j];
}
return sum;
}
\end{verbatim}
\end{small}
\caption{\Java example with ADTs}
\label{ex-java}
\end{figure}
The proof requires automatic reasoning about list contents and arrays, and
invariant generation tools for imperative programs can be imprecise.
Instead, we use \HMC to decompose the argument into two parts.
In the first part, we use a type-based approach
to capture global flow of information in the program,
and in the second part, we discharge assertions on
base elements using invariant generation
for imperative programs.
\begin{figure*}[t]
\begin{small}
\[
\begin{array}{lrcl}
\mbox{(c0)} & \ftyp{\mathtt{b}}{\xspace \mathtt{float[]}} & \vdash\ & \sreftyp{\valu=0} <: \kappa_6\\
\mbox{(c1)} & \ftyp{\mathtt{b}}{\xspace \mathtt{float[]}},\ftyp{\mathtt{i}}{\kappa_6},
\ftyp{\mathtt{l}}{\mathtt{List}{\tuple{\kappa_2}}}, \mathtt{i} < {\mathtt{len}}\xspace(\mathtt{b}) & \vdash\ &
\sreftyp{\valu=\mathtt{i}} <: \sreftyp{0 \leq \valu < {\mathtt{len}}\xspace(\mathtt{b})}\\
\mbox{(c2)} & \ftyp{\mathtt{b}}{\xspace \mathtt{float[]}}, \ftyp{\mathtt{i}}{\kappa_6},\ftyp{\mathtt{l}}{\mathtt{List}\tuple{\kappa_2}},
\mathtt{i} < {\mathtt{len}}\xspace(\mathtt{b}) & \vdash\ & \sreftyp{\valu=\mathtt{i}} <: \kappa_2\\
\mbox{(c3)} & \ftyp{\mathtt{b}}{\xspace \mathtt{float[]}}, \ftyp{\mathtt{i}}{\kappa_6},
\ftyp{\mathtt{l}}{\mathtt{List}\tuple{\kappa_2}},
\mathtt{i} < {\mathtt{len}}\xspace(\mathtt{b}) & \vdash\ & \sreftyp{\valu=\mathtt{i}+1} <: \kappa_6\\
\mbox{(c4)} & \ftyp{\mathtt{b}}{\xspace \mathtt{float[]}}, \ftyp{\mathtt{i}}{\kappa_6} & \vdash\ &
\mathtt{List}\tuple{\kappa_2} <: \mathtt{List}\tuple{\kappa_1}\\
\mbox{(c5)} & \ftyp{\mathtt{a}}{\xspace \mathtt{float[]}} & \vdash\ &
\mathtt{List}\tuple{\SUBST{\kappa_1}{\mathtt{b}}{\mathtt{a}}} <: \mathtt{List}\tuple{\kappa_3}\\
\mbox{(c6)} & \ftyp{\mathtt{a}}{\xspace \mathtt{float[]}}, \ftyp{\mathtt{vs}}{\mathtt{List}\tuple{\kappa_3}} & \vdash\ &
\mathtt{Iterator}\tuple{\kappa_3} <: \mathtt{Iterator}\tuple{\kappa_4}\\
\mbox{(c7)} & \ftyp{\mathtt{a}}{\xspace \mathtt{float[]}},
\ftyp{\mathtt{vs}}{\mathtt{List}\tuple{\kappa_3}},
\ftyp{\mathtt{itr}}{\mathtt{Iterator}\tuple{\kappa_4}} & \vdash\ & \kappa_4 <: \kappa_5\\
\mbox{(c8)} & \ftyp{\mathtt{a}}{\xspace \mathtt{float[]}},
\ftyp{\mathtt{vs}}{\mathtt{List}\tuple{\kappa_3}},
\ftyp{\mathtt{itr}}{\mathtt{Iterator}\tuple{\kappa_4}},
\ftyp{\mathtt{j}}{\kappa_5} & \vdash\ &
\set{\valu=\mathtt{j}} <: \set{0 \leq \valu < {\mathtt{len}}\xspace(\mathtt{a})}
\end{array}
\]
\end{small}
\caption{Subtyping constraints for \Java program in Figure~\ref{ex-java}}
\label{ex-java-constraints}
\end{figure*}
We start with a refinement of \Java's types with refinement predicates.
As before, we construct refinement templates for the types
shown below
$$\begin{array}{l}
\mathtt{List}\tuple{\kappa_1}\ \mathtt{getValidSamples}(\xspace \mathtt{float[]}\ \mathtt{b});\\
\mathtt{List}\tuple{\kappa_2}\ \mathtt{l};\\
\sreftyp{\kappa_6}\ \mathtt{i};\\
\mathtt{List}\tuple{\kappa_3}\ \mathtt{vs};\\
\mathtt{Iterator}\tuple{\kappa_4}\ \mathtt{itr};\\
\sreftyp{\kappa_5}\ \mathtt{j};\\
\end{array}$$
where all refinements $\kappa$ abbreviate $\reftyp{\valu}{\mathtt{int}}{\kappa}$.
The variables not shown above are not refined.
Using the templates, we can construct type constraints as before.
The well-formedness constraints, capturing scope information, are
\[
\begin{array}{cl}
\mbox{(w1)} & \ftyp{\mathtt{b}}{\xspace \mathtt{float[]}} \vdash\ \mathtt{List}\tuple{\kappa_1}\\
\mbox{(w2)} & \ftyp{\mathtt{b}}{\xspace \mathtt{float[]}} \vdash\ \kappa_6\\
\mbox{(w3)} & \ftyp{\mathtt{b}}{\xspace \mathtt{float[]}}, \ftyp{\mathtt{i}}{\mathtt{int}} \vdash\ \mathtt{List}\tuple{\kappa_2} \\
\\
\mbox{(w4)} & \ftyp{\mathtt{a}}{\xspace \mathtt{float[]}} \vdash\ \kappa_3\\
\mbox{(w5)} & \ftyp{\mathtt{a}}{\xspace \mathtt{float[]}}, \ftyp{\mathtt{vs}}{\mathtt{List}\tuple{\mathtt{int}}} \vdash\ \kappa_4\\
\mbox{(w6)} & \ftyp{\mathtt{a}}{\xspace \mathtt{float[]}},
\ftyp{\mathtt{vs}}{\mathtt{List}\tuple{\mathtt{int}}},\ftyp{\mathtt{itr}}{\mathtt{Iterator}\tuple{\mathtt{int}}} \vdash\ \kappa_5\\
\end{array}
\]
The subtyping constraints are shown in Figure~\ref{ex-java-constraints}.
HIDE
By looking at the constraints, it is clear that $\kappa_1 = \kappa_2$ (from (c4)),
$\kappa_3 = \kappa_4$ (from (c6)), and $\kappa_4 = \kappa_5$ (from (c7)) in a least
solution. We perform these unifications, and furthermore, apply a subtyping
rule in which we replace complex refined types (e.g., $\mathtt{List}\tuple{\kappa_2}$)
with the corresponding non-refined type ($\mathtt{List}\tuple{\mathtt{int}}$).
We are left with the following simplified constraints.
\[
\begin{array}{cl}
\mbox{(c0)} & b:\xspace \mathtt{float[]} \vdash\ \set{\valu=0} <: \kappa_6\\
\mbox{(c1)} & b:\xspace \mathtt{float[]}, i:\kappa_6, l:List\tuple{int}, i < L(b) \vdash\ \\
& \hspace{0.4cm} \set{\valu=i} <: \set{0 \leq \valu < L(b)}\\
\mbox{(c2)} & b:\xspace \mathtt{float[]}, i:\kappa_6, l:List\tuple{int}, i < L(b) \vdash\ \set{\valu=i} <: \kappa_2\\
\mbox{(c3)} & b:\xspace \mathtt{float[]}, i:\kappa_6, l:List\tuple{int}, i < L(b) \vdash\ \set{\valu=i+1} <: \kappa_6\\
\mbox{(c5$'$)} & a:\xspace \mathtt{float[]} \vdash\ \kappa_2[a/b] <: \kappa_3\\
\mbox{(c8)} & a:\xspace \mathtt{float[]}, vs:List\tuple{int}, itr:Iterator\tuple{int}, j:\kappa_3 \vdash\ \\
& \hspace{0.4cm} \set{\valu=j} <: \set{0 \leq \valu < L(a)}
\end{array}
\]
(Note that (c5$'$) is derived from $(c5)$ using the rule for type constructors.)
END HIDE
Now we can translate the constraints into an {Imperative}\xspace program
as before, and apply invariant generation
techniques for imperative programs.
In the resulting imperative program, the variables range over (tuples of) integers,
and in particular, there is no reasoning about data structures.
Using ARMC, we find that (a) all the assertions hold, and (b) the constraints on the refinement
variables are
$\kappa_1 =\kappa_2: 0\leq \valu < {\mathtt{len}}\xspace(\mathtt{b})$,
$\kappa_3 = \kappa_4 = \kappa_5: 0\leq \valu < {\mathtt{len}}\xspace(\mathtt{a})$, and
$\kappa_6: 0\leq \valu < {\mathtt{len}}\xspace(\mathtt{b})$.
This gives the corresponding refined types that
show the program is safe \textit{i.e.,}\xspace every array access is within bounds.
\end{comment}
\begin{comment}
\begin{figure}[t]
\begin{small}
\[
\begin{array}{ll}
{{\mathtt{loop}}} \{ \\
// c0\\
& b := \mathtt{nondet}; {{\mathtt{assume}}}(L(b)\geq 0);\\
& \valu := 0;\\
& \kappa_6 := {{\mathtt{set}}}(\valu, b);\\
{\mathrm{[\!] }} // c1 \\
& b := \mathtt{nondet}; {{\mathtt{assume}}}(L(b)\geq 0);\\
& t[\valu, b] := {{\mathtt{get}}}(\kappa_6);\\
& i := t.\valu;\\
& l := \mathtt{nondet}; {{\mathtt{assume}}}(L(l) \geq 0);\\
& {{\mathtt{assume}}}(i < L(b));\\
& \valu := i;\\
& {{\mathtt{assert}}}(0 \leq \valu < L(b));\\
{\mathrm{[\!] }} // c2 \\
& b := \mathtt{nondet}; {{\mathtt{assume}}}(L(b)\geq 0);\\
& t[\valu, b] := {{\mathtt{get}}}(\kappa_6);\\
& i := t.\valu; \\
& l := \mathtt{nondet}; {{\mathtt{assume}}}(L(l)\geq 0);\\
& {{\mathtt{assume}}}(i < L(b));\\
& \valu := i ;\\
& \kappa_2 := {{\mathtt{set}}}(\valu, b, i);\\
{\mathrm{[\!] }} // c3 \\
& b := \mathtt{nondet}; {{\mathtt{assume}}}(L(b)\geq 0);\\
& t := {{\mathtt{get}}}(\kappa_6);\\
& i := t.\valu;\\
& l := \mathtt{nondet}; {{\mathtt{assume}}}(L(l)\geq 0);\\
& {{\mathtt{assume}}}(i < L(b));\\
& \valu := i + 1;\\
& \kappa_6 := {{\mathtt{set}}}(\valu, b, i);\\
{\mathrm{[\!] }} // c5'\\
& a := \mathtt{nondet}; {{\mathtt{assume}}}(L(a)\geq 0);\\
& t[\valu,b,i] := {{\mathtt{get}}}(\kappa_2);\\
& {{\mathtt{assume}}}(t.b = a);\\
& \kappa_3 := {{\mathtt{set}}}(t.\valu, a);\\
{\mathrm{[\!] }} // c8 \\
& a := \mathtt{nondet}; {{\mathtt{assume}}}(L(a)\geq 0);\\
& vs := \mathtt{nondet}; {{\mathtt{assume}}}(L(vs)\geq 0);\\
& itr := \mathtt{nondet};\\
& t[\valu,a] := {{\mathtt{get}}}(\kappa_3);\\
& j := t.\valu;\\
& \valu := j;\\
& {{\mathtt{assert}}}(0\leq \valu < L(a));\\
\}
\end{array}
\]
\end{small}
\caption{Imperative program for \Java example}
\label{ex-java-imperative}
\end{figure}
\end{comment}
\subsection{Completeness}
The soundness of safety verification for higher-order programs
for any domain follows from the soundness of constraint generation
(\textit{e.g.,}\xspace Theorem~1 in \cite{LiquidPLDI08}) and Theorem~\ref{th:equiv}.
Since the safety verification problem for higher-order programs
is undecidable, the technique cannot be complete in general.
Even in the finite-state case, in which
each base type has a finite domain (\textit{e.g.,}\xspace booleans),
completeness depends on the generation of type constraints.
For example, in our examples and in our implementation, we have
assumed a \emph{context insensitive} constraint generation from program
syntax, \textit{i.e.,}\xspace we have not distinguished the types of the same function
at different call points.
This entails a loss of information, as the following example demonstrates.
Consider
\begin{verbatim}
let check f x y = assert (f x = y) in
check (fun a -> a) false false ;
check (fun a -> not a) false true
\end{verbatim}
where the builtin function ${{\mathtt{assert}}}$ has the type
$\reftyp{\valu}{\mathtt{bool}}{\valu} \rightarrow \mathtt{unit}$.
The refinement template for \verb+check+
generated by our constraint generation process is
\[
(\mathtt{x}: \reftyp{\valu}{\mathtt{bool}}{\kappa_1}\rightarrow \set{\kappa_2}) \rightarrow \set{\kappa_3} \rightarrow \set{\kappa_4}\rightarrow \mathtt{unit}
\]
which is too weak to show that the program is safe.
This is because the template ``merges'' the two call sites for \verb+check+.
One way to get context sensitivity is through \emph{intersection types}
\cite{FreemanPfenning91,Dunfield,NaikPalsberg,KobayashiPOPL09}.
For the above example, we can show type safety
using the following refined type for \verb+check+:
\[
\begin{array}{cl}
\bigwedge & \begin{array}{l}
(\mathtt{x}: \mathtt{bool} \rightarrow \set{\valu = \mathtt{x}})\rightarrow \set{\lnot \valu}\rightarrow \set{\lnot\valu}\rightarrow \mathtt{unit}\\
(\mathtt{x}: \mathtt{bool} \rightarrow \set{\valu = \lnot\mathtt{x}})\rightarrow \set{\lnot \valu}\rightarrow \set{\valu}\rightarrow \mathtt{unit}
\end{array}
\end{array}
\]
It is important to note that Theorems~\ref{th:translate} and~\ref{th:rwo-equiv}
hold for \emph{any} set of constraints.
Thus, one way to get completeness in the finite state case
is to generate refinement templates using intersection types,
perform the translation to \textsc{Imp}\xspace programs,
and then using a complete invariant generation
technique for finite state systems.
The key observation (made in \cite{KobayashiPOPL09})
that ensures a finite number of constraints is
that there are only finitely many ``contexts'' in the finite-state case,
and hence finitely many terms in the intersection types.
The bad news is that the bound on the number of contexts is
$\mathsf{exp}_n(k)$, where $n$ is the highest order of any
function in the program, $k$ is the maximum arity of any function in the program,
and $\mathsf{exp}_n(k)$ is a stack of $n$ exponentials, defined by
$\mathsf{exp}_0(k) = k$, and $\mathsf{exp}_{n+1}(k) = 2^{\mathsf{exp}_n(k)}$.
Fully context-sensitive constraints are used in \cite{KobayashiPOPL09}
to show completeness in the finite case, at the price of
$\mathsf{exp}_n(k)$ in {\em every case}, not just the worst case.
In our exposition and our implementation, we have traded
off precision for scalability: while we lose precision
by generating context-insensitive constraints, we avoid
the $\mathsf{exp}_n$ blow-up that comes with full context sensitivity.
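To see how severe the blow-up is, the tower $\mathsf{exp}_n(k)$ can be computed directly (a toy sketch of the recurrence above):

```python
def exp_tower(n, k):
    # exp_0(k) = k and exp_{n+1}(k) = 2 ** exp_n(k):
    # a stack of n exponentials over k
    for _ in range(n):
        k = 2 ** k
    return k
```

Even for small orders and arities the bound is astronomical: for instance, order $3$ with arity $2$ already yields $2^{16}$ contexts.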
However, practical benchmarks have shown that, since the types themselves
capture relations between inputs and outputs, context-insensitive
constraint generation suffices to prove a variety of complex programs safe
\cite{LiquidPLDI08, LiquidPLDI09, GordonRefinement09}.
Regarding completeness in special cases, completeness with respect to the
discovery of refinement predicates in the octagon and difference-bound
abstract domains~\cite{MineOctagon06}, for template-based invariant
generation for linear arithmetic~\cite{ColonCAV03}, and for its extension
with uninterpreted function symbols~\cite{BeyerVMCAI07} carries over from
the respective verification approaches.
\subsection{Related Work}
\mypara{Higher-Order Programs.}
Kobayashi \cite{KobayashiPOPL09,KobayashiLICS09} gives an algorithm
for model checking arbitrary $\mu$-calculus properties of finite-data
programs with higher order functions by a reduction to model checking
for higher-order recursion schemes (HORS)~\cite{Ong}.
For safety verification, \HMC offers a promising alternative.
First, the reduction to HORS critically depends on a finite-state abstraction of the data.
In contrast, our reduction defers the data abstraction to the abstract interpreter working
on the imperative program, thus enabling the direct application of abstract interpreters working
over infinite domains.
Since abstract interpreters over infinite abstract domains are strictly
more powerful than (infinite families of) finite ones \cite{CousotCousot92comparison},
our approach can be strictly more powerful for infinite-state programs.
Second, in the translation of an abstracted program to a HORS,
this algorithm eliminates Boolean variables by enumerating
all possible assignments to them, giving an exponential
blow-up from the program to the HORS.
In contrast, our technique preserves the Boolean state \emph{symbolically},
enabling the use of efficient symbolic algorithms for verification.
Consider, for example, the simple program:
\begin{verbatim}
let f b1 ... bn x =
if (b1 || ... || bn) then lock x;
if (b1 || ... || bn) then unlock x
in let f (*) ... (*) (newlock ())
\end{verbatim}
in which we wish to prove that calls to lock and unlock alternate.
Kobayashi's translation \cite{KobayashiPOPL09} gives an {\em exponential} sized HORS,
with a version of $\mathtt{f}$ for each assignment to \verb+b1,...,bn+.
In contrast, our reduction preserves the source-level expressions and is linear,
and amenable to symbolic verification techniques (e.g., BDDs).
Previous experience with software model checking \cite{SLAMPOPL02,HJMM04,fsoft06}
shows that the number of reachable states is often drastically
smaller than $2^p$ where $p$ is the number of Booleans.
Thus, the pre-processing step that enumerates Booleans
may not lead to a scalable implementation.
Might \cite{Might07} describes {\em logic-flow analysis}, a general safety verification
algorithm for higher-order languages, which is the product
of a $k$-CFA like call-strings analysis and a form of SMT-based
predicate abstraction (together with widening).
In contrast, our work shows how higher-order
languages can be analyzed directly via
abstract analyses designed for first-order imperative languages.
Inference of refinement types using counterexample-guided techniques
was recently identified as a promising
direction~\cite{UnnoPPDP09,TerauchiPOPL2010}.
In contrast, our approach is not limited to CEGAR and facilitates the
applicability of a wide range of abstract interpretation techniques for
precise reasoning about program data.
\mypara{Software Verification.}
This work was motivated by the recent success in
software model checking for first-order imperative
programs \cite{SLAMPOPL02,HJMM04,CousotPLDI03,McMillan06},
and the desire to apply similar techniques to modern programming
languages with higher order functions.
Our starting point was refinement types \cite{FreemanPfenning91,Knowles07},
implemented in dependent ML \cite{XiPfenning99} to give strong static guarantees,
and the work on liquid types \cite{LiquidPLDI08,LiquidPLDI09}
that applied predicate abstraction to infer refinement types.
By enabling the application of automatic invariant generation from software
model checking,
\HMC reduces the need for programmer annotations in refinement type systems.
\bibliographystyle{plain}
\section{Introduction} \label{sec:intro}
\subsection{Motivation}
The Toda lattice is a completely integrable system of modern vintage (late 1960's) which, over the ensuing decades, has shown remarkable resilience in its applicability to a wide range of problems in applied mathematics, geometry and analysis. Though initially posed in a classical mechanical setting, under an appropriate coordinate transformation it acquires the form of a Lax equation on tri-diagonal Jacobi matrices. This has opened the door to many other extensions, including discrete and ultra-discrete dynamical analogues (described later in this introduction) as well as Lie theoretic generalizations, due to Kostant, inspired by the Lax formulation. A third, more recent, type of extension comes from lifting these Lax representations from Jacobi matrices to a larger phase space related to Borel Lie algebras. This is the Full Kostant-Toda lattice, and it too has many potential applications to a variety of mathematical areas. Some of these have begun to emerge very recently (see comments at the end of Section
\ref{section:summary}). This paper will lay a firm and uniform foundation for potential applications of the Full Kostant-Toda lattice
in its several continuous and discretized versions and do this in a way that makes Lie theoretic generalizations natural.
In this introduction we will present
some historical and conceptual background that will enable us to informally summarize our main results in Section \ref{section:summary}
and then make some brief connections to related literature. After that we outline the remainder of the paper.
\subsection{The Classical Toda Lattice} \label{history}
The Toda lattice \cite{bib:toda} is a dynamical system on $\mathbb{R}^{2n}$, with coordinates $(p_1,\ldots,p_n,q_1,\ldots,q_n)$. The system is Hamiltonian with respect to the standard symplectic structure on $\mathbb{R}^{2n}$ with Hamiltonian
\begin{equation} \label{hamiltonian}
H(p_1,\ldots,p_n,q_1,\ldots,q_n)=\dfrac{1}{2}\sum_{j=1}^n p_j^2 + \sum_{j=1}^{n-1}e^{q_j-q_{j+1}}.
\end{equation}
(Equations (\ref{eqoneofhamiltod}) - (\ref{eqtwoofhamiltod}) present the associated classical Hamiltonian ODEs.)
Flaschka's transformation \cite{bib:fl} introduces the variable replacement $$(p_1,\ldots,p_n,q_1,\ldots,q_n)\mapsto (a_1,\ldots,a_n,b_1,\ldots,b_{n-1})$$ given by setting $a_j=-p_j$ for $j=1,\ldots,n$, and $b_j=e^{q_j-q_{j+1}}$ for $j=1,\ldots,n-1$. In these variables, the Hamiltonian equations may be re-expressed as a Lax equation
\begin{equation} \label{Lax}
\dfrac{d}{ds}X=[X,\pi_-(X)],~~~~X(0)=X_0
\end{equation}
where
\begin{equation} \label{hessenbergprojection}
X \doteq \left[\begin{array}{cccc} a_1 & 1\\ b_1 & a_2 & \ddots\\ &\ddots & \ddots & 1\\ &&b_{n-1}&a_n\end{array}\right]
\qquad
\pi_-(X)= \left[\begin{array}{cccc} 0 & \\ b_1 & 0 & \\ &\ddots & \ddots & \\ &&b_{n-1}&0\end{array}\right].
\end{equation}
The formulation as a Lax equation (\ref{Lax}) opens a door to connections with symmetry groups (Lie theory) and integrability, which spurred the initial fascination with the system and continues to drive current applications within both pure and applied mathematics. Everything discussed here may be formulated in a general Lie-theoretic framework, which is an important aspect of this topic; however, for brevity of exposition we will remain with the above specific formulation.
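As a numerical illustration (not part of the original formulation; the helper names are ours), one can integrate the Lax equation (\ref{Lax}) directly and observe two of its structural features: the spectrum is preserved, and the superdiagonal stays equal to one.

```python
import numpy as np

def lax_rhs(X):
    """Right-hand side [X, pi_-(X)], with pi_- the strictly
    lower-triangular projection."""
    P = np.tril(X, k=-1)
    return X @ P - P @ X

def rk4_step(X, h):
    # Classical fourth-order Runge-Kutta step for dX/ds = lax_rhs(X).
    k1 = lax_rhs(X)
    k2 = lax_rhs(X + 0.5 * h * k1)
    k3 = lax_rhs(X + 0.5 * h * k2)
    k4 = lax_rhs(X + h * k3)
    return X + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Tridiagonal Hessenberg initial data: a_i on the diagonal,
# b_i on the subdiagonal, ones on the superdiagonal.
X = np.array([[1.0, 1.0, 0.0],
              [0.5, 2.0, 1.0],
              [0.0, 0.5, 3.0]])
ev0 = np.sort(np.linalg.eigvals(X))
for _ in range(1000):
    X = rk4_step(X, 1e-3)
# Eigenvalues and the unit superdiagonal are preserved along the flow.
```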
\subsection{dToda} \label{sec:dtoda}
We now consider the following discrete dynamics on tridiagonal Hessenberg matrices, i.e., those of the form (\ref{hessenbergprojection}),
\begin{equation}\label{groupversoflax}
X(t+1)=\Pi_-^{-1}(X(t))X(t)\Pi_-(X(t)), \,\,\,\, t \in \mathbb{N} \cup \{ 0 \}
\end{equation}
where $\Pi_-(X(t))$ denotes the lower unipotent factor in the lower-upper factorisation of $X(t)$.
Equation (\ref{Lax}) is in fact an infinitesimal analogue of Equation (\ref{groupversoflax}). Equation (\ref{Lax}) is not the {\it exact} infinitesimal version of (\ref{groupversoflax}); the exact version is given by (\ref{luloglaxen}). However, though these two continuous systems are not the same flows on the stated phase space of tridiagonal matrices, they do commute with one another in the Hamiltonian sense of being in involution \cite{bib:arnold}. Traditional usage \cite{bib:h} refers to (\ref{groupversoflax}) as
the {\it discrete Toda lattice} and also denotes this system by {\it dToda}, so we will continue with that usage but bear in mind the distinction.\\
\n More precisely, dToda is constructed as follows: if one can factor a matrix $X(t)$, of the tridiagonal Hessenberg form shown in (\ref{hessenbergprojection}), as $X(t)=L(t)R(t)$, where $L(t)$ is lower triangular and $R(t)$ is upper triangular, then
the time $t+1$ matrix $X(t+1)$ in (\ref{groupversoflax}) is given by $$X(t+1)=R(t)L(t).$$
This is because $\Pi_-(X(t))=L(t)$, so
$$\Pi_-^{-1}(X(t))X(t)\Pi_-(X(t))
=L(t)^{-1}L(t)R(t)L(t)=R(t)L(t).$$
This way of viewing the dynamics, due to Symes \cite{bib:symes78}, amounts to performing a lower-upper factorisation of $X(t)$, then flipping the factors. In Section
\ref{section:dfktodarecdtoda}, we will specify positivity conditions which ensure that this dynamics may be continued for all discrete time.
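In code, one factor-and-flip step reads as follows (a minimal sketch of ours; the Doolittle elimination below assumes the factorisation exists, i.e., nonvanishing pivots):

```python
import numpy as np

def lu_nopivot(X):
    """Doolittle elimination without pivoting: X = L R with
    L lower unipotent and R upper triangular (pivots assumed nonzero)."""
    n = X.shape[0]
    L, R = np.eye(n), X.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = R[i, k] / R[k, k]
            R[i, k:] -= L[i, k] * R[k, k:]
    return L, R

def dtoda_step(X):
    """One dToda step: factor X = L R, then flip to R L."""
    L, R = lu_nopivot(X)
    return R @ L

# A tridiagonal Hessenberg matrix: ones on the superdiagonal.
X0 = np.array([[2.0, 1.0, 0.0],
               [1.0, 3.0, 1.0],
               [0.0, 1.0, 4.0]])
X1 = dtoda_step(X0)
# The step is isospectral, and X1 is again a tridiagonal
# Hessenberg matrix with unit superdiagonal.
```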
Since $X(t)$ is a tridiagonal Hessenberg matrix, one can see that $L(t)$ is lower bi-diagonal with ones on its diagonal and $R(t)$ is upper bi-diagonal with ones on its superdiagonal. Therefore, the product $X(t+1)=R(t)L(t)$ is itself once again a tridiagonal Hessenberg matrix; i.e., this flow preserves the Toda lattice phase space. In fact it is the {\it stroboscope} of a solution to the Lax equation in continuous time $s$ \cite{bib:watkins},
\begin{equation}
\dfrac{d}{ds}X=[X,\pi_-(\log X)] \label{luloglaxen}
\end{equation}
that commutes with (\ref{Lax}). \begin{rem} \label{rem:symes} In our applications here we will always take $X$ to have positive eigenvalues so that $\log X$ may be uniquely defined in terms of the principal branch of the logarithm along the positive real axis. From this it follows that this continuous flow is well-defined and commutes with the original Toda flow
\cite{bib:watkins, bib:dlt}. The Hamiltonian for (\ref{luloglaxen}) is $H_{LU} = \mbox{Tr}(X \log X - X)$.
\end{rem}
\subsection{The Box-Ball System}
The passage from the time-discrete dToda system to a system that is spatially discrete as well is mediated by a process called {\it ultra-discretization} (see Section \ref{sec:ultradis}). The resultant system is denoted {\it udToda} and has a remarkable presentation in terms of cellular automata. We briefly describe the latter here.
The (classical) box-ball system (BBS) consists of an infinite number of boxes arranged as a one-dimensional array with a finite number of boxes containing a ball. A simple evolution rule is provided for the box-ball dynamics:\index{Basic Box-Ball Evolution}\index{Box-Ball System}
\begin{enumerate}[(1)]
\item Take the left-most ball that has not been moved and move it to the left-most empty box to its right.
\item Repeat (1) until all balls have been moved precisely once.
\end{enumerate}
\n Since the algorithm requires one to know which balls have been moved, we can, without technically changing the algorithm, introduce a colour-coding based on whether balls have moved or not. Balls will be blue until they have moved, after which they will become red. When all balls are red, the colours should be reset to blue, ready for the next time step. Or, equivalently, a $0$-th step of colouring all balls blue should be prescribed. We will use the latter for a minor benefit in brevity. Below is an example of the evolution with this colour-coding, with each ball movement separated into a sub-step:
\begin{figure}[H]
\centering
\tikz[scale=0.6]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {1,2,3,7,10,11,13}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {$\cdots$};
}
}
\tikz[scale=0.6]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {2,3,7,10,11,13}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {4}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {$\cdots$};
}
}
\tikz[scale=0.6]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {3,7,10,11,13}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {4,5}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {$\cdots$};
}
}
\tikz[scale=0.6]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {7,10,11,13}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {4,5,6}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {$\cdots$};
}
}
\tikz[scale=0.6]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {10,11,13}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {4,5,6,8}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {$\cdots$};
}
}
\tikz[scale=0.6]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {11,13}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {4,5,6,8,12}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {$\cdots$};
}
}
\tikz[scale=0.6]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {13}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {4,5,6,8,12,14}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {$\cdots$};
}
}
\tikz[scale=0.6]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {4,5,6,8,12,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {$\cdots$};
}
}
\caption{A box-ball system time evolution (one time step).}\label{firstbbsexample}
\end{figure}
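The evolution rule above can be simulated in a few lines; a sketch of ours, assuming a finite $0/1$ array with enough empty boxes appended on the right:

```python
def bbs_step(cells):
    """One time step of the box-ball system on a 0/1 list
    (1 = ball). Empty boxes are appended so every ball can move."""
    cells = list(cells) + [0] * sum(cells)
    moved = [False] * len(cells)
    for i, c in enumerate(cells):
        if c == 1 and not moved[i]:
            # left-most unmoved ball: send it to the
            # left-most empty box to its right
            j = i + 1
            while cells[j] == 1:
                j += 1
            cells[i], cells[j] = 0, 1
            moved[j] = True
    return cells

# The configuration of Figure 1 above: balls in boxes 1,2,3,7,10,11,13
# (counting from 0).
start = [0] * 16
for i in (1, 2, 3, 7, 10, 11, 13):
    start[i] = 1
after = bbs_step(start)
# One step moves the balls to boxes 4,5,6,8,12,14,15,
# matching the final frame of the figure.
```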
\n Each box-ball configuration is coordinatised by counting balls in blocks and spaces between blocks. Tokihiro \cite{bib:tokihiro} showed that dToda can be spatially discretised (ultradiscretisation; see Section \ref{sec:ultradis}) in such a way that the spatial discretisation (udToda) describes the coordinate evolution of the box-ball system.\\
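For concreteness, one standard min-plus form of the udToda evolution can be sketched as follows (a sketch of ours, with $Q_j$ the block lengths, $W_j$ the inter-block gaps, and an infinite boundary gap; the indexing conventions are illustrative):

```python
def ud_toda_step(Q, W):
    """One udToda time step in min-plus form. Q[j] are block
    lengths, W[j] the gaps between consecutive blocks
    (len(W) == len(Q) - 1); the gap after the last block is infinite."""
    N = len(Q)
    Q_new, carry = [], 0  # carry = balls passed over from earlier blocks
    for j in range(N):
        gap = W[j] if j < N - 1 else float('inf')
        q = min(gap, Q[j] + carry)
        carry += Q[j] - q
        Q_new.append(q)
    W_new = [Q[j + 1] + W[j] - Q_new[j] for j in range(N - 1)]
    return Q_new, W_new

# Coordinates of the configuration in Figure 1:
Q, W = [3, 1, 2, 1], [3, 2, 1]
Q1, W1 = ud_toda_step(Q, W)
# One step gives Q1 == [3, 1, 1, 2], W1 == [1, 3, 1], matching the
# direct ball-moving evolution shown in the figure.
```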
Although the coordinates were originally developed to count block lengths, coordinate evolution of udToda makes sense even when coordinates are taken to be zero. In fact, making geometric sense of the vanishing of coordinates leads to a remarkable new cellular automaton known as the ghost-box-ball system (GBBS), which the authors first introduced in \cite{bib:era}.
\subsection{The Full Kostant-Toda Lattice} \label{sec:FKT}
The main themes of this paper concern an extension
of the Lax equation (\ref{Lax}) from the tridiagonal Hessenberg phase space represented in (\ref{hessenbergprojection}) to the full phase space of {\it all} lower Hessenberg matrices which will be denoted by
$\mathcal{H}$.
This dynamical system on $\mathcal{H}$ was first introduced in \cite{bib:efs}, and named therein as the {\it Full Kostant-Toda lattice } (abbreviated in this paper as FKToda), in the context of complete integrability for dynamical systems associated to the classical semi-simple Lie algebras. From the viewpoint of complete integrability it is also related to (but different from) the Toda lattice equations defined on generic symmetric matrices \cite{bib:dlnt}.
The central idea for us is that the various discrete analogues of FKToda that we will consider may be deconstructed in terms of coupled systems of dToda. Hence, it is worth mentioning here that, in continuous time, the {\it tridiagonal} Hessenberg matrices comprise an invariant subspace for FKToda. A more general and systematic discussion of the invariance of bandlimited Hessenberg matrices is presented in Section \ref{ExtRed}.
\subsection{Summary of Results} \label{section:summary}
\begin{figure}[H]
\centering\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|c|c|c|}
\hline
Scale & Classical System & Full System\\
\hline
Discrete space, continuous time & Toda Lattice & Full Kostant-Toda (FKToda)\\
Discrete space, discrete time & ``Discrete Toda'' (dToda) & Symes's Map (dFToda)\\
Ultradiscrete space, discrete time & Box-Ball System (BBS) & Ultradiscrete Full Toda (udFToda)\\
\hline
\end{tabular}
\caption{The dynamical systems in their three spatio-temporal scales, and in both the classical and full settings.}
\label{casttable}
\end{figure}
This paper provides a description of how the classical systems just described (listed in the middle column of Figure \ref{casttable}) extend to the Full systems listed in the last column. The table also summarizes the notations for these systems that will be used going forward. In particular this work establishes precise conditions for existence and uniqueness of solutions to the full systems and explains the underlying group theoretic framework for understanding their properties and potential generalizations.
The first column describes the scales, or domains, usually associated with these models. Another scale that might have been considered here is that of continuous space and continuous time. One model for that in the middle column would be the KdV equation, for which the classical Toda lattice is often regarded as an integrable spatial discretization. A natural analogue for this in the third column would be the KP equation. There is a large literature related to a discretization
of KP referred to as dKP (see \cite{bib:hirota81, bib:knw} and references therein). However, these are usually presented in a formal infinite-dimensional setting. Our approach stems from the Kostant-Kirillov Poisson bracket on the dual of a Lie algebra \cite{bib:efs}, corresponding to the Hessenberg matrices in the cases that we focus on here. It is in this setting that it becomes natural to consider matrix factorizations and potential applications such as to representation theory, random matrix theory, orthogonal polynomials \cite{bib:ew}. However, exploring connections to dKP, along the lines related to \cite{bib:sik} as described below and in Section \ref{sec:connectionsliterature}, is something to be considered in the future.\\
\n As we have just mentioned, and as seen above in (\ref{groupversoflax}) and later in (\ref{fulltodasolnfactorsconj}), matrix factorisation plays a central role in the dynamical systems we are considering. Specifically, we consider the lower-upper factorisations. There is a deeper factorisation structure dating back to the work of Loewner and Whitney (Section \ref{sec:lw}) applied to \textit{totally positive} Hessenberg matrices (\ref{resphase}). This is rooted in the study of weighted path matrices for planar networks \cite{bib:fz}, which are related to recent developments in first and last passage percolation \cite{bib:ganguly}. Such structures also appear in a related context in \cite{bib:er}. This deeper factorisation produces a unique coordinatisation of the Toda phase spaces in terms of so-called \textit{Lusztig} coordinates (Section \ref{sec:lusztigfactors}). These coordinates and their associated factorizations, which may be generally referred to as {\it Lusztig factorizations}, are a recurring theme throughout this paper. \\
\n We discuss Lusztig factorization at the three spatio-temporal scales described in Figure \ref{casttable}. First, in the discrete time, discrete space scale, Theorem \ref{remark:substepsdtoda} provides a decomposition of the full discrete Toda map (dFToda) as a sequence of coupled dToda maps. We further establish through Theorem \ref{thm:wp} and Corollary \ref{cor:wp} the well-posedness of the dFToda map under certain positivity conditions, and explicitly describe the evolution on the Lusztig coordinates in terms of $\tau$-functions.\\
\n As we move on to the ultradiscrete setting, we introduce a full extension of udToda (or BBS) in Definition \ref{definitionofudftodaevol}, in a way that most naturally mirrors the full extensions of classic Toda to full Kostant-Toda, and dToda to dFToda. In some sense, this should be considered the minimal extension that serves this purpose. As an additional benefit of this particular system, we establish a method of fully capturing the Robinson-Schensted-Knuth (RSK) correspondence (a fundamental algorithmic bijection with applications in combinatorics and representation theory \cite{bib:aigner}) in the udFToda dynamics, right down to determining both the insertion and recording tableaux. However, we defer full details of this to an upcoming paper, opting to demonstrate the main ideas through an example which alludes to the general picture. This can be found in Remark \ref{rem:rskasfulludtoda}.\\[3pt]
\n Lastly, in the discrete space, continuous time setting, we establish in Section \ref{sec:return} a Lusztig coordinate description of the full Kostant-Toda lattice, culminating in both matrix and coordinate descriptions of dynamics in terms of Lusztig coordinates in Theorem \ref{thm:matrixandcoordrepoffktoda}.
\bigskip
Discussions of bidiagonal representations of the Full Kostant-Toda lattice and its discretizations have appeared very recently elsewhere in the literature. In \cite{bib:sik} another Toda-type system, the so-called discrete hungry Toda lattice (dhToda), is presented along with a prescription for a continuum limit that yields FKToda in bidiagonal form. The approach in \cite{bib:sik} differs from ours in that it goes from dhToda to FKToda, while ours proceeds directly, in the opposite direction, from continuous to discrete,
by embedding an integrable discrete system, due to Symes, in FKToda.
\medskip
This paper also yields an alternative perspective on and extension of integrable systems aspects of the seminal work of O'Connell and collaborators that relates the classical Toda lattice
to random matrix theory and associated stochastic processes as well as geometric versions of RSK. Extensions of these kinds of connections to the Full Toda systems are an interesting possible future direction for exploration. In Appendix \ref{appendixb} we show through Theorem \ref{thm:oconnellsetup} and Corollary \ref{cor:oconnellsetup} how one can obtain, simply from Theorem \ref{thm:matrixandcoordrepoffktoda}, O'Connell's differential equations in \cite{bib:o}.
\medskip
These points of comparison will be expanded on in Section \ref{sec:connectionsliterature}.
\subsection{Outline}
In Section \ref{sec:intro} and Section \ref{sectionDTodaBBS} we provide the historical and technical background, respectively, needed for this paper. For an expert in the classical Toda integrable systems, much of this background could be skipped or skimmed.
The development of our first results takes place in Section \ref{section:dfktodarecdtoda}, where dFToda is explicitly described as a coupled system of dToda lattices. This section also provides precise conditions for existence and well-posedness of the dFToda flows. Section \ref{sec:ud} directly reads off, from those previous results, the detailed structure of udFToda in terms of tropicalized Lusztig parameters and relates this, by example, to the RSK algorithm. In Section \ref{geometry} we change gears somewhat and present the prior results in a fully Lie theoretic framework. This enables us, in Section \ref{sec:LusDynam}, to intrinsically describe the dynamics on Lusztig parameters in terms of tau functions associated to the fundamental representations of semisimple Lie algebras. Finally, in Section \ref{sec:return} we show how the discrete system analysis we have carried out informs a deeper analysis of the continuous dynamics of FKToda. Section \ref{sec:conclusions} presents some concluding remarks describing connections to the current literature and directions for future exploration based on what has been done here. Appendix \ref{appA}
goes into some of the more technical aspects of the relation between tau functions and factorization. Appendix \ref{appendixb} details the derivation of O'Connell's ODEs from our results in Section \ref{sec:return}.
\section{Background}\label{sectionDTodaBBS}
In this section we present the essential technical background needed to precisely state and prove our results. This includes the factorization method for solving both the continuous and discrete Toda lattices, the coordinate representations for discrete and ultra-discrete Toda, and Lusztig's method for unique bi-diagonal factorization of unipotent matrices.
\subsection{Solution by Factorization} \label{features}
As observed in Section \ref{sec:FKT}, the classical tri-diagonal system is an invariant subsystem of FKToda. The dynamics of FKToda is specified by the Lax equation (\ref{Lax}) {\it extended} to the full phase space of lower Hessenberg matrices $\mathcal{H} $; i.e., to all matrices of the form
\begin{eqnarray*}
X \in \left(\begin{array}{ccccc}
* & 1 & & &\\
*& * & 1 & &\\
\vdots & \ddots & \ddots & \ddots &\\
\vdots & & \ddots & \ddots & 1\\
* & \dots & \dots & * & *
\end{array} \right).
\end{eqnarray*}
Having the form of a Lax equation points the way to constructing explicit solutions of FKToda; effectively, this Lax equation is the infinitesimal version of the statement that solutions advance by conjugating initial data with respect to a group element coming from the lower unipotent projection of $e^{sX}$. This was made precise independently by Adler, Kostant and Symes, in the late 1970's (\cite{bib:adler}, \cite{bib:kostant}, \cite{bib:symes}). In the version applied here, this is based on the so-called $LU$ factorization (equivalent to Gaussian elimination): given an invertible matrix $g \in GL(n, \mathbb{R})$ one seeks a factorization of the form
$g = L R$ where $L$ is lower unipotent and $R$ is invertible, upper triangular. When such a factorization exists (and generically it does), it is unique and $L$ will be denoted by $\Pi_-(g)$ and $R$ by $\Pi_+(g)$.
In this extended setting the following realizes the construction of explicit solutions.
\begin{thm} \cite{bib:adler, bib:kostant, bib:symes} \label{factorisationthmbkg}
{\textbf{(The Factorization Theorem)}} To solve the FKToda system, \vspace{0.15cm}
\begin{equation} \label{Lax2}
\dfrac{d}{ds}X=[X,\pi_-(X)],~~~~X(0)=X_0 \in \mathcal{H},
\end{equation}
where now
\begin{eqnarray*}
\pi_- (X) \in \left(\begin{array}{ccccc}
0 & & & &\\
*& 0 & & &\\
\vdots & \ddots & \ddots & &\\
\vdots & & \ddots & 0 &\\
* & \dots & \dots & * & 0
\end{array} \right),
\end{eqnarray*}
factor $e^{s X_0 }=\Pi_-( e^{s X_0})\Pi_+(e^{s X_0})$, if possible (locally it is). Then, the solution is given by\vspace{0.15cm}
\begin{equation}\label{fulltodasolnfactorsconj}
X(s)=\Pi_-^{-1}(e^{s X_0})X_0\Pi_-(e^{s X_0}).
\end{equation}
\end{thm}
It is a straightforward application of the product rule using (\ref{Lax2}) to see that
\begin{equation} \label{Isospectral}
\frac{d}{ds} \tr X^k = \tr \frac{d}{ds} X^k = \tr \left[ X^k , \pi_- (X) \right] = 0.
\end{equation}
This implies the so-called {\it isospectrality} of the Toda lattice: the eigenvalues of $X$ remain invariant under the Toda flow.
Moreover, with respect to an underlying symplectic structure generalizing the standard one for (\ref{hamiltonian}) \cite{bib:efs}, these constants of motion are in involution in the sense of Arnold-Liouville \cite{bib:arnold}.
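As a numerical sanity check (the helper functions are ours; an eigendecomposition stands in for a matrix exponential routine), one can differentiate the conjugation formula (\ref{fulltodasolnfactorsconj}) numerically and compare it with the right-hand side of the Lax equation (\ref{Lax2}):

```python
import numpy as np

def lower_unipotent_factor(g):
    """Pi_-(g): the lower unipotent factor in g = L R
    (Doolittle elimination, pivots assumed nonzero)."""
    n = g.shape[0]
    L, R = np.eye(n), g.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = R[i, k] / R[k, k]
            R[i, k:] -= L[i, k] * R[k, k:]
    return L

def expm_via_eig(A, s):
    # e^{sA} through an eigendecomposition (A assumed diagonalizable).
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(s * w)) @ np.linalg.inv(V)).real

# Lower Hessenberg initial data.
X0 = np.array([[1.0, 1.0, 0.0],
               [0.5, 2.0, 1.0],
               [0.3, 0.5, 3.0]])

def X_of(s):
    L = lower_unipotent_factor(expm_via_eig(X0, s))
    return np.linalg.inv(L) @ X0 @ L

s, h = 0.3, 1e-5
dX = (X_of(s + h) - X_of(s - h)) / (2 * h)  # numerical d/ds X(s)
X = X_of(s)
P = np.tril(X, k=-1)                        # pi_-(X)
# dX matches the Lax right-hand side [X, pi_-(X)], and the
# flow is isospectral.
```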
\subsection{Discretization}
As with the continuous case, Symes's form of dToda described in Section \ref{sec:dtoda} has a natural extension to a discretization of FKToda, again based on the $LU$ factorization algorithm.
Symes's dynamics is inductively defined as a two-step discrete evolution on Hessenberg matrices. If at (discrete) time $t$ one has a matrix $X(t)$, one obtains $X(t+1)$ as follows:
\begin{enumerate}
\item Perform Gaussian elimination to factor $X(t)=L(t)R(t)$, with $L(t)$ lower unipotent and $R(t)$ upper triangular.
\item Permute the factors to define $X(t+1) = R(t)L(t)$.
\end{enumerate}
By construction, one has
\begin{equation} \label{symeseqn}
X(t+1)=R(t)L(t) = (L(t)^{-1}X(t))L(t) = L(t)^{-1}X(t)L(t).
\end{equation}
\n Thus, this discrete evolution is given by conjugating a matrix by its lower unipotent factor. Since the spectrum of a matrix is invariant under conjugation, it follows that the eigenvalues are constants of motion for this discrete evolution; i.e., this discrete flow is isospectral. This is completely analogous to what was seen in Theorem \ref{factorisationthmbkg}.
\n This flow is what we shall henceforth call discrete time full Kostant-Toda (dFToda).
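A few lines of Python (our own illustrative sketch, not part of the cited sources) make the isospectrality of this step easy to check numerically on a small Hessenberg matrix:

```python
import numpy as np

def lr_step(X):
    """One step of the discrete flow: factor X = L R (unpivoted elimination,
    L lower unipotent, R upper triangular) and return the swapped product R L."""
    n = X.shape[0]
    L = np.eye(n)
    R = np.array(X, dtype=float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = R[i, k] / R[k, k]
            R[i, :] -= L[i, k] * R[k, :]
    return R @ L

# a 3x3 Hessenberg matrix (ones on the superdiagonal)
X = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 4.0]])
spec0 = np.sort(np.linalg.eigvals(X).real)
for _ in range(20):
    X = lr_step(X)
spec1 = np.sort(np.linalg.eigvals(X).real)   # unchanged, up to round-off
```

This is, of course, just the classical LR iteration of numerical linear algebra, run on the Hessenberg phase space.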
\begin{prop} \cite{bib:symes, bib:dlt, bib:watkins} \label{prop:dft}
To solve the discrete-time Full Toda lattice with initial condition $X(0)=X_0$, factor $e^{t\log X_0}=X_0^t=\Pi_-(X_0^t)\Pi_+(X_0^t)$, if possible. Then, the solution is given by\vspace{0.4cm}
\begin{equation}\label{todasolnfactorsconj}
X(t)=\Pi_-^{-1}(X_0^t)X_0\Pi_-(X_0^t),
\end{equation}
for all $t\in \mathbb{N} \cup\{0\}$. (See Section \ref{ExtRed} for discussion of extending this flow to all $t \in \mathbb{Z}$.)
\end{prop}
It is of course not always the case that a given matrix admits an $LU$ factorization, or that the factors remain non-singular along the flow. It is possible to continue the dynamics (both continuous and discrete) through these singularities \cite{bib:efs}; however, we will not need to deal with that in this paper since it will be seen in

Corollary \ref{cor:wp} that under appropriate {\it positivity} conditions on the initial data $X_0$, the discrete flows exist for all forward time without any singularities arising.
\subsection{Coordinate Representations}
Up to this point it has been both sufficient and useful to represent the Toda equations we consider in the compact form of a Lax equation defined on a matrix phase space. But going forward we will need to work with explicit coordinate representations of these differential and difference equations. The classical Toda lattice discussed in section \ref{history} first appeared in coordinate form as Hamilton's equations for the Hamiltonian (\ref{hamiltonian}) which are
given by
\begin{align}
\dot{q}_j&=p_j ~~~~~~~~~~~~~~~~~~~~\,~~~~~~ j=1,\ldots,n\label{eqoneofhamiltod}\\\notag\\
\dot{p}_j&=\left\{\begin{array}{cl}
-e^{q_1-q_2} & \text{if }j=1\\
e^{q_{j-1}-q_j}-e^{q_{j}-q_{j+1}} & \text{if }1<j<n\\
e^{q_{n-1}-q_n} & \text{if }j=n
\end{array}\right.\label{eqtwoofhamiltod}
\end{align}
\n In this representation, boundary conditions of $q_0=-\infty$ and $q_{n+1}=\infty$ have been imposed, which result in $e^{q_0-q_1}=e^{q_n-q_{n+1}}=0$.
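As a concrete numerical check of these equations (ours; a straightforward non-symplectic RK4 integrator, adequate for a short-time illustration), one can verify conservation of the Hamiltonian (\ref{hamiltonian}) for $n=3$:

```python
import numpy as np

def toda_rhs(y, n):
    """Hamilton's equations for the open Toda chain; y = (q_1..q_n, p_1..p_n)."""
    q, p = y[:n], y[n:]
    f = np.exp(q[:-1] - q[1:])      # f[j] = exp(q[j] - q[j+1]), 0-indexed
    dp = np.empty(n)
    dp[0] = -f[0]                   # boundary: e^{q_0 - q_1} = 0
    dp[1:-1] = f[:-1] - f[1:]
    dp[-1] = f[-1]                  # boundary: e^{q_n - q_{n+1}} = 0
    return np.concatenate([p, dp])

def energy(y, n):
    q, p = y[:n], y[n:]
    return 0.5 * np.sum(p ** 2) + np.sum(np.exp(q[:-1] - q[1:]))

def rk4_step(y, h, n):
    k1 = toda_rhs(y, n)
    k2 = toda_rhs(y + 0.5 * h * k1, n)
    k3 = toda_rhs(y + 0.5 * h * k2, n)
    k4 = toda_rhs(y + h * k3, n)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

n = 3
y = np.array([-1.0, 0.0, 1.0, 0.5, 0.0, -0.5])   # (q, p)
E0 = energy(y, n)
for _ in range(1000):
    y = rk4_step(y, 0.01, n)
# energy(y, n) remains equal to E0 up to integrator error
```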
In the remainder of this subsection we describe the corresponding coordinate forms of the discrete and ultra-discrete Toda systems.
\subsubsection{dToda}
In the tridiagonal case, it is traditional \cite{bib:h} to use $I_1,\ldots,I_n$ to denote the diagonal entries of the upper bidiagonal matrix $R(t)$ and $V_1,\ldots,V_{n-1}$ to denote the subdiagonal entries of the lower bidiagonal matrix $L(t)$:
\begin{equation}\label{ltrtdtoda}
L(t)=\arraycolsep=3.1pt\def\arraystretch{1.5}\left[\begin{array}{cccc}
1\\
V_1^t&1\\&\ddots&\ddots\\
&&V_{n-1}^t&1
\end{array}\right],~~~~\text{and}~~~~R(t)=\arraycolsep=4.4pt\def\arraystretch{1.3}\left[\begin{array}{cccc}
I_1^t & 1\\
&I_2^t&\ddots\\
&&\ddots&1\\
&&&I_n^t
\end{array}\right].
\end{equation}
\n The dynamic evolution $L(t+1)R(t+1)=R(t)L(t)$ can then be written out explicitly, and shown \cite{bib:tokihiro} to be equivalent to the following system:
\begin{equation} \label{explicitDT}
\left\{\renewcommand{\arraystretch}{1.6}\setlength{\tabcolsep}{16pt}
\begin{array}{lcl}
V_0^t=V_n^t=0\\%&&\forall t\\
I_i^{t+1}=V_i^t+\dfrac{I_i^t\cdots I_{1}^t}{I_{i-1}^{t+1}\cdots I_{1}^{t+1}}&&i=1,\ldots,n\\
V_i^{t+1}I_i^{t+1}=I_{i+1}^tV_i^t &~~~~~~~& i=1,\ldots,n-1
\end{array}
\right.
\end{equation}
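The subtraction-free recurrence above is easy to implement; the following Python sketch (our own; the function names are invented) advances $(I,V)$ one step and checks the result against the matrix identity $L(t+1)R(t+1)=R(t)L(t)$:

```python
import numpy as np

def dtoda_step(I, V):
    """One dToda step via the subtraction-free recurrence:
    I'_i = V_i + (I_i ... I_1)/(I'_{i-1} ... I'_1),  V'_i = I_{i+1} V_i / I'_i,
    with the convention V_n = 0."""
    n = len(I)
    Vfull = list(V) + [0.0]
    Inew = []
    num = den = 1.0
    for i in range(n):
        num *= I[i]                          # running product I_i ... I_1
        Inew.append(Vfull[i] + num / den)
        den *= Inew[i]                       # running product I'_i ... I'_1
    Vnew = [I[i + 1] * V[i] / Inew[i] for i in range(n - 1)]
    return Inew, Vnew

def bidiagonal_pair(I, V):
    L = np.eye(len(I)) + np.diag(V, -1)
    R = np.diag(I) + np.diag(np.ones(len(I) - 1), 1)
    return L, R

I, V = [2.0, 1.5, 3.0], [0.5, 1.0]
Inew, Vnew = dtoda_step(I, V)
L0, R0 = bidiagonal_pair(I, V)
L1, R1 = bidiagonal_pair(Inew, Vnew)
assert np.allclose(L1 @ R1, R0 @ L0)         # L(t+1) R(t+1) = R(t) L(t)
```

Note that positivity of the inputs is manifestly preserved, and the determinant $I_1\cdots I_n$ is invariant.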
\subsubsection{Ultra-discretization and Box-Ball Systems}\label{sec:ultradis}
The space-time discretization of the Toda lattice we begin with is derived from dToda by a semiclassical type of limit, known as {\it Maslov dequantization} or {\it tropicalization} \cite{bib:lmrs,bib:era}. The resulting ultradiscrete system is a cellular automaton, first introduced in 1990 by Takahashi and Satsuma \cite{bib:ts} who referred to it as the soliton box-ball system (BBS) for reasons that will become evident below.\\
\n To perform the ultradiscretisation, one pushes forward the semiring structure of $(\R_{\geq 0},+,\times)$ to $\R\cup\{\infty\}$ by the family of bijections $D_\hbar$, for $\hbar>0$, given by
\begin{equation}
D_\hbar(x)=\left\{\begin{array}{ccl}
-\hbar\ln x &~~~& \text{if }x\neq 0\\
\infty && \text{if }x= 0
\end{array}\right.,
\end{equation}
and taking the limit as $\hbar\to 0^+$. The essential result of this is that it replaces the operations $+$ and $\times$ on $\R_{\geq 0}$ by operations $\min$ and $+$, respectively, on $\R\cup\{\infty\}$.\\
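The mechanism can be seen directly in a short computation (an illustrative sketch of ours): push $a,b$ back through $D_\hbar^{-1}$, apply $+$ or $\times$, and map forward again.

```python
import math

def D(x, hbar):
    """The dequantization map D_hbar on the positive reals."""
    return -hbar * math.log(x)

def D_inv(a, hbar):
    return math.exp(-a / hbar)

a, b = 2.0, 5.0
for hbar in (1.0, 0.1, 0.01):
    image_of_sum = D(D_inv(a, hbar) + D_inv(b, hbar), hbar)
    image_of_product = D(D_inv(a, hbar) * D_inv(b, hbar), hbar)
    # image_of_product equals a + b exactly;
    # image_of_sum tends to min(a, b) as hbar -> 0+
```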
\n For dToda, given in the form of Equations \ref{explicitDT}, this process amounts to making the change of variables
\begin{equation}
I_i^t=e^{-\frac{1}{\hbar}Q_i^t(\hbar)},~~~~
V_i^t=e^{-\frac{1}{\hbar}W_i^t(\hbar)},\label{tropvarsforbbstoda}
\end{equation}
and taking the $\hbar$-limit. This semiclassical limit produces the equations of Theorem \ref{thmbbscoords2}, which describe the (coordinate) evolution of the box-ball system.\\
\n Recall that the (basic) box-ball system consists of a one-dimensional infinite array of boxes with a finite number of the boxes filled with balls, and no more than one ball in each box (see, for example, Figure \ref{firstbbsexfordef}).
One refers to a full consecutive sequence of balls (having empty boxes on each side) as a {\it block}. One may think of these blocks as being coherent solitary masses.
\begin{figure}[H]
\centering
\tikz[scale=0.6]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {1,2,3,7,10,11,13}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {$\cdots$};
}
}
\caption{A Box-Ball State}\label{firstbbsexfordef}
\end{figure}
Suppose at time $t$, one has $N$ blocks. Let $Q_1^t$, $Q_2^t$, $\ldots$, $Q_N^t$ denote the lengths of these blocks, taken from left to right. Let $W_1^t$, $W_2^t$, $\ldots$, $W_{N-1}^t$ denote the lengths of the sets of empty boxes between the $N$ sets of filled boxes, again taken from left to right. Lastly, let $W_0^t$ and $W_N^t$ be formally defined to be $\infty$, reflecting the fact that the empty boxes continue infinitely in both directions.
\begin{thm} \cite{bib:tokihiro}\label{thmbbscoords2}\index{Box-Ball Coordinate Dynamics}
The coordinates $(W_0^t,Q_1^t,W_1^t,\ldots,Q_N^t,W_N^t)$ evolve under the box ball dynamics according to
\begin{align}
W_0^{t+1}&=W_N^{t+1}=\infty\\
W_i^{t+1}&=Q_{i+1}^t+W_i^t-Q_i^{t+1},~~~~~~~~~~~~~~~~~~~i=1,\ldots,N-1\label{Witplusoneeqnbbs}\\
Q_i^{t+1}&=\min\left(W_i^{t},\sum_{j=1}^i Q_j^t-\sum_{j=1}^{i-1}
Q_j^{t+1}\right),~~~~~i=1,\ldots,N,\label{Qitplusoneeqnbbs}
\end{align}
\end{thm}
\begin{ex}
Starting with the initial state in Figure \ref{firstbbsexfordef} the next iteration is shown in Figure \ref{thmbbscoords}:
\begin{figure}[H]
\centering
\tikz[scale=0.6]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {1,2,3,7,10,11,13}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {$\cdots$};
}
\draw [decorate,decoration={brace,amplitude=4pt}] (0.9,2.85) -- (-1.9,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$W_0^t$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (3.9,2.85) -- (1.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$Q_1^t$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (6.9,2.85) -- (4.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$W_1^t$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (7.9,2.85) -- (7.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$Q_2^t$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (9.9,2.85) -- (8.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$W_2^t$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (11.9,2.85) -- (10.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$Q_3^t$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (12.9,2.85) -- (12.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$W_3^t$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (13.9,2.85) -- (13.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$Q_4^t$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (17.9,2.85) -- (14.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$W_4^t$}};
}
\tikz[scale=0.6]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {4,5,6,8,12,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {$\cdots$};
}
\draw [decorate,decoration={brace,amplitude=4pt}] (3.9,2.85) -- (-1.9,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$W_0^{t+1}$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (6.9,2.85) -- (4.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$Q_1^{t+1}$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (7.9,2.85) -- (7.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$W_1^{t+1}$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (8.9,2.85) -- (8.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$~Q_2^{t+1}$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (11.9,2.85) -- (9.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$W_2^{t+1}$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (12.9,2.85) -- (12.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$Q_3^{t+1}$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (13.9,2.85) -- (13.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$~W_3^{t+1}$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (15.9,2.85) -- (14.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$Q_4^{t+1}$}};
\draw [decorate,decoration={brace,amplitude=4pt}] (17.9,2.85) -- (16.1,2.85) node [black,midway,yshift=-0.4cm] {\scriptsize{$W_4^{t+1}$}};
}
\caption{The box-ball coordinates on a box-ball system and its time evolution. \label{thmbbscoords}}
\end{figure}
\end{ex}
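The evolution of Theorem \ref{thmbbscoords2} is straightforward to code; the sketch below (ours) reproduces the transition shown in Figures \ref{firstbbsexfordef} and \ref{thmbbscoords}, where $(Q,W)=((3,1,2,1),(3,2,1))$ evolves to $((3,1,1,2),(1,3,1))$.

```python
def bbs_step(Q, W):
    """One box-ball time step in block/gap coordinates: Q lists the block
    lengths, W the interior gap lengths (W_0 = W_N = infinity are implicit)."""
    N = len(Q)
    Qn, Wn = [], []
    carried = 0                              # sum_{j<=i} Q_j - sum_{j<i} Q'_j
    for i in range(N):
        carried += Q[i]
        gap = W[i] if i < N - 1 else float("inf")
        Qn.append(min(gap, carried))
        carried -= Qn[i]
    for i in range(N - 1):
        Wn.append(Q[i + 1] + W[i] - Qn[i])
    return Qn, Wn

assert bbs_step([3, 1, 2, 1], [3, 2, 1]) == ([3, 1, 1, 2], [1, 3, 1])
```

The total number of balls, $\sum_j Q_j$, is conserved, as it must be.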
\subsection{Lusztig Factorization}\label{sec:lusztigfactors}
A matrix $L \in N_-$ is defined to be {\it totally positive} with respect to $N_-$ if every minor that does not identically vanish on $N_-$ has a positive value. We refer the reader to \cite{bib:fz} for more background on this.
We introduce here an elegant parametrization of the totally positive matrices within $N_-$ (denoted $N^{>0}_-$), in terms of negative simple roots, that is due to Lusztig \cite{bib:lu}. In explicit terms this is given by a factorization of the form
\begin{eqnarray} \label{LusFac}
L &=& (1 + \alpha_1 f_{h_1}) \cdots (1 + \alpha_M f_{h_M})
\end{eqnarray}
where $L \in N^{>0}_-$, $M = {n \choose 2}$, $h_j \in \{1, \dots, n \}$, $\alpha_j \in \mathbb{R}_{>0}$, 1 denotes the identity matrix and $f_i$ is the elementary lower matrix with 1 in the
$(i+1, i)$ entry and zero elsewhere. Set
$$
\ell_i(\alpha) \doteq 1 + \alpha f_i = \left[\begin{array}{ccccccc}
1 \\
& 1 \\
& & \ddots \\
&&& 1 \\
&&&\alpha&1 \\
&&&&& \ddots \\
&&&&&& 1
\end{array}\right].
$$
Let $w_0$ denote the longest permutation in the symmetric group $\frak{S}_n$, and let $s_h$ denote the adjacent transposition in $\frak{S}_n$ that interchanges $h$ and $h+1$. The $s_h$ generate $\frak{S}_n$, so $w_0$ may be written as a word in these generators.
The minimal length of such a word for $w_0$ is $M$, and an expression of the form
$$
w_0 = s_{h_1} \cdots s_{h_M}
$$
will be referred to as a reduced word decomposition of $w_0$. Every other permutation has reduced words of length strictly less than $M$, and $w_0$ is the unique permutation of (maximal) length $M$; hence the name {\it longest} permutation. For further background on these matters we refer the reader to \cite{bib:bfz}.
The permutation matrix corresponding to $w_0$ is
$$\widehat{w}_0 =\renewcommand{\arraystretch}{0.9} \left[\begin{array}{ccccc}
&&&&1\\
&&&1\\
&&1\\
&\iddots\\
1
\end{array}\right],$$
Now let ${\bf h} = (h_1, \dots h_M)$ denote a reduced word decomposition, $w_0 = s_{h_1} \cdots s_{h_M}$. For this article we will fix and always use the following particular reduced word decomposition
\begin{eqnarray}\label{longestwordrepn}
{\bf h^0} &=& (1,2,\ldots,n-1,\,1,2,\ldots,n-2,\,\ldots,\,1,2,3,\,1,2,\,1).
\end{eqnarray}
One then has
\begin{thm}\label{theorem:bfz} \cite{bib:bfz}
Through the correspondence (\ref{LusFac}), ${\bf h^{0}}$ gives a bijection $\mathbb{R}^M_{>0} \to N^{>0}_-$ between the set of $M$-tuples of positive real numbers, $\left(u_1, \dots, u_M\right)$, and the variety of totally positive lower-unipotent matrices.
\end{thm}
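A quick numerical illustration of Theorem \ref{theorem:bfz} (an informal sketch of ours, with invented function names): build $L$ from positive parameters along ${\bf h^0}$ and observe that all of its minors are nonnegative, the ones not forced to vanish on $N_-$ coming out strictly positive.

```python
import numpy as np
from itertools import combinations

def ell(n, i, alpha):
    """The elementary factor l_i(alpha) = 1 + alpha f_i (i is 1-indexed)."""
    E = np.eye(n)
    E[i, i - 1] = alpha
    return E

def lusztig_product(n, params):
    """L = l_{h_1}(u_1) ... l_{h_M}(u_M) along the fixed reduced word
    h0 = (1,...,n-1, 1,...,n-2, ..., 1,2, 1)."""
    word = [i for k in range(n - 1, 0, -1) for i in range(1, k + 1)]
    assert len(word) == len(params) == n * (n - 1) // 2
    L = np.eye(n)
    for h, u in zip(word, params):
        L = L @ ell(n, h, u)
    return L

n = 3
L = lusztig_product(n, [0.7, 1.2, 0.4])      # word (1, 2, 1)
minors = [np.linalg.det(L[np.ix_(rows, cols)])
          for k in (1, 2, 3)
          for rows in combinations(range(n), k)
          for cols in combinations(range(n), k)]
assert min(minors) > -1e-12                  # total nonnegativity of L
```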
\subsection{Loewner-Whitney Theorem}
\label{sec:lw}
Extending what was described in the previous subsection, one says that a square matrix is {\it totally non-negative} (TNN) if all of its minors are non-negative.
We will make use of the following important characterization of TNN matrices.
\begin{thm} \cite{bib:fz}
Any invertible TNN matrix is a product of elementary tridiagonal matrices with non-negative matrix entries.
\end{thm}
By {\it elementary tridiagonal} here one means a matrix that differs from the identity $\mathbb{I}$ in a single entry located either on the main diagonal or immediately above or below it.
This has the following useful refinement:
\begin{cor} \label{cor:lw}\cite{bib:fz} (Theorem 14)
Any TNN matrix can be written as a product of the form $LDU$ where $L, D, U$ are respectively lower unipotent, diagonal, upper unipotent and TNN in their respective senses.
\end{cor}
In particular, for this paper we will often be interested in matrices of the form stated in the above corollary where $DU$ is upper bidiagonal, with all superdiagonal entries equal to 1 and all diagonal entries positive. We will refer to TNN matrices $LDU$, with $DU$ having this restricted form, as {\it TNN Hessenberg matrices}.
\section{dFToda as Recursive dToda}\label{section:dfktodarecdtoda}
We will now apply Lusztig
factorization to analyze the dynamics of dFToda. This presumes that the total positivity conditions of Theorem \ref{theorem:bfz} are preserved under this dynamics. That is true but we defer the proof of this to Section \ref{sec:wp}.
One may regard Theorem \ref{theorem:bfz} as a unique factorization theorem for elements of $N^{>0}_-$ which may be presented in the form of a bidiagonal factorization
\begin{eqnarray*}
L &=& T_1(u)T_2(u)\cdots T_{n-1}(u), \,\,\, \mbox{where, for } u = (u_i^j)_{1\leq i\leq j\leq n-1},\\
T_i(u) &=& \ell_1(u_1^i)\ell_2(u_2^{i+1})\cdots \ell_{n-i}(u_{n-i}^{n-1}).
\end{eqnarray*}
$T_i(u)$ has the block bidiagonal form
$$\left[\begin{array}{ccccc|c}
1&&&&\\u_1^i & 1&&&\\&u_2^{i+1}&\ddots&&\\ &&\ddots &1&\\&&&u_{n-i}^{n-1}&1&\\\hline
&&&&&\mathbb{I}_{i-1}
\end{array}\right]$$
where $\mathbb{I}_{i-1}$ is the $(i-1)\x (i-1)$ identity matrix.\\
\n Starting with dFToda (as in (\ref{symeseqn})), one takes time-dependent Lusztig parameters for $L(t)$ via:
\begin{eqnarray} \label{Tfact}
L(t)=T_1(u(t))T_2(u(t))\cdots T_{n-1}(u(t))
\end{eqnarray}
according to the factorisation in Theorem \ref{theorem:bfz}. Then, after a time-step of dFToda, one has
$$L(t+1)R(t+1)=R(t)L(t)$$
with $L(t+1)$ endowed with its own Lusztig factorisation:
$$L(t+1)=T_1(u(t+1))T_2(u(t+1))\cdots T_{n-1}(u(t+1)).$$
\begin{lem}\label{lemma:blockpreservationsymes}
For each $i\in \{1,\ldots,n-1\}$, $\alpha\in \R_{>0}^{i}$ and $d\in \R_{>0}^n$, if $R=\text{diag}(d)+\epsilon$, where $\epsilon$ denotes the upper shift matrix with ones on the superdiagonal, and $T=\ell_1(\alpha_1)\ell_2(\alpha_2)\cdots \ell_i(\alpha_i)$ are such that $RT$ has an $LU$-decomposition, then the lower part of this $LU$-decomposition has the same block structure as $T$ and the upper part has the same structure as $R$.
\end{lem}
\begin{proof}
One has
$$T=\left[\begin{array}{ccccc|c}
1&&&&\\\alpha_1 & 1&&&\\&\alpha_2&\ddots&&\\ &&\ddots &1&\\&&&\alpha_i&1&\\\hline
&&&&&\mathbb{I}_{n-i-1}
\end{array}\right],~~~\text{and}~~~R=\left[\begin{array}{cccc|cccc}
d_1 & 1 & &&&\\
&d_2 & \ddots & & &&\\
&&\ddots & 1 & & &\\
&&&d_{i+1} & 1 & &\\
\hline
&&&&d_{i+2} & 1\\
&&&&&d_{i+3} & \ddots\\
&&&&&&\ddots & 1\\
&&&&&& & d_n
\end{array}\right].$$
The bottom-left block of $RT$ is the $(n-i-1)\times (i+1)$ zero matrix, and the bottom-right block is the same as the bottom right block of $R$. Since the product $RT$ is evidently a tridiagonal matrix with ones on its superdiagonal, its LU decomposition, if it exists, is a product of a lower bidiagonal unipotent matrix and an upper bidiagonal matrix with ones on the superdiagonal. The zero in the $(i+2,i+1)$ entry of $RT$ and the fact that the diagonal entries of the upper factor of $RT$ must be nonzero (because the determinant does not vanish) forces the lower factor of $RT$ to decouple into a block diagonal matrix of two lower bidiagonal unipotent matrices. This then forces the lower-right block of the lower part of $RT$ to be the identity matrix.
\end{proof}
\begin{defn} \label{def:dft}
Define a sequence of upper bidiagonal matrices $(R^{(i)}(t))_{i=0}^{n-1}$ associated to the pair $R(t)$ and $L(t)$ via
\begin{enumerate}[(i)]
\item $R^{(0)}(t)=R(t)$
\item $T_i(u(t+1))R^{(i)}(t)=R^{(i-1)}(t)T_i(u(t))$ for $i=1,\ldots,n-1$.
\end{enumerate}
For convenience, let $H^{(i)}(t)$ denote the tridiagonal Hessenberg matrix of interest in (ii), \textit{i.e.},
\begin{equation}\label{defnhessenbergtruncs}
H^{(i)}(t):=R^{(i-1)}(t)T_i(u(t)).\end{equation}
\end{defn}
\begin{thm}\label{remark:substepsdtoda}
With $L(t)=T_1(u(t))T_2(u(t))\cdots T_{n-1}(u(t))$, $R^{(0)}(t)=R(t)$ and $R^{(n-1)}(t)=R(t+1)$, the recursion of Definition \ref{def:dft} realizes a single time-step of dFToda:
$$L(t+1)R(t+1)=R(t)L(t),$$
with each application of Definition \ref{def:dft} (ii) reducing to a dToda time-step of decreasing dimension.
\end{thm}
\begin{proof}
By Lemma \ref{lemma:blockpreservationsymes}, the lower factor appearing in part (ii) has the same block structure as $T_i$, so we may a priori write it as $T_i(v)$ for some tuple $v$ in place of $u(t+1)$, since this block structure is preserved by the (tridiagonal) dToda map. Chaining the substeps of the definition then gives $$R(t)L(t)=T_1(v)T_2(v)\cdots T_{n-1}(v)R^{(n-1)}(t).$$
Thus, $T_1(v)T_2(v)\cdots T_{n-1}(v)$ and $R^{(n-1)}(t)$ are the lower and upper parts of the $LU$ decomposition of $R(t)L(t)$. Hence, $L(t+1)=T_1(v)T_2(v)\cdots T_{n-1}(v)$ and $R(t+1)=R^{(n-1)}(t)$. However, by the uniqueness of Lusztig factorisation (Theorem \ref{theorem:bfz}), one must then have $v=u(t+1)$, which is why (ii) is presented in this form. This means that the recursive dToda map applications on pairs $R^{(i-1)}(t)$ and $T_i(u(t))$ combine precisely to give the dFToda evolution on the Lusztig parameters.\\[3pt]
Now we show that Definition \ref{def:dft} (ii) reduces to a dToda step on the top-left $(n-i+1)\x (n-i+1)$ block. One sees this by defining $(I_j^{t,i})_{j=1}^n$ to be the diagonal entries of $R^{(i)}(t)$ for $i=0,\ldots,n-1$. Thus,
$$I_j^{t,0}=I_j^t,~~~~I_j^{t,n-1}=I_j^{t+1}$$
in the usual notation of dToda, and one then has
$$T_i(u(t+1))=\left[\begin{array}{ccccc|c}
1&&&&\\u_1^i(t+1) & 1&&&\\&u_2^{i+1}(t+1)&\ddots&&\\ &&\ddots &1&\\&&&u_{n-i}^{n-1}(t+1)&1&\\\hline
&&&&&\mathbb{I}_{i-1}
\end{array}\right]$$
and
$$R^{(i)}(t)=
\left[\begin{array}{cccc|cccc}
I_1^{t,i} & 1&&&&&&\\
&I_2^{t,i} & \ddots&&&&&\\
&&\ddots & 1&&\\
&& & I_{n-i+1}^{t,i}&1&\\\hline
&&& & I_{n-i+2}^{t,i-1}&1&\\
&&&& & I_{n-i+3}^{t,i-1}&\ddots&\\
&&&&& & \ddots&1\\
&&&&& & &I_{n}^{t,i-1}\\
\end{array}\right]$$
observing that the lower-right block of $R^{(i)}(t)$ is equal to the lower-right block of $R^{(i-1)}(t)$ by Lemma \ref{lemma:blockpreservationsymes}. Furthermore, because of this block structure, the top-left blocks are determined precisely by one step of the $(n-i+1)\x (n-i+1)$ dToda map:
\begin{equation}\label{eqn:dfktodasubstep}
\left(R^{(i)}(t)\right)_{[n-i+1],[n-i+1]}\left(T_i(u(t+1))\right)_{[n-i+1],[n-i+1]}
=
\left(T_i(u(t))\right)_{[n-i+1],[n-i+1]}
\left(R^{(i-1)}(t)\right)_{[n-i+1],[n-i+1]}
\end{equation}where, for $S_1,S_2\subset [n]$, the notation $(M)_{S_1,S_2}$ picks out the submatrix of $M$ with row indices in $S_1$ and column indices in $S_2$. \textit{i.e.}
$$
\left[\begin{array}{cccc}
I_1^{t,i-1} & 1\\
& I_2^{t,i-1} & \ddots\\
& & \ddots & 1\\
& & & I_{n-i+1}^{t,i-1}\\
\end{array}\right]
\left[\begin{array}{ccccc}
1\\
u_1^i(t) & \ddots\\
&\ddots & 1\\
&&u_{n-i}^{n-1}(t) & 1\\
\end{array}\right]$$
$$=$$
$$
\left[\begin{array}{ccccc}
1\\
u_1^i(t+1) & \ddots\\
&\ddots & 1\\
&&u_{n-i}^{n-1}(t+1) & 1\\
\end{array}\right]
\left[\begin{array}{cccc}
I_1^{t,i} & 1\\
& I_2^{t,i} & \ddots\\
& & \ddots & 1\\
& & & I_{n-i+1}^{t,i}\\
\end{array}\right]
$$
which shows that each of the sub-steps of the algorithm in Definition \ref{def:dft} is indeed a dToda map, each time of lower dimension than the last.
\end{proof}
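The recursion of Definition \ref{def:dft} can be exercised numerically; in the following Python sketch (ours, with invented names) each substep is one unpivoted $LU$ factorization, and the substeps are checked to compose into the full dFToda step.

```python
import numpy as np

def lu_unipotent(g):
    """g = L R with L lower unipotent, R upper triangular (no pivoting)."""
    n = g.shape[0]
    L = np.eye(n)
    R = np.array(g, dtype=float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = R[i, k] / R[k, k]
            R[i, :] -= L[i, k] * R[k, :]
    return L, R

def T_factors(n, params):
    """Split Lusztig parameters along h0 into the blocks T_1, ..., T_{n-1};
    T_i is lower bidiagonal in its leading (n-i+1) x (n-i+1) corner."""
    Ts, pos = [], 0
    for i in range(1, n):
        T = np.eye(n)
        for j in range(n - i):
            T[j + 1, j] = params[pos]
            pos += 1
        Ts.append(T)
    return Ts

n = 3
R = np.diag([2.0, 1.5, 3.0]) + np.diag([1.0, 1.0], 1)
Ts = T_factors(n, [0.7, 1.2, 0.4])
L0 = np.linalg.multi_dot(Ts)                 # L(t)
Rcur, new_Ts = R.copy(), []
for T in Ts:                                 # substep: T_i' R^(i) = R^(i-1) T_i
    Tnew, Rcur = lu_unipotent(Rcur @ T)
    new_Ts.append(Tnew)
L1 = np.linalg.multi_dot(new_Ts)             # L(t+1); Rcur is now R(t+1)
assert np.allclose(L1 @ Rcur, R @ L0)        # L(t+1) R(t+1) = R(t) L(t)
```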
\subsection{Well-posedness of dFToda} \label{sec:wp}
The prior discussion in this section assumes that it is always possible to carry out the required factorizations. We now show that under some reasonable positivity assumptions these factorizations do always exist.
\begin{thm} \label{thm:wp}
Assume that the principal minors of $X(t)$ are all positive. Then $X(t)$ necessarily has an $LU$ factorization, $X(t) = L(t) R(t)$. Assume further that $L(t)$ is totally positive with factorization (\ref{Tfact}). Then the principal minors of $X(t+1)$ are also positive, so that the factorization $X(t+1) = L(t+1) R(t+1) $ is defined. Furthermore, $L(t+1)$ is totally positive with a unique factorization of the form (\ref{Tfact}) whose coefficients are explicitly given by
\begin{eqnarray} \label{tauf}
u_j^{i+j-1}(t+1) &=& u_j^{i+j-1}(t) \frac{\tau_{j+1}^{i-1}(t) \tau_{j-1}^i(t)}{\tau_{j}^{i-1}(t) \tau_{j}^i(t)} \,\,\, \text{with} \\ \label{taus}
\tau_{j}^i(t) & = & \tau_j(H^{(i)}(t))=~~ \tau_j\left( R(t) T_1(u(t)) T_2(u(t)) \cdots T_i(u(t)) \right)\label{taufundefnithhess}
\end{eqnarray}
where $\tau_j(M)$ denotes the $j^{th}$ principal minor of the square matrix $M$. (For the sake of brevity, unless it is unclear from context, the time dependence of the $\tau$ functions may not be explicitly stated.)
\end{thm}
\begin{proof}
The idea of the proof will be to decompose a time step of the dFToda dynamics into a coupled sequence of dToda time steps and then inductively apply (\ref{explicitDT}) at each of the dToda sub-steps. For the base step of the induction consider, from Definition \ref{def:dft}, the first stage
\begin{eqnarray*}
T_1(u(t+1))R^{(1)}(t) &=& R^{(0)}(t)T_1(u(t))\\
&=& R(t)T_1(u(t)).
\end{eqnarray*}
By the assumption that the principal submatrices of $X(t)$ have positive determinants (denoted by $\tau^0_j(t)$ for the $j^{th}$ minor -- notation consistent with (\ref{taus}) -- with $\tau^0_0(t) \equiv 1$ ) it follows directly \cite{bib:strang} that
\begin{eqnarray} \label{R-repn}
R(t) &=& \left[\begin{array}{cccc}
\tau^0_1/\tau^0_0 & 1\\
&\tau^0_2/\tau^0_1 & \ddots\\
&&\ddots & 1\\
&&&\tau^0_n/\tau^0_{n-1}
\end{array}\right]
\end{eqnarray}
is well defined and comparing to (\ref{ltrtdtoda}) we set
\begin{eqnarray}
I^t_j = \tau^0_j/\tau^0_{j-1}
\end{eqnarray}
and
$$
T_1(t) = \arraycolsep=3.1pt\def\arraystretch{1.5}\left[\begin{array}{cccc}
1\\
V_1^t&1\\&\ddots&\ddots\\
&&V_{n-1}^t&1
\end{array}\right],
$$
so that
\begin{eqnarray}
V^t_j = u^j_j(t).
\end{eqnarray}
The assumptions of the theorem imply that
$$
I^t_j > 0,\,\,\, j = 1, \dots, n \qquad V^t_j > 0,\,\,\, j = 1, \dots, n-1 .
$$
One may then conclude that for
\begin{equation}
T_1(u(t+1))=\arraycolsep=3.1pt\def\arraystretch{1.5}\left[\begin{array}{cccc}
1\\
V_1^{t+1}&1\\&\ddots&\ddots\\
&&V_{n-1}^{t+1}&1
\end{array}\right],~~~~\text{and}~~~~R^{(1)}(t)=\arraycolsep=4.4pt\def\arraystretch{1.3}\left[\begin{array}{cccc}
I_1^{t+1} & 1\\
&I_2^{t+1}&\ddots\\
&&\ddots&1\\
&&&I_n^{t+1}
\end{array}\right],
\end{equation}
due to the subtraction free form of (\ref{ltrtdtoda}) one has that
\begin{eqnarray*}
I^{t+1}_1 &=& V^t_1 + I^{t}_1 > 0\\
I^{t+1}_2 &=& V^t_2 + \frac{I^{t}_2 I^{t}_1}{I^{t+1}_1} > 0\\
& \vdots & \\
I^{t+1}_{n-1} &=& V^t_{n-1} + \frac{I^{t}_{n-1} \cdots I^{t}_1}{I^{t+1}_{n-2} \cdots I^{t+1}_1} > 0\\
I^{t+1}_{n} &=& \frac{I^{t}_{n} \cdots I^{t}_1}{I^{t+1}_{n-1} \cdots I^{t+1}_1} > 0\\
V^{t+1}_i &=& \frac{I^{t}_{i+1}}{I^{t+1}_i} V^{t}_i > 0\\
V^{t+1}_n &=& 0.
\end{eqnarray*}
In summary, one has
$$
I^{t+1}_j > 0,\,\,\, j = 1, \dots, n \qquad V^{t+1}_j > 0,\,\,\, j = 1, \dots, n-1 .
$$
\n Hence, the principal submatrices of $T_1(u(t+1))R^{(1)}(t)$ all have positive determinant and $T_1(u(t+1))$ is a product of elementary positive Lusztig factors in standard form.
We are now in a position to state the induction step; namely, that if the principal minors of $R^{(i-1)}(t)$ are all positive, and if $T_i(u(t))$ is a product of elementary positive Lusztig factors in standard form, then the same holds for $R^{(i)}(t)$ and $T_i(u(t+1))$, respectively. The argument proceeds exactly as for the base case just discussed except that, as a consequence of Lemma \ref{lemma:blockpreservationsymes}, at the $i^{th}$ stage (\ref{explicitDT}) is replaced by
\begin{equation*}
\left\{\renewcommand{\arraystretch}{1.6}\setlength{\tabcolsep}{16pt}
\begin{array}{lcl}
V_0^t= V^{t}_{n-i+1}= \cdots = V_n^t=0\\%&&\forall t\\
I_j^{t+1}=V_j^t+\dfrac{I_j^t\cdots I_{1}^t}{I_{j-1}^{t+1}\cdots I_{1}^{t+1}}&&j=1,\ldots,n\\
V_j^{t+1}I_j^{t+1}=I_{j+1}^tV_j^t &~~~~~~~& j=1,\ldots,n-i
\end{array}
\right.
\end{equation*}
Consequently at the $i^{th}$ stage one has
$$
I^{t+1}_j > 0,\,\,\, j = 1, \dots, n \qquad V^{t+1}_j > 0,\,\,\, j = 1, \dots, n-i .
$$
At the end of this inductive process one has determined an $L(t+1)$ with standard Lusztig factorization which must therefore be totally positive by Lusztig's theorem (Theorem \ref{theorem:bfz}) as well as
$R(t+1) = R^{(n-1)}(t)$ with positive diagonal entries. Hence, the principal minors of $X(t+1)$ are positive.
The derivation of the formulae (\ref{tauf} - \ref{taus}) along with the explicit representation
\begin{eqnarray*}
R^{(i)}(t) &=& \left[\begin{array}{cccc}
\tau^{i}_1/\tau^{i}_0 & 1\\
&\tau^{i}_2/\tau^{i}_1 & \ddots\\
&&\ddots & 1\\
&&&\tau^{i}_n/\tau^{i}_{n-1}
\end{array}\right]
\end{eqnarray*}
is deferred to Appendix \ref{appA}.
\end{proof}
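The tau-function formula (\ref{tauf}) can be checked directly for $n=3$ (an informal numerical sketch of ours): compute each $\tau_j^i$ as a principal minor of $R(t)T_1\cdots T_i$ and compare the predicted Lusztig parameters with those produced by the recursive $LU$ substeps.

```python
import numpy as np

def lu_unipotent(g):
    """g = L R with L lower unipotent, R upper triangular (no pivoting)."""
    n = g.shape[0]
    L = np.eye(n)
    R = np.array(g, dtype=float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = R[i, k] / R[k, k]
            R[i, :] -= L[i, k] * R[k, :]
    return L, R

def tau(M, j):                               # j-th principal minor, tau_0 = 1
    return 1.0 if j == 0 else float(np.linalg.det(M[:j, :j]))

n = 3
R = np.diag([2.0, 1.5, 3.0]) + np.diag([1.0, 1.0], 1)
u = {(1, 1): 0.7, (2, 1): 1.2, (1, 2): 0.4}  # (j, i): T_1 = l_1 l_2, T_2 = l_1
T1 = np.eye(n); T1[1, 0] = u[(1, 1)]; T1[2, 1] = u[(2, 1)]
T2 = np.eye(n); T2[1, 0] = u[(1, 2)]
H = [R, R @ T1, R @ T1 @ T2]                 # H[i] = R T_1 ... T_i

# prediction from the tau-function formula
u_tau = {(j, i): u[(j, i)] * tau(H[i - 1], j + 1) * tau(H[i], j - 1)
                 / (tau(H[i - 1], j) * tau(H[i], j))
         for (j, i) in u}

# direct computation via the recursive LU substeps
Rcur, sub = R.copy(), []
for T in (T1, T2):
    Tnew, Rcur = lu_unipotent(Rcur @ T)
    sub.append(np.diag(Tnew, -1))
u_lu = {(1, 1): sub[0][0], (2, 1): sub[0][1], (1, 2): sub[1][0]}

for key in u:
    assert abs(u_tau[key] - u_lu[key]) < 1e-10
```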
\begin{cor} \label{cor:wp}
Let $\mathcal{B}$ denote the subvariety of $\mathcal{H}$ comprised of upper bidiagonal matrices of the form
\begin{eqnarray*}
\left[\begin{array}{cccc}
* & 1\\
& * & \ddots\\
&&\ddots & 1\\
&&& *
\end{array}\right]
\end{eqnarray*}
and let $\mathcal{B}^{>0}$ denote the smaller submanifold in which all diagonal entries are positive. The dFToda map is defined everywhere on the restricted phase space \begin{eqnarray} \label{resphase}
(N_-^{>0} \times \mathcal{B}^{>0}) \subset \mathcal{H}
\end{eqnarray}
and this space is preserved under all forward iterates $ (t \in \mathbb{N} \cup\{0\}) $ of dFToda.
\end{cor}
\n In the remainder of this paper we denote $(N_-^{>0} \times \mathcal{B}^{>0})$ by
$\mathcal{H}^{>0}$ and refer to it as the subvariety of totally positive Hessenberg matrices. Indeed, (\ref{resphase}) is totally positive with respect to the space of Hessenberg matrices in that every minor that does not identically vanish on $\mathcal{H}$ must be positive. Moreover, this space is dense in the space of TNN Hessenberg matrices as defined in Section \ref{sec:lw}. Going forward we will denote this latter space by $\mathcal{H}^{\geq0}$.
\subsection{Extensions and Reductions} \label{ExtRed}
We conclude this section by describing an extension of the dFToda dynamics and generalizations of its phase space.
\begin{prop}
The dFToda dynamics can be defined for all time $t \in \mathbb{Z}$. The extension to negative time, $t \in \{ -1, -2, -3, \dots \}$ is given by UL factorization: if $X(t) = R(t)L(t)$ then define
\begin{eqnarray} \nonumber
X(t - 1) &=& L(t)R(t)\\ \label{reverse}
&=& L(t) X(t) L(t)^{-1}.
\end{eqnarray}
This is well-defined for all negative time.
\end{prop}
\begin{proof}
This UL factorization clearly constitutes a reverse flow consistent with that of the Symes evolution. This flow exists for all $t$ by the same arguments applied in Theorem \ref{thm:wp} and Corollary \ref{cor:wp}. This follows because the equations for the backwards time flow (\ref{reverse}) have an entirely similar subtraction-free form:
\begin{equation} \label{backwardDT}
\left\{\renewcommand{\arraystretch}{1.6}\setlength{\tabcolsep}{16pt}
\begin{array}{lcl}
V_0^t=V_n^t=0\\%&&\forall t\\
I_i^{t-1}=V_{i-1}^t+\dfrac{I_n^tI_{n-1}^t\cdots I_{i}^t}{I_{n}^{t-1}I_{n-1}^{t-1}\cdots I_{i+1}^{t-1}}&&i=1,\ldots,n\\
V_i^{t-1}=\dfrac{I_{i}^tV_i^t}{I_{i+1}^{t-1}} &~~~~~~~& i=1,\ldots,n-1
\end{array}
\right.
\end{equation}
\end{proof}
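One can confirm numerically that the $UL$-based step inverts the forward flow (a small Python sketch of ours, with invented names): a forward dToda step followed by the backward step returns the initial coordinates.

```python
import numpy as np

def pair(I, V):
    """Bidiagonal factors: L lower unipotent with subdiagonal V,
    R upper with diagonal I and superdiagonal ones."""
    L = np.eye(len(I)) + np.diag(V, -1)
    R = np.diag(I) + np.diag(np.ones(len(I) - 1), 1)
    return L, R

def forward(I, V):
    """LU data of X(t+1) = R(t) L(t) (unpivoted elimination; X is tridiagonal,
    so only the subdiagonal entry in each column needs eliminating)."""
    L, R = pair(I, V)
    X = R @ L
    n = len(I)
    Lf = np.eye(n)
    for k in range(n - 1):
        Lf[k + 1, k] = X[k + 1, k] / X[k, k]
        X[k + 1, :] -= Lf[k + 1, k] * X[k, :]
    return np.diag(X).copy(), np.diag(Lf, -1).copy()

def backward(I, V):
    """UL-factor X(t) = L(t) R(t) as R_hat L_hat; X(t-1) = L_hat R_hat is
    then already in LU form, so its coordinates can be read off directly."""
    L, R = pair(I, V)
    X = L @ R                                 # tridiagonal, superdiagonal ones
    n = len(I)
    Ih = np.empty(n)
    Vh = np.empty(n - 1)
    Ih[n - 1] = X[n - 1, n - 1]
    for i in range(n - 2, -1, -1):            # eliminate upward from the corner
        Vh[i] = X[i + 1, i] / Ih[i + 1]
        Ih[i] = X[i, i] - Vh[i]
    return Ih, Vh

I0, V0 = np.array([2.0, 1.5, 3.0]), np.array([0.5, 1.0])
I1, V1 = forward(I0, V0)
I2, V2 = backward(I1, V1)
assert np.allclose(I2, I0) and np.allclose(V2, V0)
```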
\begin{defn}
For $d\in\{1,2,\ldots,n\}$, let $\mathcal{H}^d$ be the subset of $\mathcal{H}$ for which the entries $h_{ij}=0$ whenever $i-j>d$, \textit{i.e.}, the set of Hessenberg matrices whose entries vanish below the $d^{th}$ sub-diagonal. ($\mathcal{H}^1$ is the subspace we have referred to as tridiagonal Hessenberg matrices.) Let $N_-^d$ denote the elements of $N_-$ for which $n_{ij}=0$ whenever $i-j>d$ and $N_-^{d, > 0}$ the set of totally positive elements in $N_-^d$, \textit{i.e.}, those elements whose minors that do not identically vanish on $N_-^d$ have a positive value.
\end{defn}
\begin{prop}
The dFToda map is defined everywhere on the subspace $(N_-^{d,>0} \times \mathcal{B}^{>0}) \subset \mathcal{H}^d$ and this space is preserved under all iterations, for $t \in \mathbb{Z}$, of the map. For $d<n$ we may refer to this as a {\rm band-limited} sub-system of dFToda.
\end{prop}
\begin{proof} If an initial condition lies in $(N_-^{d,>0} \times \mathcal{B}^{>0})$ then (\ref{tauf}) still holds and it follows from this that iterates remain in this band-limited subspace.
(Note that from (\ref{taus}), by induction the tau factors appearing in (\ref{tauf}) are non-vanishing.) With this subspace invariance in place, the other arguments that were applied for $d = n$ continue to hold, but terminate
at the $d^{th}$ stage.
\end{proof}
\section{Ultradiscrete Full Toda: tropicalizing Lusztig Parameters} \label{sec:ud}
We now begin the process of ultradiscretising the recursive dToda description of dFToda given in Section \ref{section:dfktodarecdtoda}.
\begin{defn}
Define variables $(Q_j^{t,i}(\epsilon))_{j=1}^{n}$ and $W_j^i(t)(\epsilon)$ via
$$I_j^{t,i}=e^{-\frac{1}{\epsilon}Q_j^{t,i}(\epsilon)},~~~~~~~j=1,\ldots,n,~~i=0,\ldots,n-1,$$
and
$$u_j^i(t)=e^{-\frac{1}{\epsilon}W_j^i(t)(\epsilon)},~~~~~~~1\leq j\leq i\leq n-1.$$
\end{defn}
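Under this substitution, products of the discrete variables become sums of the ultradiscrete ones (exactly), while sums become minima in the $\epsilon \to 0$ limit: $-\epsilon\log\big(e^{-A/\epsilon}+e^{-B/\epsilon}\big) \to \min(A,B)$. A quick numerical illustration of this mechanism (ours, with a fixed small $\epsilon$; not part of the formal development):

```python
import math

def ud(x, eps):
    """Recover the ultradiscrete variable: x = exp(-X/eps)  =>  X = -eps*log(x)."""
    return -eps * math.log(x)

eps = 0.01
A, B = 2.0, 5.0
a, b = math.exp(-A / eps), math.exp(-B / eps)

prod_side = ud(a * b, eps)  # products of variables become sums (exact for every eps)
sum_side = ud(a + b, eps)   # sums of variables become minima as eps -> 0
```

Here `prod_side` equals $A+B=7$ and `sum_side` is already indistinguishable from $\min(A,B)=2$ at this value of $\epsilon$.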
\n For each $1\leq i\leq n-1$, Equation \ref{eqn:dfktodasubstep},
$$ \left(R^{(i)}(t)\right)_{[n-i+1],[n-i+1]}\left(T_i(u(t+1))\right)_{[n-i+1],[n-i+1]}
=
\left(T_i(u(t))\right)_{[n-i+1],[n-i+1]}
\left(R^{(i-1)}(t)\right)_{[n-i+1],[n-i+1]}$$
gives, via ultradiscretisation (see Section \ref{sec:ultradis}), a map from a tuple
$$(W_1^i(t),W_2^{i+1}(t),\ldots,W_{n-i}^{n-1}(t),Q_1^{t,i-1},Q_2^{t,i-1},\ldots,Q_{n-i+1}^{t,i-1})$$
to
$$(W_1^i(t+1),W_2^{i+1}(t+1),\ldots,W_{n-i}^{n-1}(t+1),Q_1^{t,i},Q_2^{t,i},\ldots,Q_{n-i+1}^{t,i}),$$
which, by Theorem \ref{remark:substepsdtoda}, is the tropicalisation of a dToda evolution, hence is interpreted as an $(n-i+1)$-soliton box-ball evolution. The precise correspondence here is that for each step $i$:
\begin{itemize}
\item Prior to the BBS evolution, the $j$-th block of balls consists of $Q_j^{t,i-1}$ balls, and the gap between the $j$-th and $(j+1)$-st block of balls is of length $W_j^{i+j-1}(t)$.
\item After the BBS evolution, the $j$-th block of balls consists of $Q_j^{t,i}$ balls, and the gap between the $j$-th and $(j+1)$-st block of balls is of length $W_j^{i+j-1}(t+1)$.
\end{itemize}
\n Thus, one sees that the overall dFKToda evolution
$$
\left(
(I_j^{t,0})_{j=1}^n,~(u_i^j(t))_{1\leq i\leq j \leq n-1}
\right)
~~\mapsto~~
\left(
(I_j^{t,n-1})_{j=1}^n,~(u_i^j(t+1))_{1\leq i\leq j \leq n-1}
\right)
$$
produces an overall ultradiscretised evolution (which we will call udFKToda):
$$
\left(
(Q_j^{t,0})_{j=1}^n,~(W_i^j(t))_{1\leq i\leq j \leq n-1}
\right)
~~\mapsto~~
\left(
(Q_j^{t,n-1})_{j=1}^n,~(W_i^j(t+1))_{1\leq i\leq j \leq n-1}
\right)
$$
whose BBS coordinate interpretation is presented as follows.
\begin{defn}\label{definitionofudftodaevol}
The udFKToda system is an evolution on pairs $(\mathbf{Q}(t),\mathbf{W}(t))$ where $$\mathbf{Q}(t)=(Q_j^t)_{j=1}^n,~~~~~\mathbf{W}(t)=(W_i^j(t))_{1\leq i\leq j\leq n-1},$$
with $Q_j^t\in \N$ and $W_i^j(t)\in \N$. The evolution is given in $n-1$ steps:
\begin{itemize}
\item The first step is to implement the BBS evolution on $$(\infty,Q_1^t,W_1^1(t),Q_2^t,W_2^2(t),\ldots,W_{n-1}^{n-1}(t),Q_{n}^t,\infty)$$
and define $W_j^j(t+1)$ to be the length of the $j$-th gap and $Q_{n}^{t+1}$ to be the length of the last ($n$-th) block of balls after this application of the BBS evolution.
\item For the $i$-th step (with $i>1$), take the first $n-i+1$ blocks of balls (\textit{i.e.}, all but the last) and separate them with the following sequence of gaps
$$W_1^i(t),~W_2^{i+1}(t),~\ldots,~W_{n-i}^{n-1}(t).$$
Perform the BBS evolution on this box-ball configuration and define $W_j^{i+j-1}(t+1)$ to be the length of the $j$-th gap and $Q_{n-i+1}^{t+1}$ to be the length of the last block of balls after this application of the BBS evolution.
\end{itemize}
After completing steps $1$ through $(n-1)$, all of the time $t+1$ variables
$$\mathbf{Q}(t+1)=(Q_j^{t+1})_{j=1}^n,~~~~~\mathbf{W}(t+1)=(W_i^j(t+1))_{1\leq i\leq j\leq n-1}$$
have been defined.
\end{defn}
\begin{ex}\label{udftodaexample}
Let us compute a single time step of the udFToda flow for $n=4$:
$$\mathbf{Q}(t)=(2,1,3,1),~~~\mathbf{W}(t)=\begin{array}{ccccc}
&&3&&\\&1&&1&\\2&&1&&2
\end{array}.$$
For convenience, we write $\mathbf{W}(t)$ compactly as a triangular array whose bottom (first) row is $(W_1^1(t),W_2^2(t),W_3^3(t))$, whose second row is $(W_1^2(t),W_2^3(t))$, and whose top (third) row is $(W_1^3(t))$.\\
To determine the first row of $\mathbf{W}(t+1)$ and the last entry of $\mathbf{Q}(t+1)$, one takes $\mathbf{Q}(t)$ as the block sizes, with gaps given by the bottom row of $\mathbf{W}(t)$, and performs a BBS evolution to obtain:
\begin{figure}[H]
\centering
\tikz[scale=0.39]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {1,2,5,7,8,9,12}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {28}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {~\scriptsize{$\cdots$}};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {\scriptsize{$\cdots$}};
}
}
\\[3pt]
\tikz[scale=0.39]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {3,4,6,10,11,13,14}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {28}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {~\scriptsize{$\cdots$}};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {\scriptsize{$\cdots$}};
}
}
\end{figure}
\n From this, we read off the following:
$$\mathbf{Q}(t+1)=(?,?,?,2),~~~\mathbf{W}(t+1)=\begin{array}{ccccc}
&&?&&\\&?&&?&\\1&&3&&1
\end{array}.$$
Next, to determine the second row of $\mathbf{W}(t+1)$ and the penultimate entry of $\mathbf{Q}(t+1)$, one takes all but the last block of the previously obtained BBS configuration (i.e., blocks of lengths $2$, $1$ and $2$) and separates them with gaps given by the second row of $\mathbf{W}(t)$ to obtain:
\begin{figure}[H]
\centering
\tikz[scale=0.39]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {1,2,4,6,7}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {28}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {~\scriptsize{$\cdots$}};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {\scriptsize{$\cdots$}};
}
}
\\[3pt]
\tikz[scale=0.39]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {3,5,8,9,10}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {28}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {~\scriptsize{$\cdots$}};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {\scriptsize{$\cdots$}};
}
}
\end{figure}
\n From this, we read off the following:
$$\mathbf{Q}(t+1)=(?,?,3,2),~~~\mathbf{W}(t+1)=\begin{array}{ccccc}
&&?&&\\&1&&2&\\1&&3&&1
\end{array}.$$
Finally, we determine the third (and final) row of $\mathbf{W}(t+1)$, as well as the first two entries of $\mathbf{Q}(t+1)$, by taking all but the last block of the previously obtained BBS configuration (i.e., blocks of lengths $1$ and $1$) and separating them with gaps given by the third row of $\mathbf{W}(t)$ to obtain:
\begin{figure}[H]
\centering
\tikz[scale=0.39]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {1,5}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {28}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {~\scriptsize{$\cdots$}};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {\scriptsize{$\cdots$}};
}
}
\\[3pt]
\tikz[scale=0.39]{
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {2,6}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {28}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {~\scriptsize{$\cdots$}};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {\scriptsize{$\cdots$}};
}
}
\end{figure}
\n From this final calculation, we can now write down the result of applying the udFToda evolution to $\mathbf{Q}(t)$ and $\mathbf{W}(t)$:
$$\mathbf{Q}(t+1)=(1,1,3,2),~~~\mathbf{W}(t+1)=\begin{array}{ccccc}
&&3&&\\&1&&2&\\1&&3&&1
\end{array}.$$
\end{ex}
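The stepwise procedure of Definition \ref{definitionofudftodaevol}, and in particular the computation of Example \ref{udftodaexample}, can be reproduced in a few lines of Python. Here each BBS evolution is implemented in its min-plus (ultradiscrete Toda) form rather than by moving balls box by box; this is our own sketch, with list-of-rows conventions for the triangle $\mathbf{W}(t)$, and is consistent with the figures above.

```python
def bbs_step(Q, W):
    """One BBS evolution on block lengths Q = [Q_1,...,Q_m] separated by the
    finite gaps W = [W_1,...,W_{m-1}] (infinite empty space at both ends),
    in min-plus (ultradiscrete Toda) form:
        Q'_j = min(W_j, Q_1+...+Q_j - Q'_1-...-Q'_{j-1})   (with W_m = +infinity),
        W'_j = Q_{j+1} + W_j - Q'_j."""
    m = len(Q)
    Q_new, W_new = [], []
    excess = 0  # balls picked up so far and not yet re-deposited
    for j in range(m):
        excess += Q[j]
        q = min(W[j], excess) if j < m - 1 else excess
        Q_new.append(q)
        excess -= q
    for j in range(m - 1):
        W_new.append(Q[j + 1] + W[j] - Q_new[j])
    return Q_new, W_new

def udftoda_step(Q, W_tri):
    """One udFToda time step, following the stepwise definition.
    W_tri[0] is the bottom row (W_1^1,...,W_{n-1}^{n-1}) of the triangle and
    W_tri[-1] the top row (W_1^{n-1},).  Returns (Q(t+1), W_tri(t+1))."""
    n = len(Q)
    Q_new = [0] * n
    W_tri_new = []
    blocks = list(Q)
    for i in range(n - 1):                 # steps 1, ..., n-1
        blocks, gaps = bbs_step(blocks, list(W_tri[i]))
        W_tri_new.append(gaps)             # row i+1 of W(t+1), built bottom up
        Q_new[n - 1 - i] = blocks[-1]      # the last block gives Q_{n-i}^{t+1}
        blocks = blocks[:-1]               # drop the last block for the next step
    Q_new[0] = blocks[0]
    return Q_new, W_tri_new

# the single time step of the n = 4 example above
Q_next, W_next = udftoda_step([2, 1, 3, 1], [[2, 1, 2], [1, 1], [3]])
```

Running this on the data of Example \ref{udftodaexample} returns $\mathbf{Q}(t+1)=(1,1,3,2)$ and the triangle with rows $(1,3,1)$, $(1,2)$, $(3)$, as computed there.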
\subsection{Diagrammatic Representation of udFToda}
To aid in visualising the udFToda process, we employ a diagrammatic representation of a BBS (udToda) evolution. We will represent a BBS evolution
$$(Q_1^t,W_1^t,Q_2^t,\ldots,W_{n-1}^t,Q_n^t)\mapsto
(Q_1^{t+1},W_1^{t+1},Q_2^{t+1},\ldots,W_{n-1}^{t+1},Q_n^{t+1})$$
(where we drop the $\infty$'s on either end) in the following diagram:
\begin{figure}[H]
\centering
\scalebox{0.75}{
\tikz[scale=5]{
\foreach \s in {0.10}{
\foreach \x in {1,...,5}{
\foreach \n in {6}{
\ifnum \x<4
\node at (\x,\n-2) {$Q_{\x}^{t}$};
\fi
}
}
\foreach \x in {1,...,5}{
\foreach \n in {5}{
\ifnum \x<4
\node at (\x,\n) {$Q_{\x}^{t+1}$};
\fi
\draw[->](\x,\n-1+\s)--(\x,\n-\s);
\ifnum \x<5
\draw[-](\x+\s,\n) -- (\x+0.5-\s*1.5,\n-\s);
\draw[-](\x+1-\s,\n) -- (\x+0.5+\s*1.5,\n-\s);
\ifnum \x<3
\node at (\x+0.5,\n-\s) {{$W_{\x}^{t+1}$}};
\fi
\draw[-](\x+\s,\n-1) -- (\x+0.5-\s*1.5,\n+\s-1);
\draw[-](\x+1-\s,\n-1) -- (\x+0.5+\s*1.5,\n+\s-1);
\ifnum \x<3
\node at (\x+0.5,\n+\s-1) {{$W_{\x}^{t}$}};
\fi
\fi}
}
\node at (3+0.5,5-\s) {\tiny{$\cdots$}};
\node at (3+0.5,5+\s-1) {\tiny{$\cdots$}};
\node at (4,5) {$Q_{n-1}^{t+1}$};
\node at (4,4) {$Q_{n-1}^{t}$};
\node at (4+0.5,5-\s) {{$W_{n-1}^{t+1}$}};
\node at (4+0.5,5+\s-1) {{$W_{n-1}^{t}$}};
\node at (5,5) {$Q_{n}^{t+1}$};
\node at (5,4) {$Q_{n}^{t}$};
}
}
}
\end{figure}
\begin{ex}
Consider the following BBS evolution:
\begin{figure}[H]
\centering
\tikz[scale=0.44]{
\foreach \s in {-1}{
\node (1) at (-4,3.5) {$t$: };
\node (2) at (-4,3.5+\s) {$t+1$: };
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{
\draw[fill=white] (\x,3+\s) -- (\x+1,3+\s) -- (\x+1,4+\s) -- (\x,4+\s) -- cycle;
}
\foreach \x in {4,5,6,8,12,14,15}
{\draw[fill=white] (\x,3+\s) -- (\x+1,3+\s) -- (\x+1,4+\s) -- (\x,4+\s) -- cycle;
\fill[cyan] (\x+0.5,3.5+\s) circle (0.25);
}
\foreach \x in {}
{\draw[fill=white] (\x,3+\s) -- (\x+1,3+\s) -- (\x+1,4+\s) -- (\x,4+\s) -- cycle;
\fill[red] (\x+0.5,3.5+\s) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3+\s) -- (\x+2,3+\s) -- (\x+2,4+\s) -- (\x,4+\s) -- cycle;
\draw[-] (\x,3+\s) -- (\x,4+\s);
\draw[-] (\x,3+\s) -- (\x+2,3+\s);
\draw[-] (\x,4+\s) -- (\x+2,4+\s);
\draw[-] (\x+1,3+\s) -- (\x+1,4+\s);
\node at (\x+1.5,3.5+\s) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3+\s) -- (\x,4+\s);
\draw[-] (\x,3+\s) -- (\x-2,3+\s);
\draw[-] (\x,4+\s) -- (\x-2,4+\s);
\draw[-] (\x-1,3+\s) -- (\x-1,4+\s);
\node at (\x-1.5,3.5+\s) {$\cdots$};
}
}
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
}
\foreach \x in {1,2,3,7,10,11,13}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[cyan] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {}
{\draw[fill=white] (\x,3) -- (\x+1,3) -- (\x+1,4) -- (\x,4) -- cycle;
\fill[red] (\x+0.5,3.5) circle (0.25);
}
\foreach \x in {16}
{\draw[fill=white,white] (\x,3) -- (\x+2,3) -- (\x+2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x+2,3);
\draw[-] (\x,4) -- (\x+2,4);
\draw[-] (\x+1,3) -- (\x+1,4);
\node at (\x+1.5,3.5) {$\cdots$};
}
\foreach \x in {0}
{\draw[fill=white,white] (\x,3) -- (\x-2,3) -- (\x-2,4) -- (\x,4) -- cycle;
\draw[-] (\x,3) -- (\x,4);
\draw[-] (\x,3) -- (\x-2,3);
\draw[-] (\x,4) -- (\x-2,4);
\draw[-] (\x-1,3) -- (\x-1,4);
\node at (\x-1.5,3.5) {$\cdots$};
}
}
\end{figure}
\n which is represented diagrammatically by
\begin{figure}[H]
\centering
\tikz[scale=0.54]{
\foreach \s in {12}{
\node (A) at (-3,3+\s-\s*0.2) {3};
\node (B) at (3,3+\s-\s*0.2) {1};
\node (C) at (9,3+\s-\s*0.2) {1};
\node (D) at (15,3+\s-\s*0.2) {2};
\node (E) at (0,3+\s-\s*0.2-\s*0.1) {1};
\node (F) at (6,3+\s-\s*0.2-\s*0.1) {3};
\node (G) at (12,3+\s-\s*0.2-\s*0.1) {1};
\node (H) at (0,3+\s*0.4) {3};
\node (I) at (6,3+\s*0.4) {2};
\node (J) at (12,3+\s*0.4) {1};
\node (K) at (-3,3+\s*0.3) {3};
\node (L) at (3,3+\s*0.3) {1};
\node (M) at (9,3+\s*0.3) {2};
\node (N) at (15,3+\s*0.3) {1};
\draw[->] (K) -- (A);
\draw[->] (L) -- (B);
\draw[->] (M) -- (C);
\draw[->] (N) -- (D);
\draw[-] (A) -- (E);
\draw[-] (B) -- (E);
\draw[-] (B) -- (F);
\draw[-] (C) -- (F);
\draw[-] (C) -- (G);
\draw[-] (D) -- (G);
\draw[-] (K) -- (H);
\draw[-] (L) -- (H);
\draw[-] (L) -- (I);
\draw[-] (M) -- (I);
\draw[-] (M) -- (J);
\draw[-] (N) -- (J);
}
}
\end{figure}
\end{ex}
\n Using these basic BBS building blocks, we can now stack them into a representation of the full udFToda evolution. To illustrate the point, the following is the resulting diagram for the udFToda evolution when $n=5$:
\begin{figure}[H]
\centering
\scalebox{1}{
\tikz[scale=2.8]{
\foreach \s in {0.14}{
\foreach \x in {1,...,2}{
\foreach \n [evaluate=\n as \neval using int(\n+\x-1)] in {4}{
\node at (\x,\n) {$Q^{(\n)}_{\x}$};
\draw[->](\x,\n-1+\s)--(\x,\n-\s);
\ifnum \x<2
\draw[-](\x+\s,\n) -- (\x+0.5-\s*1.5,\n-\s);
\draw[-](\x+1-\s,\n) -- (\x+0.5+\s*1.5,\n-\s);
\node at (\x+0.5,\n-\s) {\tiny{$W_{\x}^{\neval}(t+1)$}};
\draw[-](\x+\s,\n-1) -- (\x+0.5-\s*1.5,\n+\s-1);
\draw[-](\x+1-\s,\n-1) -- (\x+0.5+\s*1.5,\n+\s-1);
\node at (\x+0.5,\n+\s-1) {\tiny{$W_{\x}^{\neval}(t)$}};
\fi}
}
\foreach \x in {1,...,3}{
\foreach \n [evaluate=\n as \neval using int(\n+\x-1)] in {3}{
\node at (\x,\n) {$Q^{(\n)}_{\x}$};
\draw[->](\x,\n-1+\s)--(\x,\n-\s);
\ifnum \x<3
\draw[-](\x+\s,\n) -- (\x+0.5-\s*1.5,\n-\s);
\draw[-](\x+1-\s,\n) -- (\x+0.5+\s*1.5,\n-\s);
\node at (\x+0.5,\n-\s) {\tiny{$W_{\x}^{\neval}(t+1)$}};
\draw[-](\x+\s,\n-1) -- (\x+0.5-\s*1.5,\n+\s-1);
\draw[-](\x+1-\s,\n-1) -- (\x+0.5+\s*1.5,\n+\s-1);
\node at (\x+0.5,\n+\s-1) {\tiny{$W_{\x}^{\neval}(t)$}};
\fi}
}
\foreach \x in {1,...,4}{
\foreach \n [evaluate=\n as \neval using int(\n+\x-1)] in {2}{
\node at (\x,\n) {$Q^{(\n)}_{\x}$};
\draw[->](\x,\n-1+\s)--(\x,\n-\s);
\ifnum \x<4
\draw[-](\x+\s,\n) -- (\x+0.5-\s*1.5,\n-\s);
\draw[-](\x+1-\s,\n) -- (\x+0.5+\s*1.5,\n-\s);
\node at (\x+0.5,\n-\s) {\tiny{$W_{\x}^{\neval}(t+1)$}};
\draw[-](\x+\s,\n-1) -- (\x+0.5-\s*1.5,\n+\s-1);
\draw[-](\x+1-\s,\n-1) -- (\x+0.5+\s*1.5,\n+\s-1);
\node at (\x+0.5,\n+\s-1) {\tiny{$W_{\x}^{\neval}(t)$}};
\fi}
}
\foreach \x in {1,...,5}{
\foreach \n [evaluate=\n as \neval using int(\n+\x-1)] in {1}{
\node at (\x,\n) {$Q^{(\n)}_{\x}$};
\draw[->](\x,\n-1+\s)--(\x,\n-\s);
\ifnum \x<5
\draw[-](\x+\s,\n) -- (\x+0.5-\s*1.5,\n-\s);
\draw[-](\x+1-\s,\n) -- (\x+0.5+\s*1.5,\n-\s);
\node at (\x+0.5,\n-\s) {\tiny{$W_{\x}^{\neval}(t+1)$}};
\draw[-](\x+\s,\n-1) -- (\x+0.5-\s*1.5,\n+\s-1);
\draw[-](\x+1-\s,\n-1) -- (\x+0.5+\s*1.5,\n+\s-1);
\node at (\x+0.5,\n+\s-1) {\tiny{$W_{\x}^{\neval}(t)$}};
\fi}
}
\foreach \x in {1,...,5}{
\foreach \n in {0}{
\node at (\x,\n) {$Q^{(\n)}_{\x}$};
}
}
\foreach \x in {1,...,5}{
\node at (\x,0-\s) {\rotatebox{90}{=}};
\node at (\x,-2*\s) {$Q_{\x}^t$};
}
\foreach \x in {2,...,5}{
\node at (\x,7-\x+\s-1) {\rotatebox{90}{=}};
\node at (\x,7-\x+2*\s-1) {$~~~Q_{\x}^{t+1}$};
}
\node at (1,5+\s-1) {\rotatebox{90}{=}};
\node at (1,5+2*\s-1) {$~~~Q_{1}^{t+1}$};
}
\foreach \x in {1,...,4}{
\node at (0.4,\x-0.5) {Step \x ~~~$\rightarrow$ };
}
}
}
\end{figure}
\n Observe that one does indeed have four steps, each of which determines a row of $\mathbf{W}(t+1)$ (building the triangle from the bottom up) and an entry of $\mathbf{Q}(t+1)$ (from right to left).\\[4pt]
\begin{ex} The following is the all-in-one diagrammatic representation of the single time step evolution in Example \ref{udftodaexample}:
\begin{figure}[H]
\centering
\tikz[scale=0.7]{
\node (A1) at (0,0) {2};
\node (B1) at (3,0) {1};
\node (C1) at (6,0) {3};
\node (D1) at (9,0) {1};
\node (A2) at (0,3) {2};
\node (B2) at (3,3) {1};
\node (C2) at (6,3) {2};
\node (D2) at (9,3) {\textbf{\color{blue}{2}}};
\node (A3) at (0,6) {1};
\node (B3) at (3,6) {1};
\node (C3) at (6,6) {\textbf{\color{blue}{3}}};
\node (A4) at (0,9) {\textbf{\color{blue}{1}}};
\node (B4) at (3,9) {\textbf{\color{blue}{1}}};
\node (E1) at (1.5,0.6) {2};
\node (F1) at (4.5,0.6) {1};
\node (G1) at (7.5,0.6) {2};
\node (E2) at (1.5,2.4) {\textbf{\color{red}{1}}};
\node (F2) at (4.5,2.4) {\textbf{\color{red}{3}}};
\node (G2) at (7.5,2.4) {\textbf{\color{red}{1}}};
\node (E3) at (1.5,3.6) {1};
\node (F3) at (4.5,3.6) {1};
\node (E4) at (1.5,5.4) {\textbf{\color{red}{1}}};
\node (F4) at (4.5,5.4) {\textbf{\color{red}{2}}};
\node (E5) at (1.5,6.6) {3};
\node (E6) at (1.5,8.4) {\textbf{\color{red}{3}}};
\draw [-] (A1) -- (E1);
\draw [-] (B1) -- (E1);
\draw [-] (B1) -- (F1);
\draw [-] (C1) -- (F1);
\draw [-] (C1) -- (G1);
\draw [-] (D1) -- (G1);
\draw [-] (A2) -- (E2);
\draw [-] (B2) -- (E2);
\draw [-] (B2) -- (F2);
\draw [-] (C2) -- (F2);
\draw [-] (C2) -- (G2);
\draw [-] (D2) -- (G2);
\draw [-] (A2) -- (E3);
\draw [-] (B2) -- (E3);
\draw [-] (B2) -- (F3);
\draw [-] (C2) -- (F3);
\draw [-] (A3) -- (E4);
\draw [-] (B3) -- (E4);
\draw [-] (B3) -- (F4);
\draw [-] (C3) -- (F4);
\draw [-] (A3) -- (E5);
\draw [-] (B3) -- (E5);
\draw [-] (A4) -- (E6);
\draw [-] (B4) -- (E6);
\draw [->] (A1) -- (A2);
\draw [->] (A2) -- (A3);
\draw [->] (A3) -- (A4);
\draw [->] (B1) -- (B2);
\draw [->] (B2) -- (B3);
\draw [->] (B3) -- (B4);
\draw [->] (C1) -- (C2);
\draw [->] (C2) -- (C3);
\draw [->] (D1) -- (D2);
}
\end{figure}
\n In the above, for ease of comprehension, we use bold blue numbers to represent the entries of $\mathbf{Q}(t+1)$ and bold red numbers for the entries of the triangular array $\mathbf{W}(t+1)$. The bold numbers, collectively, provide all of the time $t+1$ information.
\end{ex}
\subsection{RSK}
In \cite{bib:era}, we demonstrated how the classical box-ball system can be extended to a cellular automaton that captures the udToda dynamics when coordinates are allowed to degenerate to zero. A particular application of this was realised through the encoding of the basic version of a combinatorial algorithm known as \textit{Schensted insertion}. The version realised in that paper was the insertion of a word into another word. The full version of Schensted insertion, used in proving the famous Robinson-Schensted-Knuth correspondence, involves word insertion into a so-called \textit{semistandard Young tableau}. It is known that Schensted insertion is an iterated process of coupled Schensted word insertions \cite{bib:ny}. At the level of coordinates, therefore, one should expect the full Schensted insertion process to be captured by udFToda, with zeroes allowed in coordinates. The following example illustrates this.
\begin{ex}\label{schenstedudftoda} The process of Schensted inserting \cite{bib:aigner} the word $1112334$ (or $1^32^13^24^1$) into the tableau
$$\begin{ytableau}
1&1&2&3&3\\
2&2&4\\3&3\\4&4
\end{ytableau}$$
results in the following tableau:
$$\begin{ytableau}
1&1&1&1&1&2&3&3&4\\2&2&2&3&3\\3&3&4\\4&4
\end{ytableau}.$$
We initialise udFToda with $\mathbf{Q}(t)=(0,3,1,2,1)$\footnote{The initial 0 in the first entry of $\mathbf{Q}(t)$, which propagates for all time $t$, is a necessary inclusion for the encoding of an insertion word \cite{bib:era}. Also, an implicit extension of \cite[Equation 4.35]{bib:era} is the interpretation of $\mathbf{Q}(t+1)$ as an accumulation of row growths.} and $\mathbf{W}(t)=\begin{array}{ccccccc}
&&&2&&&\\
&&2&&0&&\\
&2&&0&&1&\\
2&&1&&2&&0
\end{array}$, so that $\mathbf{Q}(t)$ encodes the insertion word and $\mathbf{W}(t)$ encodes the initial tableau. The encoding is given by counting $1$'s, $2$'s, $3$'s and $4$'s: $\mathbf{Q}(t)$ records the letter multiplicities of the insertion word, while the $j$-th row of the triangle, read from the bottom, records the multiplicities of the letters $j,\ldots,n-1$ in the $j$-th row of the tableau. After performing the udFToda time evolution, as described in Definition \ref{definitionofudftodaevol}, one obtains $\mathbf{Q}(t+1)=(0,0,1,2,4)$ and $\mathbf{W}(t+1)=\begin{array}{ccccccc}
&&&2&&&\\
&&2&&1&&\\
&3&&2&&0&\\
5&&1&&2&&1
\end{array}$. The triangle $\mathbf{W}(t+1)$ captures the result of Schensted inserting the insertion word into the initial tableau. For reference, we include the udFToda diagram for the time evolution $(\mathbf{Q}(t),\mathbf{W}(t))\mapsto (\mathbf{Q}(t+1),\mathbf{W}(t+1))$.
\begin{figure}[H]
\centering
\tikz[scale=0.7]{
\node (H1) at (-3,0) {0};
\node (H2) at (-3,3) {0};
\node (H3) at (-3,6) {0};
\node (H4) at (-3,9) {0};
\node (H5) at (-3,12) {0};
\node (H6) at (0,12) {0};
\node (A1) at (0,0) {3};
\node (B1) at (3,0) {1};
\node (C1) at (6,0) {2};
\node (D1) at (9,0) {1};
\node (A2) at (0,3) {1};
\node (B2) at (3,3) {2};
\node (C2) at (6,3) {0};
\node (D2) at (9,3) {4};
\node (A3) at (0,6) {0};
\node (B3) at (3,6) {1};
\node (C3) at (6,6) {2};
\node (A4) at (0,9) {0};
\node (B4) at (3,9) {1};
\node (J1) at (-1.5,0.6) {2};
\node (E1) at (1.5,0.6) {1};
\node (F1) at (4.5,0.6) {2};
\node (G1) at (7.5,0.6) {0};
\node (J2) at (-1.5,2.4) {5};
\node (E2) at (1.5,2.4) {1};
\node (F2) at (4.5,2.4) {2};
\node (G2) at (7.5,2.4) {1};
\node (J3) at (-1.5,3.6) {2};
\node (E3) at (1.5,3.6) {0};
\node (F3) at (4.5,3.6) {1};
\node (J4) at (-1.5,5.4) {3};
\node (E4) at (1.5,5.4) {2};
\node (F4) at (4.5,5.4) {0};
\node (J5) at (-1.5,6.6) {2};
\node (E5) at (1.5,6.6) {0};
\node (J6) at (-1.5,8.4) {2};
\node (E6) at (1.5,8.4) {1};
\node (J7) at (-1.5,9.6) {2};
\node (E7) at (-1.5,11.4) {2};
\draw [-] (A1) -- (E1);
\draw [-] (B1) -- (E1);
\draw [-] (B1) -- (F1);
\draw [-] (C1) -- (F1);
\draw [-] (C1) -- (G1);
\draw [-] (D1) -- (G1);
\draw [-] (A2) -- (E2);
\draw [-] (B2) -- (E2);
\draw [-] (B2) -- (F2);
\draw [-] (C2) -- (F2);
\draw [-] (C2) -- (G2);
\draw [-] (D2) -- (G2);
\draw [-] (A2) -- (E3);
\draw [-] (B2) -- (E3);
\draw [-] (B2) -- (F3);
\draw [-] (C2) -- (F3);
\draw [-] (A3) -- (E4);
\draw [-] (B3) -- (E4);
\draw [-] (B3) -- (F4);
\draw [-] (C3) -- (F4);
\draw [-] (A3) -- (E5);
\draw [-] (B3) -- (E5);
\draw [-] (A4) -- (E6);
\draw [-] (B4) -- (E6);
\draw [->] (A1) -- (A2);
\draw [->] (A2) -- (A3);
\draw [->] (A3) -- (A4);
\draw [->] (B1) -- (B2);
\draw [->] (B2) -- (B3);
\draw [->] (B3) -- (B4);
\draw [->] (C1) -- (C2);
\draw [->] (C2) -- (C3);
\draw [->] (D1) -- (D2);
\draw [->] (H1) -- (H2);
\draw [->] (H2) -- (H3);
\draw [->] (H3) -- (H4);
\draw [->] (H4) -- (H5);
\draw [->] (A4) -- (H6);
\draw [-] (H1) -- (J1);
\draw [-] (A1) -- (J1);
\draw [-] (H2) -- (J2);
\draw [-] (A2) -- (J2);
\draw [-] (H2) -- (J3);
\draw [-] (A2) -- (J3);
\draw [-] (H3) -- (J5);
\draw [-] (A3) -- (J5);
\draw [-] (H3) -- (J4);
\draw [-] (A3) -- (J4);
\draw [-] (H4) -- (J6);
\draw [-] (A4) -- (J6);
\draw [-] (H4) -- (J7);
\draw [-] (A4) -- (J7);
\draw [-] (H5) -- (E7);
\draw [-] (H6) -- (E7);
}
\end{figure}
\end{ex}
\begin{rem}\label{rem:rskasfulludtoda}
We observe further that the vector $\mathbf{Q}(t+1)$ captures precisely the amount by which each row of the tableau has grown. In Example \ref{schenstedudftoda}, one sees that $\mathbf{Q}(t+1)=(0,0,1,2,4)$, when read from right to left, is understood as a growth of 4 boxes in the first row, a growth of 2 boxes in the second row, and a growth of 1 box in the third row. The fourth row does not grow. We state this now, referring the reader to Equation 4.53 of \cite{bib:era} for the key component required to prove this observation inductively. The referenced equation says that a word insertion into a row of a tableau grows the row by the last coordinate of $\mathbf{Q}(t+1)$. The recursive applications for lower dimensions, corresponding to the subsequent rows, yield the full result.\\[-8pt]
\n The significance of this observation is that a recursive application of udFToda, driven inhomogeneously by a sequence of words, captures the full RSK correspondence (the insertion and recording tableaux, for those familiar with this combinatorial bijection) and will be explored further by the authors in an upcoming paper.
\end{rem}
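The classical side of Example \ref{schenstedudftoda}, together with the row-growth observation just made, can be checked independently with a standard implementation of Schensted row bumping. The following code (ours) performs ordinary tableau insertion directly and makes no use of the udFToda encoding:

```python
def insert_into_row(row, x):
    """Bump x into a weakly increasing row: replace the leftmost entry
    strictly greater than x and return it, or append x and return None."""
    for k, y in enumerate(row):
        if y > x:
            row[k] = x
            return y
    row.append(x)
    return None

def schensted_insert_word(tableau, word):
    """Schensted-insert the letters of `word`, left to right, into
    `tableau` (a list of weakly increasing rows), modifying it in place."""
    for x in word:
        r = 0
        while x is not None:
            if r == len(tableau):
                tableau.append([])
            x = insert_into_row(tableau[r], x)
            r += 1
    return tableau

initial = [[1, 1, 2, 3, 3], [2, 2, 4], [3, 3], [4, 4]]
result = [row[:] for row in initial]
schensted_insert_word(result, [1, 1, 1, 2, 3, 3, 4])   # the word 1^3 2^1 3^2 4^1
growths = [len(r) - len(s) for r, s in zip(result, initial)]
```

The resulting tableau agrees with the one displayed in Example \ref{schenstedudftoda}, and `growths` equals $(4,2,1,0)$, matching $\mathbf{Q}(t+1)=(0,0,1,2,4)$ read from right to left.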
\section{The Geometry of the Full Kostant Toda Lattice and its Discretizations} \label{geometry}
As mentioned at the end of Section \ref{history}, the matrix reformulation of the Toda lattice in Flaschka's variables leads to natural Lie-theoretic interpretations and extensions. While we remain in the phase space setting of Hessenberg matrices, we introduce, in this section, a modicum of concepts and terminology from the theory of Lie groups and their representations that will facilitate a natural geometric re-interpretation of our results.
\subsection{Kostant's Theorem, a Flag Manifold Embedding and Linearization} \label{sec:kostant}
\n We consider the Lie algebra decomposition of $n \times n$ matrices
\begin{eqnarray} \label{frak}
\frak{g} = \frak{gl}(n, \mathbb{R}) & = & \frak{n}_- \oplus \frak{b}_+
\end{eqnarray}
where $\frak{n}_-$ is the lower triangular nilpotent sub-algebra (nilradical) and $\frak{b}_+$ is a maximal solvable sub-algebra, referred to as a {\it Borel sub-algebra}:
\begin{eqnarray*}
\frak{n}_- = \left(\begin{array}{ccccc}
0 & & & &\\
*& 0 & & &\\
\vdots & \ddots & \ddots & \ddots &\\
\vdots & & \ddots & \ddots & \\
* & \dots & \dots & * & 0
\end{array} \right), &&
\frak{b}_+ = \left(\begin{array}{ccccc}
* & * &\dots &\dots &*\\
& * & * & &\vdots\\
& & \ddots & \ddots &\vdots\\
& & & \ddots & *\\
& & & & *
\end{array} \right).
\end{eqnarray*}
We will also use $\frak{b}_-$, the transpose of $\frak{b}_+$. Employing the {\it principal nilpotent} element,
\begin{eqnarray*}
\epsilon &=& \left(\begin{array}{ccccc}
0 & 1 & & &\\
& 0 & 1 & &\\
& & \ddots & \ddots &\\
& & & \ddots & 1\\
& & & & 0
\end{array} \right)
\end{eqnarray*}
one defines an extended Toda phase space
\begin{eqnarray*}
\epsilon + \frak{b}_- &=& \left(\begin{array}{ccccc}
* & 1 & & &\\
*& * & 1 & &\\
\vdots & \ddots & \ddots & \ddots &\\
\vdots & & \ddots & \ddots & 1\\
* & \dots & \dots & * & *
\end{array} \right), \\
\end{eqnarray*}
(which is the space of all lower Hessenberg matrices, $\mathcal{H}$, introduced earlier) on which the Toda Lax equation (\ref{Lax2}) as well as the discrete time Toda equation (\ref{symeseqn}) and their extensions are defined. It will be useful to introduce the following algebra and group projections,
\begin{eqnarray*}
\pi_- : \frak{g} \to \frak{n}_-, \qquad
\Pi_- : G \to N_- \\
\pi_+ : \frak{g} \to \frak{b}_+, \qquad
\Pi_+ : G \to B_+
\end{eqnarray*}
where $G = GL(n, \mathbb{R})$, $N_-$, the lower unipotent matrices, is the exponential group of the algebra $\frak{n}_-$ and $B_+$, the invertible upper triangular matrices, is the exponential group of the algebra $\frak{b}_+$. $\Pi_\pm$ are defined on the open dense subset of $G$ where there is an $LU$ factorization.
These factorizations provide the basis for effective linearization of both continuous and discrete time Toda. This is facilitated by a key theorem due to Kostant that provides a natural embedding of the Toda dynamics into a flag manifold; under this embedding the flows play a role analogous to that of action-angle variables in classical integrable systems theory.\\
\smallskip
Before getting to the statement of Kostant's theorem we introduce the following distinguished bidiagonal element of $\mathcal{H}$,
\begin{equation} \label{epslam}
\epsilon_\Lambda = \left[\begin{array}{cccc}
\lambda_1 & 1\\
&\lambda_2 & \ddots\\
&&\ddots & 1\\
&&&\lambda_n
\end{array}\right],
\end{equation}
where $ \lambda_1 > \cdots > \lambda_n $ are the eigenvalues of $X_0$ which, for simplicity of exposition, we assume to be distinct. By isospectrality, these are the eigenvalues of $X(t)$ for all $t$. It is immediate from (\ref{Lax2}) that
$\epsilon_\Lambda$ is a fixed point of the Toda flow.
\begin{thm} \cite{bib:kostant} \label{KThm} For each $X \in \epsilon + \frak{b}_-$ there exists a {\bf unique} lower unipotent $L \in N_-$ such that
$$
X = L^{-1} \epsilon_\Lambda L.
$$
\end{thm}
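Numerically, the unique $L$ of Theorem \ref{KThm} can be found by writing $L = I + N$ with $N$ strictly lower triangular, so that $X = L^{-1}\epsilon_\Lambda L$ becomes the linear system $NX - \epsilon_\Lambda N = \epsilon_\Lambda - X$, to be solved over the strictly lower triangular matrices. (A least-squares solve is used below because the unrestricted Sylvester equation is singular: $X$ and $\epsilon_\Lambda$ share their spectrum. Uniqueness in the theorem guarantees the restricted system has a single exact solution.) A small NumPy sketch of our own:

```python
import numpy as np

def kostant_L(X, lam):
    """Solve X = L^{-1} eps_Lambda L for the unique unipotent lower
    triangular L.  Writing L = I + N with N strictly lower triangular,
    the equation becomes the linear system
        N X - eps_Lambda N = eps_Lambda - X,
    solved here over the strictly lower triangular matrices."""
    n = X.shape[0]
    eps_lam = np.diag(lam) + np.diag(np.ones(n - 1), 1)
    idx = [(i, j) for i in range(n) for j in range(n) if i > j]
    M = np.zeros((n * n, len(idx)))
    for k, (i, j) in enumerate(idx):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        M[:, k] = (E @ X - eps_lam @ E).ravel()  # action on the basis matrix E_{ij}
    coeffs, *_ = np.linalg.lstsq(M, (eps_lam - X).ravel(), rcond=None)
    N = np.zeros((n, n))
    for k, (i, j) in enumerate(idx):
        N[i, j] = coeffs[k]
    return np.eye(n) + N

# a tridiagonal element of eps + b_- with distinct real eigenvalues
X0 = np.array([[2.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, -2.0]])
lam = np.sort(np.linalg.eigvalsh(X0))[::-1]          # lambda_1 > lambda_2 > lambda_3
eps_Lambda = np.diag(lam) + np.diag(np.ones(2), 1)
L = kostant_L(X0, lam)
```

The returned $L$ is unipotent lower triangular and satisfies $LX_0 = \epsilon_\Lambda L$, i.e., $X_0 = L^{-1}\epsilon_\Lambda L$.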
Set
$$
\mathcal{H}_\Lambda = \{ X \in \epsilon + \frak{b}_- : \sigma(X) = \{\lambda_1, \dots, \lambda_n\}\},
$$
the isospectral manifold. It is immediate from the uniqueness statement in Theorem \ref{KThm} that the following defines an embedding into a compact, homogeneous space, known as a
{\it flag manifold}.
\begin{eqnarray} \label{eq:principal}
\kappa_\Lambda : \mathcal{H}_\Lambda &\to & G/B_+\\ \nonumber
X &\mapsto& L \mod B_+
\end{eqnarray}
where $X = L^{-1} \epsilon_\Lambda L.$
This mapping simultaneously linearizes and completes the Toda flows
\cite{bib:efh, bib:efs}. We will refer to $\kappa_\Lambda$ as the {\it principal embedding} because of its relation to the principal nilpotent element $\epsilon$.
\n If $X_0 = L_0^{-1} \epsilon_\Lambda L_0$ then
\begin{eqnarray} \nonumber
X(s) &=& \Pi_-^{-1}(e^{s X_0})X_0 \Pi_-(e^{s X_0})\\ \nonumber
&=& \Pi_-^{-1}(e^{s X_0})L_0^{-1}\epsilon_\Lambda L_0 \Pi_-(e^{s X_0})\\ \nonumber
\kappa_\Lambda(X(s)) &=& L_0 \Pi_-(e^{s X_0}) \mod B_+\\ \nonumber
&=& L_0 (e^{s X_0}) \mod B_+\\ \label{linprinc}
&=& e^{\epsilon_\Lambda s} L_0 \mod B_+
\end{eqnarray}
Thus, one sees that under the embedding $\kappa_\Lambda$, the Toda flow maps to a linear semigroup action by $e^{\epsilon_\Lambda s}$ through the image of the initial value,
$\kappa_\Lambda(X_0) = L_0 \mod B_+$, which exists for all time.
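The linearization just derived can be tested numerically. In the sketch below (ours; the helper name \texttt{kostant\_conjugator} is our own), the Kostant representative of $X(s)$ is computed directly from $X(s)$ by solving the linear system implicit in Theorem \ref{KThm}, and is compared with $[e^{s\epsilon_\Lambda}L_0]_-$:

```python
import numpy as np
from scipy.linalg import expm

def lu_nopivot(A):
    """A = L R with L lower unipotent, R upper triangular (no pivoting)."""
    n = A.shape[0]
    L, R = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = R[i, k] / R[k, k]
            R[i, :] -= L[i, k] * R[k, :]
    return L, R

def kostant_conjugator(X, eps_Lam):
    """The unique lower unipotent M with M X = eps_Lam M: writing M = I + N with
    N strictly lower, solve the linear system N X - eps_Lam N = eps_Lam - X."""
    n = X.shape[0]
    A = np.kron(X.T, np.eye(n)) - np.kron(np.eye(n), eps_Lam)  # acts on vec(N), column-major
    cols = [j * n + i for j in range(n) for i in range(n) if i > j]
    sol = np.linalg.lstsq(A[:, cols], (eps_Lam - X).flatten(order='F'), rcond=None)[0]
    N = np.zeros((n, n))
    for c, v in zip(cols, sol):
        N[c % n, c // n] = v
    return np.eye(n) + N

np.random.seed(1)
n = 4
lam = np.array([4.0, 2.5, 1.0, 0.5])
eps_Lam = np.diag(lam) + np.diag(np.ones(n - 1), 1)
L0 = np.eye(n) + np.tril(np.random.rand(n, n), -1)
X0 = np.linalg.solve(L0, eps_Lam @ L0)

s = 0.7
Pi = lu_nopivot(expm(s * X0))[0]                  # Pi_-(e^{s X0})
Xs = np.linalg.solve(Pi, X0 @ Pi)                 # the Toda flow at time s
L_direct = kostant_conjugator(Xs, eps_Lam)        # Kostant representative from X(s) alone
L_embed = lu_nopivot(expm(s * eps_Lam) @ L0)[0]   # [e^{s eps_Lam} L0]_-
assert np.allclose(L_direct, L_embed)
```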
The continuous Toda flow commutes with the discrete time Toda evolution. In fact, there is a hierarchy of continuous flows commuting with dToda and with one another given by Lax equations of the form
\begin{eqnarray} \label{hierarchy}
\frac{d}{ds_m} X = \left[ X, \pi_- X^m\right].
\end{eqnarray}
On $\mathcal{H}_\Lambda$ the first $n-1$ of these flows are locally independent and their respective images under the principal embedding have the form
\begin{eqnarray} \label{hiflow}
e^{\epsilon^m_\Lambda s_m} L_0 \mod B_+
\end{eqnarray}
and together they generate a torus action (see section \ref{sec:toremb}) that spans the closure of $\kappa_\Lambda(\mathcal{H}_\Lambda)$ in $G/B_+$ \cite{bib:efs}.
\medskip{}
\begin{rem} \label{rem:Lie} The language we have used in this section to describe the Toda phase space and dynamics (Borel and nilradical sub-algebras, principal nilpotent elements) has immediate extensions to the setting of general real semisimple Lie algebras to define {\it full} versions \cite{bib:efs} of the so-called {\it generalized Toda lattices} \cite{bib:kostant2}. Kostant also introduced an analogue of $\epsilon_\Lambda$ in Lemma 3.52 of \cite{bib:kostant2}.
This element is distinguished by the fact that it lies in the intersection
$\mathcal{H}_\Lambda \cap \frak{b}_+$, and in that regard it is essentially unique. (There are $n!$ elements in this intersection corresponding to permutations of the distinct eigenvalues.)
\end{rem}
\subsection{Representation Theory and tau-functions} \label{sec:repn}
Let $G$ denote the group $GL(\ell +1)$ and let $(\rho_n, V_n)$ denote the $n^{th}$ fundamental representation of $G$. $\rho_1$ is the
{\it defining representation} with $V_1 = \mathbb{C}^{\ell + 1}$ defined by
$$
\rho_1(g) v = g v
$$
for $v \in V_1$. This induces a representation on the exterior algebra of $V_1$ that defines the remaining fundamental representations, respectively, on
$V_n = \bigwedge^n \mathbb{C}^{\ell + 1}$ given by
$$
\rho_n(g) v_1 \wedge \dots \wedge v_n = g v_1 \wedge \dots \wedge g v_n.
$$
With respect to the standard basis $e_1, \dots, e_{\ell + 1}$ of $\mathbb{C}^{\ell + 1}$
one defines a Hermitian inner product on $V_n$ by
$$
\langle e_{i_1} \wedge \dots \wedge e_{i_n}, e_{j_1} \wedge \dots \wedge e_{j_n}\rangle = \delta_{i_1, j_1} \cdots \delta_{i_n, j_n}.
$$
Set $v^{(n)} = e_1 \wedge \dots \wedge e_n$ and $v_{(n)} = e_{\ell - n + 2} \wedge \dots \wedge e_{\ell + 1}$. These are, respectively, the highest and lowest weight vectors, with respect to lexicographic order, of the representation $\rho_n$ \cite{bib:fuha}.
We can now define the $n^{th}$ {\it $\tau$-function} to be
$$
\tau_n(s) = \langle \exp(s X_0) v^{(n)}, v^{(n)}\rangle
$$
which is just the $n^{th}$ principal minor of $\exp(s X_0)$.
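In coordinates, the equality of this pairing with the leading principal minor is elementary to check, and the diagonal of the $R$ factor in the LU factorization of $\exp(s X_0)$ is then recovered from ratios of consecutive $\tau$'s. A small numerical sketch (ours; the data are arbitrary):

```python
import numpy as np
from scipy.linalg import expm
from itertools import permutations

def lu_nopivot(A):
    """A = L R, L lower unipotent, R upper triangular (no pivoting)."""
    n = A.shape[0]
    L, R = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = R[i, k] / R[k, k]
            R[i, :] -= L[i, k] * R[k, :]
    return L, R

np.random.seed(2)
n, s = 4, 0.3
lam = np.array([3.0, 2.0, 1.0, 0.5])
eps_Lam = np.diag(lam) + np.diag(np.ones(n - 1), 1)
L0 = np.eye(n) + np.tril(np.random.rand(n, n), -1)
X0 = np.linalg.solve(L0, eps_Lam @ L0)
E = expm(s * X0)

def sign(p):
    """Parity of a permutation, via its inversion count."""
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def tau_pairing(E, k):
    """<rho_k(exp(s X0)) e_1 ^ ... ^ e_k, e_1 ^ ... ^ e_k>, expanded in the wedge basis."""
    return sum(sign(p) * np.prod([E[p[i], i] for i in range(k)])
               for p in permutations(range(k)))

taus = [np.linalg.det(E[:k, :k]) for k in range(n + 1)]          # tau_0 = 1, ..., tau_n
assert all(np.isclose(taus[k], tau_pairing(E, k)) for k in range(1, n + 1))

R = lu_nopivot(E)[1]                 # in exp(s X0) = L R, diag(R) is given by tau ratios
assert np.allclose(np.diag(R), [taus[k] / taus[k - 1] for k in range(1, n + 1)])
```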
\subsection{Homogeneous Space Representations of dFToda} \label{sec:homog}
Invoking Proposition \ref{prop:dft}, the principal embedding defined for Full Toda may be dynamically extended to its discretization as
\begin{eqnarray} \nonumber
\kappa_\Lambda: \mathcal{H}^{>0}_\Lambda & \to & G/B_+\\
X(t) &\mapsto& \epsilon^t_\Lambda L_0 \mod B_+ \label{kappaSymes}
\end{eqnarray}
for $t \in \mathbb{Z}$. If $X(0)$ is totally positive then, by Corollary \ref{cor:wp}, $X(t)$ remains totally positive and so is defined for all time. Thus, $\kappa_\Lambda(X(t))$ is defined and by (\ref{linprinc}) and Proposition \ref{prop:dft} it takes the value stated in (\ref{kappaSymes}). Moreover, if $L_0$ is totally positive, then $[\epsilon^t_\Lambda L_0]_-$ is totally positive as well (see Remark \ref{tphiflow}).
However, it is important to note that total positivity of $X$ does not necessarily imply that $L = \kappa_\Lambda(X)$ is totally positive. Nonetheless, the Symes factorization algorithm may be directly related to the principal embedding as follows. Begin by applying Kostant's theorem to the initial condition, follow this by its LU factorization and then by the Symes commutation action, and continue to iterate:
\begin{eqnarray*}
X(0) &=& L_0^{-1} \epsilon_\Lambda L_0\\
&=& L(0) R(0)\\
X(1) &=& L(0)^{-1} X(0) L(0)\\
&=& L(1) R(1)\\
X(2) &=& L(1)^{-1} X(1) L(1)\\
&=& L(2) R(2) \cdots
\end{eqnarray*}
By Corollary \ref{cor:wp} the sequence $L(0), L(1), L(2), \dots $ is well-defined (for all discrete time) and totally positive. This sequence defines an orbit under another flag manifold embedding
\begin{eqnarray} \label{Symesemb}
S_\Lambda: \mathcal{H}^{>0}_\Lambda &\to& (G/B_+)^{>0} \\
X &\mapsto & \,\,\,\, L
\end{eqnarray}
where $X = LR$. This will be referred to as the {\it Symes embedding}.
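A minimal numerical sketch of this iteration (ours; $X(0)$ is assembled as a product of positive bidiagonal factors so that it is totally nonnegative) confirms that the sequence $X(t)$ stays in $\epsilon + \frak{b}_-$ and is isospectral:

```python
import numpy as np

def lu_nopivot(A):
    """A = L R, L lower unipotent, R upper triangular (no pivoting)."""
    n = A.shape[0]
    L, R = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = R[i, k] / R[k, k]
            R[i, :] -= L[i, k] * R[k, :]
    return L, R

np.random.seed(3)
n = 4

Lfac = np.eye(n)                           # totally nonnegative lower unipotent factor
for j in range(1, n):
    T = np.eye(n)
    for i in range(n - j):
        T[i + 1, i] = np.random.rand() + 0.1
    Lfac = Lfac @ T
U = np.diag(np.random.rand(n) + 0.5) + np.diag(np.ones(n - 1), 1)
X = Lfac @ U                               # TNN Hessenberg with unit superdiagonal
lam = np.sort(np.linalg.eigvals(X).real)   # positive and distinct for such data

for t in range(6):
    L, R = lu_nopivot(X)                   # X(t) = L(t) R(t)
    X = np.linalg.solve(L, X @ L)          # X(t+1) = L(t)^{-1} X(t) L(t)  ( = R(t) L(t) )
    assert np.allclose(np.diag(X, 1), 1.0)                           # stays in epsilon + b_-
    assert all(np.allclose(np.diag(X, k), 0.0) for k in range(2, n))
    assert np.allclose(np.sort(np.linalg.eigvals(X).real), lam)      # isospectral
```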
\begin{rem} \label{TPFlag}
For a complete introduction to the totally positive flag manifold $(G/B_+)^{>0}$ and relations to FKToda we refer the reader to \cite{bib:kw}. For our purposes here we will only need to consider $N_-^{>0}$ which is an open dense cell in $(G/B_+)^{>0}$. By Corollary \ref{cor:wp}, dFToda preserves $N_-^{>0}$; so we slightly abuse nomenclature when we speak of the Lusztig parametrization as providing a parametrization of $(G/B_+)^{>0}$.
\end{rem}
\begin{rem} \label{tphiflow}
We note that since $e^{\epsilon^m_\Lambda s_m}$ is manifestly totally positive and invertible, if $L_0$ is also totally positive then by the Loewner-Whitney theorem the product in (\ref{hiflow}) is TNN and so
by Corollary \ref{cor:lw}
the corresponding flow induced through $\kappa_\Lambda$ on the flag manifold
preserves $(G/B_+)^{>0}$. The same holds for the discrete flow (\ref{kappaSymes}) as long as the eigenvalues are all positive.
\end{rem}
\bigskip
To relate the Symes orbit to
(\ref{kappaSymes}) define the sequence $L_t \in N^{>0}_-$ coming from that principal embedding by
\begin{eqnarray*}
\epsilon^t_\Lambda L_0 &=& L_t R_t \,\,\,\, \mbox{or} \\
L_t &=& \left[ \epsilon^t_\Lambda L_0 \right]_- \left( \doteq \epsilon^t_\Lambda L_0 \mod B_+ \right).
\end{eqnarray*}
Then
\begin{eqnarray*}
L(0) &=& \left[ L_0^{-1} \epsilon_\Lambda L_0 \right]_-\\
&=& L_0^{-1} \left[ \epsilon_\Lambda L_0 \right]_-\\
&=& L_0^{-1} L_1 \\
L(1) &=& \left[ L(0)^{-1} X(0) L(0) \right]_- = \left[ L(0)^{-1} X(0) L(0) R(0) \right]_-\\
&=& \left[ L(0)^{-1} X(0)^2\right]_-\\
&=& \left[L(0)^{-1} L_0^{-1} \epsilon^2_\Lambda L_0 \right]_-\\
&=& L(0)^{-1} L_0^{-1} \left[\epsilon^2_\Lambda L_0 \right]_-\\
&=& L_1^{-1} L_0 L_0^{-1} L_2\\
&=& L_1^{-1} L_2\\
&\vdots&\\
L(t) &=& L_t^{-1} L_{t+1}
\end{eqnarray*}
Thus the Symes orbit in $G/B_+$ is seen to be a discrete, matrix ``logarithmic derivative'' of the linear semigroup flow (\ref{kappaSymes}).
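This relation is easy to confirm numerically. The sketch below (ours) builds $L_t = [\epsilon^t_\Lambda L_0]_-$ directly, reconstructs $X(t) = L_t^{-1}\epsilon_\Lambda L_t$ from the principal embedding, and checks that its Symes factor is $L_t^{-1}L_{t+1}$:

```python
import numpy as np

def lu_nopivot(A):
    """A = L R, L lower unipotent, R upper triangular (no pivoting)."""
    n = A.shape[0]
    L, R = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = R[i, k] / R[k, k]
            R[i, :] -= L[i, k] * R[k, :]
    return L, R

np.random.seed(4)
n = 4
lam = np.array([4.0, 3.0, 2.0, 1.0])
eps_Lam = np.diag(lam) + np.diag(np.ones(n - 1), 1)

L0 = np.eye(n)                             # L0 in N_-^{>0}: product of positive bidiagonal factors
for j in range(1, n):
    T = np.eye(n)
    for i in range(n - j):
        T[i + 1, i] = np.random.rand() + 0.1
    L0 = L0 @ T

# L_t = [eps_Lam^t L0]_- , the unipotent representative of the embedded orbit
Lt = [lu_nopivot(np.linalg.matrix_power(eps_Lam, t) @ L0)[0] for t in range(6)]

for t in range(5):
    Xt = np.linalg.solve(Lt[t], eps_Lam @ Lt[t])   # X(t) = L_t^{-1} eps_Lam L_t
    Lsymes = lu_nopivot(Xt)[0]                     # Symes factor: X(t) = L(t) R(t)
    assert np.allclose(Lsymes, np.linalg.solve(Lt[t], Lt[t + 1]))   # L(t) = L_t^{-1} L_{t+1}
```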
\subsection{Torus Embedding} \label{sec:toremb}
This section describes a modification of the principal embedding that represents the collective hierarchy of Toda flows, (\ref{hierarchy}), as a canonical torus action on a related flag manifold. (This action is the analogue of angle variables in classical integrable systems theory.)
We make use of the following basic lemma (see \cite{bib:er}, Lemma 4.4 for a derivation):
\begin{lem}\label{uppertodiagefhexpl}
If $\lambda_1,\ldots,\lambda_n$ are distinct, then one has $\epsilon_\Lambda=U D_\Lambda U^{-1}$, where $U=(u_{ij})$ is the upper triangular matrix given by
$$u_{ij}=\prod_{k=1}^{i-1}(\lambda_j-\lambda_k),~~~1\leq i\leq j\leq n$$
and $D_\Lambda=\epsilon_\Lambda-\epsilon=\text{diag}(\lambda_1,\ldots,\lambda_n)$.
\end{lem}
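Lemma \ref{uppertodiagefhexpl} says that the columns of $U$ are the eigenvectors of $\epsilon_\Lambda$, normalized to make $U$ upper triangular; a short numerical check (ours):

```python
import numpy as np

lam = np.array([3.0, 1.0, 0.5, -2.0])      # any distinct eigenvalues
n = len(lam)
eps_Lam = np.diag(lam) + np.diag(np.ones(n - 1), 1)

# u_{ij} = prod_{k=1}^{i-1} (lambda_j - lambda_k) for i <= j, zero below the diagonal
U = np.array([[np.prod(lam[j] - lam[:i]) if i <= j else 0.0
               for j in range(n)] for i in range(n)])

assert np.allclose(np.tril(U, -1), 0.0)                # upper triangular, unit first row
assert np.allclose(eps_Lam @ U, U @ np.diag(lam))      # eps_Lam = U D_Lam U^{-1}
```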
\n Based on Lemma \ref{uppertodiagefhexpl}, one is led to consider a variation on the principal embedding,
\begin{eqnarray*}
\mbox{tor}_\Lambda : \mathcal{H}_\Lambda &\to & G/B_+\\
X &\mapsto& U^{-1}L \mod B_+
\end{eqnarray*}
from $X$ to its matrix of left eigenvectors
$\left([U^{-1}L] X = D_\Lambda [U^{-1}L]\right)$. As before one can track the dFToda dynamics through this embedding and find that
\begin{eqnarray*}
\mbox{tor}_\Lambda(X(t)) &=& D^t_\Lambda U^{-1}L \mod B_+.
\end{eqnarray*}
Also, as before, this can be extended to the commuting hierarchy of Toda flows (\ref{hierarchy}),
\begin{eqnarray*}
\mbox{tor}_\Lambda(X(t_1, \dots, t_n)) &=&
\left(\prod_{m=1}^{n-1} D^{m t_m}_\Lambda\right)U^{-1}L \mod B_+,
\end{eqnarray*}
$t_j \in \mathbb{Z}$.
This amounts to the discretization of the torus action,
\begin{eqnarray} \label{torusact}
(\mathbb{R}^*)^n \times G/B_+ &\to & G/B_+\\ \nonumber
(r_1, \dots, r_n) \times U^{-1} L &\mapsto & \mbox{diag} (r_1, \dots, r_n) U^{-1} L,
\end{eqnarray}
explaining why $\mbox{tor}_\Lambda$ is referred to as the \textit{torus embedding}. Since $\mbox{tor}_\Lambda(\mathcal{H}_\Lambda)$ is a Zariski open subset of $G/B_+$, the closures of the orbits of
(\ref{torusact}) stratify $G/B_+$. These {\it torus strata} have interesting combinatorial applications
described in \cite{bib:ggms}. Their relevance for the Full Toda Lattice is considered in more detail in
\cite{bib:efs, bib:kw, bib:ks}. Here we will simply briefly elaborate on the feature that dFToda is the stroboscope of the completely integrable system described in \cite{bib:efs}.
The flag manifold has its own natural embedding into a large projective space whose coordinates are the minors of $\ell$-tuples of column vectors of a representative matrix $g \in G$. For $g$ in the image of $\mbox{tor}_\Lambda$, these are minors within $\ell$-tuples of column vectors of $U^{-1} L$ (equivalently, $\ell$-tuples of eigenvectors of $X$). As projective coordinates these minors are referred to as {\it Pl\"ucker coordinates} and, for those of interest to us, we may index them as follows. For an $(n-k)$-set $J \subset \{ 1, \dots, n\}$, $J_0 = \{ 1, \dots, n-k\}$, $\pi_J$ will denote the $k \times k$ minor of the first $k$ columns of $U^{-1}L$ with row indices in
$\{ 1, \dots, n\} \backslash J$ and $\pi^*_J$ is the $(n-k) \times (n-k)$ minor of the first $n-k$ columns of $U^{-1}L$ and rows from $J$. A key observation here is that under the torus action (\ref{torusact}) the product $\pi_J \pi^*_J$ transforms just by the factor $\prod_{i=1}^n r_i$, independent of $J$. Hence the rational function $\pi_J \pi^*_J/\pi_{J_0} \pi^*_{J_0}$ is invariant under the torus action, which is consistent with the restriction of the dynamics to the torus strata. Moreover, $\pi_J \pi^*_J/\pi_{J_0} \pi^*_{J_0} \circ \mbox{tor}_\Lambda$ is a rational function in the coordinates of $X$ which is invariant under full Toda and dFToda.
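This invariance is easy to confirm numerically. In the sketch below (ours; a generic random matrix stands in for $U^{-1}L$), each product $\pi_J\pi^*_J$ rescales by the same factor under the torus action, so the ratios against the $J_0$ coordinate are invariant:

```python
import numpy as np
from itertools import combinations

np.random.seed(5)
n, k = 5, 2
M = np.random.rand(n, n) + np.eye(n)       # stand-in for U^{-1} L
r = np.random.rand(n) + 0.5                # torus parameters r_1, ..., r_n
DM = np.diag(r) @ M

def pi(A, J):
    """k x k minor of the first k columns of A, rows outside the (n-k)-set J."""
    rows = [i for i in range(n) if i not in J]
    return np.linalg.det(A[np.ix_(rows, range(n - len(J)))])

def pi_star(A, J):
    """(n-k) x (n-k) minor of the first n-k columns of A, rows in J."""
    return np.linalg.det(A[np.ix_(sorted(J), range(len(J)))])

J0 = tuple(range(n - k))                   # J_0 = {1, ..., n-k}, 0-indexed
for J in combinations(range(n), n - k):
    # pi_J pi*_J picks up the J-independent factor prod(r) under the torus action ...
    assert np.isclose(pi(DM, J) * pi_star(DM, J), np.prod(r) * pi(M, J) * pi_star(M, J))
    # ... so the ratio against the J_0 coordinate is a torus invariant
    assert np.isclose(pi(DM, J) * pi_star(DM, J) / (pi(DM, J0) * pi_star(DM, J0)),
                      pi(M, J) * pi_star(M, J) / (pi(M, J0) * pi_star(M, J0)))
```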
\section{Tau functions and the Discrete Dynamics of Lusztig parameters}
\label{sec:LusDynam}
One can now ask if or how the torus orbit stratification under the torus embedding passes over to the Symes embedding. One can at least start by revisiting the relation between the principal and Symes embedding discussed in Section \ref{sec:homog}. This can be summarized as
\begin{eqnarray*}
\kappa_\Lambda: \mathcal{H}_\Lambda &\to& G/B_+\\
|| && \,\,\,\,\, \downarrow \mathcal{I}\\
S_\Lambda: \mathcal{H}_\Lambda &\to& G/B_+
\end{eqnarray*}
where
$$
\mathcal{I}(L_0) = L(0) = L_0^{-1} [\epsilon_\Lambda L_0]_-.
$$
The iterated dFToda dynamics is well-defined, making this diagram commute, if one starts with an initial matrix $ X(0) \in \mathcal{H}^{>0}_\Lambda $. One can now examine how the dynamics of dFToda distributes itself across the recursive
dToda structure described in section \ref{section:dfktodarecdtoda} and gets expressed on the factors of the Lusztig embedding (\ref{LusFlag}). Recall that the recursive dToda structure is given in terms of a sequence of time-dependent tridiagonal Hessenberg matrices, $H^{(i)}(t)$
defined in (\ref{defnhessenbergtruncs}).
The evolution is then given by a coupled dToda evolution,
\begin{align*}
H^{(i)}(t)&=T_i(t+1)R^{(i)}(t)=R^{(i-1)}(t)T_i(t)\\
H^{(i)}(t+1)&=R^{(i-1)}(t+1)T_i(t+1)\\
&=R^{(i-1)}(t+1)H^{(i)}(t)(R^{(i)}(t))^{-1}\\
&=R^{(i-1)}(t+1)R^{(i-1)}(t)H^{(i)}(t-1)(R^{(i)}(t-1))^{-1}(R^{(i)}(t))^{-1}\\
& \vdots \\
&= \prod^{1}_{s=t+1}R^{(i-1)}(s) H^{(i)}(0) \prod^{t}_{s=0}(R^{(i)}(s))^{-1}\\
&= \prod^{0}_{s=t+1}R^{(i-1)}(s) T_i(0) \prod^{t}_{s=0}(R^{(i)}(s))^{-1}.
\end{align*}
From this it is possible to inductively define the dynamics on the $R^{(i)}$ directly through the $\tau$-functions; i.e., independently of the dynamics on the $T_i$ or factorization.
\begin{eqnarray*}
\tau^i_j(t+1) &=& \Big\langle \prod^{0}_{s=t+1}R^{(i-1)}(s) T_i(0) \prod^{t}_{s=0}(R^{(i)}(s))^{-1} v^{(j)}, v^{(j)}\Big\rangle\\
&=& \prod^{t}_{s=0} (\tau^i_j(s))^{-1}\Big\langle \prod^{0}_{s=t+1}R^{(i-1)}(s) T_i(0) v^{(j)}, v^{(j)}\Big\rangle
\end{eqnarray*}
One can also describe this dynamics in terms of coordinates. As mentioned in Remark \ref{TPFlag}, $N_-^{>0}$ is an open dense cell in $\left( G/B_+\right)^{>0}$, invariant under dFToda,
and so the Lusztig parameters provide coordinates on this cell in terms of which the dynamics under the Symes embedding, $S_\Lambda$, may be described. This is already apparent from
(\ref{tauf}) which we will now expand on.
\medskip
Let $\mathcal{T}$ denote the triangular array of Lusztig parameters which we display, in Figure \ref{lusztignetwork}, as edge weights on a directed graph (edges are oriented in the downward and right directions and unlabelled edges will be taken to have weight 1).
\begin{rem} Figure \ref{lusztignetwork} provides a diagrammatic representation for the coordinatization of $N_-^{>0}$ by Lusztig parameters. The $a_{ij}$ entry of the corresponding lower unipotent matrix is given as the sum of multiplicative path weights from {\rm source} $i$ on the left to {\rm sink} $j$ on the right. Similarly the $D_{ii}$ give the diagonal entries of a general lower triangular matrix. This diagrammatic approach may also be used to calculate minors of lower triangular matrices \cite{bib:fz}.
\end{rem}
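To illustrate the parametrization (an illustration of ours, not a construction from the paper), the sketch below assembles a lower unipotent $L = T_1T_2T_3$ from positive Lusztig parameters and verifies by brute force that every minor is nonnegative, in line with the Loewner-Whitney theorem:

```python
import numpy as np
from itertools import combinations

np.random.seed(6)
n = 4

def T(j, u):
    """Bidiagonal Lusztig factor T_j: ones on the diagonal, the parameters
    u = (u_1^j, u_2^{j+1}, ...) on the first n-j subdiagonal slots."""
    M = np.eye(n)
    for i, ui in enumerate(u):
        M[i + 1, i] = ui
    return M

L = T(1, np.random.rand(3)) @ T(2, np.random.rand(2)) @ T(3, np.random.rand(1))

assert np.allclose(np.diag(L), 1.0) and np.allclose(np.triu(L, 1), 0.0)
minors = [np.linalg.det(L[np.ix_(r, c)])
          for k in range(1, n + 1)
          for r in combinations(range(n), k)
          for c in combinations(range(n), k)]
assert min(minors) >= -1e-12               # L is totally nonnegative
```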
Now we describe the dFToda dynamics in terms of the evolution of these Lusztig parameters. This can be presented in terms of the tridiagonal decomposition into the $H^{(i)}$ as described in Definition \ref{def:dft}. In these terms, here is the sequence of moves that gives a time-step in the evolution of the Lusztig parameters:
\begin{itemize}
\item Stage 0: One begins with the initial Hessenberg matrix $X(0) = L(0)R(0)$ where by Theorem \ref{theorem:bfz} one has the unique factorization
$L(0) = T_1(u(0))T_2(u(0))\cdots T_{n-1}(u(0))$ and $R(0) = R^{(0)}(t=0)$ is determined by the principal minors, $\tau^0_j(0)$, of $X(0)$ as in (\ref{R-repn}).
\item Stage 1: One has
$H^{(i)}(t):=R^{(i-1)}(t)T_i(t)$ where the Lusztig parameters $\vec{u}(i,t)$ of $T_i(t)$ are those appearing in the $i^{th}$ ``column" of Figure \ref{lusztignetwork}. As in Stage 0, the principal minors, $\tau^i_j(t)$, of $H^{(i)}(t)$ determine $R^{(i)}(t)$.
\item Stage 2: Then, by Definition \ref{def:dft}(ii), $T_i(t+1) = H^{(i)}(t) (R^{(i)}(t))^{-1}$ which updates the Lusztig parameters in the $i^{th}$ column. Explicitly, in the $i^{th}$ column, the update is given by
(\ref{tauf})
$$u_j^{i+j-1}(t+1) = u_j^{i+j-1}(t) \frac{\tau_{j+1}^{i-1}(t) \tau_{j-1}^i(t)}{\tau_{j}^{i-1}(t) \tau_{j}^i(t)}$$
\item Stage 3: Finally, applying (\ref{taus}), one has
$\tau_{j}^{i-1}(t+1) = \tau_j\left( R(t+1) T_1(u(t+1)) T_2(u(t+1)) \cdots T_{i-1}(u(t+1)) \right)$ which determines $R^{(i-1)}(t+1)$.
$R(t+1)$ is determined ab initio for all $t$ since $\tau^0_j(t)$ is determined by the minors of $X(0)$ and $\epsilon_\Lambda$ \cite{bib:er}. Thus we have inductively determined $$H^{(i)}(t+1):=R^{(i-1)}(t+1)T_i(t+1)$$
completing the step from $t$ to $t+1$ and setting the stage for the subsequent update.
\end{itemize}
\begin{figure}[H]
\centering
\tikz[scale=1.4]{
\foreach \y in {1,2,3,4,5}{
\draw[-] (0,\y) -- (7.5,\y);
\node at (-0.3,\y) {\y};
\node at (7.8,\y) {\y};
\foreach \x in {0,1,2,3,4,5}{
\node at (1.5*\x,\y) {$\bullet$};
}
}
\draw[-] (0,2) -- (1.5,1);
\draw[-] (0,3) -- (3,1);
\draw[-] (0,4) -- (4.5,1);
\draw[-] (0,5) -- (6,1);
\node at (0.75,5.2) {$D_{55}$};
\node at (0.9,4.6) {$u_4^4$};
\node at (0.9,3.6) {$u_3^3$};
\node at (0.9,2.6) {$u_2^2$};
\node at (0.9,1.6) {$u_1^1$};
\node at (2.25,4.2) {$D_{44}$};
\node at (2.4,3.6) {$u_3^4$};
\node at (2.4,2.6) {$u_2^3$};
\node at (2.4,1.6) {$u_1^2$};
\node at (3.75,3.2) {$D_{33}$};
\node at (3.9,2.6) {$u_2^4$};
\node at (3.9,1.6) {$u_1^3$};
\node at (5.25,2.2) {$D_{22}$};
\node at (5.4,1.6) {$u_1^4$};
\node at (6.75,1.2) {$D_{11}$};
}
\caption{A network diagram for Lusztig parameters}\label{lusztignetwork}
\end{figure}
\section{Return to Full Kostant Toda and Continuous Dynamics} \label{sec:return}
Having completed our analysis of the discrete and ultradiscrete Toda systems it is natural to ask if that analysis provides any new insights into the continuous FKToda system,
(\ref{Lax2}), from which these were derived. That is indeed the case and that is what will be described in this section. In particular we derive a direct representation of the FKToda flow in terms of Lusztig parameters which in some sense is the analogue of the Lusztig parameter dynamics for dFToda described in section \ref{sec:LusDynam}. The reader should recall the group and algebra projections, $\Pi_\pm$ and $\pi_\pm$ respectively, defined in
section \ref{sec:kostant} which will be used throughout this section.
\medskip
Recall from Remark \ref{tphiflow} that if $L_0$ is totally positive then
$$
L_s \doteq \Pi_-(e^{s \epsilon_\Lambda} L_{0})
$$ remains totally positive for all (continuous) $s$. This is the FKToda flow represented under the principal embedding, (\ref{eq:principal})--(\ref{linprinc}), as a flow on $(G/B_+)^{>0}$. This implicitly defines the FKToda dynamics as a dynamics on Lusztig parameters. We will now make this parameter dynamics explicit.
We work with the equations
\begin{eqnarray} \label{OC1}
e^{s\epsilon_\Lambda} L_{0} &=& L_s R_s\\ \label{OC2}
\epsilon_\Lambda e^{s\epsilon_\Lambda} L_{0} &=& \frac{d}{ds}\left(L_s R_s\right).
\end{eqnarray}
\n Noting that
\begin{eqnarray} \label{OC3}
L_s &=& T_1(s)\cdots T_{n-1}(s)
\end{eqnarray}
is the unique Lusztig factorization of $L_s$, noting that the left-hand side of (\ref{OC2}) is $\epsilon_\Lambda L_sR_s$, and substituting this factorization in for $L_s$, we obtain the following equation:
$$\epsilon_\Lambda T_1(s)\cdots T_{n-1}(s) R_s = \dfrac{d}{ds}(T_1(s)\cdots T_{n-1}(s)R_s).$$
Applying the product rule, then multiplying on the left by $L_s^{-1}=T^{-1}_{n-1}(s)\cdots T^{-1}_{1}(s)$ and on the right by $R_s^{-1}$, yields
\begin{eqnarray} \label{eqn:dKLe}
T^{-1}_{n-1}\cdots T^{-1}_1 \epsilon_\Lambda T_1\cdots T_{n-1} &=&
\sum_{j=1}^{n-1}\left(T^{-1}_{n-1}\cdots T^{-1}_{j+1} \left(T^{-1}_{j} \frac{d}{ds}T_j\right) T_{j+1}\cdots T_{n-1}\right) + \left(\frac{d}{ds}R\right) R^{-1}.
\end{eqnarray}
Now assume there are upper bidiagonal matrices $R^{(i)}, i = 0,\dots , n-1 $, with $R^{(0)} = \epsilon_\Lambda$, satisfying
\begin{eqnarray} \label{eqn:dLe}
\frac{d}{ds}T_j &=& R^{(j-1)} T_j
- T_j R^{(j)}\\ \label{eqn:findLe}
\left(\frac{d}{ds}R\right) R^{-1} &=& R^{(n-1)}.
\end{eqnarray}
The equations (\ref{eqn:dLe}) are equivalent to
\begin{eqnarray} \label{eqn:split}
T^{-1}_{j}\frac{d}{ds}T_j = T^{-1}_{j}R^{(j-1)} T_j
- R^{(j)}.
\end{eqnarray}
Substituting these latter, inductively on $j$, into (\ref{eqn:dKLe}), reduces
(\ref{eqn:dKLe}) to
(\ref{eqn:findLe}). (Note that $R(s)$ is not necessarily bidiagonal Hessenberg.)
It follows from (\ref{eqn:split}) that the $R^{(j)}, j>0,$ must, inductively, satisfy
\begin{eqnarray} \label{Rjandjminusone}
R^{(j)} &=& \pi_+ \left( T^{-1}_{j}R^{(j-1)} T_j \right)\\
\label{Rj}
&=& \pi_+ \left( T^{-1}_{j} \cdots T_1^{-1} \epsilon_\Lambda T_1 \cdots T_j \right)
\end{eqnarray}
where the $T_j(s)$ are given by (\ref{OC3}).
To then verify that
(\ref{eqn:findLe}) holds, first observe that
\begin{eqnarray} \nonumber
R^{(n-1)} &=& \pi_+ \left( T^{-1}_{n-1} \cdots T_1^{-1} \epsilon_\Lambda T_1 \cdots T_{n-1} \right)\\ \nonumber
&=&\pi_+ \left( L_s^{-1} \epsilon_\Lambda L_s\right)\\
\label{plusproj}
&=& \pi_+ \left( X(s) \right).
\end{eqnarray}
where, by Theorem \ref{factorisationthmbkg}, $X(s)$ solves FKToda with initial condition
$X(0) = L_0^{-1} \epsilon_\Lambda L_0$.
We recall from that theorem the construction of the factorization solution of FKToda: we set
$L(s) = L_{0}^{-1}L_s$ and
differentiate the relation
\begin{eqnarray*}
e^{sX} &=& L(s) R(s)\\
X e^{sX} &=& \frac{dL}{ds} R + L \frac{dR}{ds}\\
X L(s) R(s) &=& \frac{dL}{ds} R + L \frac{dR}{ds}\\
L^{-1}(s) X L(s)
&=& L^{-1}(s) \frac{dL}{ds} + \frac{dR}{ds} R^{-1}(s)\\
X(s) &=& L^{-1}(s) \frac{dL}{ds} + \frac{dR}{ds} R^{-1}(s).
\end{eqnarray*}
From this it is clear that
$$
\pi_+ \left( X( s)\right) = \frac{dR}{ds} R^{-1}(s).
$$
Comparing this to (\ref{plusproj})
verifies (\ref{eqn:findLe}).
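The identity $\pi_+\left(X(s)\right) = \frac{dR}{ds}R^{-1}$ can also be checked numerically by finite differences (a sketch of ours; the step size and data are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

def lu_nopivot(A):
    """A = L R, L lower unipotent, R upper triangular (no pivoting)."""
    n = A.shape[0]
    L, R = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = R[i, k] / R[k, k]
            R[i, :] -= L[i, k] * R[k, :]
    return L, R

np.random.seed(7)
n = 4
lam = np.array([3.0, 2.0, 1.0, 0.5])
eps_Lam = np.diag(lam) + np.diag(np.ones(n - 1), 1)
L0 = np.eye(n) + np.tril(np.random.rand(n, n), -1)
X0 = np.linalg.solve(L0, eps_Lam @ L0)

s, h = 0.4, 1e-5
L, R = lu_nopivot(expm(s * X0))                # e^{sX0} = L(s) R(s)
Xs = np.linalg.solve(L, X0 @ L)                # X(s) = L(s)^{-1} X0 L(s)
dR = (lu_nopivot(expm((s + h) * X0))[1]
      - lu_nopivot(expm((s - h) * X0))[1]) / (2 * h)   # central difference for dR/ds
assert np.allclose(np.triu(Xs), dR @ np.linalg.inv(R), atol=1e-6)
```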
\begin{rem}
It is worth noting that $L(s)$ represents the continuous analogue of
the relation between the discrete principal and Symes embeddings described in Section \ref{sec:homog}. This suggests a relation between the two in the nature of the latter corresponding to B\"acklund transformations for the former.
\end{rem}
We now turn to the derivation, based on
(\ref{eqn:dLe})--(\ref{eqn:findLe}), of explicit ODEs for the evolution of the Lusztig parameters. Starting with $j=1$,
equation (\ref{eqn:split}) becomes
\begin{eqnarray*}
T^{-1}_{1}\frac{d}{ds}T_1 &=& T^{-1}_{1} \epsilon_\Lambda T_1
- R^{(1)}\\
&=& T^{-1}_{1} \epsilon_\Lambda T_1
- \pi_+(T^{-1}_{1} \epsilon_\Lambda T_1)\\
&=& \pi_-(T^{-1}_{1} \epsilon_\Lambda T_1)\\
\frac{d}{ds}T_1 &=& T_{1}\pi_-(T^{-1}_{1}\epsilon_\Lambda T_1).
\end{eqnarray*}
In general one knows from (\ref{Rj}) and Kostant's theorem that $R^{(i)}$ is upper Hessenberg. Furthermore, from the lower bidiagonal form of $T_j$, one has that the only non-zero entries of $\frac{d}{ds} T_j$ are those of height $-1$.\\
Hence the equation
\begin{eqnarray}
\frac{d}{ds}T_j = T_{j}\pi_-(T^{-1}_{j}R^{(j-1)} T_j)\label{TjandRjminusone}
\end{eqnarray}
only yields non-trivial equations
from the principal sub-diagonals of each side, which is a set of $n-j$ scalar ODEs:
\begin{eqnarray*}
\left(\frac{d}{ds}T_j\right)_{i+1,i} &=& \left(T_{j}\pi_-(T^{-1}_{j}R^{(j-1)} T_j)\right)_{i+1,i}\\
&=& \left(\pi_-(T^{-1}_{j}R^{(j-1)} T_j)\right)_{i+1,i}; \,\,\,\, i = 1, \dots, n-j.
\end{eqnarray*}
The second equality follows since $T_j$ is lower unipotent; hence, multiplying by it on the left does not alter the entries on the principal sub-diagonal of $\pi_-(T^{-1}_{j}R^{(j-1)} T_j)$.
\begin{rem}\label{rem:extlusztig}
In what follows, for convenience, we extend the Lusztig coordinates $(u_i^j)_{1\leq i\leq j\leq n-1}$ to $(u_i^j)_{i,j\in\Z}$ by declaring $u_i^j=0$ for $i$ and $j$ not satisfying $1\leq i\leq j\leq n-1$.
\end{rem}
\begin{thm}\label{thm:matrixandcoordrepoffktoda}
Equations \ref{Rjandjminusone} and \ref{TjandRjminusone}, along with the initialisation $R^{(0)}=\epsilon_\Lambda$, are equivalent to the following equations:
\begin{equation}\frac{d}{ds}T_j=[T_j,T_j\epsilon-R^{(j-1)}],~~~~R^{(j)}=\epsilon_\Lambda+\left[\epsilon,\sum_{i=1}^j T_i\right]\label{eq:commutatorsfulltoda}\end{equation}
which are equivalent to the following equations for the Lusztig coordinates:
\begin{eqnarray}
\dfrac{\dot{u}_i^{j+i-1}}{u_i^{j+i-1}} &=& R_{i+1,i+1}^{(j-1)}-R_{i,i}^{(j-1)}+u_{i-1}^{j+i-2}-u_i^{j+i-1},~~~1\leq j\leq n-1,~~~1\leq i\leq n-j\label{inductiveueqn}\\
R_{i,i}^{(j)} &=& R_{i,i}^{(j-1)}+u_i^{j+i-1}-u_{i-1}^{j+i-2},~~~1\leq j\leq n-1,~~~1\leq i\leq n,\label{inductiveReqn}
\end{eqnarray}
\end{thm}
\begin{proof}
Since $T_j(s)$ is lower unipotent, one has that $T_{j,-}(s):=T_j(s)-\mathbb{I}_n$ is lower nilpotent, with $(T_{j,-}(s))^n=0$. It therefore follows that the inverse of $T_j(s)$ can be expressed as follows:
$$T_j^{-1}=\mathbb{I}_n-T_{j,-}+T_{j,-}^2-\cdots +(-1)^{n-1}T_{j,-}^{n-1}.$$
We now decompose $R^{(j-1)}$ as $\epsilon+R^{(j-1)}_\Delta$, where $R^{(j-1)}_\Delta$ is a diagonal matrix. We therefore write $T_j^{-1}R^{(j-1)}T_j$ as
$$\left(\mathbb{I}_n-T_{j,-}+T_{j,-}^2-\cdots +(-1)^{n-1}T_{j,-}^{n-1}\right)(\epsilon +(\epsilon T_{j,-}+R^{(j-1)}_\Delta) + R^{(j-1)}_\Delta T_{j,-})$$
where each single term in the above is a matrix whose entries lie solely along some diagonal of a given height.\\
From the above, one sees that
\begin{enumerate}[(i)]
\item The principal superdiagonal of $T_j^{-1}R^{(j-1)}T_j$ is $\epsilon$, as expected.
\item The diagonal of $T_j^{-1}R^{(j-1)}T_j$ is $\epsilon T_{j,-}+R^{(j-1)}_\Delta-T_{j,-}\epsilon$.
\item The principal subdiagonal of $T_j^{-1}R^{(j-1)}T_j$ is $R^{(j-1)}_\Delta T_{j,-}-T_{j,-}(\epsilon T_{j,-}+R^{(j-1)}_\Delta)+T_{j,-}^2\epsilon$
\item Likewise, the subdiagonals below the principal subdiagonal can be extracted similarly.
\end{enumerate}
We now claim that $T_j[T_j^{-1}R^{(j-1)}T_j]_-$ has nonzero entries only on its principal subdiagonal. What is clear \textit{a priori} is that this matrix is strictly lower triangular since it is the product of a lower triangular matrix, $T_j$, and the strictly lower triangular matrix $[T_j^{-1}R^{(j-1)}T_j]_-$. We now decompose the matrix into its upper and strictly lower projections:
$$T_j^{-1}R^{(j-1)}T_j=\pi_+(T_j^{-1}R^{(j-1)}T_j)+\pi_-(T_j^{-1}R^{(j-1)}T_j).$$
Multiplying both sides on the left by $T_j$ and rearranging yields
\begin{equation}T_j\pi_-(T_j^{-1}R^{(j-1)}T_j)=R^{(j-1)}T_j-T_j\pi_+(T_j^{-1}R^{(j-1)}T_j).\label{eq:tjprojconjdecomp}\end{equation}
Now, we observe that $R^{(j-1)}T_j$, being the product of an upper bidiagonal matrix and a lower bidiagonal matrix, has no nonzero entries below the principal subdiagonal. Likewise, $T_j\pi_+(T_j^{-1}R^{(j-1)}T_j)$, being the product of a lower bidiagonal matrix and an upper bidiagonal matrix, also has no nonzero entries below the principal subdiagonal. Therefore, by Equation \ref{eq:tjprojconjdecomp}, the strictly lower triangular matrix $T_j\pi_-(T_j^{-1}R^{(j-1)}T_j)$ has no nonzero entries below its principal subdiagonal, and is therefore itself a principal subdiagonal matrix.\\
\n Now we may proceed to unpack Equations \ref{TjandRjminusone} and \ref{Rjandjminusone}. Firstly, since $T_j$ is lower unipotent, the principal subdiagonal of $T_j\pi_-(T_j^{-1}R^{(j-1)}T_j)$ is equal to the principal subdiagonal of $\pi_-(T_j^{-1}R^{(j-1)}T_j)$. Thus, Equation \ref{TjandRjminusone} is equivalent to
\begin{equation}
\frac{d}{ds}T_j =R^{(j-1)}_\Delta T_{j,-}-T_{j,-}(\epsilon T_{j,-}+R^{(j-1)}_\Delta)+T_{j,-}^2\epsilon\label{eq:derivTjdiags}
\end{equation}
\n Therefore,
\begin{equation}\dot{u}_i^{i+j-1}=R_{i+1,i+1}^{(j-1)}u_i^{i+j-1}-(u_i^{i+j-1})^2-u_i^{i+j-1}R_{i,i}^{(j-1)}+u_i^{i+j-1}u_{i-1}^{i+j-2}\label{eq:lusztigder}\end{equation}
where, when $i=1$, the last term $u_0^{j-1}$ is understood to be zero (Remark \ref{rem:extlusztig}). Likewise, also using the convention set forth in Remark \ref{rem:extlusztig}, when $i>n-j$, $u_i^{i+j-1}=0$, so that Equation \ref{eq:lusztigder} becomes vacuous when beyond this bound. Dividing Equation \ref{eq:lusztigder} by $u_i^{i+j-1}$ and reordering terms yields Equation \ref{inductiveueqn} precisely.\\
\n Turning our attention to Equation \ref{Rjandjminusone}, we recall that $T_j^{-1}R^{(j-1)}T_j$ is Hessenberg, hence $\pi_+(T_j^{-1}R^{(j-1)}T_j)$ consists of the superdiagonal and diagonal of $T_j^{-1}R^{(j-1)}T_j$. By the analysis above, namely items (i) and (ii), Equation \ref{Rjandjminusone} yields
\begin{equation}
R^{(j)}_{i,i}=\left(\epsilon T_{j,-}+R^{(j-1)}_\Delta-T_{j,-}\epsilon\right)_{i,i}=u_i^{i+j-1}+R_{i,i}^{(j-1)}-u_{i-1}^{i+j-2}\label{eq:Rsbydiags}
\end{equation}
which gives Equation \ref{inductiveReqn}.\\
\n Finally, to see the commutator expressions for the evolutions of $T_j$ and $R^{(j)}$, we note that Equation \ref{eq:derivTjdiags} can immediately be written as
$$\frac{d}{ds}T_{j,-}=[T_{j,-},~T_{j,-}\epsilon-R^{(j-1)}_\Delta].$$
Next, we note that we can add the identity inside the derivative, as well as in the first argument of the commutator without affecting either the derivative or commutator, and we may also add and subtract $\epsilon$ in the second argument:
$$\frac{d}{ds}T_j=[T_j,~T_{j,-}\epsilon+\epsilon-\epsilon-R^{(j-1)}_\Delta]=[T_j,~T_j\epsilon-R^{(j-1)}],$$
which is the first equation of \ref{eq:commutatorsfulltoda}. For the second equation of \ref{eq:commutatorsfulltoda}, we see that Equation \ref{eq:Rsbydiags} can be expressed as
$$R^{(j)}_\Delta=R^{(j-1)}_\Delta +[\epsilon,T_{j,-}].$$
We add $\epsilon$ to both sides, and add the identity matrix in the second argument of the commutator to obtain
$$R^{(j)}=R^{(j-1)} +[\epsilon,T_{j}]=R^{(j-2)} +[\epsilon,T_{j}]+[\epsilon,T_{j-1}]=\cdots =R^{(0)}+\sum_{i=1}^j [\epsilon,T_{i}].$$
The second equation of \ref{eq:commutatorsfulltoda} follows from this by recalling that $R^{(0)}=\epsilon_\Lambda$ and by using linearity of the commutator in the second argument.
\end{proof}
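At a fixed time, the recursions of Theorem \ref{thm:matrixandcoordrepoffktoda} can be verified numerically. The sketch below (ours) draws random positive Lusztig parameters, forms $R^{(j)} = \pi_+\left(T_j^{-1}R^{(j-1)}T_j\right)$, and checks both the commutator formula in (\ref{eq:commutatorsfulltoda}) and the diagonal recursion (\ref{inductiveReqn}):

```python
import numpy as np

np.random.seed(8)
n = 4
lam = np.array([3.0, 2.0, 1.0, 0.5])
eps = np.diag(np.ones(n - 1), 1)
eps_Lam = np.diag(lam) + eps

u = {j: np.random.rand(n - j) + 0.1 for j in range(1, n)}   # u[j][i-1] = u_i^{i+j-1}
T = {}
for j in range(1, n):
    T[j] = np.eye(n)
    for i in range(n - j):
        T[j][i + 1, i] = u[j][i]

R = {0: eps_Lam}
for j in range(1, n):
    conj = np.linalg.solve(T[j], R[j - 1] @ T[j])   # T_j^{-1} R^{(j-1)} T_j, upper Hessenberg
    R[j] = np.triu(conj)                            # pi_+ projection
    S = sum(T[i] for i in range(1, j + 1))
    assert np.allclose(R[j], eps_Lam + eps @ S - S @ eps)   # R^{(j)} = eps_Lam + [eps, sum T_i]
    for i in range(n):                              # diagonal recursion, 0-indexed rows
        t1 = u[j][i] if i < n - j else 0.0          # u_{i+1}^{j+i} in the 1-indexed notation
        t2 = u[j][i - 1] if 1 <= i <= n - j else 0.0
        assert np.isclose(R[j][i, i], R[j - 1][i, i] + t1 - t2)
```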
We show in Appendix \ref{appendixb}
how Equations \ref{inductiveueqn} and \ref{inductiveReqn} can be used for an alternative derivation of O'Connell's system of differential equations in (6.4) of \cite{bib:o}.
\section{Concluding Remarks} \label{sec:conclusions}
\subsection{Connections to the Literature}\label{sec:connectionsliterature}
\n The following equations in \cite{bib:sik} describe their dhToda dynamics:
\begin{equation}\label{shinjohungrytodaeqs}
\left\{
\begin{array}{l}
Q_1^{(s,n)}=q_1^{(s,n)}-\mu^{(n)},\\
q_k^{(s,n+1)}+Q_{k+1}^{(s,n)}=q_{k+1}^{(s,n)}+Q_k^{(s+M,n)},~~~k=1,2,\ldots,m-1,\\
q_k^{(s,n+1)}Q_k^{(s,n)}=q_k^{(s,n)}Q_k^{(s+M,n)},~~~k=1,2,\ldots,m,\\
e_{k-1}^{(s,n+1)}+Q_k^{(s+1,n)}=e_k^{(s,n)}+Q_k^{(s,n)},~~~k=1,2,\ldots,m,\\
e_{k}^{(s,n+1)}Q_k^{(s+1,n)}=e_{k}^{(s,n)}Q_{k+1}^{(s+1,n)},~~~k=1,2,\ldots,m-1\\
e_0^{(s,n)}:=0,~~~e_m^{(s,n)}:=0,
\end{array}
\right.
\end{equation}
\n In a coupled matrix representation, the dynamic evolution is achieved by a combination of iterated lower-upper factorizations, each of which is a regular dToda evolution, followed by an additional factoring algorithm between two upper bidiagonal matrices (one for the $q$-variables and another for the $Q$-variables). Taking the sequence $(\mu^{(n)})_{n\in\N}$ to be the zero sequence forces $Q_k^{(s,n)}=q_k^{(s,n)}$, which can be seen inductively from the first three equations in \ref{shinjohungrytodaeqs}. The resulting system is still a little more general than the dFToda system we present in this paper, but choosing to set some additional $e$-variables equal to zero restricts this system to dFToda.\\
The approach of \cite{bib:sik} is based on extended Hankel determinants that are related to a (discrete) time variation of the measure of orthogonality for families of biorthogonal polynomials. We mention that the connection between biorthogonal polynomials and FKToda was already noticed by Ercolani and McLaughlin in [EM01]. The approach of this paper, based on a direct embedding of Symes's algorithm into full Toda, avoids the need to consider any ancillary determinants and leads to a more direct derivation of the relations between FKToda and dFKToda. In addition, the approach of \cite{bib:sik} is formal and this makes it difficult to precisely characterize under what conditions the systems considered are well-posed. The derivations in this paper, based on the Lusztig parametrization, make it possible to rigorously establish a completely precise characterization of when dFToda is well-posed (Theorem \ref{thm:wp}). Furthermore, the constructions introduced in section \ref{geometry} reveal how many of the results in this paper may be generalized to Full generalized Toda lattices associated to more general real semisimple Lie algebras (see Remark \ref{rem:Lie}).
It will be of interest to consider possible extensions of the results here to the more general dhToda systems described in \cite{bib:sik} which have connections to numerical eigenvalue algorithms. That work also points out intriguing connections to ecological models known as discrete hungry Lotka-Volterra systems and their continuum analogs. Furthermore, in ultradiscrete versions of discrete hungry Toda \cite{bib:tns}, an advanced box-ball system is presented, involving labelling balls and moving them according to a label-determined order of priority. In these treatments, the key focus regarding solitonic behaviour is the so-called \textit{soliton scattering rule}, which describes how labels redistribute amongst blocks under the ultradiscrete hungry Toda evolution, as well as associated conserved quantities of this particular cellular automaton realisation of the dynamics. The system we introduced in section \ref{sec:ud} is simpler in the sense that it does not require additional structures beyond standard BBS (\textit{i.e.}, labellings, higher capacities, etc.); nevertheless, the fact that the discrete full Toda lattice may be obtained as a particular special case of the discrete hungry Toda molecule suggests that the general ultradiscrete systems may lead to further interesting extensions of what we have done in this paper, such as Lie-theoretic interpretations.
~\\
\noindent We have noted that O'Connell derived ODEs for the induced dynamics of the Toda flow on Lusztig parameters. We have compared this to the FKToda flow on parameters that we derived in section \ref{sec:return}. From this we are able to directly recover O'Connell's equations (see Appendix \ref{appendixb}). What we do is in many ways a more direct and elementary derivation of these ODEs; however, O'Connell is able to extend these to stochastic equations for reflected Brownian motions on the line, with many deep and fascinating applications to processes that break open new territory. It would be interesting to explore similar applications for our approach. In particular, it may be promising to interpret dFToda as B\"acklund transformations for FKToda in a stochastic setting.
\subsection{Future Directions}
The classical Toda lattice has been central to developments and applications in many areas of mathematics ranging from representation theory to quantum mechanics \cite{bib:howe}, from Hamiltonian dynamics/symplectic geometry to statistical mechanics \cite{bib:spohn} and from combinatorics to random geometry \cite{bib:ew}. We believe the developments described in this paper set the stage for new developments and applications in related veins. Here we just mention a couple of those directions (by no means exhaustive) that it would be natural to pursue in the near future.
\subsubsection{Integrability and Lie Theoretic Generalizations}
One such direction concerns the complete integrability of FKToda and its discretizations, carried out in an extended, fully Lie-theoretic setting. While the eigenvalues of $X_0$ provide just enough constants of motion (see (\ref{Isospectral})) to establish the complete integrability of the
tridiagonal Toda lattice, they do not suffice for the Full Toda lattice. Finding additional invariants relies on reformulating the Full Toda equations as a Hamiltonian system with respect to a geometrically defined Poisson bracket that meshes naturally with the geometry of the flag manifolds just discussed. This Poisson structure is based on the (co-adjoint) orbit method of Kirillov-Kostant
(K-K). The terminology stems from the fact that the Lie bracket on the Lie algebra $\frak{h}$ of a general Lie group $H$ induces a natural Poisson bracket on the dual space $\frak{h}^*$. The orbits on this dual (called co-adjoint orbits) induced by the conjugation action of $H$ on $\frak{h}$ are the symplectic leaves (submanifolds of $\frak{h}^*$ on which the Poisson bracket restricts to be non-degenerate).
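To make the role of the eigenvalues as constants of motion concrete, here is a small self-contained numerical check. It integrates the symmetric form of the Toda flow, $\dot X = [B(X), X]$ with $B(X)$ the skew-symmetric projection (strict lower minus strict upper triangle); this is a standard variant rather than the Hessenberg form used in the body of the paper, and the RK4 integrator and tolerances below are purely illustrative. The spectral invariants $\tr X^2$ and $\tr X^3$ are conserved along the flow.

```python
# Check numerically that the symmetric Toda flow  dX/dt = [B(X), X],
# with B(X) = (strict lower part) - (strict upper part), is isospectral:
# the spectral invariants tr X^2 and tr X^3 are conserved.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):  return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
def sub(A, B):  return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
def scal(c, A): return [[c * a for a in r] for r in A]
def tr(A):      return sum(A[i][i] for i in range(len(A)))

def rhs(X):     # Lax right-hand side [B(X), X]
    n = len(X)
    B = [[X[i][j] if i > j else (-X[i][j] if i < j else 0.0)
          for j in range(n)] for i in range(n)]
    return sub(matmul(B, X), matmul(X, B))

X = [[2.0, 1.0, 0.3],
     [1.0, 0.0, 1.0],
     [0.3, 1.0, -1.0]]
I2, I3 = tr(matmul(X, X)), tr(matmul(X, matmul(X, X)))
dt = 1e-3
for _ in range(2000):                 # classical RK4, integrate to t = 2
    k1 = rhs(X)
    k2 = rhs(add(X, scal(dt / 2, k1)))
    k3 = rhs(add(X, scal(dt / 2, k2)))
    k4 = rhs(add(X, scal(dt, k3)))
    X = add(X, scal(dt / 6, add(add(k1, scal(2, k2)), add(scal(2, k3), k4))))
J2, J3 = tr(matmul(X, X)), tr(matmul(X, matmul(X, X)))
assert abs(J2 - I2) < 1e-6 and abs(J3 - I3) < 1e-6
print("ok")
```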
Using the $G$-invariant, non-degenerate inner product $\langle X, Y\rangle = \tr(XY)$ on $\frak{g}$, the dual space $\frak{b}_+^*$ may be identified with $\mathcal{H}$. The K-K Poisson bracket on $\mathcal{H}$ may then be expressed concretely, for functions $f, g$ on $\frak{g}$, as
\begin{eqnarray} \label{KKPB}
\{\tilde{f}, \tilde{g} \}(X) &=& \langle X, \left[ \pi_+ \nabla f(X), \pi_+ \nabla g(X)\right]\rangle
\end{eqnarray}
where $\tilde{f} = f|_{\mathcal{H}}$ and $\nabla$ denotes the gradient with respect to the inner product on $\frak{g}$. The tridiagonal elements of $\mathcal{H}$, with $b_i > 0$, comprise a coadjoint orbit of $B_+$ in $\frak{b}_+^*$ and the restriction of
(\ref{KKPB}) to these matrices does indeed yield a bracket equivalent to the standard one for the Toda lattice equations of (\ref{hamiltonian}). Moreover, the Hamiltonian for Full Toda is the same as that for the tridiagonal case in Flaschka's coordinates: $\tr X^2$. So despite its initially somewhat complicated appearance, the $\frak{b}_+^*$ K-K orbit structure is the correct generalization of the Hamiltonian structure for Full Toda.
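The claim that $\tr X^2$ reproduces the tridiagonal Hamiltonian can be checked mechanically. The sketch below assumes a common normalization of Flaschka's change of variables, $b_i = -p_i/2$ and $a_i = \tfrac12 e^{(q_i-q_{i+1})/2}$, together with $H = \tfrac12\sum_i p_i^2 + \sum_j e^{q_j - q_{j+1}}$ (conventions which may differ from (\ref{hamiltonian}) by inessential rescalings); with these choices $\tr X^2 = H/2$ identically.

```python
# With Flaschka variables b_i = -p_i/2, a_i = (1/2) exp((q_i - q_{i+1})/2)
# (an assumed, common normalization), tr X^2 for the symmetric tridiagonal
# Lax matrix equals half of H = (1/2) sum p_i^2 + sum exp(q_i - q_{i+1}).
import math, random

def trX2(p, q):
    n = len(p)
    b = [-pi / 2 for pi in p]                                           # diagonal
    a = [0.5 * math.exp((q[i] - q[i + 1]) / 2) for i in range(n - 1)]   # off-diagonal
    return sum(bi * bi for bi in b) + 2 * sum(ai * ai for ai in a)

def H(p, q):
    return 0.5 * sum(pi * pi for pi in p) + \
           sum(math.exp(q[i] - q[i + 1]) for i in range(len(q) - 1))

random.seed(0)
for _ in range(5):                    # the identity holds at random phase points
    p = [random.uniform(-1, 1) for _ in range(4)]
    q = [random.uniform(-1, 1) for _ in range(4)]
    assert abs(trX2(p, q) - 0.5 * H(p, q)) < 1e-12
print("ok")
```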
The additional constants of motion needed to completely integrate Full Toda are constructed by considering nested sequences
of parabolic subalgebras $\frak{p}_{k}$ of $\frak{g}$ (a parabolic subalgebra is one that contains $\frak{b}_+$) with corresponding Lie group $P_k$:
\begin{eqnarray} \nonumber
\frak{b}_+ &=& \frak{p}_{m+1} \subset \cdots \subset \frak{p}_{k} \subset \cdots \subset \frak{p}_{0} = \frak{g} \\ \label{parabolic}
\frak{b}_+^* &=& \frak{p}_{m+1}^* \leftarrow \cdots \leftarrow \frak{p}_{k}^* \leftarrow \cdots \leftarrow \frak{p}_{0}^* = \frak{g}^*
\end{eqnarray}
in which the maps on the second line are the projections induced by duality from the inclusions on the first line. Those projections are {\it Poisson maps}, meaning that the pullback of the K-K Poisson bracket of two functions on
$\frak{p}_{k-1}^*$ equals the K-K Poisson bracket of their pullbacks on $\frak{p}_{k}^*$.
One can show, based on the Factorization theorem, that functions on $\frak{p}_{k-1}^*$ invariant under the coadjoint action of
$P_{k-1}$ Poisson commute with {\it all} functions on $\frak{p}_{k-1}^*$. Since the projections in (\ref{parabolic}) are all Poisson maps, one can collectively pull back all these sequentially invariant functions to
$\frak{g}^*$ to form a family of involutive (commuting) functions there. Then restricting this family to $\frak{b}_+^*$ will, by duality, give an involutive family of functions on $\mathcal{H}$ that are constants of motion for Full Toda. This is called the {\it method of Thimm} \cite{bib:thimm}. Of course it could be that this family is empty; however, that is not the case. In \cite{bib:efs} it was shown that for a particular chain of parabolic subalgebras this method yields exactly enough independent invariants of this type to demonstrate complete integrability for the Hessenberg phase space. They further showed that this analysis can also be successfully applied to generalized full Toda systems associated to Lie algebras $B_n, C_n$ and $D_n$. Subsequently Gekhtman and Shapiro, in an elegant reformulation \cite{bib:gs,bib:r}, showed how this further applies to all simple Lie algebras.
Our reformulation of FKToda and dFToda in Section \ref{geometry} now makes it natural to extend these considerations of Poisson-Lie structures and integrability to the dFToda setting of Hamiltonian/symplectic maps. Notions of total positivity and Lusztig parameters fit naturally into our framework but appear to be little studied. Extensions of the above-described ideas to ultra-discrete/box-ball systems also open some fascinating avenues for study. One possible link there is to recent developments concerning crystals for affine Lie algebras (see \cite{bib:scrimshaw} and references therein). Links between factorization and crystal bases for quantum groups have been discussed by Lusztig \cite{bib:bfz}; but relating this to symplectic geometry or Hamiltonian dynamics remains largely unexplored as far as we know.
\subsubsection{Representation Theory of Solvable Lie groups and Geometric Quantization}
Connections between completely integrable Hamiltonian systems and quantum mechanics go back to the earliest days of the latter theory, the so-called {\it old quantum theory} of Bohr and Sommerfeld with subsequent refinements by Einstein-Brillouin-Keller (EBK). The basic connection here is that completely integrable systems typically have large symmetry groups related to their constants of motion \`a la Noether's theorem. The first step of quantization is to take the Hamiltonian that defines the dynamical system, written in canonical variables, and replace the $p_i$ by $i \partial/\partial q_i$ and regard functions of the $q_i$ as multiplication operators. This then effectively replaces the nonlinear Hamiltonian by a {\it linear} differential operator also referred to as the Hamiltonian. In the case of the tridiagonal Toda lattice the Hamiltonian (\ref{hamiltonian}) is replaced by the operator
\begin{eqnarray} \label{quantumT}
\widehat{H} = - \frac12 \sum_{i =1}^n \partial^2/\partial q_i^2 + \sum_{j=1}^{n-1}e^{q_j-q_{j+1}}.
\end{eqnarray}
The other constants of motion $H_i$, in involution with $H$, also have such linear operator representations $\widehat{H}_i$.
Then the EBK formalism determines which values of the motion constants are admissible as spectra (``energy'' levels) for this
commuting family of Hamiltonians. This associates to those levels, regarded as eigenvalues, common eigenfunctions which are
the {\it states} of the quantum system and should comprise a basis for its associated Hilbert space.
On the other hand, the symmetry group associated to the classical integrability acts as a group of symmetries on the collective operators $\{\widehat{H}\} \cup \{ \widehat{H}_i\}$ and so in turn acts as a group representation on the associated Hilbert space.
The decomposition of that representation into a sum of irreducible representations then describes the composition of general states in terms of fundamental eigenstates which is a fundamental question in quantum theory. Due to the connection with group theory (about to be discussed) one expects the eigenfunctions to have explicit expressions, such as in terms of spherical functions.
Turning this around, one can also start with a Lie group and try to build for it integrable systems that realize irreducible representations by the above prescription. These integrable systems would then become the fundamental building blocks for representations of the group. This is precisely what the program, referred to as the Kirillov-Kostant (K-K) orbit method (the quantum analogue of the Poisson orbit method mentioned in the previous subsection), is designed to do. It is the most important component of Kostant's more general program of {\it Geometric Quantization}, which attempts to place the quantization prescription described above on a firm rigorous footing. The K-K orbit method may be implemented for the Lie group $B_+$, the group that underlies the work in this paper. The co-adjoint orbits of $B_+$ on $\frak{b}_+^*$ are symplectic leaves on which local canonical coordinates $(p_i, q_i)$ may be constructed as the basis for geometric quantization. For the tridiagonal symplectic leaves one has the quantization of the tridiagonal Toda lattice described above.
The representation theory in this case has been completely worked out by Kostant \cite{bib:kostant}. The irreducible representations are infinite dimensional (possible since $B_+$ is non-compact) and are indexed by the joint continuous spectrum of (\ref{quantumT}) and the other $\widehat{H}_i$ that Poisson commute with it. The eigenfunction basis is expressed in terms of generalizations of the classical Whittaker functions. This class of examples is one of the principal models and success stories for the K-K orbit method.
The focus in relation to this paper will be to extend the above type of analysis to other symplectic leaves ($B_+$ orbits) such as generic maximal dimensional orbits of the Full Toda lattice or the $m$-banded invariant sets for $3 < m < n$ as well as to their discretizations. This is wide open territory with many challenges but the results described in this paper make it possible to begin to undertake those challenges.
\section*{Funding}
This work was supported by NSF grant DMS-1615921.
\section*{Version fran\c{c}aise abr\'eg\'ee}
\section*{Abridged French version}
Soit $X$ une vari\'et\'e alg\'ebrique point\'ee d\'efinie sur le corps ${\mathbf{C}}$ des nombres complexes, suppos\'ee irr\'eductible et quasi-projective.
L'espace topologique point\'e $X({\mathbf{C}})$ est alors connexe;
on d\'esigne par $\pi_1(X):=\pi_1^{\textup{top}}(X({\mathbf{C}}))$ son groupe fondamental, appel\'e groupe fondamental topologique de $X$.
Soit $\sigma$ un automorphisme du corps ${\mathbf{C}}$ (pas forc\'ement continu).
En appliquant $\sigma$ aux coefficients des polyn\^omes d\'efinissant $X$, on obtient une vari\'et\'e $\sigma X$ sur ${\mathbf{C}}$, dite vari\'et\'e conjugu\'ee.
Les compl\'et\'es profinis des groupes $\pi_1(X)$ et $\pi_1(\sigma X)$ sont canoniquement isomorphes (comme groupes topologiques),
car ils s'identifient naturellement au groupe fondamental \'etale de $X$.
En revanche, les groupes $\pi_1(X)$ et $\pi_1(\sigma X)$ ne sont pas toujours isomorphes,
par un r\'esultat de Serre \cite{Serre}.
Les exemples de Serre comprennent des surfaces projectives lisses.
D'autres exemples ont \'et\'e obtenus plus r\'ecemment: des vari\'et\'es de Shimura dans \cite{MS, R},
et des surfaces projectives dans \cite{BCG, GJ} pour des choix tr\`es g\'en\'eraux de l'automorphisme $\sigma$
(dans \cite{GJ} pour tout $\sigma$
dont la restriction \`a $\overline{{\mathbf{Q}}}$ diff\`ere de l'identit\'e et de la conjugaison complexe).
Dans cette note, nous donnons un exemple d'{\em espaces homog\`enes} conjugu\'es dont les groupes fondamentaux topologiques ne sont pas isomorphes.
Le plan de la note est le suivant.
Nous consid\'erons, dans le \S\ref{s:fundam}, les groupes fondamentaux de certains espaces homog\`enes topologiques de la forme $G/\Gamma$,
o\`u $G$ est un groupe de Lie r\'eel connexe et $\Gamma\subset G$ est un sous-groupe discret.
Nous en d\'eduisons, dans le \S\ref{s:algebraic}, une formule explicite pour d\'ecrire le groupe fondamental $\pi_1(G/\Gamma)$ dans le cas o\`u
$G$ est un groupe alg\'ebrique lin\'eaire connexe d\'efini sur ${\mathbf{C}}$, et $\Gamma$ est un sous-groupe fini de $G$.
En utilisant cette formule, nous construisons dans le \S\ref{s:example}
un exemple d'espace homog\`ene affine $X=G/\Gamma$ d\'efini sur ${\mathbf{C}}$
et un automorphisme $\sigma$ de ${\mathbf{C}}$
tels que les groupes fondamentaux topologiques $\pi_1(\sigma X)$ et $\pi_1(X)$ ne sont pas isomorphes.
Pr\'ecis\'ement, on choisit $G={\rm SL}(n,{\mathbf{C}})\times{\mathbf{C}}^*$ avec $n\ge 5$, et $\Gamma$ un sous-groupe non ab\'elien fini d'ordre 55.
L'inclusion de $\Gamma$ dans $G$ est donn\'ee
par un plongement arbitraire de $\Gamma$ dans ${\rm SL}(n,{\mathbf{C}})$ et par un homomorphisme non trivial de $\Gamma$ dans ${\mathbf{C}}^*$.
Notre formule permet de v\'erifier que $\pi_1(X)$ est isomorphe \`a $({\mathbf{Z}}/11{\mathbf{Z}})\rtimes_4{\mathbf{Z}}$,
o\`u la notation signifie que le g\'en\'erateur $1$ de ${\mathbf{Z}}$ agit sur ${\mathbf{Z}}/11{\mathbf{Z}}$ par multiplication par 4,
tandis que pour $\sigma$ envoyant $\zeta=\exp 2\pi i/5$ sur $\zeta^2$, le groupe fondamental $\pi_1(\sigma X)$
de la vari\'et\'e conjugu\'ee est isomorphe \`a $({\mathbf{Z}}/11{\mathbf{Z}})\rtimes_9{\mathbf{Z}}$.
Un argument simple permet de v\'erifier que ces deux groupes ne sont pas isomorphes.
\selectlanguage{english}
\section{Introduction}
\label{s:Intro}
Let $X$ be a pointed algebraic variety defined over ${\mathbf{C}}$.
We assume that $X$ is irreducible and quasi-projective.
The pointed topological space $X({\mathbf{C}})$ is then connected, and
we denote by $\pi_1(X)$ the topological fundamental group of $X({\mathbf{C}})$, i.e.,
$\pi_1(X):=\pi_1^{\textup{top}}(X({\mathbf{C}}))$.
Let $\sigma$ be a field automorphism of ${\mathbf{C}}$, not necessarily continuous.
On applying $\sigma$ to the coefficients of the polynomials defining
$X$, we obtain a conjugate algebraic variety $\sigma X$ over ${\mathbf{C}}$.
Though the profinite completions of $\pi_1(X)$ and $\pi_1(\sigma X)$ are isomorphic,
the groups $\pi_1(X)$ and $\pi_1(\sigma X)$ themselves are not necessarily isomorphic.
Serre \cite{Serre} obtained the first examples of conjugate varieties $X$ and $\sigma X$
with $\pi_1(\sigma X)\not\simeq\pi_1(X)$.
Serre's examples include smooth projective surfaces.
More examples were obtained recently: Shimura varieties in \cite{MS} and \cite{R},
and smooth projective surfaces in \cite{BCG} and \cite{GJ} for a very general choice of $\sigma$
(in \cite{GJ} for any $\sigma$ whose restriction to $\overline{{\mathbf{Q}}}$ differs from the identity and the complex conjugation).
In this note we give an example of conjugate {\em homogeneous spaces} with non-isomorphic topological fundamental groups.
The outline of the note is as follows.
In Section \ref{s:fundam} we consider topological homogeneous spaces of the form $G/\Gamma$,
where $G$ is a connected real Lie group and $\Gamma\subset G$ is a discrete subgroup.
In Section \ref{s:algebraic} we write an explicit formula for $\pi_1(G/\Gamma)$
when $G$ is a complex linear algebraic group and $\Gamma\subset G$ is a finite subgroup.
In Section \ref{s:example} using this formula we construct an example
of an affine homogeneous space $X=G/\Gamma$ over ${\mathbf{C}}$ and an automorphism $\sigma$ of ${\mathbf{C}}$
such that $\pi_1(\sigma X)$ is not isomorphic to $\pi_1(X)$.
In our example $G={\rm SL}(n,{\mathbf{C}})\times {\mathbf{C}}^*$ with $n\ge 5$, and $\Gamma$ is a nonabelian finite subgroup of order 55.
\section{The quotient of a Lie group by a discrete subgroup}
\label{s:fundam}
Let
\[1\to S\labelto{i} G\labelto{\tau} T\to 1\]
be a short exact sequence of connected real Lie groups.
Let $\Gamma\subset G$ be a discrete subgroup such that the projection $\Lambda=\tau(\Gamma)\subset T$ is discrete.
Our goal is to describe $\pi_1(G/\Gamma)$, where $G/\Gamma$ is viewed as a pointed manifold with base point the image of 1.
Set $\Gamma_S=\Gamma\cap S$.
The homomorphism $\tau\colon G\to T$ induces a fibration $ G/\Gamma\to T/\Lambda$ with fiber $S/\Gamma_S$\kern 0.6pt,
which gives rise to an exact sequence in homotopy groups
\[\pi_1(S/\Gamma_S)\labelto{i_{*}}\pi_1(G/\Gamma)\labelto{\tau_{*}} \pi_1(T/\Lambda)\to 1.\]
The fibration $G\to G/\Gamma$ with fiber $\Gamma$ gives rise to an exact sequence in homotopy groups
\[1\to\pi_1(G)\to\pi_1(G/\Gamma)\labelto{f} \Gamma\to 1,\]
where $f$ is a homomorphism by Lemma \ref{l:connecting} below.
Considering the above fibrations and also the fibrations $S\to S/\Gamma_S$, $T\to T/\Lambda$ and $G\to T$,
we obtain the following commutative diagram of groups and homomorphisms with exact rows and columns:
\[
\xymatrix@R=18pt{
1\ar[r] &\pi_1(S)\ar[d]\ar[r] &\pi_1(S/\Gamma_S)\ar[d]^(0.44){i_{*}}\ar[r] &\Gamma_S\ar[d]^(0.44){i} \ar[r] &1 \\
1\ar[r] &\pi_1(G)\ar[d]\ar[r] &\pi_1(G/\Gamma)\ar[d]^(0.44){\tau_{*}}\ar[r]^-f &\Gamma\ar[d]^(0.44){\tau}\ar[r] &1 \\
1\ar[r] &\pi_1(T)\ar[d]\ar[r] &\pi_1(T/\Lambda)\ar[d]\ar[r]^-{f_T} &\Lambda\ar[d]\ar[r] &1\\
&1 &1 &1
}
\]
From this diagram we obtain homomorphisms
\[\chi\colon \pi_1(S)\to\pi_1(S/\Gamma_S)\labelto{i_*} \pi_1(G/\Gamma)\quad\text{and}\quad
\phi\colon\pi_1(G/\Gamma)\to \pi_1(T/\Lambda)\underset{\Lambda}{\times} \Gamma,\]
where the fiber product $\pi_1(T/\Lambda)\times_\Lambda\kern 0.6pt \Gamma$ is the group of pairs $(x,\gamma)\in \pi_1(T/\Lambda)\times \Gamma$
such that $f_T(x)=\tau(\gamma)$.
The homomorphism $\phi$ takes $y\in\pi_1(G/\Gamma)$ to the pair $(\tau_*(y),f(y))\in \pi_1(T/\Lambda)\times_\Lambda\kern 0.6pt \Gamma$.
\begin{theorem}\label{t:pi-1}
With the above notation, the sequence
\[\pi_1(S)\labelto{\chi}\pi_1(G/\Gamma)\labelto{\phi} \pi_1(T/\Lambda)\underset{\Lambda}\times \Gamma\to 1\]
is exact. In particular, if $S$ is simply connected, then $\phi$ is an isomorphism.
\end{theorem}
\noindent {\bf Proof.}
We prove the theorem by diagram chasing.
Clearly $\phi\circ\chi=1$. We show that $\ker\,\phi\subset{\rm im}\,\chi$.
Let $y\in\ker\,\phi\subset \pi_1(G/\Gamma)$, then $f(y)=1$ and $\tau_*(y)=1$.
Then $y$ comes from some element $z\in\pi_1(G)$, whose image in $\pi_1(T)$ is 1.
Hence $z$ comes from some element $u\in\pi_1(S)$.
We see that $y=\chi(u)$, as required.
We show that $\phi$ is surjective.
Let $(x,\gamma)\in \pi_1(T/\Lambda)\times_\Lambda \Gamma$, i.e.,
$x\in\pi_1(T/\Lambda)$, $\gamma\in\Gamma$, and $f_T(x)=\tau(\gamma)$.
We can lift $x$ to some element $y\in\pi_1(G/\Gamma)$,
then $\tau(f(y))=\tau(\gamma)$.
Set $z=f(y)\gamma^{-1}$, then $\tau(z)=1$, hence $z$ comes from some element of $\Gamma_S$
and from some element $u$ of $\pi_1(S/\Gamma_S)$.
Set $y'=i_*(u)^{-1}y\in\pi_1(G/\Gamma)$, then $f(y')=\gamma$ and $\tau_*(y')=\tau_*(y)=x$.
We see that $(x,\gamma)=\phi(y')$, as required.
\qed
\medskip
The following lemma, which we used above, is well-known.
\begin{lemma}\label{l:connecting}
Let $G$ be a connected Lie group, $\Gamma\subset G$ be a (closed) Lie subgroup, not necessarily connected.
Then the connecting map $f\colon \pi_1(G/\Gamma)\to \pi_0(\Gamma)$ in the exact sequence
\[\pi_1(\Gamma)\to\pi_1(G)\to\pi_1(G/\Gamma)\labelto{f}\pi_0(\Gamma)\to 1\]
is a homomorphism.
\end{lemma}
\noindent {\bf Proof.}
Denote by $\lambda\colon \Gamma\to\pi_0(\Gamma)$ the canonical epimorphism.
Consider two based loops $\theta_i\colon[0,1]\to G/\Gamma$ in $G/\Gamma$ ($i=1,2$).
Let ${\tilde{\theta}}_i\colon [0,1]\to G$ be a path lifting the loop $\theta_i$ to $G$
with ${\tilde{\theta}}_i(0)=1$, and set $\gamma_i={\tilde{\theta}}_i(1)\in \Gamma$.
By definition $f([\theta_i])=\lambda(\gamma_i)\in\pi_0(\Gamma)$, where $[\theta_i]$
denotes the class of the based loop $\theta_i$ in $\pi_1(G/\Gamma)$.
Then $\gamma_1{\tilde{\theta}}_2$ is a path in $G$ from $\gamma_1$ to $\gamma_1\gamma_2$ mapping in $G/\Gamma$ to the loop $\theta_2$,
hence the concatenation of ${\tilde{\theta}}_1$ and $\gamma_1{\tilde{\theta}}_2$ is a path in $G$ from $1$ to $\gamma_1\gamma_2$
mapping in $G/\Gamma$ to the loop obtained by concatenation of $\theta_1$ and $\theta_2$.
Thus $f([\theta_1]\cdot[\theta_2])=\lambda(\gamma_1 \gamma_2)=\lambda(\gamma_1)\,\lambda(\gamma_2)=f([\theta_1])\, f([\theta_2])$, as required.
\qed
\section{The quotient of a complex algebraic group by a finite subgroup}
\label{s:algebraic}
Let $G$ be a connected linear algebraic group over ${\mathbf{C}}$.
Let $\Gamma\subset G$ be a finite subgroup.
Set $X=G/\Gamma$.
We wish to compute the topological fundamental group $\pi_1(X)$.
Let $U$ denote the unipotent radical of $G$, then $G':=G/U$ is reductive.
The canonical epimorphism $\rho\colon G\to G'$
induces a fibration $G/\Gamma\to G'/\Gamma'$ with fiber $U$, where $\Gamma'=\rho(\Gamma)$, and hence,
the induced homomorphism $\rho_*\colon\pi_1(G/\Gamma)\to\pi_1(G'/\Gamma')$ is an isomorphism.
Therefore, we may and shall assume that $G$ is reductive.
Replacing the reductive group $G$ by a finite cover and $\Gamma$ by its inverse image, we may and shall
assume that the semisimple group $S:=[G,G]$ is simply connected.
Let $\Lambda$ denote the image of $\Gamma$ in the algebraic torus $T:=G/S$,
then $T/\Lambda$ is also an algebraic torus, hence $\pi_1(T/\Lambda)$ is a free abelian group isomorphic to ${\mathbf{Z}}^{{\rm dim}\, T}$.
The next corollary, which follows immediately from Theorem \ref{t:pi-1},
describes $\pi_1(G/\Gamma)$ in terms of $\Gamma$ and the free abelian group $\pi_1(T/\Lambda)$.
\begin{corollary}\label{c:pi-1-alg}
Let $G$ be a connected reductive algebraic group over ${\mathbf{C}}$ such that the commutator subgroup $S$ of $G$ is simply connected.
Set $T=G/S$.
Let $\Gamma\subset G$ be a finite subgroup, and let $\Lambda$ denote the image of $\Gamma$ in $T$.
Then there is a canonical isomorphism
\[ \pi_1(G/\Gamma)\labelto{\cong} \pi_1(T/\Lambda)\underset{\Lambda}\times \Gamma,\]
where $\pi_1(T/\Lambda)\times_{\Lambda}\kern 0.6pt \Gamma$ is the fiber product
with respect to the epimorphism $\pi_1(T/\Lambda)\to\Lambda$ of Lemma \ref{l:connecting}
and the canonical epimorphism $\Gamma\to\Lambda$.
\end{corollary}
\section{Example}
\label{s:example}
Let $A={\mathbf{Z}}/m{\mathbf{Z}}$, the additive group of residues modulo $m$.
Let $B\subset ({\mathbf{Z}}/m{\mathbf{Z}})^*$ be a {\em cyclic} subgroup of some order $r$
in the multiplicative group of invertible residues modulo $m$.
The group $B$ acts naturally on $A$ by multiplication:
an element $b\in B\subset({\mathbf{Z}}/m{\mathbf{Z}})^*$ acts by $a\mapsto ba$.
Set
\[H=A\rtimes B \]
(the semidirect product).
We regard $B$ as a subgroup of $H$.
Consider an embedding $\varphi\colon B\hookrightarrow{\mathbf{C}}^*$, then $\varphi(B)=\mu_r\subset {\mathbf{C}}^*$, the group of $r$-th roots of unity.
Choose an embedding $\alpha\colon H\hookrightarrow {\rm SL}(n,{\mathbf{C}})$ for some natural number $n$. Set
\[G= {\rm SL}(n,{\mathbf{C}})\times {\mathbf{C}}^*.\]
For $(a,b)\in A\rtimes B=H$ set
\[\psi(a,b)=(\alpha(a,b),\varphi(b))\in {\rm SL}(n,{\mathbf{C}})\times{\mathbf{C}}^*.\]
We obtain an embedding $\psi=\psi_{\alpha,\varphi}\colon H\hookrightarrow G$.
Set $\Gamma=\psi(H),$ \ $X=X_{\alpha,\varphi}=G/\Gamma$.
Then $X$ is an affine algebraic variety over ${\mathbf{C}}$.
Let $b\in B$.
Write $A\rtimes_b {\mathbf{Z}}$ for the semidirect product of $A$ and ${\mathbf{Z}}$,
where the generator $1$ of ${\mathbf{Z}}$ acts on $A$ by multiplication by $b$.
Set $\zeta=\exp 2\pi i/r\in \mu_r$.
\begin{proposition}\label{p:pi-1-X}
$\pi_1(X_{\alpha,\varphi})\simeq ({\mathbf{Z}}/m{\mathbf{Z}})\rtimes_{\varphi^{-1}(\zeta)}{\mathbf{Z}}$.
\end{proposition}
\noindent {\bf Proof.}
Set $S={\rm SL}(n,{\mathbf{C}})$, $T={\mathbf{C}}^*$.
Let $\tau\colon G=S\times T\to T$ denote the projection, then
$\tau(\psi(a,b))=\varphi(b)$ for $(a,b)\in A\rtimes B =H$.
Set $\Lambda=\tau(\Gamma)=\tau(\psi(H))\subset T$,
then $\Lambda=\varphi(B)=\mu_r\subset{\mathbf{C}}^*=T$.
Consider the following universal covering of $T={\mathbf{C}}^*$:
\[ \varepsilon\colon {\mathbf{C}}\to{\mathbf{C}}^*=T,\quad z\mapsto \exp 2\pi iz \ \text{ for }\ z\in{\mathbf{C}},\]
it induces a universal covering of $T/\Lambda$:
\[ {\mathbf{C}}\labelto{\varepsilon}{\mathbf{C}}^*\to{\mathbf{C}}^*/\mu_r=T/\Lambda\simeq{\mathbf{C}}^*.\]
We identify $\pi_1(T/\Lambda)$ with $\varepsilon^{-1}(\mu_r)=\tfrac{1}{r}{\mathbf{Z}}\subset{\mathbf{C}}$,
then the homomorphism $\pi_1(T/\Lambda)\to\Lambda=\mu_r$ of Lemma \ref{l:connecting}
is the restriction of $\varepsilon$ to $\tfrac{1}{r}{\mathbf{Z}}$, hence it takes
the generator $\tfrac{1}{r}\in\tfrac{1}{r}{\mathbf{Z}}=\pi_1(T/\Lambda)$ to the element $\varepsilon(\tfrac{1}{r})=\zeta\in\mu_r$.
Since $S={\rm SL}(n,{\mathbf{C}})$ is simply connected, by Corollary \ref{c:pi-1-alg} we have
\[\pi_1(X_{\alpha,\varphi})=\pi_1(G/\Gamma)=\pi_1(T/\Lambda)\underset{\Lambda}{\times}\Gamma\simeq\tfrac{1}{r}{\mathbf{Z}}\underset{\mu_r}{\times}H,\]
where the homomorphism $\tfrac{1}{r}{\mathbf{Z}}\to\mu_r$ takes $\tfrac{1}{r}$ to $\zeta$
and the homomorphism $H\to\mu_r$ takes $(a,b)\in H$ to $\tau(\psi(a,b))=\varphi(b)$.
Since $\tfrac{1}{r}{\mathbf{Z}}$ is a free abelian group, the group extension
\[ 1\to \{0\}\times A\to \tfrac{1}{r}{\mathbf{Z}}\underset{\mu_r}{\times}H\to \tfrac{1}{r}{\mathbf{Z}}\to 1\]
splits, hence $\pi_1(X_{\alpha,\varphi})\simeq A\rtimes\tfrac{1}{r}{\mathbf{Z}}$.
The action of $\tfrac{1}{r}{\mathbf{Z}}$ on $A$ in this semidirect product decomposition
is the canonical action of the quotient group $\tfrac{1}{r}{\mathbf{Z}}$ of $\tfrac{1}{r}{\mathbf{Z}}{\times}_{\mu_r}\kern 0.6pt H$ on the normal abelian subgroup $A$.
Since the element $\tfrac{1}{r}\in\tfrac{1}{r}{\mathbf{Z}}$ has image $\zeta$ in $\mu_r$, which lifts to $\varphi^{-1}(\zeta)\in B\subset H$,
we see that $\tfrac{1}{r}\in\tfrac{1}{r}{\mathbf{Z}}$ lifts to $(\tfrac{1}{r},\varphi^{-1}(\zeta))\in \tfrac{1}{r}{\mathbf{Z}}{\times}_{\mu_r}\kern 0.6pt B\subset \tfrac{1}{r}{\mathbf{Z}}{\times}_{\mu_r}\kern 0.6pt H$,
hence $\tfrac{1}{r}$ acts as $\varphi^{-1}(\zeta)$ on $A$.
Identifying $\tfrac{1}{r}{\mathbf{Z}}$ with ${\mathbf{Z}}$ via $x\mapsto rx$ for $x\in\tfrac{1}{r}{\mathbf{Z}}$, we obtain the assertion of the proposition.
\qed
\medskip
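The conjugation step at the end of the proof can be verified directly on a concrete model of the fiber product $\tfrac{1}{r}{\mathbf{Z}}\times_{\mu_r} H$. The illustrative sketch below takes $m=11$, $r=5$ and $\varphi^{-1}(\zeta)=\bar 4$ (the values of the example that follows), encodes an element as a pair $(k,(a,b))$, with $k$ standing for $k/r$, and checks that the lift $t=(1,(0,\bar4))$ of $\tfrac{1}{r}$ acts on $A$ by multiplication by $\bar 4$.

```python
# Concrete check of the conjugation step, for m = 11, r = 5 and
# phi^{-1}(zeta) = 4.  Elements of (1/r)Z x_{mu_r} H are modeled as
# pairs (k, (a, b)); the constraint b = 4^(k mod 5) is respected by t below.
m, b0 = 11, 4

def mulH(g, h):                      # multiplication in H = A ⋊ B
    (a, b), (a2, b2) = g, h
    return ((a + b * a2) % m, (b * b2) % m)

def invH(g):
    a, b = g
    binv = pow(b, -1, m)             # modular inverse (Python >= 3.8)
    return ((-binv * a) % m, binv)

def mul(x, y):                       # multiplication in the fiber product
    (k, g), (k2, h) = x, y
    return (k + k2, mulH(g, h))

t = (1, (0, b0))                     # lift of 1/r, projecting to zeta
tinv = (-1, invH((0, b0)))
for a in range(m):
    x = (0, (a, 1))                  # an element of the normal subgroup A
    assert mul(mul(t, x), tinv) == (0, ((b0 * a) % m, 1))   # t acts as mult. by 4
print("ok")
```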
Now let us take $m=11$, then $A={\mathbf{Z}}/11{\mathbf{Z}}$.
We take $B=({\mathbf{Z}}/11{\mathbf{Z}})^{*2}$, the group of nonzero quadratic residues modulo 11.
The group $B$ is a cyclic group of order 5, namely, $B=\{\bar1,\bar4,\bar9,\bar5,\bar3\}$.
Then $H=A\rtimes B$ is a finite nonabelian group of order 55.
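These assertions are finite checks, and can be confirmed in a few lines (an illustrative computation, encoding $H$ as pairs $(a,b)$ with the product $(a,b)(a',b') = (a+ba', bb')$):

```python
# Direct check that H = (Z/11Z) ⋊ B, with B the quadratic residues mod 11,
# is a nonabelian group of order 55, and that B = {1,3,4,5,9} is cyclic,
# generated e.g. by 4.
B = sorted({(x * x) % 11 for x in range(1, 11)})
assert B == [1, 3, 4, 5, 9]
assert sorted(pow(4, k, 11) for k in range(5)) == B   # B = <4>, cyclic of order 5

def mul(g, h):                       # (a,b)*(a',b') = (a + b*a', b*b') in A ⋊ B
    (a, b), (a2, b2) = g, h
    return ((a + b * a2) % 11, (b * b2) % 11)

H = [(a, b) for a in range(11) for b in B]
assert len(H) == 55
assert any(mul(g, h) != mul(h, g) for g in H for h in H)   # nonabelian
print("ok")
```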
Let $n\ge 5$, then there exists an embedding $\alpha\colon H\hookrightarrow {\rm SL}(n,{\mathbf{C}})$.
For $b\in B$, $b\neq \bar1$, let $\varphi_b$ denote the embedding $B\hookrightarrow{\mathbf{C}}^*$ taking the generator $b$ of $B$ to $\zeta$, then $\varphi_b^{-1}(\zeta)=b$.
We write $X_{\alpha,b}$ for $X_{\alpha,\varphi_b}$.
Let $\sigma$ be any field automorphism of ${\mathbf{C}}$ taking $\zeta$ to $\zeta^2$.
Consider the conjugate variety $\sigma X_{\alpha,b}$.
\begin{theorem}\label{t:neq}
For $A={\mathbf{Z}}/11{\mathbf{Z}}$, \ $B=({\mathbf{Z}}/11{\mathbf{Z}})^{*2}$, \ $\sigma\in{\rm Aut}({\mathbf{C}})$ taking $\zeta$ to $\zeta^2$, the groups
$\pi_1(X_{\alpha,4})$ and $\pi_1(\sigma X_{\alpha,4})$ are not isomorphic.
\end{theorem}
\noindent {\bf Proof.}
We have $\sigma(\zeta)=\zeta^2$.
The homomorphism $\sigma\circ\varphi_{4}\colon B\to {\mathbf{C}}^*$ takes $\bar4$ to $\sigma(\zeta)=\zeta^2$,
hence it takes $\bar4^3=\bar9$ to $(\zeta^2)^3=\zeta$.
Thus $\sigma\circ\varphi_{4}=\varphi_{9}$.
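The exponent bookkeeping here reduces to two congruences that can be checked mechanically: $\bar 9 = \bar 4^{\,3}$ in $({\mathbf{Z}}/11{\mathbf{Z}})^*$, and applying $\sigma$ doubles exponents of $\zeta$ modulo $5$, so $(\sigma\circ\varphi_4)(\bar 9)=\zeta^{2\cdot 3}=\zeta$.

```python
# phi_4 sends 4 -> zeta, hence phi_4(4^k) = zeta^k; sigma doubles the
# exponent of zeta modulo 5 (since zeta^5 = 1).
assert pow(4, 3, 11) == 9     # 9 = 4^3 in B, so phi_4(9) = zeta^3
assert (2 * 3) % 5 == 1       # sigma(zeta^3) = zeta^6 = zeta, i.e. 9 -> zeta
print("ok")
```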
For our group $G$ defined over ${\mathbf{Q}}$ and for $X=G/\Gamma$, we have
$\sigma X= G/\sigma(\Gamma)$,
where $\sigma$ acts on ${\rm SL}(n,{\mathbf{C}})$ and on ${\mathbf{C}}^*$ via the action on ${\mathbf{C}}$.
For an embedding $\varphi\colon B\hookrightarrow{\mathbf{C}}^*$ we have
\[\sigma X_{\alpha,\varphi}=G/(\sigma\circ\psi_{\alpha,\varphi})(H)=
G/\psi_{\sigma\circ\alpha,\sigma\circ\varphi}(H)=X_{\sigma\circ\alpha,\sigma\circ\varphi}\,.\]
We obtain that
\[\sigma X_{\alpha,4}=\sigma X_{\alpha,\varphi_{4}}=X_{\sigma\circ\alpha,\sigma\circ\varphi_{4}}=X_{\sigma\circ\alpha,\varphi_{9}}=X_{\sigma\circ\alpha,9}.\]
By Proposition \ref{p:pi-1-X} we have
\[ \pi_1(X_{\alpha,b})\simeq ({\mathbf{Z}}/11{\mathbf{Z}})\rtimes_b {\mathbf{Z}},\]
hence
\[\pi_1(X_{\alpha,4})\simeq ({\mathbf{Z}}/11{\mathbf{Z}})\rtimes_{4}{\mathbf{Z}}\quad\text{and}\quad
\pi_1(\sigma X_{\alpha,4})=\pi_1(X_{\sigma\circ\alpha,9})\simeq ({\mathbf{Z}}/11{\mathbf{Z}})\rtimes_{9}{\mathbf{Z}}.\]
Now the theorem follows from the next Lemma \ref{l:neq}.
\qed
\begin{lemma}\label{l:neq}
$({\mathbf{Z}}/11{\mathbf{Z}})\rtimes_{4}{\mathbf{Z}}\not\simeq ({\mathbf{Z}}/11{\mathbf{Z}})\rtimes_{9}{\mathbf{Z}}$.
\end{lemma}
We first need the following group-theoretic fact.
\begin{lemma}\label{p:Gamma-4-9}
Let $A$ be any group without nonzero homomorphisms into ${\mathbf{Z}}$.
When ${\kappa}$ is an automorphism of $A$, we write $A\rtimes_{\kappa}{\mathbf{Z}}$ for the semidirect product of $A$ and ${\mathbf{Z}}$,
where the generator $t=1$ of ${\mathbf{Z}}$ acts on $A$ by ${\kappa}$.
Fix two automorphisms ${\kappa}_1,{\kappa}_2\in{\rm Aut}(A)$, and denote by ${\overline{\vk}}_i$ the image of ${\kappa}_i$
in the group of outer automorphisms ${\rm Out}(A)$.
If ${\overline{\vk}}_1$ is conjugate to neither ${\overline{\vk}}_2$ nor ${\overline{\vk}}_2^{-1}$ in ${\rm Out}(A)$, then the semidirect products
$G_1=A\rtimes_{{\kappa}_1} {\mathbf{Z}}$ and $G_2=A\rtimes_{{\kappa}_2} {\mathbf{Z}}$ are not isomorphic.
\end{lemma}
\noindent {\bf Proof.}
By contraposition, let $\lambda\colon G_1\labelto{\cong} G_2$ be an isomorphism.
Since for each of $i=1,2$, the subgroup $A$ is equal to the kernel
of some/any nonzero homomorphism $G_i\to{\mathbf{Z}}$, we have $\lambda(A)=A$.
Let ${\kappa}\in{\rm Aut}(A)$ denote the restriction of $\lambda$ to $A$.
For the generator $t\in {\mathbf{Z}}\subset G_1$,
write $\lambda(t)$ as $a\kern 0.6pt t^e\in G_2$ with $a\in A$ and $e\in{\mathbf{Z}}$.
Since $t$ generates $G_1/A$, we see that $\lambda(t)$ generates $G_2/A$ and hence $e=\pm 1$.
Then for all $a'\in A$, writing $\gamma_a(a')=aa'a^{-1}$ we have
\[{\kappa}({\kappa}_1(a'))=\lambda(ta't^{-1})=a\kern 0.6pt t^{e} {\kappa}(a')\kern 0.6pt t^{-{e}}a^{-1}=\gamma_a({\kappa}_2^{e}({\kappa}(a'))),\]
whence ${\kappa}_1={\kappa}^{-1}\gamma_a{\kappa}_2^{\kern 0.6pt{e}}\kern 0.6pt{\kappa}$.
Hence ${\overline{\vk}}_1={\overline{\vk}}^{\,-1}\ {\overline{\vk}}_2^{\,{e}}\ {\overline{\vk}}$.
Thus ${\overline{\vk}}_1$ and ${\overline{\vk}}_2^{\,{e}}$ are conjugate in ${\rm Out}(A)$. \qed
\medskip
\noindent {\bf Proof of Lemma \ref{l:neq}.}
We use Lemma \ref{p:Gamma-4-9} when $A={\mathbf{Z}}/11{\mathbf{Z}}$, in which case ${\rm Aut}(A)={\rm Out}(A)$ is abelian and can be identified with $({\mathbf{Z}}/11{\mathbf{Z}})^*$.
Hence the assumption of Lemma \ref{p:Gamma-4-9} in this case is just that ${\kappa}_1$ and ${\kappa}_2^{\pm 1}$ are distinct as elements of $({\mathbf{Z}}/11{\mathbf{Z}})^*$.
Here ${\kappa}_1=\bar4$ and ${\kappa}_2=\bar9$. Since modulo $11$ we have $\bar9\neq \bar4$ and $\bar9^{-1}=\bar5\neq \bar4$,
Lemma \ref{p:Gamma-4-9} applies and we see that
$({\mathbf{Z}}/11{\mathbf{Z}})\rtimes_{4}{\mathbf{Z}}\not\simeq ({\mathbf{Z}}/11{\mathbf{Z}})\rtimes_{9}{\mathbf{Z}}$.
This completes the proofs of Lemma \ref{l:neq} and Theorem \ref{t:neq}.
\qed
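For readers who wish to double-check the modular arithmetic, the following short Python sketch (our own verification, not part of the argument) confirms the two inequalities used above:

```python
# Sanity check of the arithmetic in the proof: in (Z/11Z)^*, the classes of
# 4 and 9 must satisfy 9 != 4 and 9^{-1} != 4 for the lemma to apply.
p, k1, k2 = 11, 4, 9

# Inverse of 9 modulo 11 via Fermat's little theorem: a^(p-2) mod p.
k2_inv = pow(k2, p - 2, p)
print(k2_inv)                      # 5
print(k2 != k1 and k2_inv != k1)   # True
```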
\bigskip
{\sc Acknowledgements.}
We are grateful to Boris Kunyavski\u\i\ for helpful comments.
This note was completed during a stay of the first-named author
at the Max-Planck-Institut f\"ur Mathematik, Bonn, and he is grateful to this institute for hospitality, support and
excellent working conditions.
\section{Introduction}
\label{sec:intro}
\begin{figure}[!t]
\centering
\subfloat{\includegraphics[trim={100 335 110 290},clip,width=\columnwidth]{images/cmp_entrance_hdr}%
\label{fig:cmp_entrance_hdr}}
\vspace*{-6pt}
\subfloat{\includegraphics[trim={100 335 110 290},clip,width=\columnwidth]{images/cmp_entrance_ldr_fix}%
\label{fig:cmp_entrance_ldr_fix}}
\caption{
Top: an office scene reconstructed in HDR with the proposed exposure time controller. Note the difference in color between the wooden panel on the left, the gray wall, and the milky white door on the right. Bottom: the same scene reconstructed in LDR with ElasticFusion using a fixed exposure time setting. The dynamic range of the scene exceeds that of an LDR image; as a result the panel, wall, and door were overexposed and appear to have the same color in the reconstruction.
}
\label{fig:cmp-entrance}
\end{figure}
The introduction of \mbox{RGB-D}\xspace cameras to the consumer market spawned a revolution in 3D vision. The progress in camera tracking and online dense reconstruction reached impressive milestones in terms of tracking accuracy and robustness \cite{Zhou2015,Hsiao2017}, as well as in scale and geometric fidelity of large reconstructions \cite{Dai2017,Whelan2015}.
The quality of color in dense 3D mapping received less attention despite being equally important in applications such as virtual and augmented reality, semantic analysis, and recognition. Recent works demonstrated an improvement in the texture quality through radiometric calibration of the camera \cite{Alexandrov2016}, subsequent offline texture map generation and optimization of camera parameters \cite{Zhou2014,Jeon2016}, online estimation of surface reflectance \cite{Kim2017} and light sources within the reconstructed scenes \cite{Whelan2016}.
As the 3D reconstructions grow in scale, the problem of low dynamic range (LDR) of camera sensors is becoming more evident. The 24-bit colors are inadequate for the representation of the wide range of light intensities found in large real-world scenes. Thus a promising yet relatively unexplored avenue is the application of High Dynamic Range (HDR) imaging techniques in 3D reconstruction \cite{Meilland2013}. This will enable the acquisition of radiance maps of scenes with a dynamic range greatly exceeding LDR cameras, achieving more visually appealing renderings of the reconstructed models as well as more consistent textures (see \fref{cmp-entrance}).
HDR imaging is a well-understood topic in photography. Its basic form involves taking multiple LDR images at different exposure times and combining them into a composite radiance map. The problems addressed in the HDR imaging literature include calibration of the camera response function \cite{Debevec1997}, noise modeling and sample weighting schemes \cite{Granados2010}, exposure time selection \cite{Hasinoff2010}, coping with inconsistent measurements due to camera jitter or scene dynamics \cite{Zimmer2011}.
The application of HDR imaging techniques in online 3D reconstruction is not straightforward. The domain is not restricted by the image pixel grid and grows arbitrarily in space. The limited memory and processing time budgets permit only a small pre-allocated space per map point, excluding the possibility to store multiple observations. Together with the need for online feedback, this dictates the usage of incremental estimation schemes instead of batch optimization. Furthermore, in HDR imaging camera motion is regarded as an unwanted effect, whereas in SLAM the moving camera is the \emph{modus operandi}, leading to an increased amount of outlier measurements. It also adds a temporal aspect to the problem, requiring efficient control of exposure time to ensure that the color of the map points is estimated with sufficient accuracy before they leave the field of view of the camera.
To overcome these problems, we propose \mbox{HDR-SLAM}\xspace\footnote{Our system extends ElasticFusion \cite{Whelan2015} and is hosted as a GitHub fork \url{https://github.com/taketwo/ElasticFusion/tree/hdr}}, an online incremental 3D reconstruction approach that captures the scene appearance in HDR colors. We contribute:
\begin{itemize}
\item a map-aware exposure time controller integrated in the SLAM loop aiming to maximize the information gain of each
observation;
\item HDR color fusion rules tailored to the incremental nature of SLAM reconstruction;
\item a noise model for off-the-shelf \mbox{RGB-D}\xspace cameras.
\end{itemize}
We evaluate the performance of \mbox{HDR-SLAM}\xspace with a static camera, where a comparison with batch-processed ``ground truth'' reconstruction can be made. The quantitative results show that the proposed controller outperforms other baselines. Furthermore, we demonstrate side-by-side comparisons of the HDR reconstructions and those obtained with a baseline LDR SLAM approach, showing significantly improved texture quality.
The rest of the paper is structured as follows. \cref{sec:pre} gives the necessary background in HDR imaging and \cref{sec:rw} covers the related work. \cref{sec:arch} presents the system architecture, followed by the description of camera noise model (\cref{sec:model}), incremental HDR reconstruction (\cref{sec:inc}), exposure time controller (\cref{sec:ctrl}), and additional implementation details (\cref{sec:impl}). The system is evaluated in \cref{sec:eval} and the paper is concluded in \cref{sec:ciao}.
\section{HDR imaging foundations}
\label{sec:pre}
Application of HDR color acquisition techniques in the context of dense 3D reconstruction requires understanding of the image formation process, its associated noise sources and limitations, as well as the core ideas behind HDR imaging. This section serves as a brief introduction; an in-depth discussion can be found in \cite{Sen2016,Kolb1995,Healey1994}.
\subsection{Image formation process}
Scene surfaces emit or reflect light rays; the amount of radiant power that they carry (\emph{radiance}) determines the appearance of the scene to the observer. To capture the appearance, a camera maps radiances into image pixel \emph{intensities} through a sequence of nonlinear transformations.
First, the light rays enter the camera aperture, pass through the lens system, and land on a lattice of photon wells. The radiant power density (\emph{irradiance}) incident on the well surface is integrated over the time $t$. The accumulated radiant energy (\emph{exposure}) is then converted into a voltage, amplified, digitized, and mapped through a nonlinear radiometric response function into a pixel intensity.
The output value at an image location $\mathbf{u}$ is given by
\begin{equation}
Z(\mathbf{u}) = f(tE(\mathbf{u})),
\label{eqn:rif}
\end{equation}
where $E$ are pixel irradiances and $f : \mathbb{R}\rightarrow\left\{0,\ldots,255\right\}$ is a composition of the
amplification, digitization, radiometric response, and quantization, which will be further referred to as the camera response function (CRF).
Radiance and irradiance are directly proportional, the constant of proportionality being dependent on the properties of the lens system. In most applications the absolute scale of radiance is not important and these terms are used interchangeably. However, due to several factors, collectively referred to as vignetting effects \cite{Goldman2005}, the coefficient of proportionality is not uniform across the image plane, typically exposing radial fall-off from the center to the edges. This is important in the context of our work, thus we distinguish between the two terms and define
\begin{equation}
E(\mathbf{u}) = L(\mathbf{u})V(\mathbf{u}),
\label{eqn:irradiance}
\end{equation}
where $L$ are pixel radiances and $V$ are image location dependent vignetting attenuation coefficients.
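As an illustration of \eref{irradiance}, the following Python sketch attenuates a uniform radiance image with a hypothetical radial vignetting profile; the quadratic falloff model is our assumption for illustration, not the calibrated map used later in the paper:

```python
import numpy as np

# Per-pixel irradiance as radiance attenuated by vignetting: E(u) = L(u) V(u).
# The radial falloff below is an illustrative assumption.
H, W = 480, 640
L = np.full((H, W), 100.0)               # uniform scene radiance

ys, xs = np.mgrid[0:H, 0:W]
r = np.hypot(ys - H / 2, xs - W / 2)     # distance from the image center
V = 1.0 - 0.5 * (r / r.max()) ** 2       # attenuation: 1.0 center, 0.5 corners

E = L * V                                # Eq. (irradiance)
print(E[H // 2, W // 2])                 # 100.0: no attenuation at the center
```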
\subsection{Noise sources}
The image formation process is affected by multiple error sources~\cite{Healey1994}. Due to the quantum nature of light, the number of photo-induced electrons collected at a photon well follows a Poisson distribution; its uncertainty is called photon shot noise (PSN). Dark current contributes thermo-induced electrons that add up to the accumulated energy. Several other noise sources associated with conversion from charge to digital values are collectively referred to as readout noise. Due to imprecisions in the manufacturing process, for different pixels the photo-response is not uniform (PRNU) and so is the amount of dark current (DCNU).
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\subsection{Dynamic range}
The noise and the finite capacity of photon wells limit the range of radiant energies $\left[x^{\text{min}},x^{\text{max}}\right]$ that can be accumulated and detected by a camera. For any given exposure time $t$, this determines the \emph{detectable range} of irradiances
\begin{equation}
\epsilon_{t} =
\left[\frac{x^{\text{min}}}{t},\frac{x^{\text{max}}}{t}\right] =
\left[e^{\text{min}}_t,e^{\text{max}}_t\right].
\label{eqn:detectable-range}
\end{equation}
Scene points inducing irradiance below or above this range will appear as black (underexposed) or white (overexposed) pixels respectively. The ratio of the boundary values is independent of the exposure time and is called the \emph{dynamic range}.
Let $\mathcal{T}$ be the set of all supported exposure time settings. The union of their corresponding detectable ranges
\begin{equation}
\epsilon_{\text{sys}} =
\bigcup_{t\in\mathcal{T}}\epsilon_{t} =
\left[\frac{x^{\text{min}}}{\max\mathcal{T}},\frac{x^{\text{max}}}{\min\mathcal{T}}\right] =
\left[e^{\text{min}},e^{\text{max}}\right]
\label{eqn:system-detectable-range}
\end{equation}
is the \emph{effective detectable range} of the camera. Irradiances outside this range cannot be measured.
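The two ranges \eref{detectable-range} and \eref{system-detectable-range} can be sketched as follows; the detectable energy limits and the set of supported exposure times are made-up illustrative values:

```python
# Detectable irradiance range at a given exposure time, and the effective
# detectable range over all supported settings. All numbers are illustrative.
x_min, x_max = 10.0, 10000.0           # hypothetical detectable energy limits
T = [1 / 1000, 1 / 250, 1 / 60, 1 / 15]  # hypothetical exposure time settings

def detectable_range(t):
    """[e_min_t, e_max_t]: irradiances detectable at exposure time t."""
    return (x_min / t, x_max / t)

# Effective detectable range: from x_min at the longest exposure time to
# x_max at the shortest one.
e_min = x_min / max(T)
e_max = x_max / min(T)
print(detectable_range(1 / 60))
print((e_min, e_max))
```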
\subsection{HDR imaging}
The goal of HDR imaging is to recover an irradiance image of a scene in its full dynamic range. A set of LDR images $Z_i$ is taken at different exposure times $t_i$. Each of them is converted into irradiance image $\hat{E}_i$ according to \eref{rif}. The irradiance estimates in these images are in a linear space at a fixed common scale, which allows us to combine them using a weighted average:
\begin{equation}
\bar{E}
=
\frac{\sum_{i=1}^{n}W_i\hat{E}_{i}}{\sum_{i=1}^{n}W_i},
\label{eqn:hdr}
\end{equation}
where $n$ is the number of images and $W_i$ are per-pixel confidence weights. The purpose of the weights is to discard poorly exposed pixels that carry no information and to emphasize more reliable samples.
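A minimal sketch of the fusion rule \eref{hdr}, assuming a linear response and a binary weighting that simply rejects saturated pixels (both are illustrative simplifications, not the scheme used later in the paper):

```python
import numpy as np

# Weighted-average HDR fusion of several LDR exposures, Eq. (hdr).
def fuse_hdr(Z, t, g, w):
    """Z: LDR images, t: exposure times, g: inverse CRF, w: weight function."""
    E_hat = [g(Zi) / ti for Zi, ti in zip(Z, t)]   # per-image irradiance
    W = [w(Zi) for Zi in Z]                        # per-pixel confidences
    num = sum(Wi * Ei for Wi, Ei in zip(W, E_hat))
    den = sum(W)
    return num / np.maximum(den, 1e-12)            # avoid division by zero

g = lambda z: z / 255.0                            # assume a linear CRF
w = lambda z: ((z > 5) & (z < 250)).astype(float)  # reject saturated pixels

# Two exposures of a scene with a bright (E=20) and a very bright (E=10000)
# point; each point is well exposed in exactly one of the images.
Z = [np.array([[51.0, 255.0]]), np.array([[0.051, 25.5]])]
E_bar = fuse_hdr(Z, [1 / 100, 1 / 100000], g, w)
print(E_bar)   # each pixel is recovered from its single valid observation
```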
Various weighting schemes were proposed in the literature. In early work, somewhat ad hoc options were used, including the gradient of the CRF \cite{Mann1995} and a hat function \cite{Debevec1997}. Later, Kirk and Andersen~\cite{Kirk2006} characterized several other weighting schemes, concluding that variance-based weighting gives the best lower bound on the signal-to-noise ratio. Granados \etal[Granados2010] presented a rigorous camera noise model that takes into account both temporal and spatial sources. They note that variance-based weighting indeed yields the Maximum Likelihood Estimate (MLE); however, due to the photon shot noise, the variance of a sample depends on the true irradiance. This introduces a circular dependency that they propose to solve with iterative estimation.
\section{Related work}
\label{sec:rw}
\subsection{HDR-aware mapping}
The idea of using HDR colors in the context of 3D reconstruction with \mbox{RGB-D}\xspace cameras was pioneered by Meilland \etal[Meilland2013]. In their visual SLAM system the scene is modeled by a graph of super-resolved keyframes. Each keyframe is a product of HDR-aware fusion of a sequence of aligned camera frames. They model the camera response with the gamma function and ignore the vignetting effects. Unlike the classical HDR imaging where exposure time of each frame is preselected and known, they rely on the built-in auto exposure controller (AEC) function. This means they need to estimate the relative exposure time change jointly with the camera transform during camera tracking.
Recently, Li \etal[Li2016] extended a volumetric SLAM framework to accumulate colors in HDR space. They also use AEC, but solve the camera tracking problem in the normalized radiance space that is independent of exposure time. Once the frames are aligned, the exposure time change is computed as a weighted average of the radiance ratios between corresponding pixels. They use a calibrated CRF and variance-based weights to fuse new color measurements into the global volumetric representation, but do not account for the vignetting effects.
Zhang \etal[Zhang2016] noted that frame-by-frame estimation of exposure time changes suffers from drift accumulation. They propose an offline method where per-frame exposure times and point radiances are the unknowns in a nonlinear optimization problem. Solving it yields globally optimal HDR textures for the reconstructed 3D model.
Unlike the mentioned works, we propose to actively control the exposure time. This means drift-free operation without the need for global optimization. Additionally, we use full radiometric calibration including vignetting effects.
\subsection{Exposure time control}
The problem of selecting a set of exposure times (\emph{bracketing set}) has been studied in the HDR imaging literature. Barakat \etal[Barakat2008] proposed several algorithms aimed at computing minimal bracketing sets. However, they were interested in obtaining a single non-saturated observation per pixel and did not consider other properties of the reconstruction such as the signal-to-noise ratio (SNR). Hasinoff \etal[Hasinoff2010] investigated the problem of selecting exposure times and gains for noise-optimal HDR capture; however, they assume that the distribution of radiances in the scene is known \emph{a priori}. Ilstrup and Manduchi \cite{Ilstrup2010} introduced an algorithm that determines the single optimal exposure time for a scene given one suboptimally exposed image. In the context of visual odometry, Zhang \etal[Zhang2017] proposed an active exposure controller that maximizes a gradient-based image quality metric. Differently from these works, our exposure time controller has no \emph{a priori} knowledge about the scene and aims to maximize the SNR of every reconstructed point.
\section{System architecture}
\label{sec:arch}
Our system builds upon a typical dense SLAM pipeline, where map updates are alternated with camera tracking \wrt the rendered view of the map. Specifically, we extended the open-source system of Whelan \etal[Whelan2015], where the map is represented by a surfel cloud. However, it should be noted that our approach can be combined with other map representations (\eg volumetric or keyframe-based).
A simplified system architecture diagram is given in \fref{architecture}. The components that are not essential for the topic of this paper (\eg loop detection, nonrigid map deformation) are not shown. The frames coming from the camera are radiometrically rectified using the camera model described in \cref{sec:model}. After the current pose of the camera is computed by the frame-to-model tracking module, the depth and rectified color images are fused into the map according to the rules outlined in \cref{sec:inc}. Next, the view of the map from the current pose is predicted, and this prediction is used by the exposure time controller as detailed in \cref{sec:ctrl}.
\begin{figure}[!t]
\centering
\includestandalone[width=\columnwidth]{architecture}
\vspace*{1pt}
\caption
{
Simplified system architecture diagram. The blue components are typical for dense SLAM systems and form the mapping loop (bold arrows). We introduce the yellow components to support acquisition of HDR colors.
}
\label{fig:architecture}
\end{figure}
\section{Camera noise model}
\label{sec:model}
Online dense 3D reconstruction is dominated by \mbox{RGB-D}\xspace sensors. Typically, they are equipped with low-end color cameras that do not provide access to the raw measurements of radiant energy. Instead, a certain on-chip post-processing (\eg denoising, black frame subtraction) takes place. Consequently, accurate noise characterization as in \cite{Hasinoff2010,Granados2010} is not feasible. Lacking knowledge about the camera internals, it has been suggested to model the noise sources jointly with a compound Gaussian \cite{Robertson1999}. Liu \etal[Liu2008] proposed a model with two additive zero-mean components:
\begin{equation}
Z(\mathbf{u}) = f(tE(\mathbf{u}) + n_s + n_c).
\label{eqn:noise-simplified}
\end{equation}
The first component $n_s$ accounts for the noise dependent on the signal; its variance $\sigma^2_s$ is proportional to the exposure. The second component $n_c$ captures the independent noise sources and has a fixed variance. Our experiments with Asus Xtion Live Pro\xspace cameras (see \cref{ssec:ver}) have confirmed the general suitability of this model, however we found that the independent component can be neglected.
The model \eref{noise-simplified} ignores spatially varying error sources. However, it was demonstrated that vignetting effects are significant in \mbox{RGB-D}\xspace cameras; correcting them has a positive impact on both tracking accuracy and map quality \cite{Engel2016a,Alexandrov2016}. Therefore, we include vignetting effects and propose the following model of individual pixel intensity:
\begin{equation}
Z(\mathbf{u})=f(X(\mathbf{u})+n_{s}),
\label{eqn:imaging-equation}
\end{equation}
where $ X(\mathbf{u})=tL(\mathbf{u})V(\mathbf{u})$ is the exposure, and its variance $\sigma_{X(\mathbf{u})}^{2}$ is proportional to $X(\mathbf{u})$ with some coefficient $a$. We derive an estimator for the radiance
\begin{equation}
\hat{L}(\mathbf{u}) =
\frac{f^{-1}(Z(\mathbf{u}))}{tV(\mathbf{u})} =
\frac{X(\mathbf{u})}{tV(\mathbf{u})},
\label{eqn:radiance}
\end{equation}
and its uncertainty
\begin{equation}
\sigma_{L(\mathbf{u})}^{2} =
\frac{\sigma_{X(\mathbf{u})}^{2}}{t^{2}V(\mathbf{u})^{2}} =
\frac{at\hat{L}(\mathbf{u})V(\mathbf{u})}{t^{2}V(\mathbf{u})^{2}} =
\frac{a\hat{L}(\mathbf{u})}{tV(\mathbf{u})}.
\label{eqn:radiance-uncertainty}
\end{equation}
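A single-pixel sketch of the estimator \eref{radiance} and its uncertainty \eref{radiance-uncertainty}; the linear inverse response and the numeric values of $v$ and $a$ are illustrative assumptions:

```python
# Radiance estimate and its variance for one pixel observation.
def radiance_with_uncertainty(z, t, v, g, a):
    L_hat = g(z) / (t * v)         # Eq. (radiance)
    var_L = a * L_hat / (t * v)    # Eq. (radiance-uncertainty)
    return L_hat, var_L

g = lambda z: z / 255.0            # assume a linear, normalized inverse CRF
L_hat, var_L = radiance_with_uncertainty(z=128, t=1 / 60, v=0.9, g=g, a=1e-3)
print(L_hat > 0 and var_L > 0)     # True
```

Note that for a fixed radiance the variance scales as $1/t$, so a longer valid exposure yields a more reliable sample.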
\subsection{Calibration}
Computation of pixel radiance using \eref{radiance} requires the function $g \equiv f^{-1}$ and the map of vignetting attenuation factors $V$ to be known. We use the method of Debevec \etal[Debevec1997] to obtain the former. As the absolute scale is not important, the calibrated response function is normalized such that $g(255) = 1$. The latter is calibrated using the method of Alexandrov \etal[Alexandrov2016]. It is worth noting that their method cannot separate the vignetting effects from PRNU, jointly modeling them as a single attenuation factor per pixel. \fref{calibration} shows the obtained radiometric calibration of one of the color channels of an Asus Xtion Live Pro\xspace camera.
\begin{figure}[!t]
\centering
\subfloat{\includegraphics[width=0.48\columnwidth]{radiometric_response}%
\label{fig:rr}}
\hfil
\subfloat{\includegraphics[width=0.48\columnwidth]{vignetting_response}%
\label{fig:vr}}
\vspace*{5pt}
\caption{
Radiometric calibration of the red color channel of an Asus Xtion Live Pro\xspace camera. The camera response function (left) nonlinearly maps exposure to pixel intensity. The vignetting effects (right) introduce spatially varying attenuation; while the central pixels are not affected, the corners are up to two times darker.
}
\label{fig:calibration}
\end{figure}
\subsection{Verification}
\label{ssec:ver}
We verify the proposed noise model \eref{imaging-equation} by demonstrating that the noise variance is indeed linearly dependent on the signal. To show this, we fix the camera in front of a high dynamic range scene and select a bracketing set such that exposures span the whole effective detectable range of the camera. For every exposure time setting we capture 900 frames and compute the mean and variance of every pixel's exposure $g(Z(\mathbf{u}))$. Next we group these observations based on the mean exposure into bins spanning the $[0,1]$ range. In each bin we select the median variance observation.
\fref{exposure-variance} shows the obtained relation between the exposure and variance. We observe a linear dependency which supports the signal-dependent noise component of our model. Furthermore, there is no noticeable offset along the y-axis, suggesting that the signal-independent component can be ignored. The drop in the variance near the maximum exposure is explained by the fact that the distribution is truncated due to the sensor saturation.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{exposure_variance}
\caption
{
Dependency between exposure and its variance for different color channels. Straight lines through the origin are fitted to the data points to evince the suitability of the noise model \eref{imaging-equation}.
}
\label{fig:exposure-variance}
\end{figure}
The slope of the fitted lines corresponds to the coefficient $a$ in \eref{radiance-uncertainty}. As expected, it is significantly larger for the blue channel. Our experiments show that it depends on the gain setting of the camera. Since our system operates with a fixed gain, the exact relation is not important.
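The fitting step can be sketched as follows: for a zero-intercept model $y = ax$, the least-squares slope is $\sum_i x_i y_i / \sum_i x_i^2$. The data below is synthetic, standing in for the binned (mean exposure, variance) pairs:

```python
import numpy as np

# Fit a line through the origin to (mean exposure, variance) pairs; the
# slope estimates the noise coefficient a in Eq. (radiance-uncertainty).
rng = np.random.default_rng(0)
a_true = 2e-3
x = np.linspace(0.05, 0.95, 50)                 # mean exposures in [0, 1]
y = a_true * x + rng.normal(0, 1e-5, x.size)    # noisy synthetic variances

# Least-squares slope for the zero-intercept model y = a x.
a_est = np.sum(x * y) / np.sum(x * x)
print(round(a_est, 4))
```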
\section{Incremental HDR color reconstruction}
\label{sec:inc}
In line with the incremental nature of SLAM, we formulate an online HDR color reconstruction scheme. As the mapping progresses, the camera delivers a stream of color images $Z_i$ taken with exposure times $t_i$. Consider a scene point $\mathbf{x}$ that projects onto the pixel locations $\mathbf{u}_i$ in these images. The task is to maintain an estimate of the point radiance and its uncertainty by merging the observations $z_i = Z_i(\mathbf{u}_i)$ as they become available.
Each pixel observation $z_i$ is an RGB triplet; we consider it \emph{valid} if all color channels are well-exposed and \emph{invalid} otherwise. Invalid pixels are never fused into the radiance estimate, even if some color channels are within the saturation limits. Effectively, this means that updates to all color channels of a point radiance estimate are synchronized. This is in contrast with the standard HDR imaging, where color channels are treated in isolation.
This rule is motivated by the fact that the camera motion and imprecisions in tracking cause errors in pixel-to-point association. Thus, independent updates of color channels may introduce arbitrary distortion of the apparent point color. In a batch processing system this problem can be addressed by consistency tests \cite{Granados2013}. In the incremental setting it cannot be resolved entirely; synchronized updates reduce the impact, allowing only blur-like distortions.
An immediate consequence of this rule is that the color of a reconstructed scene point can be in one of the two states: \emph{incomplete} and \emph{complete}. The color is created incomplete; it turns complete when the first valid observation of this point is made. A toy example of such evolution is given in \fref{evolution}. Below we formally define the performed operations based on the state of the color and observation.
\begin{figure*}[!t]
\centering
\includestandalone[width=\textwidth]{evolution}
\vspace*{3pt}
\caption
{
A HDR color (top row) evolves over time as new pixel observations (bottom row) become available. Upon creation, the color is incomplete and no bounds on its radiance are known (indicated by full color bars). The arriving invalid observations (indicated by red background) are used to update the radiance bounds. For example, the overexposed red channel of the first observation means that the radiance of the red channel is at least 0.061. When the first valid observation becomes available (indicated by green background), the color turns complete and the bounds are replaced with exact radiance estimates and a confidence value. All subsequent invalid observations are ignored, whereas the valid observations are averaged in, increasing the certainty of the radiance estimate.
}
\label{fig:evolution}
\end{figure*}
\subsection{Incomplete state}
Upon creation and until the first valid observation becomes available, a point is in the incomplete state, meaning that it has no exact radiance estimate. Instead, it is represented by a tuple of ranges \tuple{\lambda^{R},\lambda^{G},\lambda^{B}} that bound the radiance of each color channel and inform the decisions of the exposure time controller (as detailed in \cref{sec:ctrl}).
Starting from the whole effective dynamic range of the camera $\lambda_{\text{sys}} = \left[l^{\text{min}},l^{\text{max}}\right]$ the upper and lower bounds are progressively refined using the information contained in the invalid observations. An important insight is that underexposed channels give an upper bound on the radiance, while overexposed channels give a lower bound.
Below we define how the bounds are computed from invalid pixels. All color channels are treated the same way, thus we only discuss a single channel and drop the superscripts to simplify the notation. Suppose that a set of observations $\mathcal{Z}=\{z_i\}$ was made and a set $\mathcal{L}$ of their radiances was computed using \eref{radiance}. Let $\check{\mathcal{L}}$, $\hat{\mathcal{L}}$, and $\bar{\mathcal{L}}$ be subsets containing radiances from under-, over-, and well-exposed observations respectively. From these data, the radiance range is computed as
\begin{equation}
\lambda =
\begin{cases}
\left[\max(\hat{\mathcal{L}} \cup \{l^{\text{min}}\}),\min(\check{\mathcal{L}} \cup \{l^{\text{max}}\})\right] & \text{if }\bar{\mathcal{L}}=\varnothing\\
\left[\min(\bar{\mathcal{L}}),\max(\bar{\mathcal{L}})\right] & \text{otherwise}.
\end{cases}
\label{eqn:incomplete}
\end{equation}
Note that this is trivially translated into incremental updates.
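An incremental form of \eref{incomplete} for a single channel might look as follows; the numbers are illustrative, and for brevity a valid observation here simply pins the range to the observed radiance:

```python
# Incremental update of one channel's radiance bounds. Bounds start at the
# effective range [l_min, l_max]; overexposed observations raise the lower
# bound, underexposed observations lower the upper bound.
l_min, l_max = 0.01, 100.0

def update_bounds(lo, hi, L_obs, status):
    """status of the converted radiance L_obs: 'under', 'over', or 'valid'."""
    if status == 'over':                 # saturated high: radiance >= L_obs
        return max(lo, L_obs), hi
    if status == 'under':                # saturated low: radiance <= L_obs
        return lo, min(hi, L_obs)
    return L_obs, L_obs                  # first valid sample pins the range

lo, hi = l_min, l_max
lo, hi = update_bounds(lo, hi, 0.061, 'over')    # -> (0.061, 100.0)
lo, hi = update_bounds(lo, hi, 5.0, 'under')     # -> (0.061, 5.0)
print((lo, hi))
```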
\subsection{Complete state}
After the first valid observation becomes available, a point is in the complete state, meaning that it has an exact radiance estimate. In order to reduce the variance of this estimate, new valid observations are averaged in.
Under the assumption of compound Gaussian noise, Granados \etal[Granados2010] have shown that optimal HDR reconstruction is achieved if observations are weighted using the inverse of sample variance. In their system the photon and dark current shot noises are modeled, leading to a circular dependency between the estimates of radiance and sample variance. Thus an iterative optimization is required, ruling out online operation.
Differently from them, in our simplified camera model the dark current shot noise is excluded and the sample variance is assumed to be proportional to the true radiance. Substituting the radiance \eref{radiance} and inverse of sample variance from \eref{radiance-uncertainty} as weight into \eref{hdr}, we obtain the following formula for the radiance estimate after the $k^{\text{th}}$ measurement:
\begin{equation}
\bar{L}_k =
\frac{\displaystyle\sum_{i=1}^{k}\frac{t_{i}v_{i}}{aL}\frac{g(z_{i})}{t_{i}v_{i}}} {\displaystyle\sum_{i=1}^{k}\frac{t_{i}v_{i}}{aL}} =
\frac{\displaystyle\sum_{i=1}^{k}g(z_{i})}{\displaystyle\sum_{i=1}^{k}t_iv_i},
\label{eqn:complete-radiance}
\end{equation}
where $v_i = V(\mathbf{u}_i)$ are the vignetting attenuation factors at the locations of pixel observations. This estimate can be updated incrementally as new valid observations become available \cite{West1979}. The value in the denominator is the accumulated weight $w_k$ of the radiance estimate.
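The incremental form of \eref{complete-radiance} can be sketched with two running sums (the observation values are illustrative):

```python
# Incremental radiance fusion: keep running sums of the exposures g(z_i) and
# of t_i * v_i; the radiance is their ratio, the denominator is the
# accumulated weight w_k.
class RadianceEstimate:
    def __init__(self):
        self.sum_exposure = 0.0   # running sum of g(z_i)
        self.weight = 0.0         # running sum of t_i * v_i

    def fuse(self, g_z, t, v):
        self.sum_exposure += g_z
        self.weight += t * v

    @property
    def radiance(self):
        return self.sum_exposure / self.weight

est = RadianceEstimate()
est.fuse(g_z=0.2, t=1 / 100, v=1.0)   # valid observation of radiance 20
est.fuse(g_z=0.1, t=1 / 200, v=1.0)   # consistent second observation
print(round(est.radiance, 6))         # 20.0
```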
\section{Map-aware exposure time control}
\label{sec:ctrl}
The long-term goal in HDR mapping is to obtain a reliable color estimate for each reconstructed scene point.
In this section we describe a controller that chooses an exposure time with maximum expected utility in the next frame.
There are two types of points in the map: incomplete and complete. Ideally, each point should become complete and have high SNR. Therefore, the objective for incomplete points is to obtain a valid observation, and for complete points is to increase their accumulated weight.
The camera motion is not controlled by the mapping system; it has no knowledge of the planned trajectory. The only reasonable assumption is that the motion is locally smooth and the velocity is such that two consecutive frames capture almost the same part of the scene. This allows us to restrict the control decisions to a subset of the map visible in the last frame. Conveniently, the view prediction component is already a part of the SLAM loop; it renders the current state of the reconstruction into the image space $\Omega\subset\mathbb{N}^{2}$ for the purposes of frame-to-model tracking. Below we assume that besides the depth map, it produces three more maps: radiance $L:\Omega\rightarrow\mathbb{R}^3$, weight $W:\Omega\rightarrow\mathbb{R}$, and radiance bounds $\Lambda:\Omega\rightarrow\mathbb{R}^6$.
We define a utility function that evaluates the expected gain of choosing a particular exposure time $t$ given the rendered state of the reconstruction:
\begin{equation}
U(t,L,W,\Lambda)=U_{e}(t,\Lambda)+U_{r}(t,L,W).
\label{eqn:utility}
\end{equation}
It consists of two terms, exploration $U_e$ and refinement $U_r$. The former is targeted at the incomplete points and analyzes the radiance bounds map $\Lambda$; the latter is concerned with the complete points and analyzes the radiance and weight maps $L$ and $W$. The balance between the exploration and refinement can be adjusted by scaling one of the terms.
\subsection{Exploration}
The controller aims to turn incomplete points into complete ones by finding an exposure time that yields a valid observation. This search is guided by the radiance bounds maintained for each incomplete point, as described in the previous section.
We assume that the true radiance of a point is log-uniformly distributed within the radiance bounds $\lambda$. Therefore, given the detectable range $\lambda_t$ of a certain exposure time $t$, the probability that the point will be observed without saturation can be computed as
\begin{equation}
p(\lambda,\lambda_{t}) =
\frac{\left\langle\lambda\cap\lambda_{t}\right\rangle}{\left\langle\lambda\right\rangle},
\end{equation}
where $\left\langle\cdot\right\rangle$ denotes the interval length in log-space. Since each point has three color channels, a product of the individual channel probabilities has to be computed. Denoting the subset of pixel locations of incomplete points as $\mathcal{I}$, the exploration utility is therefore defined as:
\begin{equation}
U_{e}(t,\Lambda) =
\sum_{\mathbf{u}\in\mathcal{I}}\prod_{c\in\left\{ R,G,B\right\} }p(\Lambda^{c}(\mathbf{u}),\lambda_{t}^{c}).
\end{equation}
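The per-point probability under the log-uniform assumption can be sketched as follows (the bounds and detectable range are illustrative values):

```python
import math

# Probability that a point with log-uniformly distributed radiance within
# bounds lam is observed without saturation at a given exposure time: the
# log-length of the overlap with the detectable range lam_t, divided by the
# log-length of lam.
def p_valid(lam, lam_t):
    lo, hi = max(lam[0], lam_t[0]), min(lam[1], lam_t[1])
    if hi <= lo:
        return 0.0                       # ranges do not overlap
    return math.log(hi / lo) / math.log(lam[1] / lam[0])

# Bounds (1, 100), detectable range (10, 1000): half the log-range overlaps.
print(p_valid((1.0, 100.0), (10.0, 1000.0)))  # 0.5
```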
\subsection{Refinement}
A complete point has an exact, albeit noisy, estimate of its radiance. It can be improved by integrating additional, preferably low-variance, samples. The goal is thus, for each point, to get a valid observation at the maximum possible exposure time, as it will carry the highest possible weight.
As discussed above, our attention is limited to the points visible in the previous frame. Their radiances and accumulated weights were rendered into the $L$ and $W$ maps. Assuming that the points will project to approximately the same pixel locations in the next frame, and since vignetting effects exhibit spatially smooth variations, we expect to receive irradiance $E = LV$ at the sensor. Depending on the exposure time, some of these will fall in the detectable range and give valid observations. For exposure time $t$, let
\begin{equation}
\mathcal{V}_{t}=\left\{\mathbf{u}\in\Omega\mid E(\mathbf{u})\in\epsilon_t\right\}
\label{eqn:valid-pixels}
\end{equation}
be the subset of pixels that will have valid observations. They will be fused into the model. The contribution of each observation is proportional to the exposure time and inversely proportional to the weight already accumulated by the point. Thus, we define the refinement utility as:
\begin{equation}
U_r(t,L,W)=\sum_{\mathbf{u}\in\mathcal{V}_{t}}\frac{t}{W(\mathbf{u})}.
\label{eqn:refine}
\end{equation}
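A sketch of this utility, assuming the predicted irradiance map $E=LV$ and the weight map $W$ are given as arrays and the detectable range $\epsilon_t$ as a pair of bounds (the parameter names are ours):

```python
import numpy as np

def refinement_utility(t, E, W, eps_lo, eps_hi):
    """U_r(t, L, W): sum of t / W(u) over the pixels V_t whose predicted
    irradiance E(u) = L(u) V(u) yields a valid observation at exposure t.
    `eps_lo`/`eps_hi` bound the detectable range epsilon_t (our naming)."""
    E = np.asarray(E, dtype=float)
    W = np.asarray(W, dtype=float)
    valid = (E >= eps_lo) & (E <= eps_hi)   # the set V_t of valid pixels
    return float(np.sum(t / W[valid]))
```

As with the exploration utility, the controller maximizes this quantity over the candidate exposure times.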
\section{Implementation details}
\label{sec:impl}
We based our system on the open-source implementation of ElasticFusion~\cite{Whelan2015}. It is not HDR-aware and works with colors in LDR image space, conventionally representing them as 24-bit RGB triplets. However, the space allocated for each color is 64 bits. By fitting our HDR color representation into this space, we avoid any impact on the memory footprint. Complete colors are represented by three radiances and a common weight, stored as 16-bit integers, which is sufficient to represent the full dynamic range supported by the system. Incomplete colors are represented by a zero weight and three radiance ranges (\ie 6 numbers), each truncated into an 8-bit integer.
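For illustration, such a 64-bit packing could look as follows. The text above fixes only the field widths and the zero-weight convention for incomplete colors; the concrete field order (weight stored first, acting as the complete/incomplete discriminator) is our assumption for this sketch:

```python
import struct

SLOT_BYTES = 8  # the 64 bits ElasticFusion allocates per color

def pack_complete(weight, r, g, b):
    """Complete color: a non-zero 16-bit weight followed by three
    16-bit radiances (field order is our assumption)."""
    assert 0 < weight <= 0xFFFF, "a complete color carries a non-zero weight"
    return struct.pack('<4H', weight, r, g, b)

def pack_incomplete(bounds):
    """Incomplete color: zero weight as a discriminator, then six 8-bit
    truncated radiance-range bounds, (lo, hi) for each of R, G, B."""
    flat = [v for pair in bounds for v in pair]
    return struct.pack('<H6B', 0, *flat)   # 2 + 6 = 8 bytes total

def unpack(blob):
    weight = struct.unpack_from('<H', blob)[0]
    if weight:  # non-zero weight -> complete color
        return ('complete', struct.unpack('<4H', blob)[1:], weight)
    lo_hi = struct.unpack_from('<6B', blob, 2)
    return ('incomplete', list(zip(lo_hi[0::2], lo_hi[1::2])))
```

Both variants occupy exactly the 8-byte slot, so switching a point from incomplete to complete is an in-place overwrite.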
How quickly a camera reacts to the changes in exposure time setting (\ie control lag) is of high practical importance. The Asus Xtion Live Pro\xspace cameras that we have tested respond to the control commands within 3 frames. Therefore, while tracking and data fusion run at the full framerate, the controller is limited to approximately 10 Hz.
The base SLAM implementation utilizes dense direct odometry with geometric and photometric residuals as a tracking front-end. In our implementation the photometric residual is lifted into the HDR color space.
\section{Experimental evaluation}
\label{sec:eval}
\subsection{Exposure time selection with a static camera}
\label{sub:exposure-time-selection-on-static-scene}
We quantitatively demonstrate the efficiency of the proposed exposure time controller in comparison with a set of baselines. The baseline controllers sweep through the allowed exposure time range in upward and downward direction with either multiplicative or additive steps.
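A possible reconstruction of such a sweep baseline (the step size and the exact turnaround behavior are our assumptions for illustration):

```python
def sweep_exposures(t_min, t_max, step=2.0, multiplicative=True):
    """Baseline controller: sweep the allowed exposure time range upward,
    then back down, with multiplicative or additive steps."""
    ts, t = [], t_min
    while t <= t_max:
        ts.append(t)
        t = t * step if multiplicative else t + step
    return ts + ts[-2::-1]  # descend again without repeating the top value
```

The baselines simply replay this schedule, whereas our controller selects each exposure from the utilities computed on the current map.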
We fix the camera in front of a high dynamic range scene and perform HDR reconstruction using the standard batch-processing approach to obtain the ground truth (see \fref{bunny}). Next, we perform incremental HDR reconstruction using the method described in \cref{sec:inc}, selecting the next exposure time with our controller and with each of the baselines. After fusing each frame, the mean reconstruction error \wrt the ground truth and the fraction of complete points are recorded.
\begin{figure}[!t]
\centering
\includestandalone[width=\columnwidth]{bunny}
\caption
{
Top: ground truth reconstruction of a high dynamic range scene. Bottom row: subset of used images (taken with minimum, mid-range, and maximum exposure times).
}
\label{fig:bunny}
\end{figure}
\fref{controller-evaluation} demonstrates the obtained results. Our controller explores the scene faster, leaving no incomplete points after observing 4 frames. The mean reconstruction error also decreases faster, reaching a steady state of about 2\% after integrating 15 frames.
\begin{figure}[!t]
\centerline
{
\includegraphics[width=1.0\columnwidth]{controller_evaluation}
}
\caption
{
Evaluation of the controllers in terms of the percentage of complete points and mean reconstruction error. Our controller (\ColorGreen[$\blacksquare$]), multiplicative controller (\ColorRed[$\blacktriangle$]), and incremental controller (\ColorBlue[$\bullet$]).
}
\label{fig:controller-evaluation}
\end{figure}
\subsection{HDR reconstruction with a moving camera}
We qualitatively demonstrate the performance of our system by reconstructing several office scenes and comparing the results with the maps produced by vanilla ElasticFusion with and without AEC. \fref{cmp-entrance,cmp} present side-by-side comparisons. The LDR reconstructions have numerous artifacts in their color textures. With AEC disabled (\fref{cmp-entrance}), the insufficient dynamic range of the LDR colors manifests in overexposed white surfaces that appear to have the same color in the reconstruction. With AEC enabled (\fref{cmp}), the changes in exposure time are not accounted for by the LDR system and manifest as both abrupt and smooth color gradients on the walls. The HDR reconstructions do not have such defects.
\begin{figure}[!t]
\centering
\subfloat{\includegraphics[width=\columnwidth]{cmp_michi_hdr}}%
\vspace*{-6pt}
\subfloat{\includegraphics[width=\columnwidth]{cmp_michi_ldr_aec}}%
\vspace*{-6pt}
\subfloat{\includegraphics[width=\columnwidth]{cmp_sofa_hdr}}%
\vspace*{-6pt}
\subfloat{\includegraphics[width=\columnwidth]{cmp_sofa_ldr_aec}}%
\caption{
First and third: office scenes reconstructed in HDR with the proposed exposure time controller. Second and fourth: the same scenes reconstructed in LDR with ElasticFusion using camera built-in AEC function. Note the abrupt changes in color texture on the walls and on the floor in LDR reconstructions.
}
\label{fig:cmp}
\end{figure}
\section{Conclusions and future work}
\label{sec:ciao}
We presented an HDR-aware dense 3D reconstruction system. It leverages full radiometric camera calibration and relies on a simplified noise model tailored to off-the-shelf \mbox{RGB-D}\xspace sensors. We introduced the concept of incomplete/complete colors, which enables an incremental HDR color fusion that is not common among classical HDR imaging methods. We also introduced an active exposure time controller into the mapping loop. It analyzes the reconstructed map to make decisions that maximize the information gain in the next frames. In a set of experiments we demonstrated the improved visual quality of color appearance in the acquired models compared to a baseline LDR system.
In the future it will be interesting to evaluate the impact that improved HDR textures have on the tracking performance. Another research direction would be to address the changes in scene illumination and reflective materials.
\section*{Acknowledgments}
The work presented in this paper has been funded by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 600623 ("STRANDS") and No. 610532 ("SQUIRREL"). We thank the anonymous reviewers for their helpful comments.
\clearpage
{\small
\bibliographystyle{ieee}
\section*{Abstract}
{\bf
Recently in Ref.\,\cite{Wen:2021qgx}, one of the authors introduced the balanced partial entanglement (BPE), which has been proposed to be dual to the entanglement wedge cross-section (EWCS). In this paper, we explicitly demonstrate that the BPE could be considered as a proper measure of the total intrinsic correlation between two subsystems in a mixed state. The total correlation includes certain crossing correlations, which are minimized by particular balance conditions. By constructing a class of purifications from Euclidean path-integrals, we find that the balanced crossing correlations show universality and can be considered as the generalization of the Markov gap for the canonical purification. We also test the relation between the BPE and the EWCS in three-dimensional asymptotically flat holography. We find that the balanced crossing correlation vanishes for the field theory invariant under BMS$_3$ symmetry (BMSFT) and dual to the Einstein gravity, indicating the possibility of a perfect Markov recovery. We further elucidate these crossing correlations as a signature of tripartite entanglement and explain their interpretation in both AdS and non-AdS holography.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}\label{sec:intro}
The structure of quantum entanglement plays a fundamental role in understanding the quantum information-theoretic nature of quantum gravity. It is captured by the entanglement entropy, which has been extensively studied in quantum field theories and many-body systems \cite{Calabrese:2004eu}. In the context of the AdS/CFT correspondence \cite{Maldacena:1997re, Gubser:1998bc, Witten:1998qj}, the entanglement entropy of a boundary region $A$ has a dual description in terms of the area of a minimal surface $\mathcal{E}_{A}$ on the gravity side, known as the Ryu-Takayanagi formula \cite{Ryu:2006bv, Ryu:2006ef, Nishioka:2009un}
\begin{align}
S_{A}=\frac{\text{Area}\,(\mathcal{E}_{A})}{4G_N}\,.
\end{align}
The entanglement entropy is a good measure of quantum entanglement between a subsystem and its complement when the boundary state is pure. However, for mixed states, the entanglement entropy is not a proper measure to capture the intrinsic correlation between the subsystems \cite{Horodecki:2009zz, Takayanagi:2018zqx}. Recently, the study of said correlation in mixed states has attracted considerable attention. Several quantities have been defined to measure the mixed state correlations, like the mutual information, the (logarithmic) entanglement negativity \cite{Calabrese:2012ew, Calabrese:2012nk, negativity1, negativity2,negativity3, Kudler-Flam:2018qjo,Kusuki:2019zsp,Chaturvedi:2016rcn}, and the entanglement of purification (EoP) \cite{EoP, HEoP1, HEoP2}. From the dual gravity side, this correlation is supposed to be captured by a special geometric quantity, known as the entanglement wedge cross-section (EWCS). It is a natural candidate to measure the intrinsic correlation between the two subsystems in a mixed state with a gravity dual. Moreover, in order to understand the EWCS better, new information-theoretic quantities have been proposed by the high-energy physics community, such as the reflected entropy \cite{Dutta:2019gen}, the ``odd entropy'' \cite{Tamaoka:2018ned}, the ``differential purification'' \cite{Espindola:2018ozt}, entanglement distillation \cite{Agon:2018lwq, Levin:2019krg} and the balanced partial entanglement (BPE) \cite{Wen:2018whg,Wen:2019iyq, Wen:2021qgx}. See \cite{Bhattacharyya:2018sbw, Hirai:2018jwy, Harper:2019lff, Bao:2019wcf, BabaeiVelni:2019pkw, Lin:2020yzf, Kusuki:2019rbk, Akers:2019gcv, Kusuki:2019evw, Umemoto:2019jlz, Bhattacharyya:2019tsi, Camargo:2020yfv, Ghodrati:2020vzm,Khoeini-Moghaddam:2020ymm,Sahraei:2021wqn} for some recent explorations on the study of the purification and the EWCS.
In Ref.\cite{Wen:2021qgx}, based on the concept of the partial entanglement entropy (PEE) and its holographic picture, it was observed that the PEE satisfying certain balance conditions corresponds to the area of the EWCS in the dual gravity picture. This specific PEE is called the \textit{balanced partial entanglement entropy} (BPE). In principle, the definition of the BPE is more general, and it can be defined for any purification, including non-holographic ones. For the specific case of the canonical purification, the BPE reduces to half of the reflected entropy, suggesting that the BPE is a more generic measure than the reflected entropy. It has also been observed that the BPE obeys the entropy relations satisfied by the EoP and the EWCS.
In this paper, we take a step towards understanding why the BPE is a proper measure of the total intrinsic correlation between the subsystems $A$ and $B$ in a mixed state $\rho_{AB}$. For such a mixed state, it is useful to consider an auxiliary system $A'B'$ which, together with $AB$, forms a pure state. This pure state is called a purification of the mixed state and is highly non-unique. A meaningful way to explore the correlation structure in a mixed state is to study the entanglement entropy $S_{AA'}$ in the purifications. It is important to note that some of the correlations, for example the correlation inside $A'B'$, contribute to $S_{AA'}$ but not to the intrinsic correlation inside $AB$. We need to either minimize those correlations or exclude them from $S_{AA'}$ in a proper way. This is how the EoP and the reflected entropy are defined. In the following, we briefly introduce the EWCS, the EoP, and the reflected entropy.
\textit{Entanglement wedge cross section (EWCS)}:
Let us consider a region in the boundary field theory with a partition $AB\equiv A\cup B$, described by a reduced density matrix $\rho_{AB}$. Its holographic dual is a bulk region known as the entanglement wedge \cite{Czech:2012bh, Wall:2012uf, Headrick:2014cta}. For a static time slice, the entanglement wedge is the region enclosed by the boundary subregion and the corresponding minimal surface. This allows us to define the EWCS $\Sigma_{AB}$ as the minimal area cross-section separating the regions $A$ and $B$.
\textit{Entanglement of purification (EoP)}: Let $\ket{\Psi}\in \mathcal{H}_{AA'}\otimes \mathcal{H}_{BB'}$ be any purification of $\rho_{AB}$. One defines the EoP $E_{p}(A:B)$ \cite{EoP} as
\begin{align}
E_{p}(A:B)=\mathop{\text{min}}\limits_{\ket{\Psi},A'}S_{AA'}\,,
\end{align}
where the minimization is taken over all purifications of $AB$ and over all possible partitions of $A'B'$. The EoP is then given by the minimal value of $S_{AA'}$. The minimization procedure in some sense excludes from $S_{AA'}$ the contribution of the intrinsic correlation between $A'$ and $B'$; hence the EoP could be a valid measure of the correlation between $A$ and $B$.
\textit{Reflected entropy}:
Consider a bipartite system $AB$ associated with the Hilbert space $\mathcal{H}_{AB}$ and an orthonormal basis $\{\ket{\Psi_{i}}\}$ of $\mathcal{H}_{AB}$. In general, any mixed state can be written as
\begin{align}
\rho_{AB}=\sum\limits_{i}p_i \ket{\Psi_{i}}\bra{\Psi_{i}}\,.
\end{align}
To define the reflected entropy, we introduce an auxiliary system $A' B'$, whose Hilbert space is an identical copy of the original one. One then defines the canonical purification as
\begin{align}
\ket{\sqrt{\rho_{AB}}}=\sum\limits_{i}\sqrt{p_i} \ket{\Psi_{i}}_{AB}\ket{\Psi_{i}}^*_{A'B'}\,.
\end{align}
Here $\{\ket{\Psi_{i}}^*\}$ is an orthonormal basis of the Hilbert space $\mathcal{H}_{A'B'}=\mathcal{H}_{AB}$. The reflected entropy is then defined as the entanglement entropy of $AA'$,
\begin{align}
S_{R}(A:B)=S_{AA'}=-\text{Tr}\rho_{AA'}\log \rho_{AA'}\,,
\end{align}
where the mixed state $\rho_{AA'}=\text{Tr}_{BB'}\ket{\sqrt{\rho_{AB}}}\bra{\sqrt{\rho_{AB}}}$ is obtained by tracing out the degrees of freedom in $BB'$. Since the complement $A'B'$ is just a copy of $AB$, the entanglement between $AA'$ and $BB'$ can be viewed as double the total intrinsic correlation in $AB$. Thus the total intrinsic correlation between $A$ and $B$ is captured by half of the reflected entropy.
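For a finite-dimensional $\rho_{AB}$ the construction above is straightforward to carry out numerically: viewed as an operator, the canonical purification is just $\sqrt{\rho_{AB}}$, with its row index interpreted as $AB$ and its column index as $A'B'$. A minimal sketch (our conventions; natural logarithms):

```python
import numpy as np

def von_neumann_entropy(rho):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]           # drop numerically zero eigenvalues
    return float(-np.sum(vals * np.log(vals)))

def reflected_entropy(rho_AB, dA, dB):
    """S_R(A:B) for a density matrix on H_A x H_B of dimensions dA, dB."""
    # Matrix square root of rho_AB via its eigendecomposition.
    vals, vecs = np.linalg.eigh(rho_AB)
    sqrt_rho = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T
    # |sqrt(rho)> as a tensor with indices (a, b, a', b').
    psi = sqrt_rho.reshape(dA, dB, dA, dB)
    # rho_AA' = Tr_{BB'} |sqrt(rho)><sqrt(rho)|.
    rho_AAp = np.einsum('abpq,cbrq->apcr', psi, psi.conj()).reshape(dA * dA, dA * dA)
    return von_neumann_entropy(rho_AAp)
```

For a pure $\rho_{AB}$ this reproduces $S_R=2S_A$, while for a product state it vanishes, consistent with the statements above.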
Both the EoP and the reflected entropy are lower bounded by half of the mutual information $I(A:B)/2$, which we consider as a direct correlation between $A$ and $B$. They also get contributions from the ``crossing correlations", including the correlations between $A$ and $B'$ and between $B$ and $A'$, which have not been explicitly studied before. We will study all these correlations in the context of the PEE; hence the crossing correlations are also called the crossing PEEs, which we argue to be a generalized version of the Markov gap. We show that the crossing PEEs are minimized under certain balance conditions and are independent of the purification. More interestingly, in $(1+1)$-dimensional CFTs, the crossing PEE is universal. The way the BPE captures the total intrinsic correlation is similar to the reflected entropy; in fact, the partition of $A'B'$ in the canonical purification automatically satisfies the balance conditions, as we will see in later sections.
The definition of the BPE applies to generic theories, and its correspondence to the EWCS should also go beyond AdS/CFT. The Markov gap, or the crossing PEE, computed in AdS gravity is non-vanishing, and a non-vanishing Markov gap demands the existence of an approximate Markov recovery process. To understand the BPE and the Markov gap in general, we consider the duality between the BPE and the EWCS in flat holography. We explicitly compute the crossing PEEs for the field theory invariant under the BMS$_3$ symmetries (BMSFT), which is dual to $3$-dimensional flat space. The crossing PEE in the field theory dual to Einstein gravity identically vanishes; it thus reflects the existence of a perfect Markov recovery process in the BMSFT.
The structure of the paper is organized as follows. In section \ref{secpeeandbpe}, we will briefly review the concept of PEE and the balanced PEE. Then in section \ref{secbpepurifications} we construct a class of purifications for $\rho_{AB}$ from path-integral optimization and calculate the BPE. This calculation shows that the BPE is independent of this class of purifications. In section \ref{secbpeminimized}, we focus on the details of the balance conditions. We show that when the balance conditions are satisfied, the crossing PEEs reach their minimal value; hence, the BPE reasonably measures the total correlation in the mixed state. We also show that the crossing PEEs can be considered as a generalized version of the so-called Markov gap. In section \ref{bpemarkov}, we discuss the details of the Markov recovery process and calculate the BPE for BMSFT. We show that the BPE coincides with the EWCS, and as a result, the crossing PEE vanishes for the BMSFT dual to the Einstein gravity. We summarize our results and give possible future directions in section \ref{secdiscussion}.
\section{The balanced partial entanglement}\label{secpeeandbpe}
\subsection{The partial entanglement entropy}
We first introduce the concept of the \textit{entanglement contour} \cite{Vidal}. It is a local measure of entanglement capturing the contribution coming from each degree of freedom inside a region $A$ to the entanglement entropy $S_{A}$. It is a function denoted by $s_A(x)$, where $x \in A$. Note that the function is non-local since it depends on the region $A$. This paper focuses on two-dimensional systems; hence, $x$ characterizes the spatial direction. We recover the entanglement entropy by collecting the contributions from all the sites inside $A$, hence $S_{ A}=\int_{ A}s_A(x)\,\mathrm{d}x$.
It is sometimes more useful to study a quasi-local measure of entanglement, the so-called \textit{partial entanglement entropy} (PEE) $s_{A}( A_i)$. Instead of capturing the contribution from each degree of freedom, one considers the contribution from a subregion $A_i \subset A$. One defines
\begin{align}\label{sA2}
s_{ A}( A_i)=\int_{A_i}s_A(x)\,\mathrm{d} x\,.
\end{align}
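As a simple consistency check of this definition, consider the frequently used candidate contour for a single interval $(a,b)$ in the vacuum of a two-dimensional CFT, $s_A(x)=\frac{c}{6}\big(\frac{1}{x-a}+\frac{1}{b-x}\big)$ (our normalization); integrating it over the cutoff interval reproduces $S_A=\frac{c}{3}\log(\ell/\epsilon)$ up to cutoff-suppressed terms:

```python
import math

def contour(x, a, b, c=1.0):
    """Candidate entanglement contour of the interval (a, b) in the
    vacuum of a 2d CFT with central charge c (our normalization)."""
    return (c / 6.0) * (1.0 / (x - a) + 1.0 / (b - x))

def midpoint_integral(f, lo, hi, n=100000):
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

eps, a, b, c = 1e-3, 0.0, 1.0, 1.0
S_from_contour = midpoint_integral(lambda x: contour(x, a, b, c), a + eps, b - eps)
S_A = (c / 3.0) * math.log((b - a) / eps)   # single-interval entropy
```

The two values agree up to terms of order $\epsilon/\ell$, illustrating the normalization requirement listed below.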
The PEE $s_{A}(A_i)$ captures certain type of the correlation between the subregion $A_i$ and the system $\bar{A}$ that purifies $A$. Since the correlation is mutual, similar to the mutual information (MI), one can also denote the PEE as
\begin{align}
s_{A}(A_i)\equiv\mathcal{I}(A_i,\bar{A})\equiv\mathcal{I}_{A_i \bar{A}}\,.
\end{align}
One should be careful not to confuse the PEE $\mathcal{I}$ with the MI, $ I(A:B) = S_A + S_B - S_{AB}$.
We interchangeably use the notation between $s_{A}(A_i)$ and $\mathcal{I}(A_i,\bar{A})$.
The PEE should respect certain physical requirements \cite{Vidal, Wen:2019iyq}. For completeness, we list them in the following:
\begin{enumerate}
\item \textit{Additivity}: For any two spacelike-separated regions $B$ and $C$ such that
$B\cap C=\emptyset$, the additivity says $\mathcal{I}(A,B\cup C)=\mathcal{I}(A,B)+\mathcal{I}(A,C)$.
\item \textit{Unitary invariance}: $\mathcal{I}(A,B)$ should be invariant under any local unitary transformation inside the regions $A$ and $B$.
\item \textit{Symmetry transformation}: For any symmetry transformation $\mathcal{T}$ such that $\mathcal{T}A=A'$ and $\mathcal{T} B= B'$, the PEEs should remain invariant i.e., $\mathcal{I}(A,B)=\mathcal{I}(A',B')$.
\item \textit{Normalization}: $ \mathcal{I}(A,B)|_{B\to \bar{A}}\to S_{A}\,.$
\item \textit{Positivity}: $\mathcal{I}(A,B)\ge 0$.
\item \textit{Upper bound}: $
\mathcal{I}(A,B) \leq \text{min}\{S_{A},S_{B}\} \,.
$
\item \textit{The permutation symmetry between $A$ and $B$: $\mathcal{I}(A,B)=\mathcal{I}(B,A)$.}
\end{enumerate}
So far, there are several proposals\footnote{The entanglement contour has been largely explored in free theories \cite{Botero, Vidal, PhysRevB.92.115129, Coser:2017dtb, Tonni:2017jom, Alba:2018ime, DiGiulio:2019lpb, Kudler-Flam:2019nhr}. In the purview of holography, one can consider the finer description offered by the Ryu-Takayanagi prescription to relate points on the minimal surface to the corresponding boundary points \cite{Wen:2018whg,Wen:2018mev}. Explicit constructions of the entanglement contour and the PEE in terms of bit threads \cite{Freedman:2016zud} are provided in \cite{Lin:2021hqs, Kudler-Flam:2019oru, Agon:2021tia, Rolph:2021hgz} (also see \cite{Kudler-Flam:2019nhr, Wen:2018whg, Wen:2018mev, Wen:2019iyq}).} for the PEE that satisfy the above 7 requirements. In this paper, we will mainly use the \emph{additive linear combination} (ALC) proposal \cite{Wen:2018whg,Wen:2020ech} in two-dimensional field theories. It was shown in \cite{Wen:2018whg,Kudler-Flam:2019oru,Wen:2020ech,Wen:2019iyq} that the ALC proposal satisfies all the above-mentioned properties.
\begin{itemize}
\item \textit{The ALC proposal}:
Consider a boundary region $A$. Suppose that it can be partitioned into three non-overlapping subregions $A=\alpha_L\cup\alpha\cup\alpha_R$, where $\alpha$ is some subregion inside $A$ and $\alpha_{L}$ ($\alpha_{R}$) denotes the regions left (right) to it. On this configuration, the claim of the \textit{ALC proposal} is the following:
\begin{align}\label{ECproposal}
s_{A}(\alpha)=\mathcal{I}(\alpha,\bar{A})=\frac{1}{2}\left(S_{ \alpha_L\cup\alpha}+S_{\alpha\cup \alpha_R}-S_{ \alpha_L}-S_{\alpha_R}\right)\,.
\end{align}
\end{itemize}
The \textit{ALC proposal} is expected to be general and applicable to any theory.
However, as stated before, it requires a specific configuration and ordering of the subsets inside $A$. This ordering is essential for the PEE to satisfy additivity and is naturally possessed by one-dimensional (spatial) systems. The calculation of the PEE using the \textit{ALC proposal} in higher dimensions is ambiguous, because there is no natural ordering of the subsets inside a given region. In those cases, the \textit{ALC proposal} applies only to configurations with enough symmetry that the contour function depends on a single coordinate, which can then be used to define a natural ordering. See \cite{Han:2019scu} for more discussion in higher dimensions.
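In two dimensions the proposal is easy to evaluate explicitly. For the vacuum of a CFT on an infinite line, with $S(\ell)=\frac{c}{3}\log(\ell/\epsilon)$, the cutoff drops out of the linear combination \eqref{ECproposal}. A minimal sketch with illustrative interval lengths:

```python
import math

def S_vac(length, c=1.0, eps=1e-6):
    """Single-interval entanglement entropy in the 2d CFT vacuum."""
    return (c / 3.0) * math.log(length / eps)

def pee_alc(aL, alpha, aR, c=1.0, eps=1e-6):
    """ALC proposal for s_A(alpha), with A = alpha_L u alpha u alpha_R
    specified by the three sub-interval lengths."""
    return 0.5 * (S_vac(aL + alpha, c, eps) + S_vac(alpha + aR, c, eps)
                  - S_vac(aL, c, eps) - S_vac(aR, c, eps))
```

For $A=[0,3]$ with unit-length cells, $s_A([1,2])=\frac{c}{3}\log 2$, independent of the cutoff, in agreement with integrating the vacuum contour $\frac{c}{6}\big(\frac{1}{x}+\frac{1}{3-x}\big)$ over $[1,2]$.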
The \textit{entanglement contour} has been studied extensively in the context of the evolution of entanglement \cite{Vidal,Kudler-Flam:2019oru,DiGiulio:2019lpb,Rolph:2021nan, MacCormack:2020auw}. The PEE is a quasi-local measure of entanglement that has gained significant importance in the context of holography \cite{Wen:2018whg, Wen:2018mev}, as it provides a finer description of the relation between the quantum entanglement of the boundary theory and the geometry of the bulk \cite{Wen:2018mev}. The PEE can also be used to define a dual of the entanglement wedge cross-section. This is achieved by imposing certain balance conditions \cite{Wen:2021qgx}, which are the primary focus of this paper. The balanced PEE applies to generic purifications and naturally incorporates the reflected entropy \cite{Dutta:2019gen} as a specific case. More details on the PEE, especially a first-law-like version of the \textit{entanglement contour} and its role in the recently proposed island proposal, can be found in \cite{Han:2021ycp, Ageev:2021ipd, Rolph:2021nan}.
\subsection{The balanced partial entanglement entropy}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{BP1.PNG} \quad
\includegraphics[width=0.4\textwidth]{BP2.PNG}
\caption{The purification of a region $AB$ is given by $ABB'A'$ for adjacent (left) and non-adjacent (right) intervals. When the intervals are adjacent, the auxiliary system consists of $A'B' = A' \cup B'$. On the other hand, for the non-adjacent case, the auxiliary system consists of $A'B' = A_1' \cup B_1' \cup A_2' \cup B_2'$. }
\label{bpe1}
\end{figure}
Based on previous discussions, we introduce the so-called balanced partial entanglement entropy (BPE) \cite{Wen:2021qgx}.
\begin{itemize}
\item Consider a bipartite system $\mathcal{H}_{A}\otimes \mathcal{H}_{B}$, equipped with the density matrix $\rho_{AB}$. Let us further consider a purified system $ABA'B'$ with a pure state $\ket{\psi}$ which is purified by an auxiliary system $A'B'$, so that $\text{Tr}_{A'B'}|{\psi}\rangle \langle {\psi}|=\rho_{AB}$. Then we consider the special partition of the auxiliary system $A'B'=A'\cup B'$ that satisfies the following \textit{balance requirements}
\begin{align}\label{requirement1}
\mathrm{balance~ requirements}:\qquad s_{AA'}(A)=s_{BB'}(B)\,,\qquad s_{AA'}(A')=s_{BB'}(B')\,.
\end{align}
With the balance requirements satisfied, the $\text{BPE}\,(A:B)$ is just given by the PEE $s_{AA'}(A)$, i.e.,
\begin{align}
\text{BPE}\,(A:B)=s_{AA'}(A)\vert_{\mathrm{balance}}\,.
\end{align}
\end{itemize}
However, in general, the partition that satisfies the balance requirements is not unique. To emphasize this point and to remove the ambiguity, we further impose the condition of \textit{minimality}: among all possible partitions, we pick the one that minimizes $s_{AA'}(A)$. The partition is chosen such that $A'$ is settled as far from $B$ as possible, while $B'$ is settled as far from $A$ as possible; also, there should be no embedding between $B'$ and $A'$. See Fig.\ref{bpe1} for an illustration and refer to \cite{Wen:2021qgx} for more details.
Since $S_{AA'}=S_{BB'}$, only one of the conditions in \eqref{requirement1} is independent. Furthermore, we can write the entanglement entropies as\footnote{From now on we use the shorthand notation $\mathcal{I}_{AB}$ for $\mathcal{I}(A,B)$.}
\begin{align}
s_{AA'}(A)=\mathcal{I}_{AB}+\mathcal{I}_{AB'}\,,\qquad s_{BB'}(B)=\mathcal{I}_{BA}+\mathcal{I}_{BA'}\,.
\end{align}
Since $\mathcal{I}_{AB}$ is independent of the purification, and $\mathcal{I}_{AB}=\mathcal{I}_{BA}$, the balance conditions \eqref{requirement1} can also be rephrased as
\begin{align}\label{requirement1a}
\mathrm{balance~requirement}:\qquad\mathcal{I}_{AB'}=\mathcal{I}_{BA'}\,.
\end{align}
Later we will also refer to the PEE $\mathcal{I}_{AB'}$ or $\mathcal{I}_{BA'}$ as the \textit{crossing PEE} of $|{\psi}\rangle$.
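To make the balance requirement concrete, consider adjacent intervals in the vacuum of a two-dimensional CFT on a circle of circumference $L$, with $S(\ell)=\frac{c}{3}\log\big(\frac{L}{\pi\epsilon}\sin\frac{\pi \ell}{L}\big)$, purified by the rest of the circle. The sketch below (our setup, using a simple bisection rather than anything from Ref.\cite{Wen:2021qgx}) locates the partition point of the complementary arc at which the balance condition holds and returns the resulting BPE:

```python
import math

def S(l, L=1.0, c=1.0, eps=1e-4):
    """Vacuum entanglement entropy of an interval of length l on a circle."""
    return (c / 3.0) * math.log(L / (math.pi * eps) * math.sin(math.pi * l / L))

def bpe_adjacent(lA, lB, L=1.0):
    """BPE(A:B) for adjacent intervals: split the complementary arc into
    A' (next to A) and B' (next to B) so that s_AA'(A) = s_BB'(B)."""
    rest = L - lA - lB

    def residual(x):            # x = length of B'
        aP = rest - x           # length of A'
        s_A = 0.5 * (S(lA + aP, L) + S(lA, L) - S(aP, L))   # ALC: s_{AA'}(A)
        s_B = 0.5 * (S(lB + x,  L) + S(lB, L) - S(x,  L))   # ALC: s_{BB'}(B)
        return s_A - s_B

    lo, hi = 1e-9 * L, rest - 1e-9 * L
    for _ in range(200):        # bisection on the balance condition
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    aP = rest - 0.5 * (lo + hi)
    return 0.5 * (S(lA + aP, L) + S(lA, L) - S(aP, L))
```

For equal intervals the balanced partition point sits, by symmetry, at the midpoint of the complementary arc, and one can check that the resulting BPE lies between $I(A:B)/2$ and $\min(S_A,S_B)$, as required by the inequalities below.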
In the case where $A$ and $B$ are not adjacent, the complement $A'B'$ becomes disconnected. In such a case, one can separate out the disconnected regions into further subregions as $A'=A'_1\cup A'_2$ and $B'=B'_1\cup B'_2$. As a result, they can be considered in pairs (see Fig.\ref{bpe1} (right)), and the balance conditions should be imposed on both of the pairs, leading to the generalized balance conditions for the disconnected regions as \cite{Wen:2021qgx}
\begin{align}\label{BR2}
s_{AA'}(A)=s_{BB'}(B),~~~~~~s_{AA'}(A_1')=s_{BB'}(B_1'), ~~~~~~ s_{AA'}(A_2')=s_{BB'}(B_2')\,.
\end{align}
As in the adjacent case, only two of these conditions are independent, which is enough to determine the exact partition.
We conclude this section by stating some inequalities satisfied by the BPE \cite{Wen:2021qgx}
\begin{align}
&1.~~ \mathrm{BPE}\,(A:B) \leq \mathrm{min}\,(S_A, S_B)\,. \\
&2.~~ \mathrm{BPE}\,(A:B) \geq \frac{1}{2} I(A:B)\,. \label{minf}\\
&3.~~ \mathrm{BPE}\,(A:B) + \mathrm{BPE}\,(A:C) \geq \mathrm{BPE}\,(A:BC)\,. \\
&4.~~ \mathrm{BPE}\,(A:BC) \geq \mathrm{BPE}\,(A:B)\,.
\end{align}
Inequality $1$ holds in both holographic and non-holographic cases. Property $2$ holds in holographic theories \cite{Wen:2021qgx}, for both adjacent and non-adjacent cases; the proof relies on the monogamy of the mutual information \cite{Hayden:2011ag}. For non-holographic theories, the validity of property $2$ is not clear in the case of non-adjacent intervals. Inequality $3$ follows from inequality $2$. Finally, inequality $4$ holds for generic theories.
\section{The BPE for purifications from Euclidean path-integral}\label{secbpepurifications}
In this section, we construct a class of purifications for the mixed state $\rho_{AB}$ using the Euclidean path-integral \cite{Caputa:2017urj, Caputa:2017yrh}. Then we study the BPE for these purifications and find that the BPE is independent of this class of purifications.
\subsection{The Euclidean path-integral and its optimization}
Here we follow the steps in \cite{Caputa:2017urj, Caputa:2017yrh} to prepare pure states from the optimization of Euclidean path-integrals. As shown in Fig.\,\ref{fig:bn}, we first consider the adjacent-interval case where $AB$ is connected and embedded in a pure state $|\Psi\rangle$ of a CFT. The mixed state we consider is just the reduced density matrix $\rho_{AB}$ of the region $AB$
\begin{align}
\rho_{AB}=\text{Tr}_{A'B'}|\Psi\rangle \langle\Psi|.
\end{align}
Consider the CFT on a Euclidean flat space $\mathbb{R}^2$
\begin{align}\label{flatm}
\mathrm{d}s^2=\mathrm{d} z^2+\mathrm{d} x^2,
\end{align}
where $-z \equiv \tau$ is the Euclidean time.
One prepares the ground-state wave functional for the CFT on this metric, i.e., on two-dimensional flat space, by computing the Euclidean path-integral
\begin{align}
\Psi_{\text{CFT}}\left(\tilde{\varphi}(x)\right)=\int \left(\prod_{x}\prod_{\epsilon<z<\infty}\mathrm{D}\varphi(z,x)\right)e^{-S_{\text{CFT}}(\varphi)}\prod_{x}\delta\left(\varphi(\epsilon,x)-\tilde{\varphi}(x)\right).
\end{align}
The path-integral optimization aims to prepare the ground-state wavefunctional by integrating over a different geometry, such that the state prepared on the new geometry is proportional to the one prepared on the flat geometry. The procedure works in general dimensions. In two dimensions, however, it simplifies significantly, since any metric can be brought to a conformally flat form via a coordinate transformation
\begin{align}\label{weylmetric}
\mathrm{d}s^{2}=e^{2 \phi(z,x)}\left(\mathrm{d}z^2+\mathrm{d}x^2\right)\,,\qquad e^{2\phi(z=\epsilon,x)}=1\,.
\end{align}
The second condition is the boundary condition imposed at $z=\epsilon$, where the state is prepared and where the metrics of the two geometries coincide.
After the Weyl transformation, the metric is now described by the scalar field $\phi(z,x)$. With the universal ultraviolet (UV) cutoff $\epsilon$, the measure of quantum fields $\varphi$ in the CFT changes under the Weyl transformation \cite{Caputa:2017urj}
\begin{align}
[\mathrm{D}\varphi]_{g_{ab}=e^{2\phi}\delta_{ab}}=e^{S_{L}(\phi)-S_{L}(0)}[\mathrm{D}\varphi]_{g_{ab}=\delta_{ab}}\,,
\end{align}
where $S_{L}(\phi)$ is the Liouville action \cite{Caputa:2017yrh}
\begin{align}\label{Liouville}
S_{L}[\phi]=\frac{c}{4\pi}\int^{\infty}_{-\infty}\mathrm{d}x\int^{\infty}_{\epsilon}\mathrm{d}z\left((\partial_{x}\phi)^2+(\partial_{z}\phi)^2+\mu e^{2\phi}\right).
\end{align}
Here $\mu$ is the potential and $c$ is the central charge. Furthermore, since the CFT action $S_{\text{CFT}}(\varphi)$ is invariant under the Weyl transformation and the boundary condition for the scalar field at $z=\epsilon$ agrees with the original one, the ground-state wave functional $\Psi_{g_{ab}=e^{2\phi}\delta_{ab}}$ computed from the path-integral with the Weyl-transformed metric \eqref{weylmetric} is proportional to the original one with the flat metric. In other words,
\begin{align}\label{Psieq}
\Psi_{g_{ab}} \left(\tilde{\varphi}(x)\right)=e^{S_{L}(\phi)-S_{L}(0)}\Psi_{\delta_{ab}}\left(\tilde{\varphi}(x)\right)\,,
\end{align}
which implies that the state $\Psi_{g_{ab}}$ is still the same CFT vacuum state (up to the proportionality constant).
Thus, our task is to find the special configuration of $\phi(z,x)$ that minimizes the Liouville action $S_{L}(\phi)$; this minimization is the path-integral optimization, and is equivalent to minimizing the normalization $e^{S_{L}(\phi)}$ of the wavefunctional. It can be achieved by solving the equations of motion of the Liouville action \eqref{Liouville}, whose general solution is given by
\begin{align}
e^{2\phi}=\frac{4 A'(w)B'(\bar{w})}{\mu\left(1-A(w)B(\bar{w})\right)^2}\,,
\end{align}
where $w=z+i x$ and $\bar{w}=z-ix$. For the boundary condition in \eqref{weylmetric}, which fixes $\mu=1/\epsilon^2$, the explicit solution is given by $A(w)=w,\ B(\bar{w})=-1/\bar{w}$, hence
\begin{align}
e^{2\phi}=\frac{\epsilon^2}{z^2}\,,
\end{align}
which exactly gives the metric of the Poincar\'e patch of AdS$_{3}$. It is worth mentioning that the authors of \cite{Caputa:2017urj} interpret this as a continuum limit of the conjectured relation between tensor networks and the AdS/CFT correspondence. They also suggest that the optimization is analogous to estimating the computational complexity (see also \cite{Caputa:2017yrh}). The reader can refer to \cite{Caputa:2017urj, Caputa:2017yrh} for further details about path-integral optimization.
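As a quick consistency check, one can verify symbolically that the optimized Weyl factor solves the Liouville equation of motion $(\partial_x^2+\partial_z^2)\phi=\mu\, e^{2\phi}$ that follows from \eqref{Liouville}; here $\mu=1/\epsilon^2$ is our reading of the normalization fixed by the boundary condition in \eqref{weylmetric}. A minimal sketch using sympy:

```python
import sympy as sp

z, x, eps = sp.symbols('z x epsilon', positive=True)
mu = 1/eps**2                # normalization fixed by e^{2 phi(eps, x)} = 1

# Weyl factor of the optimized metric: e^{2 phi} = eps^2 / z^2
phi = sp.log(eps/z)

# Liouville equation of motion: (d_x^2 + d_z^2) phi = mu * e^{2 phi}
eom = sp.diff(phi, x, 2) + sp.diff(phi, z, 2) - mu*sp.exp(2*phi)
print(sp.simplify(eom))      # -> 0
```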
When we set boundary conditions for $\phi$ on the whole time slice, the optimization does not change the state we compute. However, when we consider the reduced density matrix of a subregion, with the whole time slice serving as a purification of this region, we only set boundary conditions on that region. In this case, performing the path-integral optimization gives a specific configuration of the scalar field on the complement at $z=\epsilon$, which corresponds to a special purification of the region \cite{Caputa:2018xuf}.
\subsection{Purifications for adjacent intervals}
\subsubsection{Purifications from path-integrals without optimization}
Here we follow the method outlined in \cite{Caputa:2018xuf}. We consider an interval $[a,b]$ on an infinitely long line. The interval is decomposed into two sub-intervals
\begin{align}\label{ABadj}
A=[a,p], ~~~~~ B=[p,b], ~~~~~~~ \mathrm{where}~~ - \infty < a < p < b < \infty\,.
\end{align}
Here $A$ and $B$ are adjacent. As before, we introduce a uniform cutoff $\epsilon$ on the line and consider a discretization of the Euclidean path-integral preparing the vacuum state of the system. The mixed state $\rho_{AB}$ is obtained from this CFT vacuum by tracing out the complement of $AB$. The point $Q$ with coordinate $q$ divides the complement into two subsystems $A'$ and $B'$.
Cutting the interval $AB$ open, the path-integral representation of the density matrix $\rho_{AB}$ is given by the path-integral on this cut manifold, with the boundary condition
\begin{align}\label{bcrhoab}
e^{2\phi(z=\epsilon,x)}=1,\qquad a\leq x \leq b
\end{align}
on the upper and lower edges of the slit $AB$. Here we want to generate pure states from the path-integral as purifications of $\rho_{AB}$. A class of purifications can be obtained by setting different boundary conditions for $\phi(z,x)$ at $z=\epsilon$ on $A'B'$. Each such purification is still the vacuum state of the CFT, but defined on a different manifold: the lattice cutoff on $A'B'$ is no longer the uniform constant $\epsilon$, and instead acquires a spatial dependence controlled by the boundary condition for $\phi(z,x)$ on $A'B'$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{bpe1.png}
\caption{Single interval optimization. (a) The full interval $AB = [a,b]$ is denoted by the red line. The coordinate for the partition point of $AB$ is $p$ and the one for $A'B'$ is $q$. }
\label{fig:bn}
\end{figure}
It was shown in \cite{Caputa:2018xuf} that the entanglement entropy for an arbitrary interval covering the region $(x_1,x_2)$ in the purifications $\Psi_{\text{CFT}}^{\phi}$ is given by performing a scale transformation of the standard formula for entanglement entropy
\begin{align}\label{ee}
S_{EE}=\frac{c}{3}\log\left(\frac{x_2-x_1}{\epsilon}\right)+\frac{c}{6}\phi(x_2)+\frac{c}{6}\phi(x_1)\,.
\end{align}
Then the entanglement entropies for the intervals in Fig.\,\ref{fig:bn} are, for example, given by
\begin{align}\label{SABApBp}
S_A &= \frac{c}{3} \log \left( \frac{p-a}{\epsilon}\right) + \frac{c}{6}\phi(a) + \frac{c}{6} \phi(p),~~~~ S_{A'} = \frac{c}{3} \log \left( \frac{q-a}{\epsilon}\right) + \frac{c}{6} \phi(a) + \frac{c}{6} \phi(q),\cr
S_B &= \frac{c}{3} \log \left( \frac{b-p}{\epsilon}\right) + \frac{c}{6} \phi(p) + \frac{c}{6} \phi(b),~~~~ S_{B'} = \frac{c}{3} \log \left( \frac{q-b}{\epsilon}\right) + \frac{c}{6} \phi(b) + \frac{c}{6}\phi(q).
\end{align}
For the mixed state $\rho_{AB}$ under consideration, the scalar field vanishes inside $AB$ due to the boundary condition \eqref{bcrhoab}, hence
\begin{align}
\phi(a)=\phi(b)=\phi(p)=0\,.
\end{align}
Now we consider an arbitrary purification from the path-integral, which means we will not specify the boundary condition for $\phi$ on $A'B'$ at the beginning. Then we impose the balance condition \cite{Wen:2021qgx}
\begin{align}
S_A - S_B = S_{A'} - S_{B'}. \label{bal1}
\end{align}
Substituting the entanglement entropies \eqref{SABApBp} into the above balance condition, we get the partition for $A'B'$ with the position of $Q$ settled at
\begin{align}\label{solq}
q = \frac{2ab - (a+b)p}{a + b - 2p} .
\end{align}
When the balance condition is satisfied, we substitute the above $q$ into the PEE $s_{AA'}(A)$ to find the $\mathrm{BPE}\,(A:B)$, which is given by
\begin{align}\label{BPE}
\text{BPE}\,(A:B)=s_{AA'}(A)|_{\mathrm{balance}}=&\frac{1}{2}\left(S_{AA'}+S_{A}-S_{A'}\right)|_{\mathrm{balance}}\,,
\cr
=&\frac{c}{6}\log\left(\frac{2(p-a)(b-p)}{\epsilon(b-a)}\right).
\end{align}
Note that the above $\mathrm{BPE}\,(A:B)$ is independent of the scalar field on $A'B'$, which implies that the BPE is independent of the purification within the class determined by the boundary conditions of $\phi(z,x)$ on $A'B'$. For the case where the purification is just the vacuum state dual to global or Poincar\'e AdS$_3$ (\emph{i.e.,} $\phi(z=\epsilon,x)=0$ on the whole time slice), the above BPE is just the area of the EWCS (see Fig.\,\ref{Case1} and \cite{Caputa:2018xuf} for more details). In all these purifications the BPE captures exactly the same correlations between $A$ and $B$ as the EWCS.
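The cancellations behind \eqref{solq} and \eqref{BPE} can be spot-checked numerically. In the following Python sketch the endpoints are arbitrary sample values, and the boundary value $\phi_q$ of the scalar field at $Q$ is an arbitrary number, inserted to make the purification independence explicit:

```python
import math

c, eps = 1.0, 1e-3
a, b, p = 0.0, 2.0, 1.5
phi_q = 0.37                          # arbitrary boundary value of phi at Q

# partition point solving the balance condition, eq. (solq)
q = (2*a*b - (a + b)*p)/(a + b - 2*p)

def S(length, phi_sum=0.0):           # single-interval entropy, eq. (ee)
    return c/3*math.log(length/eps) + c/6*phi_sum

# balance condition S_A - S_B = S_{A'} - S_{B'}: the phi_q terms cancel
assert math.isclose(S(p - a) - S(b - p), S(q - a, phi_q) - S(q - b, phi_q))

# BPE(A:B) = (S_{AA'} + S_A - S_{A'})/2 reproduces the closed form (BPE)
bpe = (S(q - p, phi_q) + S(p - a) - S(q - a, phi_q))/2
assert math.isclose(bpe, c/6*math.log(2*(p - a)*(b - p)/(eps*(b - a))))
```

Here $S_{AA'}$ is computed from its complement $[p,q]$, whose endpoints carry $\phi(p)=0$ and $\phi(q)=\phi_q$; the dependence on $\phi_q$ drops out of both the balance condition and the BPE.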
\begin{figure}[t]
\centering
\includegraphics[width=0.62\textwidth]{ewcs.pdf}
\caption{The $x$ coordinates for points $K_1$, $P$, $K_2$ and $Q$ are $a,p,b,q$. The PEE is given by $s_{AA'}(A) = s_{BB'}(B)$ which is dual to the EWCS $\Sigma_{AB}$ (shown by the blue line).
\label{Case1} }
\end{figure}
Then we calculate the crossing PEE for this class of purifications with the balance condition satisfied. It can be easily computed as
\begin{align}\label{crossingpee1}
\mathcal{I}_{AB'}|_{\mathrm{balance}} = s_{AA'B}(A)|_{\mathrm{balance}} &= \frac{1}{2}\left(S_{AA'}+S_{AB}-S_{A'}-S_{B}\right)|_{\mathrm{balance}}\,,
\cr
&=\frac{c}{6} \log 2\,,
\end{align}
which is also independent of the scalar field. Furthermore, it is a constant independent of the partition of $AB$, as well as of the details of the CFT other than the central charge. This constant was previously found in \cite{Wen:2021qgx} for the vacuum state dual to pure AdS$_3$.
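The constancy of the balanced crossing PEE is easy to check numerically along the same lines (plain Python; the sample endpoints and the boundary value $\phi_q$ are arbitrary choices):

```python
import math

c, eps = 1.0, 1e-3
a, b, p = 0.0, 2.0, 1.5
phi_q = -0.8                           # arbitrary: the result must not depend on it
q = (2*a*b - (a + b)*p)/(a + b - 2*p)  # balance point, eq. (solq)

def S(length, phi_sum=0.0):            # single-interval entropy, eq. (ee)
    return c/3*math.log(length/eps) + c/6*phi_sum

# I_{AB'} = (S_{AA'} + S_{AB} - S_{A'} - S_B)/2 at the balance point
crossing = (S(q - p, phi_q) + S(b - a) - S(q - a, phi_q) - S(b - p))/2
assert math.isclose(crossing, c/6*math.log(2))
```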
We stress that in the previous discussion, the partition \eqref{solq} of $A'B'$ determined by the balance condition is exactly the partition that minimizes $S_{AA'}$ when the path-integral is optimized \cite{Caputa:2018xuf}. This implies that imposing the balance condition could also serve as a procedure to minimize certain kinds of correlations, as in the case of the purification from the optimized path-integral. We will confirm this expectation in the next section.
\subsubsection{Purification from the optimized path-integral}
We now consider the special purification determined by path-integral optimization. Following \cite{Caputa:2018xuf}, we first perform the following conformal transformation that maps the interval $[a,b]$ to an infinitely long line
\begin{align}
u = \sqrt{\frac{x-a}{b-x}}\,.
\end{align}
Then the boundary condition and the optimization in the $u$ space is exactly the same as the simple case for the vacuum state. The optimized metric is then written as \cite{Caputa:2018xuf}
\begin{align}
\mathrm{d}s^2 = \frac{\epsilon^2}{\tau^2} \,\mathrm{d} u \,\mathrm{d}\bar{u} = \frac{\epsilon^2}{\tau^2} \frac{(b-a)^2}{4 |b-y|^3 |y-a|}\, \mathrm{d} y \,\mathrm{d}\bar{y} = e^{2 \phi} \, \mathrm{d} y \,\mathrm{d} \bar{y}\,. \label{opm}
\end{align}
Here $\tau$ is the Euclidean time in the $u$-plane, regularized as $-\infty < \tau < - \epsilon$ with $\epsilon > 0$. The scalar field $\phi(z,x)$ on the time slice $\tau=-\epsilon$ after the optimization is then given by
\begin{align}\label{solphi}
\phi(x)=\left\{ \begin{array}{lcl}
0 & ~~~~\mbox{for} & a\leq x\leq b, \\
\log\left(\frac{\epsilon(b-a)}{2 (x-a)(x-b)}\right) & ~~~~\mbox{for} & x>b\text{ or }x<a.
\end{array}\right.
\end{align}
These are obtained from the optimized metric \eqref{opm}. Note that the points $P$ and $Q$ are mapped to the coordinates $u_P = -\sqrt{(p-a)/(b-p)}$ and $u_Q = i \sqrt{(q-a)/(q-b)}$ respectively \cite{Caputa:2018xuf}. We call the pure state with the boundary conditions \eqref{solphi} the optimized purification.
It is worth mentioning that path-integral optimization provides a useful tool to evaluate the EoP \cite{Caputa:2018xuf}. In the optimized purification, all the entanglement entropies can be calculated via \eqref{ee}. If we minimize $S_{AA'}$ by choosing a suitable $q$ that partitions $A'B'$, the minimized $S_{AA'}$ coincides with the EWCS of $\rho_{AB}$ \eqref{BPE}. If the correspondence between the EoP and EWCS is valid, then the optimized purification is exactly the one that minimizes $S_{AA'}$ under a suitable partition, and the $E_p(A,B)$ is just the minimized $S_{AA'}$.
Interestingly, the suitable partition that minimizes $S_{AA'}$ is given by \eqref{solq}, which is exactly the solution to the balance condition \eqref{bal1}. Furthermore, since the $\mathrm{BPE}\,(A:B)$ in \eqref{BPE} also yields the area of the EWCS and is independent of the boundary conditions for the scalar field, in the optimized purification with $q$ given by \eqref{solq} we have
\begin{align}
S_{AA'}=\text{BPE}\,(A:B)=s_{AA'}(A)=\frac{\mathrm{Area} \,(\Sigma_{AB})}{4G_N}\,.
\end{align}
This implies that the contribution to $S_{AA'}$ from $A'$ is zero, \emph{i.e.,}
\begin{align}
s_{AA'}(A')=\mathcal{I}_{A'B}+\mathcal{I}_{A'B'}=0\,.
\end{align}
Since the balance condition is satisfied, according to \eqref{crossingpee1} we have $\mathcal{I}_{A'B}=\frac{c}{6}\log 2$, which implies
\begin{align}
\mathcal{I}_{A'B'}=\frac{1}{2}I(A':B')=-\frac{c}{6}\log 2\,.
\end{align}
One can further check the above result with a direct calculation in the optimized purification
\begin{align}\label{negativepee}
\mathcal{I}_{A'B'}=s_{A'AB}(A') &= \frac{1}{2}\left(S_{A'AB}+S_{A'}-S_{AB}\right), \nonumber \\
&= \frac{ c}{6} \left(\log \left[\frac{(q-a)(q-b)}{(b-a) \epsilon }\right]+\phi(q)\right),
\nonumber \\
&= - \frac{c}{6}\log 2\,,
\end{align}
where in the last line we substituted the partition \eqref{solq} and the scalar field after optimization \eqref{solphi}. This is puzzling, because a negative PEE and mutual information in the optimized purification violate the subadditivity of entanglement entropy.
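The negative value in \eqref{negativepee} can be reproduced in a few lines of Python (the sample endpoints are arbitrary; the scalar field is the optimized one in \eqref{solphi}):

```python
import math

c, eps = 1.0, 1e-3
a, b, p = 0.0, 2.0, 1.5
q = (2*a*b - (a + b)*p)/(a + b - 2*p)        # balance point, eq. (solq)

# optimized scalar field at Q, eq. (solphi); note (q-a)(q-b) > 0 for q > b
phi_q = math.log(eps*(b - a)/(2*(q - a)*(q - b)))

# I_{A'B'} = c/6 * ( log[(q-a)(q-b)/((b-a) eps)] + phi(q) )
pee = c/6*(math.log((q - a)*(q - b)/((b - a)*eps)) + phi_q)
assert math.isclose(pee, -c/6*math.log(2))
```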
\subsection{Purifications for non-adjacent intervals}
\begin{figure}[t]
\centering
\includegraphics[scale=0.82]{double.png}
\caption{Double interval optimization.}
\label{fig:double}
\end{figure}
Next, we consider the case when $A$ and $B$ are not adjacent. More explicitly, we will consider the case shown in Fig.\,\ref{fig:double}, where
\begin{align}\label{ABnonadj}
A=[a,b], ~~~~~ B=[c,d], ~~~~~~~ \mathrm{where}~~ - \infty < a < b < c < d < \infty\,.
\end{align}
In this case the complement region $A'B'$ consists of two disconnected regions, which we partition into $A'_1\cup B'_1$ and $A'_2\cup B'_2$ using the points $q_1$ and $q_2$.
As in the adjacent case, the path-integral with different boundary conditions for $\phi(z=\epsilon,x)$ on $A'B'$ defines a class of purifications, which are the vacuum state of the same CFT but on different manifolds. For any configuration of $\phi$ on $A'B'$, we can calculate the entanglement entropies of intervals via \eqref{ee}. Using the ALC proposal to calculate the PEEs, the balance conditions \eqref{BR2} are equivalent to the following two equations \cite{Wen:2021qgx}
\begin{align}
S_{A'_1} - S_{B'_1} = S_{AA'_2} - S_{BB'_2}, ~~~~~~ S_{A'_2} - S_{B'_2} = S_{AA'_1} - S_{BB'_1}. \label{bal2}
\end{align}
Using \eqref{ee} to calculate the entanglement entropies, we find that all the scalar-field terms cancel, and the balance conditions are solved by
\begin{align}
q_1=& \frac{\sqrt{(a-b) (a-c) (b-d) (c-d)}+a d-b c}{a-b-c+d},\cr
q_2=& -\frac{\sqrt{(a-b) (a-c) (b-d) (c-d)}-a d+b c}{a-b-c+d}.
\end{align}
We then calculate the BPE under the above balance condition, which is again independent of the scalar fields and given by
\begin{align}\label{BPEnonadjacent}
\mathrm{BPE}\,(A:B) &= s_{AA'}(A)|_{\mathrm{balance}} \,, \nonumber \\
& = \frac{1}{2} (S_{AA'_1} + S_{AA'_2} - S_{A'_1} - S_{A'_2})|_{\mathrm{balance}} \,, \nonumber \\
&= \frac{c}{6} \log \left(\frac{a (b+c-2 d)+2 \sqrt{(a-b) (a-c) (d-b) (d-c)}+d (b+c)-2 b c}{(a-d) (b-c)}\right).
\end{align}
Though the above result looks a bit complicated, in the context of AdS/CFT, it gives the EWCS when the entanglement wedge of $AB$ is connected.
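Both the solution of the balance conditions and the closed form \eqref{BPEnonadjacent} can be spot-checked numerically. In this Python sketch the endpoints are arbitrary sample values; for these values the partition point $q_2$ lands to the left of $a$, so the interval lengths below are read off accordingly (the cutoff cancels in every combination):

```python
import math

a, b, c_, d = 0.0, 1.0, 2.0, 4.0     # A = [a, b], B = [c_, d]
root = math.sqrt((a - b)*(a - c_)*(b - d)*(c_ - d))
q1 = (root + a*d - b*c_)/(a - b - c_ + d)
q2 = -(root - a*d + b*c_)/(a - b - c_ + d)

# balance conditions, written as ratios of interval lengths
assert math.isclose((q1 - b)/(c_ - q1), (b - q2)/(c_ - q2))
assert math.isclose((a - q2)/(d - q2), (q1 - a)/(d - q1))

# BPE(A:B) = (S_{AA_1'} + S_{AA_2'} - S_{A_1'} - S_{A_2'})/2, in units of c
bpe = (math.log(q1 - a) + math.log(b - q2)
       - math.log(q1 - b) - math.log(a - q2))/6
closed = math.log((a*(b + c_ - 2*d) + 2*root + d*(b + c_) - 2*b*c_)
                  /((a - d)*(b - c_)))/6     # eq. (BPEnonadjacent)
assert math.isclose(bpe, closed)
```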
Let us consider the following symmetric case where
\begin{align}
A= [1,1/x]\,, ~~~~ B = [-1/x,-1]\,,~~~~~~~ \mathrm{where}~~ 0 < x < 1.
\end{align}
Substituting the above parameters into the BPE \eqref{BPEnonadjacent}, we find that the partition respects the reflection symmetry, with $q_1=0, \, q_2\to\infty$, and the BPE reduces to the simple formula
\begin{align}
\mathrm{BPE}\,(A:B) = -\frac{c}{6} \log x\,,
\end{align}
which exactly matches the leading term of the EoP obtained through the minimization of $S_{AA'}$ in the optimized purification in \cite{Caputa:2018xuf}. It is worth noting, however, that the calculation in \cite{Caputa:2018xuf} is only valid when $x$ is small.
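A numerical check of the symmetric case (plain Python; $x=0.3$ is an arbitrary choice in $0<x<1$):

```python
import math

x = 0.3                                  # any 0 < x < 1
a, b, c_, d = -1/x, -1.0, 1.0, 1/x       # B = [a, b] and A = [c_, d]
root = math.sqrt((a - b)*(a - c_)*(b - d)*(c_ - d))

# closed form (BPEnonadjacent), in units of the central charge c
bpe = math.log((a*(b + c_ - 2*d) + 2*root + d*(b + c_) - 2*b*c_)
               /((a - d)*(b - c_)))/6
assert math.isclose(bpe, -math.log(x)/6)
```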
\section{Balance conditions as extremal conditions}\label{secbpeminimized}
In this section, we demonstrate that the BPE could be a proper measure of the total intrinsic correlation between $A$ and $B$ in the mixed state $\rho_{AB}$. In the following, we characterize all the correlations in terms of the PEE and denote this total intrinsic correlation between $A$ and $B$ as $\mathcal{C}(A, B)$. Let us forget for a moment about the BPE and start from the unknown measure $\mathcal{C}(A, B)$ with the following expectations:
\begin{itemize}
\item $\mathcal{C}(A, B)$ is intrinsic to $A$ and $B$, and hence should be independent of the purifications.
\item $\mathcal{C}(A, B)$ should give the reflected entropy when we consider the canonical purification.
\item For a purification with holographic dual, $\mathcal{C}(A, B)$ should be given by the EWCS.
\end{itemize}
Later we will show how to determine the expression of $\mathcal{C}(A, B)$ in terms of PEE.
\subsection{Minimizing the crossing correlation for adjacent cases}
Let us first consider the simple example where $A$ and $B$ are adjacent. When a purification is given, it is not obvious why the balance condition $s_{AA'}(A)=s_{BB'}(B)$ for the PEE helps us identify the total intrinsic correlation $\mathcal{C}(A, B)$. We perform a decomposition of the complement $A'B'=A'\cup B'$, so that the purification consists of four different regions, whose correlation structure can be characterized by the following six PEEs:
\begin{align}
&\mathcal{I}_{AB},~\qquad \mathcal{I}_{AB'},\qquad \mathcal{I}_{AA'},\qquad \mathcal{I}_{BB'},\qquad \mathcal{I}_{A'B},\qquad \mathcal{I}_{A'B'}.
\end{align}
Some of the PEEs and entanglement entropies are determined by the density matrix $\rho_{AB}$, and hence are independent of the purification. These include the entanglement entropies $S_{AB}$, $S_{A}$ and $S_{B}$, which can be written as collections of PEEs
\begin{align}\label{pisasbsab}
S_{A}=&\mathcal{I}_{AB}+\mathcal{I}_{AB'}+\mathcal{I}_{AA'}\,,
\cr
S_{B}=&\mathcal{I}_{AB}+\mathcal{I}_{BB'}+\mathcal{I}_{BA'}\,,
\cr
S_{AB}=&\mathcal{I}_{AB'}+\mathcal{I}_{AA'}+\mathcal{I}_{BA'}+\mathcal{I}_{BB'}\,.
\end{align}
The above identities can be derived using the ALC proposal for the PEE in \eqref{ECproposal}. The main implication is that the PEEs sum up to the exact entropies on the left-hand sides. For example, the first identity decomposes $S_A$ into the contributions to $A$ from $B$, $B'$, and $A'$.
The purification independence of $\mathcal{I}_{AB}$ follows from that of the above entanglement entropies, since \eqref{pisasbsab} implies $2\mathcal{I}_{AB}=S_A+S_B-S_{AB}$. For the same reason, the following two linear combinations of PEEs, which we denote as $\mathcal{P}_1$ and $\mathcal{P}_2$, are purification independent, since $\mathcal{P}_1=S_A-\mathcal{I}_{AB}$ and $\mathcal{P}_2=S_B-\mathcal{I}_{AB}$:
\begin{align}\label{C1C2}
1) ~~\mathcal{I}_{AB'}+\mathcal{I}_{AA'}=\mathcal{P}_1\,,\qquad \qquad 2) ~~\mathcal{I}_{BB'}+\mathcal{I}_{BA'}=\mathcal{P}_2\,.
\end{align}
In the following, we analyze which part of the six PEEs may contribute or relate to the intrinsic correlation $\mathcal{C}(A,B)$:
\begin{itemize}
\item The PEE $\mathcal{I}_{AB}$ is a direct and intrinsic measure of certain correlations between $A$ and $B$, hence should be included in $\mathcal{C}(A,B)$. Furthermore, in the special case where $A$ and $B$ are adjacent, $\mathcal{I}_{AB}=I(A:B)/2$.
\item The crossing PEEs $\mathcal{I}_{AB'}$ and $\mathcal{I}_{BA'}$ that cross the four regions contribute partially to $\mathcal{C}(A,B)$. They also contribute to the correlation between $AB$ and $A'B'$.
\item The PEEs $\mathcal{I}_{AA'}$ and $\mathcal{I}_{BB'}$ mainly sustain the entanglement between $AB$ and $A'B'$. They may also partially contribute to $\mathcal{C}(A,B)$.
\item The PEE $\mathcal{I}_{A'B'}$ definitely has no contribution to $\mathcal{C}(A,B)$. It is possible to eliminate this correlation part via unitary transformations inside $A'B'$.
\end{itemize}
How can we extract the correlation $\mathcal{C}(A, B)$ from the PEEs for a given purification? Unlike the EoP, we are not going to minimize over all possible purifications to find the minimal $S_{AA'}$. Also, unlike the reflected entropy $S_{R}(A, B)$, we will not restrict ourselves to the canonical purification, which requires the explicit density matrix $\rho_{AB}$. When the purification is fixed, the only parameter we can adjust is the partition of the complement region $A'B'$, i.e., the position of the partition point $Q$ in this case, and all the PEEs except $\mathcal{I}_{AB}$ are affected by the position of $Q$. Suppose that $Q$ starts at some point near $A$, and we move it towards $B$ by a small distance $\mathrm{d}q$. This operation expands $A'$ and shrinks $B'$, hence increases $\mathcal{I}_{BA'}$ and decreases $\mathcal{I}_{AB'}$. At the same time, we keep in mind that the combinations \eqref{C1C2} do not depend on $Q$. Due to the additivity of the PEE, the change of the PEEs can be explicitly described in the following way:
\begin{align}
\mathcal{I}_{BA'}\to \mathcal{I}_{BA'}+\mathcal{I}_{B(\mathrm{d}q)}\,, \qquad \mathcal{I}_{BB'}\to \mathcal{P}_2-\mathcal{I}_{BA'}-\mathcal{I}_{B(\mathrm{d}q)}\,,
\cr
\mathcal{I}_{AB'}\to \mathcal{I}_{AB'}-\mathcal{I}_{A(\mathrm{d}q)}\,, \qquad \mathcal{I}_{AA'}\to \mathcal{P}_1-\mathcal{I}_{AB'}+\mathcal{I}_{A(\mathrm{d}q)}\,.
\end{align}
We do not need to discuss the change of $\mathcal{I}_{A'B'}$ since it will be excluded from $\mathcal{C}(A, B)$.
When we say $Q$ is close to $A$, we mean $\mathcal{I}_{A(\mathrm{d}q)}>\mathcal{I}_{B(\mathrm{d}q)}$. If we move $Q$ towards $B$, we will first arrive at a balance point where
\begin{align}\label{balancec2}
\mathcal{I}_{A(\mathrm{d}q)}=\mathcal{I}_{B(\mathrm{d}q)}\,,
\end{align}
after which we enter the region close to $B$, where $\mathcal{I}_{A(\mathrm{d}q)}<\mathcal{I}_{B(\mathrm{d}q)}$. It is then natural to consider the combination $\mathcal{I}_{AB'}+\mathcal{I}_{BA'}$, which decreases at first, reaches its minimal value at the balance point, and then increases as $Q$ moves further towards $B$. One can check this with an explicit calculation of the combination
\begin{align}
\mathcal{I}_{AB'}+\mathcal{I}_{BA'}=&\frac{1}{2}\left( S_{AA'}+S_{AB}-S_{A'}-S_{B}\right)+\frac{1}{2}\left(S_{AB}+S_{BB'}-S_{A}-S_{B'}\right),
\cr
=&\frac{c}{6} \log \left(\frac{(a-b)^2 (p-q)^2}{(a-p) (a-q) (p-b) (b-q)}\right),
\end{align}
where the PEEs are calculated by the ALC proposal \eqref{ECproposal}. The above expression reaches its minimal value $\frac{c}{3}\log 2$ at the point \eqref{solq}, where the balance condition is satisfied (see Fig.\,\ref{saddle}). Since the minimization and the balance condition coincide, we conclude
\begin{align}\label{crpee1}
\left(\mathcal{I}_{AB'}+\mathcal{I}_{BA'}\right)|_{\mathrm{minimized}}=2\mathcal{I}_{AB'}|_{\mathrm{balance}}=2\mathcal{I}_{A'B}|_{\mathrm{balance}}=\frac{c}{3}\log 2\,.
\end{align}
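The minimization can be checked directly with the numbers used in Fig.\,\ref{saddle} (a Python sketch; the probe step $0.1$ is an arbitrary choice):

```python
import math

c = 1.0
a, b, p = 0.0, 2.0, 1.5                  # the values used in the figure

def crossing_sum(q):                     # I_{AB'} + I_{BA'} as a function of q
    return c/6*math.log((a - b)**2*(p - q)**2
                        /((a - p)*(a - q)*(p - b)*(b - q)))

q_bal = (2*a*b - (a + b)*p)/(a + b - 2*p)    # balance point, q = 3
assert math.isclose(crossing_sum(q_bal), c/3*math.log(2))
# nearby partition points give a larger value, so the balance point is a minimum
assert crossing_sum(q_bal - 0.1) > crossing_sum(q_bal)
assert crossing_sum(q_bal + 0.1) > crossing_sum(q_bal)
```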
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{bbal.pdf}
\caption{Here we set $c=1$, $a=0$, $b=2$, and $p=3/2$. The curve shows how the sum of the crossing PEEs $\mathcal{I}_{AB'}+\mathcal{I}_{BA'}$ varies with respect to $q$. The sum reaches its minimal value of $(\log 2)/3 = 0.231049$ at $q=3$, which is the balance point.
\label{saddle} }
\end{figure}
The above analysis shows that if we exclude the contribution from $\mathcal{I}_{A'B'}$, the sum of the crossing correlations $\mathcal{I}_{AB'}+\mathcal{I}_{BA'}$ is minimized at the balance point. This looks similar to the definition of the EoP. However, for an arbitrary purification, $S_{AA'}-\mathcal{I}_{A'B'}$ still captures more correlations than $\mathcal{C}(A,B)$: if the crossing PEEs contribute to $\mathcal{C}(A,B)$, they also contribute to the correlation $\mathcal{C}(A',B')$ in a similar sense. This implies that the crossing PEEs can only contribute partially to $\mathcal{C}(A, B)$. Before we determine $\mathcal{C}(A, B)$ in terms of the PEE, we still need to answer the following two questions:
\begin{itemize}
\item How are the minimized crossing PEEs assigned to the correlations $\mathcal{C}(A,B)$ and $\mathcal{C}(A',B')$ respectively?
\item When the crossing PEEs are minimized, do the PEEs $\mathcal{I}_{AA'}$ and $\mathcal{I}_{BB'}$ contribute to $\mathcal{C}(A,B)$?
\end{itemize}
\subsection{The universality of the crossing PEE and the Markov gap}
The answers to the above two questions follow from our expectation that $\mathcal{C}(A,B)$ should give the reflected entropy for the canonical purification and be independent of any particular purification. This means that the difference between $\mathcal{C}(A,B)$ and the intrinsic $\mathcal{I}_{AB}$ should be purification-independent and given by
\begin{align}\label{generalizemarkovgap}
\mathcal{C}(A,B)-\mathcal{I}_{AB}=\frac{1}{2}S_{R}(A:B)-\mathcal{I}_{AB}\,.
\end{align}
In the adjacent case where $\mathcal{I}_{AB}=I(A:B)/2$, the above difference is just the so-called Markov gap \cite{Zou:2020bly, Siva:2021cgo, Hayden:2021gno}, defined as half the difference between the reflected entropy and the mutual information\footnote{Our notation differs from \cite{Zou:2020bly, Siva:2021cgo} by a factor of $1/2$.}
\begin{align}
h(A,B)=\frac{1}{2}\left( S_{R}(A:B)-I(A:B)\right).
\end{align}
Remarkably, explicit calculations in 2d CFTs show that the Markov gap is given by the universal constant $\frac{c}{6}\log 2$ in the adjacent case. Moreover, equation \eqref{generalizemarkovgap} suggests extending the Markov gap from the canonical purification to more generic purifications, defining it as the difference between the correlation $\mathcal{C}(A,B)$ and $\mathcal{I}_{AB}$.
We argue that the proper interpretation for $\mathcal{C}(A,B)-\mathcal{I}_{AB}$ is just the minimized (or balanced) crossing PEE
\begin{align}\label{cabinterpretation}
\mathcal{C}(A,B)-\mathcal{I}_{AB}=\frac{1}{2}\left(\mathcal{I}_{AB'}+\mathcal{I}_{A'B}\right)|_{\mathrm{minimized}}=\mathcal{I}_{AB'}|_{\mathrm{balance}}=\mathcal{I}_{A'B}|_{\mathrm{balance}}\,,
\end{align}
which directly suggests that the correlation $\mathcal{C}(A,B)$ is exactly given by the $\mathrm{BPE}\,(A:B)$
\begin{align}\label{CAB}
\mathcal{C}(A,B)=\mathcal{I}_{AB}+\mathcal{I}_{AB'}|_{\mathrm{balance}}=\text{BPE}\,(A:B)\,.
\end{align}
The reason we propose \eqref{cabinterpretation} is the following. Firstly, it was shown in \cite{Wen:2021qgx} that for the canonical purification, the Markov gap coincides with the balanced crossing PEE. In this case, the partition of $A'B'$ automatically satisfies the balance condition, and the reflected entropy can be naturally interpreted as the PEE $s_{AA'}(A)=S_{AA'}/2$ due to the symmetry between $A'$ and $A$. Also, the reflection symmetry between $AB$ and $A'B'$ implies that the minimized crossing PEEs contribute equally to $\mathcal{C}(A,B)$ and $\mathcal{C}(A',B')$; hence, at the balance point, only half of the crossing PEEs contribute to $\mathcal{C}(A,B)$. Secondly, for all the purifications we explored, the balanced crossing PEEs are given by the same constant $\frac{c}{6}\log 2$, which coincides with the Markov gap. These purifications include:
\begin{itemize}
\item The vacuum state of the holographic CFT$_2$ \cite{Wen:2021qgx}.
\item The canonical purification in holographic \cite{Wen:2021qgx, Zou:2020bly} and several generic $2d$ CFTs \cite{Zou:2020bly} including the Ising CFT, the tricritical Ising CFT, the compactified free boson CFT with different compactification radius. The universality of this constant even extends to ($2+1$)-dimensional topological phases \cite{Siva:2021cgo, Liu:2021ctk}.
\item The class of purifications we obtained from path-integral optimization with different boundary conditions for the metric on the compliment (see section \ref{secbpepurifications}).
\end{itemize}
Like the Markov gap, the balanced crossing PEEs in all the above purifications are independent of both the partition of $AB$ and the length of $AB$. It is then natural to propose that the generalization of the Markov gap to general purifications is just the balanced crossing PEE, which is a universal constant that depends only on the central charge of the CFT: it is independent not only of the purification but also of the details of the mixed state. One can give a naive argument for this universality when the pure state is defined on a circle. In a CFT$_2$, the entanglement entropy of a single interval is given by a universal formula. Since the crossing PEE can be written as a special linear combination of single-interval entanglement entropies, and all the interval lengths cancel when the balance condition is satisfied, the balanced crossing PEE should be given by the same constant in any CFT. At the balance point, since the PEE $\mathcal{I}_{AB}$ plus the balanced crossing PEE exactly gives the reflected entropy and the EWCS, $\mathcal{C}(A,B)$ does not receive any contribution from $\mathcal{I}_{AA'}$ and $\mathcal{I}_{BB'}$.
\subsection{Minimizing the crossing correlation for non-adjacent cases}\label{Non-adjacent case}
Now we consider two non-adjacent intervals in the vacuum of a CFT$_2$, which corresponds to global AdS$_3$. We set the length of the boundary circle to be $2\pi$, so the entanglement entropy of a single interval with length $\ell$ is given by $S=\frac{c}{3}\log\left(\frac{2}{\varepsilon}\sin\frac{\ell}{2}\right)$. The setup for the non-adjacent system $AB$ is shown in Fig.\,\ref{bpe1} (right). The intervals have the following lengths:
\begin{align}
l_A=2a,\quad l_B=2b, \quad l_{A_1'}=2a_1,\quad l_{B_1'}=2b_1,\quad l_{A_2'}=2a_2,\quad l_{B_2'}=2b_2.
\end{align}
Defining $a_1+b_1=\alpha$, we have $a_2+b_2=\pi-a-b-\alpha$. Once the lengths and positions of $A$ and $B$ are given, the parameter $\alpha$ is also determined, so only two parameters remain undetermined, which we take to be $a_1$ and $a_2$. Hence, we have
\begin{align}\label{b1b2}
b_1=\alpha-a_1\,, \qquad b_2=\pi-\alpha-a-b-a_2\,.
\end{align}
The balance requirements \eqref{BR2} are equivalent to the following equations \cite{Wen:2021qgx}
\begin{align}
S_{A_1'}-S_{B_1'}=S_{AA_2'}-S_{BB_2'}\,,
\qquad
S_{A_2'}-S_{B_2'}=S_{AA_1'}-S_{BB_1'}\,.
\end{align}
These conditions give the following equations
\begin{equation}
\begin{aligned}
\frac{\sin \left[a_{1}\right]}{\sin \left[\alpha-a_{1}\right]} =\frac{\sin \left[a+a_{2}\right]}{\sin \left[\alpha+a+a_{2}\right]},~~~~~
\frac{\sin \left[a_{2}\right]}{\sin \left[\alpha+a+b+a_{2}\right]} =\frac{\sin \left[a+a_{1}\right]}{\sin \left[b+\alpha-a_{1}\right]},
\end{aligned}\label{br}
\end{equation}
and determine the position of the partition points $P_1$ and $P_2$. The solution is given by \cite{Wen:2021qgx}
\begin{equation}
\begin{aligned}
a_{1}=&\cos ^{-1}\left[\frac{\sin \left(a-\alpha+a_{2}\right)+3 \sin \left(a+\alpha+a_{2}\right)}{\sqrt{2} \sqrt{-2 \cos \left(2\left(a+\alpha+a_{2}\right)\right)-2 \cos \left(2\left(a+a_{2}\right)\right)+\cos (2 \alpha)+3}}\right],\\
a+2 a_{2}=&\tan ^{-1}[\sin (a-b)(\sin (a) \cos (\eta)-\sin (b))-2 \xi \sin (a) \sin (\eta)\\
&-\sin (a) \sin (\eta) \sin (a-b)-2 \xi \sin (a) \cos (\eta)+2 \xi \sin (b)],
\end{aligned}\label{brs}
\end{equation}
where
\begin{align}
&\xi=\sqrt{\sin (a) \sin (b) \sin (a+\alpha) \sin (\alpha+b)}\,, \\
&\eta=a+b+2 \alpha\,.
\end{align}
Now we define three combinations of the crossing PEEs:
\begin{align}\label{def}
C_1\equiv&~\mathcal{I}_{AB_1^\prime}+\mathcal{I}_{BA_1^\prime}+\mathcal{I}_{AB_2^\prime}+\mathcal{I}_{BA_2^\prime}\,,
\cr
C_2\equiv&~\mathcal{I}_{A_1^\prime B_2^\prime}+\mathcal{I}_{B_1^\prime A_2^\prime}\,,
\cr
C_3\equiv&~C_1+C_2.
\end{align}
All the above PEEs can be easily calculated by the ALC proposal. The direct generalization of the crossing PEE is $C_1$. It was also verified in \cite{Wen:2021qgx} that the $\mathrm{BPE}\,(A:B)$ has the following decomposition as in \eqref{cabinterpretation}
\begin{align}
\text{BPE}\,(A:B)=s_{AA'}(A)|_{\mathrm{balance}}=\mathcal{I}_{AB}+\frac{1}{2}C_1|_{\mathrm{balance}}\,,
\end{align}
where $C_{1}$ plays exactly the role of the balanced crossing PEE. The above BPE furthermore gives the area of the EWCS \cite{Wen:2021qgx}. The combination $C_2$ is new compared with the adjacent case; it is the correlation that crosses the partition points but stays inside $A'B'$.
Then we take the first derivatives of $C_i$ with respect to $a_1,a_2$ and solve the equations
\begin{align}
\partial_{a_i}C_j=0\,,\qquad i=1,2;\quad j=1,2,3\,. \label{Cisaddle}
\end{align}
Generally, consider the region
\begin{equation}
\begin{aligned}
&0<a_1<\alpha\,,\\
&0<a_2<\pi-a-b-\alpha\,.
\end{aligned}
\end{equation}
The functions $C_1(a_1,a_2)$ and $C_3(a_1,a_2)$ have a minimum in this region, while $C_2(a_1,a_2)$ has a saddle point. See Fig.\,\ref{c1c2} for a typical example. One can further check that the solutions \eqref{brs} to the balance conditions also solve the extremal conditions \eqref{Cisaddle} for all three combinations $C_i$. So, as in the adjacent case, the crossing PEEs are minimized at the balance point.
However, we note that the balanced crossing PEEs in this case are no longer a universal constant, but depend on the sizes of the intervals $A$ and $B$. This can be traced back to the fact that the correlation $\mathcal{I}_{AB}$ does not reduce to $I(A:B)/2$ for non-adjacent intervals. Hence, one needs to be more careful about defining a generalized version of the Markov gap, which is expected to be universal. Nevertheless, like the Markov gap \cite{Hayden:2021gno}, the above crossing PEEs might satisfy certain bounds. When the balance conditions are satisfied, one can verify that
\begin{align}
\mathcal{I}_{A_2'(BB_1')}=\mathcal{I}_{B_2'(AA_1')}=\mathcal{I}_{A_1'(BB_2')}=\mathcal{I}_{B_1'(AA_2')}=\frac{c}{6}\log 2\,,
\end{align}
This is because an adjacent configuration is recovered when we consider, for example, $\rho_{(AA_2')(BB_2')}$ as a new bipartite system with the two subsystems $AA_2'$ and $BB_2'$. We will comment briefly on this in later sections.
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.3\linewidth}
\centering
\includegraphics[width=4.8cm]{C1.png}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\centering
\includegraphics[width=4.8cm]{C2.png}
\end{minipage}
\begin{minipage}[t]{0.3\linewidth}
\centering
\includegraphics[width=4.8cm]{C3.png}
\end{minipage}
\caption{Left: $C_1=C_1(a_1,a_2)$, middle: $C_2=C_2(a_1,a_2)$, right: $C_3=C_3(a_1,a_2)$ in the case of $a=\pi/2,b=\pi/4$ and $\alpha=\pi/8$. We find that $C_1$ and $C_3$ have a minimum point while $C_2$ has a saddle point. Note that we have introduced a cutoff, as $a_1,a_2$ start from $0.1$ rather than $0$.}
\label{c1c2}
\end{figure}
\section{BPE in flat holography and Markov recovery}\label{bpemarkov}
\subsection{Holographic entanglement and EWCS in 3d flat holography}
Our claim that the BPE captures the total intrinsic correlations in the mixed state should apply to generic theories. In holographic theories beyond AdS/CFT, the BPE should also correspond to the EWCS. Here we conduct an explicit test of our claim in 3d flat holography. In this case, the asymptotic symmetry group is the three-dimensional Bondi-Metzner-Sachs (BMS$_3$) group, which is infinite-dimensional. The correspondence between 3d asymptotically flat spacetimes and the field theory invariant under the BMS$_3$ group (BMSFT)\footnote{Since the algebra of the BMS$_3$ group and the Galilean conformal algebra (GCA) \cite{Bagchi:2009pe} are isomorphic, the duality is also denoted as the GCFT/flat-space correspondence.} defined at null infinity was proposed in \cite{Bagchi:2010eg,Bagchi:2012cy}. Cardy-like formulas \cite{Bagchi:2012xr,Barnich:2012xq,Jiang:2017ecm} were proposed to reproduce the Bekenstein-Hawking entropy of cosmological horizons. More importantly, the geometric picture for holographic entanglement entropy was constructed in \cite{Jiang:2017ecm} (see also \cite{Wen:2018mev,Hijano:2017eii,Apolo:2020bld}), which furthermore inspired the construction of the EWCS in flat spacetime \cite{Basu:2021awn}. See also \cite{Bagchi:2014iea,Basu:2015evh} for other discussions of entanglement in BMSFTs and \cite{Hijano:2017eii,Hijano:2018nhq} for further developments in 3d flat holography.
For an asymptotically flat 3d spacetime, we characterize the null infinity where the dual BMSFT lives with the coordinates $(u,\phi)$, where the $u$ direction is null and the $\phi$ direction is spacelike. The generators of the asymptotic symmetries, which form the BMS$_3$ group, are the following
\begin{align}
L_n = u^{n+1} \partial_u + (n+1) u^n \phi \partial_\phi\,, \qquad M_n = u^{n+1} \partial_\phi\,,\qquad n=0,\pm 1,\pm 2,\cdots
\end{align}
The conserved charges satisfy a centrally extended version of the algebra given by
\begin{align}
[L_m, L_n] &= (m-n) L_{m+n} + \frac{c_L}{12} (m^3 - m) \delta_{m+n,0}, \nonumber \\
[L_m, M_n] &= (m-n) M_{m+n} + \frac{c_M}{12} (m^3 - m) \delta_{m+n,0}, \nonumber \\
[M_m, M_n] &= 0.
\end{align}
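As an aside, the classical (non-central) part of this algebra can be checked symbolically by realizing the generators as differential operators. The sketch below (using sympy; symbol names chosen here for illustration) verifies closure. Note that the Lie bracket of the vector fields as written closes as $[L_m,L_n]=(n-m)L_{m+n}$, with the opposite overall sign relative to the charge commutators above, reflecting the usual convention flip between vector fields and charges; no central terms arise at this classical level.

```python
import sympy as sp

u, phi = sp.symbols('u phi')
f = sp.Function('f')(u, phi)

# BMS3 generators as first-order differential operators acting on a test function g(u, phi)
def L(n):
    return lambda g: u**(n + 1) * sp.diff(g, u) + (n + 1) * u**n * phi * sp.diff(g, phi)

def M(n):
    return lambda g: u**(n + 1) * sp.diff(g, phi)

def bracket(V, W):
    # commutator [V, W] acting on the test function f; second derivatives cancel
    return sp.expand(V(W(f)) - W(V(f)))

m, n = 1, 2
# [L_m, L_n] = (n - m) L_{m+n}  (vector-field sign convention)
assert bracket(L(m), L(n)) == sp.expand((n - m) * L(m + n)(f))
# [L_m, M_n] = (n - m) M_{m+n}
assert bracket(L(m), M(n)) == sp.expand((n - m) * M(m + n)(f))
# [M_m, M_n] = 0
assert bracket(M(m), M(n)) == 0
```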
Note that the two central charges $c_{L}$ and $c_M$ depend on the specific gravity dual. Here we focus only on the BMSFT dual to Einstein gravity, which corresponds to the following choice of central charges \cite{Barnich:2006av}
\begin{align}
c_L = 0\,,\qquad c_M=\frac{3}{G_N}\,.
\end{align}
Here we focus on a particular background among the general classical solutions of Einstein gravity without a cosmological constant, which take the following form in the Bondi gauge \cite{Barnich:2010eb}
\begin{align}\label{ClassSol}
ds^2= 8G_N M \,du^2-2 \,du\,dr+8G_N J \,du \,d\phi+r^2 d\phi^2.
\end{align}
We only discuss the case of the null orbifold, with parameters $M=J=0$. This is dual to the zero-temperature BMSFT on a plane (an analog of the zero-temperature BTZ black hole). One can take a boundary interval $A : \{(u_1, \phi_1), (u_2, \phi_2)\}$ and proceed to calculate the entanglement entropy in this theory. The entanglement entropy of such an interval takes the following simple form \cite{Bagchi:2014iea, Basu:2015evh, Jiang:2017ecm}
\begin{align}\label{EEflat}
S_{A}= \frac{c_L}{6} \ln \bigg(\frac{u_{12}}{\varepsilon}\bigg) + \frac{c_M}{6} \bigg(\frac{\phi_{12}}{u_{12}}\bigg),
\end{align}
where $\phi_{12} = \phi_2 - \phi_1$, $u_{12} = u_2 - u_1$, and $\varepsilon$ is the (lattice) cutoff. Note that the first term is logarithmic with central charge $c_L$, whereas the second term does not involve any logarithm. Equipped with this result, we will calculate the BPE in the BMSFT for certain purifications.
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.4\linewidth}
\centering
\includegraphics[width=6cm]{flatEE1.png}
\end{minipage}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=6cm]{FlatEE2.png}
\end{minipage}
\caption{In the left figure, we consider an interval $A$ on the future null infinity $\mathcal{I}^{+}$. The red lines are the null geodesics $\gamma_{1,2}$ along the $r$ direction emanating from the endpoints $x_{1,2}$. The solid blue line $\mathcal{E}_{A}$ is the RT surface, which is the saddle geodesic with minimal length among all the bulk geodesics connecting $\gamma_{1,2}$. The right figure shows the same geometric picture in Cartesian coordinates, where all the geodesics are straight lines. The two planes are the two normal null hypersurfaces $\mathcal{N}_{\pm}$ emanating from $\mathcal{E}_{A}$.}
\label{RTflat}
\end{figure}
Another essential ingredient for our discussion is the geometric picture for holographic entanglement entropy \cite{Jiang:2017ecm}, which can be used to construct the analogue of the EWCS in $3d$ flat space. The construction of the EWCS was explicitly studied in \cite{Basu:2021awn}, with the motivation to establish the duality between the EWCS and the entanglement negativity\footnote{See also \cite{Basu:2021axf, Setare:2021ryi} for relevant discussions.} \cite{Malvimat:2018izs} in $3d$ flat holography.
The geometric picture for entanglement entropy in flat holography \cite{Jiang:2017ecm} not only contains a bulk spacelike geodesic, but also additional null geodesics, which are novel compared with the RT formula in AdS/CFT. This novel geometric picture with null geodesics for holographic entanglement was argued in \cite{Wen:2018mev} to be valid for spacetimes with non-Lorentz-invariant duals, based on a modified version of the Lewkowycz-Maldacena prescription \cite{Lewkowycz:2013nqa}. Let us consider a boundary interval $A$ on the null infinity with endpoints $x_{1,2}$. To each endpoint $x_i$, we associate a null geodesic $\gamma_{i}$ emanating from it and extending into the bulk. The null geodesic is determined by the requirement that it should follow the bulk modular flow. In the Bondi gauge, they are just null lines along the $r$ direction. The spacelike geodesic $\mathcal{E}_{A}$, whose length gives the holographic entanglement entropy, is the geodesic of minimal length among all the geodesics whose endpoints are anchored on $\gamma_1$ and $\gamma_2$ respectively. For simplicity, we also refer to $\mathcal{E}_{A}$ as the RT surface. See Fig.\,\ref{RTflat} for the geometric picture in both the Bondi gauge and Cartesian coordinates. The boundary together with the two normal null hypersurfaces $\mathcal{N}_{\pm}$ emanating from $\mathcal{E}_{A}$ encloses a bulk region, which is the analog of the entanglement wedge in flat space.
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.4\linewidth}
\centering
\includegraphics[width=5.5cm]{FlatHolo_EWCS_AdjInt.pdf}
\end{minipage}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=5.5cm]{FlatHolo_EWCS_DisjInt.pdf}
\end{minipage}
\caption{Boundary intervals (left: adjacent, right: non-adjacent) and their entanglement wedges. The red lines emanating from the boundary points $x_i$ are the null lines $\gamma_i$. Here $y_i\,(i=1,2,3,4)$ are the endpoints of the relevant RT surfaces connecting the $\gamma_{i}$. $y_b$ and $y_b'$ are the endpoints of the EWCS (green), which is the saddle geodesic connecting those RT surfaces. In the left figure, $y_b$ is just a point on $\gamma_{2}$. The figures are inspired by \cite{Basu:2021awn}.}
\label{c1c3}
\end{figure}
The EWCS in 3d flat space \cite{Basu:2021awn} is defined in a similar way as the EWCS on a time slice of the AdS background. Let us consider the configurations shown in Fig.\,\ref{c1c3}. For the adjacent case, where $x_2$ is the endpoint shared by $A$ and $B$, the EWCS is the saddle geodesic whose endpoints $y_b$ and $y_b'$ are anchored on $\mathcal{E}_{AB}$ and $\gamma_2$ respectively. For the non-adjacent case, the entanglement wedge of $AB$ also undergoes a phase transition from the disconnected phase to the connected phase when $A$ and $B$ are close enough. In the connected phase, the EWCS is given by the saddle geodesic whose endpoints are anchored on $\mathcal{E}_{23}$ and $\mathcal{E}_{14}$, where, for example, $\mathcal{E}_{23}$ is the RT surface of the interval with endpoints $x_2$ and $x_3$. The EWCSs are drawn as solid green lines in Fig.\,\ref{c1c3}, and their lengths were explicitly calculated in \cite{Basu:2021awn}. In the following, we will reproduce the EWCS via the BPE in the BMSFT.
\subsection{BPE for adjacent cases and the Markov recovery}
First, we consider two adjacent intervals $A : \{(u_1, \phi_1), (u_2, \phi_2)\}$ and $B : \{(u_2, \phi_2), (u_3, \phi_3)\}$. The system is purified with an auxiliary system $A'B'$ partitioned by the point $Q:(u_q, \phi_q)$. The entanglement entropy for each interval is given by
\begin{align}
S_A &= \frac{c_L}{6} \ln \bigg(\frac{u_{12}}{\varepsilon}\bigg) + \frac{c_M}{6} \bigg(\frac{\phi_{12}}{u_{12}}\bigg), ~~~~~ S_B = \frac{c_L}{6} \ln \bigg(\frac{u_{23}}{\varepsilon}\bigg) + \frac{c_M}{6} \bigg(\frac{\phi_{23}}{u_{23}}\bigg), \nonumber \\
S_{A'} &= \frac{c_L}{6} \ln \bigg(\frac{u_{q1}}{\varepsilon}\bigg) + \frac{c_M}{6} \bigg(\frac{\phi_{q1}}{u_{q1}}\bigg), ~~~~~ S_{B'} = \frac{c_L}{6} \ln \bigg(\frac{u_{3q}}{\varepsilon}\bigg) + \frac{c_M}{6} \bigg(\frac{\phi_{3q}}{u_{3q}}\bigg),
\end{align}
where $\phi_{q1} = \phi_q - \phi_1$ and $u_{q1} = u_q - u_1$, and similarly for $\phi_{3q}$ and $u_{3q}$. As we are considering the BMSFT dual to Einstein gravity, we set $c_L = 0$. The balance condition \eqref{bal1} implies
\begin{align}
\frac{\phi_{12}}{u_{12}} - \frac{\phi_{23}}{u_{23}} = \frac{\phi_{q1}}{u_{q1}} - \frac{\phi_{3q}}{u_{3q}}\,.
\end{align}
Previously, in the CFT$_2$ case, we defined the pure state on a time slice, and hence only one parameter needed to be determined. In the BMSFT, the entanglement entropy of an interval diverges when its endpoints lie on a constant-$u$ slice. We therefore consider the covariant configuration, where the partition point is characterized by two parameters, while the balance condition is only one equation. Imposing the balance condition, we obtain a line of solutions for the partition point $Q:(u_q,\phi_q)$,
\begin{align}
\phi_q = \bigg(\frac{1}{u_{q1}} + \frac{1}{u_{3q}}\bigg)^{-1} \bigg(\frac{\phi_{12}}{u_{12}} + \frac{\phi_1}{u_{q1}} - \frac{\phi_{23}}{u_{23}} + \frac{\phi_3}{u_{3q}}\bigg). \label{xq}
\end{align}
One can verify that all the points on the above line give the same BPE. We first calculate the PEE $s_{AA'}(A)$ with $Q$ undetermined
\begin{align}
s_{AA'}(A) = \frac{1}{2}(S_A + S_{AA'} - S_{A'}) = \frac{c_M}{12} \left(\frac{\phi_{12}}{u_{12}} +\frac{\phi_{q2}}{u_{q2}} - \frac{\phi_{q1}}{u_{q1}} \right). \label{bbp}
\end{align}
Then we substitute the solution \eqref{xq} of the balance condition into \eqref{bbp} to obtain the BPE
\begin{align}
\mathrm{BPE}\,(A:B) = \frac{c_M}{12} \bigg(\frac{\phi_{12}}{u_{12}} + \frac{\phi_{23}}{u_{23}} - \frac{\phi_{13}}{u_{13}} \bigg)\,,
\end{align}
which is independent of $u_q$. Upon the substitution $c_M = 3/G_N$, this result exactly matches the EWCS obtained in \cite{Basu:2021awn} for adjacent intervals.
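The $u_q$-independence of this result can be verified directly with a computer algebra system. The following sympy sketch (an illustration with symbol names chosen here, working in units of $c_M/12$) substitutes the balanced partition point \eqref{xq} into the PEE \eqref{bbp} and confirms that it reduces to the expression above:

```python
import sympy as sp

# Coordinates of the endpoints x_1, x_2, x_3 and the partition point Q
u1, u2, u3, uq = sp.symbols('u1 u2 u3 u_q')
p1, p2, p3 = sp.symbols('phi1 phi2 phi3')

uq1, u3q = uq - u1, u3 - uq
u12, u23, u13 = u2 - u1, u3 - u2, u3 - u1
p12, p23, p13 = p2 - p1, p3 - p2, p3 - p1

# Partition point solving the balance condition, Eq. (xq)
pq = (p12/u12 + p1/uq1 - p23/u23 + p3/u3q) / (1/uq1 + 1/u3q)

# PEE s_{AA'}(A), Eq. (bbp), in units of c_M/12
s = p12/u12 + (p2 - pq)/(u2 - uq) - (pq - p1)/(uq - u1)

# The u_q dependence drops out: the BPE equals the adjacent EWCS
target = p12/u12 + p23/u23 - p13/u13
assert sp.cancel(s - target) == 0
```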
Furthermore, we can calculate the crossing PEE at the balance point, which is given by
\begin{align}
\mathcal{I}_{AB'} = \frac{1}{2} (S_{A'A} + S_{AB} - S_{A'} - S_B) = \frac{c_M}{12} \left(\frac{\phi_{q2}}{u_{q2}} + \frac{\phi_{13}}{u_{13}} - \frac{\phi_{q1}}{u_{q1}} - \frac{\phi_{23}}{u_{23}} \right). \label{cr}
\end{align}
On the solution \eqref{xq}, Eq.\,\eqref{cr} becomes
\begin{align}
\mathcal{I}_{AB'} = 0\,,
\end{align}
\emph{i.e.}, the balanced crossing PEE vanishes\footnote{Similarly, the difference between the entanglement negativity and half of the mutual information was found to be vanishing \cite{Basu:2021awn}. This is consistent with our results, since the entanglement negativity is also claimed to be the dual to the EWCS.}. This is in contrast to the AdS$_3$ case, where the crossing PEE is a non-zero constant.
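This cancellation can also be checked symbolically. The sketch below (symbol names chosen for illustration, in units of $c_M/12$) substitutes the balance solution \eqref{xq} into the crossing PEE \eqref{cr} and finds that it vanishes identically:

```python
import sympy as sp

u1, u2, u3, uq = sp.symbols('u1 u2 u3 u_q')
p1, p2, p3 = sp.symbols('phi1 phi2 phi3')

uq1, u3q = uq - u1, u3 - uq
u12, u23, u13 = u2 - u1, u3 - u2, u3 - u1
p12, p23, p13 = p2 - p1, p3 - p2, p3 - p1

# Balanced partition point, Eq. (xq)
pq = (p12/u12 + p1/uq1 - p23/u23 + p3/u3q) / (1/uq1 + 1/u3q)

# Crossing PEE, Eq. (cr), in units of c_M/12: it cancels for any u_q
crossing = (p2 - pq)/(u2 - uq) + p13/u13 - (pq - p1)/(uq - u1) - p23/u23
assert sp.cancel(crossing) == 0
```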
The vanishing of the crossing PEE may have a special physical meaning related to the Markov recovery process. This follows from the observation that the PEE, hence also the BPE and the crossing PEE, can be expressed in terms of the conditional mutual information (CMI) \cite{Rolph:2021nan}. The CMI for a three-party system is defined as \cite{Hayden:2021gno}
\begin{align}
I(A:B|C) = S_{AC} + S_{BC} - S_{ABC} - S_{C} = I(A:BC) - I(A:C). \label{cmi2}
\end{align}
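As a concrete standalone illustration of the CMI (not tied to the holographic setup), consider the three-qubit GHZ state: its CMI equals one bit, signaling a failure of the Markov property. The following numerical sketch computes it directly from reduced density matrices:

```python
import numpy as np

def entropy(rho):
    # von Neumann entropy in units of log 2 (bits)
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-np.sum(vals * np.log2(vals)))

def reduced(psi, keep):
    # reduced density matrix of a 3-qubit pure state on the qubits in `keep`
    t = psi.reshape(2, 2, 2)
    drop = [i for i in range(3) if i not in keep]
    rho = np.tensordot(t, t.conj(), axes=(drop, drop))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

# GHZ state (|000> + |111>)/sqrt(2); A, B, C are qubits 0, 1, 2
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

S = lambda keep: entropy(reduced(ghz, keep))
S_ABC = 0.0  # the full state is pure

# I(A:C|B) = S_AB + S_BC - S_ABC - S_B
cmi = S([0, 1]) + S([1, 2]) - S_ABC - S([1])
assert np.isclose(cmi, 1.0)  # one bit: the GHZ state is not a Markov chain
```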
Here we are interested in the crossing PEE. One can check that the crossing PEE $\mathcal{I}_{AB'}$ can be written as a CMI
\begin{align}\label{cmi1}
\mathcal{I}_{AB'}
&= \frac{1}{2}(S_{AA'}+ S_{AB} - S_{A'} - S_B),
\cr
&= \frac{1}{2}(S_{BB'} + S_{AB}- S_{ABB'}- S_{B}),
\cr
&= \frac{1}{2} I(B':A|B)\,,
\end{align}
where in the second line we used the fact that $ABA'B'$ is in a pure state. More explicitly, the above equation \eqref{cmi1} states that, given a purification $ABA'B'$, the crossing PEE captures the correlation between $A$ and $B'$ conditioned on $B$. The CMI is symmetric in $A$ and $B'$, and using purity it can also be expressed as $I(B':A|A')/2$.
Expressing the crossing PEE in terms of the CMI allows us to relate it to a specific Markov recovery process. To give a general overview of Markov recovery, consider a three-party system composed of $A$, $B$ and $C$. Denote the reduced density matrix of $A$ and $B$ by $\rho_{AB}$, which is generally a mixed state. We define a map $\mathcal{M}_{B \rightarrow BC}:B \rightarrow BC$ that acts on $\rho_{AB}$ and produces a tripartite state $\tilde{\rho}_{ABC}$ \cite{Hayden:2021gno},
\begin{align}
\tilde{\rho}_{ABC} = \mathcal{M}_{B \rightarrow BC} (\rho_{AB})\,. \label{mar1}
\end{align}
The question we want to address is whether it is possible to perfectly recover a tripartite state $\rho_{ABC}$ under a mapping \eqref{mar1}, provided the recovery map $\mathcal{M}_{B \rightarrow BC}$ exists. In fact, the recovery could be perfect or approximate (imperfect) \cite{Bhattacharya:2021dnd}. The degree of imperfection is quantified by the fidelity\footnote{The fidelity between two density matrices $\rho_1$ and $\rho_2$ is given by $\mathcal{F}(\rho_1, \rho_2) = \mathrm{Tr}\Big(\sqrt{\sqrt{\rho_1}\, \rho_2 \sqrt{\rho_1}}\,\Big)\,.$}
\begin{align}
\underset{\mathcal{M}_{B \rightarrow BC}}{\mathrm{max}} \,\mathcal{F} \big(\rho_{ABC}, \mathcal{M}_{B \rightarrow BC} (\rho_{AB})\big) \geq e^{-I(A:C|B)}\,, \label{mkv}
\end{align}
\emph{i.e.}, it is lower bounded by the exponential of the negative CMI. The fidelity ranges between $0$ and $1$, \emph{i.e.,} $0 \leq \mathcal{F}(\rho_1, \rho_2) \leq 1$: it equals $1$ if the two states are equal, and it vanishes if the states are infinitely far from each other in the Hilbert space, \emph{i.e.,} if they are orthogonal. The recovery is exact when the CMI vanishes, whereas in other cases the recovery is approximate. In both cases, one of the goals is to find an explicit expression for the recovery map. Such maps have, in fact, been constructed \cite{Petz, 2015} and are known as Petz maps \cite{Cotler:2017erl, Chen:2019gbt}.
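The statement that a vanishing CMI permits perfect recovery can be illustrated numerically. The sketch below (a standalone example with an assumed product structure, chosen purely for illustration) builds a Markov state $\rho_{ABC}=\rho_A\otimes\rho_{BC}$, for which $I(A:C|B)=0$, and checks that the Petz map $X_B \mapsto \rho_{BC}^{1/2}\big(\rho_B^{-1/2} X_B\, \rho_B^{-1/2}\otimes \mathbb{1}_C\big)\rho_{BC}^{1/2}$ recovers $\rho_{ABC}$ from $\rho_{AB}$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(d):
    # random full-rank density matrix of dimension d
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = m @ m.conj().T
    return rho / np.trace(rho).real

def mat_pow(rho, p):
    # Hermitian matrix power via eigendecomposition
    vals, vecs = np.linalg.eigh(rho)
    return (vecs * vals**p) @ vecs.conj().T

# A Markov state with I(A:C|B) = 0: rho_ABC = rho_A (x) rho_BC on qubits A, B, C
rho_A, rho_BC = random_state(2), random_state(4)
rho_ABC = np.kron(rho_A, rho_BC)

# marginal rho_B by tracing out C; marginal rho_AB of the product state
rho_B = np.trace(rho_BC.reshape(2, 2, 2, 2), axis1=1, axis2=3)
rho_AB = np.kron(rho_A, rho_B)

I2 = np.eye(2)
# Petz map acting on the B factor of rho_AB, output on BC (ordering A (x) B (x) C)
Y = np.kron(I2, mat_pow(rho_B, -0.5)) @ rho_AB @ np.kron(I2, mat_pow(rho_B, -0.5))
Y_ext = np.kron(Y, I2)                    # append the identity on C
S = np.kron(I2, mat_pow(rho_BC, 0.5))     # rho_BC^{1/2} acting on BC
recovered = S @ Y_ext @ S

# vanishing CMI implies perfect recovery: the state is reconstructed exactly
assert np.allclose(recovered, rho_ABC)
```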
When the partition of $A'B'$ satisfies the balance condition, the crossing PEE, as we have argued, is a generalization for the Markov gap. More interestingly, in 3d flat holography, the crossing PEE vanishes \emph{i.e.,}
\begin{align}
\mathcal{I}_{AB'}|_{\mathrm{balance}}=\mathcal{I}_{BA'}|_{\mathrm{balance}}=0\,,
\end{align}
and hence, by \eqref{cmi1} and its mirror with $A\leftrightarrow B$, $A'\leftrightarrow B'$, the corresponding CMIs vanish,
\begin{align}
I(A:B'|B)|_{\mathrm{balance}}=I(B:A'|A)|_{\mathrm{balance}}=0\,.
\end{align}
According to Eq.\eqref{mkv}, we can write
\begin{align}
0\geq - \frac{1}{2}~\underset{\mathcal{M}_{A \rightarrow AA'}}{\mathrm{max}} \,\log \mathcal{F} \big(\rho_{AA'B}, \mathcal{M}_{A \rightarrow AA'} (\rho_{AB})\big)\Big|_{\mathrm{balance}}\,,
\\
0\geq - \frac{1}{2}~\underset{\mathcal{M}_{B \rightarrow BB'}}{\mathrm{max}} \,\log \mathcal{F} \big(\rho_{ABB'}, \mathcal{M}_{B \rightarrow BB'} (\rho_{AB})\big)\Big|_{\mathrm{balance}}\,. \label{Markovgap}
\end{align}
Since the fidelity ranges from $0$ to $1$, the above inequalities can only be satisfied when the fidelity equals $1$. Hence, the vanishing crossing PEE (or Markov gap) implies a perfect Markov recovery from $\rho_{AB}$ to $\rho_{ABB'}$ or $\rho_{AA'B}$, where $A'$ and $B'$ are determined by the balance conditions.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{flatnonadjacent.png}
\caption{The non-adjacent intervals $A: \{(u_1, \phi_1), (u_2, \phi_2)\}$ and $B: \{(u_3, \phi_3), (u_4, \phi_4)\}$.}
\label{flatnonadjacent}
\end{figure}
\subsection{BPE and EWCS for non-adjacent cases}
We now turn to the non-adjacent case, where $A$ and $B$ are given by
\begin{align}
A : \{(u_1, \phi_1), (u_2, \phi_2)\} \,,\qquad B : \{(u_3, \phi_3), (u_4, \phi_4)\}\,.
\end{align}
Here we set $u_4>u_3>u_2>u_1$, which keeps the endpoints of $AB$ ordered along the null direction. The complement $A'B'$ is then partitioned by two points
\begin{align}
Q_i:(u_{q_i},\phi_{q_i})\,,\qquad i=1,2\,.
\end{align}
See Fig.\,\ref{flatnonadjacent}. The EWCS for this case was also computed in \cite{Basu:2021awn} and is given by
\begin{align}\label{ewcsnona}
E_{W}\,(A:B)=\frac{1}{4G_N}\bigg|\frac{X}{\sqrt{T}(1-T)}\bigg|\,,
\end{align}
where $T$ and $X$ are defined by
\begin{align}
T=\frac{u_{12}u_{34}}{u_{13}u_{24}}\,,\qquad X=T\left(\frac{\phi_{12}}{u_{12}}+\frac{\phi_{34}}{u_{34}}-\frac{\phi_{13}}{u_{13}}-\frac{\phi_{24}}{u_{24}}\right)\,.
\end{align}
Let us now calculate the $\mathrm{BPE}\,(A:B)$. Again we consider the covariant configuration, and hence we need four parameters to determine the two partition points. However, the balance conditions \eqref{bal2} give only two equations,
\begin{align}
\frac{\phi_{2q_1}}{u_{2q_1}} - \frac{\phi_{q_1 3}}{u_{q_1 3}} = \frac{\phi_{q_2 2}}{u_{q_2 2}} - \frac{\phi_{q_2 3}}{u_{q_2 3}}\,,\qquad \frac{\phi_{q_2 1}}{u_{q_2 1}} - \frac{\phi_{q_2 4}}{u_{q_2 4}} = \frac{\phi_{q_1 1}}{u_{q_1 1}} - \frac{\phi_{q_1 4}}{u_{q_1 4}}\,,
\end{align}
which are not enough to determine the two partition points. For simplicity we can set
\begin{align}
(u_1,\phi_1)=(0,0)\,,
\end{align}
due to the translation symmetries along $u$ and $\phi$. Solving the above equations we get $\phi_{q_1}$ and $\phi_{q_2}$ in terms of $\phi_i,u_i$, $u_{q_1}$ and $u_{q_2}$, where $i=1,2,3,4$. Then we are left with two undetermined parameters $u_{q_1}$ and $u_{q_2}$.
One may boldly expect that, as in the adjacent case, substituting the solutions for $\phi_{q_1}$ and $\phi_{q_2}$ into the PEE $s_{A'_1AA'_2}(A)$ yields a result exactly equal to the EWCS \eqref{ewcsnona} and independent of $u_{q_1}$ and $u_{q_2}$. However, in this case we obtain
\begin{align}\label{bpeflatna}
\text{BPE}\,(A:B)=\frac{c_M u_{q_1 q_2} \left(u_3 \left(u_4 u_{34} \phi _2+u_2 u_{23} \phi _4\right)-u_2 u_4 u_{24} \phi _3\right)}{12 u_4 u_{23} \left(u_2 u_3 \left(u_4-u_{q_1}\right)+\left(\left(u_2+u_3-u_4\right) u_{q_1}-u_2 u_3\right) u_{q_2}\right)}\,,
\end{align}
which indeed depends on $u_{q_1}$ and $u_{q_2}$. If we take the limit $\phi_3\to \phi_2$ first and then the limit $u_3\to u_2$, we find $(\phi_{q_1},u_{q_1})=(\phi_2,u_2)$, and the BPE \eqref{bpeflatna} reproduces the result for the adjacent case.
We then compute the difference between $\mathrm{BPE}\,(A:B)$ \eqref{bpeflatna} and $E_{W}(A:B)$ \eqref{ewcsnona} and find that, as long as one of the following two conditions is satisfied
\begin{align}\label{addreq1}
u_{q_1}= \frac{u_2 u_4}{(1-\sqrt{T}) u_2+\sqrt{T} u_4}\,,
\\\label{addreq2}
u_{q_2}= \frac{u_2 u_4}{(1+\sqrt{T}) u_2-\sqrt{T} u_4}\,,
\end{align}
then we will have
\begin{align}
\text{BPE}\,(A:B)=E_{W}\,(A:B)\,.
\end{align}
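This equality can be checked numerically. The following sketch evaluates \eqref{bpeflatna} and \eqref{ewcsnona} for sample endpoint values (chosen purely for illustration), with $u_{q_1}$ fixed by \eqref{addreq1} and the convention $u_{q_1 q_2}=u_{q_2}-u_{q_1}$, and confirms the agreement for several values of the remaining free parameter $u_{q_2}$:

```python
import math

# Endpoints: (u1, phi1) = (0, 0); sample values for the rest (illustrative)
u2, u3, u4 = 1.0, 2.0, 4.0
p2, p3, p4 = 1.0, 3.0, 4.0
cM = 1.0  # overall factor; c_M = 3/G_N, so 1/(4 G_N) = c_M/12

u12, u34, u13, u24, u23 = u2, u4 - u3, u3, u4 - u2, u3 - u2
p12, p34, p13, p24 = p2, p4 - p3, p3, p4 - p2

T = u12 * u34 / (u13 * u24)
X = T * (p12/u12 + p34/u34 - p13/u13 - p24/u24)
EW = cM / 12 * abs(X / (math.sqrt(T) * (1 - T)))        # Eq. (ewcsnona)

uq1 = u2 * u4 / ((1 - math.sqrt(T)) * u2 + math.sqrt(T) * u4)   # Eq. (addreq1)

def bpe(uq2):
    # Eq. (bpeflatna), with u_{q1 q2} = u_{q2} - u_{q1}
    num = cM * (uq2 - uq1) * (u3 * (u4*u34*p2 + u2*u23*p4) - u2*u4*u24*p3)
    den = 12 * u4 * u23 * (u2*u3*(u4 - uq1) + ((u2 + u3 - u4)*uq1 - u2*u3) * uq2)
    return num / den

# the BPE equals the EWCS for any value of the remaining free parameter u_{q2}
for uq2 in (-1.0, -3.0, -10.0):
    assert math.isclose(bpe(uq2), EW, rel_tol=1e-9)
```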
Note that the right-hand sides of \eqref{addreq1} and \eqref{addreq2} depend only on the $u$ coordinates of the four endpoints of $AB$, which makes them quite simple additional constraints. In summary, when the two balance conditions are supplemented by one additional simple requirement, \eqref{addreq1} or \eqref{addreq2}, the BPE equals the EWCS and is independent of the one remaining free parameter. This observation indicates that the BPE defined with only the two balance conditions is closely related to the EWCS: if they were unrelated, more than one extra requirement would be needed to equate them, since there are still two free parameters. It also shows that, for covariant configurations, where twice as many parameters are needed to determine the partition points of $A'B'$, the definition of the BPE requires further constraints alongside the balance conditions.
We may also regard the additional condition \eqref{addreq1} (or \eqref{addreq2}) as a balance requirement, a fact which we did not understand until recently. In the recent work \cite{Basu:2022nyl}, the authors studied the BPE for BMSFTs dual to topologically massive gravity in asymptotically flat spacetimes. In these configurations $c_L\neq 0$, and hence the entanglement entropies of single intervals contain a contribution from the logarithmic term in \eqref{EEflat}. It turns out that the logarithmic term must satisfy an independent balance requirement that coincides with \eqref{addreq1} (or \eqref{addreq2}). Thus, our claim that the BPE gives the EWCS for non-adjacent intervals in 3-dimensional flat holography is justified.
\subsection{Balanced crossing PEEs and tripartite entanglement}
It is interesting to interpret the crossing PEE, or the generalized Markov gap, from the gravity side. For adjacent intervals, we have explicitly shown that the balanced crossing PEEs can be regarded as the generalized Markov gap. The Markov gap was shown in \cite{Hayden:2021gno} to be lower bounded by the ratio of the AdS radius to Newton's constant, multiplied by the number of endpoints of the EWCS. This holds for the canonical purification, but it is unclear what happens for generic purifications. The universal nature of the balanced crossing PEE suggests that a similar bound should hold from the gravity side as well. Thus, for a generic purification, we propose the following\footnote{We have not explicitly found a similar statement for the non-adjacent case. This is because, the way we have defined them, the crossing PEEs do not reduce to the Markov gap. However, it would be interesting to see whether, for the non-adjacent case, some modified version of the crossing PEE respects a similar bound.}
\begin{align}
\sum \mathrm{crossing~ PEE}\big|_{\mathrm{balance}} ~\geq~ \frac{\log 2}{4 \,G_N} \,|\partial \Sigma_{AB}|\,. \label{gePEE}
\end{align}
Here, in the sum, one includes the contributions from all the crossing PEEs at the balance point, and $|\partial \Sigma_{AB}|$ counts the number of EWCS endpoints. From Eq.\,\eqref{crpee1}, we can see that the purification from path-integral optimization saturates the above bound. Non-vanishing crossing PEEs suggest that the states might have large tripartite entanglement \cite{Akers:2019gcv}. On the other hand, the crossing PEE vanishes for the BMSFT dual to Einstein gravity, which violates the above inequality \eqref{gePEE}. This could imply two different scenarios: either the BMSFT states have bipartite or GHZ-type entanglement (sometimes referred to as sums of triangle states (SOTS)) \cite{Zou:2020bly, Siva:2021cgo}, or the bound depends only on $c_L$ and not on $c_M$. In any case, Eq.\,\eqref{gePEE} needs more investigation for both AdS and non-AdS gravity as well as for generic CFTs, and we hope to address this in the future.
\section{Summary and outlook}\label{secdiscussion}
The primary objective of this paper is to demonstrate that the BPE is a proper measure of the total intrinsic correlations in a mixed state. First, we argued that the BPE is purification-independent by showing that, given a mixed state, the BPE is the same for different purifications, including the class we constructed from the Euclidean path integral, the canonical purification, and pure states with a gravity dual. Secondly, in all the configurations where the BPE can be evaluated, the reflected entropy for the canonical purification turns out to be a special case of the BPE. This indicates that the BPE is well defined for any purification and generalizes the reflected entropy. Finally, we found that the correspondence between the BPE and the EWCS goes beyond AdS/CFT: a detailed evaluation of the BPE in 3d flat holography coincides with the EWCS.
The purification independence of the BPE was not addressed in \cite{Wen:2021qgx} because of two special purifications that give a BPE and balanced crossing PEEs different from those of other purifications. The first is the pure state constructed using the so-called surface-state correspondence \cite{Miyaji:2015yva}, where the pure state is defined on the union of the boundary interval $AB$ and its minimal surface $\mathcal{E}_{AB}$. In this case the $\mathrm{BPE}\,(A:B)$ differs from the EWCS, and the balanced crossing PEE is $\mathcal{I}_{AB'}=\frac{c}{12} \log 2$ (see section 3.3 in \cite{Wen:2021qgx} for details). The second is the pure state obtained from the optimized path integral, which is claimed to be the minimal purification, where $S_{AA'}$ gives the EoP as well as the EWCS \cite{Caputa:2018xuf}. However, neither of these two cases is robust enough to invalidate the purification independence of the BPE. In particular, the evidence for the surface-state correspondence is far from sufficient; hence the pure states constructed in this context may not exist. Also, the appearance of negative mutual information in the optimized purification is subtle.
We also find that the minimized crossing PEE is a natural generalization of the Markov gap. Moreover, our crossing PEE construction covers more general cases and goes beyond the canonical purification. We decompose the BPE into two parts, the intrinsic PEE $\mathcal{I}_{AB}$ and the crossing PEEs. More interestingly, the crossing PEEs (or their sum) are shown to be minimal at the balance point. For the adjacent cases, the minimized (or balanced) crossing PEE is the generalized version of the Markov gap, and it is observed to be universal, determined by the central charge alone. The minimized crossing PEE may capture a universal aspect of the entanglement structure in quantum systems and may play an essential role in quantum information. One example we discussed is that, since the balanced crossing PEE can be expressed as a CMI, it characterizes how precisely the Markov recovery process can be conducted. Remarkably, in the BMSFT dual to $3d$ flat space in Einstein gravity, the balanced crossing PEE vanishes, suggesting the possibility of a perfect Markov recovery process. Furthermore, we interpret the crossing PEEs as a signature of tripartite entanglement.
\subsubsection*{Entanglement of purification revisited}
If the minimized crossing PEE is purification-independent, then the EoP may be explicitly calculated in the context of PEEs. Since the crossing PEE is minimized at the balance point, the minimal $S_{AA'}$ among all purifications and partitions should be
\begin{align}
S_{AA'}|_{\text{min}}=\mathcal{I}_{AB}+\left(\mathcal{I}_{AB'}+\mathcal{I}_{A'B}\right)|_{\mathrm{minimized}}+\mathcal{I}_{A'B'}|_{\mathrm{minimized}}\,.
\end{align}
In the adjacent case, the first term on the right-hand side is independent of the purification. The second term is evaluated at the balance point and is given by a universal constant. The third term is purification-dependent and can be turned off for some special purifications. Hence we conclude that
\begin{align}\label{eopshifted}
E_p(A:B)=S_{AA'}|_{\text{min}}=\frac{1}{2}I(A:B)+\frac{c}{3}\log 2\,,
\end{align}
which is greater than the EWCS by the constant $\frac{c}{6}\log 2$. This result apparently contradicts the claim of \cite{Caputa:2018xuf} that the EoP gives the EWCS. The contradiction stems from the exclusion of the negative PEE $\mathcal{I}_{A'B'}=-\frac{c}{6}\log 2$; after including this term, the conjecture holds perfectly. The negative PEE can be removed by a constant shift of the scalar field, so the negative PEE in \eqref{negativepee} can be shifted to zero. However, under this shift, $S_{AA'}$ also changes to \eqref{eopshifted}.
\subsubsection*{Future directions}
Though the concepts of the BPE and the crossing PEEs are inspired by our study of holographic systems, they can be defined for generic quantum systems.
Testing the purification independence for the BPE and universality of the balanced crossing PEE in more generic configurations will be some important future directions.
\begin{itemize}
\item One can consider more generic purifications in condensed matter systems. In this paper, we mainly focused on vacuum states, and it will be crucial to test the purification independence of the BPE in other, excited pure states. For example, it would be interesting to compute the BPE in 2d free CFTs explicitly and compare it with existing techniques \cite{Camargo:2021aiq} to see the numerical advantages.
\item Both the EWCS and reflected entropy can be defined in higher dimensions, and it will be interesting to test the correspondence between the EWCS and the BPE in higher dimensions. In some highly symmetric configurations, the ALC proposal can still be valid. One can also use the formula for the so-called extensive mutual information \cite{Wen:2019iyq, EMI} to compute the PEE in higher dimensional CFTs.
\item One workable case is the (warped) AdS$_3$/ warped CFT correspondence \cite{Detournay:2012pc,Compere:2013bya}, where the geometric picture for entanglement entropy was also worked out in \cite{Song:2016gtd,Wen:2018mev,Apolo:2020bld}. In this case, EWCS can be constructed in a similar way as in the flat case; hence it is also possible to test the correspondence between the EWCS and the BPE.
\item Our calculations of the BPE can also be generalized to finite temperature and to other gravity duals, like topologically massive gravity (TMG) with non-zero $c_L$. There, the crossing PEE is supposed to be non-zero and should depend on the topological term, as obtained in the context of the entanglement negativity \cite{Basu:2021awn}. The generalization of the EoP and EWCS from bipartite to multipartite states was explored in \cite{Umemoto:2018jpc, Bao:2018gck}; it will also be interesting to explore a similar generalization for the BPE.
\item It will be interesting to explore the dynamics of the BPE and, more generally, the crossing PEEs by inserting heavy operators, which can be understood as the shock-wave perturbation from the dual geometry \cite{Kudler-Flam:2020url, Boruch:2020wbe}.
\item Both the BPE and the entanglement negativity are measures of mixed state correlations, which in holographic CFTs are captured by the same dual, \emph{i.e.}, the EWCS \cite{Kusuki:2019zsp}. However, the entanglement negativity is computed directly via the density matrix, while a definition for the PEE or BPE based on the density matrix is still not clear. Hence a direct comparison between them is quite tricky. Nevertheless, exploring the relation between the BPE and entanglement negativity is an interesting avenue of research.
\item The entanglement negativity contour was previously examined in \cite{Kudler-Flam:2019nhr}, where a version similar to the ALC proposal for negativity was introduced. It will be possible to impose balance conditions on the negativity, which might also provide a version of BPE for the negativity.
\end{itemize}
So far, quantities like the reflected entropy, EWCS, and EoP in covariant configurations are rarely studied (see \cite{Wang:2019ued, KumarBasak:2021lwm} for examples). The BPE we have defined can be naturally extended to covariant configurations. Our calculation in 3-dimensional flat holography shows a perfect match between the BPE and the EWCS in totally covariant configurations. It will be very interesting to explore the covariant configurations in AdS/CFT. In the static configurations, where the subsystems and the complement $A'B'$ are confined to a time slice, the position of any partition point in $A'B'$ is determined by a single parameter, and the number of balance conditions equals the number of partition points. This helps us determine all the positions of the partition points using the balance requirements. However, for covariant configurations, one needs two parameters to determine the position of one partition point, while the number of balance conditions is the same as in the static case, which is not enough to determine the positions of all the partition points in $A'B'$. Similar to \cite{Basu:2022nyl}, we expect additional balance requirements to appear if we consider more generic CFTs with different left and right moving central charges. We hope to come back to this point in the near future.
\section*{Acknowledgements}
We wish to thank Pawel Caputa, Ling-Yan Hung, Jonah Kudler-Flam, Masamichi Miyaji, Huajia Wang for discussions and Pawel Caputa, Juan F. Pedraza, Tadashi Takayanagi for comments on the draft. We thank the anonymous referees of SciPost Physics for helpful suggestions. HC is partially supported by the International Max Planck Research School for Mathematical and Physical Aspects of Gravitation, Cosmology and Quantum Field Theory and by the Gravity, Quantum Fields and Information (GQFI) group at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute). The GQFI group is supported by the Alexander von Humboldt Foundation and the Federal Ministry for Education and Research through the Sofja Kovalevskaja Award. PN is supported by University Grants Commission (UGC), Government of India. QW and HZ are supported by the ``Zhishan'' Scholars Programs of Southeast University.
\section{Introduction}
Percolation is a fundamental discrete model in random spatial processes and statistical physics that exhibits a phase transition.
Bond percolation on $\mathbb{Z}^d$ has a phase transition at the
critical point $p_c \in (0, 1)$: the connected components are almost surely finite for
$p < p_c$, while for $p > p_c$ there is a unique infinite component.
Given a graph $G= (V(G), E(G))$ with edge weights $(t_e)_{e \in E(G)}$ and a path $\pi \subset E(G)$, define the passage time
$$
T(\pi) = \sum_{e \in \pi} t_e.
$$
Throughout the paper, we take $t_e = 0$ or $t_e=1$.
For vertices $x, y \in G = \mathbb{Z}^d$, the first passage time between them is given by
$$
T(x, y) = \inf \{ T(\pi) : \pi \text{ is a path from } x \text{ to } y \}.
$$
The passage time between two vertex sets $A, B \subset V(G)$ is defined as
\begin{equation}
T(A,B) = \inf \{ T(\pi) : \pi \text{ is a path connecting some vertex of } A \text{ with some vertex of } B \}.
\end{equation}
Let $a_{0,n} = T(0,n \mathbf{e_1})$ denote the \emph{point-to-point passage time}, where $\mathbf{e_1}$ is the first coordinate vector.
Let $b_{0,n} = T(0, H_n )$ denote the \emph{point-to-line passage time}, where $$H_n : = \{ x \in G : x \cdot \mathbf{e_1} \geq n \}.$$
A generalization of $a_{0,n}$ is $T(0,n \mathbf{u})$, the passage time from the origin $\{ 0 \}$ to the point of $G$ nearest to $n \mathbf{u}$, for a given unit vector $\mathbf{u}$.
We work with first passage percolation in which $\{ t(e) : e \in E(G)\}$ are i.i.d. Bernoulli random variables, with distribution given by
\begin{equation}
\label{eqn:pre1}
\mathbb{P}( t(e)=1) = F(1)
\end{equation}
and
\begin{equation}
\label{eqn:pre2}
\mathbb{P}( t(e)=0) = F(0).
\end{equation}
We regard edges with $t=0$ as open and edges with $t=1$ as closed, so that travel along open edges is free.
Critical first passage percolation occurs when $F(0)= p_c(G)$, the critical probability of bond percolation on $G$.
The following dichotomy is well known as one varies the distribution of $\{t_e\}$. For $F(0) <p_c$, both $\frac{a_{0,n}}{n}$ and $\frac{b_{0,n}}{n}$ converge almost surely to a strictly positive constant. For $F(0) > p_c$, the families of random variables $\{ a_{0,n} \}$ and $\{b_{0,n} \}$ are tight (see, e.g., \cite[Theorem 6.1]{Kesten86} and \cite{Zhangzhang}).
When $F(0) = p_c$, it is proved in \cite{Kesten86} for $\mathbb{Z}^2$ that for each unit vector $\mathbf{u}$,
$$
C_3 \log n \leq \mathbb{E} T(0, n\mathbf{u}) \leq C_4 \log n.
$$
\subsection{Main Result}
Our main result shows that a CLT holds on the slabs $\mathbb{S}_k = \mathbb{Z}^2 \times \{0, 1, \cdots, k\}$. We will prove the theorem for $b_{0,n}$, but the same proof also works for $a_{0,n}$ and for general $T(0,n\mathbf{u})$.
\begin{theorem}
\label{thm:1}
We write $n$ for the point $(n,0,0)$. Suppose that $F$ is defined by (\ref{eqn:pre1}) and (\ref{eqn:pre2}) with $F(0) = p_c(\mathbb{S}_k)$. Then there exist constants $ 0 < C_1 , C_2 < \infty$ such that
\begin{equation}
\label{eqn:1.7}
C_1 \log n \leq \text{Var} \; T(0,n) \leq C_2 \log n, \quad n \geq 2.
\end{equation}
Moreover,
\begin{equation}
\label{eqn:1.8}
\frac{b_{0,n} - \mathbb{E} b_{0,n} }{ \sqrt{ \text{Var} \; T(0,n) } } \overset{d}{\rightarrow} N(0,1)
\end{equation}
and for any $u \in S^1$
\begin{equation}
\label{eqn:1.9}
\frac{T(0,n\mathbf{u}) - \mathbb{E} T(0,n\mathbf{u})}{\sqrt{ \text{Var} \; T(0,n\mathbf{u})}} \overset{d}{\rightarrow} N(0,1)
\end{equation}
\end{theorem}
We prove Theorem \ref{thm:1} by representing $b_{0,n} - \mathbb{E} b_{0,n}$ as a sum of martingale differences, which allows us to apply a central limit theorem for martingales due to McLeish \cite{McLeish}.
We first give a brief summary of the ideas in the proof.
\subsection{Outline of Proof.}
Two main ingredients of our proof are an adaptation of the martingale central limit theorem in \cite{McLeish} and the Russo-Seymour-Welsh Theorem for Bernoulli percolation on slabs \cite{wu}. The analogous result on $\mathbb{Z}^2$ was proved by Kesten and Zhang \cite{kestenzhang} by adapting the martingale central limit theorem and the RSW Theorem on $\mathbb{Z}^2$, and considering the passage times within the annuli $A(n, 2n)$. We now summarize the argument of Kesten and Zhang for $\mathbb{Z}^2$.
Let $S(n)$ denote the box of a given radius $n$. Let $A(r_1,r_2)$ denote the annulus with inner radius $r_1$ and outer radius $r_2$. By the Russo-Seymour-Welsh Theorem, with probability uniformly bounded from below, there is a $p_c$-open circuit in $A(2^p, 2^{p+1})$. If it exists, let $\mathcal{C}_p$ denote the innermost (with respect to lexicographical ordering) such circuit in $A(2^p, 2^{p+1})$.
Note that any two vertices $v', v''$ on $\mathcal{C}_p$ are connected by a path that is part of $\mathcal{C}_p$ and has passage time equal to zero (every edge of $\mathcal{C}_p$ is open and hence has $t_e=0$, so it contributes nothing to the passage time). Therefore, for all vertices $v \in \mathcal{C}_p$, the values of $T(\mathbf{0},v)$ are the same.
This fact implies that we may decompose the passage time as
\begin{equation}
\label{eqn:7}
T(\mathbf{0}, \mathcal{C}_q) = \sum_{p=0}^q T(\mathcal{C}_{p-1}, \mathcal{C}_p),
\end{equation}
with the convention $\mathcal{C}_{-1} = \{\mathbf{0}\}$.
Since $b_{0,n}$ is well-approximated by $T(0,\mathcal{C}_q)$ for $q$ that satisfies $2^{q-1} <n < 2^q$, it suffices to prove a CLT for (\ref{eqn:7}).
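The decomposition (\ref{eqn:7}) also indicates the scale of the variance in (\ref{eqn:1.7}). Heuristically (this is an orientation for the reader, not part of the proof), the summands $T(\mathcal{C}_{p-1}, \mathcal{C}_p)$ are nearly independent, each with variance bounded above and below by positive constants, so
\begin{align*}
\text{Var} \; T(\mathbf{0}, \mathcal{C}_q) \approx \sum_{p=0}^{q} \text{Var} \; T(\mathcal{C}_{p-1}, \mathcal{C}_p) \asymp q \asymp \log n,
\end{align*}
consistent with the logarithmic upper and lower bounds in Theorem \ref{thm:1}.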
Given a circuit $C$ surrounding the origin and lying outside of $S(2^p)$, the event
$$
\{ \mathcal{C}_p =C \}
$$
only depends on
\begin{align*}
t(e) \text{ for } e \in (C \cup \text{int}(C)) \setminus S(2^p), \text{ where int}(C) \text{ is the interior of } C.
\end{align*}
The random variables $\{ (T(\mathcal{C}_{p-1}, \mathcal{C}_p), \mathcal{C}_p) \}_{p \geq 0}$ form a Markov chain: given $\mathcal{C}_{p-1}$, the circuit $\mathcal{C}_p$ can be determined without knowledge of the values $t(e)$ for any edges $e \in \text{int}(\mathcal{C}_{p-1})$, so the conditional distribution of $(T(\mathcal{C}_{p-1}, \mathcal{C}_p), \mathcal{C}_p)$ depends on the past only through $\mathcal{C}_{p-1}$.
Therefore, the proof for the CLT relies on the sum of martingale differences representation of $b_{0,n} - \mathbb{E} b_{0,n}$.
Define
\begin{align}
\label{eqn:1.21}
\mathcal{F}_p = \text{the } \sigma\text{-field generated by } \mathcal{C}_p \text{ and } \{ t_e : e \in \text{int}( \mathcal{C}_p) \}.
\end{align}
We therefore have,
\begin{equation}
\label{eqn:b0n}
b_{0,n} - \mathbb{E} b_{0,n} = \sum_{p=0}^q (\mathbb{E} [b_{0,n} | \mathcal{F}_p ] - \mathbb{E} [b_{0,n} | \mathcal{F}_{p-1}] ) .
\end{equation}
Then $\mathcal{G}_p := \mathbb{E} [b_{0,n} | \mathcal{F}_p] - \mathbb{E} [b_{0,n} | \mathcal{F}_{p-1} ] $ are martingale differences and are related to $T(\mathcal{C}_{p-1}, \mathcal{C}_p)$. The truncated versions of $\mathcal{G}_{p_1}$ and $\mathcal{G}_{p_2}$ are nearly independent for $| p_1 - p_2|$ large. This allows us to apply a central limit theorem for martingales (\cite{McLeish}) to obtain (\ref{eqn:1.8}).
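For completeness, let us record why the telescoping identity (\ref{eqn:b0n}) holds. Take $\mathcal{F}_{-1}$ to be the trivial $\sigma$-field, so that $\mathbb{E}[b_{0,n} | \mathcal{F}_{-1}] = \mathbb{E} b_{0,n}$. Summing the differences gives
\begin{align*}
\sum_{p=0}^q \left( \mathbb{E} [b_{0,n} | \mathcal{F}_p ] - \mathbb{E} [b_{0,n} | \mathcal{F}_{p-1}] \right) = \mathbb{E}[b_{0,n} | \mathcal{F}_q] - \mathbb{E} b_{0,n},
\end{align*}
and since $b_{0,n}$ is (up to the approximation by $T(\mathbf{0},\mathcal{C}_q)$ mentioned above) measurable with respect to $\mathcal{F}_q$, the first term on the right equals $b_{0,n}$.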
\begin{figure}
\centering
\includegraphics[width=3cm]{cpk.png}
\caption{$\overline{\mathcal{C}_p}$}
\label{fig:widthkcircuit}
\end{figure}
We now discuss necessary modifications to prove Theorem \ref{thm:1} for $\mathbb{S}_k$. Given $A \subset \mathbb{Z}^2$, denote by
\begin{align*}
\overline{A} = A \times \{ 0, \cdots, k\}.
\end{align*}
Applying the RSW Theorem proved in \cite{wu}, we see that with probability uniformly bounded from below, there is a $p_c$-open circuit in $\overline{A}(2^p, 2^{p+1})$. Still denote by $\mathcal{C}_p$ the innermost such circuit (given a fixed ordering of edges).
However, the equality (\ref{eqn:7}) would fail, since the geodesic from $0$ to $\mathcal{C}_q$ may not intersect the circuits $\{ \mathcal{C}_p \}_{p <q}$ (it will intersect $\{ \overline{\mathcal{C}}_p\}_{p < q}$, though). To deal with this, we modify the definition (\ref{eqn:1.21}) as
\begin{align*}
\mathcal{F}_p = \sigma \text{-field generated by } \overline{\mathcal{C}_p} \text{ and } \{t_e : e \in \text{int}(\overline{\mathcal{C}_p}) \}.
\end{align*}
Then we can still write $b_{0,n} - \mathbb{E} b_{0,n}$ as the martingale difference sum $b_{0,n} - \mathbb{E} b_{0,n} = \sum_{p \leq q} \triangle_p$, where $\triangle_p = \mathbb{E}[b_{0,n} | \mathcal{F}_p] - \mathbb{E}[b_{0,n} | \mathcal{F}_{p-1} ]$.
The increments $( \triangle_p )_{p \leq q}$ will differ from the passage times $( T(\mathcal{C}_{p-1}, \mathcal{C}_p) )_{p \leq q}$, but only by a bounded quantity (in fact bounded by a constant multiple of $k$, where $k$ is the width of the slab). Therefore they still satisfy the conditions of McLeish's CLT \cite{McLeish}, and an application of that CLT concludes Theorem \ref{thm:1}.
\section{Preliminary Results}
In this section we recall the RSW Theorem for critical Bernoulli percolation on $\mathbb{S}_k$, proven in \cite[Theorem 3.1]{wu}.
Let $\mathbb{P}_p$ be the product measure on the configuration space $\{0,1\}^E = \Omega$, such that $\mathbb{P}_p(e=1)=p$.
For $x < x'$, denote $[x, x'] = \{ x, x+1, \cdots, x' \}$. Let $R = \overline{[x, x'] \times [y, y']}$ be a rectangle in $\mathbb{S}_k$, that is, $R = [x, x'] \times [y, y'] \times \{0, \cdots, k\}$. Say $R$ is crossed horizontally (denoted $\mathcal{H}(R)$) if there is an open path from $\overline{\{x\} \times [y, y']}$ to $\overline{\{x'\} \times [y,y']}$ inside $R$.
For $m,n \geq 1$, and for $p \in [0,1]$, let us define
$$
f(m, n) = f_p (m,n) := \mathbb{P}_p [ \mathcal{H} (\overline{[0,m] \times [0,n]})].
$$
\begin{theorem}{(Box-crossing property)}
\label{thm:BCP}
Let $p= p_c(\mathbb{S}_k).$ For $\rho>0$, there exists a constant $c_{\rho} \in (0, 1)$, independent of $n$, such that for every $n \geq 1/ \rho$,
\begin{equation}
c_{\rho} \leq f(n, \lfloor \rho n \rfloor) \leq 1 - c_{\rho}.
\end{equation}
\end{theorem}
\begin{remark}
The constant $c_{\rho}$ depends on the thickness $k$ of the slab.
\end{remark}
The following are some consequences of the box-crossing property on the slab, as given in \cite[Corollary 3.2]{wu}.
\begin{corollary}
\label{cor:1}
For critical percolation on $\mathbb{S}_k$, we have
\begin{enumerate}
\item \label{item:1} (Existence with positive probability of circuits in the annulus $\overline{A_{n, 2n}}$.)
There exists $c>0$ such that for every $n \geq 1$,
\begin{equation}
\mathbb{P}_p [\text{there exists an open circuit in } \overline{A_{n, 2n}} \text{ surrounding } \bar{B_n}] \geq c.
\end{equation}
\item \label{item:2} (Existence of blocking surfaces with positive probability.)
There exists $c>0$ such that for every $n \geq 1,$
\begin{equation}
\mathbb{P}_p [\text{there exists an open path from } \bar{B_n} \text{ to } \partial \bar{B_{2n}}] \leq 1 - c.
\end{equation}
\end{enumerate}
\end{corollary}
\section{Proof of Theorem}
We consider the dyadic scales,
\begin{equation}
\label{eqn:2.1}
\{ 2^q \}_{q \in \mathbb{N}}
\end{equation}
Let $S(n) = [-n, n]^2 \times [k] $ be the square of size $2n$ with width $k$ centered at the origin, and $\partial S(n) = \partial [-n,n]^2 \times [k] $.
Let the annulus between scales $S(2^p)$ and $S(2^{p+1})$ be defined as
\begin{equation}
\label{eqn:annulusdefinition}
A(p) = S(2^{p+1}) \setminus S(2^p)
\end{equation}
We define $m(p)$ for $p \geq 0$ as
\begin{equation}
\label{eqn:defm}
m(p) = \inf \{ t \in \{ p, p+1, \cdots \} : A(t) \text{ contains an open circuit surrounding the origin } \}.
\end{equation}
Properties of $m(p)$ include:
\begin{remark}
$m(p) \geq p$, and it is possible that $m(p) = m(p') \geq p' >p$, which occurs when there is no open circuit surrounding the origin in any of the annuli $A(p), A(p+1), \cdots, A(p'-1)$.
\end{remark}
For $p \geq 0$, define the innermost open circuit as
\begin{equation}
\label{eqn:2.6}
\mathcal{C}_p = \text{ the innermost $p_c$-open circuit that surrounds the origin } \textbf{0} \text{ within } A(m(p)).
\end{equation}
We have the following fact.
\begin{fact}
\label{fact:2.7}
By definition, $p_1 \leq p_2$ implies $m(p_1) \leq m(p_2)$. Therefore, either
\begin{itemize}
\item $m(p_1) = m(p_2)$ and $\mathcal{C}_{p_1} = \mathcal{C}_{p_2}$ or,
\item $m(p_1) < m(p_2)$ and $\mathcal{C}_{p_1} \subset A(m(p_1)) \subset S(m(p_2)) \subset \text{int}(\mathcal{C}_{p_2})$
\end{itemize}
must hold.
\end{fact}
We introduce
$$
\overline{\mathcal{C}_p } =\mathcal{C}_p \times \{0, \cdots, k \},
$$
where $\mathcal{C}_p$ is identified with its projection onto $\mathbb{Z}^2$.
Following the modification of (\ref{eqn:1.21}) described above, we set
$$\mathcal{F}_p = \text{the } \sigma\text{-field generated by } \mathcal{C}_p \text{ and } \{ t_e : e \in \text{int}( \overline{\mathcal{C}_p}) \}.$$
When considering the martingale difference array, instead of studying
$$T(0, n) - \mathbb{E} T(0,n),$$
we study
\begin{align}
\label{eqn:martingale}
T(0, \mathcal{C}_{\ell}) - \mathbb{E} T(0, \mathcal{C}_{\ell}) := \sum_{p=1}^{\ell} \triangle_p.
\end{align}
where $\ell$ satisfies $2^{\ell-1} < n <2^{\ell}$ and where
\begin{align}
\triangle_p ( \omega) = \mathbb{E} [ T(0, \mathcal{C}_{\ell}) |\mathcal{F}_p] - \mathbb{E} [ T(0,\mathcal{C}_{\ell}) | \mathcal{F}_{p-1}].
\end{align}
We now summarize some basic facts used in the proof.
\begin{fact}
\label{fact:1}
If a path is a minimizing path between its endpoints, then any of its subpaths is a minimizer between its own extreme points.
\end{fact}
Fact \ref{fact:1} follows easily from the triangle inequality.
We have the following fact about $\overline{C}_p$,
\begin{fact}
\label{fact:3}
$$|T(0,x) - T(0, \mathcal{C}_p) | \leq k, \; \forall x \in \overline{\mathcal{C}_p}.$$
\end{fact}
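Fact \ref{fact:3} can be seen as follows. Given $x \in \overline{\mathcal{C}_p}$, let $x'$ be a point of $\mathcal{C}_p$ in the same vertical column as $x$ (such a point exists by the definition of $\overline{\mathcal{C}_p}$). The vertical path from $x$ to $x'$ uses at most $k$ edges, each of passage time at most $1$, so $|T(0,x) - T(0,x')| \leq k$. Moreover $T(0,x') = T(0, \mathcal{C}_p)$, since travel along the open circuit $\mathcal{C}_p$ is free and hence all points of $\mathcal{C}_p$ have the same passage time from the origin.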
\begin{figure}
\centering
\includegraphics[width=5.75cm]{bdk.png}
\caption{$d_1$ and $d_2$ are bounded by $k$}
\label{fig:proj}
\end{figure}
Let $\omega$ and $\omega'$ be independent samples from $\Omega$. In the proofs we will fix a scale $p \in \mathbb{N}$ and use $\omega'(S(2^p)^c)$ to denote the edge configuration of $\omega'$ outside $S(2^p)$ and use $\omega(S(2^p))$ to denote the configuration of $\omega$ inside of $S(2^p)$. Also let $\mathbb{E}$ and $\mathbb{E}'$ be the expectation with respect to $\omega$ and $\omega'$. This allows us to study how the martingale difference $\triangle_p$ depends on the configurations $\omega$ inside $\mathcal{C}_p$.
Let $\tau_{\overline{\mathcal{C}_p(\omega)}}(\omega)$ be the first intersection point with $\overline{\mathcal{C}_p}$ of the geodesic from $\mathbf{0}$ to $\mathcal{C}_{\ell}$.
More precisely, we take an arbitrary ordering on all edges, and then order all the paths in lexicographical order. If there is more than one geodesic from $\mathbf{0}$ to $\mathcal{C}_{\ell}$, choose the smallest path with respect to this ordering. Let $\{ \mathcal{U}_1, \mathcal{U}_2 , \cdots \} $ be the vertices of this smallest path (it has to be self-avoiding). We let $\tau_{\overline{\mathcal{C}_p(\omega)}}(\omega) = \mathcal{U}_m$, where $m = \inf \{ i : \mathcal{U}_{i} \in \overline{\mathcal{C}_p} \}.$
\begin{lemma}
\label{lem:1}
\begin{align}
\label{eqn:lem1final}
\triangle_p = T(0, \mathcal{C}_p) - T(0, \mathcal{C}_{p-1}) + \mathbb{E}' [T(\tau_{\overline{\mathcal{C}_p}}, \mathcal{C}_{\ell})] - \mathbb{E}' [T(\tau_{\overline{\mathcal{C}_{p-1}}}, \mathcal{C}_{\ell})] + R
\end{align}
where $|R| \leq 8k$ with probability $1$.
\end{lemma}
\begin{proof}
Let us fix configuration of edges $\omega$.
Any path $\pi$ that traverses from $0$ to $\overline{\mathcal{C}_{\ell}}$ must intersect $\overline{\mathcal{C}_{p-1}}$ and $\overline{\mathcal{C}_p}$.
Let $\pi$ be the geodesic from 0 to $\overline{\mathcal{C}_{\ell}}$ on $\mathbb{Z}^2 \times [k]$ and let $\pi_1$ be the part of $\pi$ from $0$ to its first intersection with $\overline{\mathcal{C}_p}$.
Let $\pi_2$ be the part of $\pi$ from its first intersection with $\overline{\mathcal{C}_p}$ to its first intersection with $\overline{\mathcal{C}_{\ell}}$.
We claim
\begin{align}
\label{eqn:A}
T(\pi_1) = T(0, \overline{\mathcal{C}_p})+ R_1,
\end{align}
where $|R_1| \leq 2k$ holds with probability $1$.
Obviously, we have $T(\pi_1) \geq T(0, \overline{\mathcal{C}_p})$. For the reverse direction, let $\pi_p$ be the geodesic from $0$ to $\overline{\mathcal{C}_p}$.
Starting from $0$, one can first travel to $\overline{\mathcal{C}_p}$ via $\pi_p$, then move to $\mathcal{C}_p$ at a cost of at most $k$, move freely along the circuit $\mathcal{C}_p$, and finally reach the endpoint $x \in \mathcal{C}_p$ that satisfies $|\tau_{\overline{\mathcal{C}_p}} - x | \leq k.$ Then Fact \ref{fact:3} implies
$$
|T(0,x) - T(\pi_1)| \leq k
$$
and similarly
$$
|T(0,x) - T(0, \overline{\mathcal{C}_p})| \leq k.
$$
By a similar argument, we have
\begin{align}
\label{eqn:pi2}
T(\pi_2) = T(\tau_{\overline{\mathcal{C}_p}}, \overline{\mathcal{C}_{\ell}}) + \triangle T_{p, \ell},
\end{align}
where $| \triangle T_{p, \ell}| \leq 2k $ holds with probability $1$.
Combining Equation (\ref{eqn:A}) and (\ref{eqn:pi2}), we obtain
\begin{align}
\label{eqn:f1}
\mathbb{E}[T(0, \mathcal{C}_{\ell}) | \mathcal{F}_p] = T(0, \overline{\mathcal{C}_p})+ \mathbb{E}'[ T( \tau_{\overline{\mathcal{C}_p}}, \mathcal{C}_{\ell})]+ R_3
\end{align}
where $|R_3| \leq 4k$.
Similarly,
\begin{align}
\label{eqn:f2}
\mathbb{E} [T(0,\mathcal{C}_{\ell}) | \mathcal{F}_{p-1}] = T(0, \overline{\mathcal{C}_{p-1}}) + \mathbb{E}'[T(\tau_{\overline{\mathcal{C}_{p-1}}}, \mathcal{C}_{\ell})]+R_4,
\end{align}
where $|R_4|\leq 4k$.
Combining the above two Equations (\ref{eqn:f1}) and (\ref{eqn:f2}) yields the conclusion.
\end{proof}
\begin{remark}
As in \cite{kestenzhang}, by a little extra work one can write
$\triangle_p = \overline{\triangle_p}+ R_p$, where $|R_p| \leq 8k$, such that a truncated version of the random variables $\overline{\triangle_p}$ are independent. We will not use this fact in the remaining proof.
\end{remark}
\begin{definition}
Let us define
\begin{equation}
\label{def:ell}
n(p, \omega, \omega') = m(m(p, \omega)+1, \omega').
\end{equation}
Using the definition of $m(p)$ in (\ref{eqn:defm}), $n(p, \omega, \omega')$ is the first dyadic scale after $m(p)$ whose annulus contains an open circuit (in the configuration $\omega'$).
\end{definition}
\begin{lemma}
\label{lem:2}
Let $n(p, \omega, \omega')$ be as defined as (\ref{def:ell}) above.
Then
\begin{equation}
\label{eqn:2.24}
\triangle_p(\omega) = T(\mathcal{C}_{p-1}(\omega), \mathcal{C}_{p}(\omega))(\omega)+\mathbb{E}' T(\mathcal{C}_p(\omega), \mathcal{C}_{n(p, \omega, \omega')}(\omega'))(\omega') - \mathbb{E}' T(\mathcal{C}_{p-1}(\omega), \mathcal{C}_{n(p, \omega, \omega')} (\omega'))(\omega') + R'
\end{equation}
where $|R'| \leq 8k$ with probability $1$.
\end{lemma}
\begin{proof}
The statement is similar to Lemma \ref{lem:1} but with different notation.
The scale $n(p, \omega, \omega')$ is found with the following procedure.
First, determine $m(p,\omega)$. Then by exploring $p_c$-open clusters, one can find the smallest $t \geq m(p, \omega) +1$ such that there is an open circuit surrounding $\mathbf{0}$ in $A(t)$ in configuration $\omega'$.
The value of $t$ is $n(p, \omega, \omega')$, and the innermost open circuit in $A(n(p, \omega, \omega'))$ surrounding the origin in configuration $\omega'$ is $\mathcal{C}_{n(p, \omega, \omega')}(\omega').$
We have that
\begin{align*}
n(p, \omega, \omega') \geq m(p, \omega) +1,
\end{align*}
which lets us see that
\begin{align*}
\mathcal{C}_p (\omega) = \mathcal{C}_{m(p, \omega)}(\omega) \subset A(m(p, \omega)) \subset \text{int}(\mathcal{C}_{n(p, \omega, \omega')}(\omega')).
\end{align*}
Then the same argument as for Lemma \ref{lem:1} leads to (\ref{eqn:2.24}).
\end{proof}
\begin{lemma}
\label{lem:3}
There are constants $C_5, \cdots, C_9 \in (0, \infty)$ such that for $p, q \geq 1$ we have the following:
\begin{equation}
\label{eqn:2.28}
\mathbb{P} [ m(p) - p \geq t ] \leq e^{-C_5 t } \; \forall t, p \geq 0
\end{equation}
\begin{equation}
\label{eqn:2.29}
\mathbb{P} [| \triangle_p | \geq x ] \leq C_6 e^{- c \sqrt{x} } \; \text{ for } x \text{ large enough}
\end{equation}
\begin{equation}
\label{eqn:2.30}
\mathbb{P} [ \max_{0 \leq p \leq q} | \triangle_p | \geq \epsilon q^{1/2} ] \leq 2 C_6 q e^{- c_1 \epsilon^{1/2} q^{1/4} }, \quad \forall \epsilon >0
\end{equation}
\begin{equation}
\label{eqn:2.31}
\mathbb{E} [ \max_{ 0 \leq p \leq q } \triangle_p^2 ] \leq C_7 q
\end{equation}
\begin{equation}
\label{eqn:2.32}
C_8 q \leq \sum_{p=0}^q \mathbb{E} \triangle_p^2 \leq C_9 q
\end{equation}
\end{lemma}
\begin{proof}
We know that $m(p) - p \geq t $ occurs if and only if
there is no open circuit surrounding the origin in any of the
annuli $A(p) , A(p+1), \cdots, A(p'-1)$, with $p' = p+t$.
By Corollary \ref{cor:1}(\ref{item:1}) (see \cite{smythewierman}, \cite{kesten82} or \cite{grimmett89} for the classical statements on $\mathbb{Z}^2$), there is a constant $C_5>0$ such that
\begin{equation}
\mathbb{P} [ N_j := \text{ there is no open circuit surrounding the origin in } A(j) ] \leq e^{-C_5}, \quad j \geq 0.
\end{equation}
The annuli $A(j)$ are disjoint, and therefore the events $N_j$ are independent for distinct $j$. So (\ref{eqn:2.28}) follows from this fact.
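Spelled out, the independence of the events $N_j$ gives, for $t \geq 0$,
\begin{align*}
\mathbb{P}[m(p) - p \geq t] = \mathbb{P} \Big[ \bigcap_{j=p}^{p+t-1} N_j \Big] = \prod_{j=p}^{p+t-1} \mathbb{P}[N_j] \leq e^{-C_5 t},
\end{align*}
which is exactly (\ref{eqn:2.28}).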
By (\ref{eqn:2.28}), we have the following for each fixed $\omega$,
\begin{equation}
\mathbb{P}' [n(p, \omega, \omega') \geq m(p, \omega)+1+t ] \leq e^{-C_5 t}.
\end{equation}
Now we want to obtain (\ref{eqn:2.29}).
By (\ref{eqn:2.24}) of Lemma \ref{lem:2},
\begin{equation}
\label{eqn:2.35}
|\triangle_p (\omega) | \leq T(\mathcal{C}_{p-1}(\omega), \mathcal{C}_p(\omega))(\omega) + \mathbb{E}' T(\mathcal{C}_{p-1}(\omega), \mathcal{C}_{n(p, \omega, \omega')}(\omega'))(\omega') + 8k.
\end{equation}
It is easy to see that, deterministically, $T(\mathcal{C}_{p-1}(\omega), \mathcal{C}_p(\omega)) \leq C 2^p$, so this term is finite.
Now, we estimate for fixed $\omega$ the tail probability
\begin{equation}
\label{eqn:2.36}
\mathbb{P}' [ T ( \mathcal{C}_{p-1}(\omega), \mathcal{C}_{n(p, \omega,\omega')}(\omega')) > y].
\end{equation}
For the case $p=0$ we shall interpret $S(2^{-1})$ to be the origin $\mathbf{0}$, and let $A(-1):= S(1).$
For a square $S$, let $\partial S$ denote its topological boundary and $\text{int}(S)$ its interior, with the conventions $\partial S(2^{-1})= \mathbf{0}$ and $\text{int}(S(2^{-1})) = \emptyset$.
Given a fixed $\omega$, we are given $m = m(p, \omega)$ and $\mathcal{C}_p (\omega) \subset A(m)$. For $r \geq n(p, \omega, \omega')+1$, any path on $\mathbb{S}_k$ from $\partial S(2^{p-1})$ to $\partial S(2^r)$ must intersect $\mathcal{C}_{p-1}(\omega)$ and $\mathcal{C}_{n(p, \omega, \omega')}(\omega').$
Therefore, Fact \ref{fact:3} implies
\begin{equation}
\label{eqn:2.37}
T( \mathcal{C}_{p-1} (\omega), \mathcal{C}_{n(p, \omega, \omega')} (\omega'))(\omega') \leq T( \partial S(2^{p-1}),\partial S(2^r))(\omega') + 4k.
\end{equation}
So it follows that for $t=0,1, \cdots,$
\begin{align}
\mathbb{P}' [T( \mathcal{C}_{p-1}(\omega), \mathcal{C}_{n(p, \omega, \omega')}(\omega'))(\omega') \geq y] & \leq \mathbb{P}' [ n(p, \omega, \omega') \geq m(p, \omega) +1+t] \nonumber \\
& \quad +\mathbb{P}'[ T(\partial S(2^{p-1}), \partial S(2^{m(p, \omega) +1 + t}))(\omega') \geq y -4k ] \nonumber \\
& \leq e^{-C_5 t} + \mathbb{P}' [ T(\partial S(2^{p-1}), \partial S(2^{m(p, \omega)+1+t}))(\omega') \geq y -4k ]. \label{eqn:2.38}
\end{align}
To estimate the RHS of the above, let us define
$$
\kappa( j, k , \omega') = \text{ minimal number of $p_c$-closed edges in any path from } \partial S(2^j) \text{ to } \partial S( 2^k) \text{ in } \omega'
$$
$$
\rho ( j, k , \omega' ) = \text{ maximal number of edge-disjoint closed dual circuits which surround } S(2^j) \text{ in } S(2^k) \setminus S(2^j) \text{ in } \omega'.
$$
We may see that the minimal number of closed edges $\kappa$ equals the maximal number of closed dual circuits $\rho$,
\begin{align}
\label{eqn:closededgesdualsurfaces}
\kappa( j, k, \omega') = \rho(j, k, \omega').
\end{align}
This is an example of the max-flow-min-cut theorem. For completeness we give a sketch of the proof in the Appendix.
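To see the easy direction of this equality, note that each of the $\rho(j,k,\omega')$ edge-disjoint closed dual circuits separates $\partial S(2^j)$ from $\partial S(2^k)$, so any path $\gamma$ from $\partial S(2^j)$ to $\partial S(2^k)$ must cross each of these circuits at some closed edge, and edge-disjoint circuits yield distinct closed edges of $\gamma$; hence $\kappa \geq \rho$. The reverse inequality is the min-cut direction, for which we refer to the sketch in the Appendix.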
We need to estimate the tail probability of $\rho$, and to do this we introduce events of $\Omega'$ given by
\begin{align}
\label{eqn:defG}
G(y) = G(y,j,k) = \{ \rho(j,k,\omega') \geq \lfloor y \rfloor \} = \{ &\text{there exist at least } \lfloor y \rfloor \text{ edge-disjoint closed dual} \nonumber \\
&\text{circuits surrounding } S(2^j) \text{ in } S(2^k) \setminus S(2^j) \}.
\end{align}
$\mathbb{P}(G(y))$ can be estimated using the BK inequality \cite{BergKesten}, stated as follows.
\begin{theorem} (BK Inequality)
For events $G_1, \cdots, G_r \subset \Omega'$ that depend on finitely many of the variables $J(e) := I [t(e,\omega') \text{ is open}]$,
\begin{align}
\label{eqn:Gs}
\mathbb{P}' [ G_1 \square G_2 \square \cdots \square G_r ] \leq \prod_{i=1}^r \mathbb{P}' [ G_i ],
\end{align}
where $ G_1 \square G_2 \square \cdots \square G_r $ is the event that $G_1, \cdots, G_r$ occur disjointly.
\end{theorem}
To study $\mathbb{P}(G(y))$, we apply the inequality with events given by
\begin{align*}
G_i = G ( 2 C_{10} ( k- j) + 1)
\end{align*}
which is a decreasing event; its characteristic function is a decreasing function of $J(e)$.
\cite{BergKesten} have shown Equation \ref{eqn:Gs} for this case.
By definition of $G$ in Equation (\ref{eqn:defG}),
\begin{align}
\label{eqn:rGs}
G ( r[ 2 C_{10} (k-j) + 1]) \subset G( 2 C_{10} (k-j) + 1) \square \cdots \square G(2 C_{10} (k-j) +1),
\end{align}
with $r$ events on the RHS above (disjoint occurrence).
Let us take
\begin{align}
r = \left\lfloor \frac{y}{4 C_{10} (k-j)+1} \right\rfloor .
\end{align}
Therefore, the following holds,
\begin{align}
\label{eqn:2.48}
\mathbb{P}' [ \nexists \text{ a path } \gamma: \partial S(2^j) \rightarrow \partial S(2^k) \text{ with } \leq r[ 2C_{10} (k-j)+1] \text{ closed edges }]
\end{align}
\begin{align*}
&\leq \mathbb{P}' [ \rho(j, k, \omega') \geq r[ 2C_{10} (k-j)+1] ] \\
&\leq \mathbb{P}' [G(r[ 2C_{10} (k-j)+1] )] \\
&\leq ( \mathbb{P}' [G( 2C_{10} (k-j)+1 ) ] )^r \text{ by Equations (\ref{eqn:rGs}) and (\ref{eqn:Gs}) } \\
& \leq 2 \cdot 2^{-C_{12} y/ (k-j)}.
\end{align*}
If the event on the LHS of Equation (\ref{eqn:2.48}) fails to occur, then there exists a path $\pi: \partial S(2^j) \rightarrow \partial S(2^k)$ in $\omega'$ with at most
\begin{align*}
s := \lfloor r [ 2 C_{10} (k-j) +1] \rfloor
\end{align*}
closed edges.
On that event, $T( \partial S(2^j), \partial S(2^k))$ is dominated by $s$, which by our choice of $r$ is bounded by $\frac{y}{2} + 1$.
We take $j = p-1, k= m(p, \omega)+1+t$ to reach the estimate for Equation (\ref{eqn:2.36}).
Combining Equation (\ref{eqn:2.38}) and (\ref{eqn:2.48}), for $t=0,1, \cdots$,
\begin{align}
\mathbb{P}' [ T(\mathcal{C}_{p-1}(\omega), \mathcal{C}_{n(p, \omega, \omega')} (\omega'))(\omega') \geq y ] \leq e^{-C_5 t} + 2 \cdot 2^{-C_{12} y/(m - p + t + 2)} + \mathbb{P}( s \geq y).
\end{align}
Taking $t = \lfloor \sqrt{y} \rfloor$, for constants $C_{15}, C_{16} \in (0, \infty)$ and for all $\omega \in \Omega, y \geq 0$,
\begin{align}
\label{eqn:2.52}
\mathbb{P}' [ T (\mathcal{C}_{p-1}(\omega), \mathcal{C}_{n (p, \omega, \omega')} (\omega'))(\omega') \geq y] \leq C_{15} \exp \left( -C_{16} \frac{y}{m-p+\sqrt{y}} \right).
\end{align}
Integrating (\ref{eqn:2.52}) over $y$ yields an upper bound on the second term on the RHS of Equation (\ref{eqn:2.35}):
\begin{align}
\label{eqn:2.53}
\mathbb{E}' T(\mathcal{C}_{p-1}(\omega), \mathcal{C}_{n(p, \omega, \omega')}(\omega'))(\omega') \leq C_{17} [ m(p, \omega) - p +1].
\end{align}
Therefore, by Equation (\ref{eqn:2.35}), for $t=0, 1, \cdots, \lfloor x/2C_{17} \rfloor$,
\begin{align}
\label{eqn:2.54}
\mathbb{P} [ | \triangle_p (\omega)| \geq x] \leq \mathbb{P} [m(p,\omega)- p \geq t]+ \mathbb{P}[ T(\mathcal{C}_{p-1}(\omega), \mathcal{C}_p (\omega))(\omega) \geq x/2, m(p, \omega)- p <t].
\end{align}
The second term on the RHS above is at most
\begin{align}
\mathbb{P} [T(\partial S(2^{p-1}), \partial S (2^{p+1}))(\omega) \geq x/2] \leq 2 \cdot 2^{-C_{12}x/(2(t+1))}.
\end{align}
Therefore, Equations (\ref{eqn:2.54}) and (\ref{eqn:2.28}) let us conclude, for $t \leq x/2C_{17}$,
\begin{align}
\mathbb{P} [ |\triangle_p(\omega)| \geq x] \leq e^{-C_5 t} + 2 \cdot 2^{-C_{12}x/(2(t+1))}.
\end{align}
Taking $t = \lfloor \sqrt{x} \rfloor$, Equation (\ref{eqn:2.29}) of Lemma \ref{lem:3} follows.
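To see the last step explicitly, substitute $t = \lfloor \sqrt{x} \rfloor$ into the two terms: the first becomes $e^{-C_5 \lfloor \sqrt{x} \rfloor} \leq e^{C_5} e^{-C_5 \sqrt{x}}$, while in the second the exponent satisfies
\begin{align*}
\frac{C_{12} \, x}{2(t+1)} \geq \frac{C_{12} \, x}{2(\sqrt{x}+1)} \geq \frac{C_{12}}{4} \sqrt{x} \quad \text{for } x \geq 1,
\end{align*}
so both terms are bounded by a constant multiple of $e^{-c\sqrt{x}}$ for a suitable $c>0$, which is the form of (\ref{eqn:2.29}).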
From (\ref{eqn:2.29}), (\ref{eqn:2.30}) and (\ref{eqn:2.31}) clearly follow, as well as the second inequality of (\ref{eqn:2.32}). We only need to show the first inequality of (\ref{eqn:2.32}).
By Equations (\ref{eqn:2.24}) and (\ref{eqn:2.53}),
\begin{align*}
\triangle_p (\omega) \geq T(\mathcal{C}_{p-1}(\omega), \mathcal{C}_p(\omega)) (\omega) - C_{17} [ m(p, \omega) - p +1] - 8k.
\end{align*}
Observe that a path which crosses $j$ edge-disjoint closed dual circuits must have a passage time of at least $j$. So for $p+2 \leq q$,
\begin{align}
\mathbb{E} \triangle_p^2 \geq \mathbb{P} [\triangle_p \geq 1] &\geq \mathbb{P} [ m(p, \omega)= p+1 \text{ and } T(\mathcal{C}_{p-1}(\omega), \mathcal{C}_p(\omega))(\omega) \geq 2C_{17}+1+8k] \nonumber \\
\geq \mathbb{P} [ \mathcal{C}_{p-1} (\omega) \subset A(p-1), \; &\nexists \text{ open circuit in } A(p), \text{ but } \exists \text{ at least } (2C_{17}+1+8k) \text{ edge-disjoint closed dual} \nonumber \\
&\text{circuits surrounding } S(2^{p-1}) \text{ in } A(p), \text{ and } \exists \text{ an open circuit in } A(p+1) ].
\end{align}
Let
$$
\mathcal{E}_1 := \{ \text{There exist open circuits surrounding } \mathbf{0} \text{ in } A(p-1) \text{ and } A(p+1) \},
$$
$$
\mathcal{E}_2 := \{ \text{ There does not exist an open circuit surrounding } \mathbf{0} \text{ in } A(p) \},
$$
and
$$
\mathcal{E}_3 := \{ \text{There exist at least } (2C_{17}+1+8k) \text{ edge-disjoint closed dual circuits surrounding } S(2^{p-1}) \text{ in } A(p) \}.
$$
By the independence of edges in $A(p-1), A(p), A(p+1)$ along with the Harris-FKG inequality,
\begin{align}
\label{eqn:2.57}
\mathbb{E} \triangle_p^2 \geq \mathbb{P} (\mathcal{E}_1) \cdot \mathbb{P} (\mathcal{E}_2) \cdot \mathbb{P} (\mathcal{E}_3).
\end{align}
The Russo-Seymour-Welsh Theorem (Cor.~\ref{cor:1}) implies that the probabilities on the RHS of Equation (\ref{eqn:2.57}) are bounded away from $0$, and (\ref{eqn:2.32}) follows.
\end{proof}
\begin{lemma}
\label{lemm:4}
The following holds as $q \rightarrow \infty$,
\begin{equation}
\label{lem:4}
\frac{T(\mathbf{0}, \mathcal{C}_{m(q)}) - \mathbb{E} T(\mathbf{0}, \mathcal{C}_{m(q)})}{[ \sum_{p=0}^q \mathbb{E} \triangle_{p,q}^2 ]^{1/2}} \rightarrow N(0,1) \text{ in distribution.}
\end{equation}
\end{lemma}
We prove Lemma \ref{lemm:4} by proving the three conditions of McLeish's Theorem [Theorem 2.3 \cite{McLeish}], recalled below.
\begin{theorem}
\label{thm:McLeish} (McLeish's Theorem)
Let $X_{n,i}$ be a martingale difference array satisfying the following:
\begin{itemize}
\item $ \max_{i \leq k_n} |X_{n,i} | $ is uniformly bounded in $\ell_2$-norm,
\item $\max_{i \leq k_n} |X_{n,i} | \rightarrow 0$ in probability,
\item $\sum_i X_{n,i}^2 \rightarrow 1$ in probability.
\end{itemize}
Then $S_n = \sum_{i=1}^{k_n} X_{n,i} \rightarrow N(0,1)$ in distribution.
\end{theorem}
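Though not part of the argument, the content of McLeish's theorem is easy to illustrate numerically. The sketch below is our own toy example, not drawn from the text: it uses the simplest martingale difference array, $X_{n,i} = e_i/\sqrt{n}$ with independent signs $e_i$, for which all three conditions hold trivially, and checks that the row sums look standard normal.

```python
import math
import random

def mcleish_demo(n=1000, trials=2000, seed=0):
    """Monte Carlo illustration of McLeish's CLT for the simplest
    martingale difference array X_{n,i} = e_i / sqrt(n), where the
    e_i are independent +/-1 signs.  All three conditions hold
    trivially here: max_i |X_{n,i}| = 1/sqrt(n) -> 0 (so it is also
    uniformly bounded in the l2 sense), and sum_i X_{n,i}^2 = 1 exactly."""
    rng = random.Random(seed)
    sums = []
    for _ in range(trials):
        s = sum(rng.choice((-1.0, 1.0)) for _ in range(n)) / math.sqrt(n)
        sums.append(s)
    mean = sum(sums) / trials
    var = sum((s - mean) ** 2 for s in sums) / trials
    # For N(0,1), about 95% of the mass lies within +/- 1.96.
    frac_within = sum(abs(s) <= 1.96 for s in sums) / trials
    return mean, var, frac_within
```

The empirical mean, variance and central mass come out close to $0$, $1$ and $0.95$, as the theorem predicts.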
\begin{proof} (Lemma \ref{lemm:4})
We may set
$$
X_{p,q} = \frac{ \triangle_{p,q}}{ [\sum_{p=0}^q \mathbb{E} \triangle_{p,q}^2 ]^{1/2}}
$$
which would let us express the left hand side of (\ref{lem:4}) as,
\begin{equation}
\label{eqn:64}
\frac{T(\mathbf{0}, \mathcal{C}_{m(q)}) - \mathbb{E} T(\mathbf{0}, \mathcal{C}_{m(q)} )}{ [\sum_{p=0}^q \mathbb{E} \triangle_{p,q}^2 ]^{1/2}} = \sum_{p=0}^q X_{p,q}
\end{equation}
We now apply Theorem \ref{thm:McLeish} to (\ref{eqn:64}).
By Lemma \ref{lem:3}, Eq. (\ref{eqn:2.32}), we have
$$
|X_{p,q}| \leq | \triangle_{p,q} | / [ C_8 q]^{1/2}.
$$
Conditions 1 and 2 of McLeish's Theorem follow directly from (\ref{eqn:2.30}) and (\ref{eqn:2.31}): (\ref{eqn:2.30}) gives a tail bound for $\max_{i \leq k_n } | X_{n,i} | $, and (\ref{eqn:2.31}) bounds $\max_{i \leq k_n} |X_{n,i} |^2$ in expectation.
Now it remains to prove the last condition,
\begin{equation}
\sum_{p=0}^q X_{p,q}^2 \rightarrow 1 \text{ in probability. }
\end{equation}
The condition is equivalent to
\begin{equation}
\label{eqn:2.60}
\frac{1}{q} \sum_{p=0}^q [ \triangle_{p,q}^2 - \mathbb{E} \triangle_{p,q}^2 ] \rightarrow 0 \text{ in probability.}
\end{equation}
This is a weak law of large numbers type statement, and we prove it by bounding the variance of (\ref{eqn:2.60}).
We denote by
$$
\tilde{\triangle}_{p,q} = \triangle_{p,q} I [m(p) \leq p + \frac{3}{C_5} \log q].
$$
Then,
\begin{align*}
\mathbb{P} \{ \triangle_{p,q} \neq \tilde{\triangle}_{p,q} \text{ for some } p \leq q \} &\leq \sum_{p=0}^q \mathbb{P} \{ m(p) - p \geq \frac{3}{C_5} \log q \} \\
&\leq (q+1) e^{-3 \log q} \text{ by (\ref{eqn:2.28})} \\
&\rightarrow 0.
\end{align*}
Applying (\ref{eqn:2.29}) and (\ref{eqn:2.28}), we have
\begin{align}
\sum_{p=0}^q [\mathbb{E} \triangle_{p,q}^2 - \mathbb{E} \tilde{\triangle}_{p,q}^2] = \sum_{p=0}^q \mathbb{E} [ \triangle_{p,q}^2 I [m(p) - p \geq \frac{3}{C_5} \log q ]] = o(q).
\end{align}
Now to show (\ref{eqn:2.60}) it suffices to prove the following,
\begin{equation}
\label{eqn:2.61}
\frac{1}{q} \sum_{p=0}^q [\tilde{\triangle}_{p,q}^2 - \mathbb{E} \tilde{\triangle}_{p,q}^2 ] \rightarrow 0 \text{ in probability.}
\end{equation}
We obtain (\ref{eqn:2.61}) by bounding the variance of the expression.
Note that $\tilde{\triangle}_{p,q}$ and $\tilde{\triangle}_{r,q}$ are independent when
$$
|p - r| \geq (\frac{3}{C_5}) \log q + 2
$$ holds. We also have the following uniform bound for $0 \leq p \leq q$,
\begin{align*}
Var(\tilde{\triangle}_{p,q}^2) &\leq \mathbb{E} [\tilde{\triangle}_{p,q}^4 ] \leq 4 \int_0^{\infty} x^3 \mathbb{P} ( |\triangle_{p,q} | \geq x ) dx \\
&\leq 4 \int_0^{\infty} x^3 e^{-c_3 \sqrt{x}}dx < \infty.
\end{align*}
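The first inequality above relies on the layer-cake identity $\mathbb{E}[X^4] = 4\int_0^\infty x^3\,\mathbb{P}(X \geq x)\,dx$ for a nonnegative variable $X$. As a quick numerical sanity check (our own illustration, with an exponential variable standing in for $|\tilde{\triangle}_{p,q}|$):

```python
import math

def fourth_moment_via_tail(rate=1.0, upper=60.0, steps=200000):
    """Check the layer-cake identity E[X^4] = 4 * int_0^inf x^3 P(X >= x) dx
    for X ~ Exponential(rate), where P(X >= x) = exp(-rate * x).
    The exact fourth moment is Gamma(5)/rate^4 = 24/rate^4."""
    dx = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx  # midpoint rule on [0, upper]
        total += x ** 3 * math.exp(-rate * x) * dx
    return 4.0 * total

exact_fourth_moment = 24.0  # Gamma(5) for rate = 1
```

With rate $1$ the numerical value agrees with $\mathbb{E}[X^4] = \Gamma(5) = 24$ to high precision.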
This shows that the variances of the $\tilde{\triangle}_{p,q}^2$ are bounded uniformly in $p$ and $q$; it remains to count the correlated pairs.
This yields the following,
\begin{align*}
Var\Big(\sum_{p=0}^q [\tilde{\triangle}_{p,q}^2 - \mathbb{E} \tilde{\triangle}_{p,q}^2]\Big) &\leq 2 \sum_{p=0}^q \sum_{p \leq r \leq p+(3/C_5)\log q +2} [Var (\tilde{\triangle}_{p,q}^2) Var (\tilde{\triangle}_{r,q}^2) ]^{1/2}\\
&= O( q \cdot \log q),
\end{align*}
Since $O(q \log q) = o(q^2)$, Chebyshev's inequality shows that (\ref{eqn:2.61}) holds. Therefore the lemma follows from McLeish's Theorem (Theorem \ref{thm:McLeish}).
\end{proof}
\subsection{Main results}
Now, we can prove the main results.
We now allow $n$ to be any real number, dropping our previous requirement that $n$ be a power of $2$. Given
\begin{equation}
\label{eqn:2.62}
2^{q-1} < n \leq 2^q,
\end{equation} define
\begin{equation}
\label{eqn:2.63}
\gamma_n = [\sum_{p=0}^q \mathbb{E} \triangle_{p,q}^2 ]^{1/2}.
\end{equation}
We must show that Equations (\ref{eqn:1.7}) and (\ref{eqn:1.8}) hold.
We define the half slab as follows,
\begin{definition}
The half slab $H_n$ is given by
$$
\{ (x,y,z) \in \mathbb{S}_k, x \geq n, z \in \{0,\cdots,k\} \}
$$
for some $n \geq 0$.
\end{definition}
\emph{Proof} of (\ref{eqn:1.7}), (\ref{eqn:1.8}).
From (\ref{eqn:2.32}) we see that $\gamma_n$, defined in (\ref{eqn:2.63}) above, satisfies (\ref{eqn:1.7}).
We note that $\mathcal{C}_{m(q)}$ surrounds the origin and lies outside of $S(2^q)$, and therefore outside $S(n)$ whenever (\ref{eqn:2.62}) is satisfied. So $\mathcal{C}_{m(q)}$ must contain points in the half slab $H_n$ and therefore,
\begin{equation}
\label{eqn:2.64}
0 \leq b_{0,n} \leq T(\mathbf{0}, \mathcal{C}_{m(q)}),
\end{equation}
since the passage time from the origin to a point $v$ is the same for all $v \in \mathcal{C}_{m(q)}$.
If for some $k$ the following holds,
\begin{equation}
\label{eqn:2.65}
\mathcal{C}_{m(q-k)} \subset S(2^{q-1}) \subset S(n),
\end{equation}
any path from the origin to $H_n$ must intersect $\mathcal{C}_{m(q-k)}$ and the following holds,
\begin{equation}
b_{0,n} \geq T( \mathbf{0}, \mathcal{C}_{m(q-k)}) = T( \mathbf{0}, \mathcal{C}_{m(q)}) - T(\mathcal{C}_{m(q-k)}, \mathcal{C}_{m(q)}).
\end{equation}
When $m(q) \leq q + t$ then the following holds,
\begin{equation}
\label{eqn:2.67}
T( \mathcal{C}_{m(q-k)}, \mathcal{C}_{m(q)}) \leq T(\partial S(2^{q-k}), \partial S(2^{q+t+1})),
\end{equation}
as in (\ref{eqn:2.37}).
Equations (\ref{eqn:2.64})-(\ref{eqn:2.67}) show that when (\ref{eqn:2.62}) holds, then for all $x \geq 0$ and $k \leq q, t \geq 0$,
\begin{align}
\mathbb{P} [| b_{0,n} - T( \mathbf{0}, \mathcal{C}_{m(q)}) | \geq x]& \nonumber \\
&\leq \mathbb{P} [ m(q-k) \geq q-1 ] +\mathbb{P} [ m(q) \geq q +t] + \mathbb{P} [ T(\partial S(2^{q-k}), \partial S(2^{q+t+1})) \geq x ] \nonumber \\
& \leq e^{-C_5(k-1)} + e^{-C_5 t}+2 \cdot 2^{-C_{12} x / (k+t+1)}. \label{eqn:2.68}
\end{align}
Take $k=t= \lfloor \sqrt{x} \rfloor $, which satisfies $t \leq q$ for $x \leq q^2$. Therefore, for some $C_{21}< \infty$ and all $x \leq q^2$, the following holds,
\begin{equation}
\label{eqn:2.69}
\mathbb{P} [ |b_{0,n} - T( \mathbf{0}, \mathcal{C}_{m(q)}) | \geq x ] \leq C_{21} e^{- c_0 \sqrt{x}}.
\end{equation}
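The choice $k=t=\lfloor \sqrt{x} \rfloor$ is the standard balancing of the two competing error terms; schematically, absorbing constants,

```latex
e^{-C_5 t} + 2 \cdot 2^{-C_{12} x/(k+t+1)} \;\asymp\; e^{-c_1 t} + e^{-c_2 x/t}.
```

The first term decreases in $t$ while the second increases; they are of the same order when $t \asymp \sqrt{x}$, and then both are of size $e^{-c\sqrt{x}}$, which is the stretched-exponential rate appearing in (\ref{eqn:2.69}).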
This holds even for $x \geq q^2$: from (\ref{eqn:2.64}), by (\ref{eqn:2.28}) and (\ref{eqn:2.50}),
\begin{align*}
\mathbb{P} [ | b_{0,n} - T( \mathbf{0}, \mathcal{C}_{m(q)} ) | \geq x ] \leq& \mathbb{P} [ T( \mathbf{0}, \mathcal{C}_{m(q)}) \geq x ] \\
\leq& \mathbb{P} [ m(q) \geq q+t ] + \mathbb{P} [ T(\mathbf{0}, \partial S(2^{q+t})) \geq x ] \\
\leq& e^{-C_5 t} + 2 \cdot 2^{-C_{12} x / (q+t+1)}.
\end{align*}
From (\ref{eqn:2.69}), for $q$ chosen such that (\ref{eqn:2.62}) holds, and as $n \rightarrow \infty$,
\begin{equation}
\frac{b_{0,n} - T( \mathbf{0}, \mathcal{C}_{m(q)}) }{\gamma_n} \rightarrow 0 \text{ in probability.}
\end{equation}
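The passage from this tail bound to the normal limit for $b_{0,n}$ is an instance of Slutsky's theorem; writing out the decomposition,

```latex
\frac{b_{0,n} - \mathbb{E} T(\mathbf{0}, \mathcal{C}_{m(q)})}{\gamma_n}
= \frac{b_{0,n} - T(\mathbf{0}, \mathcal{C}_{m(q)})}{\gamma_n}
+ \frac{T(\mathbf{0}, \mathcal{C}_{m(q)}) - \mathbb{E} T(\mathbf{0}, \mathcal{C}_{m(q)})}{\gamma_n},
```

the first summand tends to $0$ in probability, since $b_{0,n} - T(\mathbf{0}, \mathcal{C}_{m(q)})$ is tight by (\ref{eqn:2.69}) while $\gamma_n^2 \asymp q \rightarrow \infty$ by (\ref{eqn:2.32}), and the second tends to $N(0,1)$ by Lemma \ref{lemm:4}.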
Because the following holds,
$$
\gamma_n = \gamma_{2^q} = [ \sum_{p=0}^q \mathbb{E} \triangle_{p,q}^2 ]^{1/2},
$$
by (\ref{eqn:2.62}), and by Lemma \ref{lemm:4}, we may conclude the following,
$$
\frac{ b_{0,n} - \mathbb{E} T( \mathbf{0}, \mathcal{C}_{m(q)}) }{\gamma_n} \rightarrow N(0,1) \text{ in distribution. }
$$
Finally, to prove (\ref{eqn:1.8}), it is clear from (\ref{eqn:2.69}) that
$$
\mathbb{E} b_{0,n} - \mathbb{E} T(\mathbf{0}, \mathcal{C}_{m(q)}) \text{ is bounded.}
$$
Let us define the following variables,
\begin{equation}
s_n = T( \mathbf{0}, \partial S(n)).
\end{equation}
Any path from $\mathbf{0}$ to $H_n$ must intersect $\partial S(n)$, so that
\begin{equation}
s_n \leq b_{0,n}.
\end{equation}
We also have the following whenever (\ref{eqn:2.65}) holds,
\begin{equation}
\label{eqn:2.73}
T(\mathbf{0}, \partial S(2^{q-k})) \leq s_n.
\end{equation}
So with the same proof technique as in the last proof,
\begin{equation}
\label{eqn:2.74}
\mathbb{P} [ |s_n - T(\mathbf{0}, \mathcal{C}_{m(q)})| \geq x ] \leq C_{21} e^{-c_0 \sqrt{x} }
\end{equation}
and that
\begin{equation}
\frac{s_n - \mathbb{E} s_n}{\gamma_n} \rightarrow N(0,1)
\end{equation}
in distribution.
From (\ref{eqn:2.74}) and (\ref{eqn:2.69}),
\begin{equation}
\mathbb{E} s_n = \mathbb{E} b_{0,n} + O(1),
\end{equation}
and therefore we conclude (\ref{eqn:1.8}).
Now we prove the main result,
$$
\frac{T(\mathbf{0}, nu) - \mathbb{E} T(\mathbf{0},nu)}{\sqrt{2} \gamma_n} \rightarrow N(0,1)
$$
in distribution where $N(0,1)$ is a standard normal variable with mean $0$ and variance $1$.
From (\ref{eqn:2.69}) and (\ref{eqn:2.74}),
\begin{equation}
\mathbb{E} b_{0,n} - \mathbb{E} s_n = O(1).
\end{equation}
Similarly, one can use (\ref{eqn:2.83}) below to show that
\begin{equation}
\mathbb{E}T(\mathbf{0}, nu)- 2\mathbb{E}c_n = O(1)
\end{equation}
for every unit vector $u$.
Let $u = (u_1, u_2)$ be a fixed unit vector. Without loss of generality, assume $0 \leq u_2 \leq u_1 \leq 1$; then $u_1 \geq 2^{-1/2}.$
Choose the scale $r$ so that
\begin{equation}
2^{r-1} < \frac{1}{2} n u_1 \leq 2^r
\end{equation}
which combined with (\ref{eqn:2.62}) and the fact that $u_1 \geq 2^{-1/2}$ gives us that
\begin{equation}
\label{eqn:2.79}
q-3 \leq r \leq q.
\end{equation}
Consider the two squares denoted as
$$
S' = S(2^{r-1})
$$
and
$$
S'' = nu + S(2^{r-1}).
$$
The squares $S'$ and $S''$ are disjoint, with $\mathbf{0} \in S'$ and $nu \in S''$.
Therefore, a path from the origin to $nu$ must contain the piece from $\mathbf{0}$ to its first intersection with $\partial S'$ and the piece from its last intersection with $\partial S''$ to $nu$.
So the following must hold,
\begin{equation}
\label{eqn:2.80}
T(\mathbf{0}, nu) \geq T(\mathbf{0}, \partial S') + T(nu, \partial S'').
\end{equation}
Now we wish to obtain an estimate in the other direction, and we consider the annuli $A(p)$ for $p \geq q+2$.
We have for each $p$,
\begin{equation}
\label{eqn:2.81}
S' \cup S'' \subset S(2^p)
\end{equation}
since $n \leq 2^q$ and $|u| =1$.
Recall the definition of $m(q+2)$, given by $\inf\,[p \geq q+2: \exists \text{ dual blocking surface surrounding the origin in } A(p)]$.
Let $\mathcal{C} := \mathcal{C}_{m(q+2)}.$
By (\ref{eqn:2.81}), $\mathcal{C}$ must surround both $S', S''$ and therefore also must surround $\mathbf{0}$ and $nu$. Now, we connect $\mathbf{0}$ and $nu$ to $\mathcal{C},$ along an arc that lies on $\mathcal{C}$.
The following holds,
$$
\partial S (2^{m(q+2)+1}) \subset \text{int}(nu + S(2^{m(q+2)+2}))
$$
and leads to the following,
\begin{align*}
T( \mathbf{0}, nu) &\leq T(\mathbf{0}, \mathcal{C}) + T(nu, \mathcal{C}) \\
&\leq T(\mathbf{0}, \partial S(2^{m(q+2)+1})) + T(nu, nu + \partial S(2^{m(q+2)+2})).
\end{align*}
By (\ref{eqn:2.80}) this gives,
\begin{align*}
\mathbb{P} [ | T(\mathbf{0},nu) - T(\mathbf{0}, \partial S') - T(nu, \partial S'') | \geq 2x] &\leq \mathbb{P} [ | T(\mathbf{0}, \partial S') - T(\mathbf{0}, \partial S(2^{m(q+2)+1})) | \geq x ] \\
&\quad + \mathbb{P} [ | T(nu, \partial S '') - T(nu,nu+ \partial S ( 2^{m(q+2)+2})) | \geq x ] \\
&= \mathbb{P} [ |T(\mathbf{0}, \partial S(2^{r-1})) - T(\mathbf{0}, \partial S(2^{m(q+2)+1})) | \geq x ] \\
&\quad + \mathbb{P} [ | T(\mathbf{0}, \partial S(2^{r-1})) - T(\mathbf{0}, \partial S(2^{m(q+2)+2})) | \geq x] \\
&\leq 2 \mathbb{P} [m(q+2) \geq q+2+t] + 2 \mathbb{P} [m(q-k) \geq q-4] \\
&\quad + \mathbb{P} [T(\partial S(2^{q-k}), \partial S(2^{q+3+t})) \geq x] \\
&\quad + \mathbb{P} [T(\partial S(2^{q-k}), \partial S(2^{q+4+t} )) \geq x] \\
&\leq 2e^{-C_5 t} + 2 e^{-C_5(k-4)} + 4 \cdot 2^{-C_{12} x /(k+t+4)}.
\end{align*}
The above follows from (\ref{eqn:2.28}), (\ref{eqn:2.50}), (\ref{eqn:2.79}) and translation invariance.
Take $t = k = \lfloor \sqrt{x} \rfloor$, and this yields,
\begin{equation}
\label{eqn:2.83}
\mathbb{P} [ | T(\mathbf{0}, nu) - T(\mathbf{0}, \partial S') - T(nu, \partial S'') | \geq 2x ] \leq C_{22} e^{-c_0 \sqrt{x}}.
\end{equation}
Since $S'$ and $S''$ are disjoint, $T(\mathbf{0}, \partial S')$ and $T(nu, \partial S'')$ are independent, both with the distribution of $s_{2^{r-1}} = T(\mathbf{0}, \partial S(2^{r-1}))$.
It therefore follows that,
\begin{equation}
\label{eqn:2.84}
\frac{1}{\sqrt{q}} [T(\mathbf{0}, nu) - T(\mathbf{0}, \partial S') - T(nu, \partial S'')] \rightarrow 0 \text{ in probability.}
\end{equation}
Combined with (\ref{eqn:2.84}), this lets us conclude,
\begin{equation}
\label{eqn:2.75}
\frac{T(\mathbf{0}, nu) - 2 \mathbb{E} s_{2^{r-1}}}{\sqrt{2} \gamma_{2^{r-1}} } \rightarrow N(0,1) \text{ in distribution.}
\end{equation}
Now we must show the following:
\begin{equation}
\label{eqn:2.86}
\frac{\gamma_{2^{r-1}}}{\gamma_n} \rightarrow 1.
\end{equation}
We must additionally show the following:
\begin{equation}
\label{eqn:2.87}
\mathbb{E} T(\mathbf{0}, nu) - 2 \mathbb{E} c_{2^{r-1}} = O(1).
\end{equation}
We state the following fact,
\begin{fact}
\label{fact:last}
For some $k$ fixed, if the following holds,
\begin{align*}
2^{q-k} < \tilde{n} \leq n \leq 2^q,
\end{align*}
then
\begin{align*}
s_{2^{q-k}} \leq s_{\tilde{n}} \leq s_n \leq s_{2^q}.
\end{align*}
\end{fact}
It is clear that (\ref{eqn:2.86}) follows from Fact \ref{fact:last}.
Therefore,
$$
\frac{1}{\sqrt{q}} [ c_{2^q} - c_{2^{q-k}}] \rightarrow_p 0 \text{ as } q \rightarrow \infty
$$
from a similar argument as for (\ref{eqn:2.68}) and (\ref{eqn:2.69}).
Combined with (\ref{eqn:2.75}), for $n=2^q$ and $n=2^{q-k}$, this lets us conclude the error term satisfies,
$$
\frac{1}{\sqrt{q}} [ \mathbb{E} c_{2^q} - \mathbb{E} c_{2^{q-k}}] \rightarrow 0 \text{ and } \frac{ \gamma_n}{\gamma_{\tilde{n}}} \rightarrow 1.
$$
This is a special case of (\ref{eqn:2.86}) because of (\ref{eqn:2.79}), and (\ref{eqn:2.87}) follows immediately from (\ref{eqn:2.83}).
\section{Appendix}
\begin{proof} (Proof of (\ref{eqn:closededgesdualsurfaces}))
It is clear that $\kappa \leq \rho$: each of the closed dual surfaces must give at least one closed edge.
For the other direction, consider the open cluster that contains the closed dual circuit. If it reaches the outer cluster, then $\rho \leq \kappa$ holds trivially, since $\kappa = 0$ and it is possible to go from $2^j$ to $2^k$ crossing $0$ closed edges.
If not, the open cluster must terminate at some endpoint within a fixed radius, and it is possible to find a closed dual surface which surrounds the closed edges.
\end{proof}
\newpage
Poincar\'e once said \emph{\textquotedblleft The scientist does not
study nature because it is useful to do so. He studies it because
he takes pleasure in it; and he takes pleasure in it because it is
beautiful. If nature were not beautiful, it would not be worth knowing,
and life would not be worth living\dots{} I mean the intimate beauty
which comes from the harmonious order of its parts and which a pure
intelligence can grasp\textquotedblright{}} (as quoted in Chapter
4 of \cite{chandrasekhar1987truth}). It is the search for this harmony
in nature that brought about different theories of light. If we look back
at Newton's corpuscular theory of light, it would not be difficult
to guess that the harmony of nature revealed through his three laws of
motion and the law of gravity (all four of which are obeyed by particles),
which together can explain almost every phenomena known at that time
might have forced him to consider light also as corpuscular. Later
on, this search for harmony led Maxwell to discover the intimate relation
between the earlier known laws of electricity, magnetism and light
(optics). It may be noted that Maxwell's main contribution in the
famous Maxwell's equation was to modify Ampere's law by introducing
the idea of displacement current and thus to introduce a symmetry
among the laws involving electric field and magnetic field \cite{maxwell1865dynamical}\footnote{Interested readers may freely read Maxwell's original paper at http://www.jstor.org/stable/pdf/108892.pdf}.
In fact, around 1860, Maxwell summed up all the laws of electricity
and magnetism in the form of 4 equations -{}- which are now known
as Maxwell\textquoteright s equations. He showed that, in free space,
electric field $\overrightarrow{E}(z,t)=\hat{x}E_{0}\cos\left(kz-\omega t\right)$
satisfies the 4 equations with the corresponding magnetic field given
by $\overrightarrow{H}(z,t)=\hat{y}H_{0}\cos\left(kz-\omega t\right);$
$H_{0}=\sqrt{\frac{\mu_{0}}{\epsilon_{0}}}E_{0},$ where $\hat{x}$
and $\hat{y}$ represent unit vector in $X$ and $Y$ directions,
respectively and $\epsilon_{0}=8.854\times10^{-12}\,{\rm C^{2}N^{-1}m^{-2}}$
is the dielectric permittivity and $\mu_{0}=4\pi\times10^{-7}$${\rm Ns^{2}C^{-2}}$
is the magnetic permeability of the free space (classical vacuum).
The above equations describe propagating electromagnetic waves. Thus
from the laws of electricity and magnetism, Maxwell predicted the
existence of electromagnetic waves, and by substituting the above
solutions in Maxwell\textquoteright s equations he showed that the
velocity (in free space) would be given by $c=\frac{\omega}{k}=\frac{1}{\sqrt{\epsilon_{0}\mu_{0}}}\thickapprox3\times10^{8}$
m/s. Thus, Maxwell not only predicted the existence of electromagnetic
waves, he also predicted that the speed of the electromagnetic waves
in air should be about $3\times10^{8}$ m/s. He found this value
to be very nearly equal to the measured value of the velocity of light
(in air) known in that time. In fact, in 1849, Fizeau measured the
speed of light (in air) as $3.14858\times10^{8}$ m/s. The sole fact
that the two values were very close to each other led Maxwell to propound
(around 1865) his famous electromagnetic theory of light. Here, we
may note that observing a great symmetry (the fact that velocity of
electromagnetic wave and that of light are nearly the same) present
in nature, Maxwell conjectured that light is an electromagnetic wave.
In making this powerful conjecture without any available experimental
evidence, Maxwell actually showed his confidence on the fact that
nature is beautiful and symmetric.
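Maxwell's numerical coincidence is easy to reproduce. The following sketch is our own illustration (the constants are the values quoted above): it computes $c = 1/\sqrt{\epsilon_0 \mu_0}$ and compares it with Fizeau's 1849 measurement.

```python
import math

# Free-space constants as quoted in the text.
EPSILON_0 = 8.854e-12        # dielectric permittivity, C^2 N^-1 m^-2
MU_0 = 4.0 * math.pi * 1e-7  # magnetic permeability, N s^2 C^-2

def em_wave_speed():
    """Maxwell's prediction: c = 1 / sqrt(epsilon_0 * mu_0)."""
    return 1.0 / math.sqrt(EPSILON_0 * MU_0)

FIZEAU_C = 3.14858e8  # Fizeau's 1849 measurement in air, m/s

c = em_wave_speed()                          # about 2.998e8 m/s
relative_gap = abs(c - FIZEAU_C) / FIZEAU_C  # about 5 percent
```

The two numbers agree to within about five percent, which was close enough for Maxwell to identify light with his electromagnetic waves.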
The confidence on the beauty of nature shown by Maxwell in particular
and scientists in general is nicely reflected in a conversation between
Einstein and Heisenberg, which was recorded by Heisenberg as \cite{chandrasekhar1987truth}-
\emph{\textquotedblleft If nature leads us to mathematical forms of
great simplicity and beauty\textemdash by forms, I am referring to
coherent systems of hypotheses, axioms, etc.\textemdash to
forms that no one has previously encountered, we cannot help thinking
that they are \textquotedblleft true,\textquotedblright{} that they
reveal a genuine feature of nature\dots . You must have felt this
too: the almost frightening simplicity and wholeness of the relationships
which nature suddenly spreads out before us and for which none of
us was in the least prepared.\textquotedblright{}} The simplicity
and beauty referred here were vibrantly present in Maxwell's equations
and those compelled Maxwell to consider light as an electromagnetic
wave. It was confidence on the beauty of the mathematical forms of
Maxwell's beautiful equations, which forced Einstein to show confidence
on these equations rather than on the century old and well tested
Galilean transformations\footnote{Everyday, we see that relative velocity of two cars that approach
each other with the same speed is double of the individual speed.
This is in accordance with the Galilean transformation, but according
to Maxwell's equation, light would always move with a constant velocity
$c$ in free space. Thus, if we send light from two torches in the
opposite direction, their relative velocity would still remain $c.$
This was in sharp contrast with the Galilean transformation. } and indirectly this confidence on the symmetry of the Maxwell's equations
led him to introduce the special theory of relativity.
Historically, light played an extremely important role in understanding
nature. For example, most of the information that we have about the
celestial bodies are received through light (may not be restricted
only to the visible range). However, at a more fundamental level,
an effort to understand the blackbody spectrum (i.e., to explain experimental
observations related to intensity of lights of different wavelength
emitted by a blackbody) led Planck to postulate that energy from an
electric oscillator (which constitutes the wall of a cavity) had to
be transferred to electromagnetic waves in different quanta of each
\cite{planck1901law}, but the waves themselves would follow the conventional
wave theory of Maxwell. This was postulated in 1900\footnote{Planck's paper cited here as \cite{planck1901law} was published in
1901, but the paper contains following note- In other form reported
in the German Physical Society (Deutsche Physikalische Gesellschaft)
in the meetings of October 19 and December 14, 1900, published in
Verh. Dtsch. Phys. Ges. Berlin, (1900) \textbf{2}, 202 and 237.
An English translation of Verh. Dtsch. Phys. Ges. Berlin, (1900) \textbf{2},
237 is available at http://hermes.ffn.ub.es/luisnavarro/nuevo\_maletin/Planck\%20(1900),\%20Distribution\%20Law.pdf}. Just after 5 years, in 1905, Einstein (while he was working at the
Swiss Patent office) published a set of five outstanding papers which,
according to John Stachel \emph{\textquotedblleft changed the face
of physics\textquotedblright{}} \cite{stachel1999einstein}. In one
of those 5 papers \cite{einstein1905erzeugung}, he introduced his
famous theory of light quanta according to which light is considered
to consist of mutually independent quanta of energy
\begin{equation}
E=h\nu,\label{eq:hnu}
\end{equation}
where $\nu$ is the frequency and $h$ is the Planck\textquoteright s
constant. Here it is important to note that there was a fundamental
difference between Planck's idea of light quanta and that of Einstein.
Specifically, Planck postulated that energy from an electric oscillator
had to be transferred to electromagnetic waves in different quanta
of each, but the waves themselves would follow the wave theory of
Maxwell. In contrast, Einstein assumed that energy is not only given
to an electromagnetic wave in separate quanta, but is also carried
in separate quanta. Einstein's revolutionary idea of light quanta
explained an interesting observation related to light. To be precise,
in 1887, Hertz did a simple experiment with light \cite{hertz1887ueber}.
In his experiment, electrodes illuminated with the ultraviolet (UV)
light were found to emit electrons. This phenomenon is known as the
photoelectric effect, and Einstein postulated ``light quantum'',
to explain this phenomenon. Thus, the revolutionary ideas of both
Planck and Einstein were theoretical in nature, but were obtained
from the efforts to explain experimental observations related to light
and these ideas subsequently played important role in the construction
of quantum physics, the best known model of nature.
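To get a feel for the scales involved in Eq. (\ref{eq:hnu}), the following illustrative computation (our own example; the 254 nm wavelength is a typical UV line, not a value from the text) evaluates the energy of a single light quantum and the number of quanta $E/h\nu$ in a modest 1 mJ pulse.

```python
H = 6.62607015e-34    # Planck's constant, J s
C = 2.99792458e8      # speed of light in vacuum, m/s
EV = 1.602176634e-19  # one electron volt in joules

def photon_energy(wavelength_m):
    """Energy E = h * nu = h * c / lambda of a single light quantum."""
    return H * C / wavelength_m

E_uv = photon_energy(254e-9)  # a typical UV line; about 7.8e-19 J
E_uv_eV = E_uv / EV           # about 4.9 eV

# Einstein's count of light quanta, n = E / (h * nu):
n_quanta = 1.0e-3 / E_uv      # quanta in a 1 mJ pulse; about 1.3e15
```

A single UV quantum carries a few electron volts, enough to liberate an electron from a metal surface, while even a weak classical pulse contains an enormous number of quanta.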
Before we proceed further, it would be interesting to note that in
\cite{einstein1905erzeugung}, Einstein obtained Eq. (\ref{eq:hnu})
by comparing entropy of radiation with that of a gas having $n$ molecules.
Specifically, he had shown that if volume changes from $V_{0}$ to
$V$, then the change in entropy of radiation having a fixed amount
of total energy is given by
\begin{equation}
S-S_{0}=k\ln\left(\frac{V}{V_{0}}\right)^{\frac{E}{h\nu}},\label{eq:einstein1}
\end{equation}
whereas the corresponding change in entropy for an ideal gas having
$n$ particles is
\begin{equation}
S-S_{0}=k\ln\left(\frac{V}{V_{0}}\right)^{n}.\label{eq:einstein2}
\end{equation}
Comparing Eqs. (\ref{eq:einstein1}) and (\ref{eq:einstein2}), Einstein
reached the conclusion that radiation behaves as if it were
composed of independent light quanta and that $\frac{E}{h\nu}$
should represent the total number of light quanta ($n$) having individual
energy of $h\nu$ \cite{ghatak2017optics}. It is of further interest
to note that later on, a few scientists have tried to explain photoelectric
effect without using the concept of photon or light quanta. They assumed
that the energy of the atoms constituting the electrode on which light
falls is quantized \cite{bosanac1998semiclassical}. Thus, photoelectric
effect can be explained by considering quantization of either light
or matter. However, it seems obvious that Einstein used the concept
of light quanta. This is so because Einstein provided an explanation
of the photoelectric effect in 1905, when neither Rutherford\textquoteright s
model (1909), nor Bohr model (1913) was known\footnote{It may be noted that Bohr model also originated in an effort to explain
the origin of lights of certain wavelengths (as was observed in Lyman,
Balmer, Paschen, Brackett and Pfund series).}, but Planck\textquoteright s idea had already been present since 1900.
Naturally, Einstein used the concept of light quanta in his explanation
of the photoelectric effect. This discussion establishes two points:
\begin{enumerate}
\item It is important to know the history of a subject to understand that
subject.
\item Light played a fundamental role in the development of the most fascinating
and useful concepts of the modern physics.
\end{enumerate}
In what follows, we would keep this in mind and would try to provide
a historical (but not chronological) overview of the development of
various concepts related to modern optics and modern applications
of them.
Maxwell's work provided a clear understanding of electromagnetic wave
which still plays the most crucial role in communication engineering
and enables us to speak with friends and relatives through cell phones,
to see different channels in TV, to do online shopping, etc. On the
other hand, the concept of photon plays a crucial role in many of
the recently proposed path-breaking applications of quantum information
processing and quantum communication, such as unconditionally secure
quantum cryptography \cite{bennett1984quantum,bennett1992quantum},
quantum teleportation \cite{bennett1993teleporting}, and dense coding
\cite{bennett1992communication,mattle1996dense}. Before we proceed
to describe some of these applications and briefly introduce the notion
of nonclassical light, we must mention that neither Planck nor Einstein
used the term ``photon''. It was only in 1926 that the American
chemist Gilbert Lewis coined the word ``photon''. In \cite{lewis1926conservation},
Lewis wrote ``\emph{\dots it spends only a minute fraction of its
existence as a carrier of radiant energy, while the rest of the time
it remains as an important structural element within the atom. \dots I
therefore take the liberty of proposing for this hypothetical new
atom, which is not light, but plays an essential part in every process
of radiation, the name photon}''. One can easily recognize that the
term photon is used today with a different meaning. Further, we would
like to note that in 1905, Maxwell\textquoteright s electromagnetic
theory was well established and consequently, Einstein\textquoteright s
idea of the light quantum was not readily accepted (for a discussion
see \cite{einstein1979autobiographical}). In fact, even today there
are some open questions related to the wave function of photon\footnote{The main problem in defining a wave function of photon in position
space arises because of the fact that it cannot be localized in position
space as it has a definite momentum.} (\cite{bialynicki1994wave,inagaki1998physical,sipe1995photon} and
references therein) and its momentum in a medium \footnote{Interested readers may read about Abraham\textendash Minkowski dilemma
in detail to know the origin of this interesting problem. About a
century ago, Abraham and Minkowski gave two different expressions
for momentum of light in a medium. To understand the dilemma, at the
single photon level, we may note that for free space momentum of a
photon is $\hbar k,$ and it is unambiguous, but for a medium having
refractive index $n$, there are two competing expressions for photon
momentum: $n\hbar k$ and $\frac{\hbar k}{n}$. Both are used, and
thus the open question is: Which one of these two expressions is correct?
Apparently, the problem arises because even in the classical optics,
there is no universally accepted definition for the electromagnetic
momentum in a dispersive medium.} (see \cite{barnett2010enigma,barnett2010resolution} and references
therein); and there are people who are not confident on the existence
of photon (interested readers may read Lamb Jr.'s article entitled,
Anti-photon \cite{lamb1995anti}, where the author claimed that ``\emph{...there
is no such thing as a photon}''). Our view is different, and we believe
that the wide domain of optics can be classified into three sub-domains-
classical optics, semi-classical optics and quantum optics \cite{fox2006quantum}.
Specifically, science of describing those phenomena which can be explained
with the help of the classical theory of light (i.e., considering
light as an electromagnetic wave) and classical theory of matter (which
does not require quantization of atomic/molecular energy levels).
Reflection, refraction, dispersion, etc., are examples of phenomena
that fall under classical optics. Whereas explanation of another set
of phenomena, like Compton effect and photoelectric effect, requires
the quantum theory of matter, but does not essentially require quantum
theory of light\footnote{It is interesting to note that Nobel laureate C V Raman, provided
a semiclassical explanation of the Compton effect in Ref. \cite{raman1928classical}.}. Such phenomena fall under semiclassical optics. Finally, there exists
a set of phenomena (like the recoil of atom on the emission of light)
which cannot be explained without considering the quantum theory for
both atom and field. Those phenomena fall under the domain of quantum
optics, and a major part of this review is dedicated to the application
of such phenomena.
In his 1905 paper on the photoelectric effect \cite{einstein1905erzeugung},
Einstein conceptualized the notion of \textquotedblleft wave particle
duality\textquotedblright , which eventually led to the development
of quantum theory. A few years later, in 1923, de Broglie showed confidence
in the symmetry and beauty of nature by claiming that since nature manifests
itself in two forms, light and matter, if one of them has a dual character,
then the other should also have a dual character \cite{de1923ondes,de1923quanta,de1923waves}.
Believing in the inherent harmony of nature, he conjectured that Fermat's
least optical path principle of optics and the least action principle
of mechanics are manifestations of the same law as their mathematical
forms are the same. This conjecture led to the idea of matter wave
and de Broglie wavelength, which again played a very important role
in the development of quantum mechanics. The fact that de Broglie
was convinced that there was a harmony in nature, and the duality
introduced through the work of Einstein was generally true, was captured
in many of de Broglie's own statements. For example, we may quote
(cf. p. 58 of \cite{cropper1970the}) : \emph{``I was convinced that
the wave-particle duality discovered by Einstein in his theory of
light quanta was absolutely general and extended to all of the physical
world, and it seemed certain to me, therefore, that the propagation
of a wave is associated with the motion of a particle of any sort-
photon, electron, proton or any other.}''
In recognition of the harmony of nature captured in his work, de Broglie
was awarded the 1929 Nobel Prize in Physics. The harmony of nature
discovered by him was nicely reflected in the presentation speech
of the Chairman of Nobel Committee for Physics (1929), who said: ``\emph{Louis
de Broglie had the boldness to maintain that not all the properties
of matter can be explained by the theory that it consists of corpuscles\ldots{} Hence
there are not two worlds, one of light and waves, one of matter and
corpuscles. There is only a single universe.}'' (cf. Page 26.5 of
\cite{ghatak2017optics}).
In another direction of development, in 1917, Einstein \cite{einstein1917quantentheorie}
introduced the famous $A$ and $B$ coefficients, which describe
the interaction between matter and the radiation field. Specifically,
stimulated emission, which governs the operation of all laser (light
amplification by stimulated emission of radiation) systems, is characterized
by the $B$ coefficient, whereas spontaneous emission, which gives rise to
the spectral lines, is characterized by Einstein's $A$ coefficient.
Einstein used a thermodynamic argument to obtain the $A$ coefficient. Ten
years later, in 1927, Dirac performed quantization of the electromagnetic
field \cite{dirac1927quantum}, which is now known as second quantization\footnote{The word ``quantum'' means discrete. In quantum mechanics, we have
Hermitian operators for all the physical observables. These operators
satisfy eigenvalue equations, where the eigenfunctions are the wave
functions. The eigenvalues obtained for an operator are discrete,
and in a particular measurement, we can obtain only one of those eigenvalues
as the value of the corresponding physical observable. Thus, in quantum
mechanics, we obtain discrete values for an observable; in other words,
in quantum mechanics the allowed values of physical observables get quantized.
Historically, at the beginning of quantum mechanics (say, between
1925-1926), quantization was restricted to the motion of
particles only, and in all the early works of the founding fathers
of quantum mechanics (e.g., Schrodinger, Heisenberg, Dirac), the electromagnetic
field was treated classically. Later, in 1927, Dirac quantized the electromagnetic
field \cite{dirac1927quantum}; subsequently, Jordan and Wigner developed
a formalism in which particles are also represented by quantized fields.
This led to quantum field theory, which has been formulated in the
language of second quantization.}. In fact, the quantization of fields in general and of the radiation field in
particular is referred to as second quantization, and it naturally
yields Einstein's $A$ coefficient and the concept of light quanta.
In the meantime, in 1924, Bose \cite{bose1924plancks} provided a
quantitative explanation of Planck's law and paved the way for quantum
statistics by introducing a technique for the counting statistics of particles
having zero rest mass \cite{agarwal2012quantum}. This work of Bose
was followed by another seminal paper of Einstein, in which the counting
statistics for particles having finite mass (massive bosons) were provided.
These works are relevant here because photons or light quanta are
bosons and they follow Bose-Einstein statistics, introduced through
the works of Bose and Einstein. Later, quantization of a system of
finite rest mass was performed by the Russian physicist Vladimir Fock
\cite{agarwal2012quantum}; the corresponding space (i.e., the appropriate
state space for the electromagnetic field) is called the Fock space,
and the basis states of this space are referred to as the Fock states
or number states $|n\rangle$. To be precise, for the present review,
we are only interested in the bosonic Fock space, and wish to express
states of the radiation field in the Fock basis. From the discussion
so far, we can easily recognize that using the second-quantization
formalism and the Fock basis, one can express an arbitrary radiation field
state as $|\psi\rangle=\sum_{n=0}^{\infty}c_{n}|n\rangle,$ where
$|n\rangle$ represents a Fock state: $|0\rangle$ corresponds
to the vacuum state, $|1\rangle$ to a single photon state,
and $|n\rangle$ to a state with $n$ photons; $|c_{n}|^{2}=P(n)$
is the probability of obtaining $n$ photons ($P(n)$ is also referred
to as the photon number distribution) if the number of photons present
in the quantum state $|\psi\rangle$ is measured. Clearly, in this
formalism (the formalism of second quantization), the notion of light quanta
follows automatically. However, that is not our main concern. Our concern
is the following: since the electromagnetic field is quantized in general, and
since every state of the radiation field is essentially quantum because
it can always be described as a quantum state $|\psi\rangle=\sum_{n=0}^{\infty}c_{n}|n\rangle,$
how does one distinguish classical from quantum light? Here, we need to come
out of the popular classification made by using particle nature and
wave nature of light and note that there are some properties of quantum
world which are not present in the classical world. For example, in
the quantum world, one cannot measure two non-commuting operators
(that represent two physical observables) simultaneously with arbitrary
accuracy. This is known as Heisenberg's uncertainty principle. No
such uncertainty exists in the classical world, so a quantum state
can be approximated as classical if the observed uncertainty (associated
with both the noncommuting operators) reaches the minimum possible value
for that state. In some sense, in a world where every state is quantum,
such a quantum state can be viewed as the most classical state (or
a state which is closest to a classical state). Let's now translate
this scenario into the context of the radiation field. Traditionally,
when we look at a plane wave (a solution of Maxwell's equations),
the amplitude of the wave ($E_{0}$) is considered as a complex number,
real and imaginary parts of which are referred to as the in-phase
and out-of-phase quadratures of the field \cite{agarwal2012quantum}.
In the domain of quantum mechanics, $E_{0}$ is replaced by an annihilation
operator $a$ for that mode and the corresponding field quadratures
are defined as $X=\frac{1}{\sqrt{2}}\left(a+a^{\dagger}\right)$ and
$Y=-\frac{i}{\sqrt{2}}\left(a-a^{\dagger}\right).$ Clearly, $X$
and $Y$ don't commute as $[a,a^{\dagger}]=1.$ In fact, using $[a,a^{\dagger}]=1,$
we obtain $[X,Y]=i$. Thus, these field quadratures which correspond
to measurable quantities (i.e., physical observables) don't commute
and consequently cannot be measured simultaneously with arbitrary
accuracy. Specifically, we obtain an uncertainty relation involving
the fluctuations in the field quadrature as
\begin{equation}
\Delta X\Delta Y\geq\frac{1}{2}.\label{eq:uncertainty}
\end{equation}
In the above discussion, we have assumed $\hbar=1.$ Now, we know
that no such uncertainty exists for the classical field, and in principle,
one can perform homodyne measurement and simultaneously measure field
quadratures with arbitrary accuracy, but quantum mechanics does not
allow that. This led to a new question: how close can a quantum state
be to the states of the classical world, where there is no uncertainty?
The quantum state of light closest to the classical world of no uncertainty
would definitely be the one with minimum uncertainty, i.e., a state
of radiation field which would satisfy $\left(\Delta X\right)^{2}=\left(\Delta Y\right)^{2}=\frac{1}{2}.$
Such a state is called a coherent state, which will be elaborated on separately
in the next section. Coherent states and their statistical mixtures
are considered as classical states of radiation field (classical light),
and all other states of radiation field are referred to as nonclassical
states (nonclassical light). A more formal definition of nonclassical
states will be given in the next section, but before that we may just
note that the lucid classification of light made earlier is consistent
with this modern view. This is so because light quanta of Einstein
can be viewed as Fock states, and for a Fock state $|n\rangle,$ we
obtain $\left(\Delta X\right)^{2}=\left(\Delta Y\right)^{2}=n+\frac{1}{2},$
which clearly indicates that, except vacuum state $|0\rangle$, no
Fock state gives us a minimum uncertainty state. Further, for all Fock
states (except $|0\rangle$), $(\Delta N)^{2}=0<\bar{N}=n$, and thus
they show sub-Poissonian photon statistics (which is a signature of
nonclassicality; cf. Sec. \ref{sec:Coherent-states-and} for a relatively
elaborate discussion) and are nonclassical states (in other words,
they are quantum states having no classical analogue), whereas a vacuum
state can be considered as a classical state. Similarly, electromagnetic
fields for which the field quadratures can be measured with arbitrary accuracy are
definitely classical. Now, we may further stress this point by
noting that in the framework of quantum mechanics, every state is
a quantum state. As a consequence, the so-called classical states
are also quantum, and need to obey the no-go theorems of quantum mechanics,
like Heisenberg's uncertainty principle. However, for a classical
state, the uncertainty would be minimum. Thus, in the framework of
quantum mechanics a classical state would mean a state closest to
classical world (where there is no uncertainty), in the sense that
the uncertainty in the measured values of two noncommuting observables
(for us, the two quadratures of the field) would be minimum for them. However,
a state that satisfies Eq. (\ref{eq:uncertainty}) may have reduced
fluctuations (reduced with respect to the coherent state value)
in one of the quadratures at the cost of increased fluctuations in
the other quadrature. Such a state is referred to as a squeezed state.
For example, any state of radiation field that would satisfy $(\Delta X)^{2}<\frac{1}{2}$
or $(\Delta Y)^{2}<\frac{1}{2}$ would be referred to as a squeezed
state, and all squeezed states are nonclassical. We will further elaborate
on squeezed states and their applications in Sec. \ref{subsec:Squeezed-state-and}.
Keeping this distinction between classical light and nonclassical
light in mind, in what follows, we will first provide a more formal
definition of classical and nonclassical states of light and then
state various modern applications of both types of light.
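The quadrature variances quoted above can be checked numerically by representing the ladder operators as matrices in a truncated Fock basis. The following is a minimal sketch (assuming NumPy is available; the truncation dimension and the chosen values of $n$ and $\alpha$ are illustrative, not taken from the text):

```python
import numpy as np
from math import factorial

N = 60  # Fock-space truncation (assumed large enough for the states below)

# Annihilation operator in the truncated Fock basis {|0>, ..., |N-1>}:
# a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

# Field quadratures X = (a + a†)/√2 and Y = -i(a - a†)/√2 (with ħ = 1)
X = (a + adag) / np.sqrt(2)
Y = -1j * (a - adag) / np.sqrt(2)

def var(op, psi):
    """Variance <op²> - <op>² in the normalized state psi."""
    mean = np.vdot(psi, op @ psi).real
    return np.vdot(psi, op @ (op @ psi)).real - mean**2

# Fock state |n>: (ΔX)² = (ΔY)² = n + 1/2, never minimum uncertainty for n > 0
n = 3
fock = np.zeros(N)
fock[n] = 1.0
print(var(X, fock), var(Y, fock))   # both 3.5 for n = 3

# Coherent state |α>: (ΔX)² = (ΔY)² = 1/2, the minimum-uncertainty value
alpha = 1.2
coh = np.array([alpha**k / np.sqrt(float(factorial(k))) for k in range(N)])
coh /= np.linalg.norm(coh)          # normalize (also absorbs truncation error)
print(var(X, coh), var(Y, coh))     # both ≈ 0.5
```

The renormalization of the coherent-state vector absorbs the small error introduced by truncating the Hilbert space at dimension $N$.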
The rest of the paper is organized as follows. In Sec. \ref{sec:Coherent-states-and},
we formally introduce coherent state and the notion of nonclassical
states and the Glauber-Sudarshan $P$-function. In Sec. \ref{sec:Nonlinear-optics-and},
a set of interesting nonlinear optical phenomena and their applications
are discussed. Sec. \ref{sec:Characterization-of-nonclassical} is
dedicated to the methods that are used to identify nonclassical light.
In Sec. \ref{sec:Applications-of-nonclassical}, applications of nonclassical
states of radiation field (i.e., nonclassical light) are discussed
with a specific focus on the applications of squeezed, antibunched
and entangled states of light and the recent developments in the field
of quantum state engineering and quantum information processing in
general and quantum communication in particular. In Sec. \ref{sec:Applications-of-classical},
the discussion on the modern applications of light is continued, and
the modern applications of classical light are reviewed. Finally,
the paper is concluded in Sec. \ref{sec:Conclusion} with a brief
mention of some classical and nonclassical light-based technologies
that may appear in the near future.
\section{Coherent states and the idea of classical and nonclassical states
of radiation field\label{sec:Coherent-states-and} }
Let us now formally define a coherent state. For this review, we may
consider a coherent state $|\alpha\rangle$ as a state of the radiation
field defined as an eigenket of the annihilation operator $a$
(thus, $a|\alpha\rangle=\alpha|\alpha\rangle$ defines a coherent
state). A coherent state can also be defined using two other equivalent
definitions: specifically, as a displaced vacuum state or as a minimum
uncertainty state (as mentioned in the previous section). In infinite
dimensional Hilbert space, these definitions are equivalent\footnote{It may be noted that for the finite dimensional Hilbert space, these
definitions are not equivalent, and any finite superposition of Fock
states is always nonclassical. } and lead to a well defined state which can be expanded in terms of
Fock basis $\left\{ |n\rangle\right\} $ (introduced in the previous
section) as $|\alpha\rangle=\sum_{n=0}^{\infty}\frac{\alpha^{n}\exp\left(-\frac{|\alpha|^{2}}{2}\right)}{\sqrt{n!}}|n\rangle=\sum_{n=0}^{\infty}c_{n}|n\rangle,$
where $|n\rangle$ represents a Fock state and $\bar{N}=\langle\alpha|a^{\dagger}a|\alpha\rangle=|\alpha|^{2}$
is the average photon number. Looking at the functional form of the
probability distribution defined by $P(n)=|c_{n}|^{2},$ one can identify
that the photon number distribution for the coherent state of light
is Poissonian\footnote{In our notation, a Poissonian distribution is one which follows $P(n)=\frac{\bar{N}^{n}}{n!}\exp\left(-\bar{N}\right)$
and $(\Delta N)^{2}=\bar{N}$. Here, the photon number distribution of $|\alpha\rangle$ is easily
recognized as Poissonian by noting that $\bar{N}=|\alpha|^{2}.$}, and it would satisfy $(\Delta N)^{2}=\bar{N}.$ If a state satisfies
$(\Delta N)^{2}<\bar{N}$, then the state will be referred to as a
sub-Poissonian state and such a state will be nonclassical. Before
we elaborate on other nonclassical states, let us first define nonclassicality.
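The Poissonian photon statistics of the coherent state can be verified directly from the expansion coefficients $c_{n}$ given above; the following pure-Python sketch (the amplitude $\alpha=1.5$ is an illustrative choice) also confirms $(\Delta N)^{2}=\bar{N}$:

```python
from math import exp, factorial, sqrt

alpha = 1.5                # illustrative coherent-state amplitude (real)
nbar = abs(alpha)**2       # mean photon number, N̄ = |α|²

# Photon-number probabilities P(n) = |c_n|², with c_n = α^n e^{-|α|²/2}/√(n!)
def P(n):
    return abs(alpha**n * exp(-abs(alpha)**2 / 2) / sqrt(factorial(n)))**2

# P(n) coincides with the Poissonian form N̄^n e^{-N̄}/n!
for n in range(10):
    assert abs(P(n) - nbar**n * exp(-nbar) / factorial(n)) < 1e-12

# Mean and variance of the photon number: (ΔN)² = N̄ for a coherent state
mean = sum(n * P(n) for n in range(100))
var = sum(n**2 * P(n) for n in range(100)) - mean**2
print(mean, var)           # both ≈ 2.25 = |α|²
```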
Now, we may note that in quantum mechanics, a pure state is either
described through its wave function $|\psi\rangle$ or through its
density matrix $\rho=|\psi\rangle\langle\psi|$. However, if two pure
states $|\psi_{1}\rangle$ and $|\psi_{2}\rangle$ are mixed with
probabilities $p_{1}$ and $p_{2}$, then the density matrix of the state
would be $\rho^{\prime}=p_{1}|\psi_{1}\rangle\langle\psi_{1}|+p_{2}|\psi_{2}\rangle\langle\psi_{2}|=p_{1}\rho_{1}+p_{2}\rho_{2}\,\,:p_{1}+p_{2}=1.$
Thus, in general, the density matrix of a mixed state $\rho$ would be
$\rho=\sum_{i=1}^{N}p_{i}\rho_{i}:\,\,\sum_{i=1}^{N}p_{i}=1$. Now,
consider a state which is a mixture of coherent states $|\alpha\rangle$,
then we must have $\rho=\int P(\alpha)|\alpha\rangle\langle\alpha|d^{2}\alpha,$
where the summation has been replaced by integration considering $\alpha$
as a continuous variable, and discrete probability $p_{i}=p_{\alpha}$
is replaced by a probability distribution $P(\alpha):\,\,\int P(\alpha)d^{2}\alpha=1$.
Now, for a mixture of coherent states $P(\alpha)$ must be nonnegative
(i.e., $P(\alpha)\geq0\,\forall\alpha)$ and must satisfy $\int P(\alpha)d^{2}\alpha=1$.
In that case, we would say that $P(\alpha)$ is a true probability
distribution.
Coherent states form an overcomplete basis, as for any two coherent
states $|\alpha\rangle$ and $|\beta\rangle,$ we obtain $\langle\alpha|\beta\rangle\neq\delta(\alpha-\beta).$
Thus, we may diagonally expand any quantum state\footnote{Note that this description is valid for any quantum state and is
not restricted to the quantum states of the radiation field.} $\rho$ in the coherent state basis as
\begin{equation}
\rho=\int P(\alpha)|\alpha\rangle\langle\alpha|d^{2}\alpha.\label{eq:pfn}
\end{equation}
However, in this expansion, $P(\alpha)$, which is usually referred
to as the Glauber-Sudarshan $P$-function\footnote{Although it is usually referred to as the Glauber-Sudarshan $P$-function,
and the related formulation as the Glauber-Sudarshan $P$-representation,
and Glauber won the 2005 Nobel prize in Physics for developing this formalism,
the attribution is a bit controversial. Many scientists, and Sudarshan himself, often
argued that this representation, which provides the correct quantum mechanical
theory of optical coherence, was actually developed by Sudarshan and
was later adopted by Glauber, who coined the term $P$-representation.
As the $P$-representation or diagonal representation played a crucial role
in the development of nonclassical optics, this debate about
the origin of the $P$-representation has existed for a long time. However,
it resurfaced in 2005-06, when Glauber won the Nobel prize in Physics
for this formulation, but Sudarshan missed it and wrote a strong letter
of objection to the Nobel committee (for a short description of the
controversy, interested readers may see \cite{sudarshan-debate1,sudarshan-debate2,sudarshan-debate3}).
To us it appears that the Nobel committee gave more credit to Glauber's
1963 paper \cite{glauber1963photon}, published in February 1963, over
Sudarshan's more powerful work \cite{sudarshan1963equivalence}, published
in April 1963. However, the $P$-representation or diagonal representation
(or, equivalently, the optical equivalence theorem) was actually developed
by Sudarshan, and it would have been more appropriate to call it the Sudarshan
diagonal representation or Sudarshan's $\phi$-representation, as he
had used $\phi(z)$ in place of $P(\alpha)$ in his pioneering work.
In fact, in Eq. (4) of Ref. \cite{sudarshan1963equivalence}, Sudarshan
expressed the density function $\rho$ as $\rho=\int d^{2}z\phi(z)|z\rangle\langle z|$,
where he considered $|z\rangle$ as a quantum state. Almost five months
later, in Sec. VII of Ref. \cite{glauber1963coherent}, Glauber reintroduced
the diagonal representation of Sudarshan as the $P$-representation. Note
that Eq. (7.6) of \cite{glauber1963coherent} is the same as Eq. (\ref{eq:pfn})
given above. For a clear and chronological description of the events
that happened in 1963, see \cite{simon2009sudarshan}.} is not restricted to follow $P(\alpha)\geq0\,\forall\alpha,$ and
thus to remain a true probability distribution. To be specific, a negative
value of the $P$-function would mean that $P(\alpha)$ cannot be viewed
as a true probability distribution, and the corresponding state $\rho$
cannot be expressed as a mixture of coherent (classical) states. This
is why $P(\alpha)$ is often referred to as a quasi-probability distribution,
and we usually say that a state which cannot be expressed as a mixture
of coherent states is nonclassical. Such nonclassical states are often
seen in the radiation field, and nonclassical states of the radiation field
are the states of our interest as they don't have any classical analogue.
In what follows, any radiation field state with a negative value of
$P(\alpha)$ for some $\alpha$ will be called nonclassical light,
whereas the rest will be considered as classical light. Here it would
be apt to note that the diagonal representation can be considered
a valid representation iff an inversion formula exists \cite{simon2009sudarshan}.
Interestingly, in Eq. (6) of the pioneering work of Sudarshan \cite{sudarshan1963equivalence},
an explicit expression for $P(\alpha)$ (in Sudarshan's notation $\phi(z)$)
in terms of the density matrix $\rho$ was given. Further, Sudarshan established
that the expectation value of any normal ordered operator $O=a^{\dagger k}a^{l}$
(i.e., an operator ordered in such a way that all creation
operators appear on the left and all annihilation operators appear
on the right), in the statistical state represented by the density matrix
written in the diagonal form given in (\ref{eq:pfn}), would be
\begin{equation}
Tr(\rho O)=Tr\left(\rho a^{\dagger k}a^{l}\right)=\int P(\alpha)\alpha^{*k}\alpha^{l}d^{2}\alpha.\label{eq:sudarshan}
\end{equation}
The great importance of this result was recognized by Sudarshan. Immediately
after introducing this result in Ref. \cite{sudarshan1963equivalence},
he wrote about Eq. (\ref{eq:sudarshan}) (notation is changed here
for the consistency), ``This is the same as the expectation value
of the complex classical function $\alpha^{*k}\alpha^{l}$ for a probability
distribution $P(\alpha)$ over the complex plane. The demonstration
above shows that any statistical state of the quantum mechanical system
may be described by a classical probability distribution over a complex
plane, provided all operators are written in the normal ordered form.
In other words, the classical complex representations can be put in
one-to-one correspondence with quantum mechanical density matrices.''
These lines describe the optical equivalence theorem, probably the most
important result of quantum optics, or more precisely of nonclassical
optics. This is so because Sudarshan showed that all nonclassicalities,
if any, of a given state $\rho$ are fully captured in the departure
of the corresponding $P(\alpha)$ from being a genuine classical probability
\cite{simon2009sudarshan}. Thus, negativity of the $P$-function appeared
as the defining criterion for nonclassicality.
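As a sanity check of Eq. (\ref{eq:sudarshan}), one can take a thermal state, whose $P$-function $P(\alpha)=\exp\left(-|\alpha|^{2}/\bar{n}\right)/(\pi\bar{n})$ is a genuine probability distribution, and evaluate its normally ordered moments by direct integration. A minimal pure-Python sketch (the value of $\bar{n}$ and the integration grid are illustrative choices):

```python
from math import exp, pi

nbar = 0.8          # illustrative mean photon number of a thermal state

# Thermal-state P-function, P(α) = exp(-|α|²/n̄)/(π n̄): nonnegative and
# normalized, so thermal light is classical in the sense discussed above.
def P(r):           # depends only on r = |α|
    return exp(-r**2 / nbar) / (pi * nbar)

# Optical equivalence theorem: <a†^k a^k> = ∫ P(α) |α|^{2k} d²α.
# Evaluate the integral numerically in polar coordinates (d²α = r dr dθ).
def normal_moment(k, rmax=12.0, steps=100000):
    dr = rmax / steps
    return sum(2 * pi * (i * dr) * P(i * dr) * (i * dr)**(2 * k) * dr
               for i in range(1, steps + 1))

print(normal_moment(1))    # ≈ n̄ = 0.8 (the mean photon number)
print(normal_moment(2))    # ≈ 2 n̄² = 1.28 (thermal photon bunching)
```

The recovered moments $\langle a^{\dagger}a\rangle=\bar{n}$ and $\langle a^{\dagger2}a^{2}\rangle=2\bar{n}^{2}$ agree with the known thermal-state values, as the theorem guarantees.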
Negativity of the $P$-function being the defining criterion for nonclassicality,
it is both necessary and sufficient. However, the $P$-function cannot
be measured experimentally\footnote{There exists an interesting paper by Kiesel et al. \cite{kiesel2008experimental}
in which the experimental determination of a well-behaved $P$-function
is reported for a single-photon-added thermal state. However, the
method cannot be generalized, as $P$-functions of nonclassical states
are not always well-behaved. Further, to the best of our knowledge,
this is the only work that reports the experimental determination of a $P$-function.}, and as a consequence, over time several operational criteria for
nonclassicality have been developed (for a systematic discussion on
various criteria and a long list of criteria, see \cite{miranowicz2010testing}).
This list was obtained by generalizing the moment based criteria of
nonclassicality in general \cite{richter2002nonclassicality,shchukin2005nonclassical}
and entanglement in particular \cite{shchukin2005inseparability,miranowicz2009inseparability}.
A finite set of moment-based criteria for nonclassicality can only
serve as a witness of nonclassicality, while a sufficient and necessary
condition would require satisfaction of an infinite set of such nonclassicality
criteria \cite{richter2002nonclassicality,miranowicz2015statistical}.
In what follows, we will briefly mention some of these criteria.
The coherent states and the nonclassical states (such as squeezed
states) generated through the time evolution of an initial coherent
state in some physical Hamiltonian have many applications. Most of
these applications and the excitement connected to them started in the 1960s
after the invention of the laser, and the initial excitement continued
until the 1980s. However, the coherent state and the squeezed state were known
to the founding fathers of quantum mechanics (for an excellent review
see \cite{nieto1997discovery}, where the author describes a short
history of the discovery of the coherent state and the squeezed state). Just
like Einstein, Schrodinger also had a miraculous year: 1926. The first
half of that year was extremely productive for him, and he submitted
six famous papers in this period. In one of
those papers \cite{schrodinger1926stetige}, he discovered the coherent
state while he was looking for classical-like states that satisfy
the minimum uncertainty condition. Just the next year, the squeezed
state was discovered by Kennard (see Sec. 4C of \cite{kennard1927quantenmechanik})\footnote{Although the coherent state and the squeezed state were discovered in the
early years of quantum mechanics, their importance was realized much
later. Consequently, Schrodinger and Kennard did not receive much
credit for these discoveries. In this context, Nieto made the following
very interesting remark in \cite{nieto1997discovery}: \emph{``To
be popular in physics you have to either be good or lucky. Sometimes
it is better to be lucky. But if you are going to be good, perhaps
you shouldn\textquoteright t be too good}.''}.
Consider an arbitrary state of the electromagnetic field $\rho,$ which
can be expressed in the coherent state representation as shown in
Eq. (\ref{eq:pfn}). The photon number distribution of this state would
be $P(n)=\langle n|\rho|n\rangle=\int P(\alpha)\langle n|\alpha\rangle\langle\alpha|n\rangle d^{2}\alpha=\int P(\alpha)\left|\langle n|\alpha\rangle\right|^{2}d^{2}\alpha.$
Now, since $\left|\langle n|\alpha\rangle\right|^{2}>0,$ if $P(\alpha)$
is a true probability distribution (i.e., if $P(\alpha)$ has nonnegative
values for all $\alpha$ and $\int P(\alpha)d^{2}\alpha=1)$, then
$P(n)$ must be a positive quantity. In other words, $P(n)=0$ would
imply a negative value of $P(\alpha)$ for some value(s) of $\alpha.$
Thus, $P(n)=0$ for some value of $n$ (which refers to a hole in
the photon number distribution) is actually a signature of nonclassicality.
The process of creating holes in the photon number distribution is
known as hole burning \cite{escher2004controlled}. Various mechanisms
for hole burning have been proposed in the recent past \cite{baseia1999note,gerry2002hole,avelar2005controlled,escher2004controlled,baseia1998hole}.
Let us now look at a finite superposition of Fock states, say a quantum
state $|\psi\rangle=\sum_{n=0}^{N}c_{n}|n\rangle.$ Clearly, for this
state, $P(n)=|c_{n}|^{2}$ would describe the probability of finding
an $n$ photon state. In this case, $P(n)=0\,\forall n>N,$ and we may
thus view this as the presence of a large number of holes in the photon number
distribution. This leads us to the conclusion that a finite superposition
of Fock states is always nonclassical. Procedures adopted for hole
burning and/or the creation of finite dimensional states are at the heart
of quantum state engineering, which we will elaborate on separately.
From the above logic it is clear that different realizations of the
finite dimensional coherent states \cite{miranowicz1994coherent,leon1997finite}
must be nonclassical. Similarly, the $m$ photon added coherent state
(PACS) introduced by Agarwal and Tara \cite{agarwal1991nonclassical}
and experimentally realized (for $m=1)$ in \cite{zavatta2004quantum}
must also be nonclassical, as after the addition of one photon ($m$
photons) to every Fock state, including the vacuum, we must have $P(0)=0$ $\left(P(n)=0\,\forall n<m\right).$
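The hole at $n=0$ for a single-photon-added coherent state can be seen directly by shifting the coherent-state amplitudes up by one photon; a minimal pure-Python sketch (the amplitude and the truncation are illustrative):

```python
from math import exp, factorial, sqrt

alpha = 1.0        # illustrative amplitude
N = 40             # Fock-space truncation (large enough for α = 1)

# Coherent-state amplitudes c_n = α^n e^{-|α|²/2}/√(n!)
c = [alpha**n * exp(-abs(alpha)**2 / 2) / sqrt(factorial(n)) for n in range(N)]

# Single-photon-added coherent state (PACS with m = 1): |ψ> ∝ a†|α>.
# Since a†|n> = √(n+1)|n+1>, every amplitude is shifted up by one photon.
d = [0.0] + [sqrt(n + 1) * c[n] for n in range(N - 1)]
norm = sqrt(sum(x**2 for x in d))
d = [x / norm for x in d]

Pn = [x**2 for x in d]
print(Pn[0])       # 0.0, i.e., a hole at n = 0, so the PACS is nonclassical
print(sum(Pn))     # ≈ 1
```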
It is interesting to note that the procedure of obtaining a nonclassical
state (PACS) by adding a single photon to a classical state (coherent
state $|\alpha\rangle$) manifests one of the simplest procedures
that describe the classical-to-quantum transition. Further, PACS and
similar states are often referred to as the intermediate states \cite{dodonov2003theory,verma2008higher,verma2009reduction,verma2010generalized}
as they reduce to different well known states at different limits.
In particular, a PACS is intermediate between a fully quantum single
photon Fock state $|1\rangle$ and a coherent state $|\alpha\rangle.$
Other popular intermediate states that show nonclassical characters
at different limits are binomial state \cite{stoler1985binomial,vidiella1994statistical},
reciprocal binomial state \cite{moussa1998generation}, various types
of generalized binomial state \cite{fu1996generalized,roy1997generalized,fan1999new},
negative binomial state \cite{barnett1998negative}, excited binomial
and negative binomial states \cite{obada2002odd,wang2000excited},
hypergeometric state \cite{fu1997hypergeometric}, and negative hypergeometric
state \cite{fan1998negative}. Among these states, except the negative
binomial state \cite{barnett1998negative}, all the states are finite
dimensional and naturally show nonclassicality. The negative
binomial state is defined as
\begin{equation}
|\eta,M\rangle=\sum_{n=M}^{\infty}C_{n}\left(\eta,M\right)|n\rangle,\label{eq:pacs}
\end{equation}
where $C_{n}\left(\eta,M\right)=\left[\left(\begin{array}{c}
n\\
M
\end{array}\right)\eta^{M+1}(1-\eta)^{n-M}\right]^{\frac{1}{2}},$ $0\leq\eta\leq1$ and $M$ is a nonnegative integer. Clearly, for
a nonzero $M,$ $P(0)=P(1)=\cdots=P(M-1)=0,$ and these holes in the photon
number distribution imply that all negative binomial states
are nonclassical for $M\neq0.$ Thus, the fact that every finite superposition
of Fock states is nonclassical implies that the nonclassicalities
reported in various intermediate states \cite{verma2008higher,verma2009reduction,verma2010generalized,miranowicz2014phase,vidiella1994statistical,obada2002odd,fu1997hypergeometric,fan1998negative,pathak2014wigner}
are not surprising. Rather, they are manifestations of the facts discussed
above. Finally, a quantum scissors \cite{miranowicz2014phase,miranowicz2004dissipation}
which can be used to truncate the usual infinite dimensional Hilbert
space to a finite dimensional space must lead to nonclassicality (cf.
Fig. \ref{fig:quantum-scissors}).
\begin{figure}
\begin{centering}
\includegraphics[scale=0.7]{Adam.pdf}
\par\end{centering}
\caption{\label{fig:quantum-scissors}A cartoon depicting the role of quantum
scissors in quantum state engineering, where quantum scissors
are used as devices to truncate the usual infinite dimensional Hilbert
space to $N$ dimensions (where $N$ is finite) and thus to create
nonclassical states.}
\end{figure}
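The holes in the photon number distribution of the negative binomial state can be verified numerically from the coefficients $C_{n}\left(\eta,M\right)$ given above; a minimal pure-Python sketch (the parameter values are illustrative):

```python
from math import comb

eta, M = 0.4, 3    # illustrative parameters: 0 ≤ η ≤ 1, M a nonnegative integer

# |C_n(η, M)|² = C(n, M) η^(M+1) (1 - η)^(n - M) for n ≥ M, zero otherwise
def prob(n):
    return comb(n, M) * eta**(M + 1) * (1 - eta)**(n - M) if n >= M else 0.0

Pn = [prob(n) for n in range(400)]  # the tail beyond n = 400 is negligible
print(Pn[:M])      # [0.0, 0.0, 0.0]: holes at n = 0, ..., M-1
print(sum(Pn))     # ≈ 1: the coefficients are properly normalized
```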
\section{Nonlinear optics and applications of nonlinear optical phenomena\label{sec:Nonlinear-optics-and} }
In 1960, Theodore H. Maiman built the first laser at Hughes Research
Laboratories \cite{maiman1960stimulated}. It was a ruby laser, in
which ruby was used as the active medium to produce stimulated emission
at 694.3 nm. The realization was based on a theoretical work by Arthur
Leonard Schawlow and Charles Hard Townes \cite{schawlow1958infrared}.
The advent of the ruby laser was followed by the advent of other lasing
systems, including the He-Ne laser, the ${\rm CO_{2}}$ laser, semiconductor
lasers, etc. The advent of the laser also contributed greatly to the development
of fiber optics and experimental quantum optics. However, in this
section we will restrict ourselves to nonlinear optics only.
Lasing increases the intensity of light, and the output of a laser
hardly diverges. Thus, the advent of lasers allowed us to apply extremely
high electric fields (of the order of $10^{6}$ volts/m) to a medium
and to investigate the effect of the propagation of an intense electromagnetic
wave (laser) through a medium. Such investigations led to the birth
of a new field of optics, known as nonlinear optics, where, due to
the high intensity of the incident wave, the linear relation between
the polarization (dipole moment per unit volume) and the applied electric
field gets modified and we obtain a nonlinear relation. In fact, the
first experiment that clearly demonstrated a nonlinear optical phenomenon
was performed only a year after the realization of the laser by
Maiman. Specifically, in 1961, Franken, Hill, Peters, and Weinreich
at the University of Michigan reported generation of light of wavelength
347 nm, when the output of a ruby laser (694 nm) was incident on a
quartz crystal \cite{franken1961generation}. Thus, light of frequency
$2\omega$ was generated from the incident light having frequency
$\omega.$ This process is known as second harmonic generation, and
it defines a typical nonlinear optical phenomenon as in the normal
situation (in the regime of linear optics) wavelength of the incident
light would not have changed. This process can be used to generate
blue light by passing a red laser beam through a nonlinear crystal.
This often happens inside blue laser pointers. The presence of
a small quadratic term in the optical polarizability of a nonlinear
optical crystal led to second harmonic generation. Soon after demonstrating
second harmonic generation, the same group of scientists recognized
that this small quadratic term in the optical polarizability would
also lead to the mixing of light from two different sources with two different
frequencies \cite{bass1962optical}. In other words, if we send two
light waves at two frequencies $\omega_{1}$ and $\omega_{2}$, then
the crystal can mix these two frequencies to generate light of frequency
$\omega_{1}+\omega_{2}$ (known as sum frequency generation) and/or
$\omega_{1}-\omega_{2}$ (known as difference frequency generation).
The process is now usually referred to as frequency mixing, but in
the pioneering work of Bass et al. \cite{bass1962optical}, in which
sum frequency generation was experimentally demonstrated in 1962,
it was referred to as optical mixing. It is easy to recognize that
second harmonic generation is a special case of sum frequency generation
where $\omega_{1}=\omega_{2}.$ Similarly, we can visualize third
or higher harmonic generation process as a nonlinear optical process
where higher harmonics are generated by frequency mixing. The frequency-mixing
process is often used to convert the frequency of a given light beam
to the 800 nm-1000 nm region, where detectors perform with the highest
efficiency. The applicability of frequency mixing in general and second
harmonic generation in particular is huge. For example, second harmonic
generation imaging microscopy has been used in the diagnostics of
diseases \cite{campagnola2011second}, imaging cells and extracellular
matrix in vivo \cite{zoumi2002imaging} and in determination of ovarian
and breast cancers \cite{tilbury2015applications}. Further, we would
like to mention another interesting nonlinear optical process, subharmonic
generation, in which a strong pump beam produces two beams of frequencies
lower than the original beam. A particular case of subharmonic generation
is spontaneous parametric down conversion (SPDC) process \cite{pathak2016optical}.
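Energy conservation makes these mixing processes easy to check numerically: $\omega_{3}=\omega_{1}\pm\omega_{2}$ translates into $1/\lambda_{3}=1/\lambda_{1}\pm1/\lambda_{2}$ for the wavelengths. A small sketch using the ruby-laser numbers quoted above:

```python
# Energy conservation in frequency mixing: w3 = w1 + w2 (sum frequency)
# or w3 = w1 - w2 (difference frequency). In terms of wavelengths this
# reads 1/lam3 = 1/lam1 +/- 1/lam2.

def sum_frequency(lam1_nm, lam2_nm):
    """Wavelength (nm) generated by sum-frequency mixing."""
    return 1.0 / (1.0 / lam1_nm + 1.0 / lam2_nm)

def difference_frequency(lam1_nm, lam2_nm):
    """Wavelength (nm) generated by difference-frequency mixing (lam1 < lam2)."""
    return 1.0 / (1.0 / lam1_nm - 1.0 / lam2_nm)

# Second harmonic generation is sum-frequency mixing with lam1 == lam2:
# the 694 nm ruby-laser line doubles to 347 nm, as in Franken's experiment.
shg = sum_frequency(694.0, 694.0)
print(shg)  # 347.0
```

The same two functions cover sum frequency generation, difference frequency generation, and (as the special case shown) second harmonic generation.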
Here, it would be apt to note that the nonlinear optical phenomena
may happen in both classical and quantum worlds.\footnote{Classical nonlinear optics is discussed very frequently and can be
found in many text books. To obtain an idea of quantum nonlinear optics
interested readers may look at \cite{hanamura2007quantum,peyronel2012quantum,chang2014quantum}
and references therein.} In other words, at the output of a nonlinear optical crystal, one
may obtain classical or non-classical light depending upon other conditions
of the experiment. Specifically, type I and type II SPDC processes
are primarily used to yield entangled states of light, which have
been successfully used in realizing various ideas of quantum information
processing and quantum communication. Considering its wide applicability,
entangled states\footnote{An entangled state is a quantum state of a composite system which
cannot be expressed as a tensor product of the component systems (sub-systems)
that constitute the composite system. Specifically, if the composite
state $|\psi\rangle_{AB}\neq|\psi\rangle_{A}\otimes|\psi\rangle_{B}$,
where $|\psi\rangle_{A}$ and $|\psi\rangle_{B}$ represent arbitrary
states of subsystem $A$ and $B$, then $|\psi\rangle_{AB}$ is considered
to be entangled, otherwise it is called separable. Thus, a two photon
state $|\psi\rangle_{AB}=\frac{|HH\rangle_{AB}+|VV\rangle_{AB}}{\sqrt{2}}$
is entangled, but the state $|\psi\rangle_{AB}=\frac{|HH\rangle_{AB}+|VH\rangle_{AB}}{\sqrt{2}}=\frac{|H\rangle_{A}+|V\rangle_{A}}{\sqrt{2}}\otimes|H\rangle_{B}$
is separable. Here $|H\rangle$ and $|V\rangle$ denote horizontal
and vertical states of polarization, respectively.} are discussed separately in Sec. \ref{subsec:Entangled-state-and}.
Here, we just note that in the type I SPDC process two nonlinear
crystals are used in such a way that the photons generated from these
two crystals have orthogonal polarizations, and therefore down
conversion occurring in either of the crystals produces entangled photons
of the same polarization; whereas the type II SPDC process
uses a single nonlinear crystal and entangled photons of orthogonal
polarizations are generated (for an elaborate discussion see \cite{pathak2016optical}).
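For two-qubit pure states like those in the footnote above, separability can be checked mechanically: arrange the four amplitudes as a $2\times2$ matrix and count its nonzero singular values (the Schmidt rank); rank one means a product state. A small sketch (the basis ordering $|HH\rangle,|HV\rangle,|VH\rangle,|VV\rangle$ is a convention of this illustration):

```python
import numpy as np

def schmidt_rank(amps, tol=1e-12):
    """Schmidt rank of a two-qubit pure state.

    `amps` is the 2x2 matrix c[i, j] of amplitudes in the product basis
    |i>_A |j>_B; the number of singular values above `tol` is the
    Schmidt rank (1 => separable, >1 => entangled)."""
    s = np.linalg.svd(np.asarray(amps, dtype=complex), compute_uv=False)
    return int(np.sum(s > tol))

# (|HH> + |VV>)/sqrt(2): Schmidt rank 2, hence entangled.
bell = np.array([[1, 0], [0, 1]]) / np.sqrt(2)

# (|HH> + |VH>)/sqrt(2) = [(|H> + |V>)/sqrt(2)]_A |H>_B: rank 1, separable.
product = np.array([[1, 0], [1, 0]]) / np.sqrt(2)

print(schmidt_rank(bell), schmidt_rank(product))  # 2 1
```

This reproduces the footnote's two examples: the first state cannot be factored, while the second one can.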
Before we proceed to describe the applications of entangled states,
we would like to mention another common nonlinear optical phenomenon
known as four wave mixing (FWM). In the quantum description of the
FWM process, simultaneous annihilation of two pump photons (which
may have different frequencies) creates a signal-idler photon pair.
This nonlinear optical phenomenon is of particular interest as its
applications have been reported in various contexts (\cite{dutt2015chip,reimer2014integrated,glasser2012stimulated,wu2004ultraviolet,fiorentino2002all,ding2014observation,agha2012low,zhang2013coherent}
and references therein). Specifically, its applications are reported
for optical parametric oscillators (OPOs) \cite{dutt2015chip}, optical
filtering \cite{ding2014observation}, low noise chip-based frequency
converter \cite{agha2012low}, single-photon sources for quantum cryptography
\cite{reimer2014integrated,wu2004ultraviolet,fiorentino2002all},
frequency-comb sources \cite{reimer2014integrated}, stimulated generation
of superluminal light pulses \cite{glasser2012stimulated}, etc. Further,
several useful optical phenomena (e.g., wavelength conversion, signal
regeneration and tunable optical delay) have been observed in silicon
nanophotonic waveguides using FWM (see \cite{liu2010mid} and references
therein). FWM microscopy is also used recently to study the nonlinear
optical responses of nanostructures \cite{wang2011four}.
With the advent of quantum information processing, the challenge is
to perform nonlinear optical operations with a few photons or at low
intensity. This is so because the optical realization of ${\rm CNOT}$
and other similar quantum gates requires nonlinearity, but quantum
information processing is performed with single photons or a few photons.
This is challenging because conventional nonlinear optics is useful
only when the incident beam is sufficiently intense. The requirement
led to studies on nonlinear optics with low intensity sources \cite{harris1999nonlinear}
and to a method that circumvents the problem by using linear optical elements
and a set of detectors (the KLM approach) \cite{knill2001scheme}. The
KLM approach works because the quantum measurement itself is a nonlinear
process.
It is out of the scope of the present work to review all the nonlinear
optical phenomena and their applications. However, we would like to
mention that the following nonlinear optical phenomena deserve special
attention because of their applications listed against their names.
\begin{enumerate}
\item Optical parametric amplification (OPA) has applications in linear
optical amplifier, transparent wavelength conversion, return-to-zero
(RZ)-pulse generation, all-optical limiters, etc. (see \cite{hansryd2002fiber}
for a review).
\item Optical parametric oscillation (OPO) has applications in quantum noise
reduction \cite{fabre1989noise}, frequency conversion \cite{johnson1995narrow},
twin-beam generation \cite{gao1998generation}, etc.
\item Optical rectification (OR) has applications in the generation of terahertz
pulses \cite{schneider2006generation}.
\item The optical Kerr effect has applications in optical pulse compression,
mode locking of lasers, nonlinear intensity-dependent discriminators, and
picosecond time-resolved emission and absorption spectroscopy \cite{sala1975optical}.
\item Self-phase modulation (SPM) has applications in designing schemes
for all-optical data regeneration \cite{mamyshev1998all}.
\item Cross-phase modulation (XPM) has applications in quantum computation
\cite{shapiro2007continuous}, optical switching \cite{larochelle1990all},
etc.
\item Cross-polarized wave generation (XPW) has applications in the designing
of efficient temporal cleaner for femtosecond pulses \cite{jullien2006highly}.
\item Optical phase conjugation has applications in adaptive optics, lens-less
imaging, phase-conjugate resonators, image processing, associative
memory, etc. \cite{giuliano1981applications,pepper1986applications}.
\end{enumerate}
\section{Characterization of nonclassical light\label{sec:Characterization-of-nonclassical} }
Here we aim to briefly mention the concepts that are used to identify
(characterize) a radiation field having nonclassical characteristics.
We have already mentioned that $P$-function cannot be measured directly.
The same is true for the Wigner function, which is also a quasiprobability
distribution and is defined for a quantum state $\rho$ in the quadrature
phase space $\left(q,p\right)$ as
\begin{equation}
\begin{array}{lcl}
W\left(q,p\right) & = & \frac{1}{2\pi\hbar}\int d\xi\langle q-\frac{\xi}{2}|\rho|q+\frac{\xi}{2}\rangle\exp\left(\frac{i\xi p}{\hbar}\right).\end{array}\label{eq:wigner}
\end{equation}
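Before proceeding, we note that for Fock states $|n\rangle$ the integral above has the well-known closed form $W_{n}(q,p)=\frac{(-1)^{n}}{\pi}e^{-(q^{2}+p^{2})}L_{n}\left(2(q^{2}+p^{2})\right)$ (with $\hbar=1$), which makes the sign structure easy to explore numerically; a small sketch:

```python
import numpy as np
from scipy.special import eval_laguerre

def wigner_fock(n, q, p):
    """Closed-form Wigner function of the Fock state |n> (hbar = 1):
    W_n = ((-1)^n / pi) * exp(-(q^2 + p^2)) * L_n(2*(q^2 + p^2))."""
    s = 2.0 * (q**2 + p**2)
    return ((-1) ** n / np.pi) * np.exp(-s / 2.0) * eval_laguerre(n, s)

# At the phase-space origin W_n(0,0) = (-1)^n / pi: positive for the
# vacuum (a coherent state with alpha = 0) and negative for |1>
# (a single-photon-added coherent state with alpha = 0).
print(wigner_fock(0, 0.0, 0.0))  # +1/pi, about +0.318
print(wigner_fock(1, 0.0, 0.0))  # -1/pi, about -0.318
```

The two evaluations mirror panels (a) and (b) of Fig. \ref{fig:fig2wigner}: the vacuum is positive everywhere, while $|1\rangle$ dips below zero near the origin.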
Negative values of the Wigner function characterize a nonclassical state. See Fig. \ref{fig:fig2wigner},
where we have plotted the Wigner function of the coherent state in Fig. \ref{fig:fig2wigner}
(a) and the same for PACS in Fig. \ref{fig:fig2wigner} (b). One
can clearly see that the Wigner function of the coherent state is always positive,
as the state is classical, but the Wigner function of PACS is negative
in some places, and the observed negativity works as a witness of nonclassicality
for PACS. Thus, the Wigner function can be used as a witness of nonclassicality,
but there does not exist a general procedure for the measurement of
Wigner function. More precisely, there are a few papers \cite{lutterbach1997method,banaszek1999direct,bertet2002direct}
that report the determination of nonclassical characteristics of radiation
field (negative regions in the Wigner function) by direct measurement
of the Wigner function, but the methods adopted there work for particular
cases only and there does not exist any general method for the direct
determination of Wigner function. So we characterize nonclassicality
through other operational criteria for nonclassicality. Several experiments
are routinely performed to characterize nonclassical light. A nice
list of early experiments on nonclassical light is provided in Table
1 of Ref. \cite{teich1989squeezed}, which also provides a lucid introduction
to the experimental techniques used in those pioneering experiments.
Without elaborating on all the techniques here, we may mention that
in one of the pioneering experiments on quantum optics, in 1977, Kimble,
Dagenais, and Mandel demonstrated antibunching in resonance fluorescence
\cite{kimble1977photon}. Subsequently, in 1983, Short and Mandel
\cite{short1983observation} used the resonance fluorescence again
to demonstrate the existence of sub-Poissonian photon statistics,
and in 1985, quadrature squeezing of vacuum was shown using non-degenerate
FWM process in Na atoms \cite{slusher1985observation} by using an
idea proposed in 1979 by Yuen and Shapiro \cite{yuen1979generation}.
Thus, antibunched states were prepared in 1977, but it took another
8 years to generate a squeezed state. More recently,
squeezed state generation in optomechanical systems \cite{pirkkalainen2015squeezing,rashid2016experimental}
and higher order correlations in various states of the radiation field
\cite{allevi2012measuring,avenhaus2010accessing,hamar2014non} have
also been reported. The set of experiments indicates the possibility
of characterizing higher order squeezed light. Further, several closely
related experiments having applications in realizing various schemes
for quantum communication have also been performed in the recent past
(see \cite{pathak2016optical} and references therein).
\begin{figure}
\begin{centering}
\includegraphics[scale=0.6]{TMfig2.pdf}
\par\end{centering}
\caption{\label{fig:fig2wigner}(Color online) Wigner function of (a) coherent
state and (b) photon added coherent state are shown. Here, we have
chosen the coherent state parameter $\alpha=0$. Therefore, (a) actually
corresponds to vacuum state, which is classical and (b) corresponds
to Fock state $|1\rangle$, which is nonclassical. }
\end{figure}
Here, it would be relevant to note that realization of BB84 and various
other protocols of quantum cryptography requires single-photon sources,
but there does not exist any on-demand single-photon source. However,
there exist various approximate single-photon sources, and all of
them are expected to show antibunching \cite{pathak2010recent}. As
a consequence, it has become quite relevant to check whether a given
state of light is antibunched. This characterization is usually done
using the famous Hanbury Brown and Twiss (HBT) experiment \cite{brown1956test}.
In this experiment, the light from a source is made to fall on a
beam-splitter, and two detectors (${\rm D_{1}}$ and ${\rm D_{2}}$)
are placed in the two output ports of the beam-splitter at equal distance
from the beam-splitter. The outputs of the detectors are connected
to a correlator or coincidence counter, which records both the number
of counts and the time delay between the clicks on two different detectors.
Specifically, the plots generated from the correlation counts reveal
the probability of simultaneous clicks of two detectors compared to
the consecutive clicks on the same detectors for different values
of delay.
This can generate three possible scenarios captured in the correlation
function. In the first case, when the probability of simultaneous
clicks is greater than that of the consecutive clicks (clicks with
a delay), the state of light is considered as bunched. On the contrary,
when simultaneous clicks are less probable than the consecutive clicks,
the light is characterized to be in the antibunched state. The correlation
function for the first (second) case shows a peak (dip) at zero delay
time. The third case is that of the equally probable consecutive and
simultaneous clicks, which corresponds to a laser source (coherent
state). The correlation function remains unchanged for every value
of delay time. A conventional light source (namely, filament bulb)
produces bunched states of light as they usually generate multiphoton
pulses. It is worth pointing out here that the photons are simultaneous
only if they reach the detectors within the resolution time (dead
time) of the detectors, which is about 20-50 ns.
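The three HBT scenarios correspond to the second-order correlation $g_{2}(0)=\langle n(n-1)\rangle/\langle n\rangle^{2}$ being equal to, greater than, or less than one. A sketch computing it from truncated photon-number distributions (the cutoff at 80 photons and the mean photon number 2 are illustrative choices):

```python
import numpy as np
from math import factorial

def g2_from_pn(pn):
    """g2(0) = <n(n-1)> / <n>^2 for a photon-number distribution pn[n]."""
    n = np.arange(len(pn))
    mean = np.sum(n * pn)
    return np.sum(n * (n - 1) * pn) / mean**2

N, mu = 80, 2.0
n = np.arange(N)
# Coherent light: Poissonian P(n) = e^{-mu} mu^n / n!
poisson = np.exp(-mu) * mu**n / np.array([factorial(k) for k in n], float)
# Thermal light: geometric P(n) = mu^n / (1 + mu)^{n+1}
thermal = mu**n / (1.0 + mu) ** (n + 1)
# Single photon: all weight on n = 1.
fock1 = np.zeros(N); fock1[1] = 1.0

print(g2_from_pn(poisson))  # ~1.0 (coherent: unbunched)
print(g2_from_pn(thermal))  # ~2.0 (thermal: bunched)
print(g2_from_pn(fock1))    # 0.0  (single photon: antibunched)
```

The flat correlation of a laser, the zero-delay peak of a thermal lamp, and the zero-delay dip of a single-photon source correspond to these three values.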
This coincidence-count-based scheme can be easily modified to design
a scheme for detecting quadrature squeezing. Interestingly, in contrast
to antibunching, squeezing is a phase-sensitive property. This is
why a strong laser beam (the local oscillator) is made incident on
the second input port of the beam splitter used in the HBT experiment.
The input light incident on the first input port of the beam
splitter mixes with the local oscillator at the beam splitter, and
at the output the difference of the currents from the two detectors
is used to observe squeezing by varying the phase of the local oscillator.
When a local oscillator of the same frequency as the beam under
consideration is used, it is referred to as the homodyne detection,
while different frequency corresponds to heterodyne detection.
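The phase sensitivity shows up directly in such homodyne data: for squeezed vacuum the measured quadrature variance varies with the local-oscillator phase $\theta$ as $V(\theta)=e^{-2r}\cos^{2}\theta+e^{2r}\sin^{2}\theta$, with the vacuum variance normalized to 1. A quick sketch with an illustrative squeezing parameter $r=0.5$:

```python
import numpy as np

r = 0.5                      # squeezing parameter (illustrative value)
theta = np.linspace(0.0, np.pi, 1001)
# Variance of the measured quadrature as the local-oscillator phase sweeps.
var = np.exp(-2 * r) * np.cos(theta) ** 2 + np.exp(2 * r) * np.sin(theta) ** 2

# Sweeping the phase reveals both quadratures: the variance dips below
# the vacuum level (1) at theta = 0 and rises above it at theta = pi/2.
print(var.min())   # ~exp(-1) ~ 0.368, squeezed quadrature
print(var.max())   # ~exp(+1) ~ 2.718, anti-squeezed quadrature
```

The dip below 1 at some phase is precisely the experimental signature of quadrature squeezing in a balanced homodyne measurement.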
There is one more interesting phenomenon associated with a beam splitter:
when two indistinguishable single photons reach a beam splitter simultaneously,
due to their bosonic nature both of them take the same output port
of the beam splitter. This can be verified by photon number detection
in both the output ports. This phenomenon is known
as the Hong-Ou-Mandel effect.
\section{Applications of nonclassical light\label{sec:Applications-of-nonclassical}}
In this section, we aim to discuss applications of different types
of nonclassical light (e.g., squeezed, antibunched and entangled states
of light) with a brief introduction to the corresponding history.
To begin with, let us briefly review the early history of squeezed
state and its modern applications.
\subsection{Squeezed state and its applications\label{subsec:Squeezed-state-and} }
We have already mentioned that squeezed state was discovered by Kennard
in 1927 \cite{kennard1927quantenmechanik}. An extremely interesting
history of this discovery can be found at Sec. 4 of \cite{nieto1997discovery}.
Here we would like to narrate the story in brief. Earle Hesse Kennard
was an assistant professor at Cornell University, and he was granted
a sabbatical in 1926. In October 1926, he reached the Institut ${\rm f\ddot{u}r}$
Theoretische Physik, University of ${\rm G\ddot{o}ttingen}$, where
Max Born used to work at that time. There Kennard learned the matrix
mechanics of Heisenberg and the wave mechanics of Schr\"{o}dinger. This
was a very productive period in physics; during this time, Heisenberg
submitted his famous paper on the uncertainty relations and went
to Copenhagen to work with Bohr. Almost immediately after that (on
March 7, 1927), Kennard also reached Copenhagen to work with Bohr.
At Copenhagen, he completed the manuscript \cite{kennard1927quantenmechanik}
that reported the discovery of the squeezed states. In that paper
he acknowledged the help received from Bohr and Heisenberg. It was a
great contribution, but its importance was not properly understood
until experimental quantum optics took shape. Although squeezed state
was discovered in 1927, the term ``squeeze'' was coined much later
in 1979 in the context of increased sensitivity of an antenna designed
for gravitational-wave detection \cite{Hollenhorst1979quantum}.
It may be interesting to note that in \cite{Hollenhorst1979quantum}
terms like squeeze operator and squeeze factor were used, but squeezed
state was not used explicitly. Further, more interestingly, in 2016,
the existence of gravitational waves was confirmed in the famous
LIGO experiment, and squeezed states have been used to enhance the
sensitivity of such detectors, the very context in which the term
squeeze first appeared in the world of quantum optics
\cite{aasi2013enhanced,grote2013first}. The essential physics of
using squeezed state for the detection of gravitational wave was known
for long. The activities in this direction were actually initiated
in the 1980s \cite{schechter1986searching}, through the seminal proposal
of Caves \cite{caves1981quantum}. In a lucid manner, the procedure
for gravitational wave detection can be visualized as follows. Consider
that we have a Michelson interferometer and a laser as a source of
light. Now, if a gravitational wave originating from a supernova explosion
or black-hole merger causes vibration of a mirror of the Michelson
interferometer, then that would cause modulation of the reflected
laser light from that mirror and consequently the interference pattern
would be changed. The change in interference pattern can be detected
by the appropriate detectors, but in the usual situation (i.e., when
no squeezed light is used), the sensitivity of the interferometer
would be limited by the fluctuations of the vacuum state entering
through the unused port of the interferometer. Specifically, the sensitivity
limit arises because of two types of noise, photon-counting noise and radiation
pressure fluctuations, which originate due to fluctuations in the
two different quadratures associated with the vacuum that enters through
the unused input port of the interferometer. To beat this sensitivity
limit, squeezed vacuum state is injected into the system through the
otherwise unused port \cite{aasi2013enhanced,grote2013first,teich1990squeezed}.
This would reduce one of the above mentioned noises, depending upon
which quadrature of the squeezed vacuum state is squeezed. Following
a similar argument, the sensitivity of other devices can also be improved
using squeezed light. To be precise, quantum fluctuations limit the
sensitivity of measuring devices. However, this quantum uncertainty
can be circumvented by using the quiet component (squeezed quadrature,
say $X$ quadrature) of the squeezed state of a radiation field and
by using a detection technique that is insensitive to the noise present
in the other quadrature ($Y$ quadrature in our case) \cite{teich1990squeezed}.
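The gain from injecting squeezed vacuum can be put in numbers using the textbook relations: a squeezing parameter $r$ reduces the noise variance in the quiet quadrature by $e^{-2r}$, i.e., by $10\log_{10}e^{2r}\approx8.686\,r$ dB. A small sketch (the 10 dB figure is an illustrative value, not one from the cited experiments):

```python
import math

def squeezing_db(r):
    """Noise-variance reduction in dB for squeezing parameter r: 10*log10(e^{2r})."""
    return 10.0 * math.log10(math.exp(2.0 * r))

def r_from_db(db):
    """Invert: squeezing parameter needed for a given dB of noise reduction."""
    return db * math.log(10.0) / 20.0

r = r_from_db(10.0)            # ~1.151 for 10 dB of squeezing
amplitude_gain = math.exp(r)   # amplitude-noise improvement, sqrt(10) ~ 3.16
print(round(r, 3), round(amplitude_gain, 2))
```

In this rough picture, 10 dB of squeezing in the relevant quadrature shrinks the quantum-noise amplitude by a factor of about 3, with the penalty pushed into the quadrature the detection scheme ignores.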
Another interesting application of squeezed state is an optical waveguide
tap which was introduced by Shapiro in 1980 \cite{shapiro1980optical}.
In an optical waveguide tap, squeezed state is sent through a waveguide
which is used to tap another waveguide that carries the actual information.
The use of squeezed state helps us to obtain a very high signal to
noise ratio (SNR).
Squeezed state can also be used for teleportation of coherent states
\cite{furusawa1998unconditional} and for continuous variable quantum
key distribution (CVQKD) \cite{hillery2000quantum} in particular,
and quantum communication in general \cite{braunstein2005quantum,braunstein2012quantum}.
Out of these interesting applications, CVQKD needs special mention
as it can provide unconditional security to the transmitted information.
A detailed description of the scheme proposed by Hillery can be found
in \cite{hillery2000quantum}; here we briefly note that in Hillery's
work, a quantum state is viewed as a point in a phase space defined
by $X$ and $Y$ quadratures (axes) and the point is surrounded by
an error box. The error box would represent the quantum fluctuation.
For coherent state, it would be a circle of radius $r$, whereas for
a minimum uncertainty squeezed state (a state with $(\Delta X)^{2}(\Delta Y)^{2}=\frac{1}{4},$
but $(\Delta X)^{2}\neq(\Delta Y)^{2}$) it would become an ellipse
and thus allow us to define one quadrature in a precise manner at
the cost of precision in the other quadrature (cf.
Fig. \ref{fig:-HOM-HOA} a, where squeezing in $X$ quadrature is
witnessed for PACS defined in Eq. (\ref{eq:pacs}) through the reduction
of $(\Delta X)^{2}$ below $\frac{1}{2}$, which is the value of $(\Delta X)^{2}$
for the coherent state). Now if Alice wants to distribute a key to
Bob in a secret manner using a squeezed state, she may follow the
strategy suggested by Hillery. The sender (Alice)
and receiver (Bob) divide both axes into segments (bins) of equal
sizes, which are essentially less than $\frac{1}{2}$ in size. Each
bin corresponds to a symbol, and the number of allowed bins depends
on the length of the major axis. As a specific case, we may consider
that only 2 bins are allowed and they correspond to 0 and 1, which
can be chosen randomly on either of the axes. Specifically, Alice
can encode a bit value 0 in two different ways, i.e., she can prepare
a quantum state centered at $X$-axis ($Y$-axis) in the first bin
squeezed in $X$ ($Y$) quadrature. This state has well defined $X$
($Y$) value and $Y$ ($X$) value is poorly defined. Independently,
Bob is also allowed to measure one of the quadratures at random using
the homodyne detection technique discussed in Sec. \ref{sec:Characterization-of-nonclassical}.
At the end of this step, Alice and Bob reconcile the choices of quadratures
they have made to encode and measure. They discard all the cases,
except where they have made the same choice. Using this method Alice
and Bob share a symmetric key, whose security is ensured by checking
half of the shared symmetric key.
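Hillery's sifting step can be mimicked with a toy Monte Carlo: Alice encodes each bit as the centre of a bin in a randomly chosen quadrature with narrow (squeezed) noise, Bob measures a randomly chosen quadrature, and only rounds with matching choices are kept. All numbers below (bin centres $\pm0.5$, noise width 0.05, 2000 rounds) are illustrative, not from the original proposal:

```python
import numpy as np

rng = np.random.default_rng(7)
rounds = 2000
centres = {0: -0.5, 1: +0.5}               # one bin per bit value (toy choice)

alice_basis = rng.integers(0, 2, rounds)   # 0 -> X quadrature, 1 -> Y
alice_bits = rng.integers(0, 2, rounds)
bob_basis = rng.integers(0, 2, rounds)

# Bob's outcome when bases match: bin centre plus narrow squeezed noise.
# (For simplicity the mismatched rounds get the same model; they are
# discarded during reconciliation anyway.)
outcomes = np.array([centres[b] for b in alice_bits]) + rng.normal(0.0, 0.05, rounds)
bob_bits = (outcomes > 0.0).astype(int)    # decode by bin boundary at 0

keep = alice_basis == bob_basis            # reconciliation of choices
sifted_alice = alice_bits[keep]
sifted_bob = bob_bits[keep]

print(keep.mean())                               # ~0.5 of rounds survive sifting
print(np.array_equal(sifted_alice, sifted_bob))  # keys agree on sifted rounds
```

Because the squeezed noise is much narrower than the bin, Bob decodes Alice's bit essentially without error whenever the quadrature choices coincide; roughly half of the rounds survive the sifting, as in BB84-style reconciliation.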
As this review is not focused on squeezed states alone, we could not
describe all the aspects and applications of the squeezed state. Interested
readers may obtain more information about its interesting features
and applicability in classic reviews \cite{walls1983squeezed,loudon1987squeezed,nieto1997discovery}
and a few relatively new reviews \cite{dodonov2002nonclassical,andersen201630}.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.6]{TMfig.pdf}
\par\end{centering}
\caption{\label{fig:-HOM-HOA}(Color online) The variation of witnesses of
squeezing and antibunching is shown for the photon added coherent
state with the coherent state parameter $\alpha$ in (a) and (b),
respectively. Here, $m$ corresponds to the number of photons added
to the coherent state.}
\end{figure}
\subsection{Antibunched state and its applications\label{subsec:Antibunched-state-and} }
To lucidly visualize the phenomenon of antibunching, we may note that
sometimes (in some states of the radiation field) photons prefer to travel
alone (one by one) and that leads to antibunching. Specifically, if
we find a state of light in which photons prefer to travel alone in
comparison to traveling with another photon then we refer to the state
of light as antibunched and the corresponding phenomenon as the photon
antibunching (see Chapter 8 of \cite{ghatak2015light}). Antibunched
light is nonclassical light as antibunched states don't have any classical
counterpart. This nonclassical state has been investigated for a long
time \cite{kimble1977photon,loudon1980non,teich1990squeezed}. Recently,
on-chip generation of antibunched light has been reported in Ref.
\cite{khasminskaya2016fully}. Similar to the notion of antibunching,
a notion of bunched states of light may be introduced as a state of
light in which photons prefer to travel in the company of other photons.
Sunlight and light received from the lamps used at home are in a bunched
state. In addition, there are some sources of light (like lasers)
which show no preference for traveling alone or for traveling
in groups. Such a state of light is considered coherent. A simple
experiment designed by Hanbury Brown and Twiss (HBT), astronomers
interested in the measurement of the diameter of stars, can be used to determine
whether the light coming from a source is antibunched, bunched or
coherent. The experiment is briefly described in Sec. \ref{sec:Characterization-of-nonclassical}.
Usually, the possibility of observing antibunching is checked using the
following criterion: $g_{2}(0)<1,$ where $g_{2}(0)=\frac{\langle a^{\dagger}(t)a^{\dagger}(t)a(t)a(t)\rangle}{\langle a^{\dagger}(t)a(t)\rangle\langle a^{\dagger}(t)a(t)\rangle}.$
A coherent state always yields $g_{2}(0)=1$, and we say that the
light is unbunched, while a thermal state gives $g_{2}(0)>1,$ which
implies a bunched state of light where photons prefer to travel together.
In Fig. \ref{fig:-HOM-HOA} (b), one can easily observe that the PACS defined
by Eq. (\ref{eq:pacs}) is antibunched. Antibunched states are of
interest for various reasons. Firstly, they show a unique manifestation
of nonclassicality. To illustrate this point, we may note that Fock
states, which are considered to be the most nonclassical states, show antibunching,
but they do not show squeezing.
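The antibunching of PACS visible in Fig. \ref{fig:-HOM-HOA} (b) can be reproduced numerically: for real $\alpha$, the unnormalized Fock amplitudes of $a^{\dagger m}|\alpha\rangle$ are $c_{k+m}=\alpha^{k}\sqrt{(k+m)!}/k!$ (up to a global factor), from which $g_{2}(0)$ follows directly. A sketch (truncation at 60 photons is an illustrative choice):

```python
import numpy as np
from math import factorial, sqrt

def g2_pacs(alpha, m, nmax=60):
    """g2(0) of the photon-added coherent state a^dag^m |alpha> (alpha real).

    Unnormalized Fock amplitudes: c_{k+m} = alpha^k * sqrt((k+m)!) / k!;
    the photon-number distribution is |c_n|^2 after normalization."""
    p = np.zeros(nmax)
    for k in range(nmax - m):
        c = alpha**k * sqrt(factorial(k + m)) / factorial(k)
        p[k + m] = c * c
    p /= p.sum()
    n = np.arange(nmax)
    mean = np.sum(n * p)
    return np.sum(n * (n - 1) * p) / mean**2

# Adding a single photon (m = 1) to a weak coherent state gives a state
# close to |1>, hence strongly antibunched; m = 0 recovers the plain
# coherent state with g2(0) = 1.
print(g2_pacs(0.5, 1))   # well below 1
print(g2_pacs(0.5, 0))   # 1.0
```

For $m=0$ the distribution is Poissonian and $g_{2}(0)=1$, while photon addition pushes $g_{2}(0)$ below one, in line with the antibunching witness plotted in the figure.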
Antibunching is closely related to sub-Poissonian photon statistics.
Specifically, for a short counting time, the presence of antibunching
would ensure the presence of sub-Poissonian photon number distribution
and vice versa \cite{teich1990squeezed}. The sub-Poissonian photon
statistics is already defined above through the criterion $\left(\Delta N\right)^{2}<\bar{N}.$
As for a coherent (Poissonian) state we obtain $\left(\Delta N\right)^{2}=\bar{N},$
sub-Poissonian photon statistics essentially represent a state where
the fluctuations in photon number are less than those in the most classical
(coherent) state. Thus, it may be referred to as a photon number
squeezed state. In the context of the applications of squeezed states,
we have already discussed how squeezing in one of the quadratures
helps us in performing accurate measurements. Following the same argument,
we may say that the photon number squeezed states may be useful in
performing precise measurements where the intensity of the incident
beam (the number of photons present in the beam) matters. A set of
such applications is discussed in \cite{teich1989squeezed,teich1990squeezed}.
Here we briefly note that sub-Poissonian light may be used to compare
the roles of photon noise (which is reduced in the case of the sub-Poissonian
light), retinal noise and neural noise in the visual response at threshold.
Specifically, in our retina, in response to light, ganglion cells
generate and transmit neural signals to higher visual centers of the
brain using the optic nerves. The statistical nature of this signal
gets affected by photon noise, retinal noise and neural noise. By
using sub-Poissonian light, we can reduce the effect of photon noise
and thus isolate the effect of the other noises (for details see \cite{teich1989squeezed,teich1990squeezed,teich1982multiplication}
and references therein). Further, the use of sub-Poissonian light
as a stimulus in visual psychophysics may help us to understand the
process of seeing at the threshold \cite{teich1990squeezed,teich1982multiplication}.
Specifically, it may help us to understand what governs the uncertainties
that appear in the human visual response near the threshold of seeing.
In optical communication systems, there are various sources of noise,
including photon noise intrinsic to the source of light. Use of sub-Poissonian
light as a source can reduce this particular type of noise and thus
the errors caused due to this noise \cite{teich1989squeezed}. In
brief, the use of sub-Poissonian light helps us to improve the accuracy
of equipment whose sensitivity is restricted by the quantum
fluctuations in the number of photons present in the radiation field.
As mentioned above, for a short counting time, antibunching and sub-Poissonian
photon statistics are equivalent and thus, these applications of sub-Poissonian
light can also be viewed as applications of antibunched light. Further,
antibunching is reported to be useful in characterizing single-photon
sources \cite{pathak2007mathematical}.
Recently, antibunching has been reported theoretically in \cite{thapliyal2014higher,thapliyal2014nonclassical,pathak2013nonclassicality}
and experimentally in \cite{kimble1977photon,zwiller2001single,zhou2016strong,gulati2014generation,stevens2014third}.
Thus, this particular type of nonclassical light seems to be easily
achievable in many physical systems and has interesting applications
in various domains of physics.
\subsection{Quantum state engineering \label{subsec:Quantum-state-engineering} }
Until now we have seen that there are several applications of nonclassical
states in general and nonclassical light in particular. Thus, in short,
we can say that nonclassical states are at the heart of quantum optics.
The question is: how does one generate a desired nonclassical state? There
are various ways. For example, we may find a suitable Hamiltonian
$H$ and construct corresponding unitary time evolution operator $U(t)=\exp\left(-\frac{iHt}{\hbar}\right)$
that would lead to the desired nonclassical state $|\psi_{{\rm desired}}\rangle=U(t)|\psi_{{\rm initial}}\rangle$
after evolution of a given initial state $|\psi_{{\rm initial}}\rangle$
for time $t;$ it may also be constructed by performing an
appropriate measurement on one of the subsystems of an entangled
system, and thus compelling the other subsystem to collapse into the
desired nonclassical state \cite{Garraway1994generation,vogel1993quantum}.
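As a minimal illustration of the Hamiltonian route (a sketch, not taken from the cited works): the quadratic Hamiltonian $H=\frac{i\hbar\chi}{2}\left(a^{2}-a^{\dagger2}\right)$ gives $U(t)=\exp\left[\frac{r}{2}\left(a^{2}-a^{\dagger2}\right)\right]$ with $r=\chi t$, which is the standard squeeze operator, so evolving the vacuum yields squeezed vacuum. In a truncated Fock space (cutoff 40 and $r=0.5$ are illustrative choices, $\hbar=1$):

```python
import numpy as np
from scipy.linalg import expm

N, r = 40, 0.5
a = np.diag(np.sqrt(np.arange(1, N)), 1).astype(complex)   # annihilation operator
adag = a.conj().T

# U = exp(-iHt/hbar) for H = (i*hbar*chi/2)(a^2 - a^dag^2) with r = chi*t;
# this is the standard squeeze operator S(r).
U = expm(0.5 * r * (a @ a - adag @ adag))

vac = np.zeros(N, dtype=complex); vac[0] = 1.0
psi = U @ vac                                              # squeezed vacuum

X = (a + adag) / np.sqrt(2)
varX = (psi.conj() @ X @ X @ psi).real                     # <X> = 0 for this state
print(varX)    # ~0.5*exp(-2r) ~ 0.184, below the vacuum value 0.5
```

The quadrature variance drops to $\frac{1}{2}e^{-2r}$, below the vacuum value $\frac{1}{2}$, confirming that the chosen unitary evolution indeed engineers a nonclassical (squeezed) state.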
However, in practice, it is not possible to construct all Hamiltonians
or entangled states. This practical restriction encouraged scientists
to look for other routes to construct desired nonclassical states,
and the same led to a subject now known as quantum state engineering
which allows us to construct the desired nonclassical/quantum state.
In one of the pioneering works in this domain, in Ref. \cite{vogel1993quantum},
a prescription was provided for the construction of any desired nonclassical
state of the radiation field using a simple single mode Hamiltonian.
This interesting approach led to many new ideas of quantum state engineering.
For example, in \cite{janszky1995quantum} Janszky et al. provided
a recipe for the construction of a set of superposition states that
coincide with the Fock states for any practical purpose. Specifically,
it was shown that for all practical purposes, a Fock state $|n\rangle$
can be viewed as a superposition of $n+1$ coherent states having
small amplitudes. Earlier works of the same group \cite{janszky1993coherent,domokos1994role}
established that some nonclassical states can be arbitrarily well
approximated as a discrete superposition of coherent states. These
early efforts of quantum state engineering led to many recent and
interesting ideas. For example, as state engineering allows one
to create finite dimensional states of the radiation field, the process
of generation of a finite dimensional state is viewed as a pair of scissors which
can truncate the usually infinite dimensional Hilbert space into an
$N$ dimensional one; in other words, the scissors cut a finite dimensional
Hilbert space out of the infinite dimensional Hilbert space. Such a
process of truncating the Hilbert space is referred to as quantum
scissors \cite{ozdemir2001quantum,miranowicz2004dissipation,miranowicz2014phase}
(see Fig. \ref{fig:quantum-scissors} for a feeling of the task performed
by quantum scissors). We have already noted that any finite superposition
of Fock states is nonclassical (for a review on nonclassicality of
the finite dimensional states see \cite{miranowicz2003quantum,miranowicz2003quantumII}),
thus quantum scissors usually yield nonclassical states. For example,
finite dimensional coherent states are nonclassical and they may be
produced using quantum scissors implemented with beam splitters, detectors
and mirrors as shown in \cite{miranowicz2014phase}. Further, states
produced through quantum scissors may be used for teleportation of
single mode optical states \cite{babichev2003quantum} and qudit states
\cite{miranowicz2005optical}.
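Janszky et al.'s observation can be checked numerically: superposing $n+1$ coherent states placed on a small circle in phase space, with appropriately chosen phases, cancels all Fock components except those with $m\equiv n\ (\mathrm{mod}\ n+1)$, so for a small radius the result is essentially $|n\rangle$. The sketch below is our own illustration (the radius, cutoff, and target $n$ are arbitrary choices, not values from the cited works):

```python
import numpy as np
from math import factorial

def coherent(alpha, dim):
    """Truncated Fock-basis amplitudes of a coherent state |alpha>."""
    m = np.arange(dim)
    return np.exp(-abs(alpha)**2 / 2) * alpha**m / np.sqrt(
        np.array([factorial(k) for k in m], dtype=float))

n, r, dim = 3, 0.1, 25                               # target Fock state, circle radius, Fock cutoff
k = np.arange(n + 1)
alphas = r * np.exp(2j * np.pi * k / (n + 1))        # n+1 coherent amplitudes on a circle
phases = np.exp(-2j * np.pi * k * n / (n + 1))       # phases that cancel unwanted components
psi = sum(p * coherent(a, dim) for p, a in zip(phases, alphas))
psi /= np.linalg.norm(psi)

print(abs(psi[n])**2)   # overlap with |n>; approaches 1 as r -> 0
```

For $r=0.1$ the overlap $|\langle n|\psi\rangle|^{2}$ is indistinguishable from unity to many decimal places, confirming the approximation.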
\subsection{Many facets of quantum communication\label{subsec:Many-facets-of} }
In the physical implementations of various schemes of QKD and MDIQKD,
a weak coherent pulse (WCP) is used in place of a single-photon source;
see, for example, Fig. 1 of \cite{lo2012measurement} and Fig. 3 of
\cite{abruzzo2014measurement}. When a coherent pulse $|\alpha\rangle$
is transmitted through a neutral density filter, the average photon
number $|\alpha|^{2}$ is reduced without altering the Poissonian
form of the photon number distribution $P(n)$. In such cases, when
$|\alpha|^{2}<1$, the output contains no photons most of the time;
whenever a nonzero number of photons is present, it is most likely
a single photon, and the probability of obtaining the output in the
states $|2\rangle$, $|3\rangle$, etc., is negligibly small. Thus,
the output of a WCP can be used
as an approximate single-photon source. However, as far as the $P$-function-based
definition of nonclassical light is concerned, the output of a WCP
is still in a coherent state and is thus classical light. In the next
section, we will provide more examples of applications of classical
light; before that, we note that even in schemes of quantum communication
which technically require a nonclassical state of the radiation field
(the Fock state $|1\rangle$), we often use classical light (a WCP)
in its place.
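The photon-number statistics behind this approximation are Poissonian, $P(n)=e^{-|\alpha|^{2}}|\alpha|^{2n}/n!$. A quick computation for a mean photon number $|\alpha|^{2}=0.1$ (an illustrative value of our choosing, not one taken from the cited experiments) shows why the multi-photon contribution is negligible:

```python
from math import exp, factorial

mu = 0.1                                         # mean photon number |alpha|^2 of the WCP
P = lambda n: exp(-mu) * mu**n / factorial(n)    # Poissonian photon-number distribution

p0, p1 = P(0), P(1)
p_multi = 1 - p0 - p1                            # probability of 2 or more photons
print(f"P(0) = {p0:.4f}, P(1) = {p1:.4f}, P(n>=2) = {p_multi:.4f}")
print(f"multi-photon fraction of non-empty pulses: {p_multi / (1 - p0):.3f}")
```

Roughly nine out of ten pulses are empty, and only about 5\% of the non-empty pulses carry more than one photon, which is what makes the WCP a workable stand-in for a single-photon source.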
The simplest and most powerful use of the single-photon state (Fock
state $|1\rangle$, which is maximally nonclassical by our discussion
so far) appeared in 1984, when Bennett and Brassard \cite{bennett1984quantum}
proposed an unconditionally secure scheme for quantum key distribution
(QKD), now known as the BB84 protocol. In this scheme, Alice prepares
a sequence of single photons, each randomly prepared in one of the
following polarization states: $|H\rangle,\,|V\rangle,\,|\nearrow\rangle$
and $|\nwarrow\rangle$, where $|\nearrow\rangle=\frac{|H\rangle+|V\rangle}{\sqrt{2}}$
and $|\nwarrow\rangle=\frac{|H\rangle-|V\rangle}{\sqrt{2}}$ represent
photons polarized at $45^{\circ}$ and $135^{\circ}$ with respect
to the horizontal.
She transmits the sequence to Bob, who measures each state randomly
in either the $\left\{ |H\rangle,|V\rangle\right\} $ or the $\left\{ |\nearrow\rangle,|\nwarrow\rangle\right\} $
basis and then announces the basis used to measure each qubit. If
the basis used by Bob for a measurement is the same as that used by
Alice, they keep the corresponding qubit; otherwise, they discard
it. Now, Bob randomly selects half of the remaining qubits as verification
qubits and announces the outcomes of those measurements. Ideally (i.e.,
in the absence of any eavesdropper (Eve)), Bob's measurement outcomes
would perfectly match the states prepared by Alice, since they have
used the same basis. Any deviation indicates the presence of Eve or
noise; if a mismatch greater than a pre-computed tolerable rate is
found, they abort the protocol, otherwise they use the rest of the
qubits (after some post-processing) as the key for future communication.
The uncertainty principle restricts Eve from performing simultaneous
accurate measurements in the $\left\{ |H\rangle,|V\rangle\right\} $
and $\left\{ |\nearrow\rangle,|\nwarrow\rangle\right\} $ bases, as
the corresponding measurement operators do not commute (cf.
$\left[M_{H},M_{\nearrow}\right]\neq0$, where $M_{H}=|H\rangle\langle H|$
and $M_{\nearrow}=|\nearrow\rangle\langle\nearrow|$ are measurement
operators from the $\left\{ |H\rangle,|V\rangle\right\} $ and $\left\{ |\nearrow\rangle,|\nwarrow\rangle\right\} $
bases, respectively). As Eve does not know which qubit (photon) is
prepared in which basis, any eavesdropping effort by her would imply
measurement of some of the qubits in the wrong basis (i.e., in a basis
other than the basis in which it was prepared), and that would leave
detectable traces of eavesdropping. The security of this single-photon
(nonclassical-state) based scheme is unconditional, as it is obtained
from the fundamental laws of physics and not from the computational
difficulty of a mathematical problem. The unconditional security achieved
is a desired feature, but it is not achievable in the classical world.
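The sifting and eavesdropper-detection logic of BB84 can be illustrated with a toy Monte-Carlo simulation (our own sketch, mimicking only the protocol's classical bookkeeping, not real quantum hardware): an intercept-resend attack by Eve shows up as an error rate of about 25\% on the sifted bits.

```python
import random

def bb84(n_pulses, eve=False, seed=1):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_pulses)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_pulses)]  # 0: H/V, 1: diagonal
    channel = list(zip(alice_bits, alice_bases))

    if eve:  # intercept-resend: Eve measures in a random basis and resends
        resent = []
        for bit, basis in channel:
            eve_basis = rng.randint(0, 1)
            outcome = bit if eve_basis == basis else rng.randint(0, 1)
            resent.append((outcome, eve_basis))
        channel = resent

    bob_bases = [rng.randint(0, 1) for _ in range(n_pulses)]
    bob_bits = [bit if basis == bb else rng.randint(0, 1)
                for (bit, basis), bb in zip(channel, bob_bases)]

    # sifting: keep only the positions where Alice's and Bob's bases agree
    sifted = [(a, b) for a, ab, b, bb in
              zip(alice_bits, alice_bases, bob_bits, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors

for eve in (False, True):
    kept, errors = bb84(40000, eve=eve)
    print(f"Eve present = {eve}: sifted {kept} bits, QBER = {errors / kept:.3f}")
```

Without Eve the quantum bit error rate (QBER) is zero; with the intercept-resend attack it settles near 0.25, which is exactly the signature Alice and Bob look for in the verification step.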
This particularly interesting feature led to a number of similar
Fock-state-based (single-photon-based) schemes for various
secure communication tasks. Some of them were restricted to QKD \cite{bennett1992communication}
and some of them were extended to perform secure direct quantum communication
\cite{kak2006three,lucamarini2005secure}\footnote{In \cite{kak2006three}, the author described the scheme as one for
QKD, but a careful look easily reveals that it was actually a scheme
for quantum secure direct communication.}, where a message can be communicated directly without prior generation
of keys. Some foundationally important ideas have essentially been
explored using Fock state $|1\rangle.$ Specifically, the implementation
of counterfactual measurement (also known as interaction-free measurement
or the Elitzur-Vaidman bomb test) \cite{elitzur1993quantum} and the
Guo-Shi scheme of counterfactual QKD \cite{guo1999quantum} require
the Fock state $|1\rangle$ and thus
the nonclassical light (for a lucid description of these schemes see
Chapter 8 of \cite{ghatak2015light}). Further, in various entangled-state-based
schemes for secure communication, single photons prepared randomly
in $|H\rangle,\,|V\rangle,\,|\nearrow\rangle$ and $|\nwarrow\rangle$
are inserted randomly into the sequence of message qubits as verification
(decoy) qubits; these are subsequently measured and compared in a
manner similar to the BB84 protocol. This strategy, referred to as
the BB84 subroutine \cite{sharma2016verification}, gives unconditional
security
to those schemes. For example, in the original Ping-Pong protocol
\cite{bostrom2002deterministic} and LM05 protocol \cite{lucamarini2005secure}
of quantum secure direct communication, B92 protocol \cite{bennett1992quantum}
for QKD, quantum key agreement protocol by Chong et al. \cite{chong2010quantum},
and Shi et al.'s quantum dialogue scheme \cite{shi2010quantum}, the
unconditional security is derived from the use of the Fock state $|1\rangle.$ Further,
there exist a few commercial products in which single-photon sources
are used. There are various commercial solutions for QKD
\cite{IDQ,TOSHIBA,MITSHUBISHI}, but the quantum random number generator
deserves special mention (cf. QUANTIS sold by IdQuantique \cite{QUANTIS}),
as there does not exist any true random number generator in the classical
world, although one is required for various applications, including
casinos. The working of a quantum random number generator is simple.
Let us send
a single photon (i.e., Fock state $|1\rangle$) through a 50:50 beam
splitter; after the beam splitter, the photon will be in the superposition
state $\frac{|{\rm reflected}\rangle+|{\rm transmitted}\rangle}{\sqrt{2}}$.
Now, if we put one detector along the reflected path and one along
the transmitted path, this is equivalent to measuring the superposition
state in the $\left\{ |{\rm reflected}\rangle,|{\rm transmitted}\rangle\right\} $
basis, and in accordance with quantum mechanics the state will collapse
randomly to one of the possibilities; in other words, the detectors
will click randomly. We may take a click of the detector along the
reflected (transmitted) path as 0 (1), and thus obtain a truly random
sequence of 0s and 1s. The applications discussed so far require
a single-photon source. However, a source that can provide on-demand
$|1\rangle$ states is not available. In other words, a source of
nonclassical light that can emit a single photon as and when it is
required is not available, and this is why we use either a WCP (a
classical light source approximated as a single-photon source) or
a heralded entangled-state-based single-photon source \cite{pittman2002single,migdall2002tailoring}.
Entangled states are nonclassical and their use is not restricted
to the design of single-photon sources. In fact, they are used to
propose many schemes for quantum communication. Some of them (e.g.,
teleportation and densecoding) have no classical analogue, and entanglement
is essential for them. For the implementation of device-independent
quantum key distribution (DIQKD), we need Bell-nonlocal states; all
pure entangled states are Bell-nonlocal and every Bell-nonlocal state
is entangled (but the converse is not true). For another set of schemes
for secure quantum communication (say, quantum e-commerce and quantum
voting), entanglement is found to be useful, but not essential. In
the following subsection,
we list a few tasks where entangled states, which are always nonclassical,
are used.
\subsubsection{Entangled state and its applications\label{subsec:Entangled-state-and} }
It has already been mentioned that entangled states, which are nonclassical
states, are essential for the realization of densecoding \cite{bennett1992communication}
and quantum teleportation\footnote{Quantum teleportation is a very interesting process that nicely illustrates
the power of quantum mechanics. In this scheme, an unknown quantum
state is transferred using prior shared entanglement and classical
communication, but the state can not be found in the channels that
connect the sender and the receiver.} of an unknown quantum state \cite{bennett1993teleporting} and that
of a known quantum state, which is referred to as remote state preparation
\cite{pati2000minimum}. Further, entanglement is essential for the implementation
of various variants of teleportation and remote state preparation,
such as probabilistic teleportation \cite{li2000probabilistic}, teleportation
using non-orthogonal states \cite{sisodia2017teleportation}, quantum
information splitting \cite{hillery1999quantum}, joint remote state
preparation \cite{wang2013joint}, hierarchical joint remote state
preparation \cite{shukla2016hierarchical}, bidirectional controlled
state teleportation \cite{thapliyal2015applications,thapliyal2015general},
bidirectional controlled remote state preparation \cite{sharma2015controlled,thapliyal2015general},
bidirectional controlled joint remote state preparation \cite{sharma2015controlled,thapliyal2015general}.
It can also be used to implement schemes for secure quantum communication,
such as Ekert's protocol for QKD \cite{ekert1991quantum}, the Ping-pong
protocol for QSDC \cite{bostrom2002deterministic}, protocols for
two-way secure direct quantum communication known as quantum dialogue\footnote{Due to the similarity of this two-way communication task with a telephone,
this type of scheme is also referred to as quantum telephone \cite{wen2007secure}
and quantum conversation \cite{jain2009secure}.} \cite{nguyen2004quantum,an2005secure,shukla2013group}, and its variant
asymmetric quantum dialogue \cite{banerjee2017asymmetric}, quantum
key agreement \cite{shukla2014protocols,shukla2017semi} where two
parties contribute equally to construct a key and no one alone can
decide any bit of the key, quantum conference \cite{banerjee2017quantum},
quantum voting \cite{thapliyal2016protocols}, quantum e-commerce
or online shopping \cite{shukla2017semi}, quantum sealed bid auction
\cite{sharma2016quantum}, quantum private comparison \cite{thapliyal2016orthogonal,shukla2017semi},
quantum secret sharing \cite{hillery1999quantum}, etc. Thus, in brief,
this particular nonclassical state (entangled state) is extremely
important for the realization of various schemes of secure quantum
communication, and some such schemes have direct applications in our
daily life. For example, voting plays a most crucial role in a democratic
country, and secure online shopping and fair sealed-bid auctions are
also crucial for today's economy. In fact, for any task related to
secure quantum communication, if there exists a single-qubit-based
scheme, there must exist an entangled-state-based counterpart (see
\cite{sharma2016comparative} for details).
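As a concrete illustration of the teleportation primitive discussed above, the following statevector sketch implements the standard textbook protocol (the input amplitudes and the qubit ordering are our own arbitrary choices):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

psi = np.array([0.6, 0.8j])                                  # unknown state to teleport
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)    # shared |Phi+> pair

state = np.kron(psi, bell)                    # qubits: q0 (Alice's unknown), q1 (Alice), q2 (Bob)
state = np.kron(CNOT, I2) @ state             # CNOT with q0 as control, q1 as target
state = np.kron(np.kron(H, I2), I2) @ state   # Hadamard on q0

amps = state.reshape(2, 2, 2)                 # index order: [q0, q1, q2]
for m0 in (0, 1):                             # loop over Alice's four possible outcomes
    for m1 in (0, 1):
        bob = amps[m0, m1, :]
        bob = bob / np.linalg.norm(bob)       # Bob's collapsed (unnormalized -> normalized) state
        bob = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ bob
        fidelity = abs(np.vdot(psi, bob))**2
        print(f"outcome ({m0},{m1}): fidelity = {fidelity:.6f}")
```

For each of Alice's four measurement outcomes, Bob's corrected state reproduces the input with unit fidelity, even though the unknown state never traverses the channel itself.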
Entanglement is also an integral part of device-independent quantum
cryptography \cite{acin2006bell}, which uses entangled states with
correlations strong enough to violate Bell inequalities.
\section{Applications of classical light\label{sec:Applications-of-classical} }
The applications described in the last section may give the impression
that all modern applications of light are primarily focused on nonclassical
light. Such an impression would be incorrect. In today's world, we
frequently use technologies that are based on classical light. To
be specific, note that the output of a laser is in a coherent state,
which is a classical state of light as per the definition of
nonclassicality provided through the Glauber-Sudarshan $P$-representation.
The recognition that laser light is classical immediately reveals
many applications of classical light. For example, we use lasers to
read CDs/DVDs, to perform cataract surgery, to destroy enemy aircraft
in warfare, and to send information through optical fiber. The domain
of applications of the laser is so vast that it is beyond the scope
not only of this review but of any single review dedicated to the
applications of lasers. This is why
several nice reviews have been written on the applications of lasers \cite{daido2012review,montross2002laser,peyre1995laser,rusak1997fundamentals,lee2004recent,hahn2012laser,harmon2013applications,michel2010review,radziemski1994review,tognoni2002quantitative,bass1995laser,black1996laser,schwarz2008laser,castellini1998laser,bagger2005review}.
However, most of them are focused on a set of particular applications.
For example, elaborate separate reviews are available on the applications
of laser-driven ion sources \cite{daido2012review}, laser shock processing
\cite{montross2002laser,peyre1995laser}, laser-induced breakdown
spectroscopy (LIBS) \cite{rusak1997fundamentals,lee2004recent,hahn2012laser,harmon2013applications}
in general, and single-shot LIBS \cite{michel2010review} and quantitative
micro-analysis performed by LIBS \cite{tognoni2002quantitative} in
particular, laser plasmas and laser ablation \cite{radziemski1994review},
laser tissue welding (a particularly important process for surgery
and tissue engineering) \cite{bass1995laser}, particle size measurement
in different industries \cite{black1996laser}, non-surgical periodontal
therapy \cite{schwarz2008laser}, laser Doppler vibrometry (LDVi)
\cite{castellini1998laser}, laser hybrid welding \cite{bagger2005review}.
From the above, we can see that LIBS has drawn much attention from
the scientific community. Keeping this in mind, we note that LIBS is a technique
for performing atomic emission spectroscopy using a highly energetic
(short) laser pulse as the excitation source. This method of elemental
analysis is extremely fast: the focused laser pulse usually creates
a micro-plasma on the sample surface, which leads to the atomization
and excitation of the sample. Further, almost all
kinds of traditional spectroscopic techniques (e.g., UV-VIS spectroscopy
\cite{perkampus1992uv}, luminescence spectroscopy \cite{gaft2015modern},
FTIR spectroscopy \cite{smith1996fourier,movasaghi2008fourier}, X-ray
spectroscopy \cite{chastain1995handbook,van2001handbook}, Raman spectroscopy
\cite{smith2013modern} and its variants, like surface enhanced Raman
spectroscopy (SERS) \cite{schlucker2014surface}, tip enhanced Raman
Spectroscopy (TERS) \cite{yano2014tip} and coherent anti-Stokes Raman
spectroscopy (CARS) \cite{tolles1977review}) can be viewed as applications
of classical light. These spectroscopic techniques play a crucial
role in nanotechnology (cf. applications of Raman spectroscopy in
nanotechnology \cite{amer2010raman,ferrari2013raman,jorio2012raman,souza2003raman})
to sensor designing \cite{singla2015turn}, characterization of materials
\cite{ghannoum2016optical,butler2016using,bottka1988modulation,de2016techniques}
to finding out the proof of big bang obtained through the detection
of cosmic microwave background radiation \cite{dicke1965cosmic,radziemski1994review},
drug designing \cite{cooper2002optical,fringeli1992situ} to medical
imaging \cite{arridge1999optical,tuchin2006optical,tuchin2007tissue,boas2001accuracy,bushberg2011essential},
and thus, classical light plays a crucial role in all these domains
of science. Further, almost all quantum optical experiments use a
laser (classical light) as the initial source of light (often referred
to as the pump) and generate nonclassical light via subsequent interactions;
thus, the properties of classical light even play a crucial role in
the experimental realization of devices that can be viewed as applications
of nonclassical light.
Another interesting application of classical light (the laser) is
in achieving extremely low temperatures through magneto-optical trapping
(MOT) \cite{katori1999magneto,mckay2011low,drewsen1994investigation},
which helps us realize Bose-Einstein condensation (BEC) \cite{anderson1995observation,myatt1997production},
a completely quantum phenomenon. In a conventional MOT, six laser
beams (which are usually prepared from the same source) intersect
in a glass cell (cf. Fig. 1 of \cite{anderson1995observation}). Further,
we may note that in Sec. \ref{sec:Introduction }, we mentioned the
velocity of light in vacuum, which is very high and fixed in free
space. However, inside a medium, it is reduced by a factor of $n$,
where $n$ is the refractive index of the material through which the
light is passing. Usually, we come across materials with modest values
of the refractive index; for example, $n_{{\rm glass}}=1.5,\,n_{{\rm water}}=1.33,\,n_{{\rm amber}}=1.55$.
Thus, if light passes through any of these media, it slows down, but
still travels at a couple of hundred thousand km/sec, which is very
high compared to the velocities we come across in our daily life.
The question is: is it possible to further
slow down the light? The answer is yes. Techniques for generating
ultra-slow light have been developed over the last two decades. In
1999, laser pulses were slowed to a velocity of 17 m/sec in a BEC
of Na \cite{hahn2012laser}. Subsequently, light was almost stopped,
stored and retrieved \cite{liu2001observation}.
The exciting progress in this domain is still continuing (for a quick
review see \cite{dutton2004art,krauss2008we} and see \cite{lukin2000nonlinear}
for a very interesting work on nonlinear optics of ultraslow single
photons).
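The $v=c/n$ reduction quoted above is straightforward to tabulate for the representative refractive indices mentioned in the text:

```python
c = 299_792.458   # speed of light in vacuum, km/s

for material, n in [("glass", 1.5), ("water", 1.33), ("amber", 1.55)]:
    print(f"{material:6s} (n = {n}): v = c/n = {c / n:,.0f} km/s")
```

All three media still give speeds of roughly $2\times10^{5}$ km/s, which is why genuinely slow light requires the BEC-based techniques described above rather than ordinary refraction.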
The laser is not the only classical light in use. Light received from
conventional sources is also classical, and applications ranging from
traffic lights to glow signs are all based on classical light. Such
applications of classical light have been in existence since the beginning
of civilization (for a short review on the uses of classical light
in optical communication during early civilizations, see Chapter 19
of \cite{ghatak2015light}).
\section{Conclusion\label{sec:Conclusion}}
The world of light is fascinating, and the discussion above provides
a glimpse of this world with a focus on different applications of
classical and nonclassical light. It is shown that many fundamental
ideas of physics were obtained through the effort to understand experiments
involving light. Further, a nonchronological review of the ideas that
have led to modern applications of optics has been provided. Using
Glauber-Sudarshan $P$-function, we have classified light as classical
light and nonclassical light and have separately discussed the modern
applications of classical and nonclassical light. In the context of
classical light, major attention is given to laser, whereas in the
context of nonclassical light, focused attention has been given to
the applications of squeezed, antibunched and entangled states of
light. Applications of single-photon states have also been discussed.
As the focus of the review is modern applications of classical and
nonclassical light, we have refrained from a detailed discussion of
some closely related phenomena which arise mostly because of the properties
of optical materials (which, in some sense, is also the case with
nonlinear optics). Specifically, we have not discussed negative-refractive-index
(NRI) materials \cite{shelby2001experimental,smith2004metamaterials,ramakrishna2005physics}.
We have also not discussed various types of lasers, optical fibers
and schemes of fiber optic communication. However, a set of excellent
reviews are already available on these topics. The domain of applications
of both classical and nonclassical light is so broad that it is almost
impossible to do justice to every aspect of it; naturally, this review
also cannot do justice to every application of light. Still, an effort
has been made to lucidly introduce the readers to the difference between
classical and nonclassical light, the ideas that led to this distinction,
and the applications of these two types of light. We conclude the
review with the hope that it will
show the link between various ideas of optics, and motivate the readers
to go through the more focused works on the applications of their
interest.\\
~
\textbf{Acknowledgment}: AP thanks the Department of Science and Technology
(DST), India, for the support provided through the project number EMR/2015/000393.
He also thanks Kishore Thapliyal, S Aravinda and J Banerji for their
interest in the work and Kathakali Mandal for drawing the cartoon
used in this paper. AG thanks the National Academy of Sciences India
(NASI), for supporting the present work through the M N Saha Distinguished
Fellowship.
\bibliographystyle{unsrt}